BITS WILP Machine Learning Mid-Sem Exam 2016-H2
Birla Institute of Technology & Science, Pilani
Work-Integrated Learning Programmes Division
First Semester 2016-2017
Mid-Semester Test
(EC-2 Regular)
Course No. : IS ZC464
Course Title : MACHINE LEARNING
Nature of Exam : Closed Book
Weightage : 35%
Duration : 2 Hours
Date of Exam : 25/09/2016 (FN)
No. of pages: 1; No. of questions: 3
Note:
1. Please follow all the Instructions to Candidates given on the cover page of the answer book.
2. All parts of a question should be answered consecutively. Each answer should start from a fresh page.
3. Assumptions made, if any, should be stated clearly at the beginning of your answer.
Q1. Answer the following questions [4 × 3 = 12]
a) Describe the meaning of 'best hypothesis' in the context of function approximation in machine learning.
b) What is Bayes' theorem? How is it significant in machine learning?
c) Explain the MAP technique used in learning and give an example (a brief sketch follows this question).
d) Explain the role of 'error' in the prediction of the target value given the test data. Also explain the role of training in prediction.
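For part (c), a minimal sketch of the MAP idea over a small discrete hypothesis space: pick the hypothesis h that maximises P(D|h)·P(h). The hypothesis names, priors, and likelihoods below are invented purely for illustration.

# MAP: choose the hypothesis h maximising P(D|h) * P(h).
# The hypothesis space, priors, and likelihoods here are invented numbers.
prior = {"h1": 0.6, "h2": 0.3, "h3": 0.1}        # P(h)
likelihood = {"h1": 0.2, "h2": 0.5, "h3": 0.9}   # P(D | h) for the observed data D

# Unnormalised posterior: P(h | D) is proportional to P(D | h) * P(h).
score = {h: likelihood[h] * prior[h] for h in prior}

h_map = max(score, key=score.get)
print(score)                      # {'h1': 0.12, 'h2': 0.15, 'h3': 0.09}
print("MAP hypothesis:", h_map)   # h2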
Q2. Answer the following questions [7 + 7 = 14]
a) The chances of children in primary schools in villages dropping out (D) of their studies are high due to various reasons. The major factors are lack of basic needs of children at home (N) and lack of infrastructure (I), such as school buildings and the availability of teachers who can teach well and motivate students. The statistics collected as joint probabilities are given in the following table.
                 I                    ~I
            N        ~N          N        ~N
  D       0.098     0.022       0.06     0.02
 ~D       0.018     0.062       0.32     0.4
Use Bayes' theorem to compute the posterior probability P(D | N) using the given joint probabilities. Explain all steps of the calculation. [Note: A calculation without the correct expression will not be given credit.]
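A minimal sketch of the requested calculation: read the joint probabilities from the table, marginalise out I to get P(D, N) and P(N), and apply P(D | N) = P(D, N) / P(N) (equivalently, P(N | D)·P(D) / P(N) by Bayes' theorem).

# Joint probabilities from the table, indexed as (D?, N?, I?).
joint = {
    (True,  True,  True):  0.098,   # P(D, N, I)
    (True,  False, True):  0.022,   # P(D, ~N, I)
    (True,  True,  False): 0.06,    # P(D, N, ~I)
    (True,  False, False): 0.02,    # P(D, ~N, ~I)
    (False, True,  True):  0.018,   # P(~D, N, I)
    (False, False, True):  0.062,   # P(~D, ~N, I)
    (False, True,  False): 0.32,    # P(~D, N, ~I)
    (False, False, False): 0.4,     # P(~D, ~N, ~I)
}

# Marginalise out I.
p_d_and_n = sum(p for (d, n, i), p in joint.items() if d and n)   # 0.098 + 0.06 = 0.158
p_n       = sum(p for (d, n, i), p in joint.items() if n)         # 0.158 + 0.018 + 0.32 = 0.496

p_d_given_n = p_d_and_n / p_n
print(round(p_d_given_n, 4))   # 0.3185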
b) Consider a linear model of the form y(x, W) = Σ_{j=0}^{M-1} w_j φ_j(x), where the vector of parameters (w_0, ..., w_{M-1}) is represented as W and the functions φ_j(x) are the basis functions. Explain the significance of the parameters W in linear regression. Comment on the parameters W in the context of approximating data using a straight line.
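A minimal sketch of the straight-line special case, assuming the basis-function form above with φ_0(x) = 1 and φ_1(x) = x, so that W = (w_0, w_1) is the (intercept, slope) pair. The data points are invented for illustration.

import numpy as np

# Straight-line special case of y(x, W) = sum_j w_j * phi_j(x),
# with basis functions phi_0(x) = 1 and phi_1(x) = x.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # invented inputs
t = np.array([1.1, 2.9, 5.2, 6.8, 9.1])        # invented targets

# Design matrix: one column per basis function.
Phi = np.column_stack([np.ones_like(x), x])    # [phi_0(x), phi_1(x)]

# Least-squares estimate of W = (w0, w1): intercept and slope.
W, *_ = np.linalg.lstsq(Phi, t, rcond=None)
print("intercept w0 = %.3f, slope w1 = %.3f" % (W[0], W[1]))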
Q3. Answer the following questions [5 + 4 = 9]
a) What do you understand by entropy? Calculate the entropy for the following data (a worked sketch follows the table).
Symbols     : A      B      C      D      E
Probability : 0.20   0.15   0.10   0.35   0.20
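A worked sketch for part (a), using H = -Σ_i p_i log2(p_i) over the symbol probabilities given in the table:

import math

# Entropy H = -sum_i p_i * log2(p_i) for the symbol distribution above.
probs = {"A": 0.20, "B": 0.15, "C": 0.10, "D": 0.35, "E": 0.20}

entropy = -sum(p * math.log2(p) for p in probs.values())
print(round(entropy, 3))   # ~2.202 bits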
b) What is the significance of attribute selection in decision-tree-based learning? Explain with an appropriate example.
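One common way to make part (b) concrete is the information-gain criterion used by ID3-style decision trees: the attribute chosen at a node is the one whose split reduces class entropy the most. The class counts below are invented purely for illustration.

import math

def entropy(pos, neg):
    """Entropy of a binary class distribution given positive/negative counts."""
    total = pos + neg
    h = 0.0
    for c in (pos, neg):
        if c:
            p = c / total
            h -= p * math.log2(p)
    return h

# Invented toy data set: 9 positive and 5 negative examples at the parent node.
parent = entropy(9, 5)                          # ~0.940 bits

# A candidate attribute splits the examples into two branches (counts invented).
branches = [(6, 1), (3, 4)]                     # (pos, neg) per branch
total = sum(p + n for p, n in branches)

remainder = sum((p + n) / total * entropy(p, n) for p, n in branches)
gain = parent - remainder
print("information gain = %.3f bits" % gain)    # ~0.152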
NOTE: IF YOU HAVE ANY QUESTION PAPERS FOR POSTING ONLINE, PLEASE SHARE AT 'ashishjainblogger@gmail.com'