Machine learning MCQ - Set 04
1. As the number of training examples goes to infinity, your model trained on that data will have:
a) Lower variance
b) Higher variance
c) Same variance
d) None of the above
View Answer
Answer: (a) Lower variance
With more training examples you will have lower test error (the variance of the model decreases, meaning we are overfitting less).
Refer here for more details: In Machine Learning, What is Better: More Data or Better Algorithms
High variance – a model that represents the training set well, but is at risk of overfitting to noisy or unrepresentative training data.
High bias – a simpler model that does not tend to overfit, but may underfit the training data, failing to capture important regularities.
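As a quick sanity check of this claim, the sketch below (a made-up example, assuming a simple least-squares model on synthetic data) fits the same model on repeated samples of two different sizes and compares the variance of the learned parameter across fits:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitted_slopes(n, trials=200):
    """Fit y = w*x by least squares on `trials` fresh samples of size n."""
    slopes = []
    for _ in range(trials):
        x = rng.uniform(-1, 1, n)
        y = 3 * x + rng.normal(0, 0.5, n)   # true slope 3 plus noise
        slopes.append((x @ y) / (x @ x))    # closed-form least squares (no intercept)
    return np.array(slopes)

var_small = fitted_slopes(10).var()    # few training examples
var_large = fitted_slopes(1000).var()  # many training examples
print(var_small, var_large)
```

The variance of the learned parameter shrinks roughly as 1/n, so `var_large` comes out far smaller than `var_small`, matching answer (a).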
2. Suppose we would like to calculate P(H|E, F) and we have no conditional independence information. Which of the following sets of numbers is sufficient for the calculation?
a) P(E, F), P(H), P(E|H), P(F|H)
b) P(E, F), P(H), P(E, F|H)
c) P(H), P(E|H), P(F|H)
d) P(E, F), P(E|H), P(F|H)
View Answer
Answer: (b) P(E, F), P(H), P(E, F|H)
This is Bayes' rule:
P(H|E, F) = P(E, F|H) * P(H) / P(E, F)
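A minimal numeric illustration of the rule, using made-up values for the three quantities in answer (b):

```python
# Hypothetical probabilities, chosen only for illustration
p_h = 0.3           # P(H)
p_ef_given_h = 0.5  # P(E, F | H)
p_ef = 0.25         # P(E, F)

# Bayes' rule: P(H | E, F) = P(E, F | H) * P(H) / P(E, F)
p_h_given_ef = p_ef_given_h * p_h / p_ef
print(p_h_given_ef)  # 0.6
```

Note that options (a), (c), and (d) only give factored terms like P(E|H) and P(F|H), which cannot be combined into P(E, F|H) without a conditional independence assumption.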
3. Suppose you are given an EM algorithm that finds maximum likelihood estimates for a model with latent variables. You are asked to modify the algorithm so that it finds MAP estimates instead. Which step or steps do you need to modify?
a) Expectation
b) Maximization
c) No modification necessary
d) Both
View Answer
Answer: (b) Maximization
We need to modify the Maximization step.
EM is an optimization strategy for objective functions that can be interpreted as likelihoods in the presence of missing data. EM is an iterative algorithm with two linked steps:
E-step: fill in the hidden values using inference
M-step: apply the standard MLE/MAP method to the completed data
To move from maximum likelihood to MAP estimates, only the M-step changes: instead of maximizing the expected complete-data log-likelihood alone, it maximizes that quantity plus the log-prior over the parameters. The E-step is unchanged.
Refer here for more: Relationship between EM, MLE and MAP
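To see concretely what changes in the M-step, here is a minimal sketch (not a full EM run) contrasting the MLE and MAP updates for a single Bernoulli parameter, assuming a hypothetical Beta(a, b) prior:

```python
# MLE vs MAP for a Bernoulli parameter -- the kind of change made in the M-step.
heads, tails = 7, 3  # "completed" counts, as produced by an E-step

# MLE: maximize the likelihood alone
theta_mle = heads / (heads + tails)

# MAP with a Beta(a, b) prior: the prior's pseudo-counts enter the same formula
a, b = 2, 2
theta_map = (heads + a - 1) / (heads + tails + a + b - 2)

print(theta_mle, theta_map)  # 0.7 vs 0.666...
```

The counts themselves (the E-step's job) are computed identically in both cases; only the update formula applied to them differs.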
4. Which of the following is/are true regarding an SVM?
a) For two-dimensional data points, the separating hyperplane learnt by a linear SVM will be a straight line.
b) In theory, a Gaussian kernel SVM cannot model any complex separating hyperplane.
c) For every kernel function used in an SVM, one can obtain an equivalent closed-form basis expansion.
d) Overfitting in an SVM is not a function of the number of support vectors.
View Answer
Answer: (a) For two-dimensional data points, the separating hyperplane learnt by a linear SVM will be a straight line
SVM, or Support Vector Machine, is a linear model for classification and regression problems. It can solve linear and non-linear problems and works well for many practical problems. The algorithm creates a line or a hyperplane which separates the data into classes.
A hyperplane in an n-dimensional Euclidean space is a flat, (n-1)-dimensional subset of that space that divides the space into two disconnected parts. In two dimensions, that subset is a straight line.
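A small sketch using scikit-learn's `SVC` with a linear kernel on synthetic 2-D clusters: the learned decision boundary is fully described by one weight vector and one intercept, i.e. the straight line w[0]*x1 + w[1]*x2 + b = 0:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Two linearly separable 2-D clusters (synthetic data for illustration)
X = np.vstack([rng.normal([-2, -2], 0.5, (50, 2)),
               rng.normal([2, 2], 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear").fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]
# Decision boundary: w[0]*x1 + w[1]*x2 + b = 0 -- a straight line in the plane
print(w, b)
```

With a non-linear kernel such as the Gaussian (RBF) kernel, no such single `coef_` exists, since the boundary is linear only in the implicit feature space.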
5. Which of the following best describes what discriminative approaches try to model? (w are the parameters of the model)
a) p(y|x, w)
b) p(y, x)
c) p(w|x, w)
d) None of the above
View Answer
Answer: (a) p(y|x, w)
Machine learning aims to learn a (random) function that maps a variable X (features) to a variable Y (class) using a (labeled) dataset.
A generative model learns the joint probability distribution p(x, y) and predicts the conditional probability with the help of Bayes' theorem: to get P(Y|X), generative models estimate the prior P(Y) and the likelihood P(X|Y) from the training data, then use Bayes' rule to calculate the posterior P(Y|X).
Discriminative approaches model the posterior probability p(y|x) directly.
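A brief sketch of the distinction using scikit-learn on synthetic data, with `GaussianNB` standing in for the generative approach (it estimates P(x|y) and P(y), then applies Bayes' rule) and `LogisticRegression` for the discriminative one (it fits p(y|x, w) directly):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two overlapping Gaussian classes in 2-D (made-up data for illustration)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Generative: models P(x|y) and P(y), combines them via Bayes' rule
gen = GaussianNB().fit(X, y)
# Discriminative: models the posterior p(y|x, w) directly
disc = LogisticRegression().fit(X, y)

print(gen.predict_proba(X[:1]), disc.predict_proba(X[:1]))
```

Both models end up producing a posterior over classes, but they arrive at it by modeling different quantities, which is exactly the distinction the question tests.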