Machine learning MCQ - Set 21
1. In terms of the bias-variance trade-off, which of the following is substantially more harmful to the test error than the training error?
a) Bias
b) Loss
c) Variance
d) Risk
Answer: (c) Variance
Training error - the error you get when you run the trained model back on the training data.
Test error - the error you get when you run the trained model on a set of unseen data.
Variance occurs when the model performs well on the training dataset but does not do well on data it was not trained on, such as a test or validation dataset. Variance tells us how scattered the predicted values are from the actual values. High variance causes overfitting, which means the algorithm models the random noise present in the training data. Because that noise does not recur in unseen data, variance inflates the test error far more than the training error.
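A minimal sketch of this effect, assuming scikit-learn is available (the dataset and model below are illustrative assumptions, not part of the original question): an unpruned decision tree is a classic high-variance model, so its training error stays near zero while its test error on unseen data is noticeably worse.

# Illustrative sketch (assumes scikit-learn): a high-variance model shows
# a large gap between training error and test error.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic data with some label noise (flip_y) so there is noise to overfit.
X, y = make_classification(n_samples=500, n_features=20, flip_y=0.1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No depth limit: the tree can memorize the training set, including its noise.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print("training error:", 1 - accuracy_score(y_train, tree.predict(X_train)))
print("test error:    ", 1 - accuracy_score(y_test, tree.predict(X_test)))
# Expected pattern: training error near 0, test error clearly higher.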
2. Which of the following learning algorithms will return a classifier if the training data is not linearly separable?
a) Hard margin SVM
b) Soft margin SVM
c) Perceptron
d) Naïve Bayes
Answer: (b) Soft margin SVM
Soft margin SVM
If the data set is not linearly separable (for example, due to noise), the standard approach is to allow the decision margin to make a few mistakes: some points (outliers or noisy examples) may fall inside or on the wrong side of the margin. The soft margin SVM balances the trade-off between maximizing the margin and minimizing the misclassification.
The trick the soft margin SVM uses is simple: it adds slack variables to the constraints of the optimization problem, relaxing each margin constraint y_i(w·x_i + b) ≥ 1 to y_i(w·x_i + b) ≥ 1 - ξ_i with ξ_i ≥ 0, and penalizing the total slack in the objective. With the slack variables, an example can satisfy its constraint even when it violates the original margin requirement, so the optimization remains feasible. By contrast, the hard margin SVM has no feasible solution on non-separable data, and the standard perceptron does not converge on it.
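A hedged sketch of this behavior, assuming scikit-learn (the dataset and parameters below are my own illustrative choices): SVC's C parameter weights the slack penalty in the objective minimize (1/2)||w||^2 + C·Σξ_i, so training succeeds even on overlapping classes.

# Illustrative sketch (assumes scikit-learn): soft margin SVM on
# non-linearly-separable data.
from sklearn.svm import SVC
from sklearn.datasets import make_blobs

# Two overlapping blobs: not linearly separable, so a hard margin SVM has
# no feasible solution, but a soft margin SVM still returns a classifier.
X, y = make_blobs(n_samples=200, centers=2, cluster_std=3.0, random_state=0)

# C weights the slack penalty: a smaller C tolerates more margin violations;
# the hard margin SVM is the limit as C goes to infinity.
clf = SVC(kernel="linear", C=1.0).fit(X, y)
print("training accuracy:", clf.score(X, y))  # typically below 1.0: some points misclassified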
3. Modeling a classification rule directly from the input data, as in logistic regression, fits which of the following classification methods?
a) Discriminative classification
b) Generative classification
c) Probabilistic classification
d) All of the above
Answer: (a) Discriminative classification
Discriminative classification
Discriminative classifiers learn which features of the input are most useful for distinguishing between the possible classes. They directly estimate the posterior probability P(y|x), or learn a direct map from input x to the class labels. In other words, these models try to learn the decision boundary between the classes.
Generative classification
In a generative model, we model the conditional probability of the input x given the label y, that is, P(x|y). A generative model learns the joint probability distribution P(x, y) and uses Bayes' theorem to obtain the posterior probability P(y|x).
Probabilistic classification
Probabilistic classification is the study of approximating a joint distribution with a product distribution. Bayes' rule is used to estimate the conditional probability of a class label y, and then assumptions are made on the model to decompose this probability into a product of conditional probabilities.
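A minimal side-by-side sketch, my own addition assuming scikit-learn with a synthetic dataset: logistic regression estimates P(y|x) directly, while Gaussian naive Bayes, a generative model, fits P(x|y) and P(y) and applies Bayes' rule to obtain P(y|x).

# Illustrative sketch (assumes scikit-learn): discriminative vs. generative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

disc = LogisticRegression().fit(X, y)  # discriminative: models P(y|x) directly
gen = GaussianNB().fit(X, y)           # generative: fits P(x|y) and P(y), then Bayes' rule

# Both expose a posterior over classes, but they arrive at it differently.
print("logistic regression P(y|x):", disc.predict_proba(X[:1]))
print("Gaussian naive Bayes P(y|x):", gen.predict_proba(X[:1]))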