Machine Learning Quiz Questions - Set 28
1. Which of the following is true about generative models?
a) They capture the joint probability
b) The perceptron is a generative model
c) Generative models can be used for classification
d) They capture the conditional probability
Answer: (a) They capture the joint probability and (c) Generative models can be used for classification
Generative models are useful for unsupervised learning tasks as well as classification. A generative model learns its parameters by maximizing the joint probability P(X, Y). Generative models encode full probability distributions and specify how to generate data that fits those distributions; Bayesian networks are a well-known example of such models.
A generative classifier models each class itself, i.e., the features that characterize the class. In short, it models how a particular class would generate input data. When a new observation is given to such a classifier, it predicts the class that would most likely have generated that observation.
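The idea above can be sketched in a few lines of Python. This is a minimal, hypothetical toy example (the data, function names, and diagonal-Gaussian class model are all assumptions for illustration): each class is modeled by a Gaussian, giving P(X, Y) = P(Y) * P(X | Y), and a new point is assigned to the class with the highest joint probability.

```python
import numpy as np

def fit(X, y):
    """Estimate per-class priors, means, and variances from training data."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X),       # prior P(Y=c)
                     Xc.mean(axis=0),        # class mean
                     Xc.var(axis=0) + 1e-9)  # class variance (stabilized)
    return params

def predict(params, x):
    """Return the class maximizing the joint probability P(X=x, Y=c)."""
    def log_joint(c):
        prior, mu, var = params[c]
        # log P(Y=c) + log P(X=x | Y=c) under a diagonal Gaussian
        return np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var)
                                            + (x - mu) ** 2 / var)
    return max(params, key=log_joint)

# Toy 1-D data: class 0 clustered near 0, class 1 clustered near 5
X = np.array([[0.1], [0.2], [-0.1], [4.9], [5.1], [5.0]])
y = np.array([0, 0, 0, 1, 1, 1])
model = fit(X, y)
print(predict(model, np.array([0.0])))  # assigned to the class near 0
print(predict(model, np.array([5.2])))  # assigned to the class near 5
```

Note that the same fitted joint distribution could also be sampled from to generate new data, which is exactly what distinguishes a generative model from a purely discriminative one.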
2. Which of the following are true about subset selection?
a) Subset selection can substantially decrease the bias of support vector machines
b) Ridge regression frequently eliminates some of the features
c) Finding the true best subset takes exponential time
d) Subset selection can reduce overfitting
Answer: (d) Subset selection can reduce overfitting
A classifier is said to overfit a dataset if it models the training data too closely and consequently gives poor predictions on new data. This typically occurs when there is insufficient training data, so the data does not fully cover the concept being learned.
Subset selection reduces overfitting. Feature subset selection is the process of identifying and removing as much irrelevant and redundant information as possible. This reduces the dimensionality of the data and allows learning algorithms to operate faster and more effectively.
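Option (c) is also worth noting: exhaustive best-subset search is exponential in the number of features. A short sketch (the feature names and scoring loop are hypothetical, for illustration only) makes the count explicit: with d features there are 2^d - 1 non-empty subsets to evaluate.

```python
import itertools

def all_subsets(features):
    """Enumerate every non-empty subset of the feature list."""
    for k in range(1, len(features) + 1):
        yield from itertools.combinations(features, k)

# Even 4 features already yield 2**4 - 1 = 15 candidate subsets;
# 30 features would yield over a billion, which is why exact
# best-subset selection is infeasible and greedy methods
# (forward/backward stepwise selection) are used in practice.
features = ["age", "income", "height", "weight"]
subsets = list(all_subsets(features))
print(len(subsets))  # 15
```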
3. What can help to reduce overfitting in an SVM classifier?
a) High-degree polynomial features
b) Setting a very low learning rate
c) Use of slack variables
d) Normalizing the data
Answer: (c) Use of slack variables
The reason SVMs tend to be resistant to overfitting, even when the number of attributes exceeds the number of observations, is that they use regularization. The key to avoiding overfitting lies in careful tuning of the regularization parameter, C, and, in the case of non-linear SVMs, careful choice of kernel and tuning of the kernel parameters.
Without slack variables, the SVM would be forced to fit the training data exactly and would often overfit as a result.
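The slack variables themselves are easy to compute for a given separating hyperplane. Below is a small sketch (the weights, bias, and data points are hypothetical): the slack for point i is xi_i = max(0, 1 - y_i * (w . x_i + b)), and the soft-margin objective penalizes the total slack weighted by C, so a smaller C tolerates more margin violations and reduces overfitting.

```python
import numpy as np

def slack(w, b, X, y):
    """Hinge-loss slack for each training point:
    0 for points outside the margin, positive for violations."""
    margins = y * (X @ w + b)
    return np.maximum(0.0, 1.0 - margins)

# Hypothetical hyperplane w.x + b = 0 with w = (1, 0), b = 0
w = np.array([1.0, 0.0])
b = 0.0
X = np.array([[2.0, 0.0],    # comfortably outside the margin -> slack 0
              [0.5, 0.0],    # inside the margin -> slack 0.5
              [-1.0, 0.0]])  # on the wrong side -> slack 2.0
y = np.array([1, 1, 1])
print(slack(w, b, X, y))
```

A hard-margin SVM corresponds to requiring every slack to be exactly zero, which is precisely why it must fit the data exactly.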
**********************
Related links:
List the types of regularized regression
Multiple choice quiz questions in machine learning
What is the use of slack variables in SVM
Overfitting in SVM classifier
How generative models are used for classification
Subset selection can reduce overfitting in SVM