Monday, January 11, 2021

Machine Learning TRUE or FALSE Questions with Answers 19

Machine Learning TRUE / FALSE Questions - SET 19

1. Solving a non-linear separation problem with a hard-margin kernelized SVM (Gaussian RBF kernel) might lead to overfitting.

(a) TRUE                                                   (b) FALSE

Answer: TRUE

When there are outliers, a hard-margin SVM with a Gaussian RBF kernel must classify every training point correctly, which produces an unnecessarily complicated decision boundary that overfits the training noise.

To avoid overfitting in an SVM, we choose a soft margin instead of a hard margin; that is, we intentionally allow some data points to violate the margin so that the classifier does not overfit the training sample.

With a suitably tuned soft margin (the regularization parameter C), SVMs are comparatively resistant to overfitting.
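
As a quick illustration, here is a minimal sketch using scikit-learn (the synthetic moons dataset and the use of a very large C to approximate a hard margin are assumptions for illustration, not part of the original question):

```python
# Contrast a (near-)hard-margin RBF SVM with a soft-margin one on noisy data.
# In scikit-learn there is no literal hard-margin option; a very large C
# approximates it by heavily penalizing margin violations.
from sklearn.svm import SVC
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=400, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

hard = SVC(kernel="rbf", C=1e6).fit(X_tr, y_tr)  # ~hard margin: bends around noise
soft = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)  # soft margin: tolerates violations

for name, clf in [("hard", hard), ("soft", soft)]:
    print(f"{name}: train={clf.score(X_tr, y_tr):.3f}  test={clf.score(X_te, y_te):.3f}")
```

Typically the near-hard-margin model scores higher on the training set but lower on the test set, which is exactly the overfitting described above.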


2. Random forests can be used to classify infinite dimensional data.

(a) TRUE                                                   (b) FALSE

Answer: TRUE

Random forests work well with high-dimensional data because each split considers only a random subset of the features. In principle, each tree inspects only a finite subset of the attributes, so the procedure remains well defined even as the dimensionality grows without bound. There is also little harm in keeping columns whose importance is uncertain, or in adding more columns.

That said, in practice random forests may not achieve top performance on very high-dimensional data.
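
A minimal sketch, assuming scikit-learn and a synthetic dataset: a random forest trained on data with far more features than samples, where each split considers only a random subset of the features (max_features):

```python
# 200 samples, 5000 features, only 10 of them actually informative.
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=5000,
                           n_informative=10, random_state=0)

# max_features="sqrt": each split examines only ~sqrt(5000) ~ 70 random features,
# so the cost per split does not grow linearly with the total number of columns.
rf = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0)
print("5-fold CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())
```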

 

3. The training accuracy increases as the size of the tree grows (assuming no noise).

(a) TRUE                                                   (b) FALSE

Answer: TRUE

The training accuracy increases as the tree grows until the tree fits all of the training data, at which point it reaches 100% (assuming no noise, i.e., no identical examples with conflicting labels).

A decision tree overfits the training data when its accuracy on the training data goes up but its accuracy on unseen data goes down.
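
A minimal sketch, assuming scikit-learn and its built-in breast-cancer dataset, showing training accuracy rising with the allowed tree depth while test accuracy eventually stalls or drops:

```python
# Grow the tree deeper and watch training accuracy climb toward 100%.
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for depth in [1, 2, 4, 8, None]:  # None = grow until every leaf is pure
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(f"max_depth={depth}: train={tree.score(X_tr, y_tr):.3f}  "
          f"test={tree.score(X_te, y_te):.3f}")
```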

 

4. Hierarchical clustering methods require a predefined number of clusters, much like k-means.

(a) TRUE                                                   (b) FALSE

Answer: FALSE

Unlike k-means, hierarchical clustering does not require the number of clusters to be specified in advance. Agglomerative hierarchical clustering starts with each data point as its own cluster and repeatedly merges the most similar clusters; the resulting dendrogram can be cut at any level to obtain a clustering.
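
A minimal sketch, assuming scikit-learn: agglomerative (bottom-up hierarchical) clustering run with n_clusters=None and a distance threshold instead, so the number of clusters is an output rather than an input (the threshold value here is illustrative; in practice one inspects the dendrogram to choose a cut):

```python
# No cluster count is specified; we cut the merge tree at a distance
# threshold and read off how many clusters that cut produces.
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

model = AgglomerativeClustering(n_clusters=None, distance_threshold=10.0)
labels = model.fit_predict(X)
print("clusters found:", model.n_clusters_)
```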

 

5. Suppose that X1, X2, ..., Xm are categorical input attributes and Y is a categorical output attribute. Suppose we plan to learn a decision tree without pruning, using the standard algorithm. The maximum depth of the decision tree must be less than m+1.

(a) TRUE                                                   (b) FALSE

Answer: TRUE

Because the attributes are categorical, each attribute can be split on at most once along any root-to-leaf path: after a split, every example reaching a child node has the same value for that attribute, so splitting on it again yields no information. A path can therefore use at most m attributes, giving a maximum depth of m, which is less than m+1.
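
A minimal sketch, assuming scikit-learn and binary (0/1) attributes, where the same once-per-path property holds: an unpruned tree's depth never exceeds the number of attributes m:

```python
# Parity of m binary attributes forces the tree to use every attribute,
# yet its depth still cannot exceed m, since a 0/1 attribute admits only
# one useful threshold split along any root-to-leaf path.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
m = 6
X = rng.integers(0, 2, size=(500, m))    # m binary input attributes
y = X.sum(axis=1) % 2                    # parity target: depends on all m bits

tree = DecisionTreeClassifier(random_state=0).fit(X, y)  # unpruned defaults
print("tree depth:", tree.get_depth(), "<= m =", m)
```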

 

*********************

Related links:

 

Decision tree

Overfitting in decision tree

Random forest

Support vector machine
