Machine Learning MCQ - Risk involved in tuning the hyperparameters using a test set
1. What is the risk with tuning hyper-parameters using a test dataset?
a) Model will overfit the test set
b) Model will underfit the test set
c) Model will overfit the training set
d) Model will perform in a balanced way
Answer: (a) Model will overfit the test set
The model will not generalize well to unseen data because it overfits the test set. Tuning hyper-parameters against the test set means the chosen values may overfit to that particular set. If the same test set is then used to estimate performance, the estimate will be overly optimistic. The test set should be used only for the final evaluation, never for parameter tuning. Using a separate validation set for tuning and reserving the test set for measuring performance gives an unbiased, realistic estimate of how the model will perform on unseen data.
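The split described above can be sketched in Python. This is a minimal illustration (assuming scikit-learn is available, with a synthetic dataset and decision-tree depth as the hyper-parameter to tune): the validation set drives the tuning loop, and the test set is touched exactly once at the end.

```python
# Minimal sketch: tune on a validation set, evaluate once on the test set.
# Dataset and model choice are illustrative assumptions, not from the article.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic labels

# 60% train, 20% validation, 20% test
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Tune the hyper-parameter (tree depth) using the VALIDATION set only
best_depth, best_acc = None, -1.0
for depth in [1, 2, 3, 5, 8]:
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    acc = accuracy_score(y_val, model.predict(X_val))
    if acc > best_acc:
        best_depth, best_acc = depth, acc

# The test set is used exactly once, for the final unbiased estimate
final_model = DecisionTreeClassifier(max_depth=best_depth, random_state=0).fit(X_train, y_train)
test_acc = accuracy_score(y_test, final_model.predict(X_test))
print("best depth:", best_depth)
print("test accuracy: %.3f" % test_acc)
```

Because the test set played no role in choosing `best_depth`, `test_acc` is an honest estimate; reusing the test set inside the tuning loop would have inflated it.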
What are hyper-parameters? Hyper-parameters are parameters whose values control the learning process and thereby determine the model parameters that a learning algorithm ends up learning. Their values cannot be calculated from the data. Examples include the number of clusters in clustering, the number of hidden layers in a neural network, and the depth of a decision tree.
What is hyper-parameter tuning? Hyper-parameter tuning is the process of choosing the combination of hyper-parameters that maximizes model performance. It works by running multiple trials: each trial is a complete execution of the training application with a particular set of values for the chosen hyper-parameters, set within limits you specify. Once finished, the process reports the set of hyper-parameter values best suited to the model.
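The trial process described above can be sketched with scikit-learn's `GridSearchCV` (one possible tool for this, chosen here as an assumption): each parameter combination from the specified grid is one trial, scored by cross-validation, and the best combination is reported at the end.

```python
# Hedged sketch of hyper-parameter tuning via grid search.
# The model, grid, and dataset are illustrative choices, not from the article.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=8, random_state=0)

# The search space: the limits you specify for each hyper-parameter
param_grid = {"max_depth": [2, 4, 6], "min_samples_leaf": [1, 5, 10]}

search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)  # runs one trial per combination (3 x 3 = 9 here)

print("best hyper-parameters:", search.best_params_)
print("best CV score: %.3f" % search.best_score_)
```

Note that the cross-validation folds here act as the validation data; a held-out test set should still be kept aside for the final performance estimate.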