Machine Learning MCQ - Application of dropout in a neural network to reduce overfitting
1. Which of the following is true about dropout?
a) Dropout leads to sparsity in the trained weights
b) At test time, dropout is applied with inverted keep probability
c) The larger the keep probability of a layer, the stronger the regularization of the weights in that layer
d) Dropout is applied to different layers of a neural network, but not the output layer
Answer: (d) Dropout is applied to different layers of a neural network, but not the output layer
The term "dropout" refers to dropping out nodes (in the input and hidden layers) of a neural network. All forward and backward connections of a dropped node are temporarily removed, creating a new network architecture out of the parent network. Each node is dropped with dropout probability p.
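To illustrate, here is a minimal sketch of applying a dropout mask to a layer's activations during training, using NumPy. The activations, layer sizes, and keep probability are made up for the example; this shows the inverted-dropout variant, which scales the surviving units so that nothing needs to change at test time:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(activations, keep_prob):
    """Inverted dropout: zero each unit with probability (1 - keep_prob)
    and scale the survivors by 1/keep_prob so the expected activation
    is unchanged. Used only during training."""
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

# Hypothetical hidden-layer activations: a batch of 4 examples, 5 units each
h = rng.standard_normal((4, 5))
h_train = dropout_forward(h, keep_prob=0.8)  # training: ~20% of units dropped
h_test = h                                   # testing: dropout is not applied
```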
Why is dropout not used in the output layer? Dropout is typically not used in the output layer of a neural network because the output layer is responsible for making the final predictions, and this layer should produce deterministic and stable results. Randomly dropping output units would interfere with the reliability of those predictions.
Alternative to dropout at the output layer? If needed, one could use another regularization technique (such as L2 weight decay) that does not affect the stability of the predictions at the output layer.
Why not option (b)? Keep probability is the probability of retaining a neuron during dropout. With inverted dropout, the surviving activations are scaled by 1/keep probability during training, so no scaling is needed at test time; dropout is applied during training but not during the testing phase.
Why not option (c)? The statement is reversed: the larger the keep probability of a layer (say 95% of neurons are kept), the fewer neurons are dropped and the weaker the regularization of that layer. With a very large keep probability, dropout may not be effective and the network may still overfit.
Related links:
What is dropout in a neural network and why is it used?
Where can we use dropout in a neural network?
Why can't we use the dropout technique in the output layer of a neural net?
If you need a technique to overcome overfitting in the output layer, what can you do?