NLP TRUE/FALSE Quiz Questions - SET 03
1. Models that assign probabilities to sequences of words are called Support Vector Models.
(a) TRUE (b) FALSE
Answer: (b) FALSE
Models that assign probabilities to sequences of words are called language models or LMs.
2. The number of trigrams in the following sentence is 4: “calculates the similarity between two strings”.
(a) TRUE (b) FALSE
Answer: (a) TRUE
There are four trigrams (3-grams) in the given sentence if we do not include the start and end markers. The trigrams are: “calculates the similarity”, “the similarity between”, “similarity between two” and “between two strings”.
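For illustration, here is a minimal Python sketch (not part of the original answer) that enumerates the word trigrams of the sentence; the helper name extract_ngrams is purely illustrative.

# Minimal sketch: enumerate word n-grams of a sentence (no start/end markers).
def extract_ngrams(sentence, n):
    tokens = sentence.split()
    # Slide a window of length n over the token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

trigrams = extract_ngrams("calculates the similarity between two strings", 3)
for tg in trigrams:
    print(" ".join(tg))   # prints the four trigrams listed above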
3. We normalize the counts of words in an n-gram model so that the values fall between 0 and 100.
(a) TRUE (b) FALSE
Answer: (b) FALSE
We normalize the counts of words in an n-gram model so that the values fall between 0 and 1. We get the maximum likelihood estimate (MLE) for the parameters of an n-gram model by getting counts from a corpus and normalizing those counts so that they lie between 0 and 1.
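As a small illustration of this normalization (a toy sketch, not from the original answer), the following Python code turns raw unigram counts from a made-up sentence into relative frequencies, all of which lie between 0 and 1 and together sum to 1.

# Minimal sketch: normalize raw unigram counts into relative frequencies in [0, 1].
from collections import Counter

tokens = "the cat sat on the mat".split()
counts = Counter(tokens)                       # raw counts
total = sum(counts.values())
probs = {w: c / total for w, c in counts.items()}

print(probs["the"])          # 2/6 ≈ 0.333 -- every value lies between 0 and 1
print(sum(probs.values()))   # 1.0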
4. To calculate the bigram probability of a word w_n given the previous word w_(n-1), we count the occurrences of the word sequence “w_(n-1) w_n” and normalize this by the count of w_(n-1).
(a) TRUE (b) FALSE
Answer: (a) TRUE
To compute a particular bigram probability of a word y given a previous word x, we compute the count of the bigram C(x y) and normalize it by the count of the unigram C(x):
P(y|x) = count(x y) / count(x)
This is called the Maximum Likelihood Estimate (MLE).
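The following Python sketch (a toy illustration added here, not part of the original answer) estimates this MLE bigram probability on a tiny made-up corpus; the helper name bigram_prob is purely illustrative.

# Minimal sketch: MLE bigram probability P(y | x) = count(x y) / count(x).
from collections import Counter

tokens = "<s> i am sam </s> <s> sam i am </s>".split()
unigram_counts = Counter(tokens)
bigram_counts = Counter(zip(tokens, tokens[1:]))

def bigram_prob(x, y):
    # Count of the bigram "x y" normalized by the count of the unigram x.
    return bigram_counts[(x, y)] / unigram_counts[x]

print(bigram_prob("i", "am"))    # count("i am") / count("i") = 2/2 = 1.0
print(bigram_prob("<s>", "i"))   # count("<s> i") / count("<s>") = 1/2 = 0.5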
5. It is better to compute the probabilities in a language model as log probabilities.
(a) TRUE (b) FALSE
Answer: (a) TRUE
Since probabilities are (by definition) less than or equal to 1, the more probabilities we multiply together, the smaller the product becomes. Multiplying enough n-grams together would result in numerical underflow. By using log probabilities instead of raw probabilities, we get numbers that are not as small.
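A small Python sketch (added for illustration, with made-up numbers) shows the underflow problem and how summing log probabilities avoids it.

# Minimal sketch: multiplying many small probabilities underflows to zero,
# while adding their log probabilities stays representable.
import math

probs = [0.001] * 150            # 150 n-gram probabilities of 0.001 each

product = 1.0
for p in probs:
    product *= p
print(product)                   # 0.0 -- underflows in double precision

log_sum = sum(math.log(p) for p in probs)
print(log_sum)                   # about -1036.2, perfectly representable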