Add-1 (Laplace) smoothing
We have used Maximum Likelihood Estimation (MLE) to train the parameters of an N-gram model. The problem with MLE is that it assigns zero probability to unknown (unseen) words and N-grams. Because MLE estimates probabilities purely from counts in the training corpus, any word or N-gram that appears in the test set but not in the training set has a count of zero, and therefore a probability of zero.
To eliminate these zero probabilities, we apply smoothing. Smoothing takes some probability mass from the events seen in training and assigns it to unseen events. Add-1 smoothing (also called Laplace smoothing) is a simple smoothing technique that adds 1 to the count of every n-gram in the training set before the counts are normalized into probabilities.
Example:
Recall that the unigram and bigram probabilities for a word w are calculated as follows:
P(w) = C(w) / N
P(w_n | w_{n-1}) = C(w_{n-1} w_n) / C(w_{n-1})
Here, P(w) is the unigram probability, P(w_n | w_{n-1}) is the bigram probability, C(w) is the count of occurrences of w in the training set, C(w_{n-1} w_n) is the count of the bigram (w_{n-1} w_n) in the training set, and N is the total number of word tokens in the training set.
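To make these formulas concrete, here is a minimal sketch in Python. The toy corpus and all names (tokens, unigram_counts, bigram_counts) are illustrative assumptions, not from the article; it simply counts tokens and bigrams and applies the MLE formulas above.
```python
from collections import Counter

# Toy training corpus (illustrative only)
tokens = "the cat sat on the mat the cat ate".split()

N = len(tokens)                                   # total word tokens = 9
unigram_counts = Counter(tokens)                  # C(w)
bigram_counts = Counter(zip(tokens, tokens[1:]))  # C(w_{n-1} w_n)

# MLE unigram probability: P(w) = C(w) / N
print(unigram_counts["the"] / N)                  # 3/9

# MLE bigram probability: P(w_n | w_{n-1}) = C(w_{n-1} w_n) / C(w_{n-1})
print(bigram_counts[("the", "cat")] / unigram_counts["the"])  # 2/3

# An unseen bigram such as ("cat", "on") gets probability 0 under MLE:
print(bigram_counts[("cat", "on")] / unigram_counts["cat"])   # 0.0
```
The last line shows the zero-probability problem that smoothing is meant to fix.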
Add-1 smoothing for unigrams
P_Laplace(w) = (C(w) + 1) / (N + |V|)
Here, N is the total number of word tokens in the training set and |V| is the size of the vocabulary, that is, the number of unique word types in the training set.
Since we add 1 to the numerator for every one of the |V| word types, we must also add |V| to the denominator so that the smoothed probabilities still sum to 1.
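As a rough sketch of the unigram case, using the same illustrative toy corpus as before (the corpus and the function name laplace_unigram are assumptions for this example):
```python
from collections import Counter

# Toy training corpus (illustrative only)
tokens = "the cat sat on the mat the cat ate".split()
unigram_counts = Counter(tokens)
N = len(tokens)        # 9 word tokens
V = len(set(tokens))   # 6 unique word types

def laplace_unigram(w):
    # P_Laplace(w) = (C(w) + 1) / (N + |V|)
    return (unigram_counts[w] + 1) / (N + V)

print(laplace_unigram("the"))   # (3 + 1) / (9 + 6) ~= 0.267
print(laplace_unigram("dog"))   # unseen word: 1 / (9 + 6) ~= 0.067, no longer zero
```
Note that an unseen word now receives a small non-zero probability instead of zero.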
Add-1 smoothing for bigrams
P_Laplace(w_n | w_{n-1}) = (C(w_{n-1} w_n) + 1) / (C(w_{n-1}) + |V|)
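A similar sketch for the bigram case, again on the illustrative toy corpus (laplace_bigram is an assumed helper name):
```python
from collections import Counter

# Toy training corpus (illustrative only)
tokens = "the cat sat on the mat the cat ate".split()
unigram_counts = Counter(tokens)
bigram_counts = Counter(zip(tokens, tokens[1:]))
V = len(set(tokens))   # 6 unique word types

def laplace_bigram(prev, w):
    # P_Laplace(w_n | w_{n-1}) = (C(w_{n-1} w_n) + 1) / (C(w_{n-1}) + |V|)
    return (bigram_counts[(prev, w)] + 1) / (unigram_counts[prev] + V)

print(laplace_bigram("the", "cat"))   # (2 + 1) / (3 + 6) ~= 0.333
print(laplace_bigram("cat", "on"))    # unseen bigram: 1 / (2 + 6) = 0.125, no longer zero
```
The unseen bigram ("cat", "on"), which had probability 0 under MLE, now gets a small non-zero probability.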
*************************