This post is a continuation of the previous post, Text Classification With Python Using fastText. It describes how to improve a fastText classifier using various techniques.
More on Precision and Recall
Precision: the number of correct labels out of the total labels predicted by the classifier. Recall: the number of actual labels successfully predicted by the classifier. Example:
Why not put knives in the dishwasher?
This question has three labels on StackExchange: equipment, cleaning and knives. Let us obtain the top five labels predicted by our model (k = number of top labels to predict):
text = ['Why not put knives in the dishwasher?']
labels = classifier.predict(text, k=5)
This gives us food-safety, baking, equipment, substitutions and bread. One of the five labels predicted by the model is correct, giving a precision of 0.20. Of the three real labels, only one is predicted by the model, giving a recall of 0.33.
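To make the arithmetic concrete, here is a minimal sketch in plain Python (with the labels hard-coded from the example above) that computes these two numbers:

true_labels = {'equipment', 'cleaning', 'knives'}
predicted_labels = {'food-safety', 'baking', 'equipment', 'substitutions', 'bread'}

correct = true_labels & predicted_labels  # labels the model got right
precision = float(len(correct)) / len(predicted_labels)  # 1 / 5 = 0.20
recall = float(len(correct)) / len(true_labels)          # 1 / 3 = 0.33
print(precision, recall)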
Improving the Model
We ran the model with default parameters and the training data as it is. Now let's tweak it a little. We will employ the following techniques to improve the model:
- Preprocessing the data
- Changing the number of epochs (using the option epoch, standard range 5 - 50)
- Changing the learning rate (using the option lr, standard range 0.1 - 1.0)
- Using word n-grams (using the option wordNgrams, standard range 1 - 5)
We will apply these techniques and see the improvement in precision and recall at each stage.
Preprocessing the Data
Preprocessing includes removing special characters and converting the entire text to lower case.
cat cooking.stackexchange.txt | sed -e "s/\([.\!?,'/()]\)/ \1 /g" | tr "[:upper:]" "[:lower:]" > cooking.preprocessed.txt
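After preprocessing, the data needs to be split into training and validation sets again before retraining. Here is a minimal sketch, assuming the file names from the previous post and a 3,000-example validation set (the split size used in the official fastText tutorial):

import fasttext

# Re-split the preprocessed data: last 3000 lines for validation, the rest for training.
with open('cooking.preprocessed.txt') as f:
    lines = f.readlines()
with open('cooking.train', 'w') as f:
    f.writelines(lines[:-3000])
with open('cooking.valid', 'w') as f:
    f.writelines(lines[-3000:])

# Retrain with default parameters and evaluate on the validation set.
classifier = fasttext.supervised('cooking.train', 'model_cooking')
result = classifier.test('cooking.valid')
print(result.precision, result.recall)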
So after preprocessing, precision and recall have improved.
More Epochs and Increased Learning Rate
The number of epochs can be set using the epoch parameter. The default value is 5. We are going to set it to 25. More epochs will increase the training time, but it will be worth it.
classifier = fasttext.supervised('cooking.train', 'model_cooking', epoch=25)
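To see the improvement at each stage, the retrained classifier can be checked against the validation set the same way as before; the same two lines can be re-run after each of the training calls below:

result = classifier.test('cooking.valid')
print(result.precision, result.recall)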
Now let's change the learning rate with the lr parameter:
classifier = fasttext.supervised('cooking.train', 'model_cooking', lr=1.0)
Results with both epoch and lr together:
classifier = fasttext.supervised('cooking.train', 'model_cooking', epoch=25, lr=1.0)
Using Word n-grams
Word n-grams deal with the ordering of tokens in the text. See examples of word n-grams on Wikipedia.
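As a quick plain-Python illustration (not part of fastText), here are the word bigrams of our example question after the preprocessing step above:

sentence = 'why not put knives in the dishwasher ?'.split()
bigrams = [' '.join(pair) for pair in zip(sentence, sentence[1:])]
print(bigrams)
# ['why not', 'not put', 'put knives', 'knives in', 'in the', 'the dishwasher', 'dishwasher ?']

With word n-grams set to 2, fastText learns from these bigrams in addition to single words, so some of the word order information is preserved.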
Now let's train with word bigrams enabled:

# word_ngrams is this Python wrapper's name for the fastText CLI option wordNgrams
classifier = fasttext.supervised('cooking.train', 'model_cooking', epoch=25, lr=1.0, word_ngrams=2)
I am unable to show the results for word n-grams because Python keeps crashing on my system. I will update the post as soon as possible.