Chain-of-Thought Prompting on LLMs

It is amazing to see how human-like reasoning abilities are inherently built into large language models. The paper referenced below shows that LLMs are better at solving reasoning problems when asked to go step by step, that is, to produce a chain of thought.

Here is an example which is quite self-explanatory:

User:

I had 27 lemons, 20 were used to make lemon juice, then I bought 3 bags of lemon each with 10 lemons in it. Now how many lemons do I have?
 

ChatGPT:

Let's break it down step by step:

Initially, you had 27 lemons.
You used 20 lemons to make lemon juice, so you have 27 - 20 = 7 lemons left.
You bought 3 bags of lemons, and each bag contains 10 lemons. So, you added 3 * 10 = 30 lemons to your existing 7 lemons.
Now, to find out how many lemons you have in total:

7 (initial lemons) + 30 (lemons from the bags) = 37 lemons

You now have 37 lemons.

 

User:

for further problems I give, please do not use steps, okay?
 

ChatGPT:

Of course, I'll provide direct answers without showing the steps for future problems. Just let me know what you'd like to know or calculate, and I'll give you the answer directly.
 

User:

I had 27 lemons, 20 were used to make lemon juice, then I bought 3 bags of lemon each with 10 lemons in it. Now how many lemons do I have?
 

ChatGPT:

You currently have 37 lemons.
 

User:

I had 90 apples, I ate 27 of them, then I bought 5 crates of apple each with 50 apples in it. Now how many apples do I have now?
 

ChatGPT:

You currently have 363 apples.

 

Interestingly, ChatGPT's default GPT-3.5 model appears to use chain of thought by default. When told not to reason in steps, it goes wrong, as seen in the second (apples) problem: the correct answer is 90 - 27 + 5 × 50 = 313, not 363.
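As a quick sanity check, the apples answer is easy to verify, and the zero-shot chain-of-thought trick itself amounts to appending a step-by-step cue to the prompt. A minimal sketch in Python (no particular chat API is assumed; the prompt string is just the example from the transcript above):

# The apples problem from the chat above, computed directly.
apples = 90 - 27 + 5 * 50
print(apples)  # 313, not the 363 the model answered without steps

# Zero-shot chain-of-thought: reuse the same question, but append a cue
# that asks the model to reason step by step before answering.
question = ("I had 90 apples, I ate 27 of them, then I bought 5 crates of "
            "apples each with 50 apples in it. Now how many apples do I have?")
cot_prompt = question + "\nLet's think step by step."
print(cot_prompt)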

References:

  1. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, J. Wei et al.
  2. Continue this on ChatGPT

Configure Anaconda on Emacs

Perhaps my quest for an ultimate IDE ends with Emacs. My goal was to use Emacs as a full-fledged Python IDE. This post describes how to set up Anaconda on Emacs. My setup:

OS: Trisquel 8.0
Emacs: GNU Emacs 25.3.2

Quick Key Guide (see the full guide):

C-x = Ctrl + x
M-x = Alt + x
RET = ENTER

1. Downloading and installing Anaconda

1.1 Download: Download Anaconda from here. Choose the Python 3.x version, as Python 2 runs out of support in 2020. You don't need Python 3.x already installed on your machine; it will be installed by this install script.

1.2 Install:

cd ~/Downloads
bash Anaconda3-2018.12-Linux-x86.sh

2. Adding Anaconda to Emacs

2.1 Adding MELPA to Emacs

We will use the Emacs package anaconda-mode, which is available in the MELPA repository. Emacs 25 requires this repository to be added explicitly. Important: follow this post on how to add MELPA to Emacs.

2.2 Installing the anaconda-mode package on Emacs

M-x package-install RET
anaconda-mode RET

2.3 Configure anaconda-mode in Emacs

echo "(add-hook 'python-mode-hook 'anaconda-mode)" >> ~/.emacs.d/init.el

3. Running your first script on Anaconda from Emacs

3.1 Create new .py file

C-x C-f
HelloWorld.py RET

3.2 Add the code

print("Hello World from Emacs")

3.3 Running it

C-c C-p
C-c C-c

Output

Python 3.7.1 (default, Dec 14 2018, 19:46:24)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.

python.el: native completion setup loaded
Hello World from Emacs

I was encouraged to use Emacs by Codingquark; errors and omissions should be reported in the comments. Cheers!

NLTK Example : Detecting Geographic Setting of Sherlock Holmes Stories

As a young adult nothing thrilled me more than Jeremy Brett's performance as Sherlock Holmes. "You know my methods, apply them!" he would say. So let's try to play Sherlock ourselves. We use the Natural Language Toolkit, or NLTK, to guess the setting of a Sherlock Holmes story in terms of its geographic location. In this NLTK example our approach is very naive: identify the most frequently mentioned place in the story. We use Named Entity Recognition (NER) to identify geopolitical entities (GPE) and pick the most frequent of them. This approach is very naive because there is no pre-processing of the text, and GPEs may include concepts other than geographic locations, such as nationalities. But we want to keep this really simple and fun. So here we go:

Code:

#NLTK example
#This code reads one text file at a time

from nltk import word_tokenize, pos_tag, ne_chunk

#read a text file
text = open('filepath/file.txt')

#replace \n with a space
data = text.read().replace('\n', ' ')

chunked = ne_chunk(pos_tag(word_tokenize(data)))

#extract GPEs
extracted = []
for chunk in chunked:
    if hasattr(chunk, 'label'):
        if chunk.label() == 'GPE':
            extracted.append(' '.join(c[0] for c in chunk))

#extract the most frequent GPE
from collections import Counter
count = Counter(extracted)
print(count.most_common(1))

Results:

Sr. | Story                                 | Extracted Location | Actual Setting | Result
1   | The Adventure of the Dancing Men      | [('Norfolk', 14)]  | Norfolk        | Success
2   | The Adventure of the Solitary Cyclist | [('Farnham', 6)]   | Farnham        | Success
3   | A Scandal in Bohemia                  | [('Bohemia', 6)]   | Bohemia        | Success
4   | The Red-Headed League                 | [('London', 7)]    | London         | Success
5   | The Final Problem                     | [('London', 8)]    | London         | Success
6   | The Greek Interpreter                 | [('Greek', 15)]    | Greece         | Fail

We got 5 out of 6 predictions correct! These results are not discouraging, and we may think of using this code in a more serious application.

References:

  1. Sherlock Holmes Stories in Plain Text
  2. NLTK Documentation

Extracting Text from PDF Using Apache Tika - Learn NLP

Most NLP applications need to look beyond text and HTML documents, as information may be contained in PDF, ePub or other formats. The Apache Tika toolkit extracts metadata and text from such document formats, and it comes with a REST-based Python library. In this example we'll extract text from a PDF using the Apache Tika toolkit.

Tika Installation
pip install tika

Extracting Text

from tika import parser

#Replace document.pdf with your filename
text = parser.from_file('document.pdf')

print(text['content'])
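The same parser call also returns the document's metadata alongside the content. A minimal standalone sketch (which metadata keys appear depends on the document, so the fields noted in the comments are only examples):

from tika import parser

#from_file returns a dict with 'metadata' and 'content' keys
parsed = parser.from_file('document.pdf')

#metadata such as Content-Type, author or creation date, depending on the file
print(parsed['metadata'])

#first 200 characters of the extracted text (assumes the PDF has extractable text)
print(parsed['content'][:200])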

Tika makes it very convenient to extract text not just from PDFs but from more than ten other formats. Here is a list of all supported document formats.

References:

  1. Apache Tika Home Page
  2. PyPi Tika 1.15 Package

Improving fastText Classifier

This post is in continuation of the previous post Text Classification With Python Using fastText. It describes how to improve the fastText classifier using various techniques.

More on Precision and Recall

Precision: the number of correct labels among the labels predicted by the classifier. Recall: the number of real labels successfully predicted by the classifier. Example:

Why not put knives in the dishwasher?

This question has three labels on StackExchange: equipment, cleaning and knives. Let us obtain the top five labels predicted by our model (k = number of top labels to predict):

text = ['Why not put knives in the dishwasher?']
labels = classifier.predict(text, k=5)
print(labels)

This gives us food-safety, baking, equipment, substitutions and bread. One out of five labels predicted by the model is correct, giving a precision of 0.20. Out of the three real labels, only one is predicted by the model, giving a recall of 0.33.
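As a quick check of that arithmetic, the same numbers fall out of a few lines of Python (the label sets below are simply the ones from the example above):

predicted = {'food-safety', 'baking', 'equipment', 'substitutions', 'bread'}
actual = {'equipment', 'cleaning', 'knives'}

correct = predicted & actual                 # labels that are both predicted and real
precision = len(correct) / len(predicted)    # 1 / 5 = 0.20
recall = len(correct) / len(actual)          # 1 / 3 ≈ 0.33
print(precision, recall)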

Improving the Model

We ran the model with default parameters and the training data as it is. Now let's tweak a little bit. We will employ the following techniques to improve it:

  • Preprocessing the data
  • Changing the number of epochs (using the option epoch, standard range 5 - 50)
  • Changing the learning rate (using the option lr, standard range 0.1 - 1.0)
  • Using word n-grams (using the option wordNgrams, standard range 1 - 5)

We will apply these techniques and see the improvement in precision and recall at each stage.

Preprocessing the Data

Preprocessing includes removing special characters and converting the entire text to lower case.

cat cooking.stackexchange.txt | sed -e "s/\([.\!?,'/()]\)/ \1 /g" | tr "[:upper:]" "[:lower:]" > cooking.preprocessed.txt
head -n 12404 cooking.preprocessed.txt > cooking.train
tail -n 3000 cooking.preprocessed.txt > cooking.valid

classifier = fasttext.supervised('cooking.train', 'model_cooking')
result = classifier.test('cooking.valid')
print(result.precision)
#0.161
print(result.recall)
#0.0696266397578

So after preprocessing, precision and recall have improved.

More Epochs and an Increased Learning Rate

The number of epochs can be set with the epoch parameter. The default value is 5; we are going to set it to 25. More epochs will increase training time, but it is worth it.

classifier = fasttext.supervised('cooking.train', 'model_cooking', epoch=25)
result = classifier.test('cooking.valid')
print(result.precision)
#0.493
print(result.recall)
#0.213204555283

Now let’s change learning rate with lr parameter:

classifier = fasttext.supervised('cooking.train', 'model_cooking', lr=1.0)
result = classifier.test('cooking.valid')
print(result.precision)
#0.546
print(result.recall)
#0.236125126135

Results with both epoch and lr together:

classifier = fasttext.supervised('cooking.train', 'model_cooking', epoch=25, lr=1.0)
result = classifier.test('cooking.valid')
print(result.precision)
#0.565
print(result.recall)
#0.244630243621

Using Word n-grams

Word n-grams capture the ordering of tokens in the text. See examples of word n-grams on Wikipedia.

#word_ngrams corresponds to the wordNgrams option listed above; 2 is a common choice
classifier = fasttext.supervised('cooking.train', 'model_cooking', epoch=25, lr=1.0, word_ngrams=2)
result = classifier.test('cooking.valid')
print(result.precision)
#???
print(result.recall)
#???

I am unable to show results for word n-grams because Python on my system keeps crashing. I will update the post asap.

References:

  1. Introduction to fastText
  2. Text Classification With Python Using fastText
  3. PyPI, fasttext 0.7.0, Python.org
  4. fastText, Text classification with fastText
  5. Cooking StackExchange, cooking.stackexchange.com

Tutorial: Text Classification With Python Using fastText

Text classification is an important task with many applications, including sentiment analysis and spam filtering. This article describes supervised text classification using the fastText Python package. You may want to read Introduction to fastText first. Note: shell commands should not be confused with Python code.

Get the Training Data Set: We start by training the classifier with training data. It contains questions from cooking.stackexchange.com and their associated tags on the site. Let's build a classifier that automatically recognizes the topic of a question and assigns a label to it. So, first we download the data.

wget https://s3-us-west-1.amazonaws.com/fasttext-vectors/cooking.stackexchange.tar.gz && tar xvzf cooking.stackexchange.tar.gz

head cooking.stackexchange.txt

As the head command shows, each line of the text file contains a list of labels followed by the corresponding document. fastText recognizes labels starting with the __label__ prefix, and this file is already in that shape. The next task is to train the classifier.
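For reference, a labelled line in this format looks something like the following (an illustrative example of the format, not necessarily a line taken verbatim from the file):

#one training example: two labels, then the question text
line = "__label__sauce __label__cheese How much does potato starch affect a cheese sauce recipe?"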

Training the Classifier: Let's check the size of the training data set:

wc cooking.stackexchange.txt

15404 169582 1401900 cooking.stackexchange.txt

It contains 15404 examples. Let’s split it into a training set of 12404 examples and a validation set of 3000 examples:

head -n 12404 cooking.stackexchange.txt > cooking.train

tail -n 3000 cooking.stackexchange.txt > cooking.valid

Now let's train using cooking.train:

import fasttext
classifier = fasttext.supervised('cooking.train', 'model_cooking')

Our First Prediction:

label = classifier.predict('Which baking dish is best to bake a banana bread ?')
print(label)

label = classifier.predict('Why not put knives in the dishwasher? ')
print(label)

It may come up with tags like baking and food-safety respectively! The second tag is not relevant, which points out that our classifier is poor in quality. Let's test its quality next.

Testing Precision and Recall: Precision and recall are used to measure the quality of models in pattern recognition and information retrieval. See this Wikipedia article. Let's test the model against the cooking.valid data:

result = classifier.test('cooking.valid')
print(result.precision)
print(result.recall)
print(result.nexamples)

There are a number of ways we can improve our classifier; see the next post: Improving fastText Classifier

References:

  1. PyPI, fasttext 0.7.0, Python.org
  2. fastText, Text classification with fastText
  3. Cooking StackExchange, cooking.stackexchange.com

If you think this post was helpful, kindly share it with others or say thank you in the comments below. It helps!

A Note on Vernacular Chatbots

The idea of vernacular chatbots struck me yesterday when I decided to try Niki.ai to pay my electricity bill. Niki is India's first chatbot for shopping. Shopping with a chatbot is a totally different experience: the user simply tells the bot what she wants. Following is the complete chat I had with Niki. On the left-hand side are Niki's messages; mine are on the right in bold face.

Hi i am Niki! :) I am here to make your life simple & convenient. You can chat with me like you chat with your friend.Can i show you how the booking experience on chat can be 300% faster than other apps ? (in 10 seconds!)

Pay my electricity bill

Hello ! Let me assist you in paying your electricity bill. You can type cancel at any point to exit this conversation Please give me: your electricity provider. For e.g BESCOM

PGVCL

Understood. Picking your electricity provider as Paschim Gujarat Vij Company. I request you to give me: your Customer Account Number. It is 1-11 digits long (Alphanumeric)

*********** (my customer number)

Sure. Picking your Customer Account Number as: *********** I request you to provide: the bill amount.

**** (amount)

Got it. Picking the amount as **** The summary of your order is: Operator: Paschim Gujarat Vij Company Customer Account Number: *********** Amount: **** rs Should I proceed to payment?

Yes

Fantastic. Please pay from the below options We have received the payment, going to process the order! You will get a confirmation in the next 5 mins. Thank you, your Electricity bill payment was successful You will receive a confirmation message shortly. It will get reflected within 3-4 business days. Your order id is ****************. For any problems or queries with your order, please email us at support@niki.ai

how are you doing otherwise?

As you can see, Niki is very good at enhancing your shopping experience, but when I asked her about her well-being she did not bother to reply. Technology can reach those who do not know English through vernacular chatbots. Going a step further, if bots could actually "speak" on the phone, holding a real conversation with the user in a vernacular language, people who cannot read and write could also shop online. There is enough progress in speech recognition for English, but local research efforts are required in vernacular languages to achieve this. There is surely a huge market for this, as more than 92% of the population has a mobile phone while only 74% of the population is literate*. That is a clear gap of 18%, or more than 238 million people*!

* Literacy rate as per the 2011 census.

References:

  1. List of Countries By Mobile Phone In Use, Wikipedia
  2. Status of Literacy, Census of India 2011, census.gov.in

Getting Started with fastText : Learn NLP

This article is for those who are getting started with fastText. fastText is a text representation and classification library developed by Facebook AI Research (FAIR). Classification of text documents is an important natural language processing (NLP) task. The library is written in C++ but can be accessed through a Python interface. It is massively fast. See the references for the two defining papers. In this article we'll cover the Python installation of fastText. pip and cython are prerequisites; install them if they are not already installed:

Installing pip
sudo apt-get install python-pip

Installing Cython
pip install cython

Finally, install fastText, which may also download other missing packages such as numpy for us.

Installing fastText
pip install fasttext
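Once the installation finishes, a quick check from the Python interpreter confirms the package is usable. A minimal sketch, assuming the fasttext 0.7.x wrapper installed above:

#the module should import without errors and expose the training entry point
import fasttext
print(hasattr(fasttext, 'supervised'))  #expected: True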

And we are done! See the next post, Text Classification With Python Using fastText.

References:

  1. fastText - Facebook Research
  2. Representation: Enriching Word Vectors with Subword Information, Piotr Bojanowski, Edouard Grave, Armand Joulin and Tomas Mikolov, 2016
  3. Classification: Bag of Tricks for Efficient Text Classification, Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov, 2016

Programming Computers to Read Stories

Can a computer read stories the way humans do? Of course computers can read from files much faster and more accurately, but the second part of the question is more important. When we read a story we understand it; we read the feelings of the protagonist. The challenge here is to make computers do the same. If computers could read this way, they could also write stories the same way. Imagine a robot telling a bedtime story to a sleepless kid! University of Vermont researchers show how this can be achieved. I will try to simplify here what they did. They measured the rise and fall of the emotional trajectory of the story. How can this be done? The protagonist's actions and words are fed to a sentiment analysis function, which identifies the sentiment in a given line. Sentiments are assigned a happiness weight on a scale of 100, which helps create a graph showing the rise and fall in the emotional state of the protagonist. The resulting graph is self-explanatory, well, unless you are not into Harry Potter.
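To make the idea concrete, here is a minimal sketch of such an emotional arc in Python using NLTK's VADER sentiment analyzer; the choice of analyzer, the window size, and the 0-100 rescaling are my own assumptions for illustration, not the study's exact method:

#Requires: nltk.download('punkt') and nltk.download('vader_lexicon')
from nltk import sent_tokenize
from nltk.sentiment.vader import SentimentIntensityAnalyzer

def emotional_arc(story_text, window=20):
    #Slide a window of sentences over the story, score each window's
    #sentiment, and rescale it to a 0-100 "happiness" weight.
    sentences = sent_tokenize(story_text)
    analyzer = SentimentIntensityAnalyzer()
    arc = []
    for i in range(0, len(sentences), window):
        chunk = ' '.join(sentences[i:i + window])
        score = analyzer.polarity_scores(chunk)['compound']  #in [-1, 1]
        arc.append((score + 1) * 50)
    return arc

#Usage: pass the full text of a story and plot the returned list.
#arc = emotional_arc(open('story.txt').read())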

The study could identify six arcs underlying all stories. It was in fact inspired by Kurt Vonnegut, and multiple people from literature have talked about a few basic shapes common to all stories. There can be huge computing applications here, but I will discuss them in a separate post.

References:

  1. Kurt Vonnegut
  2. The Six Main Arcs in Storytelling, as Identified by an A.I., Adrienne LaFrance, TheAtlantic.com
  3. The emotional arcs of stories are dominated by six basic shapes, Andrew Reagan et al.