Chain-of-Thought Prompting on LLMs

It is fascinating to see how human-like reasoning abilities emerge in large language models. The paper referenced below (Wei et al.) shows that LLMs solve reasoning problems far more reliably when asked to work step by step, i.e. with chain-of-thought prompting.
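In the paper, chain-of-thought prompting means prepending a few worked exemplars whose answers spell out the intermediate reasoning before the final answer, so the model imitates that style on the new question. A rough sketch of such a prompt (the first exemplar paraphrases the paper's running tennis-ball example; the second question is the lemon problem from the transcript below):

```python
# A few-shot chain-of-thought prompt: the exemplar answer spells out the
# intermediate reasoning, nudging the model to do the same for the new question.
COT_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls.
5 + 6 = 11. The answer is 11.

Q: I had 27 lemons, 20 were used to make lemon juice, then I bought
3 bags of lemons, each with 10 lemons. How many lemons do I have now?
A:"""
```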

Here is an example which is quite self-explanatory:

User:

I had 27 lemons, 20 were used to make lemon juice, then I bought 3 bags of lemon each with 10 lemons in it. Now how many lemons do I have?
 

ChatGPT:

Let's break it down step by step:

Initially, you had 27 lemons.
You used 20 lemons to make lemon juice, so you have 27 - 20 = 7 lemons left.
You bought 3 bags of lemons, and each bag contains 10 lemons. So, you added 3 * 10 = 30 lemons to your existing 7 lemons.
Now, to find out how many lemons you have in total:

7 (initial lemons) + 30 (lemons from the bags) = 37 lemons

You now have 37 lemons.

 

User:

for further problems I give, please do not use steps, okay?
 

ChatGPT:

Of course, I'll provide direct answers without showing the steps for future problems. Just let me know what you'd like to know or calculate, and I'll give you the answer directly.
 

User:

I had 27 lemons, 20 were used to make lemon juice, then I bought 3 bags of lemon each with 10 lemons in it. Now how many lemons do I have?
 

ChatGPT:

You currently have 37 lemons.
 

User:

I had 90 apples, I ate 27 of them, then I bought 5 crates of apple each with 50 apples in it. Now how many apples do I have now?
 

ChatGPT:

You currently have 363 apples.

 

Interestingly, ChatGPT's default GPT-3.5 model appears to use chain-of-thought reasoning by default. When asked to skip the steps, it goes wrong, as seen in the second problem: the correct answer is 90 - 27 + 5 × 50 = 313 apples, not 363.
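To reproduce the comparison programmatically, one could send the same question once with a step-by-step instruction and once with steps forbidden. A minimal sketch using the OpenAI Python client (the model name and client setup are assumptions; substitute whatever model and SDK you use):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = (
    "I had 90 apples, I ate 27 of them, then I bought 5 crates of apples, "
    "each with 50 apples in it. How many apples do I have now?"
)

def ask(instruction: str) -> str:
    """Send the question with the given instruction and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the post used default ChatGPT (GPT-3.5)
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

# Chain-of-thought: ask the model to reason step by step before answering.
print(ask("Think step by step and show your working before the final answer."))

# Direct answer: forbid intermediate steps, as in the transcript above.
print(ask("Answer with the final number only. Do not show any steps."))
```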

References:

  1. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, J. Wei et al., 2022
  2. Continue this on ChatGPT
Author: Devji Chhanga

Posted on: 2023-09-04

Updated on: 2023-09-04
