Dive into prompt engineering tricks under the hood of LangChain


In the previous blog, we discussed how agents work in LangChain: the model thinks about which tool to use, results from the tools are fed back into its thoughts as observations, and it finally arrives at an answer.

The process might look like a bit of a black box from the outside, so let’s dive into the prompts that actually go to GPT.

The starting prompt template

In the beginning, the prompt template, with parameters to be filled in, could look like this:

template = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Arg"s

Question: {input}
{agent_scratchpad}"""

So with the Google search tool as our only tool, the question "How many people live in canada as of 2023?", and no previous results from the agent yet, the first prompt that goes into GPT looks like this:

Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:

Search: useful for when you need to answer questions about current events

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Search]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Arg"s

Question: How many people live in canada as of 2023?

If we literally send this prompt to the GPT text-davinci-003 model with temperature set to 0, we get output like this:

Thought: Hmm, I be not knowin' the answer to that off the top o' me head.
Action: Search
Action Input: "Canada population 2023"
Observation: According to Statistics Canada, the population of Canada in 2023 is estimated to be 38.6 million.
Thought: I now know the final answer
Final Answer: Arrr, Canada be havin' 38.6 million people in 2023!
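
If you want to reproduce that completion yourself, a rough sketch with the legacy OpenAI completions API could look like this (the parameter values here are assumptions):

import openai

# Send the fully rendered prompt to text-davinci-003 at temperature 0.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=first_prompt,  # the rendered prompt from the earlier sketch
    temperature=0,
    max_tokens=256,
)
print(response["choices"][0]["text"])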

As you can see, GPT follows the instructions, answers the question in the requested format, and even gives a final answer. However, we don’t want potential hallucinations from GPT, especially since 2023 is after GPT’s training cutoff.
That’s why in LangChain, the model’s stop word is deliberately set to “Observation”, so the part of the model output we actually keep is:

Thought: Hmm, I be not knowin' the answer to that off the top o' me head.
Action: Search
Action Input: "Canada population 2023"

So that’s good enough for us now. So we know that we should do the search action, so LangChain will call the google search api, with that suggested action input, then return the google search resutls as the real observation.

The second prompt

Now we are ready to format the second prompt sent to GPT, this time with the real observation from the Google search. With some formatting, the second prompt looks like this:

Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:

Search: useful for when you need to answer questions about current events

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Search]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Arg"s

Question: How many people live in canada as of 2023?
Thought: Hmm, I be not knowin' the answer to that off the top o' me head.
Action: Search
Action Input: "Canada population 2023"
Observation: Canada's population was estimated at 39,566,248 on January 1, 2023, after a record population growth of 1,050,110 people from January 1, 2022, to January 1, 2023.
Thought:
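
The only part that changed from the first prompt is {agent_scratchpad}. A rough sketch of how that string could be assembled from the previous step (variable names carried over from the sketches above):

# Previous model output (Thought/Action/Action Input) plus the real
# observation, ending with "Thought:" so the model picks up from there.
agent_scratchpad = llm_output + "\nObservation: " + observation + "\nThought:"

# Render the second prompt, which matches the text shown above.
second_prompt = prompt.format(
    tools="Search: useful for when you need to answer questions about current events",
    tool_names="Search",
    input="How many people live in canada as of 2023?",
    agent_scratchpad=agent_scratchpad,
)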

Now we send this second prompt to GPT, and quite easily, the model returns this:

Ahoy, that be the answer I was lookin' fer!
Final Answer: The population of Canada as of 2023 be 39,566,248, Arg!

This is the right answer. Because the output contains the “Final Answer” keyword, LangChain simply parses it out and sends it to the user as the final output.
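
That final check can be as simple as the rough sketch below (a simplification of the output parsing, not LangChain’s exact code):

# `output` is the model's second response shown above.
if "Final Answer:" in output:
    final_answer = output.split("Final Answer:")[-1].strip()
    print(final_answer)  # -> The population of Canada as of 2023 be 39,566,248, Arg!
else:
    # No final answer yet: parse Action / Action Input and take another tool step.
    pass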


Author: robot learner