
Learning how to use prompts.

Let's create a prompt manually and build a LangChain chain with it.

Previous posts used only PromptTemplate.from_template().
Prompt templates are generally created via PromptTemplate.from_template() or ChatPromptTemplate.from_messages(). PromptTemplate is primarily used for general text-based models, while ChatPromptTemplate is mainly used for chat-based models.
In this post, I will explain both using the gemma-2-2b-it model registered on Hugging Face, leveraging the ChatBrainAI() class I previously created.

Understanding PromptTemplate

PromptTemplate is a template primarily used for general text-based models. There are two ways to create one. I prefer PromptTemplate.from_template(), so the walkthrough below uses only that approach; a short sketch of the second follows the list for reference.

  1. Using PromptTemplate.from_template()
  2. Creating a PromptTemplate object directly (see the sketch below)
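
For reference, here is a minimal sketch of the second approach, constructing the PromptTemplate object directly with the standard langchain_core constructor. The short template string is illustrative only, not the one used in the main example.

from langchain_core.prompts import PromptTemplate

# Same result as from_template(), but the input variables are
# declared explicitly instead of being inferred from the string.
prompt = PromptTemplate(
    template="Answer briefly: {question}",
    input_variables=["question"],
)

# format() fills the placeholder and returns the final prompt string.
print(prompt.format(question="What is the capital of the United States?"))

Below is the full from_template() example using the gemma-2 chat format.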
from ai.local.langchain_brainai import ChatBrainAI, stream_response, invoke_response
from langchain_core.prompts import PromptTemplate

# Load gemma-2-2b-it through the custom ChatBrainAI wrapper.
llm = ChatBrainAI('gemma-2-2b-it')

# gemma-2 chat format: each turn is wrapped in <start_of_turn>/<end_of_turn>.
template = """<start_of_turn>system
You are a friendly AI assistant. Your name is DS2Man. Please answer questions briefly.
<end_of_turn>
<start_of_turn>user
{question}
<end_of_turn>
<start_of_turn>model
"""

prompt = PromptTemplate.from_template(template)

# Compose the prompt and model into a chain with the LCEL pipe operator.
chain = prompt | llm

question = "What is the capital of the United States?"
response = chain.stream({"question": question})
stream_response(response)

# response = chain.invoke({"question": question})
# invoke_response(response, "<start_of_turn>model")
The capital of the United States is Washington, D.C.

Understanding ChatPromptTemplate

It is a template primarily used for chat-based conversational models, with roles categorized as system, user, and assistant. (In LangChain message tuples these roles are written as "system", "human", and "ai".) In particular, prompts based on conversational content are mainly implemented using ChatPromptTemplate.from_messages().

from ai.local.langchain_brainai import ChatBrainAI, stream_response, invoke_response
from langchain_core.prompts import ChatPromptTemplate

llm = ChatBrainAI('gemma-2-2b-it')

# Each message is a (role, content) tuple: "system" sets the assistant's
# behavior and "human" carries the user's input.
template = [
    ("system", "You are a friendly AI assistant. Your name is DS2Man. Please answer questions briefly."),
    ("human", "{question}\nAnswer:")
]

prompt = ChatPromptTemplate.from_messages(template)

chain = prompt | llm

question = "What is the capital of the United States?"
response = chain.stream({"question": question})
stream_response(response)

# response = chain.invoke({"question": question})
# invoke_response(response, "<start_of_turn>model")
The capital of the United States is Washington, D.C.
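
The example above is a single turn, but ChatPromptTemplate.from_messages() also handles multi-turn conversations. Here is a minimal sketch using the standard langchain_core MessagesPlaceholder; the chat_history variable name and the sample turns are illustrative, not part of the example above.

from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# MessagesPlaceholder injects prior conversation turns at runtime.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a friendly AI assistant. Your name is DS2Man."),
    MessagesPlaceholder("chat_history"),
    ("human", "{question}"),
])

messages = prompt.format_messages(
    chat_history=[
        HumanMessage(content="What is the capital of the United States?"),
        AIMessage(content="Washington, D.C."),
    ],
    question="What about France?",
)
print(messages)

The same prompt can be piped into the llm exactly as before (prompt | llm), with chat_history passed alongside question in the input dictionary.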