Learning how to use prompts with ChatOllama.
Let's create a prompt manually and build a LangChain chain with ChatOllama.
In the previous post, I used ChatBrainAI() to apply various general and conversational prompts when configuring LangChain. Going forward, I plan to build all projects using ChatOllama().
In this post, I apply the same general and conversational prompts to configure LangChain with ChatOllama(). Both kinds of prompt seem to work in the same format across most LLM models supported by Ollama, and thanks to Ollama's optimization for its supported models, inference speed is notably fast.
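Because the prompt format is model-agnostic, you can point the very same chain at different Ollama models just by changing the model name. The short sketch below is my own illustration (not part of the walkthrough that follows) and assumes the listed models have already been pulled with ollama pull:

from langchain_ollama import ChatOllama
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "You are a friendly AI assistant. Please answer questions briefly.\n{question}"
)

# The same prompt and chain format works unchanged across Ollama models.
for model_name in ["gemma2:latest", "llava:7b", "deepseek-r1:14b"]:
    chain = prompt | ChatOllama(model=model_name)
    answer = chain.invoke({"question": "What is the capital of the United States?"})
    print(f"{model_name}: {answer.content}")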
Applying PromptTemplate
- Testing models: gemma2:latest, llama3.2-vision:latest, llava:7b, gemma:7b, deepseek-r1:14b
from langchain_ollama import ChatOllama
from ai.local.langchain_brainai import stream_response, invoke_response
from langchain_core.prompts import PromptTemplate

selected_model = "gemma2:latest"
llm = ChatOllama(model=selected_model)

# A general prompt with #System/#Question/#Answer sections.
template = """
#System:
You are a friendly AI assistant. Your name is DS2Man. Please answer questions briefly.
#Question:
{question}
#Answer:
"""

# The format below also works.
# template = """You are a friendly AI assistant. Your name is DS2Man. Please answer questions briefly.
# {question}
# """

prompt = PromptTemplate.from_template(template)

chain = prompt | llm

question = "What is the capital of the United States?"
response = chain.stream({"question": question})
stream_response(response)

# response = chain.invoke({"question": question})
# invoke_response(response, "")
The capital of the United States is Washington, D.C.
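The stream_response and invoke_response helpers come from my own ai.local.langchain_brainai module introduced in the previous post. If you don't have that module, minimal stand-ins might look like the sketch below; treating the second argument of invoke_response as an output prefix is my assumption:

# A minimal sketch of the helpers; the original module may differ.
def stream_response(response):
    # Print each chunk yielded by chain.stream() as it arrives.
    for chunk in response:
        print(chunk.content, end="", flush=True)
    print()

def invoke_response(response, prefix=""):
    # Print the full AIMessage returned by chain.invoke().
    print(f"{prefix}{response.content}")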
Applying ChatPromptTemplate
- Testing models: gemma2:latest, llama3.2-vision:latest, llava:7b, gemma:7b, deepseek-r1:14b
from langchain_ollama import ChatOllama
from ai.local.langchain_brainai import stream_response, invoke_response
from langchain_core.prompts import ChatPromptTemplate

selected_model = "gemma2:latest"
llm = ChatOllama(model=selected_model)

# A conversational prompt: a list of (role, content) message tuples.
template = [
    ("system", "You are a friendly AI assistant. Your name is DS2Man. Please answer questions briefly."),
    ("human", "{question}")
]

prompt = ChatPromptTemplate.from_messages(template)

chain = prompt | llm

question = "What is the capital of the United States?"
response = chain.stream({"question": question})
stream_response(response)

# response = chain.invoke({"question": question})
# invoke_response(response, "")
The capital of the United States is Washington, D.C.
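ChatPromptTemplate also makes it straightforward to carry earlier turns of a conversation. The sketch below is my own extension of the example above, using LangChain's MessagesPlaceholder to inject prior messages into the prompt:

from langchain_ollama import ChatOllama
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

llm = ChatOllama(model="gemma2:latest")

# Prior turns are injected where the placeholder sits in the message list.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a friendly AI assistant. Your name is DS2Man. Please answer questions briefly."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{question}"),
])
chain = prompt | llm

history = [
    ("human", "What is the capital of the United States?"),
    ("ai", "The capital of the United States is Washington, D.C."),
]
response = chain.invoke({"history": history, "question": "What river runs through it?"})
print(response.content)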
This post is licensed under CC BY 4.0 by the author.