Learning how to use prompt files with ChatOllama.

Let's load prompt files and build a LangChain pipeline with ChatOllama.

In the previous post, I configured LangChain with ChatOllama() using various general-purpose and conversational prompts. Since prompts can be reused across different roles later on, it is better to manage them as prompt files. In this post, I share examples using the built-in load_prompt() function and a custom load_chat_prompt() function I wrote.

Directory

└── ai 
 ├── local
 │ └── langchain_brainai.py 
 │  ├── ChatBrainAI(AIModelBase)
 │  ├── stream_response()
 │  ├── invoke_response()
 │  └── load_chat_prompt()
 └── aibase.py
   └── AIModelBase(metaclass=ABCMeta)
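
For context, stream_response() and invoke_response() are the output helpers used throughout the examples below. Their actual implementations live in ai/local/langchain_brainai.py (covered in the previous post); the following is only a rough sketch of what they might look like, assuming LangChain message objects that expose a .content attribute:

# Hypothetical sketch of the helpers in ai/local/langchain_brainai.py.
# The real implementations may differ.
def stream_response(response):
    # Print each streamed chunk as it arrives, without trailing newlines.
    for chunk in response:
        print(chunk.content, end="", flush=True)
    print()

def invoke_response(response, prefix):
    # Print a fully materialized response in one shot.
    print(prefix + response.content)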

Applying PromptTemplate Files

  • Testing models: gemma2:latest, llama3.2-vision:latest, llava:7b, gemma:7b, deepseek-r1:14b
from langchain_ollama import ChatOllama
from langchain_core.prompts import load_prompt
from ai.local.langchain_brainai import stream_response, invoke_response

selected_model = "gemma2:latest"
selected_model_category = "OpenAI&Ollama"
llm = ChatOllama(model=selected_model)

prompt = load_prompt(f"prompts/{selected_model_category}/basic.yaml")
# [basic.yaml]
# _type: "prompt"
# template: |
#   #System:
#   You are a friendly AI assistant. Your name is DS2Man. Please answer questions briefly.
#
#   #Question: 
#   {question}
#  
#   #Answer:
# input_variables: ["question"]

chain = prompt | llm

question = "What is the capital of the United States?"
response = chain.stream({"question": question})
stream_response(response)

# response = chain.invoke({"question":question})
# invoke_response(response, "")
The capital of the United States is Washington, D.C.
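
By the way, the basic.yaml file does not have to be written by hand. LangChain prompt templates can serialize themselves, so here is a minimal sketch of generating the same file from code (assuming the prompts/OpenAI&Ollama/ directory already exists):

from langchain_core.prompts import PromptTemplate

# Build the same prompt in code and serialize it to YAML.
prompt = PromptTemplate.from_template(
    "#System:\n"
    "You are a friendly AI assistant. Your name is DS2Man. Please answer questions briefly.\n\n"
    "#Question: \n{question}\n\n"
    "#Answer:"
)
prompt.save("prompts/OpenAI&Ollama/basic.yaml")

The save() method infers the output format from the file extension, so the same call with a .json path would produce JSON instead.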

Applying ChatPromptTemplate Files

  • Testing models: gemma2:latest, llama3.2-vision:latest, llava:7b, gemma:7b, deepseek-r1:14b
import json

from langchain_core.prompts import ChatPromptTemplate

def load_chat_prompt(file_path, encoding="utf-8"):
    # Read a JSON list of {"role": ..., "message": ...} entries and
    # convert them into (role, message) tuples for ChatPromptTemplate.
    with open(file_path, "r", encoding=encoding) as f:
        prompt_messages = [(item["role"], item["message"]) for item in json.load(f)]

    prompt = ChatPromptTemplate.from_messages(prompt_messages)

    return prompt
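
Note that load_prompt() expects LangChain's own serialization format, so this small custom loader handles the simpler role/message JSON schema used here. As a quick sanity check (a minimal sketch, assuming the basic_chat.json file shown below), you can render the loaded template into messages before wiring it into a chain:

# Render the loaded chat prompt with a sample input and inspect the messages.
prompt = load_chat_prompt("prompts/OpenAI&Ollama/basic_chat.json")
for message in prompt.format_messages(question="What is the capital of the United States?"):
    print(f"{message.type}: {message.content}")
# system: You are a friendly AI assistant. Your name is DS2Man. Please answer questions briefly.
# human: What is the capital of the United States?

With the loader in place, the full pipeline mirrors the PromptTemplate example: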
from langchain_ollama import ChatOllama
from ai.local.langchain_brainai import stream_response, invoke_response, load_chat_prompt

selected_model = "gemma2:latest"
selected_model_category = "OpenAI&Ollama"
llm = ChatOllama(model=selected_model)

prompt = load_chat_prompt(f"prompts/{selected_model_category}/basic_chat.json")
# [basic_chat.json]
# [
#   {"role": "system", "message": "You are a friendly AI assistant. Your name is DS2Man. Please answer questions briefly."},
#   {"role": "human", "message": "{question}"}
# ]

chain = prompt | llm

question = "What is the capital of the United States?"
response = chain.stream({"question": question})
stream_response(response)

# response = chain.invoke({"question": question})
# invoke_response(response, "")
The capital of the United States is Washington, D.C.
This post is licensed under CC BY 4.0 by the author.