🧠 Tutorial: LangChain Prompt Templates, Chat History & a Research Summary App
0. Prerequisites
You should already have:
- Python 3.10+
- langchain, langchain-openai, python-dotenv, streamlit installed
- A .env file with:
OPENAI_API_KEY=your_openai_api_key_here
1. Chat History Basics (chat_history.txt)
📄 File: chat_history.txt
HumanMessage(content="I want to request a refund for my order #12345.")
AIMessage(content="Your refund request for order #12345 has been initiated. It will be processed in 3-5 business days.")
✅ What is this?
- A small log of a past conversation between a user and the AI.
- It contains:
  - 1 user message
  - 1 AI response
📌 When would you use this?
- When you want to load previous chat history into a new prompt so the AI has context (e.g., a refund discussion).
❓ Why is it important?
- LLMs are stateless by default.
- To make them “remember”, you must pass the past messages again in each call (chat history).
📌 Note: Right now this file is just plain text, not actual HumanMessage/AIMessage objects. Later we’ll see how to properly use it with MessagesPlaceholder.
2. ChatPromptTemplate – Simple Prompt with Variables
🧩 Code
from langchain_core.prompts import ChatPromptTemplate
chat_template = ChatPromptTemplate.from_messages([
    ('system', 'You are a helpful {domain} expert'),
    ('human', 'Explain in simple terms, what is {topic}')
])
prompt = chat_template.invoke({'domain': 'cricket', 'topic': 'Dusra'})
print(prompt)
✅ What is this?
- ChatPromptTemplate lets you define a template for multi-role prompts (system, human, etc.).
- You can parameterize the prompt with variables like {domain} and {topic}.
📌 When to use it?
- When you want to reuse the same pattern with different values.
- Example: domain = “math / cooking / cricket”, topic = “integration / biryani / doosra”.
❓ Why useful?
- Avoids hardcoding strings.
- Makes prompts configurable and clean.
- Great for reusable tools, apps, and teaching examples.
📌 Example Output (what print(prompt) might show)
It prints a list of messages, something like:
[SystemMessage(content='You are a helpful cricket expert'),
HumanMessage(content='Explain in simple terms, what is Dusra')]
You can pass this prompt directly to a ChatOpenAI model:
from langchain_openai import ChatOpenAI
model = ChatOpenAI()
result = model.invoke(prompt)
print(result.content)
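Under the hood, invoke simply substitutes your variables into each message template, much like Python's built-in str.format. A conceptual sketch (not LangChain's actual implementation):

```python
# Conceptual sketch: ChatPromptTemplate fills f-string-style
# placeholders in each (role, template) pair.
messages = [
    ('system', 'You are a helpful {domain} expert'),
    ('human', 'Explain in simple terms, what is {topic}'),
]
values = {'domain': 'cricket', 'topic': 'Dusra'}

# str.format ignores extra keyword arguments, so every template can be
# filled from the same values dict.
filled = [(role, text.format(**values)) for role, text in messages]
# filled[0] → ('system', 'You are a helpful cricket expert')
```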
3. Building a Manual Chat Loop with chat_history (Memory in Python List)
🧩 Code
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
from dotenv import load_dotenv
load_dotenv()
model = ChatOpenAI()
chat_history = [
SystemMessage(content='You are a helpful AI assistant')
]
while True:
    user_input = input('You: ')
    if user_input == 'exit':
        break
    chat_history.append(HumanMessage(content=user_input))
    result = model.invoke(chat_history)
    chat_history.append(AIMessage(content=result.content))
    print("AI: ", result.content)

print(chat_history)
✅ What is this?
- A simple chat app in the terminal using LangChain.
- chat_history is a Python list that stores:
  - SystemMessage → role / behavior of the AI
  - HumanMessage → user inputs
  - AIMessage → model responses
📌 When to use it?
- When you want a multi-turn chat where each new response depends on previous messages.
❓ Why this pattern?
- LangChain's ChatOpenAI expects a list of messages:
  [SystemMessage(...), HumanMessage(...), AIMessage(...), ...]
- By appending messages over time, you create conversation memory.
▶️ Example Run
You: Hi
AI: Hello! How can I assist you today?
You: Tell me a joke about programmers
AI: Why do programmers prefer dark mode? Because light attracts bugs!
You: exit
At the end, print(chat_history) will show all messages (system + all user + all AI).
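One caveat with this loop: chat_history grows without bound, so every call sends a longer and longer prompt. A minimal sketch (my addition, not part of the original code) that keeps the system message plus only the most recent messages:

```python
def trim_history(history, max_messages=10):
    # Keep the first (system) message and the last `max_messages`
    # entries; everything in between is dropped.
    if len(history) <= max_messages + 1:
        return history
    return [history[0]] + history[-max_messages:]
```

Calling trim_history(chat_history) just before model.invoke(...) caps the prompt size while preserving the system instructions and recent context.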
4. Using MessagesPlaceholder to Inject Chat History into a Prompt
🧩 Code
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
# chat template
chat_template = ChatPromptTemplate.from_messages([
    ('system', 'You are a helpful customer support agent'),
    MessagesPlaceholder(variable_name='chat_history'),
    ('human', '{query}')
])
chat_history = []
# load chat history
with open('chat_history.txt') as f:
    chat_history.extend(f.readlines())
print(chat_history)
# create prompt
prompt = chat_template.invoke({
'chat_history': chat_history,
'query': 'Where is my refund'
})
print(prompt)
✅ What is MessagesPlaceholder?
- A special placeholder used in chat templates that expects a list of messages.
- Example: previous HumanMessage + AIMessage objects.
📌 When to use it?
- When you want to build prompts like:
  - system: “You are a support agent”
  - history: all previous messages
  - human: current query
❗ Important Note (Extra Info)
Right now, you’re doing:
with open('chat_history.txt') as f:
    chat_history.extend(f.readlines())
This gives you a list of strings (lines), not HumanMessage/AIMessage objects.
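If you do want to keep the plain-text file, one way to turn each line back into a (role, content) pair is a small parser like this. It assumes every line has exactly the HumanMessage(content="...") / AIMessage(content="...") shape shown in chat_history.txt, which is an assumption about the file, not a LangChain feature:

```python
import re

# Matches lines like: HumanMessage(content="...") or AIMessage(content="...")
LINE_RE = re.compile(r'^(HumanMessage|AIMessage)\(content="(.*)"\)$')

def parse_history_line(line):
    match = LINE_RE.match(line.strip())
    if not match:
        return None  # skip lines that don't fit the expected shape
    role, content = match.groups()
    return (role, content)

# Each pair can then be rebuilt as a real message object, e.g.
#   HumanMessage(content=content) if role == 'HumanMessage'
#   else AIMessage(content=content)
```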
📌 Better Approach (Conceptually)
Store your history in a more structured way (e.g., JSON) and reconstruct as:
from langchain_core.messages import HumanMessage, AIMessage
chat_history = [
HumanMessage(content="I want to request a refund for my order #12345."),
AIMessage(content="Your refund request for order #12345 has been initiated. It will be processed in 3-5 business days.")
]
Then:
prompt = chat_template.invoke({
'chat_history': chat_history,
'query': 'Where is my refund'
})
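A stdlib-only sketch of that structured-storage idea: keep role/content pairs in JSON and rebuild message objects on load. The 'human'/'ai' role names here are my own convention, not a LangChain format:

```python
import json

# History stored as plain dicts — easy to write to disk and version.
history = [
    {'role': 'human', 'content': 'I want to request a refund for my order #12345.'},
    {'role': 'ai', 'content': 'Your refund request for order #12345 has been initiated.'},
]

# json.dumps/loads round-trips the structure exactly.
restored = json.loads(json.dumps(history))

# On load, map each dict back to a message object, e.g.
# HumanMessage(content=...) for 'human' and AIMessage(content=...) for 'ai'.
```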
❓ Why is this powerful?
- You can mix:
  - System instructions
  - Past chat
  - New question
- This is exactly how you build support bots that remember context.
📌 Example Printed Prompt
It will look like a list of messages, something like:
[SystemMessage(content='You are a helpful customer support agent'),
HumanMessage(content='I want to request a refund for my order #12345.'),
AIMessage(content='Your refund request for order #12345 has been initiated. It will be processed in 3-5 business days.'),
HumanMessage(content='Where is my refund')]
5. Single-Turn Chat + Storing the Response as History
🧩 Code
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
load_dotenv()
model = ChatOpenAI()
messages = [
SystemMessage(content='You are a helpful assistant'),
HumanMessage(content='Tell me about LangChain')
]
result = model.invoke(messages)
messages.append(AIMessage(content=result.content))
print(messages)
✅ What is this?
- One-shot chat:
  - System → role
  - Human → question
- Then you append the AI’s reply to messages.
📌 When to use?
- When you want to:
  - Run one interaction
  - Keep the history to reuse later (e.g., pass messages again with a follow-up question)
❓ Why?
- This is how you build your own conversation history structure.
📌 Example Output
result.content might be:
LangChain is a framework for building applications powered by large language models (LLMs). It helps you connect LLMs with prompts, tools, memory, and external data sources like databases and APIs.
Then messages becomes:
[
SystemMessage(content='You are a helpful assistant'),
HumanMessage(content='Tell me about LangChain'),
AIMessage(content='LangChain is a framework ...')
]
6. PromptTemplate for Research Paper Summary + Saving to JSON
🧩 Code
from langchain_core.prompts import PromptTemplate
# template
template = PromptTemplate(
template="""
Please summarize the research paper titled "{paper_input}" with the following specifications:
Explanation Style: {style_input}
Explanation Length: {length_input}
1. Mathematical Details:
- Include relevant mathematical equations if present in the paper.
- Explain the mathematical concepts using simple, intuitive code snippets where applicable.
2. Analogies:
- Use relatable analogies to simplify complex ideas.
If certain information is not available in the paper, respond with: "Insufficient information available" instead of guessing.
Ensure the summary is clear, accurate, and aligned with the provided style and length.
""",
input_variables=['paper_input', 'style_input','length_input'],
validate_template=True
)
template.save('template.json')
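validate_template=True makes LangChain check that the declared input_variables actually match the {placeholders} in the template text. A stdlib sketch of the same check, using string.Formatter (the same f-string placeholder syntax PromptTemplate uses):

```python
import string

def placeholders(template_text):
    # string.Formatter().parse yields (literal, field_name, spec, conversion)
    # tuples; field_name is the {placeholder} name, or None for plain text.
    return {name for _, name, _, _ in string.Formatter().parse(template_text) if name}

declared = {'paper_input', 'style_input', 'length_input'}
found = placeholders('Summarize "{paper_input}" in {style_input} style, {length_input} long.')
# validate_template=True would raise at construction time if declared != found.
```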
✅ What is this?
- PromptTemplate is for single-text prompts, not multi-message.
- You define:
  - template → prompt text with placeholders
  - input_variables → required variables
- Then you save it to template.json so it can be reused elsewhere (e.g., in the Streamlit app).
📌 When to use it?
- When your prompt is structured and repeated with different input values.
- Example: the same summary format for different papers.
❓ Why save to JSON?
- You can:
  - Load it in another script
  - Share it with other team members
  - Version-control the prompt separately from the code
🧾 Generated template.json
The saved file looks like:
{
"name": null,
"input_variables": [
"length_input",
"paper_input",
"style_input"
],
...
"template": "\nPlease summarize the research paper titled \"{paper_input}\" with the following specifications:\nExplanation Style: {style_input} \nExplanation Length: {length_input} \n..."
}
That’s exactly the serialized version of your PromptTemplate.
7. PromptTemplate + ChatOpenAI – Simple Example (Greet in 5 Languages)
🧩 Code
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
load_dotenv()
model = ChatOpenAI()
# detailed way
template2 = PromptTemplate(
template='Greet this person in 5 languages. The name of the person is {name}',
input_variables=['name']
)
# fill the values of the placeholders
prompt = template2.invoke({'name':'nitish'})
result = model.invoke(prompt)
print(result.content)
✅ What is happening?
- Define a template with {name}.
- template2.invoke({'name': 'nitish'}) → returns a PromptValue containing the filled-in prompt text.
- Pass that to model.invoke(...).
- Print the model’s answer.
📌 Example Output
Hello, Nitish! (English)
Hola, Nitish! (Spanish)
Bonjour, Nitish! (French)
Namaste, Nitish! (Hindi)
Ciao, Nitish! (Italian)
📌 When/Why?
- Great pattern for small utility prompts:
  - Greeting generator
  - Email templates
  - Message generators
8. Building a Streamlit Research Summary Tool with a Saved Prompt
🧩 Code
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
import streamlit as st
from langchain_core.prompts import PromptTemplate, load_prompt
load_dotenv()
model = ChatOpenAI()
st.header('Research Tool')
paper_input = st.selectbox(
"Select Research Paper Name",
[
"Attention Is All You Need",
"BERT: Pre-training of Deep Bidirectional Transformers",
"GPT-3: Language Models are Few-Shot Learners",
"Diffusion Models Beat GANs on Image Synthesis"
]
)
style_input = st.selectbox(
"Select Explanation Style",
["Beginner-Friendly", "Technical", "Code-Oriented", "Mathematical"]
)
length_input = st.selectbox(
"Select Explanation Length",
["Short (1-2 paragraphs)", "Medium (3-5 paragraphs)", "Long (detailed explanation)"]
)
template = load_prompt('template.json')
if st.button('Summarize'):
    chain = template | model
    result = chain.invoke({
        'paper_input': paper_input,
        'style_input': style_input,
        'length_input': length_input
    })
    st.write(result.content)
✅ What is this?
- A Streamlit web app that:
  - Lets the user pick:
    - Paper name
    - Explanation style
    - Explanation length
  - Loads your saved PromptTemplate from template.json
  - Pipes it into the model with chain = template | model
  - Shows the summary
📌 When to use this pattern?
- When you want to quickly build:
  - Internal demo tools
  - Prototypes
  - Simple end-user-facing LLM apps
❓ Why is it nice?
- Separation of concerns:
  - The prompt lives in template.json
  - The app logic lives in the .py file
- Super easy to modify just the template without touching app code.
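The chain = template | model line uses LCEL's pipe operator: the filled prompt flows from the template into the model. A toy stdlib sketch of the idea (not the real Runnable implementation; fill_prompt and fake_model are made-up stand-ins):

```python
class Step:
    """Toy stand-in for a Runnable: wraps a function and supports `|`."""
    def __init__(self, func):
        self.func = func
    def __or__(self, other):
        # Compose: run self first, feed its output into `other`.
        return Step(lambda x: other.func(self.func(x)))
    def invoke(self, x):
        return self.func(x)

fill_prompt = Step(lambda d: f"Summarize {d['paper_input']}")  # fills the prompt
fake_model = Step(lambda prompt: f"[summary of: {prompt}]")    # pretend LLM
chain = fill_prompt | fake_model
```

chain.invoke({'paper_input': 'Attention Is All You Need'}) then runs both steps in order, which is exactly the shape of the template-into-model pipeline in the app above.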
▶️ Example Usage
Run:
streamlit run your_file_name.py
Select:
- Paper: “Attention Is All You Need”
- Style: “Beginner-Friendly”
- Length: “Short (1-2 paragraphs)”
💡 You’ll see something like:
"Attention Is All You Need" introduces the Transformer architecture, a neural network model that relies entirely on attention mechanisms instead of recurrent networks. In simple terms, it allows the model to focus on different parts of a sentence at once, making it faster and more efficient at understanding long sentences.
Mathematically, the key idea is the self-attention mechanism, which uses queries, keys, and values to compute how much attention each word should pay to every other word. Think of it like a student reading a paragraph and constantly checking back to earlier words to understand the full meaning.
9. Simple Poem Example with GPT-4
🧩 Code
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
load_dotenv()
model = ChatOpenAI(model='gpt-4', temperature=1.5)
result = model.invoke("Write a 5 line poem on cricket")
print(result.content)
🧠 What / When / Why
- What: A simple, single call to GPT-4 with a creative task.
- When: For quick experiments, testing temperature, or demos.
- Why: Great for beginners to see how easy it is to get a response.
📌 Example Output
Under the sun the red ball flies high,
Willow sings songs to the cheering sky,
Footmarks and fielders dance on the ground,
Crowds hold their breath at each crackling sound,
In cricket’s rhythm, hearts and hopes are bound.
10. Summary: What You Learned (What / When / Why View)
🧩 Concepts
- ChatPromptTemplate
  - What: Template for multi-message prompts (system + human).
  - When: When you need reusable structured prompts.
  - Why: Cleaner, parameterized prompts.
- PromptTemplate
  - What: Single-text prompt with variables.
  - When: For one big instruction text (e.g., a research summary).
  - Why: Encapsulates instructions, easy to reuse & save.
- SystemMessage / HumanMessage / AIMessage
  - What: Roles used by ChatOpenAI.
  - When: Multi-turn chat or when you want explicit roles.
  - Why: More control over conversation behavior.
- MessagesPlaceholder
  - What: A placeholder in ChatPromptTemplate for a list of messages.
  - When: You need to inject past chat history into a new prompt.
  - Why: Essential for support/chatbots with memory.
- Saving PromptTemplate to JSON (template.json)
  - What: Serialized prompt definition.
  - When: Sharing, versioning, loading in apps (e.g., Streamlit).
  - Why: Separates prompt logic from code.
- Streamlit + LangChain
  - What: Simple UI around your LLM logic.
  - When: Need a demo, internal tool, or quick prototype.
  - Why: Turn scripts into a usable tool with minimal effort.