📘 Tutorial: LangChain Runnable Chains (Simple, Sequential, Parallel, Branching)
In this tutorial, you’ll learn how to:
- Build a simple chain: Prompt → Model → OutputParser
- Build a sequential chain: one LLM step feeds into another
- Run parallel chains using RunnableParallel (notes + quiz at the same time)
- Use branching logic (RunnableBranch) to react differently based on model output (positive/negative feedback)
- Visualize chains using chain.get_graph().print_ascii()
✅ This assumes your environment, requirements.txt, and .env are already set up as in the previous tutorial.
1. Simple Chain – Prompt → Model → OutputParser
✅ Code
```python
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()

prompt = PromptTemplate(
    template='Generate 5 interesting facts about {topic}',
    input_variables=['topic']
)

model = ChatOpenAI()
parser = StrOutputParser()

chain = prompt | model | parser

result = chain.invoke({'topic': 'cricket'})
print(result)

chain.get_graph().print_ascii()
```
🔍 What is happening?
- PromptTemplate → builds the prompt text: "Generate 5 interesting facts about cricket"
- ChatOpenAI → sends that prompt to the OpenAI chat model.
- StrOutputParser → takes the LLM response and returns it as a plain string (instead of an AIMessage).
The chain:
PromptTemplate -> ChatOpenAI -> StrOutputParser
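The same three-stage data flow can be sketched in plain Python. This is only a conceptual analogy, not the LangChain API: format_prompt, fake_model, and run_chain are made-up stand-ins so the example runs offline.

```python
# Conceptual sketch of prompt | model | parser (NOT the LangChain API).

def format_prompt(inputs: dict) -> str:
    # PromptTemplate step: fill {topic} into the template string
    return 'Generate 5 interesting facts about {topic}'.format(**inputs)

def fake_model(prompt: str) -> dict:
    # ChatOpenAI step: returns a message-like object (here a plain dict)
    return {'role': 'ai', 'content': f'(model answer for: {prompt})'}

def parse_output(message: dict) -> str:
    # StrOutputParser step: extract the plain string from the message
    return message['content']

def run_chain(inputs: dict) -> str:
    # Equivalent of chain.invoke({...}): each stage feeds the next
    return parse_output(fake_model(format_prompt(inputs)))

print(run_chain({'topic': 'cricket'}))
```

The | operator in LangChain does exactly this kind of left-to-right function composition, with each Runnable's output becoming the next Runnable's input.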
📌 When to use this pattern?
- Whenever you need one simple step:
  - Generate ideas
  - Write short content
  - Transform text (summaries, paraphrasing, etc.)
❓ Why is it useful?
- Keeps your pipeline clean and composable.
- You can later plug this chain into bigger chains.
- The | (pipe) operator makes the flow easy to read: input → model → output.
🧾 Example Output (will vary)
1. Cricket originated in England and has been played since the 16th century.
2. Test cricket is the longest format of the game, lasting up to five days.
3. Sachin Tendulkar holds the record for the most runs in international cricket.
4. The Cricket World Cup is held every four years and features One Day International matches.
5. The Indian Premier League (IPL) is one of the richest and most popular T20 leagues in the world.
The print_ascii() graph will look like:

```
+----------------+
| PromptTemplate |
+----------------+
        |
+----------------+
|   ChatOpenAI   |
+----------------+
        |
+-------------------+
|  StrOutputParser  |
+-------------------+
```
2. Sequential Chain – Report → Summary
First generate a detailed report, then summarize it into 5 bullet points.
✅ Code
```python
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()

prompt1 = PromptTemplate(
    template='Generate a detailed report on {topic}',
    input_variables=['topic']
)

prompt2 = PromptTemplate(
    template='Generate a 5 pointer summary from the following text \n {text}',
    input_variables=['text']
)

model = ChatOpenAI()
parser = StrOutputParser()

chain = prompt1 | model | parser | prompt2 | model | parser

result = chain.invoke({'topic': 'Unemployment in India'})
print(result)

chain.get_graph().print_ascii()
```
🔍 What is happening step-by-step?
- prompt1 + topic → full prompt: "Generate a detailed report on Unemployment in India"
- model → generates the report (long text).
- parser → turns the AIMessage into a string.
- prompt2 → takes that report string as {text} and asks: "Generate a 5 pointer summary from the following text ..."
- model → generates the 5-point summary.
- The final parser → returns the summary as a string.
So the chain is:

```
topic
 └─> prompt1
      └─> model
           └─> parser (report string)
                └─> prompt2
                     └─> model
                          └─> parser (summary string)
```
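The hand-off between the two stages can be sketched in plain Python (a conceptual stand-in, not the LangChain API; fake_llm replaces the real model + parser so the example runs offline):

```python
# Sketch of the report -> summary hand-off: stage 1's string output
# becomes the {text} variable of stage 2's prompt.

def fake_llm(prompt: str) -> str:
    # Stand-in for model | parser: returns a string for any prompt
    return f'[response to: {prompt[:40]}...]'

def sequential_chain(topic: str) -> str:
    # Stage 1: prompt1 | model | parser
    report = fake_llm(f'Generate a detailed report on {topic}')
    # Stage 2: the report string is spliced into prompt2 as {text}
    summary_prompt = (
        'Generate a 5 pointer summary from the following text \n' + report
    )
    return fake_llm(summary_prompt)

print(sequential_chain('Unemployment in India'))
```

LangChain does the same wiring automatically: because prompt2 has a single input variable, the string flowing out of the first parser is fed into it as {text}.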
📌 When to use this pattern?
- When you want multi-step logic:
  - First: detailed reasoning
  - Second: a short, user-friendly summary
- Very common in:
  - Document analysis
  - Multi-stage content generation
  - Reasoning, then simplification
❓ Why is it important?
- It shows how LangChain lets you compose LLM calls like Lego blocks.
- Each stage can refine or transform the previous output.
🧾 Example Output (summary, will vary)
1. Unemployment in India is influenced by population growth, skill mismatch, and structural issues in the economy.
2. Rural areas face seasonal and disguised unemployment, while urban regions struggle with educated unemployment.
3. Automation and lack of industrial growth contribute to limited job creation in formal sectors.
4. Government initiatives like MGNREGA, Skill India, and Make in India aim to address unemployment, but face implementation challenges.
5. Long-term solutions require improving education quality, promoting entrepreneurship, and supporting labor-intensive industries.
print_ascii() will show a longer chain graph from PromptTemplate through ChatOpenAI and StrOutputParser twice.
3. Parallel Chains – Notes + Quiz at the Same Time (RunnableParallel)
Generate notes and quiz questions from the same text in parallel, then merge them.
✅ Code
```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from dotenv import load_dotenv
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain.schema.runnable import RunnableParallel

load_dotenv()

model1 = ChatOpenAI()
model2 = ChatAnthropic(model_name='claude-3-7-sonnet-20250219')

prompt1 = PromptTemplate(
    template='Generate short and simple notes from the following text \n {text}',
    input_variables=['text']
)

prompt2 = PromptTemplate(
    template='Generate 5 short question answers from the following text \n {text}',
    input_variables=['text']
)

prompt3 = PromptTemplate(
    template='Merge the provided notes and quiz into a single document \n notes -> {notes} and quiz -> {quiz}',
    input_variables=['notes', 'quiz']
)

parser = StrOutputParser()

parallel_chain = RunnableParallel({
    'notes': prompt1 | model1 | parser,
    'quiz': prompt2 | model2 | parser
})

merge_chain = prompt3 | model1 | parser

chain = parallel_chain | merge_chain

text = """
Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.
The advantages of support vector machines are:
Effective in high dimensional spaces.
Still effective in cases where number of dimensions is greater than the number of samples.
Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.
Versatile: different Kernel functions can be specified for the decision function.
The disadvantages include:
Risk of overfitting with many features and few samples.
No direct probability estimates without extra computation.
"""

result = chain.invoke({'text': text})
print(result)

chain.get_graph().print_ascii()
```
🔍 What is happening?
- Parallel step with RunnableParallel:
  - 'notes' key: prompt1 + text → ChatOpenAI → notes string
  - 'quiz' key: prompt2 + text → ChatAnthropic → quiz string
  - The result of parallel_chain.invoke({...}) is:
    { "notes": "...generated notes...", "quiz": "...generated questions..." }
- Merge step:
  - prompt3 takes {notes} and {quiz} from this dict.
  - It asks model1 (OpenAI) to merge them into a single document.
So the full chain looks like:

```
            text
              |
   +-------------------+
   |  RunnableParallel |
   +-------------------+
      |             |
 notes chain    quiz chain
       \           /
        \         /
 merged by prompt3 -> model1 -> parser
```
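The fan-out/fan-in idea can be sketched with the standard library alone. This is an illustration of the pattern, not LangChain internals: notes_chain and quiz_chain are made-up stand-ins for the two LLM branches.

```python
# Sketch of RunnableParallel's fan-out/fan-in using a thread pool.
from concurrent.futures import ThreadPoolExecutor

def notes_chain(text: str) -> str:
    return f'NOTES({text})'   # stand-in for prompt1 | model1 | parser

def quiz_chain(text: str) -> str:
    return f'QUIZ({text})'    # stand-in for prompt2 | model2 | parser

def parallel_step(text: str) -> dict:
    # Run both branches on the SAME input and collect results into a dict,
    # mirroring RunnableParallel({'notes': ..., 'quiz': ...})
    with ThreadPoolExecutor() as pool:
        notes_future = pool.submit(notes_chain, text)
        quiz_future = pool.submit(quiz_chain, text)
        return {'notes': notes_future.result(), 'quiz': quiz_future.result()}

def merge_step(d: dict) -> str:
    # Stand-in for prompt3 | model1 | parser, consuming the dict's keys
    return f"merged: {d['notes']} + {d['quiz']}"

print(merge_step(parallel_step('SVM basics')))
```

The key point is the dict: whatever keys RunnableParallel produces become the input variables ({notes}, {quiz}) of the next prompt in the chain.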
📌 When to use RunnableParallel?
- When you want to generate multiple things from the same input:
  - Notes + quiz
  - Summary + title + hashtags
  - SEO description + keywords + social post
❓ Why is it powerful?
- You can:
  - Use different models for different tasks.
  - Run conceptually parallel steps in one high-level chain.
- It fits how real apps work: multiple outputs for the same user input.
🧾 Example Output (simplified, will vary)
Notes:
- SVMs are supervised learning methods used for classification, regression, and outlier detection.
- They work well in high-dimensional spaces.
- They use only a subset of training points (support vectors), making them memory-efficient.
- Different kernel functions can be used to adapt to various data types.
Quiz:
1. What are support vector machines primarily used for?
2. Why are SVMs effective in high-dimensional spaces?
3. What are support vectors in an SVM?
4. Name one advantage of using kernel functions in SVM.
5. What is one disadvantage of SVMs when the number of features is very large?
4. Branching Chains – Different Logic for Positive/Negative Feedback
Step 1: Classify sentiment (positive / negative).
Step 2: Based on the sentiment, pick a different response prompt (branch).
✅ Code
```python
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser, PydanticOutputParser
from langchain.schema.runnable import RunnableBranch, RunnableLambda
from pydantic import BaseModel, Field
from typing import Literal

load_dotenv()

model = ChatOpenAI()
parser = StrOutputParser()

class Feedback(BaseModel):
    sentiment: Literal['positive', 'negative'] = Field(
        description='Give the sentiment of the feedback'
    )

parser2 = PydanticOutputParser(pydantic_object=Feedback)

prompt1 = PromptTemplate(
    template=(
        'Classify the sentiment of the following feedback text into positive or negative\n'
        '{feedback}\n'
        '{format_instruction}'
    ),
    input_variables=['feedback'],
    partial_variables={'format_instruction': parser2.get_format_instructions()}
)

classifier_chain = prompt1 | model | parser2

prompt2 = PromptTemplate(
    template='Write an appropriate response to this positive feedback:\n{feedback}',
    input_variables=['feedback']
)

prompt3 = PromptTemplate(
    template='Write an appropriate response to this negative feedback:\n{feedback}',
    input_variables=['feedback']
)

branch_chain = RunnableBranch(
    (lambda x: x.sentiment == 'positive', prompt2 | model | parser),
    (lambda x: x.sentiment == 'negative', prompt3 | model | parser),
    RunnableLambda(lambda x: "Could not determine sentiment.")
)

chain = classifier_chain | branch_chain

print(chain.invoke({'feedback': 'This is a beautiful phone'}))

chain.get_graph().print_ascii()
```
🔍 What is happening?
4.1. PydanticOutputParser + Feedback model

```python
class Feedback(BaseModel):
    sentiment: Literal['positive', 'negative'] = Field(...)
```

- This defines the expected output structure: a JSON-like object with one field, sentiment.
- parser2.get_format_instructions() produces instructions like: “Respond in JSON with fields: sentiment: 'positive' or 'negative' ...”
So prompt1 tells the model how to output structured data.
classifier_chain = prompt1 | model | parser2:
- prompt1 → classification instructions + format instructions
- model → returns some text
- parser2 → parses that text into a Feedback object
The result of classifier_chain.invoke(...) is something like:
Feedback(sentiment='positive')
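The essence of what the parser does can be sketched with the standard library alone (an assumption-laden simplification: the real PydanticOutputParser also generates the format instructions and validates through pydantic, which this sketch does by hand):

```python
# Minimal sketch of structured-output parsing: parse the model's JSON
# reply and validate the `sentiment` field against the allowed values.
import json

ALLOWED_SENTIMENTS = {'positive', 'negative'}

def parse_feedback(llm_text: str) -> dict:
    # Step 1: the model was instructed to reply in JSON, so parse it
    data = json.loads(llm_text)
    # Step 2: enforce the Literal['positive', 'negative'] constraint
    if data.get('sentiment') not in ALLOWED_SENTIMENTS:
        raise ValueError(f"unexpected sentiment: {data.get('sentiment')!r}")
    return data

# A well-behaved model reply, matching the format instructions:
result = parse_feedback('{"sentiment": "positive"}')
print(result['sentiment'])  # positive
```

This is why structured parsing matters for branching: the next step can safely read x.sentiment instead of scraping free-form text.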
4.2. Branching with RunnableBranch

```python
branch_chain = RunnableBranch(
    (lambda x: x.sentiment == 'positive', prompt2 | model | parser),
    (lambda x: x.sentiment == 'negative', prompt3 | model | parser),
    RunnableLambda(lambda x: "Could not determine sentiment.")
)
```

- It checks the conditions in order:
  - If x.sentiment == 'positive' → run the positive-feedback chain.
  - Else if x.sentiment == 'negative' → run the negative-feedback chain.
  - Else → the fallback RunnableLambda returns a static string.
- Combined: chain = classifier_chain | branch_chain
So the full flow is:
- Classify the feedback → Feedback(sentiment='positive' | 'negative')
- Branch to the appropriate response template and model.
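The first-match-wins routing can be sketched in plain Python (a conceptual stand-in, not the LangChain API; the local Feedback class and the *_chain functions are made-up substitutes for the pydantic model and the LLM branches):

```python
# Sketch of RunnableBranch: (condition, runnable) pairs checked in order,
# with a default when no condition matches.

class Feedback:
    # Stand-in for the pydantic Feedback model
    def __init__(self, sentiment: str):
        self.sentiment = sentiment

def positive_chain(x) -> str:
    return 'thank-you reply'   # stand-in for prompt2 | model | parser

def negative_chain(x) -> str:
    return 'apology reply'     # stand-in for prompt3 | model | parser

def branch(x) -> str:
    branches = [
        (lambda x: x.sentiment == 'positive', positive_chain),
        (lambda x: x.sentiment == 'negative', negative_chain),
    ]
    for condition, runnable in branches:
        if condition(x):        # first matching condition wins
            return runnable(x)
    return 'Could not determine sentiment.'   # default branch

print(branch(Feedback('positive')))  # thank-you reply
```

RunnableBranch behaves the same way: it evaluates each condition against the incoming value and invokes the first matching runnable, falling back to the last (default) runnable.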
📌 When to use this pattern?
- When your workflow depends on model output:
  - Sentiment → positive/negative path
  - Category → “billing” vs “technical support”
  - Language → English response vs Bangla response
❓ Why is it powerful?
- You can implement conditional logic inside the chain itself.
- It combines LLM decisions with deterministic control flow.
🧾 Example Output
Input:
chain.invoke({'feedback': 'This is a beautiful phone'})
Possible output:
Thank you so much for your kind words! We’re glad to hear that you find the phone beautiful and enjoyable to use. If you have any questions or need help exploring more features, we’re always here to assist.
If the feedback were:
"This phone keeps hanging and the battery drains too fast."
It would likely go through the negative branch and produce an apology + support-style response.
5. Summary – What You’ve Learned in This Tutorial
🧩 Patterns Covered
- Simple chain – PromptTemplate | ChatOpenAI | StrOutputParser
  - What: Basic “prompt → LLM → string” pipeline
  - When: Any one-step generation or transformation
  - Why: Building block for everything else
- Sequential chain – multi-step reasoning (report → summary)
  - What: One LLM’s output feeds into another prompt
  - When: You want staged processing (detailed → simplified, raw → cleaned)
  - Why: Reflects real-world pipelines
- Parallel chain (RunnableParallel) – notes & quiz together
  - What: Multiple chains executed conceptually in parallel, returning a dict
  - When: You need multiple outputs from one input (notes, quiz, tags, etc.)
  - Why: Efficient composition and cleaner code
- Branching chain (RunnableBranch) – logic based on sentiment
  - What: Conditional routing based on model output (via the Pydantic parser)
  - When: Different flows for positive/negative, categories, etc.
  - Why: Mixes LLM intelligence with explicit control flow
- PydanticOutputParser – structured, typed model output
  - What: Parses the LLM response into a typed object (Feedback)
  - When: You want predictable, machine-readable output
  - Why: Makes downstream logic safer and cleaner
- Graph visualization (get_graph().print_ascii())
  - What: ASCII diagram of your chain
  - When: Debugging or teaching how a chain is wired
  - Why: Helps beginners understand the execution flow