📘 Tutorial: Advanced LangChain Runnables – Sequence, Parallel, Branch, Passthrough & Lambda
In this tutorial you’ll learn how to:

- Use `RunnableSequence` for multi-step flows
- Use `RunnableParallel` to get multiple outputs from one run
- Use `RunnablePassthrough` to forward inputs inside a chain
- Use `RunnableBranch` to add if/else logic
- Use `RunnableLambda` to plug in your own Python functions

We’ll use ChatOpenAI everywhere, assuming you already set up:

- `OPENAI_API_KEY` in `.env`
- `pip install langchain langchain-openai python-dotenv`
1. Sequential Chain with RunnableSequence – Joke → Explanation
Let’s start with the simplest: do one thing after another.
💻 Code

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableSequence
from dotenv import load_dotenv

load_dotenv()

prompt1 = PromptTemplate(
    template='Write a joke about {topic}',
    input_variables=['topic']
)

prompt2 = PromptTemplate(
    template='Explain the following joke - {text}',
    input_variables=['text']
)

model = ChatOpenAI()
parser = StrOutputParser()

chain = RunnableSequence(
    prompt1,  # build joke prompt
    model,    # generate joke
    parser,   # get joke as string
    prompt2,  # build explanation prompt from text
    model,    # generate explanation
    parser    # get explanation as string
)

print(chain.invoke({'topic': 'AI'}))
```
🔍 What is happening?

- `prompt1` + `{'topic': 'AI'}` → `"Write a joke about AI"`.
- `model` → generates the joke.
- `parser` → converts the `AIMessage` into a plain string.
- `prompt2` takes that string as `{text}` → `"Explain the following joke - <joke>"`.
- `model` → explains the joke.
- The last `parser` → returns the explanation string.

So the flow is:

input dict → joke prompt → LLM → joke text → explain prompt → LLM → explanation text
📌 When to use this?

- When you need multi-step processing:
  - Generate → then explain
  - Analyze → then summarize
  - Extract → then transform

❓ Why is this useful?

- You build structured flows instead of single prompts.
- Easy to debug and extend (e.g., add more steps later).
🧾 Example output (will vary)
This joke plays on the common fear that AI will take over human jobs. By making AI itself the one "applying for a job," it flips the perspective and makes the situation humorous instead of scary.
2. Conditional Summarization with RunnableBranch + RunnableSequence
If the report is too long, summarize it.
If it’s short, just return it as is.
💻 Code

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import (
    RunnableSequence,
    RunnableBranch,
    RunnablePassthrough,
)
from dotenv import load_dotenv

load_dotenv()

prompt1 = PromptTemplate(
    template='Write a detailed report on {topic}',
    input_variables=['topic']
)

prompt2 = PromptTemplate(
    template='Summarize the following text \n {text}',
    input_variables=['text']
)

model = ChatOpenAI()
parser = StrOutputParser()

# First step: generate report text (string)
report_gen_chain = prompt1 | model | parser

# Second step: if the report is too long, summarize it; otherwise return it as-is
branch_chain = RunnableBranch(
    (lambda x: len(x.split()) > 300, prompt2 | model | parser),
    RunnablePassthrough()  # default branch: just pass the text through
)

# Final flow: report generation → branching logic
final_chain = RunnableSequence(report_gen_chain, branch_chain)

print(final_chain.invoke({'topic': 'Russia vs Ukraine'}))
```
🔍 What is happening?

- `report_gen_chain` outputs a string (the report).
- `branch_chain` receives that string as `x`:
  - If `len(x.split()) > 300` → run the summarization chain.
  - Else → `RunnablePassthrough()` → return the original report.
📌 When to use RunnableBranch?

- When your logic depends on the content:
  - Length of text
  - Sentiment label
  - Category / language
❓ Why is this powerful?

- You combine:
  - LLMs for content generation
  - Python conditions for routing
- It’s like `if/else` inside the LangChain pipeline.
📌 Example behavior

- If the model writes a huge report → you’ll see a summary.
- If it’s somewhat short → you’ll see the original detailed report.
3. Joke + Word Count in Parallel – RunnableParallel + RunnableLambda
Generate a joke and compute its word count at the same time.
💻 Code

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import (
    RunnableSequence,
    RunnableLambda,
    RunnablePassthrough,
    RunnableParallel,
)
from dotenv import load_dotenv

load_dotenv()

def word_count(text: str) -> int:
    return len(text.split())

prompt = PromptTemplate(
    template='Write a joke about {topic}',
    input_variables=['topic']
)

model = ChatOpenAI()
parser = StrOutputParser()

# Step 1: generate the joke (string)
joke_gen_chain = RunnableSequence(prompt, model, parser)

# Step 2: from that joke string, compute two things in parallel
parallel_chain = RunnableParallel({
    'joke': RunnablePassthrough(),            # just forward the joke text
    'word_count': RunnableLambda(word_count)  # apply a Python function
})

# Full chain: joke → parallel processing
final_chain = RunnableSequence(joke_gen_chain, parallel_chain)

result = final_chain.invoke({'topic': 'AI'})

final_result = "{} \nword count - {}".format(
    result['joke'],
    result['word_count']
)

print(final_result)
```
🔍 What is happening?

- `joke_gen_chain`:
  - Input: `{'topic': 'AI'}`
  - Output: `"some generated joke"` (string)
- `parallel_chain`:
  - Receives that joke text as input.
  - `RunnablePassthrough()` → returns the joke unchanged.
  - `RunnableLambda(word_count)` → calls your Python function on the joke.
- The final result is a dict:

      {
          "joke": "<joke text>",
          "word_count": 17
      }

Then you format it into a string to print.
📌 When to use RunnableParallel?

- When you want to compute multiple views of the same thing:
  - text + metadata (e.g., length, sentiment)
  - raw text + extracted title
  - answer + explanation
❓ Why is RunnableLambda useful?

- It lets you plug arbitrary Python logic into a LangChain pipeline.
- Great for small utilities like word counting, regex cleaning, etc.
🧾 Example output (will vary)
Why did the AI go to therapy? Because it had too many unresolved loops and couldn’t process its feelings properly!
word count - 20
4. Tweet + LinkedIn Post in Parallel – RunnableParallel + RunnableSequence
Same topic, two platforms: Twitter & LinkedIn content together.
💻 Code

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableSequence, RunnableParallel
from dotenv import load_dotenv

load_dotenv()

prompt1 = PromptTemplate(
    template='Generate a tweet about {topic}',
    input_variables=['topic']
)

prompt2 = PromptTemplate(
    template='Generate a LinkedIn post about {topic}',
    input_variables=['topic']
)

model = ChatOpenAI()
parser = StrOutputParser()

parallel_chain = RunnableParallel({
    'tweet': RunnableSequence(prompt1, model, parser),
    'linkedin': RunnableSequence(prompt2, model, parser)
})

result = parallel_chain.invoke({'topic': 'AI'})

print("Tweet:\n", result['tweet'])
print("\nLinkedIn:\n", result['linkedin'])
```
🔍 What is happening?

- Both sub-chains:
  - Take the same input: `{'topic': 'AI'}`
  - Build different prompts (tweet vs LinkedIn)
  - Use the same model and parser.
- `RunnableParallel` returns:

      {
          "tweet": "<short tweet>",
          "linkedin": "<longer professional post>"
      }
📌 When to use this pattern?

- Multi-channel content generation:
  - Tweet + LinkedIn + Email subject
  - Title + Meta description + Social caption
❓ Why is it nice?

- All the logic is declarative:
  - You just describe what outputs you want.
  - LangChain handles passing the input through all branches.
🧾 Example output (roughly):
Tweet:
AI isn’t here to replace humans—it’s here to amplify our potential. The real power is in humans + AI working together. 🤖🤝 #AI #FutureOfWork
LinkedIn:
Artificial Intelligence is transforming the way we work, learn, and build products. But it’s not about replacing humans—it’s about augmenting our abilities. Teams that learn how to collaborate with AI will move faster, make better decisions, and unlock new opportunities. Now is the right time to upskill, experiment, and think about how AI can enhance value in your domain, not just automate tasks.
5. Joke + Explanation in Parallel (Generated Once, Used Twice)
Pattern: generate once → reuse output in multiple paths.
Let’s slightly improve the earlier “joke + explanation” example so it’s robust.
💻 Code

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import (
    RunnableSequence,
    RunnableParallel,
    RunnablePassthrough,
    RunnableLambda,
)
from dotenv import load_dotenv

load_dotenv()

prompt_joke = PromptTemplate(
    template='Write a joke about {topic}',
    input_variables=['topic']
)

prompt_explain = PromptTemplate(
    template='Explain the following joke - {text}',
    input_variables=['text']
)

model = ChatOpenAI()
parser = StrOutputParser()

# Step 1: generate the joke (string)
joke_gen_chain = RunnableSequence(prompt_joke, model, parser)

# Helper: map joke string → {"text": joke} for the explanation prompt
to_explain_input = RunnableLambda(lambda joke_text: {"text": joke_text})

# Step 2: in parallel, keep the original joke & generate an explanation
parallel_chain = RunnableParallel({
    'joke': RunnablePassthrough(),
    'explanation': RunnableSequence(
        to_explain_input,
        prompt_explain,
        model,
        parser
    )
})

# Final chain: joke → parallel (joke + explanation)
final_chain = RunnableSequence(joke_gen_chain, parallel_chain)

result = final_chain.invoke({'topic': 'cricket'})

print("Joke:\n", result['joke'])
print("\nExplanation:\n", result['explanation'])
```
🔍 What is happening?

- `joke_gen_chain` → `"some cricket joke"` (string).
- `parallel_chain` receives that joke string:
  - `joke`: `RunnablePassthrough()` → returns the joke unchanged.
  - `explanation`:
    - `RunnableLambda` converts the string → `{"text": joke}` (what `prompt_explain` expects).
    - `prompt_explain` builds: `"Explain the following joke - <joke>"`
    - `model` + `parser` produce the explanation string.
- Result:

      {
          "joke": "<cricket joke>",
          "explanation": "<explanation of the joke>"
      }
📌 When to use this pattern?

- When one step’s output should be:
  - Returned as-is
  - Also sent into another chain for extra processing
❓ Why use RunnableLambda here?

- Because `prompt_explain` expects a dict input of the form `{"text": ...}`.
- `RunnableLambda` lets you reshape data between steps.
6. Summary – Runnable Patterns You Now Know
✅ RunnableSequence

- What: Step-by-step chain (A → B → C).
- When: You want a fixed pipeline of transformations.
- Why: Clear, readable, composable flows.

✅ RunnableParallel

- What: Run multiple branches from the same input.
- When: You need multiple outputs (tweet + LinkedIn, joke + word count).
- Why: Saves mental complexity and keeps business logic clean.

✅ RunnablePassthrough

- What: Just forwards the input to the output.
- When: You want the original value plus something derived from it.
- Why: A simple way to keep original data in parallel chains.

✅ RunnableBranch

- What: Conditional routing (`if / elif / else`).
- When: Your next step depends on the content (length, sentiment, category).
- Why: Lets you blend LLMs with deterministic logic.

✅ RunnableLambda

- What: Wraps a Python function into the chain.
- When: You need custom logic (word count, reshaping dictionaries, etc.).
- Why: A flexible bridge between LangChain and normal Python.