🧪 Tutorial: Building Your Own Mini-LangChain (Custom LLM, Prompt Template, and Chain)
In this tutorial, you will learn how LangChain works internally by building:
- A fake LLM (`NakliLLM`)
- A prompt template formatter (`NakliPromptTemplate`)
- A mini chain class (`NakliLLMChain`) that works like `PromptTemplate | LLM | Parser`

This helps you understand:

- What LangChain is doing behind the scenes
- How chaining works
- How prompts and models combine together
Perfect for beginners!
1️⃣ Step 1 – Create a Fake LLM (NakliLLM)
```python
import random

class NakliLLM:

    def __init__(self):
        print('LLM created')

    def predict(self, prompt):
        response_list = [
            'Delhi is the capital of India',
            'IPL is a cricket league',
            'AI stands for Artificial Intelligence'
        ]
        # Return a RANDOM canned response, ignoring the prompt
        return {'response': random.choice(response_list)}
```
✅ WHAT is this?
A mock LLM that returns random answers. It does not use OpenAI or API keys.
⏰ WHEN to use this?
- When teaching LangChain concepts
- When testing LLM pipelines without paying for tokens
- When prototyping LLM flows offline
❓ WHY is this useful?
It helps you understand how LangChain chains work without depending on real LLMs.
▶️ Expected output when initialized:
```
LLM created
```
2️⃣ Step 2 – Create a Prompt Template Class (like LangChain’s PromptTemplate)
```python
class NakliPromptTemplate:

    def __init__(self, template, input_variables):
        self.template = template
        self.input_variables = input_variables

    def format(self, input_dict):
        return self.template.format(**input_dict)
```
WHAT?
A simple template engine that replaces placeholders in a string like `"Write a {length} poem about {topic}"` with actual values.
WHEN?
Always before calling an LLM — because LLMs need a properly formatted prompt as a string.
WHY?
It teaches how LangChain’s PromptTemplate works without the complexity.
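One detail our class skips: it stores `input_variables` but never uses them. A minimal sketch of how you could validate inputs before formatting (the `StrictPromptTemplate` name and the validation logic are my own illustration, not part of this tutorial):

```python
class StrictPromptTemplate:
    """Like NakliPromptTemplate, but checks input_variables before formatting."""

    def __init__(self, template, input_variables):
        self.template = template
        self.input_variables = input_variables

    def format(self, input_dict):
        # Fail early with a clear message instead of a cryptic str.format error
        missing = [v for v in self.input_variables if v not in input_dict]
        if missing:
            raise KeyError(f'Missing prompt variables: {missing}')
        return self.template.format(**input_dict)


template = StrictPromptTemplate(
    template='Write a {length} poem about {topic}',
    input_variables=['length', 'topic']
)
print(template.format({'length': 'short', 'topic': 'india'}))
```

This is roughly why LangChain's real `PromptTemplate` asks you to declare `input_variables` up front.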
3️⃣ Step 3 – Use Our Prompt Template
```python
template = NakliPromptTemplate(
    template='Write a {length} poem about {topic}',
    input_variables=['length', 'topic']
)

prompt = template.format({'length': 'short', 'topic': 'india'})
print(prompt)
```

Output:

```
Write a short poem about india
```
WHAT?
Prompt preparation.
WHEN?
Before sending to ANY LLM.
WHY?
Prompts must be final strings before models use them.
4️⃣ Step 4 – Call The Fake LLM
```python
llm = NakliLLM()
print(llm.predict(prompt))
```

Example output (yours will vary, since the response is random):

```
LLM created
{'response': 'AI stands for Artificial Intelligence'}
```
WHAT?
We pass a prompt → LLM returns a fake response.
WHEN?
Whenever you need LLM output.
WHY?
This simulates real LLM behavior in a toy environment.
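Because `predict` picks a response at random, repeated runs can be flaky in tests. A small sketch showing how seeding Python's standard `random` module makes the fake LLM reproducible (the seeding is my own addition, not part of the tutorial):

```python
import random


class NakliLLM:

    def __init__(self):
        print('LLM created')

    def predict(self, prompt):
        response_list = [
            'Delhi is the capital of India',
            'IPL is a cricket league',
            'AI stands for Artificial Intelligence'
        ]
        return {'response': random.choice(response_list)}


llm = NakliLLM()

random.seed(42)
first = llm.predict('any prompt')['response']

random.seed(42)          # same seed -> same "random" pick
second = llm.predict('any prompt')['response']

print(first == second)   # True
```

The same trick works when unit-testing any pipeline that contains a mock model.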
5️⃣ Step 5 – Build a Mini Chain (NakliLLMChain)
Now we create the class:
```python
class NakliLLMChain:

    def __init__(self, llm, prompt_template):
        self.llm = llm
        self.prompt_template = prompt_template

    def run(self, input_dict):
        # 1. Format the prompt
        formatted_prompt = self.prompt_template.format(input_dict)
        # 2. Pass the formatted prompt to the LLM
        result = self.llm.predict(formatted_prompt)
        # 3. Return just the response string
        return result['response']
```
WHAT is the chain?
A simple wrapper for:

```
PromptTemplate -> LLM -> Output
```

Exactly like LangChain's LCEL pipeline:

```
prompt | model | StrOutputParser()
```
WHEN is a chain used?
When you want to:

- Combine prompt creation and the LLM call
- Reuse the same flow repeatedly
- Organize your code cleanly
WHY is the chain important?
This is the core concept of LangChain — simple, modular pipelines.
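To mirror the full `prompt | model | StrOutputParser()` pipeline, you can add a third stage. A self-contained sketch (the `NakliStrOutputParser` class and the three-stage `NakliChain` are my own illustration, built on the tutorial's classes):

```python
import random


class NakliLLM:
    def predict(self, prompt):
        responses = [
            'Delhi is the capital of India',
            'IPL is a cricket league',
            'AI stands for Artificial Intelligence'
        ]
        return {'response': random.choice(responses)}


class NakliPromptTemplate:
    def __init__(self, template, input_variables):
        self.template = template
        self.input_variables = input_variables

    def format(self, input_dict):
        return self.template.format(**input_dict)


class NakliStrOutputParser:
    """Plays the role of LangChain's StrOutputParser: dict -> plain string."""
    def parse(self, llm_output):
        return llm_output['response']


class NakliChain:
    """PromptTemplate -> LLM -> Parser, as one reusable pipeline."""
    def __init__(self, prompt_template, llm, parser):
        self.prompt_template = prompt_template
        self.llm = llm
        self.parser = parser

    def run(self, input_dict):
        prompt = self.prompt_template.format(input_dict)
        raw = self.llm.predict(prompt)
        return self.parser.parse(raw)


chain = NakliChain(
    NakliPromptTemplate('Write a {length} poem about {topic}', ['length', 'topic']),
    NakliLLM(),
    NakliStrOutputParser()
)
print(chain.run({'length': 'short', 'topic': 'india'}))
```

Splitting parsing into its own stage means you can swap in a different parser (say, one that extracts JSON) without touching the prompt or the model.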
6️⃣ Step 6 – Use the Chain
```python
template = NakliPromptTemplate(
    template='Write a {length} poem about {topic}',
    input_variables=['length', 'topic']
)

llm = NakliLLM()
chain = NakliLLMChain(llm, template)

result = chain.run({'length': 'short', 'topic': 'india'})
print(result)
```

Sample output:

```
LLM created
AI stands for Artificial Intelligence
```
Full Concept Summary (Easy for Beginners)
| Component | What? | When? | Why? |
|---|---|---|---|
| NakliLLM | Fake language model | Offline testing | Learn chain flow without OpenAI |
| NakliPromptTemplate | Formats prompt text | Before any LLM call | Reusable & clean prompts |
| NakliLLMChain | Pipeline wrapper | Repeated tasks | Core LangChain concept |
| predict() | Simulates model output | During execution | Easy testing |
| template.format() | Fills placeholders | Before LLM call | Converts dict → final string |
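The "Repeated tasks" row is really the point of the chain: once built, one chain serves many inputs. A short, self-contained sketch (repeating the tutorial's classes, lightly trimmed) that reuses the same chain across topics:

```python
import random


class NakliLLM:
    def predict(self, prompt):
        responses = [
            'Delhi is the capital of India',
            'IPL is a cricket league',
            'AI stands for Artificial Intelligence'
        ]
        return {'response': random.choice(responses)}


class NakliPromptTemplate:
    def __init__(self, template, input_variables):
        self.template = template
        self.input_variables = input_variables

    def format(self, input_dict):
        return self.template.format(**input_dict)


class NakliLLMChain:
    def __init__(self, llm, prompt_template):
        self.llm = llm
        self.prompt_template = prompt_template

    def run(self, input_dict):
        formatted_prompt = self.prompt_template.format(input_dict)
        result = self.llm.predict(formatted_prompt)
        return result['response']


template = NakliPromptTemplate('Write a {length} poem about {topic}',
                               ['length', 'topic'])
chain = NakliLLMChain(NakliLLM(), template)

# One chain, many inputs -- the reuse the table describes
for topic in ['india', 'cricket', 'AI']:
    print(topic, '->', chain.run({'length': 'short', 'topic': topic}))
```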