💊 Pill of the Week
Imagine you have a complex task, like writing a detailed report or analyzing a lengthy customer feedback document. Trying to get an AI to do it all in one go can be like asking a chef to prepare a gourmet meal with just one instruction: "Make dinner!" The result might be edible, but probably not perfect.
This is where Prompt Chaining comes in. It's a powerful, yet elegantly simple, workflow pattern that helps Large Language Models (LLMs) tackle intricate problems by breaking them down into a series of smaller, more manageable steps.
Think of it like cleaning your house room by room instead of trying to tackle the whole place at once. When you focus on just the kitchen, you actually get it spotless. Try to clean everything simultaneously, and you'll probably just move clutter around.
Why is Prompt Chaining so Effective?
Enhanced Accuracy: By focusing on one sub-task at a time, the LLM can dedicate its full attention and reasoning power to that specific step, leading to more precise and reliable outputs for each stage.
Clarity and Control: You gain a clearer understanding of how the AI arrives at its final answer, making debugging and refinement much easier. Each step is transparent.
Modularity: Each part of your chain is a distinct component. This means you can easily swap out or improve individual steps without disrupting the entire workflow.
This pattern is particularly well-suited for tasks that naturally have a clear, sequential flow, where the successful completion of one stage is a prerequisite for the next. For example, generating a document outline, then checking that outline against specific criteria, and finally writing the document based on the refined outline.
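To make that flow concrete, here is a minimal plain-Python sketch of the outline → check → write sequence, where call_llm is a hypothetical helper standing in for a real model call (no API needed to follow the structure):

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call, so the flow
    # can be sketched end to end without an API key.
    return f"[LLM output for: {prompt.splitlines()[0]}]"

def write_report(topic: str) -> str:
    # Step 1: generate a document outline
    outline = call_llm(f"Draft an outline for a report on {topic}.")
    # Step 2: check the outline against specific criteria
    feedback = call_llm(f"Check this outline against our criteria:\n{outline}")
    # Step 3: write the document from the refined outline
    return call_llm(f"Write the report using this outline and feedback:\n{outline}\n{feedback}")

print(write_report("prompt chaining"))
```

Each step consumes the previous step's output, which is exactly the dependency structure prompt chaining captures.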
Under the Hood
LangChain Expression Language (LCEL) makes building these chains incredibly intuitive. LCEL uses Python's familiar pipe operator (|) to connect different components, creating a RunnableSequence. This means the output of one component automatically flows as the input to the next, just like water through a pipe!
Every core component in LangChain – from your prompts to your LLMs and output parsers – implements the Runnable protocol, making them perfectly compatible with this chaining mechanism.
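The pipe mechanics can be mimicked in a few lines of plain Python. This is a conceptual sketch only, not LangChain's actual implementation: the toy MiniRunnable class just shows why piping two Runnables yields something that behaves like a RunnableSequence.

```python
class MiniRunnable:
    """Toy stand-in for LangChain's Runnable protocol (illustrative only)."""

    def __init__(self, func):
        self.func = func

    def invoke(self, x):
        return self.func(x)

    def __or__(self, other):
        # Piping two runnables yields a new one that feeds the first's
        # output into the second -- conceptually a RunnableSequence.
        return MiniRunnable(lambda x: other.invoke(self.invoke(x)))

double = MiniRunnable(lambda x: x * 2)
add_one = MiniRunnable(lambda x: x + 1)
chain = double | add_one
print(chain.invoke(3))  # -> 7
```

Because composition returns another runnable, chains of any length compose the same way — which is what makes LCEL pipelines so uniform.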
Let's see how this works in practice with a real-world example.
🛠️ Do It Yourself: Refining Sentiment Analysis
Imagine you have a stream of customer feedback, and you need to not only understand the sentiment but also extract key issues and present a concise, refined summary. This is a perfect candidate for prompt chaining.
The Challenge
We'll take a customer review and put it through a multi-stage process:
Sentiment Analysis: Determine the overall sentiment (Positive, Negative, or Neutral) and provide a brief explanation.
Key Phrase Extraction: Identify up to 5 important key phrases that capture the essence of the review.
Refined Summary: Create a concise 2-3 sentence summary incorporating the original review, its sentiment, and the extracted key phrases.
As an example we will use the following customer review:
“The new coffee machine is a disaster. It constantly leaks water, the coffee tastes burnt, and the brewing process takes forever. I'm extremely disappointed with this purchase and would not recommend it to anyone. The previous model was much better.”
Let’s begin!
Setting the Stage
First, we need to import the necessary tools from LangChain and initialize our Large Language Model. We'll use ChatOpenAI for this example, specifically the gpt-4o model.
# Install necessary libraries if you haven't already
# !pip install langchain-openai
import os
from getpass import getpass
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough, RunnableLambda
from operator import itemgetter
from langchain_core.output_parsers import StrOutputParser
# Set up your OpenAI API key (replace with your actual key or environment variable)
# If your key is not already set as an environment variable, uncomment and run the following:
# os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API Key:")
# Initialize our AI brain (the LLM) using gpt-4o
llm = ChatOpenAI(model="gpt-4o")
Interpretation:
!pip install langchain-openai: This command installs the required LangChain and OpenAI libraries in your environment.
os.environ["OPENAI_API_KEY"] = getpass(...): This line securely sets your OpenAI API key as an environment variable, which is crucial for authenticating with OpenAI's models.
ChatOpenAI(model="gpt-4o"): This initializes our LLM instance, specifically using the gpt-4o model.
ChatPromptTemplate, RunnablePassthrough, RunnableLambda, itemgetter, StrOutputParser: These are the core LangChain components we'll use to build our flexible and powerful chains.
Understanding Pure Sequential Chaining
Before diving into our multi-faceted example, it's helpful to see a simpler form of chaining where the output of one step directly becomes the sole input of the next. This illustrates the fundamental sequential flow.
# A simple, pure sequential chain example
pure_sequential_example = (
    ChatPromptTemplate.from_template("What is the capital of {country}?")
    | llm
    | StrOutputParser()
    | ChatPromptTemplate.from_template("Briefly tell me about {text}.")  # {text} is the capital from the previous step
    | llm
    | StrOutputParser()
)
# Invoke the simple sequential chain
print(pure_sequential_example.invoke({"country": 'Spain'}))
The output of this code would be:
Madrid is the capital and largest city of Spain. It is located in the center of the country and serves as its political, economic, and cultural hub. Known for its vibrant nightlife, historic architecture, world-class museums like the Prado and Reina Sofia, and lively neighborhoods, Madrid is a city that seamlessly blends traditional charm with modern energy. It is also home to the Spanish royal family and government institutions.
Interpretation:
This simple chain first asks the LLM for the capital of a given country, then takes that capital as the input ({text}) for a second prompt, asking for information about it. Because the second prompt template has exactly one input variable, LangChain automatically wraps the incoming string into the expected dictionary. This perfectly illustrates how the output of one Runnable (the capital city) becomes the input for the next (ChatPromptTemplate.from_template("Briefly tell me about {text}.")).
📖 Book of the Week
If you're building or want to build LLM-powered apps and agents — whether you're an AI developer, MLOps engineer, or product team lead — you need to check this out:
“Generative AI with LangChain” (Second Edition)
By Ben Auffarth & Leonid Kuligin
💡 This book isn’t just about building cool prototypes — it’s a practical guide to designing, scaling, and deploying production-ready GenAI systems using LangChain, LangGraph, and Python.
What sets it apart?
It tackles one of the biggest challenges in GenAI: moving from prototype to production — with a strong focus on multi-agent coordination, observability, and real-world deployment:
✅ Design robust LangGraph agent workflows that scale
✅ Build powerful RAG pipelines with re-ranking and hybrid search
✅ Apply enterprise-ready testing, monitoring, and error-handling
✅ Explore Tree-of-Thoughts, structured generation, and agent handoffs
✅ Work with top LLMs like Gemini, o3-mini, Mistral, DeepSeek, and Claude
This is the guide that turns experimentation into reliable AI infrastructure.
This is a must-read for:
🧠 AI engineers building multi-agent systems
🐍 Python devs deploying LLM apps in real-world environments
🏢 Enterprise teams moving GenAI projects into production
🔬 Anyone working with LangChain, LangGraph, or advanced RAG
✅ Devs who care about security, compliance, and ethical AI
Now we are ready for the more complex scenario!
Sentiment Analysis
Our first task is to determine the sentiment of the customer review. We'll define a prompt specifically for this and create a simple chain.
# Prompt for Sentiment Analysis
sentiment_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an expert sentiment analyzer. Analyze the sentiment of the following customer review and categorize it as 'Positive', 'Negative', or 'Neutral'. Provide a brief explanation for your categorization."),
        ("user", "Customer Review:\n{review}"),
    ]
)
# Define the sentiment analysis sub-chain
sentiment_analysis_chain = sentiment_prompt | llm | StrOutputParser()
Interpretation:
sentiment_prompt: This ChatPromptTemplate is designed to take a review as input. The system message instructs the AI to act as a sentiment analyzer and categorize the sentiment.
sentiment_analysis_chain = sentiment_prompt | llm | StrOutputParser(): This creates our first simple chain. It pipes the review into the sentiment_prompt, sends the result to our llm (gpt-4o), and then uses StrOutputParser() to convert the LLM's output into a clean string.
We can invoke this chain with the customer review and see the output. Note that we first store the review from above in a customer_review variable:
customer_review = """The new coffee machine is a disaster. It constantly leaks water, the coffee tastes burnt, and the brewing process takes forever. I'm extremely disappointed with this purchase and would not recommend it to anyone. The previous model was much better."""

print(sentiment_analysis_chain.invoke({"review": customer_review}))
Sentiment: Negative
Explanation: The review expresses clear dissatisfaction with the product, describing it as a "disaster" and highlighting several specific issues, including water leaks, burnt coffee flavor, and a long brewing process. Additionally, the reviewer explicitly states being "extremely disappointed" and advises against purchasing the machine, further reinforcing the negative sentiment.
Key Phrase Extraction
Next, we'll focus on extracting key phrases from the original review. This will be another independent step, so it gets its own prompt and simple chain.
# Prompt for Key Phrase Extraction
key_phrases_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an expert in extracting key phrases. From the following customer review, identify and list up to 5 of the most important key phrases that capture the essence of the review. Present them as a comma-separated list."),
        ("user", "Customer Review:\n{review}"),
    ]
)
# Define the key phrase extraction sub-chain
key_phrase_extraction_chain = key_phrases_prompt | llm | StrOutputParser()
Interpretation:
key_phrases_prompt: This prompt focuses solely on extracting important key phrases from the review.
key_phrase_extraction_chain = key_phrases_prompt | llm | StrOutputParser(): Similar to the sentiment chain, this forms a self-contained unit that takes a review and returns a comma-separated list of key phrases.
Similarly, we can invoke this chain and see its output:
print(key_phrase_extraction_chain.invoke({"review": customer_review}))
leaks water, coffee tastes burnt, brewing process takes forever, extremely disappointed, previous model was better
Refined Summary
Finally, we define the prompt for generating the refined summary. This prompt is special because it will require inputs from the original review and the outputs of our previous two sub-tasks.
# Prompt for Refined Summary using outputs from previous steps
summary_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an expert summarizer. Based on the following customer review, its sentiment, and key phrases, create a concise, 2-3 sentence summary that incorporates these details."),
        ("user", "Customer Review:\n{review}\n\nSentiment:\n{sentiment_analysis}\n\nKey Phrases:\n{key_phrases}"),
    ]
)
Interpretation:
summary_prompt: This prompt expects three distinct inputs: review, sentiment_analysis, and key_phrases. The system message guides the AI to combine these elements into a concise summary. Notice that we don't define a full chain for this yet, as it needs to receive inputs from the previous dynamic steps.
Building the Full Chain