Your First LangChain App in 5 Minutes: A No-Fluff Guide

Most AI tutorials overwhelm you with theory before showing anything useful. Not this one. In the time it takes to brew coffee, you’ll build a working LangChain app that personalizes greetings. No PhD required.

What We’re Building

A dead-simple AI greeter that:

  1. Takes a name (like “Sarah”)
  2. Generates a custom welcome message
  3. Spits out something like: “Hey Sarah! Ready to dive into LangChain?”

Here’s the entire code (we’ll break it down after):

python

from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Set up the AI brain (gpt-3.5-turbo is a chat model, so it goes through ChatOpenAI)
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)  # Not too robotic, not too wild

# Create a Mad Libs-style template
prompt = PromptTemplate(
    input_variables=["name"],
    template="Yo {name}! LangChain's gonna blow your mind. What's up?"
)

# Wire them together
greeter = LLMChain(llm=llm, prompt=prompt)

# Test drive it
print(greeter.run({"name": "Alex"}))

Output:
GPT’s reply to “Yo Alex! LangChain’s gonna blow your mind. What’s up?” (the exact wording varies each run, since the chain sends the filled-in prompt to the model and prints whatever comes back)

Why This Matters

This isn’t just a fancy “Hello World.” You’ve just built:

  • Dynamic prompts (change “Alex” to any name; see the quick loop below)
  • Controlled creativity (that temperature knob)
  • A reusable pipeline (swap components later)
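
To see the dynamic-prompt part in action, here’s a minimal sketch that reuses the same greeter chain for a few different names (it assumes the llm, prompt, and greeter objects from the example above are already defined):

python

# Reuse the one chain for any name you like
for name in ["Sarah", "Alex", "Priya"]:
    print(greeter.run({"name": name}))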

Line-by-Line for the Curious

  1. llm = ChatOpenAI(…)
    • This is your AI’s engine. gpt-3.5-turbo is a chat model, so it goes through ChatOpenAI, and it’s cheaper and faster than the old Davinci models.
    • temperature=0.7 means “be fun but don’t go off the rails.” Try 0 for robot mode, 1 for drunk poet.
  2. PromptTemplate(…)
    • The {name} part is a placeholder—like a blank in Mad Libs.
    • Pro tip: Bad templates = bad outputs. “Write a tweet about {topic}” works better than “Say something about {topic}.”
  3. LLMChain(…)
    • This marries the AI and template so they play nice together.
    • Later, you can add memory or tools here without rewriting everything.
  4. .run({"name": "Alex"})
    • The magic happens here (the sketch after this list walks through it by hand). LangChain:
      1. Slots “Alex” into the template
      2. Sends “Yo Alex!…” to GPT-3.5
      3. Returns GPT’s response
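
Here’s roughly what those three steps look like done by hand, so you can see there’s no magic. This is a sketch that assumes the llm and prompt objects from the full example, and a legacy LangChain version where llm.predict accepts a plain string:

python

# Step 1: slot "Alex" into the template
filled = prompt.format(name="Alex")
print(filled)  # Yo Alex! LangChain's gonna blow your mind. What's up?

# Steps 2 and 3: send the filled prompt to the model and get its reply back.
# greeter.run({"name": "Alex"}) does all of this in one call.
print(llm.predict(filled))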

Where Newbies Get Stuck

  • API Keys: Forgot to export OPENAI_API_KEY='sk-…'? It’ll yell at you. (A .env alternative is sketched below.)
  • Overengineering: Start small. Fancy memory/agents come later.
  • Prompt Wrinkles: If output sucks, tweak the template first before blaming the AI.
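
If you’d rather not export the key in every new terminal, the python-dotenv package from the install command at the end of this post handles it. A minimal sketch, assuming you keep a .env file next to your script containing OPENAI_API_KEY=sk-…:

python

from dotenv import load_dotenv

load_dotenv()  # pulls OPENAI_API_KEY from .env into the environment before LangChain needs it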

Level Up: 2 Simple Tweaks

1. Add Emotions

Change the template to:

python

"Say this excitedly: {name}! Dude. LangChain. Let's. Go."

Now it outputs something like:
“Alex! Dude. LangChain. Let’s. Go.”
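
The string on its own isn’t enough, though: you also have to rebuild the prompt and rewire the chain around it. A minimal sketch, reusing the llm from the first example (the excited_prompt and excited_greeter names are just for this snippet):

python

excited_prompt = PromptTemplate(
    input_variables=["name"],
    template="Say this excitedly: {name}! Dude. LangChain. Let's. Go."
)
excited_greeter = LLMChain(llm=llm, prompt=excited_prompt)
print(excited_greeter.run({"name": "Alex"}))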

2. Multi-Variable Prompts

python


prompt = PromptTemplate(
    input_variables=["name", "mood"],
    template="Hey {name}, you seem {mood}. Want a LangChain tip?"
)
greeter = LLMChain(llm=llm, prompt=prompt)  # rebuild the chain so it picks up the new prompt

print(greeter.run({"name": "Jamie", "mood": "pumped"}))

Output:
GPT’s answer to “Hey Jamie, you seem pumped. Want a LangChain tip?” (again, the chain fills the template, sends it to the model, and prints the reply)

Why This Beats Raw LLM Calls

Without LangChain, you’d be stuck doing:

python

manual_prompt = f"Yo {name}!…"  # Have fun keeping track of these
response = openai.ChatCompletion.create(…)  # Good luck managing conversations

LangChain keeps you sane as your app grows.

What’s Next?

  • Memory: Make it remember past convos (a quick taste below)
  • Tools: Let it check weather/emails
  • Agents: Have it decide when to use which tool
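
For a taste of the memory step, here’s a minimal sketch using ConversationChain and ConversationBufferMemory from the same legacy LangChain API as the rest of this post, reusing the llm from the first example:

python

from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# The memory object keeps the running transcript and feeds it back in on every turn
chat = ConversationChain(llm=llm, memory=ConversationBufferMemory())
print(chat.run("Hi, I'm Alex and I'm learning LangChain."))
print(chat.run("What's my name?"))  # answered from the stored history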

But today? Pat yourself on the back—you’ve just built something that would’ve taken 100x more code last year.

Pro Tip: Run pip install langchain openai python-dotenv first. Then steal this code and make it swear like a sailor (set temperature=1 and watch the chaos).
