Building Your First AI Agent Without Frameworks
Geri Máté


Publish Date: Jun 13

Want to understand how AI agents actually work? Let's build one from scratch before jumping into frameworks.


Most AI agent tutorials start with LangGraph or CrewAI, which are great tools, but they can make it hard to understand what's happening underneath.

An agent is really just a language model that can call functions. Once you understand that, frameworks make way more sense.

Today we're building a customer support system using OpenAI's API and Python. This will give you the fundamentals that make any agent framework easier to use and debug.

What we're building:

  • A routing system that decides which "specialist" handles each query
  • Function-calling agents that can search FAQs and analyze sentiment
  • Simple state management to track conversations
  • Logic to escalate to humans when needed

By the end, you'll understand how agents work under the hood, making you much more effective when you do use frameworks.

An Agent is Just an LLM with Tools

Seriously, that's all there is to it:

  1. Language model with a specific job
  2. Functions it can call
  3. Logic to decide when to use them

Everything else is just orchestration.

Let's start with the simplest possible agent:

import json
import os
from typing import Callable, List

import openai

# Set up OpenAI (get your API key from https://platform.openai.com/api-keys)
openai.api_key = os.getenv("OPENAI_API_KEY")

class SimpleAgent:
    def __init__(self, name: str, role: str, tools: List[Callable]):
        self.name = name
        self.role = role
        self.tools = {tool.__name__: tool for tool in tools}

    def respond(self, message: str) -> str:
        # Create tool descriptions for the model
        tool_descriptions = []
        for name, func in self.tools.items():
            tool_descriptions.append({
                "type": "function",
                "function": {
                    "name": name,
                    "description": func.__doc__ or f"Function {name}",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "query": {"type": "string", "description": "The input query"}
                        },
                        "required": ["query"]
                    }
                }
            })

        # Call OpenAI with function calling
        response = openai.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": self.role},
                {"role": "user", "content": message}
            ],
            tools=tool_descriptions,
            tool_choice="auto"  # Let the model decide when to use tools
        )

        # Handle function calls
        if response.choices[0].message.tool_calls:
            tool_call = response.choices[0].message.tool_calls[0]
            function_name = tool_call.function.name
            arguments = json.loads(tool_call.function.arguments)

            # Execute the function and return its result directly
            # (a fuller loop would send the result back to the model as a
            # "tool" message so it can phrase the final answer itself)
            if function_name in self.tools:
                result = self.tools[function_name](arguments["query"])
                return f"{self.name}: {result}"

        # Regular response if no function call
        return f"{self.name}: {response.choices[0].message.content}"

# Test it out
def search_faq(query: str) -> str:
    """Search the FAQ database for answers"""
    faqs = {
        "shipping": "Standard shipping takes 3-5 business days",
        "refund": "Refunds processed within 5-7 business days",
        "return": "Returns accepted within 30 days"
    }

    for topic, answer in faqs.items():
        if topic in query.lower():
            return answer
    return "No FAQ found for that topic"

# Create an FAQ agent
faq_agent = SimpleAgent(
    name="FAQ Assistant",
    role="You're a helpful FAQ assistant. Use the search_faq function to find answers to customer questions.",
    tools=[search_faq]
)

# Test it
print(faq_agent.respond("How long does shipping take?"))
# FAQ Assistant: Standard shipping takes 3-5 business days

Done. You just built an AI agent. It understands questions, knows when to use its tool, and gives helpful answers.
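A nice side effect of keeping tools as plain Python is that you can unit-test them without an API key or a single model call. Here's `search_faq` repeated so the snippet runs standalone, with a few sanity checks (the assertions are mine, not from the original):

```python
# Quick sanity checks for the search_faq tool -- no API key needed,
# since the tool itself is ordinary Python.
def search_faq(query: str) -> str:
    """Search the FAQ database for answers"""
    faqs = {
        "shipping": "Standard shipping takes 3-5 business days",
        "refund": "Refunds processed within 5-7 business days",
        "return": "Returns accepted within 30 days",
    }
    for topic, answer in faqs.items():
        if topic in query.lower():
            return answer
    return "No FAQ found for that topic"

assert search_faq("How long does SHIPPING take?") == "Standard shipping takes 3-5 business days"
assert search_faq("Can I get a refund?") == "Refunds processed within 5-7 business days"
assert search_faq("What is the meaning of life?") == "No FAQ found for that topic"
```

Testing the tool in isolation first makes it much easier to tell whether a bad answer later comes from the tool or from the model's decision about when to call it.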

Adding More Specialists

Now let's add agents that handle different kinds of queries:

def analyze_sentiment(query: str) -> str:
    """Analyze the emotional tone of customer messages"""
    # Simple keyword approach - you could use a real sentiment model from Hugging Face Transformers (https://huggingface.co/docs/transformers/index)
    negative_words = ["angry", "frustrated", "terrible", "awful", "hate"]
    urgent_words = ["urgent", "immediately", "asap", "emergency"]

    query_lower = query.lower()

    if any(word in query_lower for word in urgent_words):
        return "URGENT: Customer needs immediate attention"
    elif any(word in query_lower for word in negative_words):
        return "NEGATIVE: Customer is frustrated, handle with care"
    else:
        return "NEUTRAL: Standard response appropriate"

def check_escalation_needed(query: str) -> str:
    """Determine if human escalation is needed"""
    escalation_triggers = [
        "speak to manager", "cancel account", "legal action", 
        "complaint", "lawsuit", "terrible service"
    ]

    if any(trigger in query.lower() for trigger in escalation_triggers):
        return "ESCALATE: Route to human agent immediately"
    else:
        return "CONTINUE: AI agent can handle this query"

# Create specialized agents
sentiment_agent = SimpleAgent(
    name="Sentiment Analyzer",
    role="You analyze customer emotions. Use analyze_sentiment to understand how the customer is feeling.",
    tools=[analyze_sentiment]
)

escalation_agent = SimpleAgent(
    name="Escalation Manager", 
    role="You decide when customers need human help. Use check_escalation_needed to evaluate queries.",
    tools=[check_escalation_needed]
)
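One detail worth noticing in `analyze_sentiment`: the order of the checks matters. Urgency is tested before negativity, so a message that is both angry and urgent gets flagged URGENT. The snippet below repeats the function so it runs standalone and demonstrates that precedence (assertions are mine):

```python
# The check order in analyze_sentiment is a deliberate precedence rule:
# urgency wins over negativity when both keyword lists match.
negative_words = ["angry", "frustrated", "terrible", "awful", "hate"]
urgent_words = ["urgent", "immediately", "asap", "emergency"]

def analyze_sentiment(query: str) -> str:
    """Analyze the emotional tone of customer messages"""
    query_lower = query.lower()
    if any(word in query_lower for word in urgent_words):
        return "URGENT: Customer needs immediate attention"
    elif any(word in query_lower for word in negative_words):
        return "NEGATIVE: Customer is frustrated, handle with care"
    return "NEUTRAL: Standard response appropriate"

assert analyze_sentiment("I'm angry and I need this fixed immediately").startswith("URGENT")
assert analyze_sentiment("I'm so frustrated with you").startswith("NEGATIVE")
assert analyze_sentiment("Where is my package?").startswith("NEUTRAL")
```

If you swap in a real sentiment model later, you'll want to keep an explicit precedence rule like this rather than relying on whichever score happens to be higher.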

The Router: Deciding Who Handles What

Here's where it gets interesting - we need something to decide which agent handles each message:

class AgentRouter:
    def __init__(self):
        self.agents = {
            "faq": faq_agent,
            "sentiment": sentiment_agent,
            "escalation": escalation_agent
        }
        self.conversation_history = []

    def route_query(self, query: str) -> str:
        """Decide which agent should handle this query"""

        # Save the conversation
        self.conversation_history.append({"role": "user", "content": query})

        # Basic routing - you could make this way smarter
        query_lower = query.lower()

        # Check for escalation triggers first
        if any(word in query_lower for word in ["manager", "complaint", "cancel", "lawsuit"]):
            agent_name = "escalation"
        # Check for emotional language
        elif any(word in query_lower for word in ["angry", "frustrated", "urgent", "terrible"]):
            agent_name = "sentiment"
        # Default to FAQ for standard questions
        else:
            agent_name = "faq"

        # Get response from the right agent
        agent = self.agents[agent_name]
        response = agent.respond(query)

        # Save that too
        self.conversation_history.append({"role": "assistant", "content": response})

        return f"[Routed to {agent_name.upper()}]\n{response}"

    def get_conversation_summary(self) -> str:
        """Get a summary of the conversation so far"""
        if not self.conversation_history:
            return "No conversation yet"

        summary = f"Conversation with {len(self.conversation_history)//2} exchanges:\n"
        for msg in self.conversation_history[-4:]:  # Last 2 exchanges
            role = "Customer" if msg["role"] == "user" else "Agent"
            summary += f"{role}: {msg['content']}\n"

        return summary

# Test the complete system
router = AgentRouter()

print("=== Customer Support Agent System ===\n")

# Test different types of queries
test_queries = [
    "How long does shipping take?",
    "I'm really frustrated with this terrible service!",
    "I want to speak to your manager right now!",
    "What's your return policy?"
]

for query in test_queries:
    print(f"Customer: {query}")
    response = router.route_query(query)
    print(f"{response}\n")

print("Conversation Summary:")
print(router.get_conversation_summary())

Making It Smarter: Let the AI Do the Routing

Keyword matching works, but we can do better. Let's use the LLM itself to make routing decisions:

class SmartRouter:
    def __init__(self):
        self.agents = {
            "faq": faq_agent,
            "sentiment": sentiment_agent, 
            "escalation": escalation_agent
        }
        self.conversation_history = []

    def smart_route(self, query: str) -> str:
        """Use AI to decide which agent should handle the query"""

        routing_prompt = f"""You're routing customer queries to specialists.

        Options:
        - faq: Standard questions about policies, shipping, returns
        - sentiment: Upset or frustrated customers  
        - escalation: Complex complaints or requests for managers

        Customer: "{query}"

        Which specialist? Just answer: faq, sentiment, or escalation"""

        response = openai.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": routing_prompt}],
            temperature=0
        )

        agent_choice = response.choices[0].message.content.strip().lower()

        # Default to FAQ if something weird happens
        if agent_choice not in self.agents:
            agent_choice = "faq"

        # Get response from chosen agent
        agent_response = self.agents[agent_choice].respond(query)

        return f"[Smart routed to {agent_choice.upper()}]\n{agent_response}"

# Test smart routing
smart_router = SmartRouter()

print("=== Smart Routing Test ===\n")

smart_test_queries = [
    "My package is late and I'm getting married tomorrow!",
    "Do you accept international credit cards?", 
    "This is absolutely ridiculous, I want my money back immediately!",
    "Can I return something I bought 3 weeks ago?"
]

for query in smart_test_queries:
    print(f"Customer: {query}")
    response = smart_router.smart_route(query)
    print(f"{response}\n")
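One rough edge with prompting for a single word: models occasionally answer with extra punctuation or phrasing ("Escalation.", "I'd route this to sentiment"), which would trip the exact-match check and silently fall back to FAQ. A small normalizer (my addition, not part of the SmartRouter above) makes that fallback less trigger-happy:

```python
def normalize_choice(raw: str, valid: set, default: str = "faq") -> str:
    """Map a free-form model reply onto one of the known agent names."""
    cleaned = raw.strip().lower().strip(".!\"'")
    if cleaned in valid:
        return cleaned
    # Fall back to scanning the reply for any valid name it mentions
    for name in valid:
        if name in cleaned:
            return name
    return default

valid = {"faq", "sentiment", "escalation"}
assert normalize_choice("Escalation.", valid) == "escalation"
assert normalize_choice("I would pick sentiment", valid) == "sentiment"
assert normalize_choice("no idea", valid) == "faq"
```

You could drop this in where SmartRouter checks `agent_choice not in self.agents`. Setting `temperature=0` already helps, but defensive parsing costs nothing.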

Adding Memory: Making Conversations Actually Work

Real support conversations build on what happened before. Here's how to add memory:

class MemoryAwareRouter:
    def __init__(self):
        self.agents = {
            "faq": faq_agent,
            "sentiment": sentiment_agent,
            "escalation": escalation_agent
        }
        self.conversation_memory = []
        self.customer_context = {
            "sentiment_history": [],
            "escalated": False,
            "resolved_issues": []
        }

    def process_with_memory(self, query: str) -> str:
        """Process query with full conversation context"""

        # Save current message
        self.conversation_memory.append({"role": "user", "content": query, "timestamp": "now"})

        # Build context summary
        context = self._build_context()

        routing_prompt = f"""Previous conversation context:
        {context}

        Current message: "{query}"

        Which specialist should handle this?
        - faq: Standard questions
        - sentiment: Emotional customers
        - escalation: Complex issues or if already escalated

        Consider the conversation history. Answer: faq, sentiment, or escalation"""

        response = openai.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": routing_prompt}],
            temperature=0
        )

        agent_choice = response.choices[0].message.content.strip().lower()
        if agent_choice not in self.agents:
            agent_choice = "faq"

        # Update customer context based on routing
        if agent_choice == "sentiment":
            self.customer_context["sentiment_history"].append("negative")
        elif agent_choice == "escalation":
            self.customer_context["escalated"] = True

        # Get enhanced response with context
        agent_response = self._get_contextual_response(agent_choice, query)

        # Add to memory
        self.conversation_memory.append({
            "role": "assistant", 
            "content": agent_response,
            "agent": agent_choice
        })

        return f"[Contextual routing to {agent_choice.upper()}]\n{agent_response}"

    def _build_context(self) -> str:
        """Build conversation context summary"""
        if not self.conversation_memory:
            return "New conversation"

        context = f"Conversation history: {len(self.conversation_memory)} messages\n"
        context += f"Customer escalated: {self.customer_context['escalated']}\n"
        context += f"Negative sentiment detected: {len(self.customer_context['sentiment_history'])} times\n"

        # Include last few exchanges
        recent = self.conversation_memory[-4:]
        for msg in recent:
            role = "Customer" if msg["role"] == "user" else f"Agent ({msg.get('agent', 'unknown')})"
            context += f"{role}: {msg['content'][:100]}...\n"

        return context

    def _get_contextual_response(self, agent_name: str, query: str) -> str:
        """Get response with conversation context"""
        agent = self.agents[agent_name]

        # Add context to the agent's response
        if self.customer_context["escalated"] and agent_name != "escalation":
            prefix = "[Customer previously escalated] "
        elif len(self.customer_context["sentiment_history"]) > 1:
            prefix = "[Customer has been frustrated multiple times] "
        else:
            prefix = ""

        response = agent.respond(query)
        return prefix + response

# Test memory-aware system
memory_router = MemoryAwareRouter()

print("=== Memory-Aware Conversation ===\n")

conversation_flow = [
    "What's your return policy?",
    "That's not good enough, I'm really frustrated!",
    "I want to speak to someone who can actually help me!",
    "Fine, what information do you need for the return?"
]

for query in conversation_flow:
    print(f"Customer: {query}")
    response = memory_router.process_with_memory(query)
    print(f"{response}\n")
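One caveat with `conversation_memory`: it grows without bound, even though `_build_context` only ever looks at the last four messages. A simple cap (a sketch; the class above doesn't do this) keeps prompt sizes and memory usage predictable over long sessions:

```python
def trim_memory(memory: list, max_messages: int = 20) -> list:
    """Keep only the most recent messages so prompts stay bounded."""
    if len(memory) <= max_messages:
        return memory
    return memory[-max_messages:]

# Simulate a long conversation and verify only the tail survives
memory = [{"role": "user", "content": f"msg {i}"} for i in range(50)]
trimmed = trim_memory(memory)
assert len(trimmed) == 20
assert trimmed[0]["content"] == "msg 30"
assert trimmed[-1]["content"] == "msg 49"
```

A fancier version would summarize the dropped messages into `customer_context` instead of discarding them, but truncation is the baseline every production agent needs.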

What You Actually Built

You just created a complete customer support system using basic Python and OpenAI. Here's what you learned:

The fundamentals:

  • Agents = LLM + functions + routing logic
  • Function calling lets agents take actions
  • Smart routing decides who handles what
  • State management keeps conversations coherent
  • Memory makes agents context-aware

Why this approach:

  • You'll understand what frameworks actually do for you
  • Easier to debug when things go wrong
  • You can customize behavior exactly how you want
  • Works with any LLM provider
  • Good foundation before learning frameworks

Making It Production Ready

To actually deploy this, you'd need:

The basics:

  • Error handling (APIs fail)
  • Database for conversation storage
  • Rate limiting (prevent abuse)
  • Proper logging
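On the error-handling point: API calls fail transiently (rate limits, timeouts), so in practice you'd wrap every `openai` call in this post in a retry with exponential backoff. A minimal sketch, where the flaky function is a stand-in for the real API call:

```python
import time

def with_retries(fn, max_attempts: int = 3, base_delay: float = 0.01):
    """Call fn, retrying with exponential backoff on any exception."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts, let the caller handle it
            time.sleep(base_delay * (2 ** attempt))

# Demo with a stand-in for the API call that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("simulated rate limit")
    return "ok"

assert with_retries(flaky) == "ok"
assert calls["n"] == 3
```

In real code you'd catch the specific OpenAI exception types rather than bare `Exception`, and use a base delay of a second or more.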

The nice-to-haves:

  • Real sentiment analysis model
  • Integration with your FAQ database
  • Actual escalation to humans (Slack API, email, etc.)
  • Analytics on what's working
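For "actual escalation to humans," the simplest version posts the conversation summary to a chat webhook (Slack incoming webhooks work this way). Here's a sketch with the HTTP transport injected as a parameter so the logic is testable offline; the payload shape and helper name are my own, not from any specific API:

```python
import json

def notify_humans(summary: str, send) -> dict:
    """Build an escalation payload and hand it to a transport function.

    `send` is injected (e.g. a function that POSTs the body to a Slack
    incoming webhook URL) so this logic runs without network access.
    """
    payload = {"text": f"Escalation needed:\n{summary}"}
    send(json.dumps(payload))
    return payload

# Offline test with a stub transport instead of a real webhook
sent = []
payload = notify_humans("Customer asked for a manager", sent.append)
assert "Escalation needed" in payload["text"]
assert json.loads(sent[0]) == payload
```

Injecting the transport also makes it trivial to swap Slack for email or a ticketing system later, which is exactly the kind of seam frameworks give you for free.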

When frameworks make sense:
Now you understand what LangGraph, CrewAI, and AutoGen do - they handle the routing and orchestration you just built manually. They're great when:

  • You need complex multi-step workflows
  • You want pre-built integrations and tools
  • You're working on a team that benefits from standardized patterns
  • You need features like human-in-the-loop or advanced state management

The key is knowing when the abstraction helps versus when you need more control.

The Real Lesson

AI agents are just LLMs organized around specific jobs, with the ability to call functions. The "multi-agent" part is mostly smart routing and state management.

Understanding these fundamentals makes you better at using any framework because you know what's happening underneath. Start here, then use frameworks when their features solve real problems you're facing.


Built something cool with this? I'd love to see what you made - drop it in the comments!
