Build AI Agents Fast with DDE Agents

Publish Date: Apr 16

AI agents are everywhere right now. Maybe you’ve tried building one and run into one of these:

  • The learning curve is too steep
  • The SDK you found is way too complex
  • You’re stuck between big frameworks with poor docs

Same here.

So I built something simple:

DDE Agents, a Python SDK for building, running, and chaining agents without the overhead.

You can use it with OpenAI, local Ollama models, and even Hugging Face GGUF models.

It’s simple enough to get your first agent running before your coffee is done brewing.

Why DDE Agents?

  • Simple setup
  • Use local or OpenAI models
  • Add tools, guardrails, chains and handoffs easily
  • Build dynamic agents and workflows
  • Focus on behavior, not boilerplate

It’s great for:

  • Prototyping ideas
  • Experimenting with multi-agent flows
  • Exploring what agents can do

Installation

pip install dde-agents
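To sanity-check the install, try importing the package (the agent.Agent import path is the one used in the quickstart below):

# Quick smoke test: if this runs without an ImportError, the install worked.
from agent.Agent import Agent
print("dde-agents imported OK")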

Make sure you have Ollama installed if you want to use local models.

Quickstart

from agent.Agent import Agent
from agent.Config import ModelConfig

# OpenAI (second argument: True = OpenAI model)
ModelConfig.setDefaultModel("gpt-4o", True)

# Local via Ollama (second argument: False = local model)
# ModelConfig.setDefaultModel("llama3.1", False)

englishAgent = Agent(
    name="englishAgent",
    instructions="You can only answer in English",
    inputGuardrails="The input must be in English",
)

if __name__ == "__main__":
    print(englishAgent.run(prompt=input("Prompt: ")))

You can also use local Hugging Face GGUF models like this:

ModelConfig.setDefaultModel("hf.co/TheBloke/Mistral-7B-GGUF", False)

Features

| Feature | What it does |
| --- | --- |
| Agents | Create and run smart agents |
| Model selection | Choose between local or OpenAI models |
| Guardrails | Validate input and output |
| Chains | Link agents in sequence |
| Tools | Use functions or agents as tools |
| Handoffs | Pass control between agents |
| Dynamic agents | Generate new agents during runtime |
| Image support | Experimental support for vision tasks |
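The table lists plain functions as valid tools and chains as a first-class feature. I'm not showing the dedicated chain API here; instead, here's a minimal sketch using only the constructor arguments from the examples in this post. The word_count function-as-tool follows the table's claim that functions are supported (the exact calling convention is an assumption), and the "chain" is hand-rolled by piping one agent's output into the next.

from agent.Agent import Agent
from agent.Config import ModelConfig

ModelConfig.setDefaultModel("llama3", False)

# A plain Python function used as a tool. The feature table says functions
# work as tools; the exact calling convention is an assumption here.
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

summarizer = Agent(
    name="summarizer",
    instructions="Summarize the input in one sentence.",
    tools=[word_count],
)

translator = Agent(
    name="translator",
    instructions="Translate the input into French.",
)

if __name__ == "__main__":
    # A manual chain: feed one agent's output to the next.
    # The SDK has a dedicated Chains feature; this just shows the data flow.
    summary = summarizer.run(prompt=input("Text to summarize: "))
    print(translator.run(prompt=summary))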

Example: Mood-Based Motivation Coach

Let’s build a small agent that gives you motivation based on your mood.

We’ll use:

  • One agent to detect mood
  • One to generate a quote
  • One to manage the flow

NOTE: When using local models, it's advised to run this command first so the model is installed before your agents run:

ollama run {modelName}
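If you just want to download the model without starting an interactive session, Ollama's pull command does the same job:

ollama pull {modelName}

Here's the full example: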
from agent.Agent import Agent
from agent.Config import ModelConfig

ModelConfig.setDefaultModel("llama3", False)

# Detects the user's mood from free-form input
moodDetector = Agent(
    name="moodDetector",
    instructions="Figure out the user's mood from the input. Just return a single word like: happy, sad, stressed, tired.",
)

# Turns a mood into a matching motivational quote
quoteGenerator = Agent(
    name="quoteGenerator",
    instructions="Based on the mood, return a matching motivational quote. Just the quote, no explanation.",
)

# Orchestrates the flow: uses moodDetector as a tool, then hands off to quoteGenerator
coachAgent = Agent(
    name="coachAgent",
    instructions="You are a coach. Use the tool to detect mood, then hand off to the quote generator.",
    tools=[moodDetector],
    handoffs=[quoteGenerator],
)

if __name__ == "__main__":
    user_input = input("How are you feeling today? ")
    response = coachAgent.run(prompt=user_input)
    print("\nMotivation for you:\n", response)

What’s happening:

  • You input a feeling
  • One agent analyzes it
  • Another responds with a quote
  • It all runs using a local model via Ollama
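To run the same coach on OpenAI instead, swap the model line (as in the quickstart) and keep everything else:

ModelConfig.setDefaultModel("gpt-4o", True)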

API Key (OpenAI)

If you want to use OpenAI models, just set your API key like this:

export OPENAI_API_KEY='your-key-here'
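If you'd rather set the key from Python, you can export it into the process environment before any agents run. This sketch assumes the SDK reads the standard OPENAI_API_KEY environment variable, which is the usual convention for OpenAI clients:

import os

# Must run before any agent calls OpenAI; assumes the SDK reads
# the standard OPENAI_API_KEY environment variable.
os.environ["OPENAI_API_KEY"] = "your-key-here"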

Requirements

To use local models, install Ollama.

Docs and Examples

More docs and examples live in the GitHub repo linked below.

Contribute or Chat

I’m looking for feedback, ideas, and contributors.

If you:

  • Like messing with agents
  • Want to use this in a project
  • Or just want to explore some ideas

Check out the repo:

github.com/DDE-64-bit/DDE-Agents

Future plans

I'm currently working on:

  • Supporting all Hugging Face models (not just GGUF models)
  • Testing terminalUse(), which lets agents run terminal commands (it's included in the latest version, but not fully tested)
  • Task: the idea is that you can create an instance of Task() and have different agents run until it's solved

I'm open to new ideas and feedback. Thanks for reading!
