Into the Future: What is Agentic AI (and How You Can Try It Today)?
Nitesh More


Publish Date: Aug 19

1. What is Agentic AI?

Agentic AI goes beyond chatbots. Instead of waiting for you to type prompts, agents act proactively and independently. They:

  • Plan tasks
  • Call APIs and use tools
  • Adapt to new information
  • Automate workflows (like triaging tickets, deploying code, or managing data pipelines)

Think of it as moving from “ask me something” to “I’ll get it done.”



2. Why It Matters

Traditional AI = reactive.
Agentic AI = autonomous + adaptive.

This opens the door for:

  • Developers automating repetitive DevOps tasks.
  • Businesses streamlining support, ticketing, and triage.
  • Knowledge workers offloading research, writing, and analysis.

3. Building Your First Agent (Hands-On)

Build a Free, Open-Source Agent: GitHub Issue Triage with Ollama

What you’ll build

A small Python agent that:

  • Reads open GitHub issues
  • Thinks locally via an open-source LLM (Ollama)
  • Decides labels (bug/feature/docs/etc.)
  • Applies labels via the GitHub API

No cloud LLM keys required.


Prereqs (all free)

  1. Python 3.10+
  2. Ollama (local LLM runtime)
  • Install: https://ollama.com
  • Start Ollama server (if it isn’t already):

     ollama serve
    
  • Pull a small instruct model (pick one):

     ollama pull llama3.1:8b-instruct-q2_K
     # or smaller options if RAM is tight:
     # ollama pull qwen2.5:1.5b-instruct
     # ollama pull phi3:mini
    
  • Verify the server & model

     curl http://localhost:11434/api/tags
     # expect: {"models":[{"name":"llama3.1:8b-instruct-q2_K", ...}]}
    

  3. GitHub Personal Access Token (classic or fine-grained) with repo scope
  • Save it to an env var: export GITHUB_TOKEN=ghp_...

Project layout

agent/
  triage_agent.py
  tools.py
  prompts.py
  requirements.txt

requirements.txt

requests>=2.31.0
pydantic>=2.7.0

Install:

pip install -r requirements.txt

Step 1: Define “tools” (GitHub API helpers)

Create tools.py:

import os
import requests
from typing import List, Dict

GITHUB_TOKEN = os.getenv("GITHUB_TOKEN")
if not GITHUB_TOKEN:
    raise RuntimeError("Set GITHUB_TOKEN env var, e.g. export GITHUB_TOKEN=ghp_...")
REPO = os.getenv("REPO")
if not REPO:
    raise RuntimeError("Set REPO env var, e.g. export REPO=owner/repo")

HEADERS = {
    "Authorization": f"Bearer {GITHUB_TOKEN}",
    "Accept": "application/vnd.github+json"
}

def get_open_issues(limit: int = 20) -> List[Dict]:
    url = f"https://api.github.com/repos/{REPO}/issues?state=open&per_page={limit}"
    r = requests.get(url, headers=HEADERS, timeout=30)
    r.raise_for_status()
    # Filter out PRs (GitHub returns PRs in /issues)
    issues = [i for i in r.json() if "pull_request" not in i]
    # Keep only fields we need
    simplified = [
        {
            "number": i["number"],
            "title": i["title"],
            "body": i.get("body") or ""
        }
        for i in issues
    ]
    return simplified

def add_labels(issue_number: int, labels: List[str]) -> None:
    if not labels:
        return
    url = f"https://api.github.com/repos/{REPO}/issues/{issue_number}/labels"
    r = requests.post(url, headers=HEADERS, json={"labels": labels}, timeout=30)
    r.raise_for_status()
    print(f"[agent] Labels applied to issue #{issue_number}: {labels}")

Step 2: A clear system prompt for the model

Create prompts.py:

ALLOWED_LABELS = [
    "bug", "feature", "enhancement", "docs", "question",
    "performance", "refactor", "security"
]

SYSTEM_PROMPT = f"""
You are a GitHub issue triage agent.

Return ONLY a JSON array. No prose, no markdown, no code fences.

Allowed labels (use up to two):
{ALLOWED_LABELS}

For each issue object in the input, produce an object:
{{
  "number": <issue_number>,
  "labels": ["bug"] // labels MUST be from the allowed list
}}

If you are unsure, use one generic label: "question" or "enhancement".
If you cannot determine labels, return an empty array [].

Output format examples:
[
  {{ "number": 123, "labels": ["bug"] }},
  {{ "number": 456, "labels": ["docs","enhancement"] }}
]
"""

Step 3: Minimal agent loop using Ollama’s local API

Create triage_agent.py:

import os
import json
import requests
from typing import List, Dict, Any
from tools import get_open_issues, add_labels
from prompts import SYSTEM_PROMPT, ALLOWED_LABELS

MODEL = os.getenv("MODEL", "llama3.1:8b-instruct-q2_K")  # default matches the tag pulled earlier
OLLAMA_BASE = os.getenv("OLLAMA_BASE", "http://localhost:11434")
OLLAMA_URL = f"{OLLAMA_BASE}/api/generate"
DEBUG = os.getenv("DEBUG", "1") == "1"

def dprint(*args):
    if DEBUG:
        print(*args, flush=True)

def extract_json_array(text: str) -> List[Dict[str, Any]]:
    text = (text or "").strip()
    # Try to extract first JSON array (handles stray prose)
    start = text.find("[")
    end = text.rfind("]")
    if start != -1 and end != -1 and end > start:
        try:
            return json.loads(text[start:end+1])
        except Exception:
            pass
    return []

def ask_llm(issues: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    user_prompt = "Issues:\n" + json.dumps(issues, ensure_ascii=False, indent=2)
    prompt = f"{SYSTEM_PROMPT.strip()}\n\n{user_prompt}"

    payload = {
        "model": MODEL,
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": 0}
    }
    dprint(f"[agent] POST {OLLAMA_URL} (model={MODEL})")
    r = requests.post(OLLAMA_URL, json=payload, timeout=300)
    dprint(f"[agent] LLM status: {r.status_code}")
    r.raise_for_status()

    data = r.json()
    raw = (data.get("response") or "").strip()
    dprint("[agent] Raw model reply (first 500 chars):")
    dprint(raw[:500] + ("..." if len(raw) > 500 else ""))

    arr = extract_json_array(raw)
    dprint(f"[agent] Parsed suggestions: {len(arr)}")
    return arr

def sanitize_labels(labels: List[str]) -> List[str]:
    out = []
    for l in labels:
        if l in ALLOWED_LABELS and l not in out:
            out.append(l)
        if len(out) == 2:
            break
    return out

def main():
    repo = os.getenv("REPO")
    if not repo:
        print("ERROR: Set REPO env var, e.g. export REPO=owner/repo")
        return

    print(f"[agent] Using repo: {repo}")
    issues = get_open_issues(limit=20)
    print(f"[agent] Open issues (non-PR): {len(issues)}")
    if not issues:
        print("[agent] No open issues found. Create one and rerun.")
        return

    suggestions = ask_llm(issues)
    if not suggestions:
        print("[agent] Model returned no suggestions or parsing failed.")
        return

    any_applied = False
    for s in suggestions:
        num = s.get("number")
        raw_labels = s.get("labels", [])
        labels = sanitize_labels(raw_labels)

        print(f"[agent] Suggestion → issue #{num}: raw={raw_labels} -> filtered={labels}")

        # Guardrail: never auto-apply "security"
        if "security" in labels:
            labels.remove("security")

        if num and labels:
            any_applied = True
            print(f"[agent] Applying {labels} to issue #{num}")
            try:
                add_labels(int(num), labels)
            except Exception as e:
                print(f"[agent] Failed to apply labels to #{num}: {e}")

    if not any_applied:
        print("[agent] Nothing to apply (no labels or numbers).")

if __name__ == "__main__":
    main()

Run it:

export REPO=owner/repo
export GITHUB_TOKEN=ghp_yourtoken
# (optional) pick a smaller model if needed
export MODEL=llama3.1:8b-instruct-q2_K  

python triage_agent.py

You should see it print which labels it applied. Open an issue containing words like “crash, error, stacktrace” and watch the agent apply the bug label.

(Screenshots: the agent’s console output, and the label applied on the GitHub issue.)


How this is “agentic”

  • The model reasons over multiple issues and decides actions (labels).
  • The Python loop executes tools (GitHub API) based on those decisions.
  • You can add reflection (e.g., if labels empty, re-ask with a hint), memory (store past choices in a JSON file), and approval gates (dry run mode).

4. Real-World Ecosystem: Where Claude Fits In

Now that you understand the basics, here’s where things get exciting:

🟣 Anthropic’s Claude Agents
Anthropic has been building out an agent platform around Claude: production-oriented tooling for safe, reliable, long-context AI agents. The pieces include:

  • Claude API – the core brain.
  • Workbench – an environment for prototyping and testing agents.
  • MCP (Model Context Protocol) – a way for agents to connect to external tools and APIs safely.

Essentially, Anthropic is positioning Claude as an enterprise-grade agentic AI platform.

🔵 Other Ecosystem Players

  • OpenAI: GPTs, the Assistants API, and agent frameworks (plus community projects like AutoGPT built on its models).
  • LangChain & LlamaIndex: Open-source toolkits for chaining agent logic.
  • Startups (e.g., Cognition’s Devin, Cursor): Agents for specific verticals like coding.

5. Debugging the Hype: Gotchas to Watch Out For

  • Agents aren’t perfect — they can still hallucinate or make bad decisions.
  • Cost can grow if you let an agent loop endlessly.
  • Security matters — don’t give agents unrestricted API keys.

6. The Future

The real magic will come when agents:

  • Collaborate with each other (multi-agent systems).
  • Stay persistent (remember context across weeks).
  • Work seamlessly with human approval loops.

We’re at the early days, but just like cloud and Kubernetes once felt futuristic, agentic AI will soon feel normal.

