Using Strands Agents with Anthropic API 🧬
Laura Salinas

Publish Date: Jul 8

It's been a few weeks since I talked about using Strands Agents for the first time, and I wanted to add a quick update on two pieces I've been experimenting with this week.

  1. Using Anthropic as a model provider instead of Amazon Bedrock
  2. Using other helpful built-in tools

If you haven't already set up Strands, check out my last post to get started.

Using Anthropic as a model provider

As a refresher from my last post, Strands Agents can be used with a variety of model providers, including Amazon Bedrock (which I covered last time) as well as Anthropic, OpenAI, Ollama, LiteLLM, and more.

Today I'm testing the integration with the Anthropic API directly and continuing to use some of the built-in tools Strands offers.

To get started, I head to the Strands model provider documentation and install the optional Anthropic dependency with pip install 'strands-agents[anthropic]'.
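I keep my Anthropic API key in a local .env file and load it with python-dotenv, so the samples below read it with os.getenv. Here's a minimal sketch of that piece, assuming the variable is named api_key like in my code (any name works as long as os.getenv matches it):

import os

from dotenv import load_dotenv

# Reads key=value pairs from a .env file in the working directory,
# e.g. a line like: api_key=<your Anthropic API key>
load_dotenv()

anthropic_key = os.getenv("api_key")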

Now I can import the dependency in my Python application with from strands.models.anthropic import AnthropicModel.

I test that the integration works by executing a simple sample:

import os

from dotenv import load_dotenv
from strands import Agent
from strands.models.anthropic import AnthropicModel

# Load the Anthropic API key from my local .env file
load_dotenv()

model = AnthropicModel(
    # client_args are passed to the underlying Anthropic client
    client_args={
        "api_key": os.getenv("api_key"),
    },
    # model configuration
    max_tokens=1028,
    model_id="claude-sonnet-4-20250514",
    params={
        "temperature": 0.3,
    },
)

agent = Agent(model=model)
response = agent("What was the most visited city in the world in 2024?")

The response in my terminal:

Using other built-in tools

In my last post I touched on the http_request tool, which gives my agent the ability to make API calls, fetch web data, and call local HTTP servers. This time I wanted to check out the use_aws tool, which lets my agent interact with AWS services across a range of simple to complex use cases.

I have a lot of S3 buckets in my AWS account, and it would be nice to get a list of them all before I run an audit on what I can delete. Using natural language, I should be able to tell Strands to list them for me.

import os

from dotenv import load_dotenv
from strands import Agent
from strands.models.anthropic import AnthropicModel
from strands_tools import use_aws

# Load the Anthropic API key from my local .env file
load_dotenv()

model = AnthropicModel(
    client_args={
        "api_key": os.getenv("api_key"),
    },
    max_tokens=1028,
    model_id="claude-sonnet-4-20250514",
    params={
        "temperature": 0.3,
    },
)

# Attach the use_aws built-in tool so the agent can call AWS services
agent = Agent(model=model, tools=[use_aws])
query = "List all my S3 buckets in us-west-2"
response = agent(query)

The response in my terminal:

When I execute the file, I see that Strands recognizes it's been asked a question that requires a tool and appropriately routes through use_aws to generate the response. While I explicitly called out the region as us-west-2, the response explains that the list_buckets operation used by the tool returns every S3 bucket in my account by default (bucket listing isn't scoped to a region), which is why buckets from other regions also show up in the response.
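If I really wanted only the buckets that live in us-west-2, one option is to check each bucket's region myself. Here's a rough sketch using boto3 directly rather than the agent (assuming the same local AWS credentials), relying on get_bucket_location to report each bucket's region:

import boto3

s3 = boto3.client("s3")

# list_buckets returns every bucket in the account, regardless of region
for bucket in s3.list_buckets()["Buckets"]:
    location = s3.get_bucket_location(Bucket=bucket["Name"])["LocationConstraint"]
    # Buckets in us-east-1 report None for LocationConstraint
    if (location or "us-east-1") == "us-west-2":
        print(bucket["Name"])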

⚠️ Note: The use_aws tool authenticates with the AWS profile stored locally in ~/.aws/credentials. You'll need that set up before attempting to use this tool, or any others that require access to an AWS account.
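A quick way to sanity-check that those credentials are in place before handing things to the agent is a one-off STS call with boto3 (just a sketch, assuming a default profile):

import boto3

# Fails with a credentials error if ~/.aws/credentials isn't set up
identity = boto3.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])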

That's it for this quick update! Don't forget to give me a 🦄 if you got this far, and let me know what else you'd like to see in the comments!

Additional Resources 📚
