🏦 Automating Loan Underwriting with Agentic AI: LangGraph, MCP & Amazon SageMaker in Action


Publish Date: May 9

To demonstrate the power of the Model Context Protocol (MCP) in real-world enterprise AI, I recently built and ran a loan underwriting pipeline that combines:

  • MCP for tool-style interaction between LLMs and services
  • LangGraph to orchestrate multi-step workflows
  • Amazon SageMaker to securely host the LLM
  • FastAPI to serve agents with modular endpoints

What Is LangGraph?

LangGraph is a framework for orchestrating multi-step, stateful workflows across LLM-powered agents.

🔄 Graph-based execution engine: It lets you define agent workflows as nodes in a graph, enabling branching, retries, and memory — perfect for multi-agent AI systems.

🔗 Seamless tool and state handling: It maintains structured state across steps, making it easy to pass outputs between agents like Loan Officer → Credit Analyst → Risk Manager.

The agents don’t run in isolation; they’re stitched together with LangGraph, which lets you:

  • Define multi-agent workflows
  • Handle flow control, retries, and state transitions
  • Pass structured data from one agent to the next
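As a plain-Python stand-in for that pattern (the real pipeline uses LangGraph's `StateGraph`; the node logic and field names below are illustrative assumptions), each node is a function that reads and extends a shared state dict, and the "graph" is the order in which nodes run:

```python
# Plain-Python stand-in for the LangGraph pattern: structured state flows
# from node to node. LangGraph's StateGraph adds edges, conditional
# branching, and retries on top of this basic idea.
def loan_officer(state: dict) -> dict:
    state["summary"] = f"Applicant {state['name']} earns ${state['income']:,}"
    return state

def credit_analyst(state: dict) -> dict:
    # Toy threshold, purely for illustration
    state["risk"] = "low" if state["credit_score"] >= 700 else "elevated"
    return state

def risk_manager(state: dict) -> dict:
    state["decision"] = "approve" if state["risk"] == "low" else "refer"
    return state

def run_graph(state: dict) -> dict:
    for node in (loan_officer, credit_analyst, risk_manager):
        state = node(state)  # each agent sees the accumulated state
    return state

result = run_graph({"name": "Jane", "income": 85000, "credit_score": 720})
```

LangGraph replaces the hard-coded `for` loop with declared nodes and edges, which is what makes branching and retries possible without rewriting the pipeline.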

Here’s how it works, and why it’s a powerful architectural pattern for decision automation.

🧾 The Use Case: AI-Driven Loan Underwriting

Loan underwriting typically involves:

  1. Reviewing applicant details
  2. Evaluating creditworthiness
  3. Making a final approval or denial decision

In this architecture, each role is performed by a dedicated AI agent:

  • Loan Officer – Summarizes application details
  • Credit Analyst – Assesses financial risk
  • Risk Manager – Makes the final decision
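Each role maps naturally to a single MCP tool. Here is a sketch of the Loan Officer's tool as a standalone function so the input/output contract is visible; in the deployed pipeline it would be registered on the Loan Officer MCP server (for example with the MCP Python SDK's FastMCP and its `@mcp.tool()` decorator). The field names and summary wording are assumptions:

```python
# Sketch of the Loan Officer agent's tool. Its output becomes the input
# the Credit Analyst agent receives next. Field names are illustrative.
def summarize_application(name: str, income: float, credit_score: int) -> str:
    """Summarize applicant details for the Credit Analyst agent."""
    return (
        f"Applicant {name} reports an annual income of ${income:,.0f} "
        f"and a credit score of {credit_score}."
    )
```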

🧱 Architecture Overview

This workflow is powered by a centralized LLM, hosted on Amazon SageMaker, with each agent deployed as an MCP server on EC2 and orchestrated via LangGraph:

Workflow Steps:

  1. User submits loan details (e.g., name, income, credit score)
  2. MCP client routes the request to the Loan Officer MCP server
  3. Output is forwarded to the Credit Analyst MCP server
  4. Result is passed to the Risk Manager MCP server
  5. A final prompt is generated, processed by the LLM on SageMaker, and sent back to the user
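The final prompt in step 5 is assembled from the outputs the earlier agents have accumulated. A minimal sketch, assuming the state carries `summary`, `risk_report`, and `recommendation` fields (the field names and prompt wording are assumptions):

```python
# Sketch of step 5: building the final prompt from the agents' outputs
# before it is sent to the LLM on SageMaker. Field names are assumptions.
def build_final_prompt(state: dict) -> str:
    return (
        "You are a loan underwriting assistant.\n"
        f"Application summary: {state['summary']}\n"
        f"Credit analysis: {state['risk_report']}\n"
        f"Risk manager recommendation: {state['recommendation']}\n"
        "Return a final APPROVE or DENY decision with a one-sentence reason."
    )
```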

Image Credit: AWS

I used the following model for the execution:

  • Model: Qwen/Qwen2.5-1.5B-Instruct
  • Source: Hugging Face
  • Hosted on: Amazon SageMaker (Hugging Face LLM Inference Container)
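Calling that endpoint from the MCP client side looks roughly like this. The request/response shape follows the Hugging Face LLM Inference Container (TGI) schema (`inputs` plus a `parameters` object, returning `generated_text`); the endpoint name and generation parameters are assumptions:

```python
import json

def build_tgi_payload(prompt: str, max_new_tokens: int = 256) -> bytes:
    # Request body in the format the Hugging Face TGI container expects.
    return json.dumps({
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens, "temperature": 0.2},
    }).encode("utf-8")

def invoke_qwen(prompt: str, endpoint_name: str = "qwen2-5-1-5b-instruct") -> str:
    # Requires AWS credentials and a live SageMaker endpoint; the endpoint
    # name here is a hypothetical placeholder.
    import boto3
    client = boto3.client("sagemaker-runtime")
    response = client.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=build_tgi_payload(prompt),
    )
    return json.loads(response["Body"].read())[0]["generated_text"]
```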

Execution flow (image credit: AWS)

🔗 Want to Try It?
