Zedd: A Secure AI Knowledge Base with Fine-Grained Access Control
For the Permit.io Authorization Challenge, I built Zedd-KB: a secure, AI-powered internal knowledge base for organizations that need robust, fine-grained authorization over both AI actions and sensitive data access. Zedd-KB demonstrates how externalized authorization with Permit.io can safeguard information and AI capabilities in real-world applications.
🚀 What is Zedd-KB?
Zedd-KB is an internal knowledge base platform that leverages Retrieval-Augmented Generation (RAG) and advanced LLMs to answer user queries using only authorized, relevant content. It integrates Permit.io for dynamic, policy-driven access control, ensuring that every AI action and data access is checked against up-to-date security policies.
📦 Project Repository
- Live Demo: https://zedd-kb.streamlit.app/
- GitHub Repo: github.com/Abhishek-03113/Zed-KB
🛠️ Technologies Used
- Python, FastAPI (backend)
- Streamlit (frontend)
- MongoDB (user & role management)
- Pinecone (vector database)
- LangChain (RAG pipeline orchestration)
- Gemini LLM (Google Generative AI)
- Permit.io (externalized authorization)
🏗️ My Journey
I built Zedd-KB to solve a specific problem: how to create an AI knowledge base where access is controlled by clear permission rules.
Starting with a basic design, I first tried AstraDB for vector storage but switched to Pinecone after facing integration issues. For user data, I chose MongoDB since I was already familiar with it.
The real game-changer was implementing Permit.io. It let me create detailed access rules without hard-coding them into my application. I could define who sees what, and update these rules anytime without changing my code.
I used LangChain to build the RAG pipeline and selected Google's Gemini model for the AI component. Seeing the system properly filter information based on user permissions was satisfying - it meant the core concept worked.
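The core of that permission-aware filtering can be sketched as a small pure function that drops retrieved chunks above the caller's clearance before they ever reach the LLM. This is a minimal illustration, not the exact implementation; the clearance levels and metadata field names here are assumptions:

```python
# Hypothetical sketch: filter retrieved chunks by the caller's clearance
# before they are passed to the LLM. Levels and field names are illustrative.
from dataclasses import dataclass, field

# Ordered clearance levels; a higher index means more sensitive.
LEVELS = ["public", "internal", "confidential"]

@dataclass
class Chunk:
    text: str
    metadata: dict = field(default_factory=dict)

def filter_by_clearance(chunks: list[Chunk], user_level: str) -> list[Chunk]:
    """Keep only chunks at or below the user's clearance level."""
    max_rank = LEVELS.index(user_level)
    return [
        c for c in chunks
        if LEVELS.index(c.metadata.get("security_level", "public")) <= max_rank
    ]

# Example: an internal-level user never sees confidential chunks.
docs = [
    Chunk("Public FAQ", {"security_level": "public"}),
    Chunk("Salary bands", {"security_level": "confidential"}),
]
visible = filter_by_clearance(docs, "internal")
```

In the real pipeline this filter would sit between the Pinecone retrieval step and the Gemini prompt, so the model can only ground its answers in documents the user is cleared to read.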
With the deadline approaching, I made a practical choice to use Streamlit instead of my original Next.js frontend plans. This let me focus on the authorization features that were the heart of the project.
Test Credentials:
- Admin: admin / 2025DEVChallenge
- User: newuser / 2025DEVChallenge
🖥️ Demo
Below are screenshots of various sections of Zedd-KB, each demonstrating a key feature or workflow:
- Chat Window: This screenshot displays the main chat interface where users interact with the AI-powered knowledge base. Users can type queries, view AI-generated responses, and see a history of their conversations in a clean, user-friendly layout.
- Permit.io Policy Management: This image shows the Permit.io dashboard where administrators define and manage fine-grained access control policies. It highlights how roles, permissions, and security levels are configured to protect sensitive data and AI actions.
- Permit.io Audit Logs: This screenshot presents the audit log section in Permit.io, providing a detailed record of all access decisions and policy enforcement events. It demonstrates the system's auditability and traceability for compliance and security reviews.
- User Role Management: This image features the user role management interface, where admins can assign roles, update permissions, and manage user accounts. It ensures that only authorized users have access to specific features and data.
- Admin Chat View: This image shows the chat interface as seen by an admin user, who has elevated privileges. Admins can access more sensitive information and perform additional actions compared to regular users.
- User Chat View: This screenshot displays the chat interface for a standard user, demonstrating restricted access to certain data and features based on their role.
- Knowledge Base Management: This image highlights the knowledge base management screen, where admins can upload, organize, and classify documents. It also shows how metadata and security classifications are applied to each document.
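The metadata-and-classification step applied at upload can be sketched as a small helper. The field names and classification levels below are assumptions for illustration, not the app's exact schema:

```python
# Hypothetical sketch of tagging a document with metadata and a security
# classification at upload time. Field names and levels are assumptions.
def classify_document(filename: str, owner: str,
                      security_level: str = "internal") -> dict:
    """Build the metadata attached to a document for later
    permission-aware retrieval."""
    allowed = {"public", "internal", "confidential"}
    if security_level not in allowed:
        raise ValueError(f"unknown security level: {security_level}")
    return {
        "source": filename,
        "owner": owner,
        "security_level": security_level,
    }

meta = classify_document("handbook.pdf", owner="admin", security_level="public")
```

Storing the classification alongside each vector means retrieval can filter on it directly, instead of re-deriving sensitivity at query time.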
🔐 How Authorization Works (with Permit.io)
- Every API endpoint (document upload, chat, user management) checks permissions via Permit.io before executing sensitive actions.
- Roles and security levels are enforced at both the API and AI layer. For example, only admins can upload or view all documents; users can only access public data.
- Approval workflows can be added for exporting or sharing generated insights, requiring explicit admin consent.
- Permit.io policies are externalized and can be updated without redeploying the app, enabling dynamic, auditable access control.
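The check-before-execute pattern from the list above looks roughly like this. In the real app the decision is delegated to Permit.io's policy decision point (in the Python SDK, a call of the shape permit.check(user, action, resource)); here `pdp_check` is a local stub standing in for that call so the control flow is visible:

```python
# Sketch of the check-before-execute pattern. `pdp_check` is a stub for
# the externalized Permit.io decision; the policy table below is
# illustrative, not the real policy, which lives in Permit.io.
from dataclasses import dataclass

@dataclass
class User:
    key: str
    role: str

def pdp_check(user: User, action: str, resource: str) -> bool:
    """Stand-in for the PDP call; real decisions come from Permit.io."""
    policy = {
        "admin": {("upload", "document"), ("read", "document"), ("manage", "user")},
        "user": {("read", "document")},
    }
    return (action, resource) in policy.get(user.role, set())

def upload_document(user: User, doc: bytes) -> str:
    # Every sensitive endpoint checks permissions before doing any work.
    if not pdp_check(user, "upload", "document"):
        raise PermissionError("403: not allowed to upload documents")
    return "uploaded"
```

Because the endpoint only asks "is this allowed?" and never encodes the rules itself, updating who may upload is a policy change in Permit.io, not a code change.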
Benefits:
- Centralized policy management
- Dynamic, context-aware permissions
- Auditability (all access decisions are logged)
- Separation of security logic from application logic
Current Roles:
- admin: Full access
- user: Limited access, with clearances managed by admin
💡 Why Externalized Authorization?
- Update access rules in Permit.io without code changes
- Grant/revoke access instantly, even for new AI features
- All access decisions are logged and traceable
- Security logic is decoupled from application logic, reducing risk and complexity
In my implementation, Permit.io enabled several capabilities that would have been impractical with traditional approaches:
- Dynamic tenant-based document access that adapts based on user clearance levels
- Instant updates to authorization rules without code changes
- Detailed audit trails showing exactly which users accessed which documents and AI capabilities
- Clean separation between application features and security logic
This externalized approach dramatically reduced the complexity of implementing secure AI features while providing much stronger governance over sensitive data access.
Built for the Permit.io AI Access Control Challenge. Thanks for reading!