A privacy-focused AI assistant platform. Supports multiple LLM providers, conversation memory, and a clean streaming chat UI, all fully self-hosted and customisable.
Problem
Existing AI chat interfaces are locked to a single provider, collect user data, or offer no customisation for power users who want control over their data and model selection.
Approach
Built a provider-agnostic abstraction layer over LLM APIs with a real-time streaming interface, persistent conversation context, and one-command Docker deployment.
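The core idea of the abstraction layer can be sketched in a few lines. The names below (`LLMProvider`, `EchoProvider`, `Conversation`) are illustrative, not the project's actual classes: a minimal sketch, assuming each backend exposes a token-streaming method behind a shared interface and conversation context is replayed on every call.

```python
from abc import ABC, abstractmethod
from typing import Iterator


class LLMProvider(ABC):
    """Common interface every backend (OpenAI, Anthropic, local) would implement."""

    @abstractmethod
    def stream(self, messages: list[dict]) -> Iterator[str]:
        """Yield response tokens as they arrive."""


class EchoProvider(LLMProvider):
    """Stand-in backend for this sketch: streams the last user message back."""

    def stream(self, messages: list[dict]) -> Iterator[str]:
        for word in messages[-1]["content"].split():
            yield word + " "


class Conversation:
    """Persistent context: each turn is appended before the provider is called."""

    def __init__(self, provider: LLMProvider):
        self.provider = provider
        self.history: list[dict] = []

    def send(self, text: str) -> str:
        self.history.append({"role": "user", "content": text})
        # The UI would render these tokens as they stream in.
        reply = "".join(self.provider.stream(self.history))
        self.history.append({"role": "assistant", "content": reply})
        return reply


chat = Conversation(EchoProvider())
print(chat.send("hello there"))
```

Swapping providers then means constructing `Conversation` with a different `LLMProvider` implementation; the chat loop and UI never touch provider-specific APIs.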
Result
Self-hosted and actively used daily. Supports OpenAI, Anthropic, and local models through a unified API, and deploys in under 5 minutes via Docker Compose.
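The project's actual Compose file is not shown here; the following is a minimal sketch of what a one-command deployment of this shape typically looks like. Service, port, volume, and environment-variable names are all hypothetical.

```yaml
# Hypothetical layout; names are illustrative, not the project's real config.
services:
  app:
    build: .                                   # build the chat UI + API from the repo
    ports:
      - "8080:8080"                            # single exposed port for the web UI
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}       # optional, set per enabled provider
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    volumes:
      - data:/app/data                         # persists conversation history
volumes:
  data:
```

With a file of this shape, `docker compose up -d` is the whole deployment step.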