🚀 v0.2.0 · Open Source · Local-First · Zero Config

The AI command center that runs on your machine.

Multi-model chat, pipelines, templates, CLI tools hub — all powered by local Ollama models. No cloud dependency. No subscriptions. No data leaving your machine.

Install Free → View on GitHub
terminal — cortex serve
# 30-second setup
$ pip install cortex-ai
$ cortex serve

⚡ Cortex AI starting...
✓ Ollama detected: qwen2.5:7b, qwen2.5:0.5b, gemma3n:e4b
✓ Database initialized at ~/.cortex/cortex.db
✓ 6 prompt templates loaded
✓ 2 pipelines ready
✓ Ready at http://localhost:7337 — opening browser...

# Or use the CLI directly
$ cortex ask "Explain async/await in Python"
Async/await is Python's way to write concurrent code...

# Pipe any file to a model
$ cat main.py | cortex ask "Review this code"
$ cortex run "Summarize & Translate" --input "$(cat report.txt)"
Features

Everything in one tool

Replace a dozen separate tools. Chat, pipelines, templates, CLI, and REST API — all in one pip install.

💬

Multi-Model Chat

Stream responses from any Ollama model or cloud API. Switch models mid-conversation. Full markdown + code highlighting.

⛓️

Pipeline Builder

Chain AI calls: summarize → translate → critique. Build in the visual UI or define in code. Run with one CLI command.
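As a rough illustration of what a pipeline does, each step's output becomes the next step's input. The step prompts and runner below are a sketch, not Cortex's actual pipeline format:

```python
# Hypothetical sketch of a summarize → translate → critique chain.
# Cortex's real definition format (visual UI or code) may differ.
STEPS = [
    "Summarize the following text:\n{input}",
    "Translate the following into French:\n{input}",
    "Critique the following translation:\n{input}",
]

def run_pipeline(text, call_model):
    """call_model is any function mapping a prompt string to a model reply."""
    for template in STEPS:
        text = call_model(template.format(input=text))  # feed output forward
    return text
```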

📋

Prompt Templates

6 built-in templates with {{variable}} substitution. Code Review, Debug, Translate, Summarize, and more. Create your own.
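The {{variable}} substitution above can be sketched in a few lines; the template text here is illustrative, not one of the six built-ins:

```python
import re

def render(template, variables):
    # Replace every {{name}} placeholder with the matching value.
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(variables[m.group(1)]), template)

# Illustrative template, not one of Cortex's built-ins.
prompt = render("Review this {{language}} code for bugs:\n{{code}}",
                {"language": "Python", "code": "print('hi')"})
```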

⌨️

Powerful CLI

Quick queries, stdin piping, pipeline runner. Works in shell scripts: echo "code" | cortex ask "review this"

📊

Usage Dashboard

Track token usage, latency, and request history per model. See estimated savings vs. cloud API costs.

🔌

Extensible

Add custom providers in one Python class. Add routers with FastAPI. The codebase is clean and built to be extended.
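A custom provider might look roughly like this; the base class, attribute, and method names below are assumptions — check the Cortex source for the real interface:

```python
# Hypothetical provider shape — the actual Cortex base class and
# method signatures may differ; treat this as illustrative only.
class EchoProvider:
    name = "echo"

    def list_models(self):
        return ["echo-1"]

    def complete(self, model: str, prompt: str) -> str:
        # A real provider would call an HTTP API (Ollama, OpenAI, ...) here.
        return f"[{model}] {prompt}"
```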

🌐

REST API

21 documented endpoints. Use Cortex as a backend for your own apps. Auto-generated OpenAPI docs at /docs.
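Using Cortex as a backend from Python could look like the sketch below. The endpoint path and payload shape are guesses — verify both against the auto-generated OpenAPI docs at /docs:

```python
import json
import urllib.request

BASE = "http://localhost:7337"

def build_chat_payload(prompt, model="qwen2.5:7b"):
    # Assumed request shape; check the OpenAPI schema at /docs.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask(prompt, model="qwen2.5:7b"):
    req = urllib.request.Request(
        BASE + "/api/chat",  # endpoint path is an assumption
        data=json.dumps(build_chat_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```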

🛡️

Privacy First

All data stored locally in SQLite. No telemetry, no accounts required. Your conversations stay on your machine.

🚀

Zero-Config Demo

Auto-detects Ollama. Browser opens automatically. Pre-loaded with templates and pipelines. Ready in 30 seconds.

🔧

AI Tools Hub

Unified interface for ALL your CLI AI tools — Claude Code, Aider, and custom tools. Side-by-side comparison mode. Shared context across tools.

How it works

Simple by design

A FastAPI server proxies requests to Ollama or cloud APIs, stores history in SQLite, and serves a React UI — all in one pip install.

01

Install & Run

One pip install. Cortex auto-detects Ollama and any configured API keys.

02

Pick a Model

Choose from local Ollama models or cloud APIs — all in the same dropdown.

03

Build Workflows

Create pipelines and templates. Run from the UI, CLI, or REST API.

04

Track & Optimize

Dashboard shows usage, latency, and savings vs. cloud alternatives.

Installation

Up in 60 seconds

1

Install Cortex AI

pip install cortex-ai
2

Pull a model with Ollama (if you don't have one)

# Install Ollama from ollama.ai, then:
ollama pull qwen2.5:7b
3

Start Cortex AI

cortex serve
# Browser opens automatically at http://localhost:7337
4

Optional: Add cloud API keys

# cortex.yaml (in your project, or ~/.cortex/config.yaml)
ollama_url: "http://localhost:11434"
claude_api_key: "sk-ant-..." # optional
openai_api_key: "sk-..." # optional
5

Use the CLI for quick queries

cortex ask "What is the capital of France?"
cortex ask --model qwen2.5:7b "Explain async/await"
cat myfile.py | cortex ask "Review this code"
cortex models # list available models
cortex status # check server health
Providers

One interface, any model

Local models via Ollama or cloud APIs — configured once, available everywhere.

🦙
Ollama
Run any open model locally. Auto-detected. Works offline.

Free · No key
🤖
Anthropic Claude
Claude Opus 4.6, Sonnet 4.6, Haiku 4.5. Add your API key to cortex.yaml.

API key needed
🧠
OpenAI
GPT-5, GPT-4.1, o3, o4-mini. Full OpenAI-compatible API support.

API key needed
🔌
OpenAI-Compatible
LM Studio, Together AI, Groq, Mistral AI, and more.

Custom base URL
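Pointing Cortex at an OpenAI-compatible server would be a config entry along these lines. The base-URL key name is an assumption (the documented cortex.yaml shows only ollama_url and API keys), so check the docs for the exact field:

```yaml
# cortex.yaml — hypothetical keys for an OpenAI-compatible provider
openai_base_url: "http://localhost:1234/v1"  # e.g. LM Studio's local server
openai_api_key: "lm-studio"                  # many local servers accept any value
```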

Ready to own your AI stack?

Open source, self-hosted, privacy-first. No subscriptions. No data leaving your machine.