The universal trust layer for every AI tool you use. Verify outputs, track costs across all providers, and compare models on your actual tasks — running entirely on your machine.
TrustLayer analyzes each AI response for hallucinations, overconfidence, and unverifiable claims — then gives you a simple 0–100 score with specific flags.
"There are definitely 100% more jobs created by AI than lost. Every single economist agrees. The truth is AI will certainly create infinite wealth for everyone by 2025."
"Research suggests that regular exercise may reduce heart disease risk by approximately 30–35%, though individual results vary. According to WHO guidelines, 150 minutes per week of moderate activity is the baseline."
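The contrast between the two sample outputs above can be sketched with a toy heuristic — note this is purely illustrative and is not TrustLayer's actual scoring logic: it penalizes absolute, overconfident terms and rewards hedged, sourced language.

```python
# Toy heuristic, NOT TrustLayer's real scorer: penalize absolutes,
# reward hedged language. Word lists and weights are made up.
ABSOLUTES = ["definitely", "100%", "every single", "certainly", "infinite"]
HEDGES = ["suggests", "may", "approximately", "though", "according to"]

def toy_trust_score(text: str) -> int:
    t = text.lower()
    penalty = sum(15 for term in ABSOLUTES if term in t)
    bonus = sum(10 for term in HEDGES if term in t)
    # Clamp to the same 0-100 range TrustLayer reports.
    return max(0, min(100, 50 - penalty + bonus))

overconfident = ("There are definitely 100% more jobs created by AI than lost. "
                 "Every single economist agrees.")
hedged = ("Research suggests that regular exercise may reduce heart disease "
          "risk by approximately 30-35%, though individual results vary.")

print(toy_trust_score(overconfident))  # low score
print(toy_trust_score(hedged))         # high score
```

Even this crude word-counting version separates the two examples cleanly; the real analyzer adds hallucination detection and claim-level source attribution on top.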
The TrustLayer dashboard — trust scores at a glance
Eight core features that make TrustLayer the last AI tool you need to add to your workflow.
Plug in any AI: Ollama (auto-detected), Claude, GPT-4, Gemini, Aider. One unified interface for all of them. Switch between providers seamlessly.
Ollama · Anthropic · OpenAI · Google
Every AI output gets a 0–100 trust score. Hallucination detection, overconfidence flags, and claim-by-claim source attribution.
Trust Score · Hallucination detection
Remembers how you work across sessions. Learns your writing style, common tasks, and preferences. Your 500th session is better than your 1st.
Stored locally · No cloud sync
Real-time spending dashboard across all AI providers. Budget alerts and per-task cost optimization tips to cut your bill.
Budget alerts · Monthly reports
Test your actual tasks against multiple models side-by-side. Personal benchmarks — not generic leaderboard scores that don't match your workflow.
Side-by-side · Speed · Accuracy · Cost
Drag in your docs, notes, PDFs, and code repos. Everything indexed locally. AI uses your context when answering. Works fully offline with Ollama.
Local indexing · Works offline
Complete log of every AI interaction. Trust scores, costs, latency, and full prompt/response pairs — searchable and filterable by provider.
Searchable · Filterable · Full audit trail
Honest and humble for factual queries. Creative and opinionated for brainstorming. Invisible for routine tasks. Adapts automatically to context.
Context-aware · Auto-adapts
Complete interaction history — every AI call logged and searchable
TrustLayer sits between you and every AI you use. You don't change your workflow — you just gain visibility and trust.
Ollama is auto-detected. Add API keys for Claude, GPT-4, or Gemini via the Settings page or environment variables. No account required for TrustLayer itself.
As you use AI, TrustLayer silently scores each response. Trust scores, hallucination flags, and source attribution appear inline — without slowing you down.
TrustLayer remembers your sessions, preferences, and writing style. Over time, it surfaces better suggestions, cheaper alternatives, and personalized tips.
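The three steps above amount to a thin interposition layer: the model call goes through as usual, and a score is attached on the way back. A minimal sketch of that pattern — the wrapper, the stand-in model, and the stand-in scorer below are all illustrative, not TrustLayer's actual API:

```python
from typing import Callable

def with_trust_layer(ask: Callable[[str], str],
                     score: Callable[[str], int]) -> Callable[[str], dict]:
    """Wrap any ask-the-model function so every response is scored inline."""
    def wrapped(prompt: str) -> dict:
        answer = ask(prompt)
        return {"answer": answer, "trust_score": score(answer)}
    return wrapped

# Stand-ins for a real provider call and a real scorer (illustrative only).
def fake_model(prompt: str) -> str:
    return "The Earth is about 4.5 billion years old."

def fake_scorer(text: str) -> int:
    return 94 if "about" in text else 50  # hedged phrasing scores higher

ask = with_trust_layer(fake_model, fake_scorer)
result = ask("How old is the Earth?")
print(result["trust_score"])  # → 94
```

Because the wrapper is transparent to the caller, your prompts and workflow stay exactly the same — only the returned metadata is new.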
Works with everything you already have. Zero config. One command to start.
```shell
# Install TrustLayer
pip install git+https://github.com/acunningham-ship-it/trustlayer.git

# Start the server + web UI
trustlayer server
# → Auto-detects Ollama
# → Opens http://localhost:8000

# Add your API keys (optional)
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
```
```shell
# Verify any AI output
trustlayer verify "The earth is 4.5 billion years old."
# → Trust Score: 94/100 (HIGH) — No concerns

# Ask any connected AI
trustlayer ask "Summarize this codebase" --provider ollama --model llama3.2

# Compare responses from multiple models
trustlayer compare "Write unit tests for this function"

# Check your spending across all providers
trustlayer costs

# Detect available AI tools on your machine
trustlayer detect
```
```shell
# Verify content via REST API
curl -X POST http://localhost:8000/api/verify \
  -H "Content-Type: application/json" \
  -d '{"content": "Your AI output here"}'

# Actual response:
# {
#   "trust_score": 85,
#   "trust_label": "high",
#   "summary": "85% trusted. 1 concern(s) flagged. 0 source(s) cited.",
#   "issues": ["High confidence language with no hedging"],
#   "hallucination_score": 15
# }

# Compare models on the same prompt
curl -X POST http://localhost:8000/api/compare \
  -d '{"prompt": "Explain quantum entanglement"}'
```
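Given that response shape, downstream code can gate a pipeline on the score. A short sketch — the 80/50 thresholds here are assumptions for illustration, not documented cutoffs:

```python
def gate(result: dict, accept_at: int = 80, review_at: int = 50) -> str:
    """Map a /api/verify result to a pipeline decision.
    Thresholds are illustrative; tune them for your own workflow."""
    score = result["trust_score"]
    if score >= accept_at:
        return "accept"
    if score >= review_at:
        return "review: " + "; ".join(result.get("issues", []))
    return "reject"

# The documented example response from /api/verify:
example = {
    "trust_score": 85,
    "trust_label": "high",
    "issues": ["High confidence language with no hedging"],
    "hallucination_score": 15,
}
print(gate(example))  # → accept
```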
```python
import requests

# Verify any AI output
response = requests.post("http://localhost:8000/api/verify", json={
    "content": "AI output to verify"
})
result = response.json()
print(f"Trust: {result['trust_score']}/100 — {result['trust_label']}")

# Compare providers side by side
comparison = requests.post("http://localhost:8000/api/compare", json={
    "prompt": "Your task here",
    "providers": ["ollama", "anthropic", "openai"]
}).json()
```
No other tool gives you verification, cost tracking, personal learning, and offline support — all in one place that runs locally.
| Capability | TrustLayer | Open WebUI | Individual AI apps |
|---|---|---|---|
| Output verification & trust scores | ✓ | ✗ | ✗ |
| Cross-provider cost tracking | ✓ | ✗ | ✗ |
| Personal learning across sessions | ✓ | ✗ | ~ |
| Side-by-side model comparison | ✓ | ~ | ✗ |
| Offline local knowledge base | ✓ | ~ | ✗ |
| Complete interaction history & audit log | ✓ | ✗ | ✗ |
| 100% local — no cloud required | ✓ | ✓ | ✗ |
| Open source, MIT license | ✓ | ✓ | ✗ |
These aren't hypothetical — they're the top complaints across Reddit, HN, and X right now.
The trust problem
Using 3+ AI tools daily but no way to know which output is actually correct when they disagree.
The cost problem
$200/month across Claude, OpenAI, and Gemini — with zero visibility into what you actually spent it on.
The hallucination problem
AI presents hallucinated facts with the same confidence as verified ones. No way to tell the difference until it's too late.
The privacy problem
Sensitive data sent to cloud AI servers with no control over storage, training, or access. Local alternatives exist but lack quality.
TrustLayer is free, open source, and runs entirely on your machine. Your data never leaves your computer. No account required.