
Trialogue: Making LLMs Argue With Each Other

What if you could ask a question and get three different AI perspectives at once?

That’s the idea behind Trialogue — a web app where GPT-4, Claude, and Gemini (or whatever models you want) respond simultaneously in a three-way conversation.

And yes, an AI worker is building most of it while I write this blog post.

The Problem

Every LLM has different strengths.

When I’m trying to figure something out, I often find myself checking multiple models. Copy question, paste in ChatGPT, copy question, paste in Claude… you get it.

The Solution

One interface, three models, parallel responses. You ask once, you get three perspectives side by side.
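The fan-out can be sketched with `asyncio`. This is a minimal illustration, not the app's actual code: `ask_model` is a stand-in that would call a real provider (e.g. through LiteLLM) instead of returning a canned string.

```python
import asyncio

# Stand-in for a real provider call; the sleep marks the network round-trip.
async def ask_model(model: str, question: str) -> str:
    await asyncio.sleep(0)
    return f"[{model}] my take on: {question}"

async def trialogue(question: str, models: list[str]) -> dict[str, str]:
    # Fire all requests at once and wait for every answer.
    answers = await asyncio.gather(*(ask_model(m, question) for m in models))
    return dict(zip(models, answers))

models = ["gpt-4", "claude", "gemini"]
results = asyncio.run(trialogue("Is three the right number?", models))
```

Because the calls run concurrently, the total wait is roughly the slowest model's latency, not the sum of all three.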

The twist: BYOK (Bring Your Own Key). You plug in your own API keys, and the app never stores them. They live in your browser’s localStorage and get passed in request headers. I’m not paying for your tokens, and I’m not storing your credentials.

The Stack

LiteLLM is the secret sauce here. It lets you call OpenAI, Anthropic, Google, Groq, and a bunch of other providers through a single interface. Change the model name, everything else stays the same.

Current Progress

An AI worker (yes, really) has been building this project. I have a whole orchestration system for autonomous Claude Code workers — it’s documented in a private repo because I’m not about to explain to someone’s boss why their server is now sentient.

Anyway, here’s the progress:

Milestone 1: Project Foundation ✅ complete

Milestone 2: Core Backend ✅ complete

Milestone 3: Frontend 🔄 in progress

You can follow along at github.com/datdiego/trialogue.

The Architecture

```
Browser (your keys in localStorage)
        │
        ▼
FastAPI Backend (stateless, doesn't store anything)
        │
        ▼
LiteLLM → OpenAI / Anthropic / Google / Groq
```

Keys go in the request header (X-OpenAI-Key, etc.), never in the body, never in logs. The backend is just a pass-through that handles streaming.
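The pass-through logic can be sketched as a plain generator. Here `fetch_stream` is a hypothetical stand-in for the upstream provider call; only the `X-OpenAI-Key` header name comes from the post.

```python
def relay_stream(headers, question, fetch_stream):
    # Read the key from the header; it never touches the body or the logs.
    api_key = headers.get("X-OpenAI-Key")
    if api_key is None:
        raise PermissionError("missing X-OpenAI-Key header")
    # Relay each chunk as it arrives; nothing is buffered or stored.
    yield from fetch_stream(question, api_key=api_key)

# Usage with a fake upstream stream:
def fake_stream(question, api_key):
    yield from ("Hello", ", ", "world")

chunks = list(relay_stream({"X-OpenAI-Key": "sk-demo"}, "hi", fake_stream))
```

In the real app this generator would feed a FastAPI streaming response, but the shape is the same: headers in, chunks out, no state in between.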

Why Three?

Three models is a good number. Two is just “compare.” Four feels crowded. Three lets you see where the models agree, where they split, and which one is the odd voice out.

It’s like having a panel of experts, except they’re all hallucinating with varying confidence levels.

Building Within Constraints

The app suggests models that are free or cheap to use.

So you can try the app without any real cost. Nice, right?

What’s Next

Once the frontend is done, there's more planned.

But first, I should go check on that worker. It’s been quiet for 20 minutes and either it’s done or something broke.


This post will be updated as the project progresses. Or I’ll write a “Trialogue v2” post. We’ll see.


