Organi Runbooks
I built Organi after noticing the same problem in almost every team I've seen: the processes exist, but they only work when the right person is in the room. Someone onboards a new hire from memory. An incident gets handled differently every time depending on who's putting out the fire. The month-end checklist lives in one person's head, and everyone else is guessing.
Organi is my attempt to fix that. It's a runbook platform (I like to think it could be the first product in a whole Organi suite, hehe): a place where you take those processes out of documents and heads and turn them into something the team can actually run together.
What a runbook looks like
A runbook in Organi is made up of steps, and each step has a type:
- Checklist — a list of things to tick off before moving on
- Knowledge — a block of information the person running it needs to read and acknowledge
- AI Walkthrough — a chat with an AI model that guides the operator through whatever's happening, using a system prompt you write yourself. I think of it as explaining to a future junior teammate what to do.
So an incident response runbook might start with a knowledge step ("here's what this alert means"), then a checklist ("check these five things first"), then an AI walkthrough ("describe what you're seeing and I'll help you narrow it down"). You build it once, and anyone on the team can run it from a link.
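To make the shape of a runbook concrete, here's a minimal sketch of the data model in plain Python. The names (`StepType`, `Step`, `Runbook`) and fields are my illustration, not Organi's actual schema — the real app stores these in Postgres and validates them with Pydantic.

```python
from dataclasses import dataclass, field
from enum import Enum

class StepType(Enum):
    CHECKLIST = "checklist"
    KNOWLEDGE = "knowledge"
    AI_WALKTHROUGH = "ai_walkthrough"

@dataclass
class Step:
    title: str
    type: StepType
    # Checklist steps carry items; knowledge steps carry body text;
    # AI walkthrough steps carry the author-written system prompt.
    items: list[str] = field(default_factory=list)
    body: str = ""
    system_prompt: str = ""

@dataclass
class Runbook:
    name: str
    steps: list[Step] = field(default_factory=list)

# The incident-response example from above, as data:
incident = Runbook(
    name="Alert response",
    steps=[
        Step("What this alert means", StepType.KNOWLEDGE, body="..."),
        Step("First checks", StepType.CHECKLIST,
             items=["Check disk usage", "Check recent deploys"]),
        Step("Narrow it down", StepType.AI_WALKTHROUGH,
             system_prompt="You are guiding an on-call engineer..."),
    ],
)
```

The point of the enum is that the run-execution view can dispatch on `step.type` to render a checklist, a read-and-acknowledge block, or a chat.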
AI also helps you build the runbooks
One of my favourite features is the AI generation flow. Instead of starting from scratch, you describe your process in a few sentences and the app generates a structured runbook for you: step types, titles, everything. You review it, tweak it, and publish. It's a good starting point, especially for processes that are hard to write down because they feel too obvious to document, or because you don't have the structure in mind.
Getting that generation to work reliably was trickier than expected. Free-tier LLMs have small output windows and love wrapping JSON in markdown fences and using smart quotes. I ended up building a multi-stage parsing pipeline — try to parse directly, strip fences and sanitize, try JSON repair, then retry the API call if all else fails. It's not glamorous, but it works, so there ya go.
How it's built
The project is split into three repos:
organi-api is the backend — Python with FastAPI. It handles everything: runbook management, run execution, AI streaming, org and member management. Auth is done through Supabase (JWT tokens), and data lives in Postgres with row-level security. AI responses are streamed to the frontend over Server-Sent Events using OpenRouter with LLaMA 3.3 70B. The API runs on my homeserver.
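The SSE part boils down to framing each model token as a `data:` event. Here's a minimal sketch of that framing in plain Python, assuming a token iterator from the model client (the names are hypothetical; the real endpoint wraps a generator like this in FastAPI's `StreamingResponse`):

```python
import json
from typing import Iterable, Iterator

def sse_events(tokens: Iterable[str]) -> Iterator[str]:
    """Frame each model token as one server-sent event."""
    for token in tokens:
        # SSE events are delimited by a blank line; JSON-encoding the
        # token keeps embedded newlines from splitting an event.
        yield f"data: {json.dumps({'token': token})}\n\n"
    # Sentinel event so the client knows the stream is finished.
    yield "data: [DONE]\n\n"

frames = list(sse_events(["Hel", "lo"]))
```

On the frontend, an `EventSource` (or a fetch-based reader) consumes these events and appends tokens to the chat as they arrive.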
organi-web is the main app — SvelteKit 5 with Tailwind. It's where users create runbooks, run them step by step, and manage their team. The run execution view streams AI responses in real time. Deployed on Cloudflare Workers.
organi-landing is the "marketing" site, also SvelteKit 5. It has a live interactive demo of a sample runbook so people can get a feel for the product before signing up. Also on Cloudflare Workers.
Tech stack
| Layer | What I used |
|---|---|
| Frontend | SvelteKit 5, TypeScript, Tailwind CSS 4 |
| Backend | Python 3.12, FastAPI, Pydantic |
| Database & Auth | Supabase (PostgreSQL + Auth) |
| AI | OpenRouter (LLaMA 3.3 70B), Ollama |
| Deployment | Cloudflare Workers, Fly.io, Docker |
| CI/CD | GitHub Actions → GitHub Container Registry |
You can try it at organi.tools.