# talktotalk.me

> A social experiment platform where users ask questions and receive anonymous answers from real people or AI models.

## Terminology

- In the product UI, “model” usually means a **responder persona** (display name + avatar), not necessarily a machine-learning weights file.
- **AI fallback** answers are generated by the server calling a **configurable HTTP REST** endpoint (OpenAI-compatible chat completions). See `docs/LLM_REST.md` and `docs/BYOK.md` in the repository for operators.
- **/lab** is an optional browser page that runs **TensorFlow.js** demos locally: training and inference happen on the user’s device; no model weights or training data are sent to talktotalk.me for that page.
- **Coins** are optional **virtual points** stored in the app database for engagement (not cryptocurrency). They are not redeemable for cash unless the operator adds separate payment flows.

## What it does

- Users submit questions anonymously
- Questions are matched with human "responders" in real time via WebSocket
- If no human responds within the timeout, an AI model generates a fallback answer (REST call from the server)
- "Chaos Mode" sends the same question to multiple responders for diverse perspectives (may consume virtual coins if enabled by the operator)
- Users can share personal question links (`/ask/:slug`) on social media
- Real-time conversational messaging with follow-up questions

## Tech stack

- Node.js + Express + Socket.IO
- SQLite (via Knex + better-sqlite3)
- Mithril.js + Tailwind CSS (Vite-built frontend)
- EJS server-side templates
- Google Gemini API for translations (scripts)
- Optional TensorFlow.js on `/lab` (client-only demo)

## HTTP API (summary)

Authentication: many routes expect `Cookie: token=<token>` or `Authorization: Bearer <token>`.
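As a minimal illustration (not code from the repository), a client might obtain a guest token and attach it to later requests. The response field name `token` and the question body shape are assumptions here; the endpoint list below and the repository are authoritative:

```javascript
// Sketch of a talktotalk.me API client (requires Node 18+ for global fetch).
// Assumes the guest-session response includes a `token` field — check the
// actual API for the real shape.
const BASE_URL = 'https://talktotalk.me';

// Build headers for authenticated routes (Bearer-token variant).
function authHeaders(token) {
  return {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${token}`,
  };
}

// Create an anonymous guest session, then submit a question with it.
async function askQuestion(text) {
  const session = await fetch(`${BASE_URL}/api/session/guest`, { method: 'POST' });
  const { token } = await session.json(); // assumed field name
  const res = await fetch(`${BASE_URL}/api/questions`, {
    method: 'POST',
    headers: authHeaders(token),
    body: JSON.stringify({ text }), // assumed request-body shape
  });
  return res.json();
}
```

The same token also works as a `Cookie: token=<token>` header for routes that prefer cookie auth.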
- `POST /api/session/guest` — create anonymous guest session
- `GET /api/me` — current user (auth)
- `PATCH /api/me/profile` — update `displayName` / `avatar` (auth)
- `GET /api/me/slug` — shareable question link slug (auth)
- `GET /api/me/coins` — virtual coin balance (auth)
- `POST /api/coins/spend` — body `{ "action": "tip", "answerId", "amount" }` (auth)
- `POST /api/questions` — submit a question (auth); chaos mode may require a sufficient coin balance
- `POST /api/ratings` — rate an answer (auth); responders may earn coins
- `POST /api/auth/register`, `POST /api/auth/login`
- `GET /api/threads/:threadId/messages` — thread messages (auth)
- `POST /api/reports`, `POST /api/blocks`
- `GET /health` — health check

## Operator LLM integration (REST)

The server posts to `LLM_API_URL` with `Authorization: Bearer <LLM_API_KEY>` and an OpenAI-style JSON body. Details: repository `docs/LLM_REST.md`.

## WebSocket

- Namespaces: `/chat` (askers), `/answer` (responders)
- Events include `question:submit`, `question:status`, `answer:complete`, `rating:submit`, etc.

## Features

- Anonymous Q&A with real human responders
- AI fallback when no humans are available (server-side REST)
- Chaos Mode (multi-responder answers)
- Virtual coins (optional): signup bonus, chaos charges, rating rewards, tips
- Personal question links for social sharing
- Real-time typing indicators
- Multi-language support (15+ languages via AI translation)
- Browser notifications for background messages
- PWA-ready (installable, standalone mode)
- `/lab` — local TensorFlow.js demo

## Links

- Website: https://talktotalk.me
- Ask questions: https://talktotalk.me/app
- Become a responder: https://talktotalk.me/responder
- Lab (TF.js demo): https://talktotalk.me/lab
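For operators wiring up the AI fallback, the OpenAI-compatible request described under "Operator LLM integration (REST)" can be sketched as follows. The model name, system prompt, and response handling are placeholders, not the server's actual implementation; `docs/LLM_REST.md` in the repository is the authoritative reference:

```javascript
// Sketch of the server-side AI-fallback call: an OpenAI-style
// chat-completions request posted to LLM_API_URL. Field names follow
// the standard chat format; the default model name is a placeholder.
function buildFallbackRequest(question) {
  return {
    model: process.env.LLM_MODEL || 'gpt-4o-mini', // placeholder default
    messages: [
      { role: 'system', content: 'You are an anonymous responder.' }, // placeholder prompt
      { role: 'user', content: question },
    ],
  };
}

// Post the request and pull the answer text out of the response.
async function fetchFallbackAnswer(question) {
  const res = await fetch(process.env.LLM_API_URL, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.LLM_API_KEY}`,
    },
    body: JSON.stringify(buildFallbackRequest(question)),
  });
  const data = await res.json();
  return data.choices[0].message.content; // standard chat-completions shape
}
```

Because the endpoint is OpenAI-compatible, operators can point `LLM_API_URL` at any provider (or a local gateway) that accepts this body format.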