This port serves the Gami pipeline family — there is no web UI here.
Open the Oraigami Console in the browser (Vite dev server, usually http://localhost:5173).
GET /api/health — agent configuration status
POST /api/governor/evaluate — JSON: gamiTarget, productId — returns verdict (Allow/Narrow/Pending/Escalate)
POST /api/classify/sources — JSON: files[] (filename, contentSnippet, mimeType, sizeBytes) — LLM source category classification with coverage map
POST /api/gami/scope/run — multipart: files, productId, projectName, scenario
GET /api/gami/scope/run/:runId — optional ?productId=
POST /api/gami/scope/extract — JSON: files[] (filename, content), existingObjectNames[] — LLM object extraction
POST /api/gami/valuestream/derive — JSON: files[], organizationName, operatingUnitName, existingObjectNames[] — LLM value stream derivation
POST /api/gami/problem/derive — JSON: files[] (filename, content), organizationName, operatingUnitName, valueStreams[], registryObjects[] — LLM problem derivation from evidence
POST /api/gami/problem/run — JSON: productId, projectName, scopeRunId, problemInput
POST /api/gami/problem/agent/:agentKey — runs an individual agent. Keys: problem_description, scoped_endpoints, sme_transcripts, source_basis_synthesis, intake_coordination, problem_framing, problem_validation. JSON: productId, projectName, problemInput, previousStages?
GET /api/gami/problem/run/:runId — optional ?productId=
POST /api/gami/lineage/run — JSON: productId, projectName, problemRunId, lineageInput
GET /api/gami/lineage/run/:runId — optional ?productId=
POST /api/gami/solution/run — JSON: productId, projectName, solutionInput
GET /api/gami/solution/run/:runId — optional ?productId=
POST /api/gami/governance/run — JSON: productId, projectName, governanceInput
GET /api/gami/governance/run/:runId — optional ?productId=

Agent config: set OPENAI_API_KEY in .env.local or the environment. Override the model with OR_EXECUTION_AGENT_MODEL, the base URL with OR_EXECUTION_AGENT_BASE_URL, or disable the agent with OR_EXECUTION_AGENT_DISABLED=1.
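As a minimal sketch of how a client might build request bodies for two of the JSON endpoints above: the field names (gamiTarget, productId, files[] with filename/contentSnippet/mimeType/sizeBytes) come from the endpoint list, but the base URL, port, and the `post_json` helper are assumptions for illustration, not part of this API's documented tooling.

```python
import json
import urllib.request

# Assumed base URL for the local API port; adjust to wherever this server runs.
BASE_URL = "http://localhost:3000"

def governor_payload(gami_target: str, product_id: str) -> dict:
    """Body for POST /api/governor/evaluate (field names per the endpoint list)."""
    return {"gamiTarget": gami_target, "productId": product_id}

def classify_payload(files: list[dict]) -> dict:
    """Body for POST /api/classify/sources; each file entry needs
    filename, contentSnippet, mimeType, and sizeBytes."""
    required = {"filename", "contentSnippet", "mimeType", "sizeBytes"}
    for entry in files:
        missing = required - entry.keys()
        if missing:
            raise ValueError(f"file entry missing fields: {sorted(missing)}")
    return {"files": files}

def post_json(path: str, body: dict) -> bytes:
    """Illustrative POST helper (not invoked here; requires the server to be up)."""
    req = urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    body = classify_payload([{
        "filename": "intake-notes.md",
        "contentSnippet": "# SME interview notes",
        "mimeType": "text/markdown",
        "sizeBytes": 2048,
    }])
    # post_json("/api/classify/sources", body)  # uncomment with the server running
    print(json.dumps(body, indent=2))
```

The `classify_payload` validation mirrors the documented file fields so malformed entries fail locally before an HTTP round trip; swap `post_json` for any HTTP client you prefer.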