# Getting started
Noryen captures AI traces from your application: prompts, responses, latency, token counts, and cost. It lets you debug and monitor model behavior in production.
## How it works
- Create a project and install `@noryen/sdk` in your backend or API layer.
- Initialize the SDK with your project API key, then either wrap your LLM client or call `track()` manually.
- Open the dashboard to explore traces, latency, and usage per model.
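The initialize-then-track flow above can be sketched as follows. This is a hypothetical illustration, not the published `@noryen/sdk` API: the `init` and `track` names come from the steps in this guide, while the trace fields, key format, and in-memory buffer are stand-ins invented for the example. Consult the SDK reference for the real signatures.

```typescript
// Hypothetical sketch of the flow described above. In a real app you would
// import init/track from @noryen/sdk; here they are local stubs so the
// example is self-contained and runnable.

type Trace = {
  model: string;
  prompt: string;
  response: string;
  latencyMs: number;
  tokens: number;
};

// Stand-in for the SDK's trace sink: the real SDK would ship traces
// to the Noryen dashboard instead of buffering them in memory.
const recorded: Trace[] = [];

function init(apiKey: string): void {
  // Step 2: initialize with your project API key.
  if (!apiKey) throw new Error("Noryen: missing project API key");
}

function track(trace: Trace): void {
  // Manual tracking, as opposed to wrapping the LLM client.
  recorded.push(trace);
}

// Usage: record one LLM call by hand (the call itself is faked here).
init("nyk_example_key"); // hypothetical key format
const start = Date.now();
const response = "Hello!"; // pretend model output
track({
  model: "gpt-4o",
  prompt: "Say hi",
  response,
  latencyMs: Date.now() - start,
  tokens: 3,
});
```

Wrapping the LLM client instead of calling `track()` by hand trades explicit control for coverage: every call through the wrapped client is traced automatically.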
New to the SDK? Start with Setup & configuration, then pick an installation guide for your stack.