Getting started

Noryen documentation

Noryen captures AI traces from your application: prompts, responses, latency, token counts, and cost. With these traces you can debug and monitor model behavior in production.
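As a rough illustration of what a single trace carries, the shape below sketches the fields listed above. The `Trace` type and its field names are hypothetical, invented for this example; they are not Noryen's actual schema.

```typescript
// Hypothetical shape of one captured AI trace. Field names are
// illustrative only, not Noryen's real data model.
interface Trace {
  prompt: string;
  response: string;
  latencyMs: number;
  tokens: { input: number; output: number };
  costUsd: number;
}

// An example trace as it might appear in the dashboard.
const example: Trace = {
  prompt: "Summarize this support ticket",
  response: "The user reports a login failure on mobile.",
  latencyMs: 420,
  tokens: { input: 12, output: 9 },
  costUsd: 0.0004,
};

// Total tokens for this call.
console.log(example.tokens.input + example.tokens.output);
```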

How it works

  1. Create a project and install @noryen/sdk in your backend or API layer.
  2. Initialize the SDK with your project API key and either wrap your LLM client or call track() manually.
  3. Open the dashboard to explore traces, latency, and usage per model.

New to the SDK? Start with Setup & configuration, then pick an installation guide for your stack.
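The manual track() path in step 2 can be sketched as follows. This is a minimal, self-contained sketch: the local `track()` function and `TrackEvent` type below are stand-ins defined here so the example runs on its own; the real client comes from `@noryen/sdk`, and its actual function names and parameters may differ.

```typescript
// Stand-in for the SDK's event type; the real SDK's fields may differ.
type TrackEvent = {
  model: string;
  prompt: string;
  response: string;
  latencyMs: number;
};

// Local buffer standing in for the SDK: the real track() would send
// the event to your Noryen project, authenticated by your API key.
const events: TrackEvent[] = [];
function track(event: TrackEvent): void {
  events.push(event);
}

// Wrap any LLM call so every prompt/response pair is recorded,
// along with wall-clock latency.
async function tracked(
  model: string,
  prompt: string,
  call: (p: string) => Promise<string>,
): Promise<string> {
  const start = Date.now();
  const response = await call(prompt);
  track({ model, prompt, response, latencyMs: Date.now() - start });
  return response;
}

// Usage with a stubbed model call standing in for a real LLM client:
tracked("demo-model", "Hello", async (p) => `echo: ${p}`)
  .then((response) => console.log(response));
```

Wrapping the call site, rather than instrumenting each handler, keeps the tracking logic in one place as you add models or providers.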

Where to go next