This guide will walk you through setting up your monitoring system with minimal effort. In just a few steps, you’ll be able to track and analyze the performance and usage of your Large Language Model (LLM) applications.

Prerequisites

  • Ensure you have access to a Kubernetes cluster. You can quickly spin one up using Kind.
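If you don't have a cluster handy, a local one can be created with Kind. This is a minimal sketch assuming the `kind` and `kubectl` CLIs are already installed; the cluster name `doku` is arbitrary:

```shell
# Create a local Kubernetes cluster named "doku"
kind create cluster --name doku

# Confirm kubectl is pointing at the new cluster
# (Kind registers the context as "kind-<cluster-name>")
kubectl cluster-info --context kind-doku
```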

Get Started

1. Install Doku in a Kubernetes Cluster

1. Helm Repo Setup

```shell
helm repo add dokulabs https://dokulabs.github.io/helm/
helm repo update
```
2. Install the Helm Chart

```shell
helm install doku dokulabs/doku
```
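Before moving on, you can confirm the release came up. The label selector below follows the common Helm chart convention and is an assumption; the exact pod labels may differ for this chart:

```shell
# Check the status of the Helm release
helm status doku

# List the pods created by the release
# (assumes the standard app.kubernetes.io/instance label)
kubectl get pods -l app.kubernetes.io/instance=doku
```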
2. Generate an API Key

With Doku running, the next step is to generate an API key for secure communication between your applications and Doku. To generate your first API key, use the following command:
```shell
curl --request POST \
  --url https://<your-doku-url>/api/keys \
  --header 'Authorization: ""' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "Admin"
  }'
```
Make sure to store the returned API key securely, as it’s necessary for authorizing future API calls with Doku.
During your initial request to generate an API key, you can leave the Authorization header empty (i.e., ""). For subsequent requests, you will need to supply the generated API key.
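For example, a later call to the same endpoint would carry the key you just generated. This sketch assumes the `Authorization` header takes the bare key (matching the empty-string form used above); `<your-doku-url>` and `YOUR_DOKU_API_KEY` are placeholders:

```shell
# Create an additional API key, authorized with an existing key
curl --request POST \
  --url https://<your-doku-url>/api/keys \
  --header 'Authorization: YOUR_DOKU_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "ci-pipeline"
  }'
```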
3. Instrument your code

Choose the appropriate SDK for your application’s programming language and follow the steps to integrate monitoring with just a couple of lines of code.
1. Install the dokumetry Python SDK using pip

```shell
pip install dokumetry
```
2. Add the following two lines to your application code

Add the snippet below to your Python application, replacing YOUR_DOKU_URL and YOUR_DOKU_TOKEN with corresponding values.
```python
import dokumetry

dokumetry.init(llm=client, doku_url="YOUR_DOKU_URL", api_key="YOUR_DOKU_TOKEN")
```
Example usage for monitoring OpenAI:

```python
from openai import OpenAI
import dokumetry

client = OpenAI(
    api_key="YOUR_OPENAI_KEY"
)

# Pass the `client` object along with your Doku URL and API key;
# all OpenAI calls made through this client are then tracked automatically.
dokumetry.init(llm=client, doku_url="YOUR_DOKU_URL", api_key="YOUR_DOKU_TOKEN")

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is LLM Observability?",
        }
    ],
    model="gpt-3.5-turbo",
)
```
Refer to the dokumetry Python SDK repository or the DokuMetry Docs for more advanced configurations and use cases.

(Optional) Export Data

Integrations

Connect to your existing Observability Platform.
You’re all set! With these steps complete, you’re ready to monitor your LLM applications effectively. If you have any questions or need support, reach out to our community.