DokuMetry Python SDK is the official library from Doku to send LLM usage data from your Python application to Doku.

Installation

shell
pip install dokumetry

Usage

OpenAI

openai.py
from openai import OpenAI
import dokumetry

client = OpenAI(
    api_key="YOUR_OPENAI_KEY"
)

# Pass the `client` object along with your Doku URL and API key;
# all OpenAI calls made through it are then tracked automatically.
dokumetry.init(llm=client, doku_url="YOUR_DOKU_URL", api_key="YOUR_DOKU_TOKEN")

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is LLM Observability?",
        }
    ],
    model="gpt-3.5-turbo",
)

# Print the model's reply
print(chat_completion.choices[0].message.content)
The following OpenAI endpoints are supported for monitoring with DokuMetry.

Cohere

cohere.py
import cohere
import dokumetry

# initialize the Cohere Client with an API Key
co = cohere.Client('YOUR_API_KEY')

# Pass the `co` object along with your Doku URL and API key;
# all Cohere calls made through it are then tracked automatically.
dokumetry.init(llm=co, doku_url="YOUR_DOKU_URL", api_key="YOUR_DOKU_TOKEN")

# generate a prediction for a prompt
prediction = co.chat(message='What is LLM Observability?', model='command')

# print the predicted text
print(f'Chatbot: {prediction.text}')
The following Cohere endpoints are supported for monitoring with DokuMetry.

Anthropic

anthropic.py
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT
import dokumetry

anthropic = Anthropic(
    # defaults to os.environ.get("ANTHROPIC_API_KEY")
    api_key="YOUR_ANTHROPIC_KEY",
)

# Pass the `anthropic` object along with your Doku URL and API key;
# all Anthropic calls made through it are then tracked automatically.
dokumetry.init(llm=anthropic, doku_url="YOUR_DOKU_URL", api_key="YOUR_DOKU_TOKEN")

completion = anthropic.completions.create(
    model="claude-2",
    max_tokens_to_sample=300,
    prompt=f"{HUMAN_PROMPT} What is LLM Observability?{AI_PROMPT}",
)
print(completion.completion)
The following Anthropic endpoints are supported for monitoring with DokuMetry.

Parameters

| Parameter        | Description                                               | Required |
|------------------|-----------------------------------------------------------|----------|
| llm              | Large Language Model (LLM) client object to track         | Yes      |
| doku_url         | URL of your Doku instance                                 | Yes      |
| api_key          | Your Doku API key                                         | Yes      |
| environment      | Custom environment tag to include in your metrics         | Optional |
| application_name | Custom application name tag for your metrics              | Optional |
| skip_resp        | Skip the response from the Doku Ingester for faster execution | Optional |

Advanced Usage

When using DokuMetry in production environments, it is recommended to set skip_resp=True for faster processing. To filter your LLM usage by application or environment, set the application_name and environment parameters to the appropriate values.
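As a sketch, a production-oriented init call using the optional parameters from the table above might look like this (the application name and environment values here are placeholder examples, not required names):

```python
from openai import OpenAI
import dokumetry

client = OpenAI(api_key="YOUR_OPENAI_KEY")

# Tag all tracked metrics with an application name and environment,
# and skip the Doku Ingester response for faster execution.
dokumetry.init(
    llm=client,
    doku_url="YOUR_DOKU_URL",
    api_key="YOUR_DOKU_TOKEN",
    environment="production",
    application_name="chat-service",
    skip_resp=True,
)
```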

Requirements

Python >= 3.7 is supported. If you are interested in other runtime environments, please open or upvote an issue on GitHub.
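If you need to fail fast on an unsupported runtime, a minimal version guard before importing dokumetry could look like this:

```python
import sys

# DokuMetry supports Python >= 3.7; raise a clear error on older runtimes.
if sys.version_info < (3, 7):
    raise RuntimeError("dokumetry requires Python >= 3.7")

print("Python version OK:", sys.version.split()[0])
```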