Overview
The /api/v1/traces endpoint manages the lifecycle of traces. You’ll call this endpoint twice per trace:
Start: Create a trace with status: "running"
Complete: Update the trace with status: "completed" and duration
Traces represent individual AI operations (e.g., answering a question, classifying text). Each trace has a unique client-generated ID.
Request Body
trace_id
Unique identifier for the trace. Must start with trace_ followed by 22 alphanumeric characters. Generate this client-side using a cryptographically secure random generator. Example: trace_a3f9c2e1b8d4f7a6c9e2b1
name
Human-readable operation name for filtering and grouping in the dashboard. Examples: "answer-question", "classify-ticket", "rag-query"
timestamp
ISO 8601 timestamp with milliseconds and UTC timezone. Format: YYYY-MM-DDTHH:mm:ss.sssZ. Example: "2025-12-22T10:30:45.123Z". Should represent when the operation started (use the same timestamp for both create and complete).
status
Trace status indicating current state.
"running" — Trace is in progress (use when creating)
"completed" — Trace is finished (use when completing)
duration_ms
Duration of the operation in milliseconds. Required when status is "completed". Calculate as: (end_time - start_time) * 1000, where both times are in seconds. Example: 1234
metadata
Optional key-value pairs for filtering and searching traces in the dashboard. Common fields:
user_id — User who initiated the operation
session_id — Session identifier
environment — Production, staging, etc.
model — AI model version
version — Application version
All values must be JSON-serializable.
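One way to catch non-serializable values before sending is to round-trip the metadata through the standard `json` module. A minimal sketch; the helper name and the size bound (mirroring the recommended metadata limit from the error table below) are ours, not the API's:

```python
import json

def check_metadata(metadata, max_bytes=100_000):
    """Raise if metadata is not JSON-serializable or exceeds max_bytes."""
    encoded = json.dumps(metadata)  # raises TypeError on non-serializable values
    if len(encoded.encode("utf-8")) > max_bytes:
        raise ValueError(f"metadata is too large; keep it under {max_bytes} bytes")
    return metadata
```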
Response
Response status indicator. Returns "accepted" when the request is successfully queued.
The API uses a fire-and-forget pattern. A 202 Accepted response means the request was received and queued, not that validation passed.
Trace ID Generation
Trace IDs must be generated client-side to enable the fire-and-forget pattern.
Format: trace_ + 22 random alphanumeric characters
TRACE_ID="trace_$(openssl rand -hex 11)"
echo "$TRACE_ID"
# Output: trace_a3f9c2e1b8d4f7a6c9e2b1
Use cryptographically secure random generators. Don’t use predictable patterns or sequential IDs.
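In Python, the standard `secrets` module provides a cryptographically secure source; a minimal sketch, assuming hex encoding as the alphanumeric alphabet (`token_hex(11)` yields exactly 22 hex characters):

```python
import secrets

def new_trace_id():
    """Generate trace_ followed by 22 secure alphanumeric (hex) characters."""
    return "trace_" + secrets.token_hex(11)  # 11 random bytes -> 22 hex chars
```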
Usage Examples
Creating a Trace
# Generate unique trace ID
TRACE_ID="trace_$(openssl rand -hex 11)"
# %3N (millisecond precision) requires GNU date
START_TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%S.%3NZ")

curl -X POST https://app.artanis.ai/api/v1/traces \
  -H "Authorization: Bearer ak_..." \
  -H "Content-Type: application/json" \
  -d "{
    \"trace_id\": \"$TRACE_ID\",
    \"name\": \"answer-question\",
    \"timestamp\": \"$START_TIMESTAMP\",
    \"status\": \"running\",
    \"metadata\": {
      \"user_id\": \"user-123\",
      \"session_id\": \"session-456\"
    }
  }"
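The same create request can be assembled in Python. A hedged sketch: `build_create_payload` is a hypothetical helper, not part of the API, and the commented-out POST assumes the `requests` library and an `ak_...` key placeholder:

```python
import secrets
from datetime import datetime, timezone

def build_create_payload(name, metadata=None):
    """Assemble the 'start' request body (status: running)."""
    now = datetime.now(timezone.utc)
    return {
        "trace_id": "trace_" + secrets.token_hex(11),  # 22 hex chars
        "name": name,
        # YYYY-MM-DDTHH:mm:ss.sssZ with millisecond precision
        "timestamp": now.strftime("%Y-%m-%dT%H:%M:%S.") + f"{now.microsecond // 1000:03d}Z",
        "status": "running",
        "metadata": metadata or {},
    }

payload = build_create_payload("answer-question", {"user_id": "user-123"})
# Send it with any HTTP client, e.g.:
# requests.post("https://app.artanis.ai/api/v1/traces",
#               headers={"Authorization": "Bearer ak_..."}, json=payload)
```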
Completing a Trace
import time

import requests

# trace_id and timestamp must be the same values sent in the create request
start_time = time.time()
# ... your operation logic ...
duration_ms = int((time.time() - start_time) * 1000)

response = requests.post(
    "https://app.artanis.ai/api/v1/traces",
    headers={
        "Authorization": "Bearer ak_...",
        "Content-Type": "application/json",
    },
    json={
        "trace_id": trace_id,
        "name": "answer-question",
        "timestamp": timestamp,  # Original start timestamp
        "status": "completed",
        "duration_ms": duration_ms,
        "metadata": {
            "user_id": "user-123",
            "session_id": "session-456",
        },
    },
)
Use the same trace_id, name, timestamp, and metadata for both create and complete requests. Only status and duration_ms should change.
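One way to enforce that rule is to derive the completion body from the create body instead of rebuilding it; a minimal sketch (`build_complete_payload` is a hypothetical helper, not part of the API):

```python
def build_complete_payload(create_payload, duration_ms):
    """Completion reuses the create body; only status and duration_ms change."""
    payload = dict(create_payload)  # shallow copy keeps trace_id, name, timestamp, metadata
    payload["status"] = "completed"
    payload["duration_ms"] = duration_ms
    return payload
```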
| Field | Example | Use Case |
| --- | --- | --- |
| user_id | "user-123" | Filter traces by user |
| session_id | "session-456" | Group related operations |
| environment | "production" | Separate prod/staging |
| model | "gpt-4" | Compare model performance |
| version | "v2.3.1" | Track changes over time |
| customer | "acme-corp" | Multi-tenant filtering |
Keep metadata concise. Don’t include large objects or sensitive information — use state observations for detailed context.
Error Responses
| Status Code | Error | Solution |
| --- | --- | --- |
| 400 | Invalid request format | Check JSON syntax and required fields |
| 401 | Invalid API key | Verify your API key is correct |
| 413 | Trace metadata too large | Reduce metadata size (< 100KB recommended) |
| 429 | Rate limit exceeded | Implement exponential backoff retry |
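For 429 responses, the usual remedy is a retry loop with exponential backoff and jitter. A minimal sketch; the `send` callable, retry count, and delay constants are assumptions, not documented API behavior:

```python
import random
import time

def post_with_backoff(send, max_retries=5, base_delay=0.5):
    """Call send() until it stops returning 429, doubling the wait each attempt."""
    for attempt in range(max_retries):
        status = send()
        if status != 429:
            return status
        # exponential backoff with jitter: base * 2^attempt plus a random extra
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
    return status
```

Wrap your actual POST in `send` (e.g. a lambda returning `response.status_code`) so the backoff logic stays independent of the HTTP client.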
Next Steps
Observations Endpoint Record inputs, outputs, and state for your trace
Complete Example See a full end-to-end workflow