Overview
The /api/v1/feedback endpoint allows you to record user feedback for traces. This is essential for:
Linking user satisfaction to specific operations
Building evaluation datasets from production data
Identifying problematic patterns
Measuring improvement over time
Feedback can be binary (positive/negative), numeric (0.0-1.0), or include corrections showing what the right answer should have been.
Feedback is a powerful way to understand which traces are working well and which need improvement. Corrections are especially valuable for building eval sets.
Request Body
The trace ID to provide feedback for. Must match a trace created via /api/v1/traces. Example: "trace_a3f9c2e1b8d4f7a6c9e2"
Don’t have a trace ID? Use the Trace Search endpoint to find traces by your own identifiers (conversation ID, criterion ID, etc.).
Your own ID for this feedback record. Enables upsert behavior:
If omitted, creates a new feedback record each time
If provided, updates the existing record with this ID (or creates it if not found)
Useful for auto-save scenarios where you want to update the same feedback record as the user edits. Example: "qa_evaluation_result_bgNzRGaJjv8T4obgJEWy38vA"
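The auto-save pattern can be sketched as follows. Note that `build_feedback_payload` and the `"feedback_id"` field name are illustrative assumptions, not part of any SDK; confirm the exact parameter name in the API reference.

```python
# Sketch of upsert-style auto-save. The "feedback_id" field name is an
# assumption for illustration -- confirm the exact parameter name in the
# API reference.

def build_feedback_payload(trace_id, feedback_id, rating, comment=None):
    """Build a payload that targets the same feedback record on every save."""
    payload = {
        "trace_id": trace_id,
        "feedback_id": feedback_id,  # same ID each save -> update, not insert
        "rating": rating,
    }
    if comment is not None:
        payload["comment"] = comment
    return payload

# The first save and a later edit both target the same record:
first = build_feedback_payload("trace_a3f9", "qa_eval_1", "negative")
edited = build_feedback_payload("trace_a3f9", "qa_eval_1", "negative",
                                comment="Too technical")
```

Posting each payload in turn would update one feedback record rather than creating duplicates.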
User rating for the trace. Binary ratings:
"positive" — User found the result helpful/correct
"negative" — User found the result unhelpful/incorrect
Numeric ratings:
Any number between 0.0 and 1.0 (e.g., 0.85 for 4.25/5 stars)
Example: "positive" or 0.85
Optional user comment providing context for the rating. Useful for understanding why users gave certain feedback. Example: "The response was too technical and difficult to understand."
Optional correction data showing what the right answer should have been. Can have any structure matching your use case. Commonly used fields:
answer — The correct answer
category — Correct classification
explanation — Why the original was wrong
Example: {"answer": "We offer a 60-day guarantee, not 30 days."}
Optional information about who provided the feedback/correction, with two fields:
name — Name of the person who provided the feedback. Example: "Jane Smith"
email — Email of the person who provided the feedback. Example: "jane.smith@example.com"
Example: {"name": "Jane Smith", "email": "jane.smith@example.com"}
ISO 8601 timestamp. Defaults to the current time if not provided. Format: YYYY-MM-DDTHH:mm:ss.sssZ. Example: "2025-12-22T10:31:00.000Z"
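In Python, a timezone-aware helper produces exactly this format. `iso8601_now` is an illustrative name; `datetime.utcnow()` also works but has been deprecated since Python 3.12.

```python
from datetime import datetime, timezone

def iso8601_now():
    """Current UTC time formatted as YYYY-MM-DDTHH:mm:ss.sssZ."""
    return (datetime.now(timezone.utc)
            .isoformat(timespec="milliseconds")
            .replace("+00:00", "Z"))
```

The `replace` converts Python's `+00:00` offset suffix into the `Z` shorthand the format above uses.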
Response
Returns "success" when the feedback is successfully recorded.
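A minimal sketch of checking the result, assuming a 200 status and a JSON body of "success"; `feedback_ok` is an illustrative helper, so adjust the check if your deployment wraps the response differently.

```python
def feedback_ok(resp):
    """Treat the submission as recorded only on HTTP 200 with a "success" body.

    Assumes the body is the JSON string "success" -- adjust if your
    deployment returns a wrapped object instead.
    """
    try:
        return resp.status_code == 200 and resp.json() == "success"
    except ValueError:  # body was not valid JSON
        return False
```

Pass the return value of `requests.post(...)` directly to this helper.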
After submitting feedback, use the Attribution endpoint to analyze the root cause of any disagreement.
Binary Feedback
Simple positive or negative feedback is the easiest way to track user satisfaction:
```python
import requests
from datetime import datetime

requests.post(
    "https://app.artanis.ai/api/v1/feedback",
    headers={
        "Authorization": "Bearer ak_...",
        "Content-Type": "application/json",
    },
    json={
        "trace_id": trace_id,
        "rating": "positive",
        # datetime.utcnow() is deprecated since Python 3.12;
        # prefer datetime.now(timezone.utc) in new code.
        "timestamp": datetime.utcnow().isoformat(timespec="milliseconds") + "Z",
    },
)
```
Binary feedback is perfect for thumbs up/down, like/dislike, or similar UI elements.
Numeric Feedback
Record granular feedback using a numeric score between 0.0 and 1.0:
```python
# 5-star rating converted to 0.0-1.0
stars = 4  # user gave 4 out of 5 stars
rating = stars / 5.0

requests.post(
    "https://app.artanis.ai/api/v1/feedback",
    headers={
        "Authorization": "Bearer ak_...",
        "Content-Type": "application/json",
    },
    json={
        "trace_id": trace_id,
        "rating": rating,  # 0.8
        "timestamp": datetime.utcnow().isoformat(timespec="milliseconds") + "Z",
    },
)
```
Numeric ratings allow for more nuanced feedback analysis. Convert your rating system (e.g., 1-5 stars) to the 0.0-1.0 scale.
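A generic conversion helper can handle arbitrary scales; `to_unit_rating` is an illustrative name, not part of any SDK.

```python
def to_unit_rating(value, scale_max, scale_min=0):
    """Map a rating from [scale_min, scale_max] onto the API's 0.0-1.0 range."""
    if scale_max <= scale_min:
        raise ValueError("scale_max must be greater than scale_min")
    ratio = (value - scale_min) / (scale_max - scale_min)
    return max(0.0, min(1.0, ratio))  # clamp out-of-range input

to_unit_rating(4, 5)      # 4 of 5 stars -> 0.8
to_unit_rating(7, 10, 1)  # 7 on a 1-10 scale -> ~0.667
```

Clamping guards against ratings that drift outside the declared scale (e.g., a legacy "6 stars" value).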
Add optional comments to provide context:
```python
requests.post(
    "https://app.artanis.ai/api/v1/feedback",
    headers={
        "Authorization": "Bearer ak_...",
        "Content-Type": "application/json",
    },
    json={
        "trace_id": trace_id,
        "rating": "negative",
        "comment": "The response was too technical and difficult to understand.",
        "timestamp": datetime.utcnow().isoformat(timespec="milliseconds") + "Z",
    },
)
```
Comments help you understand why users gave the feedback they did. This is valuable for identifying patterns and improvement opportunities.
Feedback with Corrections
When users provide the correct answer, capture it as a correction:
```python
requests.post(
    "https://app.artanis.ai/api/v1/feedback",
    headers={
        "Authorization": "Bearer ak_...",
        "Content-Type": "application/json",
    },
    json={
        "trace_id": trace_id,
        "rating": "negative",
        "comment": "Wrong refund period",
        "correction": {
            "answer": "We offer a 60-day money-back guarantee, not 30 days."
        },
        "timestamp": datetime.utcnow().isoformat(timespec="milliseconds") + "Z",
    },
)
```
Corrections are incredibly valuable for building evaluation datasets. They show exactly what the right answer should have been.
Correction Data Structures
Corrections can have any structure that matches your use case:
Common patterns include answer corrections (as in the example above), classification corrections, and multi-field corrections. A classification correction looks like:

```json
{
  "trace_id": "trace_...",
  "rating": "negative",
  "correction": {
    "category": "billing",
    "subcategory": "refund",
    "priority": "high"
  },
  "timestamp": "2025-12-22T10:31:00.000Z"
}
```
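A multi-field correction combines several of the commonly used fields. The payload below is illustrative only; the correction object is free-form, so none of these keys are required:

```json
{
  "trace_id": "trace_...",
  "rating": "negative",
  "correction": {
    "answer": "We offer a 60-day money-back guarantee, not 30 days.",
    "category": "billing",
    "explanation": "The response cited an outdated 30-day policy."
  },
  "timestamp": "2025-12-22T10:31:00.000Z"
}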
Best Practices
1. Capture Feedback Immediately
Capture feedback as soon as users provide it. Don’t batch or delay: this ensures you don’t lose valuable data.
2. Make Feedback Easy
Use simple UI elements:
Thumbs up/down buttons
Star ratings
Like/dislike buttons
Optional comment box for details
3. Request Corrections
When users give negative feedback, ask what the correct answer should have been. This is gold for building eval sets.
4. Link to User Context
If you have user IDs in your trace metadata, you can analyze feedback patterns by user segment, customer, or use case.
5. Monitor Feedback Rates
Track what percentage of traces receive feedback. Low rates might indicate:
Feedback UI is not prominent enough
Users don’t understand what feedback means
Feedback process is too complex
Error Responses
| Status Code | Error | Solution |
| --- | --- | --- |
| 400 | Invalid request format | Check JSON syntax and required fields |
| 401 | Invalid API key | Verify your API key is correct |
| 404 | Trace not found | Ensure trace_id matches an existing trace |
| 429 | Rate limit exceeded | Implement exponential backoff retry |
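For 429 responses, the retry delays can be generated with exponential backoff and jitter. This is a sketch: `backoff_delays` is an illustrative helper, meant to be paired with `time.sleep()` around your retry loop.

```python
import random

def backoff_delays(max_retries=5, base=0.5, cap=30.0):
    """Yield exponential backoff delays in seconds with full jitter.

    Upper bounds grow as base * 2**attempt (0.5s, 1s, 2s, ...), capped
    at `cap` so a long retry sequence never waits unboundedly.
    """
    for attempt in range(max_retries):
        yield random.uniform(0, min(cap, base * 2 ** attempt))
```

Jitter spreads retries out so many clients hitting the rate limit at once don't all retry at the same instant.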
Next Steps
Search Traces: Find trace IDs using your own identifiers
Complete Example: See a full end-to-end workflow with all endpoints
Slack Integration: Automatically match Slack feedback to traces