Claude 5 API Documentation
Integrate Claude's powerful AI into your applications
Quick Start
1. Get Your API Key
Sign up for a Claude API account and generate your API key from the dashboard.
2. Install the SDK
# Python
pip install anthropic
# Node.js
npm install @anthropic-ai/sdk
# cURL (no installation needed)
curl -X POST https://api.anthropic.com/v1/messages
3. Make Your First Request
Python Example:
import anthropic

client = anthropic.Anthropic(
    api_key="your-api-key-here"
)

message = client.messages.create(
    model="claude-5-opus-20260514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude!"}
    ]
)

print(message.content)
JavaScript Example:
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: 'your-api-key-here'
});

const message = await client.messages.create({
  model: 'claude-5-opus-20260514',
  max_tokens: 1024,
  messages: [
    {role: 'user', content: 'Hello, Claude!'}
  ]
});

console.log(message.content);
Available Models
Claude 5 Opus
Model ID: claude-5-opus-20260514
Most powerful model. Best for complex tasks, advanced reasoning, and superior code generation.
- 200K context window
- Best coding performance (96.4% HumanEval)
- Superior reasoning and analysis
- Input: $15/M tokens, Output: $75/M tokens
Claude 5 Sonnet
Model ID: claude-5-sonnet-20260514
Balanced performance and speed. Ideal for most production use cases.
- 200K context window
- 2x faster than Opus
- Great balance of quality and cost
- Input: $3/M tokens, Output: $15/M tokens
Claude 5 Haiku
Model ID: claude-5-haiku-20260514
Fastest and most cost-effective. Perfect for high-volume applications.
- 100K context window
- Ultra-fast response times
- Most affordable option
- Input: $0.25/M tokens, Output: $1.25/M tokens
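The per-token prices listed above translate directly into per-request costs. A minimal sketch of that arithmetic — the `PRICES` table and `estimate_cost` helper are illustrative, not part of the SDK:

```python
# Rough request-cost estimator built from the per-million-token prices above.
PRICES = {
    # model id: (input $ per million tokens, output $ per million tokens)
    "claude-5-opus-20260514": (15.00, 75.00),
    "claude-5-sonnet-20260514": (3.00, 15.00),
    "claude-5-haiku-20260514": (0.25, 1.25),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    input_price, output_price = PRICES[model]
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Example: 2,000 input tokens and 500 output tokens on Sonnet.
print(round(estimate_cost("claude-5-sonnet-20260514", 2000, 500), 4))  # → 0.0135
```

Actual billed token counts come back in the API response's usage data, so an estimator like this is best used for pre-request budgeting.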
Key API Features
Streaming Responses
Get real-time responses as Claude generates them:
const stream = await client.messages.create({
  model: 'claude-5-opus-20260514',
  max_tokens: 1024,
  messages: [{role: 'user', content: 'Write a story'}],
  stream: true
});

for await (const event of stream) {
  if (event.type === 'content_block_delta') {
    process.stdout.write(event.delta.text);
  }
}
System Prompts
Set context and behavior with system prompts:
const message = await client.messages.create({
  model: 'claude-5-opus-20260514',
  max_tokens: 1024,
  system: "You are a helpful coding assistant that writes clean, documented code.",
  messages: [{role: 'user', content: 'Write a sorting function'}]
});
Image Understanding
Analyze images with Claude:
const message = await client.messages.create({
  model: 'claude-5-opus-20260514',
  max_tokens: 1024,
  messages: [{
    role: 'user',
    content: [
      {
        type: 'image',
        source: {
          type: 'base64',
          media_type: 'image/jpeg',
          data: base64Image
        }
      },
      {
        type: 'text',
        text: "What's in this image?"
      }
    ]
  }]
});
Best Practices
🔐 Security
- Never expose API keys in client-side code
- Use environment variables for keys
- Rotate keys regularly
- Implement rate limiting
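The first two points combine naturally: load the key from the environment instead of embedding it in source. A sketch, assuming the conventional `ANTHROPIC_API_KEY` variable name (the Python SDK also picks it up automatically when the client is constructed without an `api_key` argument — verify against your SDK version):

```python
import os

def load_api_key(env_var: str = "ANTHROPIC_API_KEY") -> str:
    """Read the API key from the environment; fail loudly if it's missing."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set the {env_var} environment variable")
    return key

# client = anthropic.Anthropic(api_key=load_api_key())
```

Failing at startup with a clear message beats a confusing authentication error on the first request.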
⚡ Performance
- Use Haiku for simple tasks
- Enable streaming for better UX
- Cache common responses
- Batch similar requests
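Caching common responses can be as simple as keying replies on the exact (model, prompt) pair so repeated identical prompts never hit the API twice. A minimal in-memory sketch — `cached_message` and its `call_api` parameter are illustrative helpers, not SDK APIs:

```python
# Minimal in-memory response cache keyed on (model, prompt).
_cache: dict[tuple[str, str], str] = {}

def cached_message(model: str, prompt: str, call_api) -> str:
    """Return a cached reply for a prompt we've seen before; otherwise call the API."""
    key = (model, prompt)
    if key not in _cache:
        _cache[key] = call_api(model, prompt)  # only hit the API on a cache miss
    return _cache[key]
```

In production you would bound the cache size (e.g. `functools.lru_cache`) and add a TTL, since identical prompts need not always return identical answers.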
💰 Cost Optimization
- Choose the right model for your task
- Set appropriate max_tokens limits
- Monitor usage with analytics
- Use prompt caching when available
✅ Error Handling
- Implement retry logic with exponential backoff
- Handle rate limits gracefully
- Log errors for debugging
- Provide fallback responses
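Retry with exponential backoff can be wrapped around any API call. A sketch of the pattern — `RateLimitError` here is a stand-in for whatever exception your client raises on HTTP 429, and the jitter factor spreads out retries from concurrent clients:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the client's rate-limit exception."""

def with_retries(fn, max_attempts: int = 5, base_delay: float = 1.0):
    """Call fn(), retrying on RateLimitError with jittered exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Delay doubles each attempt, scaled by random jitter in [0.5, 1.0).
            delay = base_delay * (2 ** attempt) * (0.5 + random.random() / 2)
            time.sleep(delay)
```

Usage: `with_retries(lambda: client.messages.create(...))`. Pair this with logging inside the `except` block so rate-limit pressure is visible in your metrics.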