## Overview
Once your agent is deployed to the Cycls cloud platform, it automatically exposes an OpenAI-compatible chat completion API. This allows you to integrate your agent into any application that supports OpenAI’s API format.
## Enabling the API
To activate the chat completion API for your agent, you must define an `api_token` in your `Agent` configuration:
```python
import cycls

agent = cycls.Agent(
    pip=["openai", "requests"],
    keys=["ak-<token_id>", "as-<token_secret>"],
    api_token="sk-proj-1234567890"  # Required for API access
)

@agent("my-agent")
async def my_agent(context):
    return "Hello from my agent!"

agent.push(prod=True)
```
## API Token Configuration
The `api_token` is the key required to access your agent through the chat completions endpoint. It is used for agent-level authentication and must be included in every API request.
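Rather than hard-coding the token, you can read it from the environment. A minimal sketch — `CYCLS_API_TOKEN` is a hypothetical variable name of our choosing, not something the platform requires, and the fallback is the placeholder token used throughout this page:

```python
import os

# CYCLS_API_TOKEN is a hypothetical environment variable name; the
# fallback is the placeholder token from the examples on this page.
api_token = os.environ.get("CYCLS_API_TOKEN", "sk-proj-1234567890")
```

This keeps the token out of source control when the variable is set.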
## API Endpoint
Your deployed agent exposes a chat completion endpoint at:
```
POST https://your-agent.cycls.ai/chat/completions
```
The API follows the OpenAI chat completion format. An example request body:
```json
{
  "model": "my-agent",
  "messages": [
    {"role": "user", "content": "Hello, how are you?"}
  ],
  "stream": false,
  "temperature": 0.7,
  "max_tokens": 1000
}
```
A successful request returns a response in the same format:

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "my-agent",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking. How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 15,
    "total_tokens": 24
  }
}
```
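The assistant's reply sits at `choices[0].message.content`. A quick sketch of pulling fields out of the raw JSON response, using the example values from above:

```python
import json

# The example response from above, as a raw JSON string.
raw = """
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "my-agent",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking. How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 9, "completion_tokens": 15, "total_tokens": 24}
}
"""

data = json.loads(raw)
reply = data["choices"][0]["message"]["content"]      # the assistant's text
tokens_used = data["usage"]["total_tokens"]           # 24 in this example
```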
## Authentication
Include your API token in the request headers:
```bash
curl -X POST https://your-agent.cycls.ai/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-proj-1234567890" \
  -d '{
    "model": "my-agent",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```
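The same request can be built from Python without any SDK. A minimal sketch using only the standard library — the URL and token are the placeholder values from the `curl` example above:

```python
import json
import urllib.request

def build_chat_request(base_url, api_token, messages, model="my-agent"):
    """Build the POST request; sending it is left to the caller."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        base_url + "/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # Bearer token, as in the curl example above
            "Authorization": "Bearer " + api_token,
        },
        method="POST",
    )

req = build_chat_request(
    "https://your-agent.cycls.ai",
    "sk-proj-1234567890",
    [{"role": "user", "content": "Hello!"}],
)
# To send it: urllib.request.urlopen(req), then json-decode the response body.
```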
## Using the OpenAI SDK
You can use the official OpenAI Python SDK to interact with your agent:
```python
import openai

# Configure the client to use your agent
client = openai.OpenAI(
    api_key="sk-proj-1234567890",
    base_url="https://my-agent.cycls.ai"
)

# Send a message to your agent
response = client.chat.completions.create(
    model="my-agent",
    messages=[
        {"role": "user", "content": "What can you help me with?"}
    ]
)

print(response.choices[0].message.content)
```
## Streaming

Streaming behavior depends on your agent's implementation. If your agent supports streaming and you request it with `stream=True`, responses are returned as real-time chunks; agents that do not support streaming return the complete response in a single message instead.
## JavaScript/Node.js Example
```javascript
import OpenAI from 'openai';
const openai = new OpenAI({
apiKey: 'sk-proj-1234567890',
baseURL: 'https://my-agent.cycls.ai'
});
const response = await openai.chat.completions.create({
model: 'my-agent',
messages: [
{ role: 'user', content: 'Hello from JavaScript!' }
]
});
console.log(response.choices[0].message.content);
```
## Conversation History
The API is stateless: your agent sees only the messages you send, so each request should include the full conversation history:
```python
import openai

client = openai.OpenAI(
    api_key="sk-proj-1234567890",
    base_url="https://my-agent.cycls.ai"
)

# First message
response1 = client.chat.completions.create(
    model="my-agent",
    messages=[
        {"role": "user", "content": "My name is Alice"}
    ]
)

# Follow-up message (includes previous context)
response2 = client.chat.completions.create(
    model="my-agent",
    messages=[
        {"role": "user", "content": "My name is Alice"},
        {"role": "assistant", "content": response1.choices[0].message.content},
        {"role": "user", "content": "What's my name?"}
    ]
)
```
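The resend-everything pattern above can be wrapped in a small helper that accumulates history for you. A sketch — `send` is an assumption of this example: any callable that maps a messages list to a reply string, e.g. a thin wrapper around `client.chat.completions.create`:

```python
def make_chat(send):
    """Return an `ask(text)` function that maintains conversation history.

    `send` is any callable taking a list of {"role", "content"} dicts and
    returning the assistant's reply as a string.
    """
    history = []

    def ask(user_text):
        history.append({"role": "user", "content": user_text})
        reply = send(history)  # the full history goes out on every call
        history.append({"role": "assistant", "content": reply})
        return reply

    return ask
```

With the SDK, `send` could be `lambda msgs: client.chat.completions.create(model="my-agent", messages=msgs).choices[0].message.content`.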
## Error Handling
The API returns standard HTTP status codes:

- `200`: Success
- `400`: Bad Request (invalid parameters)
- `401`: Unauthorized (invalid API token)
- `429`: Rate Limited
- `500`: Internal Server Error
Catch request failures with the SDK's base exception, `OpenAIError`:

```python
import openai
from openai import OpenAIError

client = openai.OpenAI(
    api_key="sk-proj-1234567890",
    base_url="https://my-agent.cycls.ai"
)

try:
    response = client.chat.completions.create(
        model="my-agent",
        messages=[
            {"role": "user", "content": "Hello!"}
        ]
    )
    print(response.choices[0].message.content)
except OpenAIError as e:
    print(f"Error: {e}")
```
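For `429` responses, a common approach is to retry with exponential backoff. A hedged sketch — `RateLimitedError` here is a stand-in for whatever exception your client raises on HTTP 429 (with the OpenAI SDK, `openai.RateLimitError`):

```python
import time

class RateLimitedError(Exception):
    """Stand-in for the client's HTTP 429 exception."""

def with_backoff(call, retries=3, base_delay=1.0, sleep=time.sleep):
    """Invoke `call()`, retrying on rate limits with exponential backoff."""
    for attempt in range(retries + 1):
        try:
            return call()
        except RateLimitedError:
            if attempt == retries:
                raise  # out of retries; surface the error to the caller
            sleep(base_delay * 2 ** attempt)  # waits 1s, 2s, 4s, ...
```

Usage: `with_backoff(lambda: client.chat.completions.create(model="my-agent", messages=[...]))`, substituting your client's rate-limit exception in the `except` clause.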