Overview

This guide shows you how to build an AI agent powered by OpenAI’s GPT models. You’ll learn how to integrate the OpenAI SDK with Cycls to create intelligent conversational agents that can understand context and provide helpful responses.

Prerequisites

Before starting, make sure you have:
  • OpenAI API Key: Get your API key from OpenAI Platform
  • Cycls Account: Set up your Cycls account for cloud deployment
  • Python Environment: Python 3.8+ with pip installed (a quick sanity check follows this list)
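
To verify your environment before you begin, you can run a quick sanity check. This snippet assumes you export your key as OPENAI_API_KEY, the variable name used later in this guide:

import os
import sys

# Sanity check: confirm the Python version and that the API key is visible
assert sys.version_info >= (3, 8), "Python 3.8+ is required"
print("OPENAI_API_KEY set:", bool(os.getenv("OPENAI_API_KEY")))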

Local Development

Let’s start by building a simple LLM agent for local development:

Step 1: Basic Setup

import cycls
from openai import AsyncOpenAI

# Initialize agent for local development
agent = cycls.Agent()

# Initialize OpenAI client outside the function (local development only;
# see the cloud deployment section for the in-function pattern)
client = AsyncOpenAI(api_key="YOUR_OPENAI_API_KEY")  # prefer an environment variable; see Best Practices

# Simple LLM function using OpenAI
async def llm(messages):
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        temperature=0.7,
        stream=True
    )
    
    # Stream the response
    async def event_stream():
        async for chunk in response:
            content = chunk.choices[0].delta.content
            if content:
                yield content
    
    return event_stream()

# Register your agent
@agent("my-agent")
async def my_agent(context):
    return await llm(context.messages)

# Run locally
agent.run()

Step 2: Test Your Agent

  1. Start the server: Run your script (for example, python main.py), which calls agent.run()
  2. Open your browser: Go to http://127.0.0.1:8000
  3. Test the conversation: Try asking questions and see how your agent responds

Step 3: Customize the Response

You can customize how your agent responds by modifying the LLM function:

# Add system message for personality
system_message = {
    "role": "system", 
    "content": "You are a helpful AI assistant. Be concise and friendly in your responses."
}

async def llm(messages):
    # Combine system message with conversation history
    full_messages = [system_message] + messages
    
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=full_messages,
        temperature=0.7,
        max_tokens=500,
        stream=True
    )
    
    async def event_stream():
        async for chunk in response:
            content = chunk.choices[0].delta.content
            if content:
                yield content
    
    return event_stream()

Cloud Deployment

Once you’re satisfied with your local agent, deploy it to the cloud:
Local vs Cloud Import Pattern: In local development, you can import packages outside the function. For cloud deployment, all imports must be inside the function to avoid import errors. This applies to any package (OpenAI, requests, pandas, etc.).
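
For example, here is the pattern in miniature, using requests as a stand-in for any third-party package (the handler body is purely illustrative):

import cycls

agent = cycls.Agent(pip=["requests"])

@agent("my-agent")
async def my_agent(context):
    # Cloud deployment: import inside the function, where the
    # packages listed in pip=[...] are installed
    import requests

    r = requests.get("https://api.github.com")

    async def stream():
        yield f"GitHub API status: {r.status_code}"

    return stream()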

Step 1: Configure for Cloud

import cycls

# Initialize agent with cloud configuration
agent = cycls.Agent(
    pip=["openai"],  # Include OpenAI package
    keys=["YOUR_AGENT_KEY_1", "YOUR_AGENT_KEY_2"]  # Cycls cloud keys
)

async def llm(messages):
    # Import inside function to avoid import errors in cloud deployment
    import os
    from openai import AsyncOpenAI
    
    # Load environment variables and initialize the client inside the function
    api_key = os.getenv("OPENAI_API_KEY")
    client = AsyncOpenAI(api_key=api_key)
    model = "gpt-4o"
    
    # Add system message for personality (inside function for cloud deployment)
    system_message = {
        "role": "system", 
        "content": "You are a helpful AI assistant. Be concise and friendly in your responses."
    }
    
    # Combine system message with conversation history
    full_messages = [system_message] + messages
    
    response = await client.chat.completions.create(
        model=model,
        messages=full_messages,
        temperature=0.7,
        stream=True
    )
    
    # Yield the content from the streaming response
    async def event_stream():
        async for chunk in response:
            content = chunk.choices[0].delta.content
            if content:
                yield content
    
    return event_stream()

@agent("my-agent", auth=False)
async def my_agent(context):
    return await llm(context.messages)

agent.push()

Step 2: Deploy to Production

# Deploy to production with public URL
agent.push(prod=True)

Best Practices

API Key Security

  • Environment Variables: Store your OpenAI API key in environment variables
  • Never Hardcode: Avoid putting API keys directly in your code
  • Rotate Keys: Regularly rotate your API keys for security

async def llm(messages):
    # Import inside the function so the same code works in cloud deployment
    import os
    from openai import AsyncOpenAI
    
    # Read the API key from an environment variable instead of hardcoding it
    api_key = os.getenv("OPENAI_API_KEY")
    client = AsyncOpenAI(api_key=api_key)
    # ... rest of the function
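
For local development, a .env file keeps keys out of your shell profile. This sketch assumes the optional python-dotenv package (pip install python-dotenv); Cycls itself does not require it:

import os
from dotenv import load_dotenv

# Read key=value pairs from a local .env file into os.environ
load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")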

Performance Optimization

  • Streaming: Always use streaming so users see output as it is generated
  • Token Limits: Set an appropriate max_tokens to control costs
  • Caching: Consider caching frequent responses (see the sketch after this list)
  • Rate Limiting: Implement rate limiting for production use
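
As an illustration of the caching idea, here is a minimal in-memory sketch that buffers a streamed response and replays it for an identical conversation. The names _cache and cached_llm are illustrative; a production setup would use an external store with expiry rather than a process-local dict:

import json

_cache = {}  # process-local cache, for illustration only

async def cached_llm(messages):
    key = json.dumps(messages, sort_keys=True)

    if key in _cache:
        # Replay the cached text as a single-chunk stream
        async def replay():
            yield _cache[key]
        return replay()

    stream = await llm(messages)  # the llm() defined earlier in this guide

    async def tee():
        parts = []
        async for chunk in stream:
            parts.append(chunk)
            yield chunk
        _cache[key] = "".join(parts)  # store only after the full response has streamed

    return tee()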

Troubleshooting

Common Issues

  1. Import Errors: Always import OpenAI inside the function for cloud deployment
  2. API Key Issues: Verify your OpenAI API key is valid and has sufficient credits (see the sketch after this list)
  3. Streaming Problems: Ensure your function properly yields content
  4. Memory Issues: Monitor token usage to avoid hitting limits
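
For API key and rate-limit problems in particular, catching the SDK's typed exceptions makes failures easier to diagnose. A minimal sketch (the wrapper name safe_completion is illustrative):

from openai import AsyncOpenAI, AuthenticationError, RateLimitError

async def safe_completion(client: AsyncOpenAI, messages):
    try:
        return await client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            stream=True,
        )
    except AuthenticationError:
        # Invalid or revoked key: check OPENAI_API_KEY and your account
        raise
    except RateLimitError:
        # Quota exhausted or requests too frequent: check usage and billing
        raise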