This guide shows you how to build an AI agent powered by OpenAI’s GPT models. You’ll learn how to integrate the OpenAI SDK with Cycls to create intelligent conversational agents that can understand context and provide helpful responses.
You can customize how your agent responds by modifying the LLM function:
```python
# Add system message for personality
system_message = {
    "role": "system",
    "content": "You are a helpful AI assistant. Be concise and friendly in your responses."
}

async def llm(messages):
    # Combine system message with conversation history
    full_messages = [system_message] + messages

    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=full_messages,
        temperature=0.7,
        max_tokens=500,
        stream=True
    )

    async def event_stream():
        async for chunk in response:
            content = chunk.choices[0].delta.content
            if content:
                yield content

    return event_stream()
```
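To try the customized function locally, you can drive it from a small async entry point. This is a minimal sketch, assuming `client` is an `AsyncOpenAI` instance already initialized at module level (which is fine for local development, per the note below):

```python
import asyncio

async def main():
    # Stream a single-turn reply and print tokens as they arrive
    stream = await llm([{"role": "user", "content": "Hello!"}])
    async for token in stream:
        print(token, end="", flush=True)
    print()

asyncio.run(main())
```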
Once you’re satisfied with your local agent, deploy it to the cloud:
Local vs Cloud Import Pattern: In local development, you can import packages outside the function. For cloud deployment, all imports must be inside the function to avoid import errors. This applies to any package (OpenAI, requests, pandas, etc.).
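Schematically, the two patterns differ only in where the import statement lives. A minimal sketch (the two halves illustrate the alternatives side by side, not one module):

```python
# Local development: a top-level import is fine
from openai import AsyncOpenAI

# Cloud deployment: the import moves inside the function, so it resolves
# in the cloud runtime where pip=["openai"] installed the package
async def llm(messages):
    from openai import AsyncOpenAI
    ...
```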
```python
import cycls

# Initialize agent with cloud configuration
agent = cycls.Agent(
    pip=["openai"],                                # Include OpenAI package
    keys=["YOUR_AGENT_KEY_1", "YOUR_AGENT_KEY_2"]  # Cycls cloud keys
)

async def llm(messages):
    # Import inside the function to avoid import errors in cloud deployment
    import os
    from openai import AsyncOpenAI

    # Load environment variables and initialize the client inside the function
    api_key = os.getenv("OPENAI_API_KEY")
    client = AsyncOpenAI(api_key=api_key)
    model = "gpt-4o"

    # Add system message for personality (inside function for cloud deployment)
    system_message = {
        "role": "system",
        "content": "You are a helpful AI assistant. Be concise and friendly in your responses."
    }

    # Combine system message with conversation history
    full_messages = [system_message] + messages

    response = await client.chat.completions.create(
        model=model,
        messages=full_messages,
        temperature=1.0,
        stream=True
    )

    # Yield the content from the streaming response
    async def event_stream():
        async for chunk in response:
            content = chunk.choices[0].delta.content
            if content:
                yield content

    return event_stream()

@agent("my-agent", auth=False)
async def my_agent(context):
    return await llm(context.messages)

agent.push()
```
- Environment Variables: Store your OpenAI API key in environment variables.
- Never Hardcode: Avoid putting API keys directly in your code.
- Rotate Keys: Regularly rotate your API keys for security.
```python
async def llm(messages):
    # Keep imports inside the function so this also works in cloud deployment
    import os
    from openai import AsyncOpenAI

    # Use environment variable for API key
    api_key = os.getenv("OPENAI_API_KEY")
    client = AsyncOpenAI(api_key=api_key)
    # ... rest of the function
```
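As a small hardening step beyond the snippet above (an addition, not part of the original pattern), you can fail fast when the variable is missing rather than letting the client error later:

```python
import os
from openai import AsyncOpenAI

api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is not set; export it before starting the agent.")
client = AsyncOpenAI(api_key=api_key)
```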