LangChain is a framework for building applications powered by language models: it handles chaining together components such as LLMs, prompts, and memory. This guide shows how to combine LangChain’s flexibility with Cycls’ simple deployment to create and ship robust AI agents.

Prerequisites

  • Python 3.9+
  • cycls package installed
  • Docker installed (for local testing)
  • OpenAI API key
Note: This guide uses OpenAI, but LangChain and Cycls support many providers including Anthropic, Google Gemini, Mistral, Cohere, and more. Simply swap the pip dependency and model name.
Install the Cycls package:
pip install cycls

Step 1: Create the Agent

Create a new file called app.py:
import cycls

@cycls.app(pip=["langchain", "langchain-openai"], copy=[".env"])
async def app(context):
    from langchain.chat_models import init_chat_model

    # Initialize the chat model (uses OPENAI_API_KEY from .env)
    model = init_chat_model("gpt-4o")

    # Get the user's message
    query = context.messages[-1]["content"]

    # Stream the response
    async for chunk in model.astream(query):
        if chunk.content:
            yield chunk.content

app.local()
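The snippet above forwards only the latest user message, so the model has no memory of earlier turns. For multi-turn conversations you can pass the full history instead: LangChain chat models accept a list of `(role, content)` tuples, and `context.messages` (assumed here, as in the snippet above, to be a list of `{"role": ..., "content": ...}` dicts) converts easily. A minimal sketch:

```python
def to_langchain_messages(messages):
    """Convert role/content message dicts into the (role, content)
    tuples that LangChain chat models accept as input."""
    return [(m["role"], m["content"]) for m in messages]

# Inside the app body, stream over the whole conversation instead of
# just the last message:
#   async for chunk in model.astream(to_langchain_messages(context.messages)):
#       ...
```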

Step 2: Configure Environment

Create a .env file in the same directory to store your API key:
OPENAI_API_KEY=sk-proj-...
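A `.env` file is just plain `KEY=VALUE` lines; blank lines and `#` comments are conventionally ignored. The `copy=[".env"]` argument in the decorator ships this file alongside the app. To make the format concrete, here is a minimal line parser (illustrative only, not the actual loader Cycls uses):

```python
def parse_env_line(line):
    """Parse one KEY=VALUE line from a .env file.
    Returns (key, value), or None for blank lines and # comments."""
    line = line.strip()
    if not line or line.startswith("#") or "=" not in line:
        return None
    key, _, value = line.partition("=")
    return key.strip(), value.strip()
```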

Step 3: Run the Agent

Execute your agent:
python app.py
The agent will start and provide an endpoint for interaction.

Full Code

Here is the complete app.py file:
import cycls

@cycls.app(pip=["langchain", "langchain-openai"], copy=[".env"])
async def app(context):
    from langchain.chat_models import init_chat_model

    # Initialize the chat model (uses OPENAI_API_KEY from .env)
    model = init_chat_model("gpt-4o")

    # Get the latest user message
    query = context.messages[-1]["content"]

    # Stream the response
    async for chunk in model.astream(query):
        if chunk.content:
            yield chunk.content

app.local()

Using Other LLM Providers

Swap the dependency and model name to use a different provider:
| Provider  | Pip Package              | Model Example                |
|-----------|--------------------------|------------------------------|
| OpenAI    | `langchain-openai`       | `gpt-4o`                     |
| Anthropic | `langchain-anthropic`    | `claude-sonnet-4-5-20250929` |
| Google    | `langchain-google-genai` | `gemini-3.0-pro`             |
| Mistral   | `langchain-mistralai`    | `mistral-large-latest`       |
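`init_chat_model` also accepts a combined `"provider:model"` string, so switching providers comes down to changing the `pip` entry in the decorator plus one string. A sketch (the helper below is hypothetical, just to show how the identifier is built; with Anthropic you would also put `ANTHROPIC_API_KEY` in `.env`):

```python
# In the decorator, swap the dependency:
#   @cycls.app(pip=["langchain", "langchain-anthropic"], copy=[".env"])

def model_id(provider: str, model: str) -> str:
    """Build the 'provider:model' identifier accepted by init_chat_model."""
    return f"{provider}:{model}"

# model = init_chat_model(model_id("anthropic", "claude-sonnet-4-5-20250929"))
```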

Deploy to Cloud

To deploy to production, set your Cycls API key and call app.deploy():
import cycls
import os

cycls.api_key = os.getenv("CYCLS_API_KEY")

@cycls.app(pip=["langchain", "langchain-openai"], copy=[".env"])
async def app(context):
    from langchain.chat_models import init_chat_model

    model = init_chat_model("gpt-4o")
    query = context.messages[-1]["content"]

    async for chunk in model.astream(query):
        if chunk.content:
            yield chunk.content

app.deploy()
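Note that the only differences from the local version are setting `cycls.api_key` and calling `app.deploy()` instead of `app.local()`. One convenient pattern (a generic sketch, not a Cycls feature) is to pick the mode from an environment variable so a single file covers both development and production:

```python
import os

def run_mode(env=None):
    """Return 'deploy' when running in production, else 'local'.
    Reads CYCLS_ENV (a hypothetical variable name) when env is not given."""
    env = env if env is not None else os.getenv("CYCLS_ENV", "local")
    return "deploy" if env == "production" else "local"

# if run_mode() == "deploy":
#     app.deploy()
# else:
#     app.local()
```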