Providers

An AgentFlow Agent is a thin wrapper around an LLM call. The provider decides which backend service runs that call and how the agent authenticates with it.

from agentflow.core.graph import Agent

agent = Agent(
    model="gpt-4o",
    provider="openai",
    system_prompt=[{"role": "system", "content": "You are a helpful assistant."}],
)

Supported providers

Provider   Backend                   Typical models
openai     OpenAI API                gpt-4o, gpt-4o-mini, o1, o3, o4-mini
google     Gemini API or Vertex AI   gemini-2.0-flash, gemini-2.5-flash, gemini-2.5-pro
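
Either provider drops into the same constructor; only the model and provider arguments change. A short sketch using model names from the table above (the system prompt is just a placeholder):

from agentflow.core.graph import Agent

prompt = [{"role": "system", "content": "You are a helpful assistant."}]

# GPT-class model on the OpenAI API
openai_agent = Agent(model="gpt-4o-mini", provider="openai", system_prompt=prompt)

# Gemini model on the Gemini API (the default Google backend)
gemini_agent = Agent(model="gemini-2.0-flash", provider="google", system_prompt=prompt)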

Choosing a provider

  • openai — the default choice for GPT-class and reasoning (o1, o3) models.
  • google — Gemini models, with two backends behind one provider:
    • Gemini API (default) — sign up at Google AI Studio, copy an API key, done.
    • Vertex AI — the same models routed through Google Cloud, adding IAM, audit logs, regional data residency, and VPC Service Controls. Enable it with use_vertex_ai=True on the agent or GOOGLE_GENAI_USE_VERTEXAI=true in the environment (both shown in the sketch after this list). See Using Vertex AI.
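
Both switches in code — a minimal sketch, reusing the Agent constructor from above; any Google Cloud project configuration Vertex AI itself needs (credentials, project ID) is omitted here:

import os

from agentflow.core.graph import Agent

# Option 1: flip every agent in the process to Vertex AI via the environment.
os.environ["GOOGLE_GENAI_USE_VERTEXAI"] = "true"

# Option 2: opt a single agent in with the constructor flag.
agent = Agent(
    model="gemini-2.5-pro",
    provider="google",
    use_vertex_ai=True,
    system_prompt=[{"role": "system", "content": "You are a helpful assistant."}],
)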

Provider inference

If you don't pass provider, AgentFlow infers it from the model name prefix:

Model prefix      Inferred provider
gpt, o1, o3, o4   openai
gemini            google
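
Conceptually, inference is a simple prefix match over the table above. A minimal sketch of the idea — the helper name, table, and error behavior below are illustrative, not AgentFlow's actual internals:

_PREFIX_TO_PROVIDER = {
    "gpt": "openai",
    "o1": "openai",
    "o3": "openai",
    "o4": "openai",
    "gemini": "google",
}

def infer_provider(model: str) -> str:
    # First prefix that matches the model name wins.
    for prefix, provider in _PREFIX_TO_PROVIDER.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"Cannot infer a provider for {model!r}; pass provider= explicitly.")

infer_provider("gemini-2.5-flash")  # -> "google"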

Switching backends within Google

The provider is just a constructor argument, and toggling Vertex AI is a one-line change:

# Development: API key (Gemini API)
agent = Agent(model="gemini-2.5-flash", provider="google", ...)

# Production on GCP: same model, IAM-scoped access
agent = Agent(model="gemini-2.5-flash", provider="google", use_vertex_ai=True, ...)

See each provider page for setup details and full examples.