October 19, 2025


Configure Continue and comparison of Continue, Cursor, and Codex

Below is a concise guide to setting up OpenAI models in Continue, followed by a head-to-head comparison of Continue, Cursor, and Codex when you hold the model constant. I assume cost is not a constraint.

The short answer on “what model should I use”

If you want the strongest coding performance from OpenAI right now, choose GPT-5-Codex inside your tools. It is a GPT-5 family variant tuned for agentic coding and is available in the API and in OpenAI’s Codex products. You can also use standard GPT-5 for broad tasks and GPT-4o when you need image understanding mixed with code.


How to configure OpenAI models in Continue

Where your config lives
Continue now uses a YAML-based config. On first run it creates config.yaml, which you can open from the Continue Chat sidebar. Paths are:

  • macOS or Linux: ~/.continue/config.yaml

  • Windows: %USERPROFILE%\.continue\config.yaml
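As a sanity check, the two default locations above can be resolved with a short Python helper (a sketch for locating the file, not part of Continue itself):

```python
import os
import platform

def continue_config_path() -> str:
    """Return the default location of Continue's config.yaml for this OS."""
    if platform.system() == "Windows":
        base = os.environ.get("USERPROFILE", os.path.expanduser("~"))
    else:
        # macOS and Linux both use the home directory
        base = os.path.expanduser("~")
    return os.path.join(base, ".continue", "config.yaml")

print(continue_config_path())
```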

Minimal OpenAI setup
Create or edit config.yaml and add a model entry. Put your OpenAI API key from the OpenAI console in the apiKey field.

    name: MyContinueSetup
    version: v1
    schema: v1
    models:
      - name: GPT-5-Codex
        provider: openai
        model: gpt-5-codex
        apiKey: $OPENAI_API_KEY
        roles:
          - chat
          - edit
          - apply

This mirrors the provider block in Continue’s OpenAI guide and the broader config reference.

If you are routing through an OpenAI-compatible endpoint
Continue lets you point to an OpenAI-compatible server by setting apiBase:

    models:
      - name: MyRouterModel
        provider: openai
        model: gpt-5-codex
        apiBase: https://your-endpoint.example.com/v1
        apiKey: $MY_ROUTER_KEY

This is the documented way to use OpenAI-compatible providers.
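"OpenAI compatible" simply means the server accepts the same request shape at apiBase + /chat/completions. A minimal sketch of how such a request is assembled (the URL and key are the placeholders from the config above; no network call is made):

```python
def build_request(api_base: str, api_key: str, model: str, prompt: str) -> dict:
    """Assemble (but do not send) a chat-completion request for an
    OpenAI-compatible endpoint. URL and key are placeholders."""
    return {
        # apiBase is the prefix; the client appends the standard path
        "url": api_base.rstrip("/") + "/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_request("https://your-endpoint.example.com/v1",
                    "MY_ROUTER_KEY", "gpt-5-codex", "Say hello")
print(req["url"])
```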

Capabilities and agent mode
Continue auto-detects most model features. If you need to add capabilities manually for tools or images, you can add a capabilities array:

    models:
      - name: GPT-4o
        provider: openai
        model: gpt-4o
        capabilities:
          - tool_use
          - image_input

Use this only when autodetection does not catch a new deployment.
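Conceptually, a manual capabilities list adds to whatever autodetection found. A toy sketch of that merge (the autodetection table here is illustrative, not Continue's real one):

```python
# Illustrative autodetection table; Continue's real detection logic differs.
AUTODETECTED = {"gpt-4o": {"tool_use"}}

def effective_capabilities(model: str, manual=None) -> set:
    """Merge autodetected capabilities with a manual `capabilities` list."""
    caps = set(AUTODETECTED.get(model, set()))
    caps |= set(manual or [])  # manual entries add to, never remove from, the set
    return caps

print(sorted(effective_capabilities("gpt-4o", ["image_input"])))
# → ['image_input', 'tool_use']
```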

Tip
Continue supports multiple models with different roles. For example, you can keep GPT-5-Codex for agent work and a fast model for autocomplete in the same file. See the reference for roles, default options, and timeouts.


Continue vs Cursor vs Codex, model held constant

Assume you call the same OpenAI model through each tool, for example GPT-5-Codex in Continue, Cursor, and Codex.

1) Integration depth and control

  • Continue - Works as an extension inside VS Code and JetBrains. It is open source and configurable through YAML and a Hub. You define models, rules, prompts, and Model Context Protocol servers. You can point to OpenAI-compatible backends and tune agent behavior. This gives strong control when you want a custom workflow.

  • Cursor - A full editor with deep AI features such as chat, inline edits, and tab completion. You can add your own API keys in Settings so you can use your OpenAI quota. Cursor focuses on a smooth editor experience and team admin controls.

  • Codex - An OpenAI coding agent that runs in the cloud, locally through the Codex CLI, and as an IDE extension. It can read a repository, run commands, propose pull requests, and handle many tasks in parallel in a sandbox. It defaults to GPT-5 and supports GPT-5-Codex. This is the most agent-native option from OpenAI.

2) Autonomy and operations

  • Continue - Agent mode exists and can use tools. You decide which tools and rules to load, which makes it easy to keep autonomy within your boundaries.

  • Cursor - Strong inline flow for chat and edits. With the new Bugbot, Cursor extends into automated code review for pull requests with GitHub integration. This is valuable if your main need is review gates and editor driven fixes.

  • Codex - Designed for multi task execution in a managed sandbox. It can branch tasks, run tests, and propose changes as pull requests, then hand the work back to your IDE. This is the most agent forward posture.

3) Customization and vendor choice

  • Continue - Maximum flexibility. You can use any OpenAI model and many OpenAI-compatible deployments by changing apiBase. You can attach MCP tools and custom prompts and rules. Great for teams that want to own the stack.

  • Cursor - Lets you add custom API keys for OpenAI and other providers, though the exact model menu and base URL override options can change over time. It is opinionated in product design to keep the experience simple.

  • Codex - Most features are available through your ChatGPT plan and the Codex CLI. It is OpenAI first by design and exposes the latest agent features fast.

4) Team features and review

  • Continue - Works with your existing Git flow and CI because you choose the model and tools. You can use the CLI and agents in CI as needed.

  • Cursor - Team billing, privacy controls, SSO, and a Bugbot add-on that enforces pre-merge reviews. If your priority is a gated review process that runs inside GitHub, Cursor plus Bugbot is a strong fit.

  • Codex - Builds and reviews code in a sandbox and can open pull requests. It now has an IDE extension so you can move between cloud and local work.

5) The “what should I pick” rule of thumb

  • Pick Continue if you want full control, model freedom, and deep configuration with YAML and Hub. It is ideal when you expect to tune prompts, rules, and tools per repo.
  • Pick Cursor if you want a dedicated editor that makes chat, edits, autocomplete, and review feel native, and you want an opinionated path for team rollout.

  • Pick Codex if you want a first party OpenAI agent that can do multi task work in a sandbox and you plan to adopt GPT-5-Codex quickly. It is the fastest path to new agent features from OpenAI.


A note on “Codex the model” vs “Codex the agent”

The original Codex models from 2021 were deprecated in March 2023. The new Codex is an agent built on modern models like codex-1 and GPT-5-Codex. Keep that distinction in mind when you compare tools. 


Suggested Continue configs you can copy

Single model for everything

    name: Solo
    version: v1
    schema: v1
    models:
      - name: GPT-5-Codex
        provider: openai
        model: gpt-5-codex
        apiKey: $OPENAI_API_KEY
        roles: [chat, edit, apply]

Two model setup for speed plus strength

    name: SpeedAndStrength
    version: v1
    schema: v1
    models:
      - name: GPT-5-Codex
        provider: openai
        model: gpt-5-codex
        apiKey: $OPENAI_API_KEY
        roles: [chat, edit, apply]
      - name: GPT-5-mini
        provider: openai
        model: gpt-5-mini
        apiKey: $OPENAI_API_KEY
        roles: [autocomplete]

Use the first for agent work and edits and the second for inline suggestions. See the model and roles reference to tune defaults like temperature and max tokens.
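The role split in the two-model setup can be pictured as a simple first-match lookup. This is a sketch of the idea, not Continue's actual routing code; the model names mirror the config above:

```python
# Model entries mirroring the two-model config; roles decide routing.
MODELS = [
    {"name": "GPT-5-Codex", "model": "gpt-5-codex",
     "roles": ["chat", "edit", "apply"]},
    {"name": "GPT-5-mini", "model": "gpt-5-mini",
     "roles": ["autocomplete"]},
]

def model_for_role(role: str) -> str:
    """Return the first configured model that serves the given role."""
    for entry in MODELS:
        if role in entry["roles"]:
            return entry["model"]
    raise KeyError(f"no model configured for role {role!r}")

print(model_for_role("autocomplete"))  # → gpt-5-mini
```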


Final recommendation

If we ignore cost, start with GPT-5-Codex across the board. In Continue, set it as your primary agent model with tool use enabled. In Cursor, add your OpenAI key and select the same model for chat and edits. Try Codex as a companion when you want parallel task execution in a sandbox and pull request creation at scale. This way you hold the model constant and pick the surface that matches your workflow. 
