
Kilo Code

Open-source AI coding agent for VS Code, JetBrains, and the CLI. The only Cline-family agent with JetBrains support; claims to be the highest-volume consumer on OpenRouter.

App type: Coding Agent
Local models: Yes
OpenRouter: Yes
Ollama: Yes
GPU required: Yes (for local inference)
Best for: JetBrains users wanting AI coding agents
Setup difficulty: Easy
Platforms: VS Code, JetBrains, CLI
Pricing: Open source, free

Kilo Code is an open-source AI coding agent for VS Code, JetBrains, and the CLI. It is the only Cline-family agent with JetBrains support and claims to be the highest-volume consumer on OpenRouter. Kilo Code is the newest fork in the Cline/Roo Code lineage, with 16K+ GitHub stars.

Kilo Code works with both local models and cloud APIs. It supports OpenRouter for unified access to 300+ models from a single API, and Ollama integration lets you run models locally on your own GPU. Kilo Code is open source (https://github.com/kilocode/kilocode), so you can inspect the code and self-host. For local coding models, 24 GB of VRAM is recommended; the hardware requirements match Cline and Roo Code for equivalent model sizes.
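Under the hood, "OpenRouter support" simply means the agent sends requests to OpenRouter's OpenAI-compatible chat endpoint with your API key. Here is a minimal sketch of that kind of request; the model slug and prompt are placeholders, and this illustrates the API rather than Kilo Code's actual client code:

```python
# Sketch: one chat request against OpenRouter's OpenAI-compatible API.
# Requires the OPENROUTER_API_KEY environment variable to be set.
import os
import requests

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def ask(model: str, prompt: str) -> str:
    resp = requests.post(
        OPENROUTER_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": model,  # placeholder slug, swap for any model OpenRouter lists
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("qwen/qwen-2.5-coder-32b-instruct", "Write a Python hello world."))
```

Switching between the 300+ models is just a matter of changing the model slug; the request shape and API key stay the same, which is what makes the single-provider setup convenient.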

Can it run on my hardware?

Minimum

24 GB VRAM recommended for local coding models. Same hardware requirements as Cline and Roo Code for equivalent model sizes.

Recommended

RTX 3090 or 4090 (24 GB) for Qwen3-Coder 30B at Q4. JetBrains IDEs also need memory of their own: allocate 8 GB+ of system RAM to the IDE, separate from model VRAM.

Approximate VRAM needed for recommended local models at Q4 with 8K context:

Model | Params | Q4 VRAM | Min GPU
Qwen3 30B-A3B (MoE) | 30B | ~19.8 GB | 24 GB
Qwen 2.5 Coder 32B Instruct | 32.5B | ~22.9 GB | 24 GB
Qwen3 32B | 32.8B | ~22.2 GB | 24 GB

Check your GPU against these models in the calculator →
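If you only want a rough sanity check rather than the calculator, a back-of-envelope estimate works: Q4 weights take roughly 0.58 bytes per parameter, plus a few GB for the KV cache and runtime overhead at 8K context. The sketch below uses those assumed constants, so it will not exactly reproduce the table above:

```python
# Rough Q4 VRAM estimate for a model at ~8K context.
# 0.58 bytes/parameter and the fixed KV-cache/overhead allowance are assumptions,
# not the site's calculator formula.
def q4_vram_gb(params_billion: float, kv_and_overhead_gb: float = 2.5) -> float:
    weights_gb = params_billion * 0.58  # ~4.6 bits per weight for common Q4 variants
    return weights_gb + kv_and_overhead_gb

for name, params in [
    ("Qwen3 30B-A3B (MoE)", 30.0),
    ("Qwen 2.5 Coder 32B Instruct", 32.5),
    ("Qwen3 32B", 32.8),
]:
    print(f"{name}: ~{q4_vram_gb(params):.1f} GB")
```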

App compatibility

Feature | Supported
Local models | Yes
OpenRouter | Yes
OpenAI-compatible API | Yes
Ollama | Yes
LM Studio | Yes
Anthropic API | Yes
Google API | Yes
Mistral API | Yes
Docker | No
Works offline | No
Needs GPU | Yes

Local vs cloud: which should you use?

Use local models if

  • You want privacy — data never leaves your machine
  • You already have a GPU with sufficient VRAM
  • You want zero per-token API costs
  • You need offline access
  • You have at least 16-24 GB VRAM for recommended models

Use cloud/API if

  • Your GPU has insufficient VRAM for the models you need
  • You want access to frontier model quality
  • You need maximum coding/reasoning performance
  • You don't want to manage local model downloads and updates
  • You want the flexibility of switching between 300+ models with a single OpenRouter API key

Setup overview

Setting up Kilo Code is straightforward. It runs in VS Code, JetBrains IDEs, and the CLI. Full documentation is available at https://kilo.ai/docs.
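If you plan to use local models, it helps to confirm the Ollama server is running and your model is already pulled before selecting Ollama as a provider in Kilo Code's settings. A small pre-flight check against Ollama's standard local API (an assumed workflow, not part of Kilo Code itself):

```python
# Pre-flight check: is Ollama up, and is the model we want already pulled?
# Assumes Ollama is installed and listening on its default port (11434).
import requests

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local address
WANTED = "qwen2.5:7b"                  # placeholder model tag

tags = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5).json()
installed = {m["name"] for m in tags.get("models", [])}

if WANTED in installed:
    print(f"{WANTED} is available; point Kilo Code's Ollama provider at {OLLAMA_URL}.")
else:
    print(f"{WANTED} not found. Run `ollama pull {WANTED}` first.")
```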

Limitations

  • Not well suited to small GPUs: the same 24 GB minimum as Cline applies for local coding
  • Autocomplete is still basic (an alpha feature, not production-ready)

Frequently asked questions

What is Kilo Code?
Kilo Code is an open-source AI coding agent for VS Code, JetBrains, and the CLI. It is the only Cline-family agent with JetBrains support and claims to be the highest-volume consumer on OpenRouter. Kilo Code is the newest fork in the Cline/Roo Code lineage, with 16K+ GitHub stars.
Does Kilo Code need a GPU?
Only for local inference; cloud and API providers need no GPU. For local coding models, 24 GB of VRAM is recommended, the same hardware requirement as Cline and Roo Code for equivalent model sizes.
Can Kilo Code use OpenRouter?
Yes. Kilo Code supports OpenRouter for accessing 300+ models through a single API. Configure OpenRouter as a provider in Kilo Code's settings with your API key.
Can Kilo Code use local models via Ollama?
Yes. Kilo Code works with Ollama for running models locally. Install Ollama, pull your model (e.g., `ollama pull qwen2.5:7b`), and connect Kilo Code to the local Ollama server. GPU requirements depend on the model you choose, not Kilo Code itself.
What is the best local model for Kilo Code?
For Kilo Code, the community-verified best local model is Qwen3 30B-A3B (MoE). An RTX 3090 or 4090 (24 GB) handles Qwen3-Coder 30B at Q4. JetBrains users should also allocate 8 GB+ of system RAM to the IDE itself, separate from model VRAM.
Can I run Kilo Code on 12 GB VRAM?
12 GB VRAM is generally not sufficient for serious agentic coding with Kilo Code. You can run smaller models (7B-14B at Q4) but tool-calling reliability and context handling will be limited. For the best experience, 24 GB VRAM (RTX 3090/4090) is the community-recommended minimum for local agentic coding.
Is Kilo Code free and open source?
Yes. Kilo Code is open source and completely free. You can find the source code on GitHub at https://github.com/kilocode/kilocode.