Chat Frontends
7 apps · local AI compatibility & hardware requirements
Chat frontends are the user interface layer for LLMs. They do not run models themselves; they connect to local backends (Ollama, LM Studio) or cloud APIs (OpenRouter, Anthropic, OpenAI). This means most chat frontends have no GPU requirements of their own: the GPU requirement comes entirely from the model you choose to connect to.
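In practice, most of these frontends speak the OpenAI-compatible chat API, so switching between a local backend and a cloud provider is usually just a base-URL change. The sketch below illustrates this, assuming the default local ports Ollama (11434) and LM Studio (1234) ship with; the model names in the comment are examples, not requirements.

```python
import json

# Base URLs for OpenAI-compatible chat endpoints. The local ports are the
# defaults for Ollama and LM Studio; adjust them if your install differs.
BACKENDS = {
    "ollama": "http://localhost:11434/v1",
    "lmstudio": "http://localhost:1234/v1",
    "openrouter": "https://openrouter.ai/api/v1",
}

def build_chat_request(backend: str, model: str, prompt: str) -> tuple[str, dict]:
    """Return the (url, payload) a frontend would send for one chat turn."""
    url = f"{BACKENDS[backend]}/chat/completions"
    payload = {
        "model": model,  # e.g. "llama3.1:8b" locally, "anthropic/claude-3.5-sonnet" via OpenRouter
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload

# Swapping backends changes only the URL; the request shape stays the same.
url, body = build_chat_request("ollama", "llama3.1:8b", "Hello!")
print(url)  # http://localhost:11434/v1/chat/completions
print(json.dumps(body))
```

Cloud endpoints additionally expect an `Authorization: Bearer <api-key>` header, which local backends typically ignore.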
- HuggingChat · Free web chat interface from Hugging Face. Access open-weight models instantly with no setup; runs entirely in the cloud.
- Janitor AI · Cloud-based AI character roleplay platform. Largest character library, OpenRouter proxy support, and a massive 500K+ Discord community. · OpenRouter
- LibreChat · Enterprise self-hosted ChatGPT clone with 30+ AI providers. Multi-user admin panel, OAuth2 SSO, artifacts, code interpreter, and MCP support. · OpenRouter
- LM Studio · Desktop app for running local LLMs with zero setup. In-app model browser, visual GPU fit indicator, and one-click GGUF downloads from Hugging Face. · Runs locally
- Open WebUI · Self-hosted ChatGPT-like web UI for LLMs. Native Ollama integration, RAG document Q&A, multi-user support, and OpenRouter compatibility. · Runs locally · OpenRouter
- SillyTavern · Self-hosted chat interface for AI roleplay and creative writing. Deep character creation, lorebooks, group chats, and first-class OpenRouter support. · Runs locally · OpenRouter
- text-generation-webui · Power-user local LLM frontend with maximum backend flexibility. Transformers, ExLlamaV2/V3, llama.cpp, GPTQ, AWQ, all in one web UI. · Runs locally
Want to check if your GPU can run the models these apps need? Use the homepage calculator to see which models fit your hardware with estimated tokens per second.
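The usual back-of-the-envelope fit check multiplies parameter count by the quantization's bits per weight, then adds headroom for the KV cache and runtime. The sketch below is a simplified version of that rule of thumb, not the calculator's actual formula; the flat 1.5 GB overhead margin is an assumption.

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float, overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: quantized weight size plus a flat margin for
    KV cache and runtime overhead (the flat margin is a simplification)."""
    weights_gb = params_b * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return round(weights_gb + overhead_gb, 1)

def fits(params_b: float, bits_per_weight: float, vram_gb: float) -> bool:
    """Does a model plausibly fit in the given VRAM?"""
    return estimate_vram_gb(params_b, bits_per_weight) <= vram_gb

# A 7B model at ~4.5 effective bits (typical Q4 GGUF) needs roughly 5.4 GB.
print(estimate_vram_gb(7, 4.5))   # 5.4
print(fits(7, 4.5, 8))            # True: fits an 8 GB card
print(fits(70, 4.5, 24))          # False: ~40.9 GB needed
```

Models that exceed VRAM can still run via CPU offload in llama.cpp-based backends, just at much lower tokens per second.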