Live demo & docs: https://g4f.dev | Documentation: https://g4f.dev/docs
GPT4Free (g4f) is a community-driven project that aggregates multiple accessible providers and interfaces to make working with modern LLMs and media-generation models easier and more flexible. GPT4Free aims to offer multi-provider support, local GUI, OpenAI-compatible REST APIs, and convenient Python and JavaScript clients — all under a community-first license.
This README is a consolidated, improved, and complete guide to installing, running, and contributing to GPT4Free.
When running the slim Docker image with its port mapping, the Interference API is available at http://localhost:1337/v1
Swagger UI: http://localhost:1337/docs
CLI
Start GUI server:
python -m g4f.cli gui --port 8080 --debug
MCP Server
GPT4Free now includes a Model Context Protocol (MCP) server that allows AI assistants like Claude to access web search, scraping, and image generation capabilities.
Starting the MCP server (stdio mode):
# Using g4f command
g4f mcp
# Or using Python module
python -m g4f.mcp
Starting the MCP server (HTTP mode):
# Start HTTP server on port 8765
g4f mcp --http --port 8765
# Custom host and port
g4f mcp --http --host 127.0.0.1 --port 3000
HTTP mode provides:
POST http://localhost:8765/mcp - JSON-RPC endpoint
GET http://localhost:8765/health - Health check
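With the HTTP server running, the JSON-RPC endpoint can be exercised with plain curl. This is a sketch using the standard MCP `tools/list` method; the port matches the example above:

```shell
# List the tools the MCP server exposes (server must be running on port 8765)
curl -s -X POST http://localhost:8765/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'

# Liveness probe
curl -s http://localhost:8765/health
```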
See the full API reference for streaming, tool-calling patterns, and advanced options: https://g4f.dev/docs/client
Using GPT4Free.js (browser JS client)
Use the official JS client in the browser—no backend required.
Example:
<script type="module">
import Client from 'https://g4f.dev/dist/js/client.js';
const client = new Client();
const result = await client.chat.completions.create({
model: 'gpt-4.1', // Or "gpt-4o", "deepseek-v3", etc.
messages: [{ role: 'user', content: 'Explain quantum computing' }]
});
console.log(result.choices[0].message.content);
</script>
Notes:
The JS client is distributed via the g4f.dev CDN for easy usage. Review CORS considerations and usage limits.
Providers & models (overview)
GPT4Free integrates many providers including (but not limited to) OpenAI-compatible endpoints, PerplexityLabs, Gemini, MetaAI, Pollinations (media), and local inference backends.
Model availability and behavior depend on provider capabilities. See the providers doc for current, supported provider/model lists: https://g4f.dev/docs/providers-and-models
Provider requirements may include:
API keys or tokens (for authenticated providers)
Browser cookies / HAR files for providers scraped via browser automation
Chrome/Chromium or headless browser tooling
Local model binaries and runtime (for local inference)
Local inference & media
GPT4Free supports local inference backends. See docs/local.md for supported runtimes and hardware guidance.
Media generation (image, audio, video) is supported through providers (e.g., Pollinations). See docs/media.md for formats, options, and sample usage.
Configuration & customization
Configure via environment variables, CLI flags, or config files. See docs/config.md.
To reduce install size, use partial requirement groups. See docs/requirements.md.
You may redistribute and/or modify under the terms of GPLv3.
The program is provided WITHOUT ANY WARRANTY.
Copyright notice
xtekky/gpt4free: Copyright (C) 2025 xtekky
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
GPT4Free (g4f)
Created by @xtekky, maintained by @hlohaus
Support the project on GitHub Sponsors ❤️
Table of contents
What’s included
Quick links
Requirements & compatibility
Installation
Docker (recommended)
Slim Docker image (x64 & arm64)
Notes:
Windows Guide (.exe)
👉 Check out the Windows launcher for GPT4Free:
🔗 https://github.com/gpt4free/g4f.exe 🚀
Download g4f.exe.zip from: https://github.com/xtekky/gpt4free/releases/latest, unpack it, and run g4f.exe.
Python Installation (pip / from source / partial installs)
Prerequisites:
Install from PyPI (recommended):
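As a sketch (the `[all]` extras group is an assumption; see docs/requirements.md for the actual requirement groups):

```shell
# Install g4f with all optional extras
pip install -U "g4f[all]"
```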
Partial installs
Install from source:
Notes:
Running the app
GUI (web client)
FastAPI / Interference API
Base URL: http://localhost:1337/v1
Swagger UI: http://localhost:1337/docs
CLI
MCP Server
HTTP mode provides:
POST http://localhost:8765/mcp - JSON-RPC endpoint
GET http://localhost:8765/health - Health check
Configuring with Claude Desktop:
Add to your claude_desktop_config.json.
Available MCP Tools:
web_search - Search the web using DuckDuckGo
web_scrape - Extract text content from web pages
image_generation - Generate images from text prompts
For detailed MCP documentation, see g4f/mcp/README.md
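A minimal claude_desktop_config.json entry might look like the following. The server name `g4f` is illustrative, and it assumes the `g4f` command is on your PATH:

```json
{
  "mcpServers": {
    "g4f": {
      "command": "g4f",
      "args": ["mcp"]
    }
  }
}
```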
Optional provider login (desktop within container)
Using the Python client
Install:
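Assuming installation from PyPI:

```shell
pip install -U g4f
```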
Synchronous text example:
Image generation example:
Async client example:
Notes:
Running on smartphone
Interference API (OpenAI‑compatible)
Base URL: http://localhost:1337/v1
Swagger UI: http://localhost:1337/docs
Examples & common patterns
Contributing
Contributions are welcome — new providers, features, docs, and fixes are appreciated.
How to contribute:
Repository: https://github.com/xtekky/gpt4free
How to create a new provider
Create your provider module in the g4f/Provider/ directory.
How AI can help you write code
Security, privacy & takedown policy
Credits, contributors & attribution
har_file.py — input from xqdoo00o/ChatGPT-to-API
PerplexityLabs.py — input from nathanrchn/perplexityai
Gemini.py — input from dsdanielpark/Gemini-API and HanaokaYuzu/Gemini-API
MetaAI.py — inspired by meta-ai-api by Strvm
proofofwork.py — input from missuo/FreeGPT35
Many more contributors are acknowledged in the repository.
Powered-by highlights
Changelog & releases
Manifesto / Project principles
GPT4Free is guided by community principles:
https://g4f.dev/manifest
License
This program is licensed under the GNU General Public License v3.0 (GPLv3). See the full license: https://www.gnu.org/licenses/gpl-3.0.txt
Contact & sponsorship
Appendix: Quick commands & examples
Install (pip):
Run GUI (Python):
Docker (full):
Docker (slim):
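The quick commands listed above can be sketched as follows. The Docker image names and the `[all]` extras group are assumptions; verify them against Docker Hub and docs/requirements.md:

```shell
# Install (pip)
pip install -U "g4f[all]"

# Run GUI (Python)
python -m g4f.cli gui --port 8080 --debug

# Docker (full) -- image name assumed; check Docker Hub
docker run -p 8080:8080 -p 1337:1337 --shm-size="2g" hlohaus789/g4f:latest

# Docker (slim)
docker run -p 1337:8080 hlohaus789/g4f:latest-slim
```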
Python usage patterns:
client.chat.completions.create(...)
client.images.generate(...)
AsyncClient
Docs & deeper reading
Thank you for using and contributing to GPT4Free — together we make powerful AI tooling accessible, flexible, and community-driven.