Charset Detection, for Everyone 👋

The Real First Universal Charset Detector

Featured Packages

In other languages (unofficial ports, by the community)

>>>>> 👉 Try Me Online Now, Then Adopt Me 👈 <<<<<
A library that helps you read text from an unknown charset encoding. Motivated by Chardet, I am trying to resolve the issue by taking a new approach. All IANA character set names for which the Python core library provides codecs are supported.

You can also register your own set of codecs, and yes, it will work as-is (a sketch follows the footnotes below).

This project offers you an alternative to the Universal Charset Encoding Detector, also known as Chardet.

[Feature comparison table: Fast · Universal[^1] · Reliable without distinguishable standards · Reliable with distinguishable standards · License[^2] · Native Python · Detect spoken language · UnicodeDecodeError safety · Whl size (min) · Supported encodings · Can register custom encoding]
[^1]: They clearly use encoding-specific code for each encoding, even if they cover most of the ones in common use.
[^2]: Chardet 7.0+ was relicensed from LGPL-2.1 to MIT following an AI-assisted rewrite. This relicensing is disputed on two independent grounds: (a) the original author contests that the maintainer had the right to relicense, arguing the rewrite is a derivative work of the LGPL-licensed codebase since it was not a clean-room implementation; (b) the copyright claim itself is questionable given the code was primarily generated by an LLM, and AI-generated output may not be copyrightable in most jurisdictions. Either issue alone could undermine the MIT license. Beyond licensing, the rewrite raises questions about responsible use of AI in open source: key architectural ideas pioneered by charset-normalizer, notably decode-first validity filtering (our foundational approach since v1) and encoding pairwise similarity with the same algorithm and threshold, surfaced in chardet 7 without acknowledgment. The project also imported test files from charset-normalizer to train and benchmark against it, then claimed superior accuracy on those very files. Charset-normalizer has always been MIT-licensed, encoding-agnostic by design, and built on a verifiable human-authored history.
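As noted above, any codec the Python runtime can see is usable here. A minimal sketch of making a custom codec visible through the standard library's codecs.register hook (the "my_custom" alias is purely hypothetical):

```python
import codecs

# Hypothetical example: expose an existing codec under a custom name.
# Anything returned by a registered search function becomes a first-class
# encoding for Python, and therefore a candidate the library can consider.
def search(name: str):
    if name == "my_custom":
        return codecs.lookup("cp1252")  # alias cp1252 as "my_custom"
    return None  # let other search functions handle everything else

codecs.register(search)

# The alias now behaves like any built-in encoding:
print(b"caf\xe9".decode("my_custom"))  # -> café
```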
⚡ Performance
This package offers better performance than Chardet (at the 99th and 95th percentiles). Here are some numbers.

Updated as of March 2026 using CPython 3.12, Charset-Normalizer 3.4.6, and Chardet 7.1.0.

Chardet's performance on larger files (1 MB+) used to be very poor, with huge differences on large payloads; this is no longer the case since Chardet 7.0+.

Stats are generated from 400+ files using default parameters. For more details on the files used, see the GHA workflows.

And yes, these results might change at any time. The dataset can be updated to include more files.

The actual delays depend heavily on your CPU capabilities, but the relative factors should remain the same.

Chardet's documentation claims greater accuracy than ours, based on the dataset Chardet was trained on (…). Well, that is expected; the opposite would have been worrying. Charset-normalizer, by contrast, does not train on anything: our solution is based on a completely different algorithm, still heuristic, though one that does not need weights across every encoding table.
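If you want to reproduce a rough comparison locally, here is a minimal timing sketch, not the project's benchmark harness; ./sample.txt is a hypothetical path:

```python
from time import perf_counter

import chardet
from charset_normalizer import from_bytes

# Point this at any reasonably large text file.
with open("./sample.txt", "rb") as fp:
    payload = fp.read()

start = perf_counter()
guess = from_bytes(payload).best()  # most plausible match, or None
print("charset-normalizer:", guess.encoding if guess else None,
      f"({perf_counter() - start:.3f}s)")

start = perf_counter()
result = chardet.detect(payload)
print("chardet:", result["encoding"], f"({perf_counter() - start:.3f}s)")
```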
✨ Installation
Using pip:
```
pip install charset-normalizer -U
```
🚀 Basic Usage
CLI
This package comes with a CLI.
```
usage: normalizer [-h] [-v] [-a] [-n] [-m] [-r] [-f] [-t THRESHOLD]
                  file [file ...]

The Real First Universal Charset Detector. Discover originating encoding used
on text file. Normalize text to unicode.

positional arguments:
  files                 File(s) to be analysed

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         Display complementary information about file if any.
                        Stdout will contain logs about the detection process.
  -a, --with-alternative
                        Output complementary possibilities if any. Top-level
                        JSON WILL be a list.
  -n, --normalize       Permit to normalize input file. If not set, program
                        does not write anything.
  -m, --minimal         Only output the charset detected to STDOUT. Disabling
                        JSON output.
  -r, --replace         Replace file when trying to normalize it instead of
                        creating a new one.
  -f, --force           Replace file without asking if you are sure, use this
                        flag with caution.
  -t THRESHOLD, --threshold THRESHOLD
                        Define a custom maximum amount of chaos allowed in
                        decoded content. 0. <= chaos <= 1.
  --version             Show version information and exit.
```
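For example, to print only the detected charset for a file (using the -m flag documented above; ./my_file.txt is a hypothetical path):

```
normalizer -m ./my_file.txt
```

🎉 Since version 1.4.0, the CLI produces easily consumable stdout results in JSON format.

Python

Just print out normalized text. A minimal sketch assuming the from_path entry point and its best() accessor (./my_file.txt is again hypothetical):

```python
from charset_normalizer import from_path

# Run detection on the file and keep the most plausible match.
results = from_path("./my_file.txt")
best_guess = results.best()

print(str(best_guess))  # the decoded, Unicode content
```

Upgrade your code without effort. charset-normalizer exposes a chardet-compatible detect function, so the swap is a one-line import change (a sketch; the exact guess can vary, especially on tiny payloads):

```python
from charset_normalizer import detect  # was: from chardet import detect

payload = "Le cœur a ses raisons que la raison ne connaît point.".encode("cp1252")
result = detect(payload)
print(result)  # dict with 'encoding', 'language' and 'confidence' keys
```

The above code will behave the same as chardet. We ensure that we offer the best (reasonable) backward-compatible result possible.

See the docs for advanced usage: readthedocs.io

😇 Why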
When I started using Chardet, I noticed that it was not suited to my expectations, and I wanted to propose a reliable alternative using a completely different method. Also, I never back down from a good challenge!

I don't care about the originating charset encoding, because two different tables can produce two identical rendered strings. What I want is to get readable text, the best I can.

In a way, I'm brute-forcing text decoding. How cool is that? 😎

Don't confuse the ftfy package with charset-normalizer or chardet. ftfy's goal is to repair Unicode strings, whereas charset-normalizer's is to convert a raw file in an unknown encoding to Unicode; a sketch of the distinction follows.
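To make that distinction concrete (ftfy.fix_text is ftfy's public entry point; the sample strings are hypothetical):

```python
import ftfy
from charset_normalizer import from_bytes

# ftfy repairs a str that was already decoded with the wrong table...
print(ftfy.fix_text("The Mona Lisa doesnâ€™t have eyebrows."))
# -> The Mona Lisa doesn't have eyebrows.

# ...whereas charset-normalizer starts from raw bytes of unknown encoding.
print(str(from_bytes("Übergrößenträger".encode("cp1252")).best()))
# -> Übergrößenträger
```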
🍰 How
Discard all charset encoding tables that could not fit the binary content.
Measure the noise, or the mess, once the content is decoded (in chunks) with each remaining charset encoding.
Extract the matches with the lowest mess detected.
Additionally, we measure coherence / probe for a language (a sketch of the first, decode-and-discard step follows this list).
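A minimal sketch of that first step, decode-first validity filtering (the candidate list here is hypothetical; the library considers every codec Python knows about):

```python
# An encoding whose decoder raises cannot be the right table for the payload.
CANDIDATES = ["ascii", "utf_8", "cp1252", "latin_1", "utf_16"]

def plausible_encodings(payload: bytes) -> list[str]:
    survivors = []
    for encoding in CANDIDATES:
        try:
            payload.decode(encoding)
        except (UnicodeDecodeError, UnicodeError):
            continue  # impossible table, discard it
        survivors.append(encoding)
    return survivors

# Several tables usually survive; mess and coherence must rank them.
print(plausible_encodings("Héllo wörld".encode("cp1252")))
# -> ['cp1252', 'latin_1']
```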
Wait a minute, what is noise/mess and coherence according to YOU?

Noise: I opened hundreds of text files, written by humans, with the wrong encoding table. I observed the results, then established some ground rules about what is obviously a mess (i.e., defining noise in rendered text). I know that my interpretation of what constitutes noise is probably incomplete; feel free to contribute in order to improve or rewrite it.
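One toy illustration of such a ground rule (not the library's actual mess detector): count characters that rarely occur in human-written text.

```python
import unicodedata

# Toy rule set: replacement characters, unassigned or private-use code
# points, and unexpected control characters hint at a wrong table.
SUSPICIOUS_CATEGORIES = {"Co", "Cn"}  # private use, unassigned

def mess_ratio(decoded: str) -> float:
    if not decoded:
        return 0.0
    suspicious = 0
    for ch in decoded:
        category = unicodedata.category(ch)
        if ch == "\ufffd" or category in SUSPICIOUS_CATEGORIES:
            suspicious += 1
        elif category == "Cc" and ch not in "\t\n\r":
            suspicious += 1
    return suspicious / len(decoded)

print(mess_ratio("Héllo wörld"))        # 0.0 — looks human-written
print(mess_ratio("H\x9dllo w\x81rld"))  # > 0 — stray control bytes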
Coherence: For each language on Earth, we have computed ranked letter-frequency tables (as best we can). I figured that intel would be worth something here, so I use those records against the decoded text to check whether I can detect intelligent design.
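A toy version of that probe (the ranked list is a hypothetical stand-in for the project's per-language frequency tables):

```python
from collections import Counter

# Hypothetical ranking: most frequent letters in English prose.
ENGLISH_TOP = ["e", "t", "a", "o", "i", "n", "s", "h", "r", "d"]

def coherence_score(decoded: str, ranked_letters: list[str]) -> float:
    letters = [ch.lower() for ch in decoded if ch.isalpha()]
    if not letters:
        return 0.0
    top = [letter for letter, _ in
           Counter(letters).most_common(len(ranked_letters))]
    return len(set(top) & set(ranked_letters)) / len(ranked_letters)

# High overlap with the English ranking suggests English text.
print(coherence_score("The quick brown fox jumps over the lazy dog",
                      ENGLISH_TOP))
```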
⚡ Known limitations
Language detection is unreliable when the text contains two or more languages sharing identical letters (e.g. HTML with English tags plus Turkish content, both using Latin characters).
Every charset detector heavily depends on having sufficient content. In common cases, do not bother running detection on very tiny content.
⚠️ About Python EOLs
If you are running:
Python >=2.7,<3.5: Unsupported
Python 3.5: charset-normalizer < 2.1
Python 3.6: charset-normalizer < 3.1
Upgrade your Python interpreter as soon as possible.
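If you must stay on an EOL interpreter in the meantime, you can pin the last compatible series (constraint taken from the list above):

```
pip install "charset-normalizer<3.1"  # last series supporting Python 3.6
```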
👤 Contributing
Contributions, issues and feature requests are very much welcome.
Feel free to check issues page if you want to contribute.
📝 License

Copyright © Ahmed TAHRI @Ousret.
This project is MIT licensed.

Characters frequencies used in this project © 2012 Denny Vrandečić

💼 For Enterprise

Professional support for charset-normalizer is available as part of the Tidelift Subscription. Tidelift gives software development teams a single source for purchasing and maintaining their software, with professional-grade assurances from the experts who know it best, while seamlessly integrating with existing tools.