English | 简体中文 | 繁體中文 | 한국어 | Español | 日本語 | हिन्दी | Русский | Português | తెలుగు | Français | Deutsch | Tiếng Việt | العربية | اردو
State-of-the-art pretrained models for inference and training
Transformers is a library of pretrained text, computer vision, audio, video, and multimodal models for inference and training. Use Transformers to fine-tune models on your data, build inference applications, and for generative AI use cases across multiple modalities.
There are over 500K Transformers model checkpoints on the Hugging Face Hub you can use.
Explore the Hub today to find a model and use Transformers to help you get started right away.
Installation
Transformers works with Python 3.9+, PyTorch 2.1+, TensorFlow 2.6+, and Flax 0.4.1+.
Create and activate a virtual environment with venv or uv, a fast Rust-based Python package and project manager.
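For example, with venv or uv (the environment name `.env` is just a placeholder):

```bash
# create and activate a virtual environment with venv
python -m venv .env
source .env/bin/activate

# or create and activate one with uv
uv venv .env
source .env/bin/activate
```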
Install Transformers in your virtual environment.
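A minimal install with pip, or with uv if that's what you're using:

```bash
pip install transformers

# or with uv
uv pip install transformers
```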
Install Transformers from source if you want the latest changes in the library or are interested in contributing. However, the latest version may not be stable. Feel free to open an issue if you encounter an error.
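One way to do this is to install directly from the GitHub repository:

```bash
pip install git+https://github.com/huggingface/transformers
```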
Quickstart
Get started with Transformers right away with the Pipeline API. The Pipeline is a high-level inference class that supports text, audio, vision, and multimodal tasks. It handles preprocessing the input and returns the appropriate output.

Instantiate a pipeline and specify the model to use for text generation. The model is downloaded and cached so you can easily reuse it again. Finally, pass some text to prompt the model.
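A minimal sketch of that flow (the checkpoint name below is only an illustrative choice; any text generation model on the Hub works):

```python
from transformers import pipeline

# instantiate a text generation pipeline; the model is downloaded and cached on first use
generator = pipeline(task="text-generation", model="Qwen/Qwen2.5-1.5B")

# pass some text to prompt the model
generator("The secret to baking a really good cake is ")
```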
To chat with a model, the usage pattern is the same. The only difference is you need to construct a chat history (the input to Pipeline) between you and the system.
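A sketch of the chat flow, where the chat history is a list of role/content messages (the checkpoint name is again only illustrative and assumes an instruction-tuned model):

```python
from transformers import pipeline

# the chat history: a system message plus the user's turn
chat = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Can you tell me any fun things to do in New York?"},
]

chatbot = pipeline(task="text-generation", model="HuggingFaceTB/SmolLM2-1.7B-Instruct")

# the pipeline applies the model's chat template and returns the extended conversation
response = chatbot(chat, max_new_tokens=128)
print(response[0]["generated_text"][-1]["content"])
```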
Expand the examples below to see how Pipeline works for different modalities and tasks.

Automatic speech recognition
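A sketch of speech recognition with Pipeline (the checkpoint and the audio path are placeholders):

```python
from transformers import pipeline

asr = pipeline(task="automatic-speech-recognition", model="openai/whisper-small")

# replace with a path or URL to your own audio file
asr("path/to/audio.flac")
```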
Image classification
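A sketch of image classification with Pipeline (the checkpoint and the image path are placeholders):

```python
from transformers import pipeline

classifier = pipeline(task="image-classification", model="google/vit-base-patch16-224")

# replace with a path or URL to your own image
classifier("path/to/image.jpg")
```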
Visual question answering
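A sketch of visual question answering with Pipeline (the checkpoint and the image path are placeholders):

```python
from transformers import pipeline

vqa = pipeline(task="visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")

# replace with a path or URL to your own image
vqa(image="path/to/image.jpg", question="What is shown in the image?")
```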
Why should I use Transformers?
Easy-to-use state-of-the-art models:
Lower compute costs, smaller carbon footprint:
Choose the right framework for every part of a model's lifetime:
Easily customize a model or an example to your needs:
Why shouldn’t I use Transformers?
100 projects using Transformers
Transformers is more than a toolkit for using pretrained models; it's a community of projects built around it and the Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone else to build their dream projects.
To celebrate Transformers reaching 100,000 stars, we wanted to put the spotlight on the community with the awesome-transformers page, which lists 100 incredible projects built with Transformers.
If you own or use a project that you believe should be part of the list, please open a PR to add it!
Example models
You can test most of our models directly on their Hub model pages.
Expand each modality below to see a few example models for various use cases.
Audio
Computer vision
Multimodal
NLP
Citation
We now have a paper you can cite for the 🤗 Transformers library:
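```bibtex
@inproceedings{wolf-etal-2020-transformers,
    title = "Transformers: State-of-the-Art Natural Language Processing",
    author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = oct,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
    pages = "38--45"
}
```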