Open-Source Platform for Productionizing AI
MLflow is an open-source developer platform to build AI/LLM applications and models with confidence. Enhance your AI applications with end-to-end experiment tracking, observability, and evaluations, all in one integrated platform.
🚀 Installation
To install the MLflow Python package, run the following command:
pip install mlflow
📦 Core Components
MLflow is the only platform that provides a unified solution for all your AI/ML needs, including LLMs, Agents, Deep Learning, and traditional machine learning.
💡 For LLM / GenAI Developers
🔍 Tracing / Observability
Getting Started →
📊 LLM Evaluation
Getting Started →
🤖 Prompt Management
Getting Started →
📦 App Version Tracking
Getting Started →
🎓 For Data Scientists
📝 Experiment Tracking
Getting Started →
💾 Model Registry
Getting Started →
🚀 Deployment
Getting Started →
🌐 Hosting MLflow Anywhere
You can run MLflow in many different environments, including local machines, on-premise servers, and cloud infrastructure.
Trusted by thousands of organizations, MLflow is now offered as a managed service by most major cloud providers:
For hosting MLflow on your own infrastructure, please refer to this guide.
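As a sketch of self-hosting (assuming a local SQLite backend and local artifact storage; swap the store URIs for a production database and object store), a tracking server can be launched with:

```shell
# Start an MLflow tracking server on port 5000,
# storing run metadata in a local SQLite database
# and artifacts under ./mlruns
mlflow server \
  --backend-store-uri sqlite:///mlflow.db \
  --default-artifact-root ./mlruns \
  --host 0.0.0.0 \
  --port 5000
```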
🗣️ Supported Programming Languages
🔗 Integrations
MLflow is natively integrated with many popular machine learning frameworks and GenAI libraries.
Usage Examples
Tracing (Observability) (Doc)
MLflow Tracing provides LLM observability for various GenAI libraries such as OpenAI, LangChain, LlamaIndex, DSPy, AutoGen, and more. To enable auto-tracing, call mlflow.xyz.autolog() before running your models, replacing xyz with the target library name (for example, mlflow.openai.autolog()). Refer to the documentation for customization and manual instrumentation. Then navigate to the “Traces” tab in the MLflow UI to find the trace records of your queries.
Evaluating LLMs, Prompts, and Agents (Doc)
The following example runs automatic evaluation for question-answering tasks with several built-in metrics.
Navigate to the “Evaluations” tab in the MLflow UI to find the evaluation results.
Tracking Model Training (Doc)
The following example trains a simple regression model with scikit-learn, while enabling MLflow’s autologging feature for experiment tracking.

import mlflow
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Enable MLflow's automatic experiment tracking for scikit-learn
mlflow.sklearn.autolog()

# Load the training dataset
db = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(db.data, db.target)

rf = RandomForestRegressor(n_estimators=100, max_depth=6, max_features=3)

# MLflow triggers logging automatically upon model fitting
rf.fit(X_train, y_train)

Once the above code finishes, run the following command in a separate terminal and access the MLflow UI via the printed URL. An MLflow Run is created automatically, tracking the training dataset, hyperparameters, performance metrics, the trained model, dependencies, and more.

mlflow ui
💭 Support
For help or questions about MLflow usage (e.g. “how do I do X?”), visit the documentation. There you can ask questions to our AI-powered chatbot by clicking the “Ask AI” button at the bottom right.
🤝 Contributing
We happily welcome contributions to MLflow!
Please see our contribution guide to learn more about contributing to MLflow.
⭐️ Star History
✏️ Citation
If you use MLflow in your research, please cite it using the “Cite this repository” button at the top of the GitHub repository page, which will provide you with citation formats including APA and BibTeX.
👥 Core Members
MLflow is currently maintained by the following core members with significant contributions from hundreds of exceptionally talented community members.