The Confidential AI open-source project enables developers to execute sensitive AI tasks securely in the cloud without exposing raw data or models. It leverages trusted hardware and remote attestation technologies to protect private user data, training sets, and generative models throughout their lifecycle, while still allowing normal use of cloud computing resources for complex AI inference and training.
Docker Deployment:
For rapid validation of Confidential AI's end-to-end workflow, we provide a one-click, Docker-based deployment solution. For details, see the Docker Deployment Guide. Applicable to the following scenarios:
Hybrid environment simulation: Covers the collaboration between the user side (Trustee key management) and the cloud side (Trustiflux trusted inference). The full workflow can be simulated in a single TDX instance via Docker, suitable for development, debugging, or demonstration.
Out-of-the-box: Dependencies and configuration scripts are packaged into container images, avoiding deployment issues caused by environment differences and ensuring a consistent workflow.
Security enhancement: Applies TDX remote attestation to ensure keys are decrypted only in verified trusted environments, protecting model privacy.
Agile delivery: Pre-configured automation scripts handle complex steps like PCCS configuration and service discovery, reducing onboarding costs.
Environment agnosticism: Container images can be rapidly migrated across any cloud environment supporting TDX, adapting to multi-cloud/hybrid cloud architectures.
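The attestation-gated key release described above can be sketched as follows. This is a minimal, self-contained simulation: the function names, the allowlist structure, and the use of a plain SHA-256 hash as a stand-in for a TDX quote are all illustrative assumptions, not the project's actual Trustee API.

```python
import hashlib
import secrets

# Hypothetical sketch of attestation-gated key release.
# All names here are illustrative, not the real Trustee interface.

# The relying party (Trustee side) keeps an allowlist of trusted
# measurements and the model-decryption keys it guards.
TRUSTED_MEASUREMENTS = {hashlib.sha256(b"expected-tdx-guest-image").hexdigest()}
KEY_STORE = {"model-key-1": secrets.token_bytes(32)}

def generate_evidence(guest_image: bytes) -> dict:
    """Stand-in for a TDX quote: binds the guest's measurement."""
    return {"measurement": hashlib.sha256(guest_image).hexdigest()}

def release_key(evidence: dict, key_id: str):
    """Release the key only if the measurement is on the allowlist."""
    if evidence["measurement"] not in TRUSTED_MEASUREMENTS:
        return None  # attestation failed: the key stays sealed
    return KEY_STORE.get(key_id)

# A verified guest obtains the key; a tampered guest does not.
assert release_key(generate_evidence(b"expected-tdx-guest-image"), "model-key-1") is not None
assert release_key(generate_evidence(b"tampered-guest-image"), "model-key-1") is None
```

In the real system the "measurement" is carried in a hardware-signed TDX quote and checked by a verification service rather than a local set lookup, but the control flow is the same: no valid evidence, no key.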
RPM Deployment:
A production-grade deployment solution based on RPM packages. For details, see the RPM Deployment Guide. Applicable to the following scenarios:
Production Environment Deployment: Version control and dependency management through standard package managers.
Hardware-Dedicated Environments: Deploy directly on TDX-supported physical or virtual machines for optimal performance.
Standard Package Management: Install/upgrade/uninstall via RPM packages, compliant with enterprise operation standards.
Automated Workflows: Preconfigured scripts automatically handle complex processes like key management and service registration.
Flexible Extensibility: Adjust deployment strategies with custom configuration parameters by modifying config_trustee.yaml and config_trustiflux.yaml.
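To illustrate the kind of parameters such files expose, a config_trustee.yaml might look like the fragment below. Every key name and value shown here is an assumption for illustration only; consult the RPM Deployment Guide for the actual schema.

```yaml
# Hypothetical config_trustee.yaml sketch -- key names are illustrative,
# not the project's actual configuration schema.
trustee:
  listen_address: "0.0.0.0:8080"      # where the key management service listens
  attestation:
    pccs_url: "https://pccs.example.com:8081"   # PCCS endpoint used for TDX quote verification
  keys:
    - id: model-key-1                 # key released only after successful attestation
      path: /opt/trustee/keys/model-key-1.bin
```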
Confidential AI
Components
Current Stable Version: v1.1.0 (2025-08-01). Previous versions: see Release Notes.
Version compatibility information: see Version Compatibility.
License
This project uses the Apache 2.0 license.