[Tutorial] Fix tutorial-06 result mismatch for causal forward kernel (#8853)
New contributor declaration
- I am not making a trivial change, such as fixing a typo in a comment.
- I have written a PR description following these rules.
- I have run `pre-commit run --from-ref origin/main --to-ref HEAD`.

Select one of the following.
- I have added tests.
  - `/test` for `lit` tests
  - `/unittest` for C++ tests
  - `/python/test` for end-to-end tests
- This PR does not need a test because this PR fixes an issue in the current test script.

Select one of the following.
- I have not added any `lit` tests.
- The `lit` tests I have added follow these best practices, including the "tests should be minimal" section. (Usually running Python code and using the instructions it generates is not minimal.)

For the causal forward kernel, we need to filter out configs where BLOCK_M < BLOCK_N (BLOCK_M=64, BLOCK_N=128 in the current config space). If BLOCK_M < BLOCK_N, the iteration range in the `_attn_fwd_inner` function for `STAGE == 1` is `[0, ceil_div(start_m * BLOCK_M, BLOCK_N) * BLOCK_N]`, so the upper bound may be larger than the expected `start_m * BLOCK_M`. Moreover, these out-of-bounds positions are not masked when `STAGE == 1`, which causes the result mismatch.
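A minimal sketch of the corresponding config pruning, in the style of the tutorial's `triton.Config` filtering (the config space below is illustrative, not the tutorial's exact list; the predicate captures the fix's intent):

```python
import triton

# Hypothetical config space in the tutorial's style.
configs = [
    triton.Config({"BLOCK_M": BM, "BLOCK_N": BN}, num_stages=s, num_warps=w)
    for BM in [64, 128]
    for BN in [32, 64, 128]
    for s in [3, 4]
    for w in [4, 8]
]

def keep(conf):
    BLOCK_M = conf.kwargs["BLOCK_M"]
    BLOCK_N = conf.kwargs["BLOCK_N"]
    # For the causal (STAGE == 1) path, the inner loop's upper bound is
    # rounded up to a multiple of BLOCK_N; if BLOCK_M < BLOCK_N it can exceed
    # start_m * BLOCK_M, and those extra columns are never masked.
    return BLOCK_M >= BLOCK_N

configs = list(filter(keep, configs))
```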
Triton Conference 2025
Registration
The 3rd Triton conference is scheduled to take place on October 21, 2025. Click here to register!
Poster Submission
We invite members of the Triton community who are attending the Triton Developer Conference to present posters about their Triton-related technical work.
Please submit basic information about your poster, including author information and an abstract, using this form.
Important Dates
Triton
This is the development repository of Triton, a language and compiler for writing highly efficient custom Deep-Learning primitives. The aim of Triton is to provide an open-source environment to write fast code at higher productivity than CUDA, but also with higher flexibility than other existing DSLs.
The foundations of this project are described in the following MAPL2019 publication: Triton: An Intermediate Language and Compiler for Tiled Neural Network Computations. Please consider citing this work if you use Triton!
The official documentation contains installation instructions and tutorials. See also these third-party Triton puzzles, which can all be run using the Triton interpreter – no GPU required.
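For a taste of what Triton kernels look like, here is a minimal vector-addition kernel in the spirit of the first official tutorial (a sketch, not a tuned implementation; it assumes a CUDA-capable GPU unless run under the interpreter):

```python
import torch

import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the tail of the vector
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


x = torch.rand(98432, device="cuda")
y = torch.rand_like(x)
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
assert torch.allclose(out, x + y)
```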
Quick Installation
You can install the latest stable release of Triton from pip:
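```
pip install triton
```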
Binary wheels are available for CPython 3.10-3.14.
Install from source
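A minimal sketch, assuming the main repository and an editable from-source install:

```
git clone https://github.com/triton-lang/triton.git
cd triton

pip install -e .
```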
Or with a virtualenv:
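A sketch of the same install inside a virtualenv (the `--prompt` flag is cosmetic):

```
git clone https://github.com/triton-lang/triton.git
cd triton

python -m venv .venv --prompt triton
source .venv/bin/activate

pip install -e .
```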
Building with a custom LLVM
Triton uses LLVM to generate code for GPUs and CPUs. Normally, the Triton build downloads a prebuilt LLVM, but you can also build and use LLVM from source.
LLVM does not have a stable API, so the Triton build will not work at an arbitrary LLVM version.
For convenience, use the following command to build LLVM and install Triton with the custom LLVM:
Alternatively, follow these steps to build LLVM from source manually.
Find the version of LLVM that Triton builds against. Check `cmake/llvm-hash.txt` to see the current version. For example, if it contains a hash beginning with 49af6502, the version of Triton you have builds against LLVM 49af6502.

`git checkout` LLVM at this revision. Optionally, make additional modifications to LLVM.

Build LLVM. For example, you might run:
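A typical invocation, assuming a clone of LLVM in `$HOME/llvm-project` and `ninja` installed (treat the exact flags as a sketch; the set Triton builds with may change over time):

```
cd $HOME/llvm-project   # your clone of LLVM, at the revision from cmake/llvm-hash.txt
mkdir build && cd build
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=ON \
      -DLLVM_ENABLE_PROJECTS="mlir;llvm" \
      -DLLVM_TARGETS_TO_BUILD="host;NVPTX;AMDGPU" ../llvm
ninja
```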
Grab a snack, this will take a while.
Build Triton as above, but set the following environment variables:
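With `$LLVM_BUILD_DIR` pointing at the build directory from the previous step, the variables are (a sketch; pass them on the `pip install` command line):

```
LLVM_INCLUDE_DIRS=$LLVM_BUILD_DIR/include \
LLVM_LIBRARY_DIR=$LLVM_BUILD_DIR/lib \
LLVM_SYSPATH=$LLVM_BUILD_DIR \
pip install -e .
```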
Tips for building
- Set `TRITON_BUILD_WITH_CLANG_LLD=true` as an environment variable to use clang and lld. lld in particular results in faster builds.
- Set `TRITON_BUILD_WITH_CCACHE=true` to build with ccache.
- Set `TRITON_HOME=/some/path` to change the location of the `.triton` directory where Triton's cache is located and downloads are stored during the build. By default, this is the user's home directory. It can be changed anytime.
- If you're running out of memory when building Triton, specify the `MAX_JOBS` environment variable (to the `pip install -e .` command) to limit the number of jobs.
- Pass `--no-build-isolation` to `pip install` to make no-op builds faster. Without this, every invocation of `pip install` uses a different symlink to cmake, and this forces ninja to rebuild most of the `.a` files.
- The build system creates a `compile_commands.json` file under the Triton repo directory. This file is used by VSCode IntelliSense and clangd to provide code completion and other features for C++ code.

If IntelliSense does not work, you can try the following steps:

1. Make sure the build is complete, e.g. by running `pip install -e .`.
2. Find the `compile_commands.json` file produced by the build: `find ./build -name 'compile_commands.json' | xargs readlink -f`. You might get a full path similar to `/Users/{username}/triton/build/cmake.macosx-11.1-arm64-cpython-3.12/compile_commands.json`.
3. Open the command palette (`Shift + Command + P` on Mac, or `Shift + Ctrl + P` on Windows/Linux) and open `C/C++: Edit Configurations (UI)`.
4. Paste the full path to the `compile_commands.json` file into the "Compile Commands" textbox.

Running tests
There currently isn't a turnkey way to run all the Triton tests, but you can use the following recipe:
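A sketch of the recipe, assuming the make targets defined in the repo's Makefile (verify the target names against your checkout):

```
# One-time setup
make dev-install

# To run all tests (requires a GPU)
make test

# Or, to run tests without a GPU
make test-nogpu
```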
Tips for hacking
For detailed instructions on how to debug Triton’s frontend, please refer to this tutorial. The following includes additional tips for hacking on Triton’s backend.
Configuration knobs
See `python/triton/knobs.py` for the full list of configuration knobs. You can set those knobs directly in Python or use environment variables to control them. Below are some of the environment variables you can specify (see `knobs.py` for the full list):

- `MLIR_ENABLE_DUMP=1` dumps the IR before every MLIR pass Triton runs, for all kernels. Use `MLIR_ENABLE_DUMP=kernelName` to dump for a specific kernel only.
  - If `MLIR_ENABLE_DUMP=1` does not work, try cleaning your triton cache: `rm -r ~/.triton/cache/*`.
- `MLIR_DUMP_PATH` specifies where `MLIR_ENABLE_DUMP` will dump to. If unset, it will dump to stderr.
- `LLVM_IR_ENABLE_DUMP=1` dumps the IR before every pass run over the LLVM IR.
- `TRITON_REPRODUCER_PATH=<reproducer_path>` will generate an MLIR reproducer file at `<reproducer_path>` before each MLIR compiler stage. If any of the stages fail, `<reproducer_path>` will be a local MLIR reproducer captured right before the failing pass.
- `TRITON_INTERPRET=1` uses the Triton interpreter instead of running on the GPU. You can insert Python breakpoints in your kernel code!
- `TRITON_ENABLE_LLVM_DEBUG=1` passes `-debug` to LLVM, printing a lot of debugging information to stdout. If this is too noisy, run with just `TRITON_LLVM_DEBUG_ONLY` instead to limit the output.
  - Alternatively, run with `LLVM_IR_ENABLE_DUMP=1`, extract the IR before the LLVM pass of interest, and then run LLVM's `opt` standalone, perhaps passing `-debug-only=foo` on the command line.
- `TRITON_LLVM_DEBUG_ONLY=<comma-separated>` is the equivalent of LLVM's `-debug-only` command-line option. This limits the LLVM debug output to specific pass or component names (which are specified using `#define DEBUG_TYPE` throughout LLVM and Triton) in order to allow the debug output to be less noisy. `TRITON_LLVM_DEBUG_ONLY` allows one or more comma-separated values to be specified (e.g. `TRITON_LLVM_DEBUG_ONLY="tritongpu-remove-layout-conversions"` or `TRITON_LLVM_DEBUG_ONLY="tritongpu-remove-layout-conversions,regalloc"`).
- `TRITON_ENABLE_ASAN=1` invokes the LLVM address sanitizer for memory leak and out-of-bounds access detection. Currently only supported on the AMD backend. This must be run using the ASAN libraries documented here.
- `USE_IR_LOC={ttir,ttgir}` reparses the IR such that the location information will be the line number of the IR file with that particular extension, instead of the line number of the Python file. This can provide a direct mapping from the IR to llir/ptx. When used with performance tools, it can provide a breakdown on IR instructions.
- `TRITON_PRINT_AUTOTUNING=1` prints out the best autotuning config and total time spent for each kernel after autotuning is complete.
- `DISABLE_LLVM_OPT` will disable LLVM optimizations for make_llir and make_ptx if its value is true when parsed as a Bool. Otherwise, it will be parsed as a list of flags for disabling LLVM optimizations. One use case is `DISABLE_LLVM_OPT="disable-lsr"`; loop strength reduction is known to cause up to 10% performance changes for certain kernels with register pressure.
- `TRITON_ALWAYS_COMPILE=1` forces kernels to be compiled regardless of cache hit.
- `MLIR_ENABLE_TIMING` dumps the timing information for each MLIR pass.
- `LLVM_ENABLE_TIMING` dumps the timing information for each LLVM pass.
- `TRITON_DEFAULT_FP_FUSION` overrides the default behavior of allowing fp fusion (mul+add->fma).
- `MLIR_ENABLE_DIAGNOSTICS=<comma-separated>` controls diagnostic emission in MLIR. Options are: `warnings`, `remarks`, `stacktraces`, `operations`. Use comma-separated values to customize output. For example, `MLIR_ENABLE_DIAGNOSTICS=remarks,operations` enables remarks and IR operations, while `MLIR_ENABLE_DIAGNOSTICS=warnings,stacktraces` enables warnings with stacktraces. By default, only errors are shown. Setting `warnings` includes errors and warnings; `remarks` includes errors, warnings, and remarks.
- `MLIR_ENABLE_REMARK` is deprecated. Please use `MLIR_ENABLE_DIAGNOSTICS=remarks`.
- `TRITON_KERNEL_DUMP` enables dumping of the IR from each compilation stage and the final ptx/amdgcn.
- `TRITON_DUMP_DIR` specifies the directory to save the dumped IR and ptx/amdgcn when `TRITON_KERNEL_DUMP` is set to 1.
- `TRITON_KERNEL_OVERRIDE` enables overriding the compiled kernel with a user-specified IR/ptx/amdgcn at the beginning of each compilation stage.
- `TRITON_OVERRIDE_DIR` specifies the directory from which to load the IR/ptx/amdgcn files when `TRITON_KERNEL_OVERRIDE` is set to 1.
- `TRITON_F32_DEFAULT` sets the default input precision of `tl.dot` when using 32-bit floats, which can be either `ieee`, `tf32`, or `tf32x3`.
- `TRITON_FRONT_END_DEBUGGING=1` disables exception wrapping when an error occurs in the compiler frontend, allowing the full stack trace to be seen.
- `TRITON_DISABLE_LINE_INFO=1` removes all line information from the module.
- `PTXAS_OPTIONS` passes additional command-line options to the PTX assembler `ptxas` (only on NVIDIA).
- `LLVM_EXTRACT_DI_LOCAL_VARIABLES` emits full debug info, allowing for evaluation of values in GPU debuggers (e.g. cuda-gdb, rocm-gdb).
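A minimal sketch of driving two of the environment-variable knobs above from a Python script (set them before `import triton` so the values are picked up at startup; any knob from the list works the same way):

```python
import os

# Debug a kernel on the interpreter (no GPU required) and skip the cache,
# using two of the environment-variable knobs documented above.
os.environ["TRITON_INTERPRET"] = "1"
os.environ["TRITON_ALWAYS_COMPILE"] = "1"

import triton  # noqa: E402
import triton.language as tl  # noqa: E402
```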
Kernel Override Steps

Compiler Pipeline Inspection Steps

To introspect the pipeline `add_stages`, before running your kernels, simply set the `add_stages_inspection_hook` like so:

Examples of how to use this for out-of-tree plugin passes are here.
Changelog
Version 2.0 is out! New features include:
Contributing
Community contributions are more than welcome, whether it be to fix bugs or to add new features on GitHub. For more detailed instructions, please visit our contributor's guide.
Compatibility
Supported Platforms:
Supported Hardware:
Development Container (Dev Container)
Dev Containers for the Triton project are available from the triton-dev-containers repository.
Key Benefits:
How to Use the Dev Container:
For detailed instructions on how to use the dev containers, please see the dev container user guide.