Streamed and parallel demultiplexing of fastq files
Quickstart
Requirements and usage
pydemult allows you to demultiplex fastq files in a streamed and parallel way. It expects that a sample barcode can be matched by a regular expression from the first line of each fastq entry and that sample barcodes are known in advance.

Suppose we have a file containing sample barcodes like this:
and a typical entry in the fastq file looks like this:
Since the sample barcode is six bases long, we have to set the corresponding --barcode-regex option to (.*):(?P<CB>[ATGCN]{6}) in the call.
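As a sanity check, this pattern can be exercised with Python's re module. The read name below is invented for illustration; real headers depend on the sequencing setup:

```python
import re

# --barcode-regex from the call above: the barcode is the six
# bases after the last colon of the read name.
barcode_regex = re.compile(r"(.*):(?P<CB>[ATGCN]{6})")

# Hypothetical read name for demonstration purposes only.
read_name = "@NB501086:32:H3LGTAFXX:1:11101:10000:1001:ACGTGA"

match = barcode_regex.match(read_name)
print(match.group("CB"))  # -> ACGTGA
```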
Barcode and UMI regular expressions

By default, pydemult parses the read name for the cell barcode with regular expressions. Cell barcodes are indicated by a capturing group called CB, while (optional) UMIs are indicated by a capturing group called UMI. Some examples include:

(.*):(?P<CB>[ATGCN]{11}), for a cell barcode of length 11 that is present after the last colon of the read name.
(.*):CELL_(?P<CB>[ATGCN]{10}):UMI_(?P<UMI>[ATGCN]{8}), for a cell barcode of length 10, followed by a UMI sequence of length 8. For DropSeq data preprocessed by the umis tool, a regex like this is advisable.
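The second pattern can be tried the same way. The read name below is a made-up example following the CELL_/UMI_ naming scheme:

```python
import re

# Pattern from the second example above: a 10-base cell barcode
# plus an 8-base UMI, each in its own named capturing group.
pattern = re.compile(r"(.*):CELL_(?P<CB>[ATGCN]{10}):UMI_(?P<UMI>[ATGCN]{8})")

# Hypothetical read name for demonstration purposes only.
read_name = "@SRR1873277.1:CELL_GGTCCAGAAT:UMI_ACCGGTTA"

m = pattern.match(read_name)
print(m.group("CB"), m.group("UMI"))  # -> GGTCCAGAAT ACCGGTTA
```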
Output

pydemult will create a compressed fastq file for each sample barcode, with the filename taken from the corresponding Sample column entry of barcodes.txt.
A note on multithreading

pydemult divides its work into a demultiplexing part and an output part. The main thread streams the input and lazily distributes data blobs (of size --buffer-size) across n demultiplexing threads (set with --threads), where the actual work happens. Demultiplexed input is then sent to m threads that write it into individual output files (set with --writer-threads). Reading and demultiplexing are fast, CPU-bound operations, while output speed is determined by how fast data can be written to the underlying file system. In our experience, output is much slower than demultiplexing itself and requires proportionally more cores to reduce the overall runtime. We obtained the best results when using three writer threads for each demultiplexing thread (a 1:3 ratio of --threads to --writer-threads).
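The pipeline described above can be sketched with threads and queues. This is a simplified illustration of the producer/consumer pattern, not pydemult's actual implementation; the thread counts, the regex, and the in-memory "output files" are placeholders:

```python
import queue
import re
import threading
from collections import defaultdict

# Toy reader -> demultiplexer -> writer pipeline. Hypothetical six-base
# barcode regex, as in the quickstart example.
BARCODE_RE = re.compile(r"(.*):(?P<CB>[ATGCN]{6})")
N_WRITERS = 3  # illustrating the suggested 1:3 demux-to-writer ratio

def demux(in_q, out_q):
    # Match each read name and tag the entry with its barcode.
    while (entry := in_q.get()) is not None:
        m = BARCODE_RE.match(entry[0])  # entry[0] is the read name line
        if m:
            out_q.put((m.group("CB"), entry))
    out_q.put(None)  # forward the end-of-input sentinel

def write(out_q, files, lock):
    # Append each record to its per-barcode bucket (stand-in for a file).
    while (item := out_q.get()) is not None:
        barcode, entry = item
        with lock:
            files[barcode].append(entry)
    out_q.put(None)  # let sibling writers terminate too

in_q, out_q = queue.Queue(), queue.Queue()
files, lock = defaultdict(list), threading.Lock()
threads = [threading.Thread(target=demux, args=(in_q, out_q))]
threads += [threading.Thread(target=write, args=(out_q, files, lock))
            for _ in range(N_WRITERS)]
for t in threads:
    t.start()

reads = [("@read1:AAAAAA", "ACGT", "+", "IIII"),
         ("@read2:CCCCCC", "TTTT", "+", "IIII")]
for entry in reads:
    in_q.put(entry)
in_q.put(None)  # end of input
for t in threads:
    t.join()

print(sorted(files))  # -> ['AAAAAA', 'CCCCCC']
```

In the real tool the writers also compress the per-sample fastq output, which is one reason write throughput, rather than demultiplexing, dominates the runtime.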
License

The project is licensed under the MIT license. See the LICENSE file for details.