Authors: Gaurav Sharma, Computational Biologist at Ocean Genomics; Jeremy Simon, Assistant Professor at UNC Chapel Hill; Rob Patro, Cofounder at Ocean Genomics
This project has been made possible by the team at Ocean Genomics, and by a grant from the Chan Zuckerberg Initiative.
In this tutorial we will use a salmon and alevin-fry based pipeline to analyse SPLiT-seq data. We will also use a new tool, splitp, which modifies the fastq file to handle the scenario where paired barcodes are used during the first round of combinatorial barcoding. We start by downloading the data; after quantification, we will perform single-cell analysis.
Introduction
Split-pool ligation-based transcriptome sequencing, or SPLiT-seq, is a single-cell RNA-seq method that uses combinatorial barcoding to label the cell or nucleus source of RNA. This method was introduced in the article “Single-cell profiling of the developing mouse brain and spinal cord with split-pool barcoding” by Rosenberg et al., Science 2018. As described in the article, different barcodes are ligated during different rounds of barcoding. There are three 8 bp cell barcodes and a 10 bp UMI. Currently, SPLiT-seq can be performed with two chemistries, v1 and v2. Both v1 and v2 have a 24 bp barcode and a 10 bp UMI.
In this tutorial, we will use the data from the aforementioned article. The sample used in this tutorial, available for download on GEO, is a mix of cell lines HEK293, HelaS3, and NIH/3T3.
Download the input data
Create a new directory for the analysis
$ mkdir split-seq_tutorial
$ cd split-seq_tutorial
The SRA run corresponding to the data is SRR6750057 and can be downloaded from SRA.
$ mkdir -p data
$ fasterq-dump SRR6750057 -S -O ./data/ -e 16 -m 10GB -b 10MB
The other necessary files can be downloaded from here and saved in the data directory. Running fastp to trim reads and remove low-quality reads is helpful.
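As a rough sketch, a basic paired-end fastp run might look like the following (the output names here are our own choice, and the rest of the tutorial continues with the original file names; see fastp --help for the full set of options; also keep in mind that the barcode read must retain the fixed positions expected by splitp and salmon downstream):
$ fastp -i ./data/SRR6750057_1.fastq -I ./data/SRR6750057_2.fastq -o ./data/SRR6750057_trimmed_1.fastq -O ./data/SRR6750057_trimmed_2.fastq -w 16 -j fastp.json -h fastp.html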
Paired barcoding case
In some cases, barcodes with known pairing are used during the first round of combinatorial barcoding. This results in the same cell getting a different barcode during the first round but the same barcodes during later rounds. For example, if GATAGACA is paired with AACGTGAT, then the barcodes CTGTAGCC-ACACAGAA-AACGTGAT and CTGTAGCC-ACACAGAA-GATAGACA denote the same cell. The sequencing order is the opposite of the order of ligation. During the first round, a transcript may be amplified by either an oligo-dT primer or a random hexamer, and the pairing between the barcodes corresponds to the primer of amplification.
To deal with this case, splitp can be used. Splitp is a new tool that should eventually support a variety of complex read pre-processing steps necessary for various single-cell protocols, though for now the focus is on the pre-processing needed for the SPLiT-seq protocol described here. The mapping file between oligo-dT primers and random hexamers is available at the link provided earlier. The command would be:
$ splitp -r SRR6750057_2.fastq -b ./data/oligo_hex_bc_mapping.txt -s 87 -e 94 -o > SRR6750057_corrected_2.fastq
splitp usage:
USAGE:
splitp [OPTIONS] --read-file <READ_FILE> --bc-map <BC_MAP> --start <START> --end <END>
OPTIONS:
-b, --bc-map <BC_MAP> the map of oligo-dT to random hexamers
-e, --end <END> end position of the random barcode
-h, --help Print help information
-o, --one-hamming consider 1-hamming distance neighbors of random hexamers
-r, --read-file <READ_FILE> the input R2 file
-s, --start <START> start position of the random barcode
-V, --version Print version information
The bc-map, which maps barcodes corresponding to oligo-dT amplification to those corresponding to random hexamers, should be a tab-separated text file with the oligo-dT primer barcodes in the first column. The file must have a header comment line.
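For illustration, a minimal bc-map along these lines might look like the excerpt below (this uses the example pair from above, but which member of a pair is the oligo-dT barcode depends on your kit's mapping file, so treat the excerpt as hypothetical):
# oligo_dT	random_hexamer
AACGTGAT	GATAGACA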
Creating the index
For this tutorial, we will need salmon (>=v1.7.0) and alevin-fry (>=v0.4.2). For the index, we will use a splici (spliced + intron) index, which contains both spliced mRNA transcripts and intronic sequences. This allows us to use the unspliced, spliced, ambiguous (USA) quantification mode of alevin-fry, which generates these three types of counts for each gene. It can be used to compute RNA velocity as well.
Since the fastq contains a mix of human and mouse cell lines, we will first create a combined index of both species. For details on how to create a splici index, please refer to the splici tutorial. For human, we will use GENCODE v31 and for mouse GENCODE vM25. The files needed to generate the splici index can be downloaded from the Zenodo link.
The following sequence of commands was used to generate the index. First combine the references, then generate the combined splici reference, and finally create the splici index.
Commands to combine the references
$ sed 's/chr/Mchr/g' mouse.gencode.vM25.genome.fa > mouse_gencode_vM25_genome.fa
$ sed 's/chr/Mchr/g' mouse.gencode.vM25.gene_annotations.gtf > mouse_gencode_vM25_gene_annotations.gtf
$ tail -n+6 mouse_gencode_vM25_gene_annotations.gtf > mgg
$ mv mgg mouse_gencode_vM25_gene_annotations.gtf
$ cat human.gencode.v35.genome.fa mouse_gencode_vM25_genome.fa > combined_genome.fa
$ cat human.gencode.v35.gene_annotations.gtf mouse_gencode_vM25_gene_annotations.gtf > combined_gene_annotations.gtf
The following R code was used to generate the reference for indexing (using the roe package; one could also use pyroe instead):
library(roe)
genome_path <- "combined_genome.fa"
gtf_path <- "combined_gene_annotations.gtf"
filename_prefix = "transcriptome_splici"
output_dir = "splici_index_reference"
read_length = 66
flank_trim_length = 5
make_splici_txome(
genome_path = genome_path,
gtf_path = gtf_path,
read_length = read_length,
output_dir = output_dir,
flank_trim_length = flank_trim_length,
filename_prefix = filename_prefix,
dedup_seqs = FALSE,
no_flanking_merge = FALSE
)
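If you prefer the pyroe route mentioned above, a rough command-line sketch of the equivalent step would be something like the following (argument names may differ between versions, so check pyroe make-splici --help):
# positional args: genome FASTA, GTF, read length, output directory
$ pyroe make-splici combined_genome.fa combined_gene_annotations.gtf 66 splici_index_reference --flank-trim-length 5 --filename-prefix transcriptome_splici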
Finally, salmon index is used to generate the index:
salmon index -t splici_index_reference/transcriptome_splici_fl61.fa -i human_mouse_index -p 32
Quantification
The quantification process is as follows: salmon alevin performs mapping of the fragments to the index to generate a RAD file; alevin-fry then uses that output to generate a permit list, collate the RAD file, and perform quantification of the collated RAD file. To read more about these steps, please refer to the alevin-fry documentation.
For the first step, we use salmon alevin to generate a RAD (Reduced Alignment Data) file. For the SPLiT-seq protocol, we can use either the --splitseqV1 or --splitseqV2 flag, for the v1 and v2 chemistries respectively.
The chemistry used by Rosenberg et al. is v1, whereas v2 is used for the commercial data from Parse Biosciences. Both are similar, with the only difference being the position of the third barcode: in v1 it occurs at position 78 and in v2 at position 86 of the sequenced read. The salmon alevin implementation assumes that, like the files from Rosenberg et al., the biological reads are in the R1 fastq file and the barcodes and UMIs are in R2. If your data has the opposite order, i.e., barcodes and UMIs in R1, then in the command below use -1 for R2 and -2 for R1.
Here we use the --sketch flag to perform pseudoalignment with structural constraints, though selective-alignment could also be used (by passing --rad instead of --sketch). For a detailed list of options, please use salmon alevin -h.
$ salmon alevin -i ./data/splici_index_human.v31_mouse.vM25 -l A -1 ./data/SRR6750057_1.fastq.gz -2 ./data/SRR6750057_corrected_2.fastq.gz -p 32 --splitseqV1 -o ./SRR6750057_run --sketch
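If your barcodes and UMIs are in R1 rather than R2 (see the note above), the invocation would simply swap the two files. A sketch with hypothetical file names, where splitp has been run on the barcode-containing R1 file, would be:
# hypothetical files: sample_2.fastq.gz holds the biological reads (R2), sample_corrected_1.fastq.gz the splitp-corrected barcode reads (R1)
$ salmon alevin -i ./data/splici_index_human.v31_mouse.vM25 -l A -1 ./data/sample_2.fastq.gz -2 ./data/sample_corrected_1.fastq.gz -p 32 --splitseqV1 -o ./sample_run --sketch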
To correct and associate sequenced barcodes with the most likely “corrected” barcodes, alevin-fry generate-permit-list is used. Here we use the flag -k, which attempts to estimate the number of high-quality barcodes to include based on the “knee” of the cumulative frequency distribution of the reads associated with barcodes. Then the RAD file is collated using alevin-fry collate and quantification is performed using alevin-fry quant. The counts are generated as mtx files with cells as rows and genes as columns.
$ alevin-fry generate-permit-list -d both -i ./SRR6750057_run --output-dir ./SRR6750057_out_permit_knee -k
$ alevin-fry collate -r ./SRR6750057_run -t 16 -i ./SRR6750057_out_permit_knee
$ alevin-fry quant -m splici_index_reference/transcriptome_splici_fl61_t2g_3col.tsv -i ./SRR6750057_out_permit_knee -o ./SRR6750057_counts -t 16 -r cr-like-em --use-mtx
Post-quantification analysis
library(fishpond)
library(data.table)
library(Seurat)
library(miQC)
library(SeuratWrappers)
library(flexmix)
library(SingleCellExperiment)
library(Matrix)
library(stringr)
Since SPLiT-seq/ParseBio amplifies transcripts based on both oligo-dT and random hexamer priming, we recommend using outputFormat = "snRNA" to maximally capture signal occurring anywhere in the body of the mature or immature transcript. More quantitative details and benefits of this are to come in a future manuscript.
# Read in entire output directory from alevin-fry using fishpond, will create SingleCellExperiment object
sce <- loadFry(fryDir = "SRR6750057_counts/", outputFormat = "snRNA")
## locating quant file
## Reading meta data
## USA mode: TRUE
## Processing 116057 genes and 301 barcodes
## Using pre-defined output format: snRNA
## Building the 'counts' assay, which contains U S A
## Constructing output SingleCellExperiment object
## Done
# Convert gene ids to gene symbols
gene_annotation_table <- fread("./data/combined_geneid_genesymbols.txt")
geneNames <- gene_annotation_table$GeneSymbol[match(rownames(sce),gene_annotation_table$Geneid)]
rownames(sce) <- geneNames
# Some gene names are duplicated such as the same gene in chromosomes X and Y. Those are merged here.
exp.gene.grp <- t(sparse.model.matrix(~ 0 + geneNames))
exp.summarized <- exp.gene.grp %*% counts(sce)
rownames(exp.summarized) <- rownames(exp.summarized) %>% str_replace_all("geneNames","")
# Create Seurat object
rosenberg300ubc.seurat <- CreateSeuratObject(counts = exp.summarized)
## Warning: Feature names cannot have underscores ('_'), replacing with dashes
## ('-')
# Compute mitochondrial contamination and filter out low quality cells using miQC
rosenberg300ubc.seurat <- subset(rosenberg300ubc.seurat, subset = nCount_RNA > 750 & nFeature_RNA > 375)
rosenberg300ubc.seurat <- PercentageFeatureSet(object = rosenberg300ubc.seurat, pattern = "^MT|^Mouse-mt", col.name = "percent.mt")
rosenberg300ubc.seurat <- RunMiQC(rosenberg300ubc.seurat, percent.mt = "percent.mt", nFeature_RNA = "nFeature_RNA", posterior.cutoff = 0.75, model.slot = "flexmix_model")
## Warning in RunMiQC(rosenberg300ubc.seurat, percent.mt = "percent.mt",
## nFeature_RNA = "nFeature_RNA", : flexmix returned only 1 cluster
## defaulting to backup.percentile for filtering
## Warning: Adding a command log without an assay associated with it
rosenberg300ubc.seurat.filtered <- subset(rosenberg300ubc.seurat, miQC.keep == "keep")
dim(rosenberg300ubc.seurat.filtered)
## [1] 114900 297
# Normalize and scale data
rosenberg300ubc.seurat.filtered <- NormalizeData(rosenberg300ubc.seurat.filtered)
rosenberg300ubc.seurat.filtered <- FindVariableFeatures(rosenberg300ubc.seurat.filtered, nfeatures = 5000)
all.features <- rownames(rosenberg300ubc.seurat.filtered@assays$RNA@counts)
rosenberg300ubc.seurat.filtered <- ScaleData(rosenberg300ubc.seurat.filtered, features = all.features)
## Centering and scaling data matrix
# Run PCA, then determine how many PCs are informative
rosenberg300ubc.seurat.filtered <- RunPCA(rosenberg300ubc.seurat.filtered, verbose = FALSE, npcs = 100)
ElbowPlot(rosenberg300ubc.seurat.filtered, ndims = 30, reduction = "pca")
# Run UMAP, and identify cell clusters
rosenberg300ubc.seurat.filtered <- RunUMAP(rosenberg300ubc.seurat.filtered, dims = 1:10)
## Warning: The default method for RunUMAP has changed from calling Python UMAP via reticulate to the R-native UWOT using the cosine metric
## To use Python UMAP via reticulate, set umap.method to 'umap-learn' and metric to 'correlation'
## This message will be shown once per session
## 12:27:11 UMAP embedding parameters a = 0.9922 b = 1.112
## 12:27:11 Read 297 rows and found 10 numeric columns
## 12:27:11 Using Annoy for neighbor search, n_neighbors = 30
## 12:27:11 Building Annoy index with metric = cosine, n_trees = 50
## 0% 10 20 30 40 50 60 70 80 90 100%
## [----|----|----|----|----|----|----|----|----|----|
## **************************************************|
## 12:27:11 Writing NN index file to temp file /tmp/RtmpnqRw7m/file13908e7ceec965
## 12:27:11 Searching Annoy index using 1 thread, search_k = 3000
## 12:27:11 Annoy recall = 100%
## 12:27:12 Commencing smooth kNN distance calibration using 1 thread
## 12:27:13 Initializing from normalized Laplacian + noise
## 12:27:13 Commencing optimization for 500 epochs, with 11226 positive edges
## 12:27:15 Optimization finished
rosenberg300ubc.seurat.filtered <- FindNeighbors(rosenberg300ubc.seurat.filtered, dims = 1:10, verbose = FALSE)
rosenberg300ubc.seurat.filtered <- FindClusters(rosenberg300ubc.seurat.filtered, verbose = FALSE, resolution = 0.1, algorithm=2)
# Plot UMAP labeled by clusters
DimPlot(rosenberg300ubc.seurat.filtered,reduction = "umap", label = TRUE)
# Find markers of the 3 clusters
markers = FindAllMarkers(rosenberg300ubc.seurat.filtered, assay="RNA", slot="scale.data", only.pos=T)
## Calculating cluster 0
## Calculating cluster 1
## Calculating cluster 2
# Cross-referenced top markers with Human Protein Atlas RNA expression in cell lines
HEK293.markers = c("SLIT2", "ZNF829", "KPNA5", "BRD2", "SPEN")
HELA.markers = c("NPR3", "PDE2A", "COL7A1", "FOLR1")
NIH3T3.markers = c("Mouse-Srrm2", "Mouse-Hspa5", "Mouse-Scd2", "Mouse-Huwe1") # These are the only mouse cells in the mix, so any mouse gene symbols will do
goi = c(HEK293.markers, HELA.markers, NIH3T3.markers)
# Create violin plot
VlnPlot(rosenberg300ubc.seurat.filtered, features = goi, stack=T, flip=T, sort=T, slot="scale.data")
sessionInfo()
## R version 4.1.2 (2021-11-01)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 20.04.4 LTS
##
## Matrix products: default
## BLAS: /opt/R-4.1.2/lib/R/lib/libRblas.so
## LAPACK: /opt/R-4.1.2/lib/R/lib/libRlapack.so
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats4 stats graphics grDevices utils datasets methods
## [8] base
##
## other attached packages:
## [1] stringr_1.4.0 Matrix_1.4-0
## [3] SingleCellExperiment_1.16.0 SummarizedExperiment_1.24.0
## [5] Biobase_2.54.0 GenomicRanges_1.46.1
## [7] GenomeInfoDb_1.30.1 IRanges_2.28.0
## [9] S4Vectors_0.32.3 BiocGenerics_0.40.0
## [11] MatrixGenerics_1.6.0 matrixStats_0.61.0
## [13] flexmix_2.3-17 lattice_0.20-45
## [15] SeuratWrappers_0.3.0 miQC_1.2.0
## [17] SeuratObject_4.0.4 Seurat_4.1.0
## [19] data.table_1.14.2 fishpond_2.0.1
##
## loaded via a namespace (and not attached):
## [1] plyr_1.8.6 igraph_1.2.11 lazyeval_0.2.2
## [4] splines_4.1.2 listenv_0.8.0 scattermore_0.7
## [7] ggplot2_3.3.5 digest_0.6.29 htmltools_0.5.2
## [10] fansi_1.0.2 magrittr_2.0.2 tensor_1.5
## [13] cluster_2.1.2 ROCR_1.0-11 limma_3.50.3
## [16] remotes_2.4.2 globals_0.14.0 R.utils_2.11.0
## [19] spatstat.sparse_2.1-0 colorspace_2.0-2 ggrepel_0.9.1
## [22] xfun_0.29 dplyr_1.0.7 crayon_1.4.2
## [25] RCurl_1.98-1.5 jsonlite_1.7.3 spatstat.data_2.1-2
## [28] survival_3.2-13 zoo_1.8-9 glue_1.6.1
## [31] polyclip_1.10-0 gtable_0.3.0 zlibbioc_1.40.0
## [34] XVector_0.34.0 leiden_0.3.9 DelayedArray_0.20.0
## [37] future.apply_1.8.1 abind_1.4-5 scales_1.1.1
## [40] DBI_1.1.2 miniUI_0.1.1.1 Rcpp_1.0.8
## [43] viridisLite_0.4.0 xtable_1.8-4 reticulate_1.24
## [46] spatstat.core_2.3-2 rsvd_1.0.5 htmlwidgets_1.5.4
## [49] httr_1.4.2 RColorBrewer_1.1-2 ellipsis_0.3.2
## [52] modeltools_0.2-23 ica_1.0-2 farver_2.1.0
## [55] R.methodsS3_1.8.1 pkgconfig_2.0.3 nnet_7.3-16
## [58] sass_0.4.0 uwot_0.1.11 deldir_1.0-6
## [61] utf8_1.2.2 labeling_0.4.2 tidyselect_1.1.1
## [64] rlang_1.0.0 reshape2_1.4.4 later_1.3.0
## [67] munsell_0.5.0 tools_4.1.2 cli_3.1.1
## [70] generics_0.1.2 ggridges_0.5.3 evaluate_0.14
## [73] fastmap_1.1.0 yaml_2.2.2 goftest_1.2-3
## [76] knitr_1.37 fitdistrplus_1.1-6 purrr_0.3.4
## [79] RANN_2.6.1 pbapply_1.5-0 future_1.23.0
## [82] nlme_3.1-153 mime_0.12 R.oo_1.24.0
## [85] compiler_4.1.2 rstudioapi_0.13 plotly_4.10.0
## [88] png_0.1-7 spatstat.utils_2.3-0 tibble_3.1.6
## [91] bslib_0.3.1 stringi_1.7.6 highr_0.9
## [94] RSpectra_0.16-0 vctrs_0.3.8 pillar_1.6.5
## [97] lifecycle_1.0.1 BiocManager_1.30.16 spatstat.geom_2.3-1
## [100] lmtest_0.9-39 jquerylib_0.1.4 RcppAnnoy_0.0.19
## [103] cowplot_1.1.1 bitops_1.0-7 irlba_2.3.5
## [106] httpuv_1.6.5 patchwork_1.1.1 R6_2.5.1
## [109] promises_1.2.0.1 KernSmooth_2.23-20 gridExtra_2.3
## [112] parallelly_1.30.0 codetools_0.2-18 MASS_7.3-54
## [115] gtools_3.9.2 assertthat_0.2.1 sctransform_0.3.3
## [118] GenomeInfoDbData_1.2.7 mgcv_1.8-38 parallel_4.1.2
## [121] grid_4.1.2 rpart_4.1-15 tidyr_1.2.0
## [124] rmarkdown_2.11 Rtsne_0.15 shiny_1.7.1