Q: The TCGA is now over and the DCC data portal is offline, where do I find legacy information about TCGA?
A: TCGA data processing formally ended in July of 2016, with content from its data portal and CGHub having been migrated to the NCI Genomic Data Commons data portal. We suggest that all questions about TCGA data, policies, practices and procedures be directed to the TCGA program office, the NCI Center for Cancer Genomics, or the Genomic Data Commons.
Q: What is the best way for us to contact you?
A: To help us respond to you faster and more completely, please use one of these methods to contact us as a group rather than emailing individual members of our team privately.
Q: Where is your documentation?
A: Though the data and software in our pipeline are constantly evolving, we believe that process clarity & operational transparency streamline efforts and ultimately improve science. We therefore endeavor to provide a reasonable level of background documentation on data processing and algorithms, given our time, resource, and priority constraints. In addition, we generate hundreds of analysis reports per month, each containing detailed summaries, figures, and tables, as well as literature references and links to other documentation on the algorithmic codes in our pipeline. For each run we also provide a summary report of samples ingested, along with analysis notes and data notes. Our pipeline nomenclature is described below, and further description of the TCGA data formats is available here. Finally, the analysis tasks in the latest run are shown below as a directed graph, which you may click to expand; clicking any enabled node opens the Nozzle report generated for that analysis result.
Q: How or where can I access the inputs and/or results of a run?
A: In one of several ways, all of which are governed by TCGA data usage policy (note that only the TCGA DCC requires password access; all Firehose and FireBrowse mechanisms are completely open for public use):
from which you may simply navigate to the tumor type and run date of interest. More information on the nomenclature and content of these files is given below. Microsoft Windows users can unpack the archive files with the WinRAR utility, while Unix and Mac OS X users can use the gzip and/or tar utilities.
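For scripted workflows, the archives can also be unpacked directly from Python, since they are ordinary gzip-compressed tarballs. The sketch below builds a stand-in archive on the fly; the archive name and contents are illustrative, not a real Firehose deliverable:

```python
import io
import os
import tarfile

# Stand-in for a downloaded Firehose archive (name and payload are made up):
demo_name = "gdac.demo_COAD.Clinical.Level_4.tar.gz"
payload = b"clinical data\n"

with tarfile.open(demo_name, "w:gz") as tf:
    info = tarfile.TarInfo("COAD.Clinical.Level_4/All_CDEs.txt")
    info.size = len(payload)
    tf.addfile(info, io.BytesIO(payload))

# Unpacking works the same way for genuine Firehose tar.gz archives:
with tarfile.open(demo_name, "r:gz") as tf:
    tf.extractall("unpacked")

extracted = os.path.join("unpacked", "COAD.Clinical.Level_4", "All_CDEs.txt")
print(open(extracted, "rb").read())
```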
Q: That's great, but how do I navigate all of your results on the web?
A: There are multiple ways to navigate our online results. The best place to start is firebrowse.org:
This gives you access to both our standard data packages (right column), and the results of our standard analyses suite (left column).
Analyses results may also be accessed from the unified reports:
For fine-grained querying of results via the web, we have an interactive API:
Q: Can I easily search your entire suite of runs and results?
A: Yes, in multiple ways: First, our homepage includes a Google-powered search mechanism
The same Google search mechanism is available in FireBrowse, in addition to the wealth of query capability already offered by the FireBrowse API. Please see these search examples for ideas; we intend to continually improve our search capability by adding new keywords, synonyms, annotations, etc.
Q: How do I cite Firehose results in my paper?
A: In one of several ways:
Standard Data Runs: If you have used any of our data archives in your research then please cite the respective data run using the instructions and DOIs given here.
Manuscript: the GDAC Firehose manuscript is under preparation, and when published will provide an additional reference point for citation.
Q: When is the next run?
A: As of December 2014 the Broad Institute GDAC will aim to provide:
The motivations for these changes in our run schedule are described here, here and here. The process of data and code flow in our runs is outlined here. Note that our standard data and analysis runs are provided as archival and dissemination mechanisms for wide public consumption; while our custom AWG runs are performed on an as-needed basis, to provide the most up-to-date sample cohorts and analyses to the respective working group (avoiding the time and sample lag of the monthly & quarterly runs). If you are looking for November 2012 runs, read here to learn why they were not performed.
Q: How do I use a graphic from FireBrowse in my paper?
A: The FireBrowse visualization widgets (viewGene, iCoMut) do not explicitly provide a screen-capture feature; but if you use your browser's Print feature and then save the result to a PDF, you'll have a vector-graphics image that can be scaled without loss of fidelity.
Q: There are many acronyms used in TCGA, for example to identify disease cohorts. Where can I find what these acronyms mean?
A: Consult the TCGA Encyclopedia for general questions. There are several ways one can map cohort abbreviations to full disease names, including:
Note that our portals list 38 cohorts while the TCGA page shows 34, the difference being aggregate cohorts that the Firehose GDAC constructed for the convenience of TCGA and the research community. As of December 2016 these aggregate cohorts are:
COADREAD: colorectal, combines COAD + READ
Q: Why does your table of ingested data show that disease type XYZ has N mutation samples?
A: Our precedence rules for ingesting mutation samples are:
Q: How can I determine the allelic fraction of mutations in your MAF files?
A: Unfortunately, there is no guarantee that a MAF file will have this information, as it is not part of the MAF specification. Some centers have added it as custom columns; for instance, many Broad MAFs include t_ref_count and t_alt_count from MuTect, and WashU MAFs may include tumor_vaf.
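Where those custom columns are present, the allelic fraction follows directly from the read counts; a minimal sketch (column names as in Broad MAFs, the example row itself is made up):

```python
def allelic_fraction(t_ref_count, t_alt_count):
    """Tumor variant allelic fraction from MuTect-style read counts."""
    depth = t_ref_count + t_alt_count
    if depth == 0:
        return None  # no coverage at this site
    return t_alt_count / depth

# Illustrative MAF row carrying the Broad custom columns:
row = {"Hugo_Symbol": "TP53", "t_ref_count": 60, "t_alt_count": 40}
af = allelic_fraction(row["t_ref_count"], row["t_alt_count"])
print(af)  # 0.4
```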
Q: Where can I find the mutation rates calculated during Firehose analyses?
A: Mutation rates are calculated by MutSig, and can be found in the
Q: What are the differences between MutSig 1.5, 2.0, and CV?
A: MutSig relies on several sources of evidence in the data to estimate the amount of positive selection a gene underwent during tumorigenesis. The three main sources are:
The first line of evidence, Abundance, goes into the core significance calculation performed in all versions of MutSig. In MutSig1.0, this is simply called "p". MutSig1.0 assumes a constant background mutation rate (BMR) across all genes in the genome and all patients in the patient cohort. In MutSig1.5, this is also called "p", but MutSig1.5 uses information from synonymous mutations to roughly estimate gene-specific BMRs. Later versions of MutSig (MutSigS2N and MutSigCV) have increasingly sophisticated procedures for treating the heterogeneity in per-gene, per-patient, and per-context BMRs, but they are all answering essentially the same question about Abundance of mutations above the background level.
The other lines of evidence, Conservation and Clustering, are examined by a separate part of MutSig that performs many permutations, comparing the distributions of mutations observed to the null distribution from these permutations. The output of this permutation procedure is a set of additional p-values: p_clust is the significance of the amount of clustering in hotspots within the gene. p_cons is the significance of the enrichment of mutations in evolutionarily conserved positions of the gene. Finally, p_joint is the joint significance of these two signals (Conservation and Clustering), calculated according to their joint distribution. The reason for calculating p_joint is to ensure there is no double-counting of the significance due, for example, to clustering in a conserved hotspot.
Combining all three lines of evidence: In order to take a full accounting of the signals of positive selection in a given gene, we combine all three lines of evidence. This is done by using the Fisher method of combining p-values. The two p-values combined are the "p" (or "p_classic") from the analysis of mutation Abundance, and the p_joint from the analysis of Conservation and Clustering in MutSig2.0. More information on MutSig is available on its entry in the CGA software page, the 2013 and 2014 MutSig publications and dozens of TCGA-related papers.
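The Fisher combination itself is straightforward to sketch: for two p-values the statistic -2(ln p1 + ln p2) follows a chi-square distribution with 4 degrees of freedom under the null, whose survival function has a closed form. The input p-values below are illustrative, not from a real MutSig run:

```python
import math

def fisher_combine(p1, p2):
    """Combine two p-values with Fisher's method.

    X = -2*(ln p1 + ln p2) is chi-square distributed with 4 degrees of
    freedom under the null; for df=4 the survival function is exactly
    exp(-x/2) * (1 + x/2).
    """
    x = -2.0 * (math.log(p1) + math.log(p2))
    return math.exp(-x / 2.0) * (1.0 + x / 2.0)

# Illustrative inputs standing in for p_classic and p_joint:
p_classic, p_joint = 1e-4, 1e-3
print(fisher_combine(p_classic, p_joint))
```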
Q: What do the different fields for significantly mutated genes mean?
A: Many of these fields depend on what version of MutSig was used. The following table covers the majority of them:
Q: Where did my mutations go?
A: MAFs processed by MutSig may have mutations removed for one of several reasons:
Q: Why does your table of ingested data show that disease type XYZ has N methylation samples?
A: We ingest and support both of the major methylation platforms (Infinium HumanMethylation450 and HumanMethylation27), so the entries in our data table give the sum of both. However, as noted in our June 2012 release notes, Firehose does not yet include the statistical algorithms used by TCGA AWGs to merge both of these methylation platforms into a single bolus; until those are shared we prefer
Q: What TCGA sample types are Firehose pipelines executed upon?
A: Since inception Firehose analyses have been executed upon tumor samples and then correlated with clinical data. Nearly all analyses utilize primary solid tumor samples (numeric code 01, short letter code "TP" as given in the TCGA sample type codes table), with two exceptions:
Also note that each stddata run dashboard contains a samples summary report, which explains why – even though our GDAC mirrors ALL data from the DCC on a daily basis – not every sample is ingested into Firehose*.
*Specifically, we filter out ALL samples listed as Redacted in the TCGA Annotations Manager, and FFPE samples are only available in standard data archives, not analyses.
Programmatically, the FireBrowse patients api will give you a list of all patients in each cohort, either in bulk (all cohorts) or any subset you choose. It doesn't give the complete aliquot barcode yet, but will in the very near future. In addition to playing with this API interactively through the FireBrowse UI, there are also Python, Unix, and R bindings, and even a pip-installable package.
Q: How do I analyze samples that aren't included in your Firehose runs (e.g. Blood Normals, Solid Tissue Normals, etc.)?
A: All analysis-ready patient samples are available in our stddata archives; control samples are not. You can obtain the stddata archives using our firehose_get utility or by traversing the FireBrowse user interface or stddata API. Each sample in the archive is identified by a TCGA Barcode that contains the sample type. As shown below, the Sample portion of the barcode can be looked up in the sample type code table available here (as can the tissue source site, aka TSS, et cetera). In addition, FireBrowse makes much of this information available programmatically in its metadata API.
TCGA Barcode Description: As described here, a batch is uniquely determined by the first shipment of a group of analytes (or plate) from the Biospecimen Core Resource. In most cases, then, the plate number of a sample is effectively synonymous with its batch id; the exception is when additional analytes from a participant are subsequently shipped, in which case the batch id remains fixed at the first plate number.
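A full aliquot barcode can be split into these fields mechanically; a sketch following the published barcode layout (the example barcode is illustrative):

```python
def parse_tcga_barcode(barcode):
    """Split a full TCGA aliquot barcode into its named fields.

    Layout: TCGA-TSS-Participant-Sample(+Vial)-Portion(+Analyte)-Plate-Center
    """
    project, tss, participant, sample_vial, portion_analyte, plate, center = \
        barcode.split("-")
    return {
        "project": project,
        "tss": tss,                      # tissue source site
        "participant": participant,
        "sample_type": sample_vial[:2],  # e.g. "01" = primary solid tumor (TP)
        "vial": sample_vial[2:],
        "portion": portion_analyte[:2],
        "analyte": portion_analyte[2:],
        "plate": plate,                  # usually synonymous with batch id
        "center": center,
    }

fields = parse_tcga_barcode("TCGA-02-0001-01C-01D-0182-01")
print(fields["sample_type"], fields["plate"])
```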
Q: Where can I find additional information about a TCGA sample/analyte?
A: While TCGA was active, sample notes (aka annotations) were maintained at the TCGA Annotations Manager. In July of 2016, however, the TCGA data portal went offline, with all data (including annotations) having been migrated to the Genomic Data Commons. Please contact the GDC staff for details on how annotation information may be obtained after July 2016.
Q: What do you do when multiple aliquot barcodes exist for a given sample/portion/analyte combination?
A: To date, GDAC analyses have proceeded upon a single tumor sample per patient, so when multiple aliquot barcodes exist we try to select the most scientifically advantageous one among them. In the absence of disambiguating QC metrics, we use the following rules to make such selections:
For example, consider the archive
which contains, among many others, the following aliquot barcodes
By the above rules our pipelines would select the second aliquot, as it has the higher plate number. Finally, note that as of Fall 2013 we segregate FFPE samples from frozen tissue samples (and have never performed analyses upon FFPE samples); this segregation is reflected in the sample counts and provenance of our samples summary report, with FFPE cases being listed in their own section.
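The plate-number rule from the example above can be sketched in a few lines (the barcodes here are illustrative):

```python
def pick_aliquot(barcodes):
    """Among aliquot barcodes for the same sample/portion/analyte,
    prefer the one with the highest plate number (field 6 of the
    hyphen-delimited barcode)."""
    return max(barcodes, key=lambda b: int(b.split("-")[5]))

# Two aliquots differing only in plate number:
aliquots = [
    "TCGA-06-0211-01A-01D-0269-01",
    "TCGA-06-0211-01A-01D-0448-01",
]
print(pick_aliquot(aliquots))  # TCGA-06-0211-01A-01D-0448-01
```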
Q: The DCC site shown above is protected; how do I obtain TCGA access credentials?
A: Begin by visiting the TCGA data access page.
Q: What reference genome build are you using?
A: We match the reference genome used in our analyses to the reference used to generate the data. Our understanding is that TCGA standards stipulate that OV, COAD/READ, and LAML data are hg18, and everything else is hg19. One caveat: SNP6 copy number data are available in both hg18 and hg19 for all tumor cohorts, so we use hg19 for copy number analyses in all cases.
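The build-selection rule above can be summarized in a few lines; a sketch of our understanding (cohort abbreviations as in TCGA):

```python
# Cohorts whose TCGA data were generated against hg18, per the rule above:
HG18_COHORTS = {"OV", "COAD", "READ", "LAML"}

def reference_build(cohort, data_type):
    """hg18 for the legacy cohorts, hg19 otherwise -- except that SNP6
    copy number analyses always use hg19, since those data are available
    in both builds for all cohorts."""
    if data_type == "copy_number":
        return "hg19"
    return "hg18" if cohort in HG18_COHORTS else "hg19"

print(reference_build("OV", "mutation"))     # hg18
print(reference_build("OV", "copy_number"))  # hg19
```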
Q: How are the copy number data generated, and what do their file names mean?
A: This is discussed in the application note posted here: http://www.broadinstitute.org/cancer/cga/copynumber_pipeline. Note that the 'minus_germline' (or 'nocnv') segment files refer to whether the steps in section 2.3 are applied; the steps in section 2.4 are applied regardless.
Q: What centers are responsible for sequencing XYZ tumor?
Q: How do I add my code to your Firehose pipeline?
Q: But where do I get Firehose data to test my module?
A: This is described above.
Q: Your results archives have long and complicated names, what do they mean?
A: Each pipeline we execute results in a set of 6 archive files being submitted to the DCC: primary results in the Level_* archive; auxiliary data (e.g. debugging information) in the aux archive; tracking information in the mage-tab archive; and an MD5 checksum file for each. In most cases you will only need the primary results in the Level_* archives.
Q: What is the difference between RPKM and RSEM mRNASeq data?
A: RPKM and RSEM are different methods for estimating expression levels from mRNASeq data. RPKM (Reads Per Kilobase per Million mapped reads) is described in a paper by Mortazavi, Williams, McCue, Schaeffer & Wold titled Mapping and quantifying mammalian transcriptomes by RNA-Seq. RSEM (RNA-Seq by Expectation-Maximization) is considered by many to be a better estimation method and, if available, RSEM data is preferentially used in our downstream analyses. It is described in a paper by Bo Li & Colin Dewey titled RSEM: accurate transcript quantification from RNA-Seq data with or without a reference genome.
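The RPKM normalization is simple enough to illustrate with made-up numbers (RSEM's expectation-maximization procedure is more involved and best left to the RSEM software itself):

```python
def rpkm(gene_reads, gene_length_bp, total_mapped_reads):
    """Reads Per Kilobase per Million mapped reads (Mortazavi et al.):
    reads on the gene, scaled by gene length in kb and library size in
    millions of mapped reads."""
    return gene_reads * 1e9 / (gene_length_bp * total_mapped_reads)

# Illustrative: 500 reads on a 2 kb gene, 20 million mapped reads total.
print(rpkm(500, 2000, 20_000_000))  # 12.5
```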
Q: How do I map mRNA isoform IDs to genes?
A: We do not provide a mapping table for this, because in TCGA these data are generated by the University of North Carolina. However, for internal analyses we and others frequently use the UCSC Table Browser, e.g. as described in this BioStars recipe.
Q: What can you tell me about the RPPA data?
A: RPPA stands for "reverse phase protein array"; these data are generated by the M.D. Anderson Cancer Center as described here. MDACC also hosts the TCPA portal, which serves clean, batch-corrected RPPA data that may be preferable for your analysis to the uncorrected data deposited directly to the TCGA data coordination center.
Q: I have a question about sequencing data generated by the Broad Institute, are you the right group to ask?