The GTEx methods debate

This week, a dialogue erupted around the Genotype-Tissue Expression (GTEx) Consortium and its methods for analyzing RNA-Seq data. Tracking the debate will take you through Twitter threads, into blog posts, down comments sections, past PubMed entries, and over Nature’s login wall.

The scale of GTEx’s work (by any measure you pick, whether it’s the sample size of 1,800 [1], the more than $21 million in government funding [2], or something else) makes it easy to care about how that work is being done, and in turn about the controversy. And the stature of those in the debate (Manolis Dermitzakis, Lior Pachter, and Steven Salzberg, among others) makes it almost impossible to ignore.

We’re collecting the highlights here and will update this post as the debate develops.

Lior Pachter’s original post

It started with an October 21st blog post from Lior Pachter with the provocative title “GTEx is throwing away 90% of their data.” Pachter introduces the GTEx Consortium as the NIH’s “RNA-Seq tour de force” before criticizing its software choices for its vast dataset. The Flux Capacitor, he claims, is poorly documented and not up to community standards. He characterizes the published description of the software this way:

The methods description in the Online Methods of Montgomery et al. can only be (politely) described as word salad.

Ouch. He then presents an analysis of simulated data to argue that 90% of GTEx’s raw data is being tossed in the bin.

Simply Statistics weighs in

Biostatistics Prof. Jeff Leek weighs the criticisms of GTEx in a post at Simply Statistics. He agrees that the consortium’s method is neither sufficiently documented nor “community-approved.” But he cautions against overblown claims, and he is similarly wary of the “back-of-the-envelope” simulation Pachter performed to support his critique.

Lost in all of the publicity about the 90% number is that Pachter’s blog post hasn’t been vetted, either.

The GTEx Consortium responds

On October 31st, Lior Pachter hosted a response from the GTEx consortium on his blog. They defend Flux Capacitor’s merits and point to the tool’s Web documentation. They also address the simulation-based portion of Pachter’s argument by making the basic point that requiring ten times as much data to get the same quality results is not the same as discarding 90% of the data. There are also interesting notes on why Flux Capacitor was used instead of, say, Cufflinks:

Initially we used Cufflinks (CL), which is the most commonly used tool in the field for quantifying isoforms. However, when using it at large scale (1000s of samples) we hit technical problems of large memory use and long compute times. We attempted to overcome these difficulties, and investigated the possibility of parallelizing CL and contacted the CL developers for help. However, the developers advised us that CL could not be parallelized at that point. Due to project timelines, we started investigating alternative methods.

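A toy numerical example may help with the consortium’s distinction between “needs ten times the data for the same quality” and “discards 90% of the data.” The sketch below is ours, not GTEx’s or Pachter’s, and the binomial setup and numbers are purely illustrative: a method that extracts only a tenth of the information per read needs roughly ten times as many reads to match a more efficient method’s precision, but it still uses every read it is given.

    import numpy as np

    rng = np.random.default_rng(0)
    true_fraction = 0.3      # hypothetical fraction of reads coming from one isoform
    n_trials = 2_000         # number of simulated experiments per setting

    def stderr_of_estimate(n_reads, efficiency):
        # The estimator behaves as if it extracted only `efficiency` of the
        # information in the reads (purely illustrative; not Flux Capacitor's model).
        effective_n = int(n_reads * efficiency)
        estimates = rng.binomial(effective_n, true_fraction, size=n_trials) / effective_n
        return estimates.std()

    print(stderr_of_estimate(10_000, efficiency=1.0))    # efficient method, n reads
    print(stderr_of_estimate(10_000, efficiency=0.1))    # 10x less efficient, n reads
    print(stderr_of_estimate(100_000, efficiency=0.1))   # same method, 10x reads: parity again

Whether that kind of efficiency loss is acceptable for a project at GTEx’s scale is, of course, exactly what the debate is about.
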
The debate continues

From here it gets pretty tangled:

  • Pachter’s collaborator Steven Salzberg joins the conversation in the comments section, amplifying Pachter’s claims about the inferiority of Flux Capacitor and the danger of using less-tested tools in an environment where results are difficult to replicate.
  • Nicholas Bray soon adds a long comment that could itself have been a worthy blog post, not only criticizing Flux Capacitor’s documentation but also offering general reflections on bioinformatics simulations and models in light of Flux Capacitor’s (perhaps curious) use of the Weibull distribution to model fragment sizes (see the sketch after this list).
  • NYGC bioinformatician Nicolas Robine chimes in on Twitter, and we’re pretty sure he’s taking aim at a recent paper (Kim et al. 2013) from Salzberg’s group, whose simulations support the claim that TopHat2 outperforms another community favorite (STAR).
  • A comment from Heng Li caps off a digression on computational resources (“If RAM is the problem here, the Broad is not the best place to run Cufflinks”) and what computational feats the community can reasonably expect of the consortium (“I entirely agree that it is important to run Cufflinks or another popular pipeline/tool”).

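On the Weibull point from Bray’s comment: we’re not in a position to adjudicate whether it is a sensible model for fragment lengths, but for readers who want to poke at the question themselves, here is a small illustrative sketch (our invented data and parameters, nothing taken from Flux Capacitor or GTEx) of fitting a Weibull model and a normal model to the same fragment-length sample and comparing them by log-likelihood:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Stand-in "observed" fragment lengths in base pairs; invented for the example.
    observed = rng.normal(loc=250, scale=40, size=5_000).clip(min=50)

    # Fit a Weibull (location fixed at 0) and a normal model to the same lengths.
    shape, _, scale = stats.weibull_min.fit(observed, floc=0)
    mean, sd = stats.norm.fit(observed)

    # Compare the fits by total log-likelihood (higher is better).
    ll_weibull = stats.weibull_min.logpdf(observed, shape, 0, scale).sum()
    ll_normal = stats.norm.logpdf(observed, mean, sd).sum()

    print(f"Weibull: shape={shape:.2f}, scale={scale:.1f}, logL={ll_weibull:.0f}")
    print(f"Normal:  mean={mean:.1f}, sd={sd:.1f}, logL={ll_normal:.0f}")
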
We expect more worthwhile commentary to follow, and we’ll link to it here as it appears. Until then, it’s worth noting a big-picture point of interest: we are seeing scientific debate, from top experts and about cutting-edge subjects, carried out in public on blogs and social media.

Notes

  1. Matthews, S. (2013). Gene expression database scales up, providing baseline data. Nature Medicine. doi:10.1038/nm0713-799.
  2. We estimated this total for the first round of GTEx funding from the 2010 NIH press release, “NIH launches Genotype-Tissue Expression project.”