Total internal and external lengths of the Bolthausen-Sznitman coalescent

Götz Kersting, Juan Carlos Pardo, Arno Siri-Jégousse
(Submitted on 6 Feb 2013)

In this paper, we study a weak law of large numbers for the total internal length of the Bolthausen-Sznitman coalescent. As a consequence, we obtain the weak limit law of the centered and rescaled total external length. The latter extends results obtained by Dhersin & Möhle (2012). An application to population genetics dealing with the total number of mutations in the genealogical tree is also given.

Integrating genealogical and dynamical modelling to infer escape and reversion rates in HIV epitopes

Duncan Palmer, Angela McLean, Gil McVean
(Submitted on 5 Feb 2013)

The rates of escape and reversion in response to selection pressure arising from the host immune system, notably the cytotoxic T-lymphocyte (CTL) response, are key factors determining the evolution of HIV. Existing methods for estimating these parameters from cross-sectional population data using ordinary differential equations (ODEs) ignore information about the genealogy of sampled HIV sequences, which has the potential to cause systematic bias and to overestimate certainty. Here, we describe an integrated approach, validated through extensive simulations, which combines genealogical inference and epidemiological modelling to estimate rates of CTL escape and reversion in HIV epitopes. We show that there is substantial uncertainty about rates of viral escape and reversion from cross-sectional data, which arises from the inherent stochasticity in the evolutionary process. By application to empirical data, we find that point estimates of rates from a previously published ODE model and the integrated approach presented here are often similar, but can also differ several-fold depending on the structure of the genealogy. The model-based approach we apply provides a framework for the statistical analysis of escape and reversion in population data and highlights the need for longitudinal and denser cross-sectional sampling to enable accurate estimation of these key parameters.

Genetic draft, selective interference, and population genetics of rapid adaptation

Richard A. Neher
(Submitted on 5 Feb 2013)

To learn about the past from a sample of genomic sequences, one needs to understand how evolutionary processes shape genetic diversity. Most population genetic inference is based on frameworks assuming adaptive evolution is rare. But if positive selection operates on many loci simultaneously, as has recently been suggested for many species including animals such as flies, a different approach is necessary. In this review, I discuss recent progress in characterizing and understanding evolution in rapidly adapting populations where random associations of mutations with genetic backgrounds of different fitness, i.e., genetic draft, dominate over genetic drift. As a result, neutral genetic diversity depends weakly on population size, but strongly on the rate of adaptation or more generally the variance in fitness. Coalescent processes with multiple mergers, rather than Kingman’s coalescent, are appropriate genealogical models for rapidly adapting populations with important implications for population genetic inference.

Identifying Signatures of Selection in Genetic Time Series

Alison Feder, Sergey Kryazhimskiy, Joshua B. Plotkin
(Submitted on 3 Feb 2013)

We develop a rigorous test for natural selection based on allele frequencies sampled from a population over multiple time points. We demonstrate that the standard method of estimating selection coefficients in this setting, and the associated chi-squared likelihood-ratio test of neutrality, is biased and therefore does not provide a reliable test of selection. We introduce two methods to correct this bias, and we demonstrate that the new methods have power to detect selection in practical parameter regimes, such as those encountered in fitness assays of microbial populations. Our analysis is limited to a single diallelic locus, assumed independent of all other loci in a genome, which is again relevant to simple competition assays of laboratory and natural isolates; other techniques will be required to detect selection in time series of co-segregating, linked loci.
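As a sketch of the setting the paper studies (a single diallelic locus sampled over time), not of the paper's corrected estimators, the simulation below uses a Wright-Fisher model with selection and the standard naive estimate of s: the slope of logit allele frequency over time. The population size, selection coefficient, and starting frequency are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def wf_trajectory(n, p0, s, generations):
    """Simulate allele frequencies at a diallelic locus under
    Wright-Fisher reproduction with selection coefficient s."""
    p, traj = p0, [p0]
    for _ in range(generations):
        p_sel = p * (1 + s) / (1 + s * p)  # deterministic selection step
        p = rng.binomial(n, p_sel) / n     # binomial sampling (drift) step
        traj.append(p)
    return np.array(traj)

def estimate_s(traj):
    """Naive estimate of s: the slope of logit(p) over time.
    Selection adds log(1 + s) ~ s to the logit each generation."""
    t = np.arange(len(traj))
    keep = (traj > 0) & (traj < 1)         # logit undefined at 0 and 1
    logit = np.log(traj[keep] / (1 - traj[keep]))
    return np.polyfit(t[keep], logit, 1)[0]

traj = wf_trajectory(n=10_000, p0=0.1, s=0.05, generations=50)
print(estimate_s(traj))  # close to log(1.05) ~ 0.049 for a large population
```

In small populations, drift makes this slope estimate noisy and, as the paper argues, the naive approach becomes unreliable.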

Most viewed on Haldane’s Sieve: January 2013

The most viewed preprints on Haldane’s Sieve in January 2013 were:

Equitability, mutual information, and the maximal information coefficient

Justin B. Kinney, Gurinder S. Atwal
(Submitted on 31 Jan 2013)

Reshef et al. recently proposed a new statistical measure, the “maximal information coefficient” (MIC), for quantifying arbitrary dependencies between pairs of stochastic quantities. MIC is based on mutual information, a fundamental quantity in information theory that is widely understood to serve this need. MIC, however, is not an estimate of mutual information. Indeed, it was claimed that MIC possesses a desirable mathematical property called “equitability” that mutual information lacks. This was not proven; instead it was argued solely through the analysis of simulated data. Here we show that this claim, in fact, is incorrect. First we offer mathematical proof that no (non-trivial) dependence measure satisfies the definition of equitability proposed by Reshef et al. We then propose a self-consistent and more general definition of equitability that follows naturally from the Data Processing Inequality. Mutual information satisfies this new definition of equitability while MIC does not. Finally, we show that the simulation evidence offered by Reshef et al. was artifactual. We conclude that estimating mutual information is not only practical for many real-world applications, but also provides a natural solution to the problem of quantifying associations in large data sets.
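For readers wanting to experiment, mutual information can be estimated from samples with a simple plug-in (2D histogram) estimator. This naive sketch is only illustrative; it is not one of the estimators analyzed in either paper, and the bin count and test data are arbitrary choices.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Plug-in estimate of mutual information (in bits) from a 2D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x (column vector)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y (row vector)
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
print(mutual_information(x, x))                        # large: deterministic dependence
print(mutual_information(x, rng.normal(size=10_000)))  # near zero: independence
```

Note the small upward bias on independent data: finite sampling with many bins inflates the plug-in estimate, which is part of why careful estimator design matters in this debate.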

Equitability Analysis of the Maximal Information Coefficient, with Comparisons

David Reshef (1), Yakir Reshef (1), Michael Mitzenmacher (2), Pardis Sabeti (2) (1, 2 – contributed equally)
(Submitted on 27 Jan 2013)

A measure of dependence is said to be equitable if it gives similar scores to equally noisy relationships of different types. Equitability is important in data exploration when the goal is to identify a relatively small set of strongest associations within a dataset as opposed to finding as many non-zero associations as possible, which often are too many to sift through. Thus an equitable statistic, such as the maximal information coefficient (MIC), can be useful for analyzing high-dimensional data sets. Here, we explore both equitability and the properties of MIC, and discuss several aspects of the theory and practice of MIC. We begin by presenting an intuition behind the equitability of MIC through the exploration of the maximization and normalization steps in its definition. We then examine the speed and optimality of the approximation algorithm used to compute MIC, and suggest some directions for improving both. Finally, we demonstrate in a range of noise models and sample sizes that MIC is more equitable than natural alternatives, such as mutual information estimation and distance correlation.

The infinitely many genes model with horizontal gene transfer

Franz Baumdicker, Peter Pfaffelhuber
(Submitted on 28 Jan 2013)

The genome of bacterial species is much more flexible than that of eukaryotes. Moreover, the distributed genome hypothesis for bacteria states that the total number of genes present in a bacterial population is greater than the number of genes carried by any single individual. The pangenome, i.e. the set of all genes of a bacterial species (or a sample), comprises the core genes, which are present in all living individuals, and accessory genes, which are carried only by some individuals. To make accessory genes available for adaptation to environmental forces, genes can be transferred horizontally between individuals. Here, we extend the infinitely many genes model from Baumdicker, Hess and Pfaffelhuber (2010) to include horizontal gene transfer. We take a genealogical view and give a construction — called the Ancestral Gene Transfer Graph — of the joint genealogy of all genes in the pangenome. As an application, we compute moments of several statistics (e.g. the number of differences between two individuals and the gene frequency spectrum) under the infinitely many genes model with horizontal gene transfer.

Our paper: Assemblathon 2 and pizza

Our next guest post is by Keith Bradnam (@kbradnam) on the Assemblathon (@assemblathon) paper: Assemblathon 2: evaluating de novo methods of genome assembly in three vertebrate species. arXived here.

Making pizzas and genome assemblies

In Davis, California there are 18 different establishments that predominantly sell pizzas and I often muse on the important issue of ‘who makes the best pizza?’. It’s a question that is deceptive in its simplicity, but there are many subtleties that lie behind it, most notably: what do we mean by best? The best quality pizza probably. But does quality refer to the best ingredients, the best pizza chef, or the best overall flavor? There are many other pizza-related metrics that could be combined into an equation to decide who makes the best pizza. Such an equation has to factor in the price, size, choice of toppings, quality (however we decide to measure it), ease of ordering, average time to deliver etc.

Even then, such an equation might have to assume that your needs reflect the typical needs of an average pizza consumer. But what if you have special needs (e.g., you can’t eat gluten) or you have certain constraints (you need a 131 foot pizza to go)? Hopefully, it is clear that the notion of a ‘best’ pizza is highly subjective and the best pizza for one person is almost certainly not going to be the best pizza for someone else.

What is true for ‘making pizzas’ is also largely true for ‘making genome assemblies’. There are probably as many genome assemblers out there as there are pizza establishments in Davis, and people clearly want to know which one is the best. But how do you measure the ‘best’ genome assembly? Many published genome sequences result from a single assembly of next-generation sequencing (NGS) data using a single piece of assembly software. Could you make a better assembly by using different software? Could you make a better assembly just from tweaking the settings of the same software? It is hard to know, and often costly — at least in terms of time and resources — to find out.

That’s where the Assemblathon comes in. The Assemblathon is a contest designed to put a wide range of genome assemblers through their paces; different teams are invited to attempt to assemble the same genome sequence, and we can hopefully point out the notable differences that can arise in the resulting assemblies. Assemblathon 1 used synthetic NGS data with teams trying to reconstruct a small (~100 Mb) synthetic genome, i.e. a genome for which we knew what the true answer should look like. For Assemblathon 2 — manuscript now available on arxiv.org — we upped the stakes and made NGS data available for three large vertebrate genomes (a bird, a fish, and a snake). Teams were invited to assemble any or all of the genomes. Teams were also free to use as much or as little of the NGS data as they liked. For the bird species (a budgerigar), the situation was further complicated by the fact that the NGS data comprised reads from three different platforms (Illumina, Roche 454, and Pacific Biosciences). In total we received 43 assemblies from 21 participating teams.

How did we try to make sense of all of these entries, especially when we would never know what the correct answer was for each genome? We were helped by having optical maps for each species which could be compared to the scaffolds in each genome assembly. We also had some Fosmid sequences for the bird and snake which helped provide a small set of ‘trusted’ reference sequences. In addition to these experimental datasets we tried employing various statistical methods to assess the quality of the assemblies (such as calculating metrics like the frequently used N50 measure). In the end, we filled a spreadsheet with over 100 different measures for each assembly (many of them related to each other).
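For readers unfamiliar with it, N50 is the length such that contigs (or scaffolds) of that length or longer together cover at least half of the total assembly length. A minimal sketch (the toy contig lengths are invented):

```python
def n50(lengths):
    """N50: the smallest length among the largest contigs that
    together cover at least half of the total assembly length."""
    total = sum(lengths)
    covered = 0
    for length in sorted(lengths, reverse=True):
        covered += length
        if 2 * covered >= total:
            return length
    return 0

print(n50([100, 80, 60, 40, 20]))  # 80, since 100 + 80 = 180 >= 300 / 2
```

N50 rewards long contigs regardless of their correctness, which is one reason a single metric like this cannot settle the ‘best assembly’ question on its own.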

From this unwieldy dataset we chose ten key metrics, measures that largely reflected different aspects of an assembly’s quality. Analysis of these key metrics led to two main conclusions — which some may find disappointing:

  1. Assembly quality can vary a lot depending on which metrics you focus on; we found many assemblies that excelled in some of the key metrics, but fared poorly when judged by others.
  2. Assemblers that tended to score well — when averaged across the 10 key metrics — in one species, did not always perform as well when assembling the genome of another species.

With respect to the second point, it is important to point out that the genomes of the three species differed with regard to size, repeat content, and heterozygosity. It is perhaps equally important to point out that the NGS data provided for each species differed in terms of insert sizes, read lengths, and abundance. Thus it is hard to ascertain whether inter-species differences in the quality of the assemblies were chiefly influenced by differences in the underlying genomes, by the properties of the NGS data that were available, or by a combination of both factors. Further complicating the picture is that not all teams attempted to assemble all three genomes; so in terms of assessing the general usefulness of assembly software, we could only look at the smaller number of teams that submitted entries for two or more species.

In many ways, this manuscript represents some very early, and tentative, steps into the world of comparative genome assembler assessments. Much more work needs to be done, and perhaps many more Assemblathons need to be run, if we are to best understand what makes a genome assembly a good assembly. Assemblathons are not the only game in town, however, and other efforts like dnGASP and GAGE are important too. It is also good to see that others are leveraging the Assemblathon datasets (the first published analysis of Assemblathon 2 data was not by us!).

So while I can give an answer to the question ‘what is the best genome assembler?’, the answer is probably not going to be to your liking. With our current knowledge, we can say that the best genome assembler is the one that:

  1. you have the expertise to install and run
  2. your computing infrastructure (CPU & RAM) can support
  3. you have sufficient time to run
  4. is designed to work with the specific mix of NGS data that you have generated
  5. best addresses what you want to get out of a genome assembly (bigger overall assembly, more genes, most accuracy, longer scaffolds, most resolution of haplotypes, greatest tolerance of repeats, etc.)

Just as it might be hard to find somewhere that sells an inexpensive gluten-free, vegan pizza that’s made with fresh ingredients, has lots of toppings and can be quickly delivered to you at 4:00 am, it may be equally hard to find a genome assembler that ticks all of the boxes that you are interested in. For now at least, it seems that you can’t have your cake — or pizza — and eat it.

Our paper: An experimental test for genetic constraints in Drosophila melanogaster

Our next guest post is by Ian Dworkin (@IanDworkin) on his paper (along with coauthors) An experimental test for genetic constraints in Drosophila melanogaster.

We have recently posted a (heavily revised) manuscript to arXiv detailing how we used the fruit fly Drosophila melanogaster (you can read here about why these little flies are so wonderful) to test a particular hypothesis about a genetic constraint, and more generally how our knowledge of development may inform us about the structure of the genetic variance-covariance matrix, G. We also developed a really cool set of statistical models that evaluated our explicit hypotheses (more on that right at the end of the post)!

As a quick reminder (or introduction), G summarizes both how much genetic variation particular traits have and how much traits co-vary genetically. This covariation can be due to “pleiotropy”, which is a fancy word for when a gene (or a mutation in that gene) influences more than one trait (i.e., a mutation might influence both your eye colour and your hair colour). These traits can also covary when two or more alleles (each influencing different traits) are physically close to each other (linked) and recombination has not had enough time to break these combinations apart. I highly recommend Jeff Conner’s recent review in Evolution for a nice treatment of these (and other) concepts related to some issues I discuss below.

Evolutionary biology, in particular evolutionary quantitative genetics, thinks a lot about the G matrix and how it interacts with natural selection (or drift) to generate evolutionary change. This is summarized by the now famous equation linking the change in trait means (Δz̄) to both genetic variation (and covariation) and the strength of natural selection (usually measured as a so-called selection gradient, β). This is the multivariate (more than one trait) version of the breeder’s equation (made famous by the seminal work of R. Lande).

Δz̄=Gβ


Why do we care so much about this little equation? It encapsulates many pretty heady ideas. First and foremost, you cannot have evolutionary change without genetic variation. That’s right, natural selection by itself is not enough. You can have very strong selection for traits (such as running speed) to survive better with a predator around, but if there is no heritable variation for running speed, no (evolutionary) change will happen in the following generations (and good luck with that tiger coming your way). However, once you have to consider multiple traits (running speed, endurance and hearing), we have to think about whether there is available genetic variation for combinations of traits, and whether these are “oriented” in a similar direction to natural selection. If not, evolutionary change may be slowed considerably (even if each trait seems to have lots of heritable variation). Of course, if the genetic variation for all of these traits is pointing in the same direction as selection, then evolution may proceed very quickly indeed! The ideas get more interesting and complex from there, but they are not the focus of this discussion (the paper above by Jeff Conner, and this great review by Katrina McGuigan, are definitely worth reading for more on this).
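How the orientation of G relative to selection shapes the response can be made concrete numerically. In this sketch, the two-trait G matrix and the selection gradients are invented purely for illustration:

```python
import numpy as np

# Hypothetical G for two traits with strong positive genetic covariance.
G = np.array([[1.0, 0.9],
              [0.9, 1.0]])

# Selection gradients: one aligned with the covariance, one against it.
beta_along = np.array([1.0, 1.0])     # favors increasing both traits
beta_against = np.array([1.0, -1.0])  # favors increasing one, decreasing the other

# Multivariate breeder's equation: delta_zbar = G @ beta
print(G @ beta_along)    # [1.9, 1.9]: response amplified by covariance
print(G @ beta_against)  # [0.1, -0.1]: response strongly constrained
```

Each trait has the same genetic variance (1.0) in both cases; only the alignment between β and the major axis of G changes, and the response differs nearly twenty-fold.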

In any case, much thought has been given to how this G matrix can change, both by natural selection and by other factors such as new mutation. Depending on how G changes, future evolutionary potential might change, which is pretty cool if you think about it! How might G change then? These are important questions, because while we can estimate what G looks like, and how it might change (in particular due to natural selection), it is much harder to know what it will look like far in the future, making our ability to predict long-term evolutionary change more difficult.

So what might help us predict G? One idea is that our knowledge of developmental biology will help us understand the effects of mutations, and thus G. If so, developmental biology could be a particularly powerful way of predicting the potential for evolutionary change, or the lack thereof (a so-called developmental constraint).

To test this idea, I decided to use a homeotic mutation. Homeosis is the term used for when one structure (like an arm) is transformed (during development) into another (related) structure, like a leg. In fruit flies, homeotic mutations are the stuff of legend (and Nobel Prizes), in particular for the wonderful cases of the poor critters growing legs (instead of antennae) out of their heads, or four-winged flies. You can see wonderful examples of mutations causing such homeotic changes in flies and other critters here.

In our case we used a much weaker and subtler homeotic mutation, Ubx1, which causes slight, largely quantitative changes. For example, with this mutation, the third set of legs on the fly would be expected to resemble (in terms of the lengths of the different parts of the leg) the second set of legs (flies, like all insects, have three pairs of legs as adults). We wanted to know: when we changed the third legs to look like second legs, would the G for the transformed third leg look like that of a normal third leg or a normal second leg? Thus we were trying to predict changes in G based on what we know (a priori) of development and genetics in the fruit fly.
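The paper develops its own explicit statistical models for this comparison; purely as an illustration of one common generic approach, here is a sketch of the “random skewers” method, which applies many random selection gradients to two G matrices and correlates their predicted responses. The matrices below are invented, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_skewers(G1, G2, n=1000):
    """Average vector correlation of the responses (G @ beta) of two
    G matrices to the same random, unit-length selection gradients."""
    corrs = []
    for _ in range(n):
        beta = rng.normal(size=G1.shape[0])
        beta /= np.linalg.norm(beta)               # random unit "skewer"
        r1, r2 = G1 @ beta, G2 @ beta              # predicted responses
        corrs.append(r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2)))
    return float(np.mean(corrs))

G_a = np.array([[1.0, 0.5], [0.5, 1.0]])
G_b = np.array([[1.0, 0.4], [0.4, 0.9]])    # similar covariance structure
G_c = np.array([[1.0, -0.8], [-0.8, 1.0]])  # very different structure

print(random_skewers(G_a, G_b))  # near 1: similar matrices
print(random_skewers(G_a, G_c))  # lower: dissimilar matrices
```

Two matrices that respond alike to every selection gradient are, for evolutionary purposes, effectively the same G; that is the intuition behind asking whether the transformed leg’s G matches a second or a third leg.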

So what did we find? The most important points are summarized in figure 2 and table 3 (if you want to check out the paper, that is). The TL;DR version is this: yes, the legs transformed homeotically as we expected, but the G of the mutant legs did not really change very much from that of a normal third leg. In other words, our knowledge of development did not really help us much in understanding changes in G. There are a few reasons why (which we explain in the paper), but I think that it is an interesting punchline, and I will leave it up to you to decide what it means (and whether our experiment, analysis and interpretation are reasonable and logically consistent).

I also really want to give a shout out to one of the co-authors (JH) who developed the particular statistical model that we ended up using. He developed a set of explicit models that really helped us test our specific hypotheses directly with the data and experimental design at hand. This is sadly rarely done with statistics, so it is worth reading just for that! I really think (hope?) that this combination of approaches can be very useful for evolutionary genetics. Let me know what you think!