This guest post is by Anand Bhaskar and Yun Song on their paper: “The identifiability of piecewise demographic models from the sample frequency spectrum”. arXived here.
With the advent of high-throughput sequencing technologies, it has been of great interest to use genomic data to understand human demographic history. For example, we now estimate that modern humans migrated out of Africa around 60K-120K years ago [1,2], and that Neandertals may have admixed with modern humans in Europe as recently as 47,000 years ago [3]. Apart from satisfying curiosity about our anthropological history, the inference of demography is important for several scientific reasons. Most importantly, demographic processes influence genetic variation, and understanding the interplay between natural selection, genetic drift, and demography is a key question in population genetics. Also, controlling for demography is important for practical applications. For example, the demography inferred from neutrally evolving genomic regions can serve as a null model when searching for regions under selection. Demographic models could also be used to circumvent the problem of spurious associations in case-control studies induced by population substructure.
A summary of whole haplotypes that is commonly used in population genetic analyses is the sample frequency spectrum (SFS). For a sample of n haplotypes from a panmictic (i.e. without substructure) population, the SFS is an (n-1)-dimensional vector where the i-th entry is the proportion of SNPs with i copies of the mutant allele in the sample. One can talk about a mutant/derived allele because most analyses assume that mutations are rare enough that the observed SNPs are dimorphic. The first few entries of the SFS capture the proportion of rare SNPs in the sample and are especially useful for inferring recent population history. Several recent large sample sequencing studies [4-6] have found that humans have many more putatively neutral rare SNPs compared to predictions from a constant population size model. Using the SFS from their data, these studies all infer demographic models with recent exponential population expansion.
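To make the definition concrete, here is a minimal sketch of computing the SFS from a 0/1 genotype matrix (the function name and toy data are our own illustration, not code from the paper):

```python
import numpy as np

def sample_frequency_spectrum(genotypes):
    """Compute the normalized SFS from a SNPs-by-haplotypes 0/1 matrix.

    Each row is one dimorphic SNP; 1 marks the derived (mutant) allele.
    Returns an (n-1)-vector whose i-th entry (i = 1..n-1) is the
    proportion of segregating sites with i copies of the derived allele.
    """
    genotypes = np.asarray(genotypes)
    n = genotypes.shape[1]                       # number of haplotypes
    counts = genotypes.sum(axis=1)               # derived-allele count per SNP
    counts = counts[(counts > 0) & (counts < n)] # keep segregating sites only
    sfs = np.bincount(counts, minlength=n)[1:n]  # bins for i = 1..n-1
    return sfs / sfs.sum()

# Toy example: n = 4 haplotypes, 3 SNPs; each frequency class appears once,
# so each of the three SFS entries is 1/3.
G = [[1, 0, 0, 0],
     [1, 1, 0, 0],
     [1, 1, 1, 0]]
print(sample_frequency_spectrum(G))
```

The first entry of the returned vector is the singleton proportion, which is the part of the SFS most sensitive to recent population growth.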
However, until fairly recently, it was not known whether the SFS of a sample uniquely determines the underlying demographic model. Could it be possible that two different demographic models produce the exact same expected SFS for all sample sizes? In 2008, Simon Myers, Charles Fefferman, and Nick Patterson came up with an elegant mathematical argument [7] to show that there are infinitely many population size histories that generate the same expected SFS for all sample sizes. They even provided an explicit example of a population size history which produced the same expected SFS as a constant population size model. However, their example history had increasingly rapid oscillations in the population size in the recent past, something that we might not expect to find in real biological populations. After all, even though we commonly use continuous-time models of evolution like coalescent theory and diffusion processes, biological populations evolve in discrete events of birth and death.
Our research group has been working on demographic inference from the SFS and from full sequence data for the last several years, and so it was natural for us to ask whether the class of population size histories that are commonly inferred using statistical algorithms might also suffer from this non-identifiability problem. Most statistical methods infer piecewise population size histories, where the pieces come from some biologically-motivated family of functions. In particular, piecewise constant and piecewise exponential models commonly appear in the literature. And if one can indeed uniquely identify piecewise demographic models from the SFS, what sample sizes are needed to do so?
In our paper, we address this question by proving that if the underlying population size function is piecewise with at most K pieces, then the expected SFS of a random sample of size n uniquely determines the demography as long as n is larger than some function of K that depends on the type of pieces of the population size function. For example, if the underlying demographic model was piecewise constant with at most K pieces (i.e. described by at most 2K – 1 parameters), then the expected SFS of a sample of size 2K uniquely determines the demographic model. In other words, no two piecewise constant population size functions with at most K pieces can generate the same expected SFS for a sample of size 2K or larger. For piecewise exponential demographic models with at most K pieces, a sample size of 4K – 1 is sufficient to uniquely determine the demographic model. When one doesn’t know which allele is ancestral and which is derived (for example, if outgroup information is lacking at the relevant SNPs), demographic analysis can still be carried out using the SFS by “folding” it. The folded SFS has floor(n/2) entries, where the i-th entry is the proportion of SNPs with i copies of the minor allele (which might be an ancestral or derived allele). Since the folded SFS has only roughly half the dimension as the full SFS, one might expect to require twice as many samples to uniquely determine the demographic model from the folded SFS compared to the full SFS. We formally prove in our paper that this intuition is indeed correct.
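The folding operation described above is simple to write down: entries i and n-i of the full SFS are merged, except the middle entry when n is even. A minimal sketch (our own illustration, with made-up numbers):

```python
import numpy as np

def fold_sfs(sfs):
    """Fold a full SFS (entries for derived-allele counts i = 1..n-1)
    into a minor-allele SFS with floor(n/2) entries.

    Entry i of the folded SFS is the proportion of SNPs where the minor
    allele appears i times, obtained by merging entries i and n-i of the
    full SFS (the middle bin i = n/2 is not doubled when n is even).
    """
    sfs = np.asarray(sfs, dtype=float)
    n = len(sfs) + 1                     # sample size
    folded = np.zeros(n // 2)
    for i in range(1, n // 2 + 1):
        if i == n - i:                   # middle bin when n is even
            folded[i - 1] = sfs[i - 1]
        else:
            folded[i - 1] = sfs[i - 1] + sfs[n - i - 1]
    return folded

# n = 4: the full SFS has 3 entries, the folded SFS has floor(4/2) = 2.
# Entries 1 and 3 merge (0.6 + 0.1 = 0.7); entry 2 is the middle bin (0.3).
print(fold_sfs([0.6, 0.3, 0.1]))
```

Because the folded SFS has only floor(n/2) entries rather than n-1, it carries roughly half the information, which is the intuition behind the doubled sample size requirement proved in the paper.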
It is important to stress that this identifiability result is statistical rather than algorithmic, in that one would need to have perfect information about the expected SFS of a random sample in order to uniquely determine the underlying piecewise demography. In practice, one can get good estimates of the expected SFS by considering a large number of SNPs in the inference procedure, and by considering SNPs that are farther apart along the chromosomes so that the coalescent trees for the sample at different SNPs will be roughly independent of each other. More work is certainly needed to understand how much genomic data (measured both in terms of the number of SNPs and the sample size) would be needed in practice to robustly infer realistic demographic models.
[1] Li, H. and Durbin, R. (2011) Inference of human population history from individual whole-genome sequences. Nature 475, 493–496.
[2] Scally, A. and Durbin, R. (2012) Revising the human mutation rate: implications for understanding human evolution. Nature Reviews Genetics 13(10), 745–753.
[3] Sankararaman, S., Patterson, N., Li, H., Pääbo, S., and Reich, D. (2012) The date of interbreeding between Neandertals and modern humans. PLoS Genetics 8, e1002947.
[4] Nelson, M. R., et al. (2012) An abundance of rare functional variants in 202 drug target genes sequenced in 14,002 people. Science 337, 100–104.
[5] Tennessen, J. A., et al. (2012) Evolution and functional impact of rare coding variation from deep sequencing of human exomes. Science 337, 64–69.
[6] Fu, W., et al. (2012) Analysis of 6,515 exomes reveals the recent origin of most human protein-coding variants. Nature 493, 216–220.
[7] Myers, S., Fefferman, C., and Patterson, N. (2008) Can one learn history from the allelic spectrum? Theoretical Population Biology 73, 342–348.
Hi Anand and Yun,
I thought I’d post some comments on your preprint. I’ll admit that I’ve not fully digested all of the results, and so not all of my comments will be fully formed.
It took me a couple of reads to reconcile these results with those of Myers et al, and realize that they are not really at odds with them. I viewed Myers et al as saying that you might perfectly infer a particular demographic model but still be seriously misled about the “real” history (although these alternate histories may not be realistic). I.e. I can identify the parameters of a specific parametric discrete epoch demographic history, but there are still a family of quite different demographic histories that would match this site frequency spectrum perfectly. I felt like the introduction/conclusion could be clearer on this point, as people may view your article as saying the central premise of Myers et al doesn’t hold if you are not careful.
BTW I should have started by saying that I think that this is a very useful contribution, and in general seems quite well laid out.
I was also unsure what the "discrete units of time" in the introduction referred to. From this post here, I'm wondering whether you mean that the population has intervals between births and deaths of individuals. I think my confusion arises as this form of "non-continuous(ness)" feels very different from fitting a demographic model with a "reasonable" number of discrete epochs.
It seems important to keep comments about whether the alternate continuous curves of Myers et al are biologically realistic slightly separate from whether we can fit discrete models. Populations do fluctuate very rapidly in size, likely far more than we are capable of fitting with coalescent based models. I guess my worry is that these (unmodeled) fluctuations might mean that our estimated picture of population size over time, derived from the frequency spectrum, is a far from complete view of demography.
I’ve wondered whether there is some way to better represent our uncertainty in population histories from the frequency spectrum, so we can then look for other aspects of data that can fill in these gaps.
I also think this is a very interesting and useful contribution.
Not sure I agree with Graham about Myers et al.; I thought it was clear enough where this stands in relation to that paper. In one sense they addressed a narrowly mathematical question, analogous (as they pointed out) to that of whether one can hear the shape of a drum. Ask a mathematician and she will say no, because of such and such properties of the eigenvalues of certain functions and these constructed examples; ask anyone else and they will say yes of course, if it’s big it sounds deep. Perhaps a trivial observation in itself, but it seems to me you have gone some way to quantifying it (for demography), which is great.
You do briefly discuss the practical issue regarding SFS estimation. However a key practical issue is how well behaved the system is. For example are there certain classes of demography (i.e. regions of the parameter space) which are particularly sensitive, such that any uncertainty in the SFS confounds inference? (Perhaps you address this somewhere and I missed it.)
As a broader issue, one might point out that you are still restricting yourself to a small subset of the larger issue about identifiability in demographic inference, namely analyses based on the SFS. The theoretician in me is quite happy to see a mathematical solution to a well stated problem. But we know there is a lot more information in population genomic data than just the SFS, e.g. haplotype distributions, linkage, branch lengths etc. The big question is: can we further expand the space of models we can discriminate by adding such data, or are there aspects of past demography that we will never be able to recover (at least with genetic data alone)? Can your approach help to answer that? Would be interesting to hear your thoughts, and might even be worth considering in your discussion.
Thanks Aylwyn for raising excellent questions. We’ve been working on exactly the problems you mention.
In a forthcoming paper on efficient demographic inference using the SFS, we’re examining the sensitivity of parameter estimation to the uncertainty in the SFS due to having only a finite number of segregating sites. For example, from simulated data, we’ve found that there can be substantial uncertainty in estimating the rate of very rapid recent exponential population growth if the SFS is not very reliable.
We’re also thinking about the power and limits of inference using linkage data. I’m not sure if the machinery in this paper can be directly applied to that problem, since we relied on the coalescent theory about the SFS being mathematically fairly simple. The coalescent becomes substantially more complicated when modelling whole sequences with recombination, so I think this will be a harder problem and we don’t have concrete results yet.
As a non-theoretician who may have slightly misinterpreted this manuscript, I'm probably going to say a couple pretty dumb things here. When I first "read" the Myers paper I interpreted it as having a rather negative outlook on the usefulness of the SFS to accurately identify historical demography. As an empiricist interested in estimating demographic histories, this concerned me, but I never tried to dig through the math to gain a deeper understanding. I kept tabs on the paper as I hoped somebody would come along and put it in plainer English, or that it might be the source of an antagonistic perspective piece in which some old crank yells at everyone that they can't use the SFS because the models aren't identifiable, but all I seemed to notice were oblique references deep in the discussion section of papers using the SFS (e.g., Gutenkunst et al.'s dadi paper). So I was pretty interested to see this paper, which, as Graham warned about, I took to be at least somewhat at odds with Myers et al.
So now having read it over, I’m a bit confused about the takeaway message. It seems to be saying that a restricted subset of piecewise models are identifiable. We can pretty much guarantee, as Graham mentioned, that this restricted subset is not going to exactly reflect reality, as populations can fluctuate extremely rapidly, so it kind of seems that from an empirical perspective, we are where we started, only now we’ve highlighted that we don’t really know what the behavior of simple models applied to complex real data will be. I guess as an empiricist, the things I’m most concerned about here are can I figure out when my simple model is too simple, and if some data sets have identifiability issues, can we develop a means of identifying them?
Noah, you’re right that the underlying demographic model might not be simple and that the space of demographic models we consider for inference might not contain the true demographic model (assuming there is such a thing, since coalescent/diffusion theory is already a mathematical idealization). On the other hand, I think there is much value in restricting attention to the space of sparse and simple population models. For example, if you have the SFS from a sample size of 1000 haploids, then with probability one, no demographic model with fewer than 999 degrees of freedom will generate the observed SFS exactly. But you almost certainly don’t want to fit so many degrees of freedom to such data, since you risk fitting to noise in the data. If you have a priori knowledge that your demographic model belongs to some family of models (such as a piecewise exponential family), then you could use information criteria like AIC or BIC in a statistically sound way to choose between different models that explain the data relatively equally well. This is something we’re doing in a forthcoming paper on demographic inference using the SFS. It also seems that the literature has treated the Wright-Fisher model as a starting point for all modelling, and then approximated it with coalescent/diffusion theory. In that case, piecewise constant models in the coalescent where each generation of the Wright-Fisher model corresponds to one piece are the most fine-grained demographic models we might want to consider for inference.
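To illustrate the model selection idea with AIC: a piecewise constant model with K epochs has 2K - 1 parameters (K sizes plus K - 1 change times), and AIC trades off fit against that parameter count. The sketch below uses entirely made-up log-likelihood values, just to show the mechanics:

```python
def aic(log_likelihood, num_params):
    """Akaike information criterion: 2k - 2*logL. Lower is better."""
    return 2 * num_params - 2 * log_likelihood

# Hypothetical maximized log-likelihoods for nested piecewise-constant
# demographic models fit to the same observed SFS (numbers are invented):
models = {
    "1 epoch (1 param)":   (-1052.0, 1),
    "2 epochs (3 params)": (-1003.5, 3),
    "3 epochs (5 params)": (-1002.9, 5),
}
for name, (ll, k) in models.items():
    print(f"{name}: AIC = {aic(ll, k):.1f}")
# Here the 2-epoch model wins: the third epoch barely improves the fit,
# so AIC penalizes the two extra parameters.
```

BIC works the same way with a penalty of k*log(number of SNPs) instead of 2k, and penalizes extra epochs more heavily for large datasets.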
Perhaps Aylwyn is right and I’m worrying about this too much.
I think maybe the thing I’m struggling with here, and with the original Myers et al paper, is I don’t really know how different the population histories are that can give exactly the same frequency spectrum (nor how imperfect knowledge plays into this). Their function whose harmonics vanish to give the constant population size model looks quite different from a constant population. If I fit a (say) 10 epoch constant-pop model, I would likely fit a constant population model (i.e. a straight line) and not acknowledge those peaks and troughs. Is that a reasonable representation of my uncertainty, as the Myers et al fluctuations are unrealistic? Or am I glossing over important possible demographic models by focusing on a particular form (especially given my uncertainty in some aspects of the frequency spectrum)?
I guess I feel like I still don’t know the answer to these questions, perhaps just because I’m not fully understanding the results.
We faced a similar problem with Peter Ralph’s and my IBD paper (here). We think we could have fit models of exponential growth really quite well. However, given how variable our less constrained solutions to the problem were, we felt that fitting a piecewise model would not really represent our uncertainty fully. I know this is a somewhat distinct problem, but it still makes me nervous about treating the fact that we can estimate piecewise models well as sufficient evidence that we are acknowledging our uncertainty fully.
Anyhow, this seems like a good step in the right direction; perhaps just a bit of clarification on this point would settle it more definitively.
Thanks Graham for your thoughtful comments. I’m responding to your earlier comments as well in this reply.
You’re right that we need to better understand the equivalence classes of different population size functions that can generate the same SFS (or even whole haplotype data). On the other hand, different population size functions in such a class must have an unbounded number of oscillations (in a very loose sense of the word oscillation; more precisely, the difference of rescaled versions of two population size functions in a class must oscillate around zero an unbounded number of times), because otherwise we could use a large enough sample size *in theory* to distinguish these functions (by the number of sign changes argument in the paper). This is also the case for Myers et al.’s counterexample.
Also, in the introduction of the paper, by mentioning “discrete units of time”, we had imagined using the discrete Wright-Fisher model as the “true” population genetic model. In that case, every 1/(2N) units of time in the coalescent could theoretically be one piece of a piecewise constant model, and given enough samples and segregating sites, one can in principle infer the population size at each generation of a Wright-Fisher model up to some time in the past (assuming there was some known ancestral population size further back in time). We briefly mentioned this line of thought in the last paragraph of the discussion. The 1/(2N) is also a resolution limit in the coalescent model, since population sizes that change more rapidly than this won’t have an interpretation in the discrete Wright-Fisher model (this is why we said that the explicit counterexample of Myers et al. is a little unrealistic). All of this is of course in theory.
From a more practical perspective though, with a large enough number of segregating sites and accurate variant calling, I don’t think fitting something like 10 epochs of constant population size will be any problem. But if you want to fit a very large number of epochs to data (where “very large” is relative to the number of samples and the number of segregating sites), then this comes down to a model selection problem, and I don’t think it’s necessarily worth considering/modelling very complex population size functions, for at least two distinct reasons. Firstly, from a model selection perspective, we want to avoid overfitting to noise in the data. Secondly, we’re trying to fit the degrees of freedom in our idealized model (here, the population size function in an idealized coalescent model) that reproduce the genetic data well. To me at least, the notion of effective population size in our population genetic models is still a bit disconnected in interpretation from the real quantity of census population size, since the effective population size assumes random mating, which is quite far from what happens in real populations. I think this might also be why we only infer current effective population sizes on the order of millions of diploids from large sample sequencing studies [4-6] even though the census population size is on the order of a billion diploids. If a very complicated population size function explains the data just as well as a simple function, then for any downstream uses of population genetic models (like say for GWAS), the simple population size function should be good enough.
We’ll try to address your comments and be clearer in a revised version of the paper.