Probability of fixation

From Genetics Wiki
Revision as of 04:57, 24 September 2018 by Floyd (talk | contribs) (Working from the binomial)


The probability of fixation, u(p), of an allele at initial frequency p, with selection coefficient s and effective population size Ne, was derived in Kimura 1962.

[math]u(p)=\frac{1-e^{-4N_esp}}{1-e^{-4N_es}}[/math]

If we consider the initial frequency of a single new mutation in the population, p=1/(2Ne),

[math]u(p)_1=\frac{1-e^{-4N_es\frac{1}{2N_e}}}{1-e^{-4N_es}}=\frac{1-e^{-2s}}{1-e^{-4N_es}}[/math].

And if 4Nes is large the denominator approaches one, so

[math]u(p)_2\approx\frac{1-e^{-2s}}{1}=1-e^{-2s}[/math].

Using the Taylor approximation [math]e^{-2s}\approx 1-2s[/math] for small s,

[math]u(p)_2 \approx 1-e^{-2s} \approx 1-1+2s = 2s[/math].
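As a quick numeric sanity check of the chain of approximations above (Ne = 10,000 is an illustrative value, not from the text; s = 0.03 matches the 3% example below):

```python
import math

# Illustrative values: Ne = 10_000 is an arbitrary choice, s = 3% advantage.
N_e = 10_000
s = 0.03
p = 1 / (2 * N_e)  # initial frequency of a single new mutation

# Kimura's formula, the large-4Nes approximation, and the Taylor approximation.
u_exact = (1 - math.exp(-4 * N_e * s * p)) / (1 - math.exp(-4 * N_e * s))
print(round(u_exact, 5))                  # ~0.05824
print(round(1 - math.exp(-2 * s), 5))     # ~0.05824
print(2 * s)                              # 0.06
```

All three agree to within a few tenths of a percent here, since 4Nes = 1,200 is very large and s is small.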

This agrees with the results of Fisher 1930 (pp. 215--218) and Wright 1931 (pp. 129--133).

It may be surprising at first that the probability of fixation of a new allele that confers a fitness advantage is only approximately 2s. If the allele gives a 3% fitness advantage, its probability of fixation is only about 6%; in other words, there is a 94% chance that the new adaptive allele will be lost due to genetic drift. This implies that adaptive evolution is very inefficient, and that an adaptive allele may have to arise repeatedly by mutation, being lost to drift each time, before a copy eventually fixes.

Why is this process so inefficient? When an allele is rare, such as a single copy of a new mutation, drift is typically a much stronger force than selection. As an example, work out the probability of sampling zero copies of an allele, starting from a count of one, from one generation to the next using a Poisson distribution with a mean of λ = 1.

[math]P(k)=\frac{\lambda^k e^{-\lambda}}{k!}[/math]

There is a 0.3679 probability of loss in the next generation. The probability of one copy in the next generation is also 0.3679; two copies, 0.1839; three copies, 0.0613; four copies, 0.0153; five copies, 0.00307; etc. With a greater than one-third chance of loss in a single generation, neutral drift can easily outweigh the small advantage (on the order of single percents) added by selection.
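These values can be reproduced directly from the Poisson formula above:

```python
import math

# Poisson probabilities for the number of copies in the next generation,
# starting from a single copy (lambda = 1).
lam = 1.0
pmf = [lam**k * math.exp(-lam) / math.factorial(k) for k in range(6)]
for k, prob in enumerate(pmf):
    print(k, round(prob, 5))
# k = 0 (loss) and k = 1 both come out to ~0.36788
```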

Even if the allele survives into the next generation, it is most likely at a count of one and again has a greater than 1/3 chance of loss in the following generation. If it reaches two copies, there is a 13.5% chance of loss in the next generation, a 27.1% chance of dropping back down to a count of one, etc. Multiplying along each path from one copy in the first generation, through a count of up to five in the second, to zero in the third (1->0->0, 1->1->0, 1->2->0, 1->3->0, 1->4->0, 1->5->0) using a Poisson distribution gives 0.3679, 0.1353, 0.02489, 0.003053, 0.0002807, and 0.00002066 respectively, for a total probability of 0.5315. So there is a greater than 1/3 chance of loss by the second generation and a greater than 1/2 chance of loss by the third generation.
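This path enumeration is the start of a Poisson branching process, and the cumulative loss probability can be computed without enumerating paths by iterating the Poisson probability-generating function, q ← e^(q−1) (a standard branching-process identity, not from the text; the first two iterates match the 0.3679 and 0.5315 computed above):

```python
import math

# Probability that the lineage of a single copy is lost within t rounds
# of reproduction, for Poisson offspring with mean 1 (pure drift).
# Branching-process recursion: q_{t+1} = exp(q_t - 1).
q = 0.0
loss_by = []
for t in range(3):
    q = math.exp(q - 1)
    loss_by.append(q)
print([round(x, 4) for x in loss_by])  # [0.3679, 0.5315, 0.6259]
```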

Calculating probabilities beyond this point should be done with a binomial transition matrix to properly account for all of the possible paths.

Notes

Working from the binomial

The deterministic change in frequency due to selection is

[math]p_{t+1} = \frac{w p}{w p + (1-p)}[/math]

where w is the fitness of the allele at frequency p and the fitness of the alternative allele is one (see Selection).
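A direct transcription of this recursion (the starting frequency and s = 0.03 are illustrative values, not from the text) shows that selection alone carries the allele deterministically toward fixation:

```python
# Deterministic allele-frequency change under selection, with w = 1 + s.
def next_freq(p, s):
    return (1 + s) * p / ((1 + s) * p + (1 - p))

p = 0.01  # illustrative starting frequency
for generation in range(200):
    p = next_freq(p, s=0.03)
print(round(p, 3))  # -> 0.789
```

A useful property of this recursion is that the odds p/(1-p) are multiplied by exactly (1+s) each generation.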

The probability of transitioning from the current number of "A" alleles to all possible numbers of "A" alleles in the next generation (0--2N) due to random sampling (genetic drift) is given by the Binomial Distribution.

[math]P\left(k_{t+1}\right)={2N \choose k} p^k \left(1-p\right)^{2N-k}[/math]

Here p is the current allele frequency and k is the number of alleles of the specific type in the next generation.

We can combine selection and drift by substituting pt+1 from the selection equation above for p in the binomial, with w = 1 + s.

[math]P\left(k_{t+1}\right)={2N \choose k} \left(\frac{(1+s) p}{(1+s) p + \left(1 - p\right)}\right)^k \left(1-\frac{(1+s) p}{(1+s) p + \left(1 - p\right)}\right)^{2N-k}[/math]

Use a transition matrix based on this formula and iterate a starting allele frequency for a large number of generations to compare the outcome to the predictions from the equation for the fixation probability... (to be continued)

Have an initial vector, S, representing the starting state of one copy of an allele in the population, multiply it by a drift transition matrix, D, raised to a large number of generations, g.

[math]\mathbb{S} \mathbb{D}_{i \to j}^g = \begin{bmatrix}0, & 1, & 0, & 0, & \cdots & 0\end{bmatrix} \mathbb{D}_{i \to j}^g[/math]

D contains the binomial probabilities of transitioning from i to j number of alleles each generation.
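A sketch of this check (N = 25 and s = 0.05 are illustrative choices, small enough that the pure-Python matrix iteration stays fast; Ne = N here):

```python
import math

N, s = 25, 0.05
size = 2 * N + 1  # allele counts 0..2N

def binom_pmf(j, n, p):
    return math.comb(n, j) * p**j * (1 - p)**(n - j)

# D[i][j]: probability of going from i to j copies in one generation
# (deterministic selection, then binomial sampling, as in the equation above).
D = []
for i in range(size):
    p = i / (2 * N)
    p_sel = (1 + s) * p / ((1 + s) * p + (1 - p))
    D.append([binom_pmf(j, 2 * N, p_sel) for j in range(size)])

# Start from one copy, S = [0, 1, 0, ..., 0], and iterate many generations.
S = [0.0] * size
S[1] = 1.0
for generation in range(1000):
    S = [sum(S[i] * D[i][j] for i in range(size)) for j in range(size)]

u_matrix = S[2 * N]  # probability mass absorbed at fixation
u_kimura = (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))
print(round(u_matrix, 4), round(u_kimura, 4))  # the two agree closely
```

After 1,000 generations essentially all probability mass sits at the absorbing states 0 and 2N, and the mass at 2N closely matches Kimura's diffusion prediction for a single new mutation.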

Kimura's derivation

This is derived from

[math]u(p) = \frac{\int_0^p G(x)\, \mbox{d} x}{\int_0^1 G(x)\, \mbox{d} x}[/math],

equation 3 of Kimura 1962.

[math]u(p,t)[/math] is the probability of fixation of an allele at frequency p within t generations.

Considering the change in allele frequency ([math]\delta p[/math]) over a short period of time ([math]\delta t[/math]), the fixation probability satisfies

[math]u(p, t+\delta t) = \int f(p, p+\delta p; \delta t) u(p+ \delta p, t) \, \mbox{d} (\delta p)[/math],

integrating over all values of changes in allele frequency ([math]\delta p[/math]).

The mean and variance of the change in allele frequency (p) per generation are defined as

[math]M_{\delta p}=\lim_{\delta t \to 0} \frac{1}{\delta t} \int (\delta p) f(p, p+\delta p; \delta t) \, \mbox{d} (\delta p)[/math]

[math]V_{\delta p}=\lim_{\delta t \to 0} \frac{1}{\delta t} \int (\delta p)^2 f(p, p+\delta p; \delta t) \, \mbox{d} (\delta p)[/math]

The probability of fixation given sufficient time for fixation to occur is

[math]u(p)=\lim_{t \to \infty} u(p,t)[/math]

[math]G(x) = e^{-\int \frac{2M_{\delta x}}{V_{\delta x}} \, \mbox{d} x}[/math]
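Filling in the remaining steps (my own working from the definitions above, assuming genic selection, where the mean and variance of the per-generation change are [math]M_{\delta x}=sx(1-x)[/math] and [math]V_{\delta x}=\frac{x(1-x)}{2N_e}[/math]):

[math]\frac{2M_{\delta x}}{V_{\delta x}}=\frac{2sx(1-x)}{x(1-x)/(2N_e)}=4N_es[/math]

[math]G(x)=e^{-\int 4N_es \, \mbox{d} x}=e^{-4N_esx}[/math]

[math]u(p)=\frac{\int_0^p e^{-4N_esx}\, \mbox{d} x}{\int_0^1 e^{-4N_esx}\, \mbox{d} x}=\frac{\frac{1}{4N_es}\left(1-e^{-4N_esp}\right)}{\frac{1}{4N_es}\left(1-e^{-4N_es}\right)}=\frac{1-e^{-4N_esp}}{1-e^{-4N_es}}[/math]

which recovers the equation at the top of the page.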

(to be continued ... I need to work through this and my calculus is rusty.)

A different approach

[math]u(p)=\frac{1-e^{-4N_esp}}{1-e^{-4N_es}}[/math]

The numerator is the probability of not sampling zero copies (i.e., not losing the allele) from a Poisson distribution with a mean of 4Nesp. Here 2Nep is the number of copies of the allele in the population, and this is multiplied by 2s: [math]2N_ep \times 2s = 4N_esp[/math]

Why is s multiplied by two here?

The denominator is the same expression with p=1 (the largest value possible). This rescales the numerator to be a fraction out of one(?).

I suspect there is a more intuitive approach to understanding this by exploring this line of reasoning but I am not quite seeing it yet.

[math]1-e^{-4N_esp}\approx 1- \left(1 - 4N_esp\right) = 4N_esp[/math] ?

[math]u(p)=\frac{1-e^{-4N_esp}}{1-e^{-4N_es}}\approx\frac{4N_esp}{4N_es}=p[/math] (when 4Nes is small?)

[math]1-e^{-4N_esp}\approx 1- (1 - 2s)^{2Np}[/math] ?
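The guessed approximation above can be checked numerically; for small s the two expressions are indeed close (N, s, and p here are illustrative values, not from the text):

```python
import math

# Check that (1 - 2s)^(2Np) is close to e^(-4Nsp) for small s.
N, s, p = 1000, 0.01, 0.05

lhs = 1 - math.exp(-4 * N * s * p)
rhs = 1 - (1 - 2 * s) ** (2 * N * p)
print(round(lhs, 4), round(rhs, 4))  # 0.8647 vs 0.8674
```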

There is more to go through in Fisher 1930 (pp. 215--218) and Wright 1931 (pp. 129--133). I need to start there and work forward.