Equation of the Month
A blog run by the Theoretical Population Ecology and Evolution Group,
Biology Dept.,
Lund University
The purpose of this blog is to emphasize the role of theory for our understanding of natural, biological systems. We do so by highlighting specific pieces of theory, usually expressed as mathematical 'equations', and describing their origin, interpretation and relevance.
Thursday, December 18, 2014
Diet Choice
What does it mean?
Assume that a predator feeds on two distinct and randomly distributed prey types (types 1 and 2, with densities N1 and N2, respectively), and that the forager attempts to maximize the rate at which energy is acquired. The predator searches the environment randomly (with search rate a) and encounters the two prey types in proportion to their respective densities. Let the prey types also differ in energetic reward (ei) and in the time it takes to handle each prey from capture until ingestion (handling time, hi). The two prey types differ in profitability such that e1/h1 > e2/h2. During a foraging bout when the predator does nothing but search for and handle prey, the optimal strategy - the strategy that maximizes energy intake rate - is to always attack prey type 1 whenever encountered. Prey type 2 should always be left unattacked unless the density of prey type 1 falls below a certain threshold, namely the one given by the diet choice inequality, N1 < e2/(a(e1h2 - e2h1)). If this criterion is satisfied, both prey types should be attacked whenever encountered.
The threshold increases with increasing energetic reward from type 2 (e2) and with decreasing handling time (h2). That is, the more profitable prey type 2 is, the higher the density threshold.
The diet choice model implies an all-or-nothing response (the "zero-one rule") - either the less profitable prey type should be fully included in the diet or it should not be eaten at all.
The diet should be broad if the profitabilities of the two prey types are similar and the more profitable prey is relatively rare. A narrow diet is advantageous if the converse is true.
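The zero-one rule can be sketched numerically. A minimal example in Python (the parameter values are invented for illustration, not taken from the post): type 2 is attacked whenever its profitability e2/h2 is at least the intake rate of a type-1 specialist.

```python
def include_type2(a, N1, e1, h1, e2, h2):
    """Attack prey type 2 iff its profitability e2/h2 is at least the
    intake rate of a type-1 specialist, a*N1*e1 / (1 + a*N1*h1)."""
    rate_specialist = a * N1 * e1 / (1 + a * N1 * h1)
    return e2 / h2 >= rate_specialist

# Illustrative parameters with e1/h1 > e2/h2:
a, e1, h1, e2, h2 = 0.1, 10.0, 1.0, 4.0, 1.0

# Threshold density of type 1 below which type 2 enters the diet:
N1_crit = e2 / (a * (e1 * h2 - e2 * h1))

print(round(N1_crit, 2))                       # 6.67
print(include_type2(a, 2.0, e1, h1, e2, h2))   # True: type 1 is scarce
print(include_type2(a, 20.0, e1, h1, e2, h2))  # False: specialize on type 1
```

Note that the decision does not depend on N2 at all - only the density of the more profitable prey matters, which is the hallmark of the model.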
For models (and experimental results) with less abrupt switches between the inclusion of both prey types or not, see for example Fryxell & Lundberg (1997).
Where does it come from?
In 1966 John Emlen and Robert MacArthur, together with Eric Pianka, laid the foundation for a new discipline subsequently christened "behavioral ecology" (MacArthur & Pianka 1966, Emlen 1966). They did so by assuming that organisms should, one way or another, behave optimally as a result of natural selection in order to maximize Darwinian fitness. A large number of optimality models emerged from this then rapidly growing research field, and the diet choice model was one of the first (see Stephens & Krebs 1986 for a review).
Importance
The diet choice model, together with the marginal value theorem, launched the research program called "optimal foraging theory", which spawned a huge number of models of foraging behavior with different currencies to maximize (other than energy gain), trade-offs, risk sensitivity, foraging modes, and implications for population and community dynamics (Fryxell & Lundberg 1997). The diet choice model is closely related to the Disc equation and is therefore an important link between fitness maximization at the individual level and the dynamics of predators and their prey.
The foraging decision based on the diet choice model is a response to the threat of lost opportunities. A predator that always attacks both prey indiscriminately would waste the opportunities for feeding on the more profitable prey.
Per Lundberg
Literature
Emlen, J.M. 1966. The role of time and energy in food preference. Am. Nat. 100: 611-617.
Fryxell, J. M. & Lundberg, P. 1997. Individual behavior and community dynamics. Chapman & Hall, NY.
MacArthur, R. H. & Pianka, E. R. 1966. On optimal use of a patchy environment. Am. Nat. 100: 603-609.
Stephens, D. W. & Krebs, J. R. 1986. Foraging theory. Princeton UP.
Stephens, D. W., Brown, J. S. & Ydenberg, R. C. (eds) 2007. Foraging - behavior and ecology. Chicago UP.
Friday, February 7, 2014
The Ricker (logistic) model
Nt+1 = Nt exp(r0(1 - Nt/K)),
where r0 is the maximum per capita growth rate and K is the carrying capacity (equilibrium population density). The Ricker equation models the change in population density (size) from one point in time, t, to a future point in time, t+1.
What does it mean?
The Ricker model (or Ricker logistic equation) models the discrete time step (discrete generation) dynamics of a population of size N. As in the discrete logistic equation, K is the population size at which the growth rate of the population (Nt+1 - Nt) is zero. This does not mean that births and deaths are absent, but rather that births and deaths are equal and cancel each other out. Per capita growth (Nt+1 / Nt) is a decreasing function of Nt and is maximized (and equal to exp(r0)) as Nt approaches 0.
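The dynamics are easy to explore by iteration. A minimal sketch in Python (the values of r0, K and the starting density are illustrative choices):

```python
import math

def ricker(N, r0, K):
    """One Ricker step: Nt+1 = Nt * exp(r0 * (1 - Nt / K))."""
    return N * math.exp(r0 * (1 - N / K))

r0, K = 0.5, 100.0
N = 5.0
for t in range(200):
    N = ricker(N, r0, K)
print(round(N, 6))   # 100.0 - the population settles at K for this modest r0
```

Note that Nt+1 stays positive however large Nt gets, unlike the discrete logistic; for r0 above 2 the equilibrium loses stability and cycles, and eventually chaos, appear.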
Where does it come from?
The model was first introduced in Ricker (1954), where it was used to model stock dynamics and recruitment in fisheries. The model is similar to (in terms of formulation and dynamical behavior) and inspired by the logistic growth equation; however, it is formulated in such a way that Nt+1 cannot become negative, even if Nt >> K. Consequently, it is somewhat more realistic and "safer" to use.
Importance
As with the logistic equation, the Ricker model is used in theoretical ecology to model population dynamics and recruitment in general, although it was first introduced to model fish populations.
Mikael Pontarp
Literature
- Ricker, W.E. 1954. Stock and recruitment. J. Fish. Res. Board Can. 11:559-623
- Case, T.J. 2000. An illustrated guide to theoretical ecology. Oxford UP
- Mangel, M. 2006. The theoretical biologists toolbox. Cambridge UP
- Brännström, Å. and Sumpter, D.J. 2005. The role of competition and clustering in population dynamics. Proc. Biol. Sci. 272(1576): 2065–72
- Geritz, S.A. and Kisdi, E. 2004. On the mechanistic underpinning of discrete-time population models with complex dynamics. J. Theor. Biol. 228(2):261–9
Wednesday, October 9, 2013
The SIR model
What does it mean?
dS/dt = -βSI
dI/dt = βSI - γI
dR/dt = γI
This is the standard model for how an infectious disease spreads in a closed population without births, deaths or migration. Individuals fall into three categories: the ones not yet infected but susceptible (S), the infected ones (I), and the ones that have recovered from the disease and become immune (R). The system describes the rate of change of the proportions in the three categories, and only two parameters dictate the dynamics - β is the rate at which susceptible individuals become infected, and γ is the recovery rate (1/γ is then the average infectious period). Note that S, I and R are proportions, not the numbers of individuals in the three categories.
When a pathogen invades the population everybody is susceptible (S = 1). This proportion decreases as more and more individuals become infected (I increases). But the infected ones recover with rate γ and eventually no-one is infected and everybody has recovered (R = 1).
From the second equation we can infer one of the most important quantities in epidemiology. If this derivative (dI/dt) is less than zero the infection dies out, which happens if the initial fraction of susceptibles is less than γ/β. It is customary to use the inverse of this ratio to indicate whether an infection will spread in a population of susceptible individuals. This inverse ratio, R0 = β/γ, is called the basic reproductive ratio and is the average number of secondary cases arising from an average primary case in an entirely susceptible population. If R0 > 1 the infection can spread, and if R0 < 1 it cannot. Human influenza has an estimated R0 of around 3-5, measles around 16-18. It follows that (in a closed population) an infectious disease can only spread if the initial fraction of susceptibles is greater than 1/R0.
Here is where vaccination comes in. The initial proportion of susceptibles can be reduced and to what extent that is necessary depends on R0 of the disease. The critical fraction, pc, of vaccinated individuals can be shown to be pc = 1 − 1/R0. Thus, the more infectious the disease (high R0), the larger the fraction of the population has to be vaccinated in order to stop the disease from spreading.
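These threshold results are easy to check numerically. Below is a forward-Euler sketch of the SIR system in Python; β, γ, the step size and the initial conditions are illustrative choices, not values from the post.

```python
def simulate_sir(beta, gamma, S0, I0, dt=0.01, steps=20_000):
    """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    dR/dt = gamma*I with the forward Euler method; S, I, R are proportions."""
    S, I, R = S0, I0, 1.0 - S0 - I0
    peak_I = I
    for _ in range(steps):
        dS = -beta * S * I
        dI = beta * S * I - gamma * I
        dR = gamma * I
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
        peak_I = max(peak_I, I)
    return S, I, R, peak_I

beta, gamma = 1.5, 0.5                    # R0 = beta/gamma = 3
_, _, _, peak = simulate_sir(beta, gamma, S0=0.999, I0=0.001)
print(peak > 0.25)                        # True: a full epidemic takes off

# With S0 below gamma/beta = 1/3 (e.g. after vaccination), I only declines:
_, _, _, peak_vac = simulate_sir(beta, gamma, S0=0.3, I0=0.001)
print(peak_vac <= 0.001)                  # True: no epidemic

print(round(1 - 1 / (beta / gamma), 3))   # critical vaccination fraction 0.667
```

The second run illustrates herd immunity: pushing the initial susceptible fraction below 1/R0 is enough to prevent an outbreak, exactly as pc = 1 − 1/R0 predicts.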
Where does it come from?
The simple SIR model (Susceptible, Infected, Recovered) was first formulated by W.O. Kermack and A.G. McKendrick in 1927. This was the first mathematical theory of the spread of diseases and they identified the critical threshold for a pandemic (an infection that is spread in a very large proportion of the population over large areas).
Importance
Despite its simplicity, the SIR model captures many important and useful aspects of the spread of infections. The model is easily expanded to include births and deaths in the population. In that case R0 = β/(γ + μ), where μ is the natural death rate in the population (not the mortality caused by the disease).
Many modifications have been made to the SIR model to better understand (and prevent) the spread of diseases, both in humans and in wild animal populations. The SIRS models allow the recovered individuals to become susceptible again, the SEIR model includes a class of individuals that are infected but not yet infectious (the "E"), and the MSEIR model accounts for immunity passed on from the mother to the newborn, to mention a few of the many SIR modifications. An important extension is to model the spread of the disease in space (e.g., Finkenstädt & Grenfell 1998).
It is of utmost importance to be able to model the epidemiology of a disease in order to prevent pandemics, to design efficient vaccination programs and to understand the role of diseases in ecological system. The SIR model is the backbone of all such efforts.
Per Lundberg
Literature
Finkenstädt, B. & Grenfell, B. 1998. Empirical determinants of measles metapopulation dynamics in England and Wales. Proc. Roy. Soc. B 265: 211-220.
Keeling, M. J. & Rohani, P. 2008. Modeling infectious diseases. Princeton UP.
Kermack, W. O. & McKendrick, A. G. 1927. A contribution to the mathematical theory of epidemics. Proc. Roy. Soc. Lond. A 115: 700-721.
Ostfeld, R. S., Keesing, F. & Eviner, V. T. (eds) 2008. Infectious disease ecology. Princeton UP.
Wednesday, April 17, 2013
The Breeder's Equation
What does it mean and where does it come from?
R = h2S
R is the response to selection, defined as the difference in mean phenotype between offspring and the parent generation (before selection).
h2 is the (narrow sense) heritability, defined as the ratio between the additive genetic variance of the trait under consideration and the total phenotypic variance of the same trait.
S is the selection differential, defined as the difference in mean phenotype of the parent generation before and after selection.
This classic relationship in quantitative genetics is the simplest selection model, with roots deep in the early writings of Karl Pearson and in Lush (1937), although its exact origin is somewhat obscure. Another version of the same model is
Δz = βw,z / w · cov(zoffspring,zmidparent)
where Δz is the change in mean trait value (R above), w is mean fitness, βw,z is the regression coefficient between fitness and the trait value and the covariance term describes the covariation between (mid)parent trait value and offspring trait value, also called the additive genetic variance of the trait (if the trait is fitness itself, this equation is equivalent to Fisher's Fundamental Theorem). The relationship between the two equations above is given by
h2 = cov(zoffspring,zmidparent) / var(zparent)
and
S = βw,z var(zparent) / w.
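The equivalence of the two formulations can be checked directly. A sketch in Python, with invented numbers (the covariance, variance, regression coefficient and mean fitness are not from any real data set):

```python
cov_op = 0.6    # cov(z_offspring, z_midparent): the additive genetic variance
var_p = 1.2     # phenotypic variance of the parental trait, var(z_parent)
beta_wz = 0.25  # regression coefficient of fitness on the trait
wbar = 1.0      # mean fitness w

h2 = cov_op / var_p             # heritability
S = beta_wz * var_p / wbar      # selection differential
R = h2 * S                      # breeder's equation

dz = (beta_wz / wbar) * cov_op  # the covariance formulation

print(R, dz)   # 0.15 0.15 - identical, as the algebra requires
```

The variance terms cancel, so the two versions are the same model written in different currencies: heritability and selection differential on the one hand, covariances and fitness regressions on the other.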
The multivariate version of the breeder's equation was worked out by Lande and others (e.g., Lande 1982, Lande and Arnold 1983). The breeder's equation is one of the backbones of the quantitative genetics approach in evolutionary theory.
Importance
If traits aren’t heritable (h2 = 0) or if there is no selection (S = 0), then there can be no selective change in trait values, and no adaptive evolution. Quite obvious, perhaps, but nevertheless fundamentally important. The breeder’s equation in its many versions does have some important limitations, though. It only describes the change in (mean) trait values from one generation to the next (fine in plant and animal breeding) and says very little about long-term evolution. There are also some serious problems when it comes to observational data, for example from populations in the wild (e.g., Morrissey et al. 2010). There are also some more fundamental problems. It can be shown (Morrissey et al. 2010) that the relationship between fitness and genes must be the same as the relationship between fitness and phenotypes in order for the predictions of the breeder’s equation and another important quantitative genetics equation (“the second theorem of natural selection”, the Robertson–Price identity) to be the same. That is to say, there are some basic issues with the various relationships between traits, fitness, selection, and heritability that remain unresolved (in natural populations), and the breeder’s equation has limited value when predicting microevolution in wild populations. This is illustrated when the breeder’s equation is expanded (Heywood 2005, Morris and Lundberg 2011) to
R = h2S + σwz',z + E(Δz),
where σwz',z is the partial covariance - the covariance controlled for the midparent value - between fitness and the mean offspring trait value (the component of the difference in mean trait value between generations that is caused by factors influencing differential fitness among parents, but that is not related to the trait value). E(Δz) is the expected change in trait value in the absence of fitness differences among parents, including for example drift. See Morris and Lundberg (2011) for elaborations on the problems with the breeder’s equation.
Per Lundberg
Literature
Lande, R. 1982. A quantitative genetic theory of life history evolution. Ecology 63: 607-615.
Lande, R. and Arnold, S. J. 1983. The measurement of selection on correlated characters. Evolution 37: 1210-1226.
Lush, J. 1937. Animal breeding plans. Iowa State College Press.
Lynch M. and Walsh, B. 1998. Genetics and analysis of quantitative traits. Sinauer.
Morris, D. W. and Lundberg, P. 2011. Pillars of evolution. Oxford Univ. Press.
Morrissey, M. B., Kruuk, L. E. B. and Wilson, A. J. 2010. The danger of applying the breeder’s equation in observational studies of natural populations. J. Evol. Biol. 23: 2277-2288.
Friday, August 31, 2012
The Disc Equation
What it means
f = aR / (1 + ahR)
The disc equation models the rate of intake (f) of prey items (with population density R) by a single predator whose only activity is foraging (searching for and handling prey). The two parameters are a, the attack (or search) rate (a constant), and h, the handling time per prey (also a constant). The function increases monotonically towards an asymptote set by 1/h. The shorter the handling time per prey, the higher the maximum intake rate.
The disc equation is an example of a functional response model.
Compare this equation with the Monod equation (bacterial growth) and the Michaelis-Menten equation (for enzymatic reaction rates).
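The saturating shape is easy to see numerically. A minimal sketch in Python (the attack rate and handling time are invented values):

```python
def disc(R, a, h):
    """Holling type II intake rate: f = a*R / (1 + a*h*R)."""
    return a * R / (1 + a * h * R)

a, h = 0.2, 0.5   # illustrative attack rate and handling time per prey
for R in (1, 10, 100, 10_000):
    print(R, round(disc(R, a, h), 3))
# Intake rises almost linearly at low prey density and saturates toward
# the asymptote 1/h = 2.0 as prey density grows.
```

At low density the predator is search-limited (f ≈ aR); at high density it is handling-limited (f ≈ 1/h), which is exactly Holling's point about handling time discounting the intake rate.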
Where does it come from?
In 1959, C. S. Holling published two seminal papers on the ”functional response” of predators, i.e., how the rate of predation should vary with prey density. In the first paper, Holling (1959a) distinguishes four types of predation; the second type (”Type II”) was elaborated on and derived more formally in the second paper (Holling, 1959b). It was subsequently named the ”disc equation” because the experiment Holling set up used artificial food items on a sandpaper disc and a blindfolded person (his secretary, no less) ”predating” on them. Holling noted that for a given rate of attack (a), more and more of the total time was spent handling prey as prey density increased, and that the intake rate therefore should be discounted by the handling time per prey item. Holling’s real interest was to understand how and when predators can regulate the density of prey species, particularly forest pest insects.
Importance
The disc equation soon became the standard model in practically all studies of foraging behavior and predator-prey interactions. The model was supported by numerous experimental results on a wide variety of predators feeding on a single prey type. With more than one type to feed on, the disc equation has been extended to the multiple prey functional response (Murdoch and Oaten 1975). That model assumes that the predator has no preference for any given prey type. If that is the case, the disc equation is less useful and models with switching rules and prey type preferences are better suited for the problem. The disc equation was used in the Rosenzweig-MacArthur model in 1963 and it quickly became the standard alternative to the linear Lotka-Volterra model of predator-prey dynamics.
Since the disc equation models a decelerating intake rate with increasing prey density, it leads to prey safety in numbers - the higher the prey density, the lower the per capita risk of being eaten. This tends to destabilize predator-prey interactions.
The disc equation is still a backbone of foraging theory and in theories of predator-prey interactions and food web dynamics.
Per Lundberg
Literature
Holling, C. S. 1959a. The Components of Predation as Revealed by a Study of Small-Mammal Predation of the European Pine Sawfly. The Canadian Entomologist 91: 293-320.
Holling, C. S. 1959b. Some characteristics of simple types of predation and parasitism. The Canadian Entomologist 91: 385-398.
Murdoch, W. W and Oaten, A. 1975. Predation and population stability. Adv. Ecol. Res. 9: 1-131.
Rosenzweig, M. L. and MacArthur, R. H. 1963. Graphical representation and stability conditions of predator-prey interactions. Am. Nat. 97: 209-223.
Wednesday, June 27, 2012
The Kleiber Law
R ∝ M^(3/4)
(from Hemmingsen, 1960)
What it means
Larger animals have relatively slower metabolisms than small ones. A mouse must eat about a third of its body mass every day not to starve whereas a human can survive on only 2%. The relationship follows a power law: basal metabolic rate (R) is proportional to the ¾ power of an animal's mass (M). This relationship, the Kleiber Law (Kleiber 1947), can be drawn as a straight line on a log-log plot (see Fig.). Mysteriously, this simple relationship holds, from simple organisms to most complex ones, from microbes to giant blue whales across 18 orders of magnitude in body mass.
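The mouse-versus-human contrast follows directly from the exponent: if whole-body rate scales as M^(3/4), then rate per unit mass scales as M^(-1/4). A quick Python sketch (the body masses are rough, invented figures):

```python
# If R scales as M**0.75, metabolic rate per unit mass scales as M**-0.25:
# smaller animals burn more energy per gram of tissue.
M_mouse, M_human = 0.02, 70.0            # body masses in kg (rough figures)
ratio = (M_mouse / M_human) ** (-0.25)   # mass-specific rate, mouse vs human
print(round(ratio, 1))   # 7.7 - a mouse burns roughly 8x more energy per gram
```

This quarter-power difference in mass-specific metabolism is what forces the mouse to eat a large fraction of its body mass every day while a human gets by on a few percent.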
Where does it come from
Max Kleiber, an ecologist from Switzerland, discovered the law in the early 1930s. After its initial publication, other workers added further species to his original figure. They extended it to a ‘mouse-elephant curve’ and subsequently even further, to whales and microbes, confirming the Kleiber law’s surprising validity. Before Kleiber published his law, explanations for why metabolic rate should change with body mass were already around, based on an organism’s body surface to volume ratio. Large animals have proportionately less surface area per unit volume; they hence lose body heat more slowly and, so it was argued, need proportionately less food and have a relatively slower metabolism. But following this argument, metabolic rate should scale with mass to the power of 2/3, not 3/4. A causal explanation for the ¾-law was lacking until scientists in the 1990s, using mathematical models, proposed that the geometry, and particularly the fractal structure, of an animal’s circulatory system could be the reason for the ¾ exponent (West et al. 1997, West et al. 1999). One problem with these models is that the derivations build on considerations of blood flow, but the Kleiber law also holds for organisms without a blood circulatory system, like bacteria or corals.
Applicability and importance
Whether metabolic rate always scales with body mass to the power of 3/4 is still debated - some researchers think that no single exponent fits all the data, and some believe it should be 2/3 instead. The ¾-law though, favored by the majority of biologists and fitting the data best, seems to be one of the few examples of a generally applicable ‘law’ in biology. Biologists are not used to finding general rules of this kind within their domain. I first learnt about it in a course on animal physiology during my graduate studies, and I remember clearly how much its simplicity and generality fascinated me. In the past few years, researchers have come up with a new theory for ecology along these lines, which names metabolism as its basic principle (Brown et al. 2004). The ‘metabolic theory of ecology’ posits that the way animals use energy should be considered a unifying principle of ecology. It states that metabolism provides the fundamental constraints by which ecological processes are governed. Supporters of the theory suggest that processes at all levels of organization, from single organism’s life-history strategies to population dynamics and ecosystem processes could possibly be explained in terms of constraints imposed by metabolic rate.
Barbara Fischer
Further reading
Kleiber M. (1947) Body size and metabolic rate. Physiological Reviews 27 (4): 511–541.
West GB, Brown JH, Enquist BJ (1997) A general model for the origin of allometric scaling laws in biology. Science 276: 122–6
West, G.B., Brown, J.H., & Enquist, B.J. (1999). The fourth dimension of life: Fractal geometry and allometric scaling of organisms. Science 284 (5420): 1677–9.
Brown, J. H., Gillooly, J. F., Allen, A. P., Savage, V. M., & G. B. West (2004) Toward a metabolic theory of ecology. Ecology 85: 1771–1789
Thursday, May 3, 2012
Source-sink Dynamics
bj + ij – dj – ej = (bide)j = 0
source: bj > dj and ej > ij
sink: bj < dj and ej < ij
What it means
bj, dj, ij and ej are the total number of births, deaths, immigrants and emigrants in habitat j. At equilibrium a source is a net exporter and a sink a net importer of individuals. The source-sink concept explains how populations can persist in poor habitats where extinction would be a fact if it wasn't for a net in-flow of individuals dispersing from high quality habitats.
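The bide bookkeeping can be sketched as a small classifier. A Python example (the counts are invented):

```python
def classify(b, i, d, e, tol=1e-9):
    """Classify habitat j from its bide counts, assuming the equilibrium
    b + i - d - e = 0 holds. A source has b > d (and hence e > i)."""
    assert abs(b + i - d - e) < tol, "not at bide equilibrium"
    if b > d:
        return "source"   # net exporter of individuals
    if b < d:
        return "sink"     # persists only through net immigration
    return "self-sustaining"

print(classify(b=100, i=5, d=80, e=25))   # source
print(classify(b=40, i=50, d=70, e=20))   # sink
```

At equilibrium the two inequalities are two sides of the same coin: a local birth surplus must leave as net emigration, and a local death surplus must be balanced by net immigration.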
Where does it come from?
The source-sink concept is often credited to H. Ronald Pulliam, whose seminal paper in The American Naturalist has been cited almost 2,200 times since its publication in 1988. As always, there were other key contributions leading up to this influential paper. Until the 1960s, ecological theory was non-spatial, but with the publication of several important papers and books by e.g. MacArthur, Wilson and Levins, not to mention habitat selection theory based on the ideal free distribution (Fretwell and Lucas 1970), there was a growing interest in population dynamics across heterogeneous landscapes. However, if the spatial distribution of a population conformed to the ideal free distribution, all individuals would have the same expected fitness (since none would benefit from dispersing). This was in contrast to the dispersal sink terminology suggested by Lidicker (1975) to describe that individuals may occupy poor habitats because they are excluded from better ones. Horn realised that this meant that local population size is not necessarily informative about the underlying habitat quality. The potential importance of source-sink dynamics may have led Holt (1985) to the first mathematical source-sink model, three years before the publication of Pulliam (1988). Pulliam's paper was built on the bide (birth, immigration, death, emigration) structure previously used by Cohen (1969). Thanks to the simple, yet instructive, model analysis presented by Pulliam, the message was accessible to and could be understood by a wide range of scientists.
Importance
The source-sink concept has changed the way we think about niches (realized niches may be larger than fundamental niches) and about the spatial dimension of population and community ecology (local patterns may not reflect local conditions). Identifying sources and sinks may however be challenging due to temporal variation in local conditions, as well as the existence of pseudo-sinks, where an overpopulated source (e.g. due to immigration) can have a negative growth rate. The source-sink concept has nevertheless vitalised population management by providing theory for spatial control, and it has challenged optimal monitoring, exploitation and conservation in heterogeneous landscapes. Furthermore, source-sink dynamics pose a challenge for climate envelope models that try to predict future species ranges from observed ranges. Today Pulliam's paper is cited in many disciplines outside ecology, including law, medicine, toxicology and mathematics.
Niclas Jonzén
Literature
Liu, J., Hull, V., Morzillo, A.T. & Wiens, J.A (eds.). 2011. Sources, Sinks and Sustainability. Cambridge UP, Cambridge, UK. The source-sink literature is treated in detail by this volume and includes all papers cited above.
Pulliam, H.R. 1988. Sources, sinks and population regulation. Am. Nat. 132: 652-661.
source: bj > dj and ej > ij
sink: bj < dj and ej < ij
What it means
bj, dj, ij and ej are the total number of births, deaths, immigrants and emigrants in habitat j. At equilibrium a source is a net exporter and a sink a net importer of individuals. The source-sink concept explains how populations can persist in poor habitats where extinction would be a fact if it wasn't for a net in-flow of individuals dispersing from high quality habitats.
Where does it come from?
The source-sink concept is often credited H. Ronald Pulliam, who's seminal paper published in The American Naturalist have been cited almost 2,200 times since its publication in 1988. As always, there were other key contributions leading up to this influential paper. Until the 1960's ecological theory was non-spatial, but with the publication of several important papers and books by e.g. MacArthur, Wilson, Levins and not to mention habitat selection theory based on the ideal free distribution by Fretwell and Lucas (1970) there was an growing interest in population dynamics across heterogeneous landscapes. However, if the spatial distribution of a population conformed to the ideal free distribution, all individuals would have the same expected fitness (since none would benefit from dispersing). This was in contrast to the dispersal sink terminology suggested by Lidicker (1975) to describe that individuals may occupy poor habitats because they are excluded from better ones. Horn realised that this meant that local population size is not necessarily informative about the underlying habitat quality. The potential importance of source-sink dynamics may have led Holt (1985) to the first mathematical source-sink model three years before the publication of Pulliam (1988). Pulliams's paper was built on the bide (birth, immigration, death, emigration) structure which had been previously used by Cohen (1969). Thanks to the simple, yet instructive model analysis presented by Pulliam the message was available and could be understood by a wide range of scientists.
Importance
The source-sink concept has changed the way we think about niches (realized niches may be larger than fundamental niches), and the spatial dimension of population and community ecology (local patterns may no reflect local conditions). Identifying sources and sinks may however be challenging due to temporal variation in local conditions as well the existence of pseudo-sinks, where an overpopulated source (e.g. due to immigration) can have a negative growth rate. The source-sink concept has nevertheless vitalised population management by providing theory for spatial control and challenged optimal monitoring, exploitation and conservation in heterogeneous landscapes. Furthermore, source-sink dynamics provide a challenge for climate envelope models that try to predict future species ranges based on observed ranges. Today Pulliam's paper is cited many disciplines outside ecology, including law, medicine, toxicology and mathematics.
Niclas Jonzén
Literature
Liu, J., Hull, V., Morzillo, A.T. & Wiens, J.A. (eds.). 2011. Sources, Sinks and Sustainability. Cambridge UP, Cambridge, UK. The source-sink literature is treated in detail in this volume, which includes all papers cited above.
Pulliam, H.R. 1988. Sources, sinks and population regulation. Am. Nat. 132: 652-661.
Tuesday, February 28, 2012
The Moran effect
ρp = ρe
What it means
It states that the correlation between the densities of two separate, conspecific populations (ρp) is equal to the correlation between their respective environments (ρe). This offers a simple explanation for the frequently observed synchrony in the dynamics of spatially separated populations: population densities vary in synchrony because local environments are correlated across space. ‘Environment’ is here interpreted in the broad sense; it can be abiotic (e.g. weather) or biotic (e.g. predation pressure).
Where does it come from
Moran (1953), in a paper on the dynamics of the highly synchronized Canadian lynx populations, stated the theorem without really proving it (”It can easily be shown mathematically that...”). It is based on a set of simplifying assumptions:
i) Each local population is driven by linear, stochastic dynamics. A simple example is a first order auto-regressive process (AR(1)): xt = axt-1 + εt, where a is a constant, xt is (possibly log-transformed) population density at time t (minus its long term mean) and εt is the local environment at time t.
ii) All local populations are driven by exactly the same dynamic equation.
iii) All environmental fluctuations are either temporally uncorrelated ('white noise') or share the same temporal structure (they could be linear, auto-regressive processes themselves).
iv) There is no dispersal between populations.
The theorem received little attention until Royama (1984, 1992) brought it up and coined its name.
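Under the ideal assumptions above, the theorem can be verified numerically. The sketch below (with arbitrary parameter values) drives two identical AR(1) populations with environmental noise of cross-correlation ρe = 0.5 and checks that the population cross-correlation comes out the same:

```python
# Two conspecific populations with identical AR(1) dynamics and no
# dispersal; their environments are correlated at rho_e (values arbitrary).
import numpy as np

rng = np.random.default_rng(1)
a, rho_e, T = 0.6, 0.5, 200_000

# two white-noise environments with cross-correlation rho_e
e1 = rng.standard_normal(T)
e2 = rho_e * e1 + np.sqrt(1 - rho_e**2) * rng.standard_normal(T)

x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + e1[t]       # x_t = a x_{t-1} + eps_t
    y[t] = a * y[t - 1] + e2[t]

rho_p = np.corrcoef(x, y)[0, 1]
print(round(rho_p, 2))                # ≈ rho_e = 0.5
```

Breaking any assumption (different a for the two populations, nonlinear dynamics) typically makes ρp fall below ρe, as noted in the next section.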
Applicability and importance
More realistic assumptions (non-linear, unequal dynamics) lead to lower population synchrony than Moran's prediction (e.g. Ranta et al. 2006). For natural populations one can thus not assume that the Moran effect is as strong as in the ideal case. Its true power lies in its generality: any structured linear model yields the same result. It is thus applicable, at least approximately, to virtually all natural populations, offering an always-present explanation of synchrony. As an example, many cyclic populations are highly synchronized. It is therefore tempting to look for a single mechanism causing both the cycles and the synchrony. The Moran effect readily explains the synchrony; other explanations (such as predator-prey interactions) can be sought for the cyclicity (Royama 1992).
The major alternative explanations to population synchrony that have been put forward are dispersal between populations and nomadic predators. Especially the role of dispersal has been analysed in some detail, showing a strong dependence on the character of the local dynamics. In any case, the Moran effect is always present, it can never be ignored.
From a conservation point of view, population synchrony decreases the viability of spatially structured populations. In short, it increases the probability that several local populations go extinct simultaneously. This stands in contrast to the mixed blessing of dispersal, which increases synchrony but at the same time makes recolonization of empty habitat patches possible.
Jörgen Ripa
Further reading
Moran, P. A. P. 1953. The statistical analysis of the Canadian lynx cycle. II. Synchronization and meteorology. Australian Journal of Zoology 1: 291-298.
Royama, T. 1984. Population dynamics of the spruce budworm Choristoneura fumiferana. Ecological Monographs 54(4): 429-462.
Royama, T. 1992. Analytical population dynamics. Chapman & Hall, London
Palmqvist, E. and P. Lundberg 1998. Population extinctions in correlated environments. Oikos 83: 359-367.
Ripa, J. 2000. Analysing the Moran effect and dispersal: their significance and interaction in synchronous population dynamics. Oikos 89: 175-187.
Ranta, E., P. Lundberg & V. Kaitala. 2006. Ecology of Populations. Cambridge UP.
Sunday, January 1, 2012
Logistic growth
N is population density, r is the intrinsic rate of increase (i.e., the maximum per capita growth rate), K is the so-called carrying capacity (i.e., the maximum sustainable population) and t is time. A population following the logistic growth equation is regulated such that the per capita growth rate ((dN/dt)/N) declines linearly with density.
Hence, when there are very few individuals (N << K) the per capita growth rate is close to r, but it decreases as population density increases. When the density reaches the carrying capacity (N = K) the growth rate equals zero and the population is at a globally stable equilibrium (i.e., the population ends up at N = K independent of the starting value, except for N(0) = 0). If the population density happens to exceed the carrying capacity (N > K), e.g. due to immigration, the growth rate becomes negative until the density has declined back to the carrying capacity.
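The equation has the exact solution N(t) = K / (1 + (K/N0 - 1)e^(-rt)). A short numerical sketch (with illustrative parameter values) compares a simple Euler integration of dN/dt = rN(1 - N/K) against this solution, and confirms that the population settles at K:

```python
# Euler integration of the logistic equation vs. its exact solution
# (r, K and N0 are illustrative values).
import math

r, K, N0 = 0.5, 100.0, 1.0
t, steps = 10.0, 10_000
dt = t / steps

exact = K / (1 + (K / N0 - 1) * math.exp(-r * t))   # N(t) at t = 10

N = N0
for _ in range(steps):                # integrate to t = 10
    N += r * N * (1 - N / K) * dt

N2 = N
for _ in range(9 * steps):            # continue integrating to t = 100
    N2 += r * N2 * (1 - N2 / K) * dt

print(round(exact, 2), round(N, 2), round(N2, 2))
```

By t = 100 the trajectory has settled at the carrying capacity, and the same endpoint is reached from any positive starting density.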
Where does it come from?
The logistic growth equation was originally formulated by Pierre-François Verhulst. The Verhulst equation was published after Verhulst had read Thomas Malthus' An Essay on the Principle of Population. Verhulst derived his logistic equation to describe the self-limiting growth of a biological population.
Importance
The logistic growth equation is a common model of single-species population growth when resources are limited: the rate of increase is proportional both to the existing population and to the amount of available resources, all else being equal. Owing to the linear relationship between the per capita growth rate and population density, the logistic model is the simplest model of population regulation. It is also one of very few nonlinear differential equations in ecology with an exact solution. It is used to model single-species populations of great variety, e.g. bacteria, yeast, fish, mammals and plants. The logistic growth model has also been extended in various ways and can be an important building block when formulating multi-species models.
Anders Wikström
Literature
Case, T.J. 2000. An illustrated guide to theoretical ecology. Oxford University Press
Mangel, M. 2006. The Theoretical Biologist's Toolbox. Cambridge University Press
Turchin, P. 2003. Complex population dynamics: a theoretical/empirical synthesis. Princeton University Press
Tuesday, June 28, 2011
The Canonical Equation of Adaptive Dynamics
What it means
The equation describes how the value of an ecological trait (z) evolves depending on the per capita mutation rate (μ), the variance of mutation effects (σ2), the population size at equilibrium (N* ) and the selection gradient (the last factor). W(z', z) is called invasion fitness and is measured as the per capita growth rate of a morph with trait value z' in an environment where a morph with a trait value z dominates (the resident trait). Note that the derivative in the last factor, i.e. the slope of the invasion fitness, is taken with respect to the mutant trait z' and evaluated at the resident trait value z.
Implications and importance
The equation is derived for mutation-limited evolution in large, monomorphic, asexual populations (Dieckmann and Law 1996). Changes in the trait value are assumed to be small and occur as successful mutant populations establish and replace the resident population. It is biologically straightforward to see why the different factors in the equation affect the rate of evolution. To start with, the product of μ and N* dictates how often mutations arise in the population. The factor 1/2 appears because under directional selection half of the mutations in a one-dimensional trait are bound to go in the 'wrong' direction. Higher variance in the mutation effects (σ2) increases the rate of evolution by making the mutational steps longer. The slope of the invasion fitness around the resident trait value indicates how much fitness increases (or decreases) with a small mutational step. Evolution will be faster with a steeper slope, since the likelihood of a successful invasion increases when the relative fitness advantage of the mutant over the resident is high. This slope also determines the direction of evolution, such that z evolves towards higher values when the slope is positive and vice versa. Evolution comes to a halt when the slope of the invasion fitness is zero.
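As a concrete illustration (a standard textbook setup, not taken from the references above), consider logistic growth with a Gaussian carrying-capacity function K(z) peaking at z = 0. With invasion fitness W(z', z) = r(1 - K(z)/K(z')), the selection gradient at z' = z works out to -rz/σK2, and integrating the canonical equation moves the trait to the peak:

```python
# Toy adaptive-dynamics example (standard textbook setup, illustrative
# parameters): Gaussian carrying capacity K(z), resident equilibrium
# N*(z) = K(z), and selection gradient -r z / sigma_K**2 at z' = z.
import math

r, K0, sigma_K = 1.0, 1000.0, 1.0
mu, sigma2 = 1e-3, 1e-2               # mutation rate and mutational variance

def K(z):                             # Gaussian resource peak at z = 0
    return K0 * math.exp(-z**2 / (2 * sigma_K**2))

z, dt = 1.5, 0.1                      # start away from the optimum
for _ in range(20_000):
    gradient = -r * z / sigma_K**2    # slope of invasion fitness at z' = z
    z += 0.5 * mu * sigma2 * K(z) * gradient * dt

print(round(z, 3))                    # → 0.0 (the trait climbs to the K peak)
```

Note how the rate of approach depends on K(z) itself: a sparse resident population produces fewer mutants and therefore evolves more slowly, exactly as the equation says.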
The canonical equation of adaptive dynamics, and generalisations of it, is especially useful for dealing with frequency-dependent selection and situations where the ecological feedback environment is affected by the evolutionary change. It has strong connections to evolutionary game theory and can for example be used to study gradual evolution to an Evolutionary Stable Strategy, ESS, or to evolutionary branching points (Geritz et al 1998, McGill and Brown 2007). Applications include food web evolution, speciation, fisheries management and the evolution of cooperation.
The canonical equation is related to other approaches for describing gradual evolution, such as quantitative genetics or strategy dynamics. The approaches differ mainly in their assumptions about genetic variation (e.g. mutation-limited evolution vs. standing genetic variation) and in whether the ecological feedback environment affects evolution (a changing vs. a fixed adaptive landscape).
Jacob Johansson
Further reading:
Dieckmann U. and Law R. 1996. The dynamical theory of coevolution: A derivation from stochastic ecological processes. Journal of Mathematical Biology 34: 579–612
Geritz, S. A. H., Kisdi, É., Meszéna, G. and Metz, J. A. J. 1998. Evolutionarily singular strategies and the adaptive growth and branching of the evolutionary tree. Evolutionary Ecology 12: 35-57.
Champagnat, N., Ferrière, R., Ben Arous, G. (2001) The canonical equation of adaptive dynamics: a mathematical view. Selection 2, 73-83 .
Waxman, D. and Gavrilets, S. 2005. 20 Questions on Adaptive Dynamics. Journal of Evolutionary Biology 18: 1139-1154
McGill, B., and J. Brown. 2007. Evolutionary game theory and adaptive dynamics of continuous traits. Annual Review of Ecology, Evolution and Systematics 38: 403-435.
Species-Area relationship
S = cAz
What it means
The equation states the relationship between an area (A) and the expected number of extant species (S) within that area. The constants c and z define the shape of the nonlinear relationship (Rosenzweig 2000). With an increasing area the expected number of species inhabiting that area is also increasing at a rate mainly dictated by z.
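On a log-log scale the power law becomes a straight line, log S = log c + z log A, which is how z is usually estimated from survey data. A small sketch (the value of c is arbitrary; z = 0.27 is Preston's theoretical value discussed below):

```python
# S = c A^z with illustrative constants; z = 0.27 is Preston's
# theoretically derived value.
import numpy as np

c, z = 10.0, 0.27
areas = np.array([1.0, 10.0, 100.0, 1000.0])
S = c * areas**z
print(np.round(S, 1))                       # species number rises with area

# z is recovered as the slope of the log-log regression
slope, intercept = np.polyfit(np.log(areas), np.log(S), 1)
print(round(slope, 2))                      # → 0.27
```

With z = 0.27, a tenfold increase in area yields roughly a 1.9-fold increase in species number, which is why the curve rises steeply at first and then flattens.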
Where does it come from?
Originally, the relationship presented above was derived theoretically from a species-abundance framework (Preston 1962). Given the assumption of a lognormal distribution of species abundances in a community, Preston derived the equation and calculated the z-value to be 0.27. This provided an empirically testable theory of biodiversity in island biogeography as well as mainland regions of different size.
Explanation and implications
The species-area relationship (mainly described by the exponent z above) can be explained by fundamental eco-evolutionary processes such as migration, speciation and extinction, which ultimately are driven by mechanisms such as niche availability, density dependence and species ranges (McGlade 1999). Although all mechanisms are possibly ubiquitous, some may be more important under certain conditions than others. For example, large geographical areas include more diverse habitats, and hence more niches, facilitating high species diversity. In addition, the degree of migration to and from an island, dictated by island area and isolation, has been identified as an important factor affecting the relationship.
In mainland areas with similar conditions the relationship can be explained by population size and geographical range of the species (McGlade 1999). As geographical area is decreased, population sizes and species ranges also decrease. This may give rise to an increase in extinction rate. Conversely, increasing population size and range facilitate speciation, as large populations with large ranges often contain large genetic variation and are more often split into allopatric units.
It has been shown that the coefficient c often depends on the taxon and biogeographical region, whereas z is more stable and has been estimated to fall between 0.20-0.35 for mainland biogeography and 0.12-0.17 for island biogeography (MacArthur & Wilson 1967). These estimates often fall below the theoretical value derived by Preston. Lower z-values than predicted can, for example, indicate high immigration of transient species from surrounding areas. Conversely, large z-values may indicate large islands or geographical areas which include several biomes, whose species can evolve as independent assemblages. The species-area relationship has often been used in conservation biology (Krebs 1999), but not always without problems (see e.g. He & Hubbell 2011).
Mikael Pontarp
Further reading
He, F. & Hubbell, S.P. (2011) Species-area relationships always overestimate extinction rates from habitat loss. Nature, 473.
Krebs, C.J. (1999) Ecological Methodology. Addison-Wesley Educational Publishers, Menlo Park.
MacArthur, R.H. & Wilson, E.O. (1967) The Theory of Island Biogeography. Princeton University Press, Princeton.
McGlade, J. (1999) Advanced Ecological Theory. Blackwell Science, London.
Preston, F.W. (1962) The canonical distribution of commonness and rarity. Ecology, 43.
Rosenzweig, M.L. (2000) Species Diversity in Space and Time. Cambridge University Press, Cambridge.
Monday, April 18, 2011
The Marginal Value Theorem
What it means:
Foraging in a patch (i) should be abandoned when the rate of energy acquisition in that patch (the left-hand side) equals the average intake rate including travelling time (the right-hand side, I*). E is energy gain and h is time spent in the patch. The assumption is that all the animal does is search for and handle food.
Implications and importance:
The theorem is based on the fact that resource acquisition often has diminishing returns and that it pays to leave an activity before the patch is depleted if there are alternative patches to exploit. Charnov (1976) and Parker & Stuart (1976) were the first to formalize this idea in evolutionary ecology.
One example solution to the Marginal Value Theorem (MVT) is
where hi* is the optimal patch residence time in patch i, si is the resource level in patch i, sa is the average resource level across patches, k is a parameter determining the initial slope of the gain function in a patch, and ts is the average travelling time between any two patches in the environment (Lundberg & Åström 1990). If patches are close together, so that little time is spent travelling, patch residence time decreases. The same is true if the average patch (i.e., the environment) is resource rich (high sa). The richer the focal patch (high si), the longer the patch residence time should be.
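The optimum can also be found numerically for any concrete gain function. The sketch below uses a common textbook choice, g(h) = Emax(1 - e^(-kh)) (all parameter values arbitrary), and shows that longer travel times select for longer patch residence times:

```python
# MVT with the saturating gain function g(h) = E_max (1 - exp(-k h));
# E_max, k and the travel times t_s are arbitrary illustrative values.
import numpy as np

E_max, k = 10.0, 1.0

def best_residence(t_s, h=np.linspace(1e-4, 20, 200_001)):
    rate = E_max * (1 - np.exp(-k * h)) / (h + t_s)   # long-term gain rate
    return h[np.argmax(rate)]                         # h* maximizing it

h_near = best_residence(t_s=0.5)     # patches close together
h_far = best_residence(t_s=5.0)      # long travel between patches
print(round(h_near, 2), round(h_far, 2))
```

Longer travel times lower the achievable average rate I*, so the forager should stay longer in each patch before moving on, in line with the verbal argument above.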
The MVT has generated innumerable predictions for resource use in patchy environments (e.g., bees visiting flowers, browsers feeding on trees, mice exploiting seeds). Imagine yourself picking apples in an orchard or having one or many pork chops to eat when hungry. How many apples do you leave behind before changing trees if they are close together and full of apples, as opposed to the reverse?
Per Lundberg
Further reading:
Charnov, E. L. 1976. Optimal foraging, the marginal value theorem. Theor. Pop. Biol. 9: 129-136
Lundberg, P. & Åström, M. 1990. Functional response of optimally foraging herbivores. J. Theor. Biol. 144: 367-377.
Parker, G. A. & Stuart, R. A. 1976. Animal behaviour as a strategy optimizer: evolution of resource assessment strategies and optimal emigration thresholds. Am. Nat. 110: 1055-1076.
Stevens, D. W., Brown, J. S. & Ydenberg, R. C. (eds) 2007. Foraging. Chicago Univ. Press.
Friday, March 11, 2011
Exponential growth
What it means
N is population density and t is time. This is the simplest model of population growth and assumes that the per capita growth rate, i.e., the difference between per capita birth and death rates, is a constant, r, often referred to as the intrinsic (per capita) growth rate. The solution of the differential equation above is
N(t) = N(0)e^(rt),
where N(0) is the population density at time zero. If r > 0 the population will grow without bound, whereas if r < 0 it will decline towards zero.
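A quick numerical check (with arbitrary parameter values) that a direct Euler integration of dN/dt = rN reproduces the closed-form solution N(t) = N(0)e^(rt):

```python
# Euler integration of dN/dt = rN vs. the closed-form exponential
# solution (r, N0 and t are arbitrary illustrative values).
import math

r, N0, t = 0.1, 50.0, 20.0
exact = N0 * math.exp(r * t)          # N(t) = N(0) e^(rt)

steps = 20_000
dt = t / steps
N = N0
for _ in range(steps):
    N += r * N * dt                   # Euler step

print(round(exact, 1), round(N, 1))   # the two agree closely
```

With r > 0 both trajectories grow without bound; flipping the sign of r makes them decay towards zero instead.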
Implications and importance
The 18th century reverend Thomas Malthus is often cited as the founder of the exponential growth model. This model is sometimes referred to as the exponential law (Turchin 2003); it certainly has similarities with the law of inertia in physics, and it is generally considered to be the first principle of population dynamics (e.g. Ginzburg 1986; Berryman 1999). Since it describes the dynamics of a population in a constant environment with no forces acting upon it, it efficiently serves as a starting point for more detailed models including e.g. structure (age, stage, space, etc), interactions and stochasticity. The stochastic version of the exponential growth model, which is a random walk on the log scale, is sometimes used in conservation biology for estimating extinction risks of populations at low density.
Further reading
Berryman, A. A. 1999. Principles of population dynamics and their applications. Stanley Thornes Publishers, Cheltenham, UK.
Ginzburg, L. R. 1986. The theory of population dynamics. I. Back to first principles. Journal of Theoretical Biology 122: 385-399.
Turchin, P. 2003. Complex population dynamics. A theoretical/empirical synthesis. Princeton University Press, Princeton, NJ.
Thursday, February 10, 2011
The Fear Equation
where μ is (perceived) predation risk, F is current fitness, and ∂F / ∂e is the marginal fitness gain from acquiring more energy from foraging.
What does it mean? Technically speaking, the equation represents the marginal rate of substitution of safety for food. It says that when the environment is risky (high rates of predation), when current fitness is high (e.g., if the animal is well fed), and when the marginal gain of more food is low, then one should be very risk averse, i.e., feel “fear”.
Where does it come from? It originates from Joel Brown’s seminal paper (Brown 1988) formulating the relationship between patch use, foraging rate and predation risk.
Importance: This idea has subsequently been much explored in studies of foraging ecology, habitat selection, and the mechanisms of coexistence between competitors, and between predators and their prey. It elegantly shows how different fitness “currencies” (here, food and safety) can be translated into each other. This trick is often necessary when putting together reliable and realistic fitness functions for many problems in evolutionary ecology. It also determines the “landscape of fear” that prey populations experience, and it can be shown that this effect on the population can be greater than the actual killing of prey individuals. It also nicely explains the “Stalingrad effect”, i.e., the fearless behavior of the inhabitants of the city during the WWII battle despite severe risk: they had very low current “fitness” and an extremely high marginal “fitness” gain from some food. Think about similar situations yourselves!
Per Lundberg
Literature:
Brown, J.S. 1988. Patch use as an indicator of habitat preference, predation risk, and competition. Behav. Ecol. Sociobiol. 22: 37-47.
Brown, J. S. 1992. Patch use under predation risk: I. Models and predictions. Ann. Zool. Fennici 29:301-309.
Brown, J. S. & Kotler, B. P. 2004. Hazardous duty pay and the foraging cost of predation. Ecol. Lett. 7: 999-1014.
Friday, January 14, 2011
Fisher's Fundamental Theorem on Natural Selection
"The rate of increase in fitness of any organism at any time is equal to its genetic variance in fitness at that time." (Fisher 1930)
What it means
In brief, simplified terms, it means that natural selection will in all organisms tend to increase fitness. Evolution is in this simplified sense an optimizing process. Fitness, defined as per capita growth rate, is what is being optimized.
In more precise terms the statement needs a fair amount of qualification, almost word by word, to be as general as claimed. Fisher was by no means clear about the qualifications - they are mostly due to later interpretations (Price 1972).
”increase in fitness” - is the increase in the population mean additive genetic values of fitness. Further, it is the additive genetic values at the time of selection that counts.
”genetic variance” - the additive genetic variance, i.e. the variance in additive effects in the population.
Fitness cannot increase forever, and Fisher was perfectly aware of this. However, natural selection will always tend to increase fitness, while changes in the environment (such as an increased population density) can decrease fitness.
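In its simplest setting the theorem can be checked by direct calculation. Consider two asexual clones with fixed per capita growth rates r1 and r2 (clonal variation standing in for additive genetic variance; all numbers below are arbitrary). The rate of increase of mean fitness then equals the variance in fitness, p(1-p)(r1-r2)2:

```python
# Two clones with fixed fitnesses r1 > r2; p is the frequency of clone 1
# (all numbers arbitrary). Over one short time step dt, selection changes
# p by p(1-p)(r1-r2)dt; the resulting rate of change of mean fitness
# should equal the variance in fitness.
r1, r2, p = 0.5, 0.3, 0.2
dt = 1e-5

p_next = p + p * (1 - p) * (r1 - r2) * dt       # selection on frequency
rbar_now = p * r1 + (1 - p) * r2                # mean fitness before
rbar_next = p_next * r1 + (1 - p_next) * r2     # mean fitness after

rate = (rbar_next - rbar_now) / dt              # d(mean fitness)/dt
variance = p * (1 - p) * (r1 - r2) ** 2         # variance in fitness
print(round(rate, 6), round(variance, 6))       # both ≈ 0.0064
```

When the variance is exhausted (p = 0 or p = 1), the rate of increase is zero, matching the intuition that selection needs variation to act on.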
Edwards (1994) suggested a revised, modernized version of the theorem:
The rate of increase in the mean fitness of any organism at any time ascribable to natural selection acting through changes in gene frequencies is exactly equal to its genic variance in fitness at that time.
For further details see Price (1972) and Grafen (2003).
Implications and importance
The theorem is a key link between the mechanics of Mendelian genetics and evolution through natural selection, and thus a keystone of the modern evolutionary synthesis.
It has been viewed as a ’license’ for naturalists to think of organisms as optimizing agents, pointing out exactly what is being optimized (Grafen 2003). (Note: the process of evolution by natural selection is by no means dependent on genetics as we know it; evolution can work with many types of heritability. In this sense, organic life on Earth is but an example.)
The theorem was for a long time disregarded as only applicable to special, simplified cases, but was later resurrected to its general status (Price 1972). This long delay can for the most part be explained by the obscurity of Fisher’s writing and his unwillingness to express the theorem in more formal mathematics.
Further reading
Edwards, A. W. F. (1994) The fundamental theorem of natural selection. Biol. Rev. 69: 443-474
Fisher, R. A. (1930). The Genetical Theory of Natural Selection. Oxford , Oxford University Press.
Grafen, A. (2003). Fisher the evolutionary biologist. The Statistician 52(3): 319-329
Price, G. R. (1972) Fisher's "fundamental theorem" made clear. Ann. Hum. Genet., 36: 129-140