“Network Structure and Biodiversity Loss in Food Webs: Robustness Increases with Connectance” (Dunne, Williams, and Martinez 2002) is a paper investigating the robustness of food webs using network analysis. It looks at 16 different food webs from 15 different places. Each web was tested to see how the removal (extinction) of certain species would affect the overall food web. A food web was said to collapse once more than 50% of its species had become extinct. Robustness is measured in the paper as the fraction of species that must be removed to collapse the food web, out of the total number of trophic species in the web. Several other properties of the food webs were calculated to see whether they correspond to robustness, including connectance, species richness, omnivory, and the number of links per species.
Four extinction simulations were run on each food web. In the first, the most-connected species were removed; in the second, the most-connected species were removed excluding primary producers (grasses and the like); in the third, species were removed at random; and in the fourth, the least-connected species were removed. These simulations were conducted to see whether any new insights could be gained about a food web's robustness, and whether there were any particular species significantly more important than the others (i.e., a species whose removal would cause a mass secondary extinction).
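As a concrete illustration, here is a minimal sketch (my own, not the authors' code) of the most-connected removal sequence, an R50-style robustness measure, and the connectance value discussed below. The toy web, species names, edge direction (consumer to prey), and function names are all illustrative choices.

```python
# Minimal sketch (not the authors' code) of a degree-based removal simulation.
# Edges point from consumer to prey; species names are made up.
import networkx as nx

def connectance(web: nx.DiGraph) -> float:
    """Directed connectance C = L / S^2."""
    s = web.number_of_nodes()
    return web.number_of_edges() / s ** 2

def robustness_r50(web: nx.DiGraph, removal_order) -> float:
    """Fraction of primary removals needed before >=50% of species are lost.

    A consumer goes secondarily extinct once it has no prey left
    (bottom-up effects only).
    """
    g = web.copy()
    total = g.number_of_nodes()
    removed = 0
    for species in removal_order:
        if species not in g:          # already lost to a secondary extinction
            continue
        g.remove_node(species)
        removed += 1
        starved = True
        while starved:                # cascade secondary extinctions
            starved = [n for n in g
                       if g.out_degree(n) == 0 and web.out_degree(n) > 0]
            g.remove_nodes_from(starved)
        if total - g.number_of_nodes() >= 0.5 * total:
            return removed / total
    return 1.0                        # 50% loss never reached

web = nx.DiGraph([("fox", "rabbit"), ("fox", "mouse"), ("owl", "mouse"),
                  ("rabbit", "grass"), ("mouse", "grass")])
most_connected_first = sorted(web, key=lambda n: web.degree(n), reverse=True)
print("C =", connectance(web), " R50 =", robustness_r50(web, most_connected_first))
```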
Three of the 16 food webs displayed power-law degree distributions and small-world properties, but being small world did not by itself produce a markedly different response to the extinction sequences. However, the small-world webs did tend to be more severely affected by extinctions because of their relatively low connectance. Of all the properties examined, connectance appears to be the most influential in determining a food web's robustness. The higher the connectance, the more robust the web, which makes sense in ecological terms: the more prey options a predator has, the less likely it is to go extinct from the loss of any one prey species.
While this network analysis of food webs makes sense in mathematical terms, I would stress the need to review the assumptions it makes before judging its relevance to the real world. Since the network analysis does not take into account the adaptability of species, it may be of limited use for testing robustness. A species might usually eat only one prey if it is available, but if that prey went extinct it could switch to eating another species. Food webs will also likely never capture all links between species, so there is an artificial cut-off point that could render the analysis unhelpful. Links may also be false due to human error, or there could be a particular prey species that is essential in a predator's diet because it provides some nutrient the predator cannot get any other way; even though the predator has other food sources, it would still go extinct if that species were removed from the web. However, if scientists could provide field evidence that connectance matters outside of these models, this kind of analysis could allow us to identify ecosystems that are particularly vulnerable to mass extinctions.
5 comments:
Hi Adri,
Thanks for reading our paper (this is Jennifer Dunne, the first author). Your summary is quite good, and you have put your finger on some important issues. It is not at all clear that these particular "degree-based" (i.e., based on how many links a species has) extinctions are relevant to the real world. One goal of that particular paper was to see what the response of systems might be to the extremes (or near-extremes) of possible extinction sequences.
Some dynamical modeling work has suggested that this type of network structure analysis gives a minimum number of likely extinctions for a given primary loss (our analysis only accounts for "bottom-up" effects of consumers losing resources, not top-down effects of the loss of consumers on their resources and on down through the web).
Regarding your comments about prey-switching and incompleteness of data-- these are both important things to account for. In fact, we try to work with datasets that integrate data on feeding links over several seasons or years, and we explicitly do not exclude rare links or links that make up a small fraction of a consumer's diet. We do this partly to account for the prey-switching effect just through network structure-- it may be that out of 10 prey that a species has in one of our datasets, they preferentially feed on just 5 of them most of the time. Some people have argued that we should exclude the other five from our data because they are "unimportant". We don't exclude them because in times of stress or trouble, if the preferred prey goes away, the consumer still has 5 other taxa to feed on. Thus, in our datasets, if all resource species disappear, the consumer is truly left without any choices.
It would be great to be able to do field experiments to explicitly verify some of these things, but doing the right types of experiments for this is very difficult, if not impossible, in natural complex systems. However, there are scientists who work with microcosms (test tube ecosystems) who could potentially test the importance of connectance and species richness.
Regarding data errors, there will always be data errors. The important question is how robust one's results are to likely errors. Other studies of ours and others' suggest that network structure analyses are relatively robust to the incorrect attribution of some fraction of links and nodes (probably at least 10%, possibly more, unless the errors are systematic in a particular direction).
By the way-- we don't actually claim that ecosystems "collapse" at 50% loss of species. Instead, this was just a convenient way to define a single number for "robustness"-- while we focused on R50, we could have focused on R25 or R75! In any case, the loss of 50% of species from a system is pretty extreme, I think everyone would agree.
If you email me at jdunne@santafe.edu, I can send you a couple of related papers.
cheers, Jennifer Dunne
As far as modeling food webs with networks is concerned, it may make sense to weight the edges depending on an animal's preference for specific prey, or the "dietary weight" that a food source carries. One question I would have is whether an animal can be sustained for extended periods of time on its "survival" diet. For example, an animal may be able to switch to a secondary food source during certain stressful times of the year to make up for a shortage of its primary diet, but is this secondary diet sufficient for long-term survival of the animal in the case of extinction or relocation of its primary food source?
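To make this concrete, here is a toy sketch of what weighted links might look like; the species, dietary weights, and survival threshold are all made up, and a real version would need field data to set them.

```python
# Toy weighted web: each consumer->prey link carries the fraction of the
# consumer's diet that the prey supplies. All numbers here are made up.
import networkx as nx

web = nx.DiGraph()
web.add_weighted_edges_from([
    ("fox", "rabbit", 0.7),   # preferred prey
    ("fox", "mouse",  0.3),   # fallback, "survival" prey
    ("owl", "mouse",  1.0),
])

# One crude way to pose the question in network terms: after losing a prey
# species, does the consumer keep "enough" diet weight to persist?
SURVIVAL_THRESHOLD = 0.5      # hypothetical cutoff; would need field data
web.remove_node("rabbit")
remaining = sum(w for _, _, w in web.edges("fox", data="weight"))
print("fox keeps enough of its diet:", remaining >= SURVIVAL_THRESHOLD)
```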
It seems to me that one of the underlying issues raised by Adrianna's post and the two responses is how to determine what constitutes a link in an empirical network study. At first blush, it seems like this should be a straightforward, obvious issue: a link is a link if it's there, and not if it isn't. But the real world is messier than this, and there are usually subjective choices that need to be made when dealing with large data sets. For example, Jennifer talks about integrating data over several seasons or years and then not explicitly excluding rare links. Her argument for not doing so seems convincing to me. But one could make a reasonable argument the other way, too.
Next week, Sam and I will talk in class about an empirical network study we did last spring. I was surprised by the amount of "data massaging" that we needed to do in order to make analysis tractable. I feel quite comfortable with the way we did things, but there's certainly plenty of room for critique.
I think the experience of working with some messy data is a good one. It's a great way to learn about some of the strengths and weaknesses of science. I've not been involved in a lot of empirical studies in my research life, but the few times I have, they've always turned out to be trickier and more interesting (and fun) than I had anticipated.
Cecily has a good point about weighted food webs, but then all kinds of complications arise from this, as Dave mentions. For instance, an animal may feed substantially on different prey at different times, so the strength of a link might depend on the time of year an ecosystem is observed. Even if you looked at a predator's diet over a whole year, it might be that a prey species making up only a small portion of the predator's overall diet is particularly important during months, like winter, when all other food sources are scarce. In that case, if the species available during the winter became extinct, it is possible that the predator would go extinct as well, even though it had links to other prey. However, since it is probably not possible to take every factor affecting a food web into account in a network analysis, it seems like we should just try to understand the limitations of the information we gain from this kind of analysis.
Yes, there are certainly details that are getting left out of this model. But this is true of any model, of course. I think one of the things that modeling is good for is that it gives us a way to test which details might matter and which might not for a particular phenomenon.
For example, one could take a basic empirical food web and then perturb it in various ways. This could be done by adding or deleting nodes at random. Or, if it's a weighted network, one could also add a small amount of noise to the weights. One could also assemble the data in different ways. E.g., if there are some weak links that you aren't sure whether to include, try both including them and excluding them, and see if it makes a difference. One then calculates the properties one is interested in---robustness or community structure or whatever---for the original and perturbed networks and sees what happens.
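Here is a rough sketch of one such perturbation test; the toy web, the fraction of links deleted, and the choice of connectance as the test property are all arbitrary choices of mine.

```python
# Rough sketch of a perturbation test: randomly delete a fraction of links
# and see how stable a property of interest (here, connectance) is.
import random
import networkx as nx

def connectance(web: nx.DiGraph) -> float:
    """Directed connectance C = L / S^2."""
    s = web.number_of_nodes()
    return web.number_of_edges() / s ** 2

def perturb(web: nx.DiGraph, frac: float, seed: int) -> nx.DiGraph:
    """Return a copy of the web with roughly `frac` of its links deleted."""
    rng = random.Random(seed)
    g = web.copy()
    drop = rng.sample(list(g.edges()), int(frac * g.number_of_edges()))
    g.remove_edges_from(drop)
    return g

web = nx.DiGraph([("fox", "rabbit"), ("fox", "mouse"), ("owl", "mouse"),
                  ("rabbit", "grass"), ("mouse", "grass")])
perturbed = [connectance(perturb(web, frac=0.2, seed=k)) for k in range(100)]
print("original C:", connectance(web),
      " mean perturbed C:", sum(perturbed) / len(perturbed))
```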
If it turns out that these perturbations don't make a big difference, then perhaps one has discovered some general feature of the network. Models of this sort aren't always directly verifiable by experiment, but they aren't intended to be. Rather, the goal is to give some general intuition about very complex phenomena. Insights gained via this approach are often qualitative.
Also, it seems to me that these models are often designed to make global statements, which might be misleading when applied locally. (This is potentially true of any statistical approach; it isn't unique to network models.) For example, looking at food webs as Dunne et al. have done might give a general picture of their robustness to random extinctions. But it probably wouldn't be the best way to think about how to save an endangered bird or frog species. Similarly, in data mining for communities or clusters in graphs, one would expect the algorithm to misclassify a bunch of individual nodes, even if it gets the overall community structure more or less correct.