Dr. Rob Dillon, Coordinator

Monday, December 12, 2011

Toward the Scientific Ranking of Conservation Status - Part I

Last month I got an email from a colleague in the South Carolina Department of Natural Resources, asking for my help updating the state wildlife conservation plan. I told him I'd be willing to pitch in with the 2011-12 effort, just as I helped in 2004-05 [1]. But I continue to harbor deep misgivings about the entire process.

In Part I of the essay that follows, we debride a nasty sore on the left butt cheek of American environmental science - the unscientific (possibly pseudoscientific) method by which we prioritize our biota for conservation purposes. And in Part II of this series, to follow next month, we will begin the process of suturing that wound back up.

Like all other states with which I am familiar, South Carolina’s wildlife plan relies upon a subjective system of conservation status ranks, as follows:
S1 - Critically imperiled state-wide because of extreme rarity or because of some factor(s) making it especially vulnerable to extirpation.
S2 - Imperiled state-wide because of rarity or factor(s) making it vulnerable.
S3 - Rare or uncommon in state.
S4 - Apparently secure in state.
S5 - Demonstrably secure in state.
The spreadsheet my DNR colleague sent me for my input [2] had a column for species number (N=32 freshwater snails in South Carolina), scientific name, common name, legal status, conservation status rank (as above), and an (astonishing!) 19 additional data columns, about which more later. My colleague asked me to complete this massive 32x24 matrix by January 15, indicating as he did that the results would (ultimately) be forwarded to the nonprofit organization NatureServe.

The origin and evolution of the conservation ranking system in general currency around the United States is shrouded in mystery. According to documents available from the NatureServe website [3], the notion of a state "natural heritage inventory" arose from collaboration between the nonprofit Nature Conservancy and my very own South Carolina back in 1974, with the first (A-B-C) system of conservation status ranking appearing in 1978. The current five-tier system was developed in 1982. In 1994 a group of state natural heritage program directors formed a related but independent nonprofit organization called the "Association for Biodiversity Information" to catalog the rising flood of inventory and status ranking data, which (in some complex fashion) led The Nature Conservancy to spin off "NatureServe" in 2001.

The 1982 system featured ranking at three scales: Global (G-ranks), National (N-ranks), and Sub-national (S-ranks), based on eight "factors" scored by anonymous participants. The number of factors taken into consideration has increased over the years, as has the number of participants, as has the elaboration of the technique by which the body of anonymous opinion is reputedly converted into a system of conservation status ranks.

For example, the 19 columns on the right side of the matrix my SCDNR colleague sent me last month included:
  • Knowledge of the species population status - "High" if we know the status throughout the species range, "Medium" if we know the status in select areas, "Low" if we know little to nothing.

  • State Threats - "A" if very threatened, "B" if moderately threatened, "C" if not very threatened, "U" if unthreatened.

  • Feasibility Measure - How likely is it that conservation activities can make a difference for this species ("High," "Medium," or "Low").
Any reader curious about the actual analytical technique by which standard international ignorance units (SIIUs), state threat quotients (STQs), feasibility metrics (FMs), and 16 similarly baffling variables, counted and scored for each species, are converted to the critical-imperilment-to-demonstrable-security scale on the global conservation status gauge is invited to peruse the voluminous documentation available from the NatureServe website [4].
This is obviously not science. Conservation status ranks, as they have been propagated throughout the entire natural resources community for 30 years, are not testable, verifiable, or falsifiable. The entire system is, at its very foundation, anonymous, unaccountable, subjective opinion.

Are conservation status ranks merely unscientific, or are they pseudoscientific? Pseudoscience is “a claim, belief, or practice which is presented as scientific, but which does not adhere to a valid scientific method” [5]. Thus the difference between harmless non-science and execrable pseudoscience lies in its presentation.

To the extent that the conservation status ranks arising from this system are presented honestly, as an opinion poll of mysterious parameters, I think they can be excused as (at worst) innocent claptrap. Is there any better solution to the genuine challenge of prioritizing species for conservation? Are we not doing the best that we can in a difficult situation? This is America - take a vote. Fine.

But if there is any effort or intent to present conservation status ranks as scientific, then we as a community will be guilty of promulgating pseudoscience. The elaborate machinations of NatureServe, which have developed over the years into a byzantine system of coding and computation, look suspiciously like dressing a pig in a ball gown, especially when viewed from behind a velvet rope, looking toward the sty.

And when we scientists make use of conservation status ranks, we give the appearance of endorsing the process that produced them, turning nonscience into pseudoscience by the very act. Surely we wouldn't reproduce conservation status ranks in our peer-reviewed journals, would we? Surely, surely we scientists wouldn't gin up some "crisis" on the basis of such a system, in a self-serving ploy to attract funding for our own research programs, would we? To do so would be to commit pseudoscience of a high and aggravated nature.

I absolutely understand why natural resource agency personnel rely on conservation status ranks for their state wildlife action plans. The state of South Carolina cannot launch inventories of every bug, slug, and butterfly within its vastly triangular borders every five years to meet the data requirements of each fresh wave of federal regulation [6].
But as scientists, we must be very clear that the current system of conservation status ranking, as implemented by NatureServe, cannot be endorsed.

The FWGNA project has now developed a large database with objective estimates of the abundance of all 57 species of freshwater snails inhabiting the Atlantic drainages of the southeast. In the next installment of this series, I will propose a new method to rank these 57 species into five categories of abundance for conservation purposes. But while this approach is designed to mimic the existing system of status ranking currently in favor throughout American conservation biology, it has a theoretical basis and will be rigorously objective.
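By way of illustration only, here is one naive way such a five-category binning might be computed. The abundance counts below are invented, and this simple quintile rule is not the actual FWGNA method, which will be described in Part II:

```python
# Illustration only: a naive quintile binning of species abundances
# into five categories.  The counts are made up, and this simple rule
# is NOT the FWGNA method (see Part II).

def rank_by_abundance(abundances, n_ranks=5):
    """Assign each species a rank from 1 (rarest) to n_ranks (most
    abundant) by splitting the sorted abundance list into equal bins."""
    order = sorted(range(len(abundances)), key=lambda i: abundances[i])
    ranks = [0] * len(abundances)
    for position, i in enumerate(order):
        # first fifth of the sorted species fall in category 1, etc.
        ranks[i] = position * n_ranks // len(abundances) + 1
    return ranks

# Hypothetical abundance counts for five real southeastern species:
species = {
    "Physa acuta": 412,
    "Somatogyrus virginicus": 3,
    "Campeloma decisum": 57,
    "Amnicola limosus": 120,
    "Gillia altilis": 18,
}
ranks = rank_by_abundance(list(species.values()))
for name, rank in zip(species, ranks):
    print(f"{name}: category {rank}")
```

Equal-sized bins are only one possible choice; the breaks could equally be set on a log scale or at natural gaps in the abundance distribution.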

Stay tuned...

[1] I reviewed the 2005 South Carolina wildlife plan together with the plans of nine other southeastern states in my essay, "Freshwater Gastropods in State Conservation Strategies - The South." [26May06]

[2] The header indicated that this particular data matrix has been developed in collaboration with North Carolina and Georgia. I was peripherally involved in the Virginia process back in 2004, and it wasn't quite as elaborate.

[3] See the brief history of NatureServe on its "Tenth Anniversary" page.

[4] NatureServe Conservation Status Assessments: Methodology for Assigning Ranks.

[5] This is from Wikipedia, which is the first hit one gets if one googles the term.

[6] I’m surrendering to reality here. In fact, the FWGNA has surveyed most of five states for less than $20k in total grant support. I suppose the entire country could be done for $200k. Land snails and bivalves for similar figures? Each order of insects? We’re probably talking several million dollars to inventory the biota of the entire country. I suppose that’s too much to ask.


  1. Ah Dr. Dillon, I love your posts. Alas, conservation rankings are not science and I'm happy that you've discovered this. Welcome to the intersection of management, public policy, and science.

    I also think your estimate of the cost of surveying the nation's biota is a tad low. In fact, if you can do it for that, I'm sure that you'll have a receptive audience with the NSF... that is, if you can live with yourself for taking money from the evil Fed.

    I look forward to Part 2!
    My very best,

  2. Howdy, Rob!

    I have always been an advocate of the IUCN Red List method. It is similar to NatureServe, but in the species assessments in which I was involved we had to provide documentation for all our determinations in the form of citable literature, grey literature reports, and museum records. We were not allowed to use hearsay and we had to sign every assessment, so there was a back trail to follow.

    D. Christopher Rogers

  3. Response to Rob Dillon Blog - Part I
    Rob Dillon’s recent contribution to this listserve, “Toward the Scientific Ranking of Conservation Status” (posted on 12 December in the comments section of the Freshwater Gastropods of North America web site: http://www.fwgna.org/; and distributed to various mollusk listservers), contains so many errors and misstatements that we feel obliged to respond. In his post, inspired by an email from a friend asking for information on freshwater gastropods to inform South Carolina’s review of species of conservation concern, Dillon uses the provocative language of the blogosphere to rant about NatureServe’s conservation status ranks. His statements are incorrect on a number of points: (1) he confuses South Carolina’s factors for their state wildlife action plan with the NatureServe conservation status factors; (2) he misrepresents the open and transparent nature of the NatureServe conservation status ranking system; (3) his description of the NatureServe conservation status ranking system is incorrect; (4) his contention that status ranks are not testable, verifiable, or falsifiable is simply not true.
    Dillon’s first error is mistaking the factors he saw in the information sent to him by South Carolina for the NatureServe ranking factors. To verify this point, we requested a copy of the email and documentation sent to Dillon by the South Carolina DNR. It turns out that Dillon was sent a spreadsheet that contained state-level natural heritage ranks (S1-S5), and a series of rank factors that are used for South Carolina’s state wildlife plan to assess conservation status. Dillon assumed that the rank factors in the spreadsheet are used to generate the S-ranks, when in fact the S-rank is simply provided as an additional piece of information alongside the wildlife plan rank factors. The wildlife plan rank factors have no part to play in the formal NatureServe process of S-ranking or G-ranking.
    Having made the assumption that these wildlife plan rank factors are those of NatureServe, Dillon launches into an attack on the supposed opacity of NatureServe’s methods. In fact, NatureServe’s methods are well documented and open for the world to see (Master et al. 2009, Faber-Langendoen et al. 2009). These methods have been reproduced in an Excel tool that is also freely available (http://www.natureserve.org/publications/ConsStatusAssess_RankCalculator-v2.jsp). Furthermore, a brief explanation of the ranking methodology is publicly available on the NatureServe website that provides conservation status information for North American biodiversity (http://www.natureserve.org/explorer/ranking.htm), and individual accounts provide ranking documentation and authorship. Dillon is obviously not ignorant of this information; he in fact cited Faber-Langendoen et al. (2009) in his post. It is puzzling, therefore, that he failed to notice that the factors listed in that document do not match those in the spreadsheet he was sent.
    The complexity of life on Earth, the variety of threats to biodiversity, and the paucity of information on many species preclude a simple system for objectively ranking the conservation status of all organisms (Mace et al. 2008). We note that the IUCN Red List, the global standard measure of extinction risk, with which the NatureServe methodology is interoperable and to which it contributes data on North American species as an authority, also has an in-depth system of criteria (IUCN 2001) and guidelines that run to 87 pages (IUCN Standards and Petitions Subcommittee 2011). Of course, a system for ranking a single taxonomic group, such as freshwater gastropods, could be much simpler.
    [Part II next]

  4. Response to Rob Dillon Blog - Part II
    Third, and our main concern here, the NatureServe method bears no resemblance to Dillon’s description. NatureServe has, for over 10 years, used ten well-documented rank factors arranged in three categories. We spell out these factors here to avoid confusion:

    Rarity (range extent, area of occupancy, population size, number of occurrences, number of occurrences or percent area with good viability/ecological integrity)
    Trends (long-term trend, short-term trend)
    Threats (threats, intrinsic vulnerability)

    These factors represent a revision of the initial 8-factor approach first developed in the early 1980s (Master 1991) to incorporate new findings in the literature about extinction risk. Some of the revisions resulted from the findings of an NCEAS (National Center for Ecological Analysis and Synthesis) working group on extinction risk (Regan et al. 2005). This listserve is not the place to describe the method in any more detail, but we invite interested readers to consult a more complete account (Master et al. 2009, Faber-Langendoen et al. 2009).

    Fourth, Dillon’s claim that “conservation status ranks …are not testable, verifiable, or falsifiable” is not true. With clearly stated factors, any evidence contrary to that used to justify a conservation status rank can be used to change it. In fact, a common activity is to review new information about a species or plant community. If the new information contradicts the existing information (i.e., “falsifying” it), the rank is recalculated and changed if needed. Also, NatureServe routinely receives data from outside scientists that similarly provoke a recalculation of the rank.
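The point is easily illustrated: when a rank is a pure function of documented factor values, new evidence that changes a factor forces the rank to change as well. The toy sketch below makes that concrete; the factor names, formula, and thresholds are invented for illustration and bear no relation to NatureServe's actual published calculator:

```python
# Toy illustration: a rank computed as a pure function of documented
# factor values is revisable (and in that sense falsifiable) -- new
# evidence that changes an input changes the output.
# The factors, weights, and thresholds here are invented, NOT
# NatureServe's actual rank calculator.

def calculate_rank(factors):
    """Map documented factor values to an S1-S5 rank (invented rule)."""
    score = factors["occurrences"] + 10 * factors["trend"]
    if score < 5:
        return "S1"
    if score < 20:
        return "S2"
    if score < 50:
        return "S3"
    if score < 100:
        return "S4"
    return "S5"

factors = {"occurrences": 4, "trend": 0}   # the documented evidence
rank = calculate_rank(factors)             # rank follows from the record

# A new survey contradicts the old occurrence count, so the rank
# must be recalculated from the corrected record:
factors["occurrences"] = 30
rank = calculate_rank(factors)
```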

    We hope this explanation sets the record straight for the readers of this listserve. We acknowledge and applaud Dillon’s tremendous contributions to our understanding of North American gastropods, and look forward to his future posts on these fascinating creatures.

    Don Faber-Langendoen
    Bruce Young
    Larry Master
    Margaret Ormes
    Jay Cordeiro


    Faber-Langendoen, D., L. Master, J. Nichols, K. Snow, A. Tomaino, R. Bittman, G. Hammerson, B. Heidel, L. Ramsay, and B. Young. 2009. NatureServe conservation status assessments: methodology for assigning ranks. NatureServe, Arlington, VA. Available: http://www.natureserve.org/publications/ConsStatusAssess_RankMethodology.pdf.

    IUCN. 2001. IUCN Red List Categories and Criteria: Version 3.1. IUCN Species Survival Commission. IUCN, Gland, Switzerland and Cambridge, UK.

    IUCN Standards and Petitions Subcommittee. 2011. Guidelines for using the IUCN Red List categories and criteria. Version 9.0. Prepared by the Standards and Petitions Subcommittee. Available: http://www.iucnredlist.org/documents/RedListGuidelines.pdf.

    Mace, G.M. et al. 2008. Quantification of extinction risk: IUCN’s system for classifying threatened species. Conservation Biology 22:1424-1442.

    Master, L. 1991. Assessing threats and setting priorities for conservation. Conservation Biology 5:559-563.

    Master, L., D. Faber-Langendoen, R. Bittman, G. A. Hammerson, B. Heidel, J. Nichols, L. Ramsay, and A. Tomaino. 2009. NatureServe conservation status assessments: factors for assessing extinction risk. NatureServe, Arlington, VA. Available: http://www.natureserve.org/publications/ConsStatusAssess_StatusFactors.pdf.

    Regan, H. M., M. A. Burgman, M. A. McCarthy, L. L. Master, D. A. Keith, G. M. Mace, and S. J. Andelman. 2005. The consistency of extinction risk classification protocols. Conservation Biology 19:1969-1977.