Dr. Rob Dillon, Coordinator

Monday, December 12, 2011

Toward the Scientific Ranking of Conservation Status - Part I

Last month I got an email from a colleague in the South Carolina Department of Natural Resources, asking for my help updating the state wildlife conservation plan. I told him I'd be willing to pitch in with the 2011-12 effort, just as I helped in 2004-05 [1]. But I continue to harbor deep misgivings about the entire process.

In Part I of the essay that follows, we debride a nasty sore on the left butt cheek of American environmental science - the unscientific (possibly pseudoscientific) method by which we prioritize our biota for conservation purposes. And in Part II of this series, to follow next month, we will begin the process of suturing that wound back up.

Like all other states with which I am familiar, South Carolina’s wildlife plan relies upon a subjective system of conservation status ranks, as follows:
S1 - Critically imperiled state-wide because of extreme rarity or because of some factor(s) making it especially vulnerable to extirpation.
S2 - Imperiled state-wide because of rarity or factor(s) making it vulnerable.
S3 - Rare or uncommon in state.
S4 - Apparently secure in state.
S5 - Demonstrably secure in state.
The spreadsheet my DNR colleague sent me for my input [2] had columns for species number (N=32 freshwater snails in South Carolina), scientific name, common name, legal status, conservation status rank (as above), and an (astonishing!) 19 additional data columns, about which more later. My colleague asked me to complete this massive 32x24 matrix by January 15, indicating as he did that the results would ultimately be forwarded to the nonprofit organization NatureServe.

The origin and evolution of the conservation ranking system in general currency around the United States is shrouded in mystery. According to documents available from the NatureServe website [3], the notion of a state "natural heritage inventory" arose from collaboration between the nonprofit Nature Conservancy and my very own South Carolina back in 1974, with the first (A-B-C) system of conservation status ranking appearing in 1978. The current five-tier system was developed in 1982. In 1994 a group of state natural heritage program directors formed a related but independent nonprofit organization called the "Association for Biodiversity Information" to catalog the rising flood of inventory and status ranking data, which (in some complex fashion) led The Nature Conservancy to spin off "NatureServe" in 2001.

The 1982 system featured ranking at three scales: Global (G-ranks), National (N-ranks), and Sub-national (S-ranks), based on eight "factors" scored by anonymous participants. The number of factors taken into consideration has increased over the years, as has the number of participants, as has the elaboration of the technique by which the body of anonymous opinion is reputedly converted into a system of conservation status ranks.

For example, the 19 columns on the right side of the matrix my SCDNR colleague sent me last month included:
  • Knowledge of the species population status - "High" if we know the status throughout the species range, "Medium" if we know the status in select areas, "Low" if we know little to none.

  • State Threats - "A" if very threatened, "B" if moderately threatened, "C" if not very threatened, "U" if unthreatened.

  • Feasibility Measure - How likely is it that conservation activities can make a difference for this species (High, medium, low).
Any reader curious regarding the actual analytical technique by which standard international ignorance units (SIIUs), state threat quotients (STQs), feasibility metrics (FMs), and 16 similarly baffling variables counted and scored for each species are converted into the critical-imperilment-demonstrable-security scale on the global conservation status gauge is invited to peruse the voluminous documentation available from the NatureServe website [4].
This is obviously not science. Conservation status ranks, as they have been propagated throughout the entire natural resources community for 30 years, are not testable, verifiable or falsifiable. The entire system is, at its very foundation, anonymous, unaccountable, subjective opinion.

Are conservation status ranks merely unscientific, or are they pseudoscientific? Pseudoscience is “a claim, belief, or practice which is presented as scientific, but which does not adhere to a valid scientific method [5].” Thus the difference between harmless non-science and execrable pseudoscience is in its presentation.

To the extent that the conservation status ranks arising from this system are presented honestly, as an opinion poll of mysterious parameters, I think they can be excused as (at worst) innocent claptrap. Is there any better solution to the genuine challenge of prioritizing species for conservation? Are we not doing the best that we can in a difficult situation? This is America - take a vote. Fine.

But if there is any effort or intent to present conservation status ranks as scientific, then we as a community will be guilty of promulgating pseudoscience. The elaborate machinations of NatureServe, which have developed over the years into a byzantine system of coding and computation, look suspiciously like dressing a pig in a ball gown, especially when we stand behind a velvet rope, gazing toward the sty.

And when we scientists make use of conservation status ranks, we give the appearance of endorsing the process that produced them, turning nonscience into pseudoscience by the very act. Surely we wouldn't reproduce conservation status ranks in our peer-reviewed journals, would we? Surely, surely we scientists wouldn't gin up some "crisis" on the basis of such a system, in a self-serving ploy to attract funding for our own research programs, would we? To do so would be to commit pseudoscience of a high and aggravated nature.

I absolutely understand why natural resource agency personnel rely on conservation status ranks for their state wildlife action plans. The state of South Carolina cannot launch inventories of every bug, slug, and butterfly within its vastly triangular borders every five years to meet the data requirements of each fresh wave of federal regulation [6].

But as scientists, we must be very clear that the current system of conservation status ranking, as implemented by NatureServe, cannot be endorsed.

The FWGNA project has now developed a large database with objective estimates of the abundance of all 57 species of freshwater snails inhabiting the Atlantic drainages of the southeast. In the next installment of this series, I will propose a new method to rank these 57 species into five categories of abundance for conservation purposes. But while this approach is designed to mimic the existing system of status ranking currently in favor throughout American conservation biology, it has a theoretical basis and will be rigorously objective.
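To make concrete what an objective, reproducible five-tier ranking might look like, here is a minimal sketch in Python. This is emphatically NOT the FWGNA method (which follows in Part II of this series); it is a hypothetical illustration of binning species by abundance into quintiles, so that any two workers given the same counts must arrive at the same ranks. The species names are real southeastern taxa, but the abundance figures are invented for the example.

```python
# Hypothetical sketch only - not the FWGNA ranking method.
# Bins species into five abundance tiers (1 = rarest fifth, 5 = commonest
# fifth) purely from objective count data, with no subjective scoring.

def rank_by_abundance(counts):
    """Map each species to a tier 1..5 by its position in the sorted
    abundance distribution (quintile assignment)."""
    ordered = sorted(counts, key=counts.get)  # rarest species first
    n = len(ordered)
    return {species: i * 5 // n + 1 for i, species in enumerate(ordered)}

# Invented counts for illustration:
abundance = {
    "Physa acuta": 4100,
    "Somatogyrus virginicus": 12,
    "Campeloma decisum": 900,
    "Helisoma anceps": 350,
    "Amnicola limosus": 60,
}
ranks = rank_by_abundance(abundance)
# Somatogyrus (rarest) lands in tier 1; Physa (commonest) in tier 5.
print(ranks)
```

The point of the sketch is not the particular binning rule but the property the current system lacks: the procedure is stated explicitly, so its output is testable, verifiable, and falsifiable against the underlying data.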

Stay tuned...
Rob

Notes
[1] I reviewed the 2005 South Carolina wildlife plan together with the plans of nine other southeastern states in my essay, "Freshwater Gastropods in State Conservation Strategies - The South." [26May06]

[2] The header indicated that this particular data matrix has been developed in collaboration with North Carolina and Georgia. I was peripherally involved in the Virginia process back in 2004, and it wasn't quite as elaborate.

[3] See the brief history of NatureServe on its "Tenth Anniversary" page.

[4] NatureServe Conservation Status Assessments: Methodology for Assigning Ranks.

[5] This is from Wikipedia, which is the first hit one gets if one googles it.

[6] I’m surrendering to reality here. In fact, the FWGNA has surveyed most of five states for less than $20k in total grant support. I suppose the entire country could be done for $200k. Land snails and bivalves for similar figures? Each order of insects? We’re probably talking several million dollars to inventory the biota of the entire country. I suppose that’s too much to ask.