Friday, October 27, 2017

Deconstructing the Climate Demagoguery of the Wine Country Wildfire Tragedies

Wine Country Fire October 2017


As sure as the winds will blow, climate demagogues hijack every human tragedy to amplify fears of rising CO2 concentrations. Although other critical factors were the keys to understanding the devastation of the Wine Country fires, politicians like Hillary Clinton, Al Gore and Governor Jerry Brown were quick to proclaim that climate change had made the fires worse than they would have been.

Climate researcher Kevin Trenberth has long tried to undermine the foundations of science by discarding the null hypothesis. Without formal testing whether a tornado, hurricane or wildfire event is within the expectations of natural variability, Trenberth simply asserts every tragedy is made worse by rising CO2. Accordingly, he is interviewed by climate change propagandists after every weather tragedy. In an interview with InsideClimateNews a few months before the Wine Country wildfires Trenberth continued to proselytize his views, “Whatever conditions exists, they're always exacerbated by climate change. There's always that heat variable, the increased risk.”

Indeed heat is always a variable, but usually it has nothing to do with CO2. Sadly, due to his extreme beliefs Trenberth often confuses climate with weather.

Similarly, Daniel Swain, who authors a good California Weather Blog, unfortunately strays when he tries to interject CO2-climate change into an otherwise good weather analysis. Writing that the fires should also be looked at from “the long-term climate context,” he argued the “record-hottest summer” dried out the vegetation, exacerbating the fire conditions. But he too failed to separate natural climate and weather events from his hypothesized contributions from CO2. As will become clear from a more detailed analysis, climate change played no part in the wildfire devastation.

The Ignition Component

Fire danger rating systems analyze 1) an ignition component, 2) a fuel component and 3) a spread component to determine how to allocate fire-fighting resources and when to issue public alerts. Natural fires are caused by lightning, and thus good weather models can forecast the short-term probability of lightning fires. Lightning fires are also more likely during warm and moist seasons enhancing their window of predictability. Unfortunately, Cal Fire reports 95% of California fires are unpredictably ignited by humans.

Climate alarmists like Dr. Trenberth have blithely suggested global warming is increasing the fire season stating, “In the West, they used to talk about a fire season, the fire season used to be 60 days, then 90 days, and now they think it's year-round. There's no pause." Tragically that uncritical belief in a climate-related extended fire season has been parroted by lay person and scientists alike. But the facts show the observed extended fire season is due to human ignitions. Blaming climate change is fake news!

In a 2017 paper researchers reported that across the USA from 1992 to 2012, “human-caused fire season was three times longer than the lightning-caused fire season and added an average of 40,000 wildfires per year across the United States. Human-started wildfires disproportionally occurred where fuel moisture was higher.” Furthermore “Human-started wildfires were dominant (>80% of ignitions) in over 5.1 million km2, the vast majority of the United States, whereas lightning-started fires were dominant in only 0.7 million km2.”

We can reduce some human-caused ignitions. The Wine Country fires were not ignited by lightning, but all observations suggest they were started by downed power lines in high winds. A year ago, California legislators introduced a bipartisan bill aimed at reducing wildfire ignitions from power lines. Although Governor Brown hypes the unsubstantiated dangers of climate change, he vetoed the bill, which would have promoted real action against well-known human causes of wildfires. Preventing power line ignition could have prevented the Wine Country tragedy.

The Fuel Component


Fire ecologists estimate a fire’s potential intensity by calculating the Energy Release Component (ERC), a measure of the potential heat energy per square foot. ERC is a function of the biomass, both dead and alive, and its moisture content. As fuels increase and as fuels dry, the ERC increases. Live fuels are modeled such that maximum moisture content coincides with the peak growing season and declines thereafter as the plants go dormant. Moisture content of dead fuels is modeled according to their diameters.

Depending on their diameters, dead fuels will lose moisture as they equilibrate with their dry surroundings, at rates that vary from 1 hour to 1000 hours or more. To aid firefighting management decisions, fuels are categorized into four groups, as described in Gaining an Understanding of the National Fire Danger Rating System, published by the National Wildfire Coordinating Group (a minimal numerical sketch of this time-lag drying model follows the four categories below).

1-Hour Time-lag Fuels “consist of herbaceous plants or round wood less than one-quarter inch in diameter. Also included is the uppermost layer of litter on the forest floor.” The ERC of these fuels, and thus the fire danger, can change throughout the day. Dead grass as well as twigs and small stems of chaparral shrubs are 1-hour fuels, and those fine fuels sustained the rapid spread of the Wine Country fires. Assertions that recent and past summer droughts or decades of climate change had dried the fuels and exacerbated the Wine Country fire danger have absolutely no scientific basis. The approach of the hot, bone-dry Diablo Winds would have extracted all the possible moisture from the dead grasses and chaparral twigs within hours, regardless of past temperatures. Trenberth and Swain simply confused rapid weather changes with climate change.


The critical “long-term context” they never discussed is that a century of fire suppression allowed destructive levels of fuel loads to develop, increasing the biomass component of the ERC estimate. As populations grew, so did the demand to suppress every small fire that could threaten a building. Natural small fires reduce the fuel load, whereas fire suppression allows fast drying fuels to accumulate. Unfortunately, fire suppression only delays the inevitable while stocking more fuel for a much more intense blaze. Local officials and preservationists have long been aware of this problem, and controlled burns to reduce those fuels were being increasingly prescribed. Tragically, it was too little too late.

Prescribed Control Burn

10-Hour Time-lag Fuels are “dead fuels consisting of round wood in the size range of one quarter to one inch in diameter and, very roughly, the layer of litter extending from just below the surface to three-quarters of an inch below the surface.” The moisture of these fuels varies from day to day, and modeled moisture content is based on length of day, cloud cover or solar radiation, temperature and relative humidity.

100-Hour Time-lag Fuels are “dead fuels consisting of round wood in the size range of 1 to 3 inches in diameter and, very roughly, the forest floor from three quarters of an inch to four inches below the surface.” Moisture content of these fuels is also a function of length of day (as influenced by latitude and calendar date), maximum and minimum temperature and relative humidity, and precipitation duration in the previous 24 hours.

Many chaparral shrubs produce twigs and stems in the size ranges of the 1-hr, 10-hr and 100-hr fuels. These fuels were most likely the source of the burning embers that high winds propelled into the devastated residential areas. Again, these dried-out fuels are the result of the natural California summer drought and short-term weather conditions such as the bone-dry Diablo Winds that arrive every year.



Figure 2  Moisture content of 3-8 inch diameter fuels from March to December

1000-Hour Time-lag Fuels are “dead fuels consisting of round wood 3 to 8 inches in diameter or the layer of the forest floor more than about four inches below the surface or both”. These larger fuels are more sensitive to drought conditions that existed months earlier, so it could be rightfully argued that a hotter drier July and August made these fuels more flammable in October and exacerbated the fires.

Fire ecologists planning prescribed burns to reduce fuel loads wait until the 1000-hr fuels’ moisture content is reduced to 12% or lower. If these larger fuels are dry, it is certain the smaller fuel categories are dry as well, so that all fuels will be highly flammable. As seen in the graph above (Figure 2), 1000-hr fuels reach that critical dryness threshold by July 1st and remain below it until mid-October when the rains begin to return. Contrary to Trenberth’s blather, California’s fire season has always lasted 90+ days. Undoubtedly the unusually hot and dry 2017 summer would have lowered 1000-hr fuel moisture content even further. Nonetheless those fuels become naturally flammable every summer. Furthermore, these larger fuels were less often burned and thus were insignificant factors in the fires’ rapid spread, which was sustained by consumption of the rapidly drying fine fuels.
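The time-lag categories above follow a simple exponential drying law: after one time lag, a dead fuel closes about 63% (1 − 1/e) of the gap between its current moisture and the equilibrium moisture of its surroundings. Here is a minimal sketch of that model; the starting and equilibrium moisture values are illustrative assumptions, not measurements.

```python
import math

def fuel_moisture(m_start, m_equilibrium, hours_elapsed, time_lag_hours):
    """Exponential time-lag drying: after one time lag, ~63% (1 - 1/e)
    of the gap to equilibrium moisture has been closed."""
    return m_equilibrium + (m_start - m_equilibrium) * math.exp(
        -hours_elapsed / time_lag_hours)

# Illustrative assumption: fuels start at 25% moisture and approach a 5%
# equilibrium under hot, bone-dry Diablo-wind conditions.
for lag in (1, 10, 100, 1000):  # the four time-lag classes described above
    m = fuel_moisture(25.0, 5.0, hours_elapsed=24, time_lag_hours=lag)
    print(f"{lag:>4}-hr fuels after one day: {m:4.1f}% moisture")

# 1-hr fuels reach equilibrium within hours, which is why an approaching
# Diablo wind dries dead grass regardless of past temperatures, while
# 1000-hr fuels barely change in a day and only prolonged summer drought
# drives them below the ~12% prescribed-burn threshold.
```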

Swain is fond of finding a “record setting” metric to bolster his climate change assertions. As such, he noted the “record-hot summer had dried out vegetation to record levels” and linked to a graph tweeted by John Abatzoglou showing October ERC values for the past 30 years were at a record high in 2017 (in part because of delayed rains). However, that “record” was also largely irrelevant. The ERC calculation is heavily biased by the greater biomass of the larger 1000-hr fuels, which would indeed get drier as the autumn continued without rain. Still, those larger fuels were insignificant contributors to the rapidly spreading fire. As seen below (Figure 3), the grasses have been entirely burnt while the larger shrubs and trees, as well as the woody debris near the base of the trees (in the upper left), have not been consumed. In fact, many of the trees are still alive. The potential energy estimated by the “record ERC” was only partially realized. It was the fast-drying dead grass and chaparral shrubs that turned potential ERC into meaningful fiery heat.

Figure 3


The Spread Component

The spread component is defined as “the theoretical ideal rate of spread expressed in feet-per-minute.” Wind speed, slope and fine fuel moisture are key inputs in its calculation, accounting for high variability from day to day. Thus, a combination of dry fuels and high winds typically results in fire-watch and red-flag warnings one day and no warnings days later as the winds subside. Forest rangers are well aware that September and October bring the powerful Diablo Winds to Santa Rosa, as well as the Santa Anas to southern California, and with those winds comes the highest fire danger.
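The source quotes the definition but not the underlying formula. As a purely illustrative toy (not the NFDRS or Rothermel equations), the sketch below shows only the qualitative behavior described: spread rises with wind and slope and is damped by fine fuel moisture. Every coefficient here is a made-up assumption.

```python
def toy_spread_component(wind_mph, slope_pct, fine_fuel_moisture_pct):
    """Toy model only, NOT the NFDRS formula: faster with wind and slope,
    slower as the fine (1-hr) fuels hold more moisture."""
    wind_factor = 1.0 + 0.15 * wind_mph        # made-up coefficient
    slope_factor = 1.0 + 0.02 * slope_pct      # made-up coefficient
    moisture_damping = max(0.0, 1.0 - fine_fuel_moisture_pct / 30.0)
    base_rate_ft_per_min = 2.0                 # made-up base rate
    return base_rate_ft_per_min * wind_factor * slope_factor * moisture_damping

# Dry grass in a 70 mph Diablo wind event vs. a calm, moist day:
print(toy_spread_component(70, 10, 5))   # high spread rate
print(toy_spread_component(5, 10, 25))   # near zero
```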

Cliff Mass is an atmospheric scientist at the University of Washington and author of the superb Cliff Mass Weather and Climate blog. An October 16th post provides an excellent summary of the meteorological conditions that created the fierce winds driving the Wine Country fires. In essence, a strong approaching wind flow (the Diablo Winds), coupled with a thermal inversion near the top of the mountains that border the Santa Rosa valley, accelerated into a 60 to 90 mile per hour downslope wind event, a phenomenon known as a mountain wave. Those high winds snapped power line poles and ignited fires. The regional topography also funneled the winds and fire down the valley, taking dead aim at the heart of Santa Rosa. The topography had guided a similar fire in 1964, the Hanley fire, which was started by a carelessly discarded cigarette. Unfortunately, most of the homes burnt in the Tubbs fire had been built, despite public protests, on top of the burnt grounds of that previous Hanley fire.

Were those high winds perhaps exacerbated by climate change? Highly unlikely!

The Diablo Winds affecting Santa Rosa, like the Santa Anas of southern California, are driven by cooling seasonal temperatures in the high deserts to the east. The inner continent cools faster than the oceans, setting up a pressure gradient that drives the winds toward the coast. The winds then heat adiabatically, rising about 5 degrees Fahrenheit for every 1000 feet of elevation descent. An adiabatic rise in temperature means no heat is added from any source; basic physics tells us temperatures can rise adiabatically simply due to compression. Thus an air mass that originated near Flagstaff, Arizona at a 6900 foot elevation could adiabatically warm by roughly 35 degrees by the time it reaches sea level.
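As a quick check on that arithmetic, here is a one-function sketch of compressional warming using the rate quoted above (about 5°F per 1000 feet of descent); the function name and rounding are mine.

```python
def adiabatic_warming_f(descent_feet, rate_f_per_1000ft=5.0):
    """Warming from compression alone (no heat added) as air descends."""
    return rate_f_per_1000ft * descent_feet / 1000.0

# Air descending from Flagstaff, Arizona's ~6900-foot elevation to sea level:
print(adiabatic_warming_f(6900))  # ~34.5 degrees Fahrenheit
```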

The flow direction of the winds is largely driven by unequal seasonal changes in temperature. During the summer the interior heats faster than the oceans, such that a cooling onshore wind reduces interior temperatures. This pattern reverses in the autumn as the interior lands cool faster than the ocean, creating an inland high pressure that drives the Diablo and Santa Ana winds toward the coast. Despite declining solar insolation, this autumn wind flow causes coastal California to experience some of its hottest days of the year in September and October, commonly referred to as Indian summer. Similarly, a pressure system that inhibited the cooling onshore winds around San Francisco resulted in a record hot summer temperature. By simultaneously opposing cooling sea breezes and bringing winds that had warmed adiabatically by 5 to 10 degrees, this pattern raises temperatures and lowers relative humidity. The result is the bone-dry, hot Diablo winds that suck the moisture from land and vegetation wherever they pass.

To restate the forces driving the winds: the Diablo winds are the result of a pressure gradient created by an interior that cools faster than the ocean. If CO2 is warming the earth to any significant extent, then we would expect that warming to prevent the inner continent from cooling as quickly as it did decades ago. Thus CO2-global warming would predict a decline in that pressure gradient and a weakening of these winds.

Devastated Neighborhoods in Santa Rosa


To summarize, none of the fire components (ignition, fuels, or spread) was affected by climate change.

Finally, keen observers will notice that entire blocks of houses, and entire neighborhoods, were burnt to the ground, while neighborhood trees often remained relatively unscathed. This suggests that the high winds rapidly carried burning embers from the grassland and chaparral into these developments. While the trees did not trap the embers, the buildings did. I expect we will soon hear about investigations into why these residences were not required to be more fire-safe structures, especially when built in a known fire-prone habitat and a high-wind corridor. Simple requirements, such as eaves constructed so they cannot trap burning embers, and fireproof roofs, might have saved many homes.

Indeed there are many lessons that will allow us to prevent such wildfire disasters in the future, if we have accurately determined the causes of these fires. Cliff Mass notes that our short-term weather models accurately predicted the time and place of the fiercest winds. That information could be used to temporarily shut down the electrical grid where power lines are likely to ignite fires. We can bury power lines below ground. We can remove the high fuel loads that accumulated during a century of misguided fire suppression. Insurance companies can demand higher rates unless proven precautions are undertaken. It is those lessons that Gore, Clinton and Brown should be promoting to inform the public. Trenberth and Swain should be informing the people of the natural weather dangers that are inevitable. There is no evidence that climate change, whether natural or anthropogenic, exacerbated the ignition, fuels or spread components of these deadly fires. Worse, their obsessive belief that rising CO2 concentrations worsen every tragedy only distracts our focus from real, life-saving solutions.



Wednesday, April 5, 2017

Falling Sea Level: The Critical Factor in 2016 Great Barrier Reef Bleaching!

  
It is puzzling why the recent 2017 publication in Nature, Global Warming And Recurrent Mass Bleaching Of Corals by Hughes et al., ignored the most critical factor affecting the 2016 severe bleaching along the northern Great Barrier Reef: the regional fall in sea level amplified by El Niño. Instead, Hughes 2017 suggested the extensive bleaching was due to increased water temperatures induced by CO2 warming.


Reef at Low Tide Around Lizard Island Great Barrier Reef


In contrast, in Coral Mortality Induced by the 2015–2016 El-Niño in Indonesia: The Effect Of Rapid Sea Level Fall by Ampou 2017, Indonesian biologists reported that a drop in sea level had bleached the upper 15 cm of the reefs before temperatures had reached NOAA’s Coral Reef Watch bleaching thresholds. As discussed by Ampou 2017, the drop in sea level had likely been experienced throughout much of the Coral Triangle, including the northern Great Barrier Reef (GBR), and then accelerated during the El Niño. They speculated sea level fall also contributed to the bleaching during the 1998 El Niño. Consistent with the effects of sea level fall, other researchers reported bleaching in the GBR was greatest near the surface and declined rapidly with depth. Indeed, if falling sea level was the main driver of 2016’s reef mortalities, and this can be tested, then the most catastrophic assertions made by Hughes 2017 would be invalid.

Indeed the Great Barrier Reef had also experienced falling sea levels similar to those experienced by Indonesian reefs.  Visitors to Lizard Island had reported more extreme low tides and more exposed reefs as revealed in the photograph above, which is consistent with the extremely high mortality in the Lizard Island region during the 2016 El Niño. Of course reefs are often exposed to the air at low tide, but manage to survive if the exposure is short or during the night. However as seen in tide gauge data from Cairns just south of Lizard Island, since 2010 the average low tide had dropped by ~10 to 15 cm.  After previous decades of increasing sea level had permitted vertical coral growth and colonization of newly submerged coastline, that new growth was now being left high and dry during low tide. As a result shallow coral were increasingly vulnerable to deadly desiccation during more extreme sea level drops when warm waters slosh toward the Americas during an El Niño. 



Furthermore, an El Niño in the Coral Triangle not only causes a sudden sea level fall, but also generates a drier high-pressure system with clear skies, so that the region is exposed to more intense solar irradiance. In addition, El Niño conditions reduce the regional winds that drive reef-flushing currents and generate the wave washing that could otherwise minimize desiccation during extreme low tides. And as one would predict, these conditions were exactly what was observed during El Niño 2016 around Lizard Island and throughout the northern GBR.

Aerial surveys, on which Hughes 2017 based their analyses, cannot discriminate between the various causes of bleaching. To determine the cause of coral mortality, careful examination of bleached coral by divers is required to distinguish whether bleached coral were the result of storms, crown-of-thorns attacks, disease, aerial exposure during low tides, or anomalously warm ocean waters. Crown-of-thorns leave diagnostic gnawing marks, while storms produce anomalous rubble. Furthermore, aerial surveys only measure the areal extent of bleaching; they cannot determine the depth to which most bleaching was restricted due to sea level fall. To distinguish bleaching and mortality caused by low tide exposure, divers must measure the extent of tissue mortality and compare it with changes in sea level. For example, the Indonesian researchers found the extent of dead coral tissue was mostly relegated to the upper 15 cm of coral, which correlated with the degree of increased aerial exposure by recent low tides. Unfortunately Hughes et al. never carried out, or never reported, such critical measurements.

However a before-and-after photograph presented in Hughes 2017 suggested the severe GBR bleaching they attributed to global warming primarily happened between February and late April. Their aerial surveys occurred between March 22 and April 17, 2016. And consistent with low tide bleaching, that is exactly the time frame that tide tables reveal reefs experienced two bouts of extreme low tides coinciding with the heat of the afternoon (March 7-11 & April 5-10). And such a combination of sun and low tide are known to be deadly.

A study of a September 2005 bleaching event on Pelorus and Orpheus Islands in the central GBR by Anthony 2007, Coral Mortality Following Extreme Low Tides And High Solar Radiation, had reported extreme deadly effects when extreme low tides coincided with high solar irradiance periods around midday. As in Indonesia, they also reported bleaching and mortality had occurred despite water temperatures that were “significantly lower than the threshold temperature for coral bleaching in this region (Berkelmans 2002), and therefore unlikely to represent a significant stress factor.” Along the reef crests and flats, “40 and 75% of colonies in the major coral taxa were either bleached or suffered partial mortality. In contrast, corals at wave exposed sites were largely unaffected (<1% of the corals were bleached), as periodic washing of any exposed coral by waves prevented desiccation. Surveys along a 1–9 m depth gradient indicated that high coral mortality was confined to the tidal zone.” [Emphasis mine]

The fortuitous timing of Ampou’s coral habitat mapping from 2014 to 2016 in Bunaken National Park (located at the northwest tip of Sulawesi, Indonesia) allowed researchers to estimate the time of coral mortality relative to sea level and temperature changes. Ampou reported that in “September 2015, altimetry data show that sea level was at its lowest in the past 12 years, affecting corals living in the bathymetric range exposed to unusual emersion. By March 2016, Bunaken Island (North Sulawesi) displayed up to 85% mortality on reef flats” and that almost “all reef flats showed evidence of mortality, representing 30% of Bunaken reefs.” Based on the timing of reef deaths and changes in temperature they concluded, “the wide mortality we observed can not be simply explained by ocean warming due to El Niño.”  They concluded, “The clear link between mortality and sea level fall, also calls for a refinement of the hierarchy of El Niño impacts and their consequences on coral reefs.”

From the illustrations (below) of a generalized topography of a fringing or barrier reef, we can predict where, within the whole reef system, bleaching and mortality would occur under a lower sea level. Coral occupying the reef crests are most sensitive to drops in sea level and desiccation because they are the first to be exposed to dangerous periods of aerial exposure and the last to re-submerge. The inner reef flats are also vulnerable to lower sea levels, as those shallow waters are more readily exposed at low tide because the reef crest prevents ocean waters from flooding the flats. If reef flats are not exposed, the shallow waters that remain can heat up dangerously fast. Accordingly, Anthony 2007 found 40 to 75%, and Ampou 2017 found 85%, of the reef flats had bleached. In contrast, coral in the fore reefs are the least vulnerable to desiccation and higher temperatures due to direct contact with the ocean, upwelling and wave washing. Accordingly, Anthony 2007 reported <1% bleaching in the fore reefs.

 

Coral mortality due to a drop in sea level leaves other diagnostic telltale signs such as micro-atoll formation. As illustrated below in Fig. 4 from Goodwin 2008, during neap low tides (MLWN) sea water can still pass over the reef crest and flush the inner reef with relatively cooler outer ocean water. However during the low spring tides (MLWS), the reef crest is exposed and ocean water is prevented from reaching the reef flats. As mean sea level falls (MSL), coral on the crest and flats are increasingly exposed to the air for longer periods, and the upper layer of coral that had previously kept up with decades of rising sea level, are now exposed to increasing periods of desiccation and higher mortality.





There are over 43 species in the coral triangle that can be characterized as “keep-up” coral, whose growth rates are much greater than the average 20th century sea level rise. However their vertical growth is limited by the average low water level (HLC, Height of Living Coral, in Fig. 4). Average low water level is calculated as the mean water level between low neap tides and lower low spring tides. (Due to the linear alignment of the sun, earth and moon, and the resulting stronger gravitational pull during a full and new moon, spring tides produce both the highest high tides and the lowest low tides. In contrast, neap tides exert the least gravitational pull. Spring tides typically happen twice a month, but usually no more than once a month will spring low tides coincide with the heat of the midday sun.)
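To make that HLC ceiling concrete, here is a minimal sketch computing the average low water level from tide minima as defined above. The tide heights are invented for illustration, not real gauge data.

```python
# Minimal sketch: the vertical ceiling for "keep-up" coral (HLC in Fig. 4)
# is the mean of the neap low tides and the spring lower low tides.
# Values are illustrative assumptions (meters above a local datum).
neap_low_tides = [0.60, 0.58, 0.62]          # low tides at neap (m)
spring_lower_low_tides = [0.20, 0.15, 0.18]  # lower low tides at spring (m)

hlc = (sum(neap_low_tides) / len(neap_low_tides)
       + sum(spring_lower_low_tides) / len(spring_lower_low_tides)) / 2
print(f"Height-of-living-coral ceiling: {hlc:.2f} m")

# If mean sea level falls ~0.15 m (as in the Cairns tide-gauge example),
# coral that kept up with the old ceiling now sits above the new one:
print(f"New ceiling after a 0.15 m fall: {hlc - 0.15:.2f} m")
```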

When growing in deeper waters, a keep-up species like mounding Porites spp. grows at rates of 5 to 25 mm per year and forms dome-shaped colonies. However, due to increased aerial exposure when growth reaches the surface, or due to exposure from sea level fall, the uppermost surface dies from high air temperatures, higher UV damage and desiccation. This results in a flat-topped colony with the classic “micro-atoll” shape: dead coral in the center surrounded by a ring of live coral, as exemplified by a Kiribati micro-atoll in the photograph below.



Microatolls


Micro-atoll patterns have been crucial for reconstructing past fluctuations in sea level on decadal to millennial timeframes. As Ampou 2017 observed in Bunaken NP, mortality due to a drop in sea level was mostly restricted to the upper 15 cm of coral, which leads to the formation of micro-atolls.  So before simply assuming climate-change-warming has induced mortality, micro-atoll formation and other associated patterns indicative of sea level change must be examined. A short discussion on how sea level changes can shape micro-atolls can be read here.

Due to its regional sensitivity to the sea level change that accompanies an El Niño, the northern Great Barrier Reef has an abundance of fossil micro-atolls that have allowed researchers to estimate El Niño activity and fluctuating sea levels over the past 4000 years. They estimated 4000 years ago low water neap tides were at least 0.7 meters higher than they are at present. Studies of micro-atolls in the Cook Islands further to the east in the southern Pacific, suggest that by 1000 AD during the Medieval Warm Period, average sea level had fallen, but remained about 0.45 meters higher than today. During the Little Ice Age sea level fell to 0.2 meters below current levels during the late 1700s and early 1800s, before recovering throughout the 1900s.

Hughes 2017 wanted to emphasize GBR bleaching as a “global-scale event” in keeping with his greenhouse gas/global warming attribution, but bleaching and mortality were patchy on both local and regional scales. And although Hughes presented their analyses as “a fundamental shift away from viewing bleaching events as individual disturbances to reefs,” the unusually high mortality around Lizard Island demands a closer examination of individual reef disturbances. The lack of mortality in 2016 across the southern and central GBR was explained as a result of the cooling effects of Tropical Cyclone Winston, but that does not explain why individual reefs in those regions have not bleached at all, while others bleached only once, and still others bleached twice or three times since 1998. Hughes’ shift away from examining what factors affected individual reefs will most likely obscure the most critical factors and yield false attributions.




Hughes reported the various proportions of areal bleaching as degrees of severity. But that frightened many in the public who confused bleaching with mortality, leading some misguided souls to blog that the GBR was dead. However, bleaching without mortality is not a worrisome event no matter how extensive. Rates of mortality and recovery are more important indices of reef health. As discussed in the article The Coral Bleaching Debate: Is Bleaching the Legacy of a Marvelous Adaptation Mechanism or A Prelude to Extirpation?, all coral retain greater densities of symbiotic algae (symbionts) in the winter but reduce that density in the summer, which often leads to minor seasonal bleaching episodes that are usually temporary. Under those circumstances coral typically return to normal within weeks or months. Furthermore, by ejecting their current symbionts, coral can acquire new symbionts that promote greater resilience to changing environmental conditions. Although symbiont shifting and shuffling promotes adaptation to shifting ocean temperatures, symbiont shuffling cannot protect against extreme low tide desiccation, and dead desiccated coral can no longer adapt. Humans have little control over El Niños or low tides.

Hughes also contradicted past studies to mistakenly suggest that recurring bleaching in a given reef is evidence that corals are not adapting or acclimating. However bleaching happens for many reasons. Symbiont shuffling to better adapt to warmer waters does not guarantee adaptation to lower sea levels, cyclones or changes in salinity. Coral reefs deal with changing sea levels with rapid growth to keep-up as sea level rises, and then dying back when sea level falls. Decadal swings in regional sea level will likely cause decadal swings in bleaching and are not evidence of coral fragility.

Hughes 2017 modeled the 2016 GBR bleaching event as a function of surface ocean temperatures that surpass bleaching thresholds, although reefs will bleach below that threshold and will fail to bleach despite temperatures above it. Despite the fact that El Niños are well known to cause rapid sea level fall along the GBR, Hughes’ model never accounted for falling sea level. Nor did it account for past observations that falling sea levels induced bleaching when temperatures were below bleaching thresholds. More disturbing, because sea level fall caused bleaching in various reefs, some with good water quality and others with poor, Hughes asserted there was “no support for the hypothesis that good water quality confers resistance to bleaching.” However this contradicts an abundance of regional studies attributing increased coral disease and bleaching to high nutrient loading.

Woolridge 2013 argued that coral eject their algal symbionts and bleach when temperature, light and nutrients increase to a level that accelerates the symbionts’ growth. Increased growth consequently reduces the amount of energy transferred to the coral, resulting in ejection of the slacking symbiont. Because increased nutrient loads can promote increased symbiont growth at relatively lower temperatures, higher nutrient loads can promote bleaching at lower temperatures.

Furthermore, while coral’s symbiotic relationships allow them to recycle limited nutrients and outcompete seaweeds, higher nutrient loads enable greater seaweed growth, which reduces corals’ competitive advantage. Seaweeds have also been shown to harbor allelopathic chemicals that inhibit coral growth, and to serve as reservoirs for bacteria that cause coral diseases. Higher nutrient loads induce more dissolved organic carbon that bacteria feed upon, allowing disease-causing bacteria to rapidly multiply. Higher nutrient loads also increase the survival of crown-of-thorns larvae, which then increases coral depredation and bleaching.

In a 2013 experimental study, Chronic Nutrient Enrichment Increases Prevalence And Severity Of Coral Disease And Bleaching, Vega-Thurber reported that higher nutrient loads caused a “twofold increase in both the prevalence and severity of disease compared with corals in unenriched control plots” as well as a “3.5-fold increase in bleaching frequency relative to control corals.”

Although Hughes 2017 suggests the pattern of recurring bleaching is simply a function of temperature and global warming, as illustrated in Hughes’ Figure “e” below, recurring bleaching is not a global phenomenon. (Black dots represent reefs that bleached during all 3 surveys: 1998, 2002, 2016; light gray represents reefs that bleached only once, and dark gray reefs that bleached twice.) In most cases the degree of recurring bleaching does not predict the recurrence of bleaching in nearby reefs despite similar ocean temperatures. Although an El Niño generates widespread bleaching, bleaching is still a regional issue affecting individual reefs differently. During an El Niño sea level rises in the eastern Pacific and falls in the western Pacific. Recurring bleaching in the Far North and Southern regions of the GBR is uncommon, while recurring GBR bleaching has been frequent between Cooktown and Townsville, where temperatures have been quite variable. And in accord with prior research, the region between Cooktown and Townsville has suffered from lower water quality and higher nutrient loads, causing more frequent bleaching and greater crown-of-thorns attacks.


Reefs visited during 3 surveys and recurring bleaching


  
After perusing Hughes 2017, it was clear they had been led to incorrectly embrace the prevailing bias of CO2-induced catastrophic bleaching because they failed to address the fall in sea level before and during the 2016 El Niño, and likewise failed to address how the weather created by El Niños promotes clear skies and increased solar heating. To add insult to injury, because sea level drops bleached reefs in both good water quality and bad, and in both protected preserves and unprotected areas, Hughes 2017 presented a statistical argument that disparaged any significant value of ongoing conservation efforts to minimize bleaching by reducing nutrient loading and by protecting reefs from overfishing. By belittling or ignoring the most critical factors affecting coral bleaching other than temperature, Hughes suggested our only recourse to protect reefs “ultimately requires urgent and rapid action to reduce global warming.”

And because such an apocryphal analysis was published in Nature and will undoubtedly mislead coral conservation policies,

I wept.

Friday, March 3, 2017

How NOAA and Bad Modeling Invented an “Ocean Acidification” Icon: Part 2 - Bad Models





 Are the Oceans’ Upper Layers Really Acidifying?


Bad models, not measurements, have suggested ocean acidification in the upper layers of the oceans. As detailed in Part 1, NOAA’s Bednarsek incorrectly attributed the dissolution of sea butterfly shells to anthropogenic CO2, although the evidence clearly showed the natural upwelling of deeper low-pH waters was to blame. Based on models employed by NOAA’s Feely and Sabine, Bednarsek claimed the upper ocean layers are becoming more acidic and less hospitable to sea butterflies relative to pre-industrial times. However, detecting the location and the depth at which anthropogenic CO2 now resides is a very, very difficult task. Because the ocean contains a large reservoir of inorganic carbon, 50 times greater than the atmospheric reservoir, the anthropogenic contribution is relatively small. Furthermore, anthropogenic carbon comprises less than 2% of the combined CO2 entering and leaving the ocean surface each year. This very small signal-to-noise ratio prohibits accurate detection of anthropogenic CO2. Despite admittedly large uncertainties, modelers boldly attempt to infer which layers of the ocean are acidifying.

Sea Butterfly  Limacina helicina


(To clarify terminology: an organic carbon molecule is a molecule whose carbon is joined to one or more other carbons, as in carbohydrates and hydrocarbons. CO2, with a lone carbon, is considered inorganic, and when dissolved can take 3 forms (or “species”) collectively referred to as Dissolved Inorganic Carbon (henceforth DIC): 1) carbonic acid (H2CO3), 2) bicarbonate ion (HCO3-) after losing one H+, and 3) carbonate ion (CO3-2) after losing a second H+.)
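For readers who want the arithmetic behind those three species, here is a minimal sketch of the standard carbonate speciation calculation as a function of pH. The pK1 and pK2 values are approximate seawater constants near 25°C (assumptions; they vary with temperature and salinity).

```python
# Minimal sketch: fractions of the three DIC species at a given pH,
# from the standard two-step carbonate equilibria.
pK1, pK2 = 5.86, 8.92  # approximate seawater values (assumptions)

def dic_fractions(pH):
    """Return fractions of (H2CO3*, HCO3-, CO3--) at a given pH."""
    h = 10.0 ** -pH
    k1, k2 = 10.0 ** -pK1, 10.0 ** -pK2
    denom = h * h + h * k1 + k1 * k2
    return h * h / denom, h * k1 / denom, k1 * k2 / denom

for pH in (7.6, 8.1):  # upwelled deep water vs. typical surface water
    co2, hco3, co3 = dic_fractions(pH)
    print(f"pH {pH}: CO2 {co2:.1%}, HCO3- {hco3:.1%}, CO3-- {co3:.1%}")
# At typical surface pH, bicarbonate dominates and free CO2 is under 1%.
```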

However model results are based on three very dubious assumptions:

1) Models assume surface layers absorb anthropogenic CO2 by reaching equilibrium with atmospheric concentrations. With minor adjustments, models simply calculate how much dissolved inorganic carbon (DIC) will be added to the ocean based on increased atmospheric CO2 since pre-industrial times.

2) Models assume CO2 will diffuse into the upper ocean layers and be transported throughout the ocean in a similar fashion to tracers, like CFCs. Because CFCs accumulate disproportionately near the surface, models assume DIC does as well.

3) Models assume the biosphere is in a steady state. Thus they do not take into account increased primary production and the rapid export of carbon to depth.

Although there is no doubt anthropogenic CO2 is taken up by the oceans, assertions that ocean surface layers are acidifying are the results of faulty model assumptions.

What Equilibrium?

CO2 equilibrium is rarely achieved between ocean and atmosphere. Ocean surface pH, and thus calcium carbonate saturation levels, are determined by the efficiency of the biological pump. In other words, when, where, and how much CO2 enters the ocean surface depends on surface CO2 concentrations being lower than atmospheric concentrations. That difference depends on how much CO2 is fixed into organic carbon by photosynthesis and subsequently exported to depth, and on how much CO2 is upwelling. Photosynthesis indiscriminately draws down all CO2 molecules that have invaded surface waters, whether via upwelling from depth or by diffusion from the atmosphere. Despite the opposing effects of mixing and diffusion, the biological pump maintains a strong vertical gradient of high surface water pH and low DIC, with decreasing pH and increasing DIC at greater depths. In regions where strong upwelling of DIC from the deeper ocean overwhelms the ability of photosynthesizing organisms to sequester carbon, surface pH drops and CO2 is outgassed to the atmosphere. Several models estimate that without the biological pump, atmospheric CO2 would increase by 200 to 300 ppm above current levels.

The efficiency of the biological pump determines to what depths anthropogenic carbon will be transported. However NOAA’s Sabine does not model the effects of the biological pump, oddly stating “although ocean biology plays an integral role in the natural distribution of carbon in the ocean, there is no conclusive evidence that the ocean uptake and storage of anthropogenic carbon, thus far, involve anything other than a chemical and physical response to rising atmospheric CO2.”

Does Sabine truly believe the undeniable biological pump discriminates between anthropogenic and natural carbon? Or does he believe that there have been no changes in primary production and carbon export?  As primary production increases, so does the carbon export to depth. Annual primary production in the Arctic has increased by 30% since 1998. We can infer primary production increased in the Sargasso Sea based on a 61% increase in mesoplankton between 1994 and 2006. North Atlantic coccolithophores have increased by 37%  between 1990 and 2012. And primary production and carbon export in the Peru Current has dramatically increased since the end of the Little Ice Age. The increasing trend in primary production and accompanying carbon export is potent evidence supporting an alternative hypothesis that the biological pump has sequestered increased invasions of anthropogenic CO2.

An examination of seasonal changes in surface CO2 concentration illustrates how the biological pump determines when and how much CO2 enters the ocean, and how much DIC accumulates near the surface. As exemplified by the graph below from 2008 buoy data off the coast of Newport, Oregon (Evans 2011), each spring photosynthesis lowers ocean surface CO2 to 200 ppm, far below current atmospheric concentrations and much lower than what would be expected from equilibrium with a pre-industrial atmosphere. Spring surface waters are supersaturated, and any downwelling or mixing of these supersaturated waters cannot acidify upwelled water or subsurface layers. Furthermore, the springtime drawdown conclusively removes any anthropogenic CO2 residing in sunlit waters. Experiments also show CO2 is often a limiting nutrient, and added atmospheric CO2 stimulates photosynthesis: microcosm experiments found that when atmospheric CO2 was increased, the plankton community consumed 39% more DIC.

Daily CO2 concentrations in surface waters off Newport Oregon from Evans 2011



Upwelling season begins in summer and extends through fall. As illustrated above, upwelling events rapidly raise surface concentrations of CO2, reaching 1000 ppm. Physics dictates there can be no net diffusion from the atmosphere into the ocean when the oceanic concentration is higher than the atmospheric, so there are virtually no anthropogenic additions during upwelling season. Here any lowering of surface pH or calcium carbonate saturation must be due to upwelling.
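That sign rule is just the bulk air-sea gas exchange relation: net CO2 flux follows the difference between oceanic and atmospheric pCO2. Here is a minimal sketch using the seasonal values described in these paragraphs; the transfer coefficient is an invented stand-in (the real one depends on wind speed and solubility).

```python
def air_sea_co2_flux(pco2_sea_ppm, pco2_air_ppm, transfer_coeff=0.05):
    """Bulk-formula sketch: net flux follows the pCO2 difference.
    Positive = ocean outgasses to the atmosphere; negative = uptake.
    transfer_coeff is an invented stand-in for gas-transfer velocity
    times solubility; only the sign and relative size matter here."""
    return transfer_coeff * (pco2_sea_ppm - pco2_air_ppm)

ATM = 400  # round atmospheric pCO2 in ppm, an illustrative assumption

print(air_sea_co2_flux(200, ATM))   # spring bloom: strong uptake (negative)
print(air_sea_co2_flux(1000, ATM))  # upwelling event: outgassing (positive)
print(air_sea_co2_flux(340, ATM))   # winter surface value: mild uptake
```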

Finally during the winter, (not illustrated) surface waters exhibited a steady CO2 concentration of 340 ppm. Although photosynthesis is reduced, and winter mixing brings more subsurface carbon and nutrients to the surface, the surface remains below equilibrium with the atmosphere. Although surface concentrations are low enough to permit the invasion of atmospheric CO2, the biological pump continues to export that carbon to depth so that surface layers remain supersaturated all winter.

Diffusion of CO2 into the ocean is a slow process; it is believed to require about 1 year for the oceans to equilibrate with an atmospheric disturbance. But as spring arrives, increasing sunlight again enhances photosynthesis, so whatever anthropogenic CO2 may have invaded the surface over the course of the year is once again fully sequestered and pumped to depth, lowering surface CO2 concentrations to 200 ppm. Bednarsek’s claim that anthropogenic CO2 is acidifying the upwelled water along the Oregon-California coast is once again not supported.

Tracers Do Not Correctly Simulate Transport of Anthropogenic Carbon

Tracers like chlorofluorocarbons (CFCs) are synthetic gases that are biologically inert. They were introduced to the world during the 1920s primarily as a refrigerant. Climate scientists have assumed the physical transport and accumulation of CFCs and increasing anthropogenic carbon will be similar. Below in Figure 1, the red area just south of Greenland designates an area that has accumulated the most CFCs. This local concentration happens when high salinity Atlantic waters cool and carry surface water and its dissolved gasses downward to the abyss forming North Atlantic Deep Water. It is estimated that this downwelling has exported 18% of all CFCs below 1000 meters; implying dissolved anthropogenic carbon has been similarly exported and sequestered. However elsewhere CFCs accumulate disproportionally in upper surface layers, so models assume dissolved anthropogenic CO2 is likewise accumulating nearer the surface.




Both CFCs and CO2 are gases, and their solubility is similarly modulated by temperature. Warm waters of the tropics absorb the least amount of CFCs and CO2, as illustrated by the dark blue regions in Figure 1 from Willey 2004. Thus equatorial waters feeding the California Undercurrent that upwell along the west coast have likewise absorbed the least amounts of anthropogenic carbon, if any. (The extremely low level of CO2 diffusion into the tropical ocean, plus the supersaturation of tropical waters, casts great doubt on any claim that coral reefs have been significantly affected by anthropogenic acidification.)

However, unlike inert CFCs, any CO2 entering sunlit waters is quickly converted to heavy organic matter by photosynthesis. Although dissolved CFCs and dissolved carbon are passively transported in the same manner, particulate organic carbon (alive or dead) behaves very differently. Particulate carbon rapidly sinks, removing carbon from the surface to depth in ways CFC tracers fail to simulate. Examination of the literature suggests “various methods and measurements have produced estimates of sinking velocities for organic particles that span a huge range of 5 to 2700 meters per day, but that commonly lie between tens to a few hundred of meters per day”. Low estimates are biased by suspended particles that are averaged with sinking particles. Faster sinking rates are observed for pteropod shells, foraminifera, diatoms, coccolithophorids, zooplankton carapaces and fecal aggregations, etc., which are all capable of sinking 500 to 1000 meters per day. These sinking rates are much too rapid to allow respired CO2 from their decomposition to acidify either the source waters of upwelling, such as along the Oregon and California coast, or the surface waters.

Earlier experiments had suggested single cells sank very slowly, at rates of only 1 meter per day, and thus grossly underestimated carbon export. However single-cell organisms will aggregate into clusters that increase their sinking rates. Recent studies revealed the “ubiquitous presence of healthy photosynthetic cells, dominated by diatoms, down to 4,000 m.” Based on the length of time healthy photosynthesizing cells remain viable in the dark, sinking rates are calculated to vary from 124 to 732 meters per day, consistent with a highly efficient biological pump. Although NOAA’s scientists have expressed concern that global warming will reduce the efficiency of the biological pump by shifting the constituents of phytoplankton communities to small, slow-sinking bacteria, new research determined that bacteria also aggregate into clusters with rapid sinking rates ranging from 440 to 660 meters per day.

Sequestration of carbon depends on sinking velocities and on how rapidly organic matter is decomposed. Sequestration varies in part due to variations in the phytoplankton communities. Depths of 1000 meters are considered to sequester carbon relatively permanently, as waters at those depths do not recycle to the surface for 1000 years. Weber 2016 suggests 25% of the particulate organic matter sinks to 1000 meter depths in high latitudes, while only 5% reaches those depths at low latitudes. But long-term sequestration does not require sinking to 1000 meter depths. Long-term sequestration requires sinking below the pycnocline, a region where density changes rapidly. Dense waters are not easily raised above the pycnocline, so vertical transport of nutrients and carbon is inhibited, creating long-term sequestration. Because the pycnocline varies across the globe, so do sequestration depths.

Below on the left is a map (a) from Weber 2016, estimating to what depths particles must sink in order to be sequestered for 100 years. Throughout most of the Pacific particles need only sink to depths ranging from 200 to 500 meters. In contrast the golden regions around the Gulf Stream, New Zealand and southern Africa must sink to 900 meters.

The map on the right (b) estimates what proportion of organic matter leaving the sunlit waters will be sequestered. The gold in the Indian Ocean estimates 80% will reach the 100-year sequestration depth, while 60% will reach sequestration depths along the Oregon-California coast, again casting doubt on Bednarsek’s claims that recently acidified upwelled waters dissolved sea butterfly shells. Elsewhere in map “b”, 20% or less of the exported carbon reaches sequestration depths.

Estimation of sequestration depths and proportion of carbon reaching sequestration



The combination of sinking velocities and sequestration depths suggests significant proportions of primary production will be sequestered in a matter of days to weeks. This is consistent with the maintenance of the vertical DIC and pH gradients detected throughout our oceans. However, it conflicts with claims by NOAA’s scientists.
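As a rough check on that days-to-weeks claim, here is a minimal sketch combining the sinking-rate and sequestration-depth ranges quoted above; the particle classes and exact rates are illustrative picks from those ranges, not measurements.

```python
# Days for sinking particles to reach a 100-year sequestration depth.
# Rates and depths are illustrative picks from the ranges quoted above
# (commonly tens to hundreds of m/day; ~200-900 m sequestration depths).
sinking_rates_m_per_day = {
    "slow common particles": 50,
    "aggregated photosynthetic cells": 124,
    "pteropod shells / fecal aggregates": 1000,
}

for depth_m in (200, 500, 900):  # shallow, mid, and deep cases from Weber 2016
    for particle, rate in sinking_rates_m_per_day.items():
        print(f"{particle}: {depth_m / rate:5.1f} days to {depth_m} m")

# Even at 50 m/day, the deepest case takes ~18 days: days to weeks,
# not the "centuries" assumed for passive downward mixing.
```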

Biased by CFC observations Sabine wrote, “Because anthropogenic CO2 invades the ocean by gas exchange across the air-sea interface, the highest concentrations of anthropogenic CO2 are found in near-surface waters. Away from deep water formation regions, the time scales for mixing of near-surface waters downward into the deep ocean can be centuries, and as of the mid-1990s, the anthropogenic CO2 concentration in most of the deep ocean remained below the detection limit for the delta C* technique.”

That NOAA scientists fail to incorporate the fact that particulate carbon can be sequestered to harmless depths in a matter of days to weeks, instead of “centuries,” appears to be the cause of their catastrophic beliefs about ocean acidification. Furthermore, because CFCs have accumulated near the surface with only minuscule amounts in the deeper ocean, the tracer provides absolutely no indication of how upwelling brings ancient DIC to the surface. So by relying on a CFC tracer, their models will mistakenly assume that increased concentrations of DIC near the surface must be due to accumulating anthropogenic carbon, and not upwelled ancient carbon.

The Ocean’s Biosphere Steady State?


Given a steady export percentage of primary productivity, increasing amounts of carbon will be exported in proportion to increasing productivity. Thus it is reasonable to hypothesize that if marine productivity has increased since the end of the Little Ice Age (LIA), aka pre-industrial times, that increased production will have sequestered the increasing amounts of anthropogenic carbon. Although there are only a few anoxic (oxygen-depleted) ocean basins where organic sediments can be well preserved, those basins all reveal that since the Little Ice Age, marine productivity and carbon export have indeed increased as the oceans warmed.

Research from Chavez 2011, illustrated below, demonstrates that during the LIA marine primary productivity (d) was low, but it has increased 2- to 3-fold over the recent 150 years. Sediments reveal that fast-sinking diatoms increased 10-fold at the end of the LIA; but, likely due to silica limitations, since 1920 diatom flux to ocean sediments has been reduced to about a 2-fold increase over the LIA. Nonetheless, numerous studies find that sedimentary diatom abundance is representative of carbon export production in coastal upwelling regions.





The increased primary production coincides with an over hundred-fold increase in fish scales and bones (f). And consistent with the need for increased nutrients to support increased primary production, proxy evidence suggests a 2-fold increase in nutrients in the water column (c). Such evidence is why researchers have suggested their observed decadal increases in upwelled DIC and nutrients might be part of a much longer trend. Finally, in contrast to the global warming explanation for depleted ocean oxygen, the decomposition of increased organic carbon provides a more likely explanation for the observed decreases in oxygen concentrations in the water column (a) and sediments (b). Because primary production had doubled by 1900, long before global warming or before anthropogenic CO2 had reached significant concentrations, it is unlikely anthropogenic CO2 contributed to increased upwelling, increased primary production, or any other trends in this region.

However, increased primary production alone does not guarantee that sinking particulate carbon removes enough carbon to counteract anthropogenic additions. Yet there are dynamics that suggest this must be the case. First, consider that examination of the elements constituting a phytoplankton community reveals a common ratio of 106 carbon atoms for every 16 nitrogen atoms (the Redfield ratio). Given that nitrogen typically limits photosynthetic production, if carbon and nitrogen are upwelled in the same Redfield proportion, then unless other dynamics create an excess of nitrogen, photosynthesis might assimilate only the upwelled carbon and not the additional anthropogenic carbon.

However, calcifying organisms like pteropods, coccolithophores and foraminifera export greater proportions of inorganic carbon because their sinking calcium carbonate shells lack nitrogen. This can create an excess of nitrogen relative to upwelled carbon in surface waters. Second, diazotrophs are organisms that convert atmospheric nitrogen into biologically useful forms. Free-living diazotrophs like the cyanobacterium Trichodesmium can be so abundant that their blooms are readily observed. (The blooms of one species are primarily responsible for the coloration of the Red Sea.) Some diazotrophs form symbiotic relationships with diatoms and coral. So diazotrophs can create an excess of nitrogen that allows photosynthesis to assimilate both upwelled carbon and anthropogenic carbon. Furthermore, as discussed in Mackey 2015 (and references therein), “To date, almost all studies suggest that N2 fixation will increase in response to enhanced CO2.”
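The nitrogen bookkeeping in the last two paragraphs can be made concrete with a small sketch of the Redfield argument: available nitrogen supports photosynthetic uptake of 106/16 times as much carbon, so any extra nitrogen from diazotrophs (or nitrogen left in surplus by sinking carbonate shells) allows assimilation of carbon beyond what was upwelled. The mole quantities below are invented for illustration.

```python
REDFIELD_C_PER_N = 106 / 16  # carbon atoms fixed per nitrogen atom

def assimilable_carbon(upwelled_n, fixed_n_from_diazotrophs):
    """Carbon that photosynthesis can assimilate given total usable N."""
    return (upwelled_n + fixed_n_from_diazotrophs) * REDFIELD_C_PER_N

upwelled_n = 16.0   # illustrative moles of upwelled nitrogen
upwelled_c = 106.0  # carbon upwelled in the same Redfield proportion

extra = assimilable_carbon(upwelled_n, fixed_n_from_diazotrophs=2.0) - upwelled_c
print(f"Extra carbon uptake enabled by N2 fixation: {extra:.1f} moles")
# That surplus capacity is what could absorb invading anthropogenic CO2.
```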

All things considered, the evidence suggests NOAA scientists have an upside-down characterization of the ocean’s “steady state.” There is no rigid rate of primary production and export that prevents assimilating anthropogenic carbon and pumping it to depth. On the contrary, the combined dynamics of nitrogen fixation and the biological pump suggest the upper layers of the ocean have likely maintained a pH homeostasis, or a pH steady state, at least since pre-industrial times. Increases in atmospheric CO2, whether from natural upwelling or from anthropogenic sources, are most likely assimilated quickly and exported to ocean depths where they are safely sequestered for centuries and millennia. As also discussed in the article How Gaia and Coral Reefs Regulate Ocean pH, claims that the upper ocean has acidified since preindustrial times are not measurements, but merely results from modeling a “dead” ocean while ignoring critical biological processes.