A significant outbreak of disease in farm animals could strike a major blow to the U.S. economy, and, if the disease is transmissible to humans, could also result in loss of human life. U.S. research capabilities designed to rapidly detect, diagnose, and respond to diseases, as well as to develop new vaccines and, when possible, cures, have been built up significantly in recent years. But is it enough? A report from the National Research Council, sponsored by the Department of Homeland Security, found that U.S. laboratory capabilities are lacking in one key area: the ability to study the most dangerous germs in livestock such as cows and pigs.
What are the risks of a disease outbreak?
Agriculture and food are a key part of the U.S. economy. A great deal of attention has been paid to foot-and-mouth disease because of its potential economic impacts. For example, an outbreak in the UK in 2001 lasted 221 days and resulted in 2,026 infected farms and 6 million lost animals, at an estimated cost of $10.7-11.7 billion. The United States has been successful in fighting foot-and-mouth disease—the last outbreak was in 1929—but recent economic studies have shown that a major U.S. outbreak today could result in $20-30 billion or more in economic losses.
What is foot-and-mouth disease?
Foot-and-mouth disease (Aphthae epizooticae) is an infectious and sometimes fatal disease that affects an array of agricultural animals, including cattle, sheep, goats, and pigs. The disease typically begins with a fever, followed by blisters in the mouth or on the feet that can lead to lameness. It is often referred to as a plague because it is highly infectious and can spread through aerosols, contact with contaminated farming equipment, and infected wild animals. Containment of the disease requires vaccination, strict monitoring, quarantine, and occasionally the elimination of infected animals.
Animal diseases that can spread to humans could also pose a significant public health threat. For example, the first known human epidemic of avian flu (strain H5N1), which occurred in Hong Kong in 1997, was linked to chickens. Since then, H5N1 has been reported across Asia (including Indonesia and Vietnam), Africa, Europe, the Pacific, and the Near East. Of the hundreds of people who have been sickened, slightly more than 60% have died. Concerns about outbreaks of animal diseases have grown as global travel and trade are on the rise, and as it has become clearer how quickly pathogens can circle the globe.
Laboratories for All Kinds of Study
To accomplish the goals of protecting agriculture as well as human health, the United States has a laboratory infrastructure that can handle various types of animals, from mice and rabbits to chickens and sheep, with a few facilities able to handle larger livestock. The facilities also vary in their “biosafety” level, which dictates in part what type of research can be carried out and what level of protection is needed for studying the pathogens. The facilities range from biosafety level 1 (BSL-1), which is for the study of agents that do not cause disease in humans and present minimal danger to lab personnel and the environment, up to biosafety level 4 (BSL-4), which is required for work with the most dangerous disease agents that pose a high risk of laboratory infections and for which there are no vaccines or treatments.
What are biosafety levels?
Biosafety levels refer to specific combinations of work practices, safety equipment, and facilities designed to minimize the exposure of workers and the environment to infectious agents.
Biosafety Level 1 (BSL-1) is suitable for work with well-characterized agents not known to consistently cause disease in healthy adults, and that present minimal danger to lab personnel and the environment.
Biosafety Level 2 (BSL-2) is suitable for work involving agents that pose moderate hazards to personnel and the environment.
Biosafety Level 3 (BSL-3) is for clinical, diagnostic, teaching, research, or production facilities that work with indigenous or exotic agents that may cause serious or potentially lethal disease through inhalation exposure.
Animal Biosafety Level 3 (ABSL-3) is suitable for work with laboratory animals infected with indigenous or exotic agents, agents that are transmitted by particles traveling in air, and agents causing serious or potentially lethal disease.
Biosafety Level 3 Enhanced (BSL-3E) is for diagnostic testing on specimens with hemorrhagic fevers thought to be due to dengue or yellow fever viruses.
Biosafety Level 3 Agriculture (BSL-3Ag) is for research on high consequence livestock pathogens, such as foot-and-mouth disease.
Biosafety Level 4 (BSL-4) is required for work with dangerous and exotic agents that pose a high risk of aerosol-transmitted laboratory infections and life-threatening disease, for which there are no vaccines or treatments, or a related agent with unknown risk of transmission.
Adapted from the Centers for Disease Control and Prevention glossary, http://www.cdc.gov
Today, most U.S. research on foot-and-mouth disease is carried out at the Plum Island Animal Disease Center (PIADC), a biosafety level 3 agriculture (BSL-3Ag) facility located off the coast of Long Island, NY. A substantial number of BSL-3 and BSL-4 facilities have been constructed in the United States over the past 10 years by federal and state agencies and universities. However, current BSL-4 facilities in the United States are not equipped to handle livestock, and BSL-3 facilities on the mainland are not authorized to work on the foot-and-mouth disease virus. The report also concludes that the Plum Island facility is aging and increasingly cost-inefficient.
The Case for More Capability to Study Agricultural Animal Diseases
The biggest need for a BSL-4 facility able to handle livestock is to study emerging or unknown infectious agents that might be on the horizon. Several known viruses for which there is as yet no cure, for example Nipah virus in swine, could cause significant damage to the U.S. economy and human health if they were to arrive in the United States.
There are a few BSL-4 facilities outside the United States, each with the capability to handle livestock species. Depending on the timing of a disease event, those facilities might be willing to collaborate with U.S. scientists to research an emerging pathogen, but their primary responsibility is to their own national needs. Research can also be complicated by logistics: facilities such as the one in Australia require their own training and certification of personnel, which can pose a barrier to scientists from other nations using the facility.
It is clear: the possibility of an outbreak of an emerging disease in agricultural animals is a real threat, one which we are currently not fully equipped to address. The NRC report also reviews three options proposed by the Department of Homeland Security for increasing U.S. capabilities, including building a proposed new BSL-4 facility on the U.S. mainland, building a scaled-down version of that facility, and maintaining the Plum Island Animal Disease Center while leveraging BSL-4 animal capabilities abroad.
Figure 1. Selected federal, state, and national BSL-3, BSL-3Ag, and BSL-4 facilities. Courtesy of Alisha Prather, Galveston National Laboratory University of Texas Medical Branch.
Recently, a number of news stories have appeared on the rare earth elements, a group of 17 chemical elements with similar properties that are crucial to technologies ranging from hybrid cars to cell phones. The United States relies on foreign sources of rare earth elements, raising concerns that if prices climb or supply is cut off, some parts of the domestic economy could be significantly affected. Although the rare earth elements are getting most of the press, several other types of minerals—many of them imported—are also potentially of concern for a diverse manufacturing economy. These issues are the subject of a 2007 report, Minerals, Critical Minerals, and the U.S. Economy, which makes the point that keeping a watchful eye on mineral supply and demand could help to avoid disruptions to the nation’s economy and security.
What are the rare earth elements?
The “rare earth elements” are a set of chemical elements with similar properties. Although rare by name, the rare earth elements are not necessarily rare in nature, and are actually quite plentiful in the earth’s crust. However, because of their chemical properties, rare earth elements are not often found in concentrated, easily mined ore deposits. Consequently, most of the world’s supply of rare earth elements comes from only a handful of sources.
Source: U.S. Geological Survey Rare Earth Element Factsheet, http://pubs.usgs.gov/fs/2002/fs087-02/
Europium, one of the rare earth elements.
What Makes a Mineral Critical?
Chemicals derived from minerals are part of virtually every product we use. Their unique properties contribute to the provisioning of food, shelter, infrastructure, transportation, communications, health care, and defense. Every year more than 25,000 pounds of new mineral products must be provided for every person in the United States just to make items we use every day, and a growing number of these minerals are imported.
The report’s authoring committee was careful to point out that a reliance on foreign sources of minerals is not necessarily a cause for concern. However, a better understanding of exactly which minerals are most critical to the economy would allow planning to ensure mineral resources are available in time—and at acceptable costs—to meet demands. To help address this issue, the committee developed a “criticality matrix” to assess how critical a particular mineral is for the United States. The extent to which a mineral is considered critical is determined both by its importance (vertical axis) and by its susceptibility to supply restrictions (horizontal axis).
The mineral criticality matrix.
Usefulness of Minerals
Minerals vary in usefulness based on the demand for each mineral from different sectors of the U.S. economy. Depending on a mineral’s chemical and physical properties, some minerals are more crucial for specific uses than others. For example, platinum group metals and rare earth elements are fundamental to the construction and function of catalytic converters. Because no viable substitutes exist for these minerals in this application, restrictions in supplies of platinum group metals and rare earth elements would threaten the manufacture of the current generation of catalytic converters. In general, the greater the difficulty, expense, or time needed to find a suitable substitute for a given mineral, the greater the impact of a restriction in the mineral’s supply.
Availability of Minerals
The availability of any mineral supply is based on factors such as mineral resources, the difficulty of extracting and processing the mineral, the environmental and social issues associated with mineral extraction, political considerations, and the economic cost of extracting the mineral. Supply risks can increase if mineral production is concentrated in a small number of mines, companies, or countries. For example, part of the concern about rare earth elements is that domestic sources have not been widely explored or mined, so the United States relies on imports of rare earths from other countries, in particular China, which now supplies approximately 97 percent of the world’s rare earths.
In the report, 11 minerals are assessed to demonstrate how the matrix might be used to assess mineral criticality. Of this group, platinum group metals, rare earths, indium, manganese, and niobium are the most critical.
Applying the Criticality Matrix. This matrix shows the criticality of 11 minerals. The circle for each mineral represents its composite score, on a scale of 1 to 4 along each axis, for the impact of a supply restriction and for supply risk.
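To make the two-axis screening concrete, here is a minimal sketch in Python of how a mineral’s position on the matrix translates into a criticality call. The scores and the 3.0 cutoff are hypothetical placeholders for illustration, not values taken from the NRC report.

```python
# Illustrative sketch of the criticality matrix described above.
# Each mineral gets two scores on a 1-4 scale: the impact of a supply
# restriction (vertical axis) and the supply risk (horizontal axis).
# All numbers below are hypothetical placeholders, not the report's values.

minerals = {
    "platinum group metals": (3.6, 3.4),
    "rare earth elements":   (3.2, 3.7),
    "indium":                (3.0, 3.2),
    "manganese":             (3.3, 3.1),
    "copper":                (2.2, 1.8),
}

def in_critical_corner(impact, risk, cutoff=3.0):
    """A mineral is flagged as most critical when BOTH scores are high,
    i.e., it sits in the upper-right corner of the matrix."""
    return impact >= cutoff and risk >= cutoff

for name, (impact, risk) in sorted(minerals.items()):
    label = "most critical" if in_critical_corner(impact, risk) else "lower concern"
    print(f"{name:24s} impact={impact:.1f}  risk={risk:.1f}  -> {label}")
```

The point of the matrix is exactly this joint test: a mineral that is important but easy to source, or risky to source but easy to substitute, falls outside the critical corner.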
Planning for Disruptions in Mineral Supply
An improved understanding of the criticality of minerals allows decision makers to develop strategies to limit the effects of possible restrictions in mineral supply. Since the release of the report in 2007, members of the report’s authoring committee have testified before numerous House of Representatives and Senate committees. Following testimony for the Committee on Science and Technology, the House passed a bill in September 2010 authorizing research to address the supply scarcity of rare earth minerals and addressing the larger, long-term issue of critical materials supply. The bill set up a program of research and development aimed at advancing technology affecting rare earths throughout their life cycle, from mining and manufacturing through to recycling. In addition, the bill authorized research to find substitutes for rare earth materials in manufacturing and to find ways of reducing their usage. (http://archives.democrats.science.house.gov/press/PRArticle.aspx?NewsID=2932)
It’s hard to believe that more than a year has passed since the Deepwater Horizon oil spill in the Gulf of Mexico. In the meantime, we’ve seen the tar balls recede and the plumes of oil dissipate, and from what we hear in the news, oil-damaged ecosystems are slowly returning to normal. But one question that keeps coming up is, “just what is normal?” It turns out that for many Gulf species, including the iconic and beloved sea turtle, too little is known about populations, growth rates, or breeding patterns to answer that question with any certainty.
Unfortunately, this isn’t the first time that scientists have lacked the critical data they need to monitor ecosystem responses to an environmental disaster. In the wake of the Exxon Valdez oil spill, evaluating the effects on wildlife was difficult because of limited information. Now, more than 20 years later, we’re in the same position again. A 2010 National Research Council report, Assessment of Sea-Turtle Status and Trends: Integrating Demography and Abundance, outlined exactly why so little data exist and what would be needed to properly assess the impacts of the spill on Gulf ecosystems.
Assessing Sea Turtle Populations
The first problem is getting an accurate count of population sizes. Sea turtles migrate long distances between seasons and at different stages of their lives, which makes counting them in any particular geographic region difficult. Most monitoring today relies mainly on counting nests and females on beaches. However, this doesn’t give a complete picture of population numbers, because females can take decades to reach sexual maturity, adults do not nest every year, and nesting females represent only a small fraction of the total sea turtle population.
Simple counts of nests also fail to provide any information on trends in the abundance of sea turtles or on the causes of changes in population. For example, the nesting of loggerhead turtles on Florida beaches has been monitored since 1989 (shown right). Until 1998, nest numbers increased; but more recently, the number of nests has declined rapidly. Many factors could account for this decline, from environmental changes to increased accidental capture in trawling nets. But without more information, specific causes cannot be determined.
For those and other reasons, the long-term effects of the recent Gulf oil spill on sea turtle species cannot be completely evaluated, and the success of restoration plans remains unknown.
What Information is Needed?
The National Research Council report identifies several kinds of information that could improve understanding of sea turtle populations.
Because sea turtles migrate—sometimes leaving an area for many years—scientists must piece together what might have happened to the animals during that time. The report concludes that it is important to also look at the different ages of the turtles within populations and to integrate those data with what is known about sea turtles’ birth rate, survival, growth rate, and age at maturity. This information would help scientists figure out if the oil spill impacted some sea turtle populations or some age classes within a population differently than others. New tools in genetics, such as DNA markers, can help to identify individual sea turtles and associate them with a particular population. Scientists then can use that information to determine whether sea turtles that swam in waters affected by the oil spill came disproportionately from some sites, perhaps already depleted, as opposed to others, possibly more robust.
A sea turtle swims near oiled Sargassum algae. (Credit: Carolyn Cole/LA Times)
The report also noted that all too often, the information collected by one research organization is not accessible to other researchers because the methods for collecting and analyzing data are not standardized or because of issues with data ownership and sharing. Incentives to encourage data sharing, for example through funding, would make it easier for researchers to gain access to all the necessary information.
Another issue is the permitting process for such research. Even before the Deepwater Horizon spill, sea turtles were listed as endangered, meaning that special permits were required to carry out research. However, most sea turtle researchers agree that the permitting process is a greater obstacle to research than is necessary to protect sea turtles and can delay or hamper important research projects and conservation efforts.
Getting accurate assessments of sea turtle populations will require interdisciplinary research among experts on topics such as population genetics and genomics, statistics, and bioinformatics. One way to provide this expertise would be to launch interdisciplinary training in these topics for fisheries and conservation professionals.
Looking to the future, we can’t rule out another oil spill or other environmental disaster. But with some work, we’ll be better prepared to assess and understand changes in wildlife and ecosystems, and design plans to restore habitats, before the crisis occurs. Let’s hope that the Deepwater Horizon spill – the largest offshore oil spill in U.S. history – was the impetus we needed to get there.
My heart goes out to the people of Japan as they clean up and rebuild following the devastating earthquake and tsunami. The earthquake struck with almost no warning, and reports indicate that tsunami waves as high as 33 feet in some areas reached Japan’s northeast shores within 30 minutes of the 8.9 magnitude quake. Many of us have seen the incredible videos that show waves of water and debris tragically washing away homes, businesses, and schools, and tossing cars, ships, and planes as if they were toys.
Because the Japanese tsunami formed close to shore and traveled to land quickly, there was simply not adequate time to evacuate everyone from such a large area before the waves hit the coast. Similar conditions prevailed in the Indian Ocean tsunami of 2004 with the added problem that people were not well educated about the warning signs and dangers of a tsunami. It was widely reported that many people in Sumatra and other areas even rushed towards the receding waves to gather fish from the sea floor.
Japan is among the most prepared of all nations for earthquakes and tsunamis. A relatively high percentage of its buildings are engineered to withstand earthquakes with shock absorber-like features that give buildings flexibility to move with seismic waves instead of collapsing. Japan has lined about 40% of its coastlines with concrete seawalls, breakwaters or other structures meant to protect the country against high waves and typhoons, or even tsunamis.
But perhaps the single most important survival factor of all is that Japan’s people are well educated from a young age on how to respond to both earthquakes and tsunamis. This is likely the reason that, despite significant losses of human life, even more lives were not lost, given that 30-foot-high waves swept over an area home to about 1 million people and pushed more than 2 miles inland. Many of the people knew what to do.
Past tsunamis have cost lives and property in many coastal areas of the United States including Hawaii, Alaska, Puerto Rico, American Samoa, the Virgin Islands, California, and Oregon, but there have been relatively few fatalities from tsunamis compared to more common disasters such as floods, hurricanes, and tornadoes. Because devastating tsunamis are relatively infrequent in the United States, it is more difficult to sustain awareness and preparedness.
However, the devastating effects of the Indian Ocean tsunami—an estimated 240,000 dead and many villages washed away—prompted U.S. legislation in 2005 to improve the U.S. tsunami forecasting systems. In addition, Congress asked the National Research Council to review tsunami detection and warning networks and to identify ways the country could become more prepared for tsunamis. The resulting expert report, Tsunami Warning and Preparedness, was released in July 2010.
The report concludes that the U.S. tsunami detection systems have generally improved since 2004. The United States has two Tsunami Warning Centers, located in Hawaii and Alaska. These centers are in charge of monitoring seismic activity and collecting data from DART (Deep-ocean Assessment and Reporting of Tsunamis) buoys, which are placed at intervals in the Pacific Ocean. These buoys monitor changes in the water pressure on the ocean floor to detect the passage of a tsunami. The National Oceanic and Atmospheric Administration has made significant progress in expanding the DART network, manufacturing and deploying an array of 39 buoys, establishing 16 new coastal sea level gauges, and upgrading 33 existing water level monitoring stations.
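The core of what a DART buoy does, conceptually, is simple: compare the bottom-pressure record (expressed as water-column height) against the expected tidal signal and flag departures larger than a few centimeters. The sketch below illustrates that idea only; the threshold and the toy data are assumptions for illustration, not NOAA’s operational algorithm.

```python
# Conceptual sketch of tsunami detection from bottom-pressure data:
# flag samples whose water-column height departs from the expected
# (tide-predicted) value by more than a small threshold.
# The threshold and data are illustrative assumptions, not NOAA's algorithm.

def flag_anomalies(observed_m, expected_m, threshold_m=0.03):
    """Return the indices where |observed - expected| exceeds threshold_m."""
    return [i for i, (obs, exp) in enumerate(zip(observed_m, expected_m))
            if abs(obs - exp) > threshold_m]

# Toy record: a ~5 cm bump (a passing wave) rides on a slowly rising tide.
expected = [4000.00, 4000.02, 4000.04, 4000.06, 4000.08, 4000.10]
observed = [4000.00, 4000.02, 4000.09, 4000.11, 4000.08, 4000.10]

print(flag_anomalies(observed, expected))  # -> [2, 3]
```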
Based on analysis of these data, the Tsunami Warning Centers issue alerts to the appropriate emergency managers. Information collected from sea-level sensors generates forecasts of tsunami wave heights, which are then used to adjust or cancel warnings, watches, and advisories. These data also reveal tsunamis from sources that do not generate seismic waves, such as sea floor landslides.
DART system information is useful and important for distant coastlines. However, for nearby shorelines, tsunami warnings must be issued within minutes based on seismic information available immediately after the earthquake. The fact is that many U.S. coastal regions could experience earthquakes near their shores, as was the case in Japan. In particular, communities along the “ring of fire,” including Alaska, Hawaii, and the West Coast, still face major challenges in responding to the threat of near-shore tsunamis. Even if a tsunami warning is issued quickly, there is simply not enough time for emergency managers to disseminate the warning message and order an evacuation.
Therefore, the report concludes that no matter how good tsunami detection systems become, surviving a tsunami generated close to shore depends mostly on the ability of people to recognize the warning signs and to immediately head for high ground. People need to be able to recognize natural cues, such as the shaking of the ground from the tsunami-generating earthquake and, as seen in the Indian Ocean tsunami, the receding of ocean waters, and immediately know what to do without official warnings or evacuation instructions. Education and outreach are thus important components of U.S. tsunami preparedness for minimizing deaths and injuries. For a lengthier summary of the report’s findings, click here to read the four-page report in brief.
The United States and other nations can learn from Japan. With an effort to educate the public about the warning signs of tsunamis, future losses can be minimized.
SOURCE: National Oceanic and Atmospheric Administration.
Often thought of as a single, massive wave, tsunamis are in fact a series of waves triggered by events that displace a large volume of water, such as earthquakes, undersea volcanic eruptions, and undersea landslides. An earthquake that occurs on the ocean floor where tectonic plates collide has the potential to generate destructive tsunami waves that can move at speeds averaging 450 (and up to 600) miles per hour, travel thousands of miles, and inundate low-lying coastal areas. The Japanese tsunami propagated rapidly across the Pacific, causing 6-7 foot surges in Hawaii and reaching the U.S. west coast in about 12 hours – enough time to warn people to stay away from the water, but still causing significant damage in some harbors. SOURCE: National Geophysical Data Center, National Oceanic and Atmospheric Administration.
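The speeds quoted above follow from a standard piece of physics: in the open ocean, a tsunami travels at roughly the shallow-water wave speed, which depends only on the water depth. As a rough check, assuming a typical Pacific depth of about 4,000 meters (an illustrative value, not taken from the sources above):

\[ c = \sqrt{g\,h} = \sqrt{9.8\ \mathrm{m/s^2} \times 4{,}000\ \mathrm{m}} \approx 198\ \mathrm{m/s} \approx 440\ \mathrm{mph}, \]

which matches the cited average of about 450 miles per hour; over deeper stretches of ocean (6,000-7,000 meters), the same formula gives speeds approaching 600 miles per hour.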
Recently, we’ve heard a great deal about the impacts of humans on warming the planet and on altering the future of the Earth’s climate system. However, emerging research, which I find interesting, suggests the reverse may also be true over past millennia: natural fluctuations in the climate of prehistoric Earth may have helped shape the evolution of our earliest ancestors into modern-day humans. This possibility is discussed in a recent National Research Council report, Understanding Climate’s Influence on Human Evolution.
Some Species Carry on, Some Don’t
A highly simplified summary of human evolution over the past 8 million years.
The path to the modern-day human has been a long one—scientists estimate that the split that led to separate human and chimpanzee lineages occurred between 6 and 8 million years ago. Since then, a succession of hominid species has come and gone, with changes such as the ability to walk upright, the development of stone tools, and the emergence of social behaviors happening over time.
It’s not hard to imagine that environment played a role in the evolution of the earliest humans, as it has for all species.
The earth system—the combination of land, atmosphere, and oceans that make up our environment—is constantly changing. Some changes are predictable, like seasonal shifts in temperature or the daily transition between hours of light and dark. But over the millions of years that it has taken humans to evolve, the Earth has endured some large-scale fluctuations in climate that in turn caused vast changes in habitat, such as shifts between ice ages and hot, arid conditions.
Adapting to Surroundings
Struggling to survive as conditions changed, some species became extinct, while others moved to new locations with more favorable habitats, and still others gradually evolved to become better adapted to the new conditions.
For example, as cooler and drier climate caused the expansion of grasslands across Africa, one species of prehistoric antelope evolved teeth that could chew through the tough grass that grew on the plains— an advantage over pre-existing species that could not eat the coarse grass.
Similarly, scientists think that humans also evolved specific traits that could give an advantage in their habitat. This idea was the basis of one of the earliest theories of human evolution, the savannah hypothesis. The theory stated that many important human adaptations arose as a result of the expansion of grasslands across Africa – for example, as the landscape became more open, with fewer trees, our ancestors developed the ability to walk upright in order to move across the savannah more easily.
Evolution: a process in which genetic changes accumulate over time
Versatility is Key to Survival
Although the savannah hypothesis remains well respected, in recent years several other ideas about human evolution have emerged. One alternative hypothesis is based on the realization that ancient climate change didn’t occur in just one direction. By analyzing ancient sediments, layers of rock, and the fossilized remains of plants and animals, scientists have pieced together a record of prehistoric climate that suggests conditions fluctuated greatly over thousands of years, from monsoon to ice age and back again.
Under this hypothesis, simply becoming specialized to suit a specific environment wasn’t always useful in these conditions of changing climate. Many species, such as some types of musk oxen (see box), went extinct as shifts in climate changed landscapes. In contrast, the species that fared best were those that could cope with new conditions, not those adapted for just one specific habitat.
The Decline of the Musk Oxen
Studying the DNA of long-extinct species of musk oxen, scientists have found that populations of the animals began to decline in the midst of climate oscillations. This suggests these species were unable to survive as climate change made critical resources such as food and water scarce.
Scientists favoring this alternative hypothesis think that this prehistoric climate change may have shaped human evolution by driving our early ancestors to become specialists in versatility. Proponents of the idea point out that during the last six million years – the time when modern-day humans were evolving – there have been periods of particularly variable climate, although in the past 100,000 years climate has been relatively stable.
They believe that in the struggle for survival, the ability to cope with changing climate gave modern-day humans an edge over other species. For example, the Neanderthals, close evolutionary cousins of modern-day humans, had relatively large brains, used tools, and even adapted to changing conditions such as ice ages. Yet the Neanderthals went extinct, while Homo sapiens – our species – went on to dominate the planet.
Scientific proponents of the new hypothesis think Neanderthals lacked the ability to innovate solutions to problems when their habitat changed. In the 200,000 years the species existed, their technologies didn’t develop beyond making and using a few simple tools. Humans, on the other hand, devised new ways to overcome the challenges they faced. For example, early humans developed different tools to help solve different problems, built shelters to protect themselves from the cold, used fire to clear trees from the land to make hunting easier, and developed social behaviors. With these abilities, humans were no longer simply reacting to their environment; they could also exert some degree of control over their surroundings.
The skulls of several early human species. From left to right: Australopithecus africanus, 2.5 million years old; Homo rudolfensis, 1.9 million years old; Homo erectus, about 1 million years old; Homo heidelbergensis, about 350,000 years old. The last skull is from Homo sapiens, the modern-day human species, and is estimated to be about 4,800 years old. SOURCE: Courtesy of the Human Origins Program; photo credits include Chip Clark, Jim DiLoreto, Don Hurlbert, all of the Smithsonian Institution.
Unanswered Questions Remain
The alternative hypothesis is gaining favor, but has not been fully tested and accepted across the scientific community—and so the story is by no means complete. Scientists still face major limitations in resolving questions about our origins and history because fossil records of our earliest ancestors are sparse and understanding of past climate is incomplete. Recommended research directions are outlined in Understanding Climate’s Influence on Human Evolution.
But over the next few years, just as we’re sure to learn more about the impact of humans on modern Earth’s climate system – and the steps we can take to minimize such impacts – hopefully, we’ll also get more information about the impacts climate had on shaping the human species.
Just like the weather, the amount of water available in a region in any given year has its ups and downs. Those ups and downs are particularly important in the Southwest, where scarce water forces tough decisions and conservation in everything from watering lawns to farming to industrial uses of water. An unusual number of recent storms and heavy snowfalls have boosted snowpack across much of the upper Colorado River basin to about 151% of average,¹ which will make it easier to meet the region’s water demands this year as melting snow fills the rivers, tributaries, and reservoirs in the basin (see map). “Average” is based on measures of water flow recorded over the past 100 years, which form the basis of many current water policies. But here’s the bad news: climate records show that the region’s average water flow over the past 500 years has been much lower than over the past 100 years. In other words, what is thought of as normal today might not be normal at all.
The Colorado River Basin
Water management in the Colorado River Basin is already very challenging as severe drought conditions have affected much of the region since the early 2000s. In fact, 2002 and 2004 are among the 10 driest years on record in the upper basin states of Colorado, New Mexico, Utah, and Wyoming. Water storage in the basin’s reservoirs dropped sharply during this period due to very low snowpack and stream flows; for example, 2002 water year flows into Lake Powell were roughly 25 percent of average. Visitors to Lake Powell noted the “bathtub ring” left by the drop in water levels of the lake.
At around the same time, several studies were produced that “reconstructed” Colorado River flows over the past several centuries. In those studies, data from annual growth rings of trees, which are good indicators of annual moisture availability, indicate that the region has experienced many severe and extended droughts in the past 500 years. In response to that news, the U.S. Bureau of Reclamation and regional water bureaus in California and Nevada asked the National Research Council to convene a committee of experts to help put those studies and other scientific information into context to inform water policy. Colorado River Basin Management: Evaluating and Adjusting to Hydroclimatic Variability (NRC, 2007) represents the committee’s consensus conclusions.
A Changing Picture of Colorado River Streamflow
One of the findings in Colorado River Basin Management is that key water management agreements were signed during a period of unusually wet conditions across the basin, which led to overly optimistic assumptions of long-term water availability. The first and foremost of these is the Colorado River Compact of 1922, which still governs water allocations between the upper and lower Colorado River basins today. The prevailing scientific understanding of Colorado River flows has been based primarily on direct measurements of the river’s flow at stations along the river, the first of which was established in the late nineteenth century. The record of streamflow measurements taken throughout the twentieth century has led to an implicit assumption that the river has an average annual flow of about 15 million acre-feet, around which year-to-year flow variations occur (see Figure 1). Even though the basin experienced wet and dry periods, river flows and weather conditions were expected to return to a “normal” state that was largely defined by the average flow in the 20th century.
However, tree-ring based reconstructions have provided new evidence that has shifted the scientific view of Colorado River flows. One important shift is that the long-term annual average flow of the river, based on the 500-year record, is considerably less than 15 million acre-feet. Another important piece of information is that the basin has experienced several periods of drought that have been even longer and of greater severity than those experienced in the early and mid-2000s (see Figure 2).
Measured River Flow Over the Past 100 Years
Figure 1. For many years, scientific understanding of Colorado River flows was based primarily on measurements of the river’s flow at stations along the river. The image above shows the 1906-2006 record of the river’s flow at Lees Ferry, Arizona, with figures for annual values (blue bars), a 5-year running total (black line) and the average flow of about 15 million acre-feet/year (red line)—which was thought of as a “normal” average flow value around which year-to-year variations occur.
River Flow Estimates from Tree Ring Data Over the Past 500 Years
Figure 2. Multiple reconstructions of Colorado River flows at Lees Ferry, Arizona, which are based on tree-ring data that date back roughly 500 years, reflect a long-term average annual flow of much less than 15 million acre-feet—the presumed “average” flow that is being used to allocate the river’s water—as well as several pronounced drought periods.
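The shift in the estimated “normal” comes down to a simple difference of averages: the mean of the roughly 100-year gauged record versus the mean of the roughly 500-year tree-ring reconstruction. The sketch below illustrates the comparison with invented placeholder numbers (in million acre-feet per year), not actual Lees Ferry data.

```python
# Sketch of the comparison in Figures 1 and 2: the mean of a short gauged
# record versus the mean of a longer reconstructed record.
# All values are invented placeholders (million acre-feet per year),
# not actual Lees Ferry measurements or reconstructions.

def mean_flow(series):
    return sum(series) / len(series)

gauged_record = [16.2, 14.8, 15.5, 13.9, 15.1, 14.6]          # stands in for ~100 years
reconstructed = gauged_record + [13.2, 12.8, 14.0,
                                 12.5, 13.6, 13.1]             # stands in for ~500 years

print(f"gauged-record mean:        {mean_flow(gauged_record):.1f} maf/yr")   # ~15.0
print(f"reconstructed-record mean: {mean_flow(reconstructed):.1f} maf/yr")   # ~14.1
```

Because the river’s water is allocated against the higher, gauge-based average, a lower long-term mean suggests that less water is reliably available than the allocations assume.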
Additional Future Worries: Warmer Temperatures and Rising Population
Adding to the concern about the past climate is the fact that temperature records for much of the Colorado River basin and the western United States document a clear warming trend over the past three decades. These records, along with climate model projections, suggest that temperatures across the region will continue to rise for the foreseeable future. Higher temperatures are predicted to reduce the amount of upper basin precipitation that falls and is stored as snow, and to increase evaporative losses. There is less consensus regarding future trends in precipitation. However, based on analyses of many climate model simulations, the weight of evidence suggests that warmer future temperatures will reduce future Colorado River streamflow and water supplies.
Meanwhile, rapid population growth across the western United States is driving increases in water demand. For example, from 1990-2000, Arizona’s population increased by about 40 percent, while Colorado’s population increased by about 30 percent. Population projections suggest that this trajectory will continue. Although many innovative urban water conservation programs have reduced water use per person, population growth is driving increases in urban water demands. Water consumption in Clark County, Nevada (which includes Las Vegas), for example, approximately doubled in the 1985-2000 period. Steadily rising population and increasing urban water demands in the Colorado River region would inevitably result in increasingly costly, controversial, and unavoidable trade-offs.
Limits of Technologies and Conservation Measures
A wide array of technological and conservation measures can be used to help stretch existing water supplies. These measures include underground storage, water reuse, desalination, weather modification, conservation, and creative water pricing structures. These measures may not necessarily be inexpensive or easy to implement, but many of them show promise for augmenting water supplies in future years. However, technological and conservation options for augmenting or extending water supplies—although useful and necessary—in the long run will likely not be enough to remove the fundamental tension between limited water supplies in the Colorado River Basin and inexorably rising population and water demands.
This year’s winter snows are a good thing, but in the long run they don’t change the fact that water managers and policy makers in the Colorado River Basin will have to develop plans and actions for meeting increasing water demands in a potentially warmer and much drier future. Doing so will require more collaboration between the scientific and water management communities and enhanced interstate cooperation.
¹According to the U.S. Bureau of Reclamation, snowpack conditions above Lake Powell were an estimated 151% of average as of December 31, 2010
For more than a decade, agricultural seed companies have been selling seeds that are genetically engineered to include (or exclude) genes that produce specific traits. The adoption of those seeds by farmers has been rapid: as of 2010, about 80% of the corn, cotton, and soybean seeds planted in the United States were genetically engineered. The plants help farmers compete against two of their most formidable enemies, insects and weeds, because of introduced genes that (1) make them resistant to specific pests and (2) make them resistant to the herbicide commercially known as RoundUp, which enables farmers to kill weeds with RoundUp without killing the crops.
Many genetically engineered plants are “transgenic”—meaning that they carry (and express) a gene from a different species. The inserted genes can come from species within the same kingdom (plant to plant) or from another kingdom (bacteria to plant). For example, the first genetically engineered plants were tobacco plants that carried a gene from the bacterium Bacillus thuringiensis (Bt), making them resistant to the family of moths and caterpillars that feed on the plants. Like the bacterium, the resulting “Bt” plants produce a toxin that kills the moths and caterpillars. In comparison to traditional methods of creating new traits—which include generations of plant breeding or the use of radiation to create genetic changes—these new techniques are fast and targeted to the desirable traits being sought.
However, this new form of plant breeding has been controversial, particularly in Europe, where people are concerned about issues such as the risk that traits could be introduced, intentionally or unintentionally, into other food crops, or that genetically engineered plants could contaminate organic crops or the natural environment. People are also concerned that a small number of companies producing the seeds could control and financially exploit the genetic stock of key crops.
Over the past ten years, the NRC has produced several reports on genetically engineered crops; most of the early ones evaluated potential risks of genetically engineered (GE) crops. The most recent report, The Impact of Genetically Engineered Crops on Farm Sustainability in the United States, takes a more holistic look at GE crops at the farm-level, including environmental, economic, and social impacts. The report evaluates some key questions: exactly how have farmers benefited, and have all farmers benefited equally? Are the benefits expected to continue, unabated? What environmental consequences might there be?
The report finds some positive trends in the form of economic and environmental benefits, but cautions that mismanagement and overuse of GE crops—or even the irrelevance of available GE technology to many farmers—could limit their further use and potential.
Most farmers who use GE crops have experienced lower costs of production or higher yields, and sometimes both. Although GE seeds cost more than conventional ones, production costs are lower because farmers don’t have to apply as many insecticides and herbicides as they would with conventional crops. Farmers also save the labor and fuel costs of equipment operations to weed and spray insecticides. Not having to weed or to spray insecticides offers the perceived benefits of increased worker safety and greater simplicity and flexibility in farm management (time saved is money on the farm). Box 1 provides an example of the estimated benefits of planting 10 million acres of corn resistant to the corn rootworm, based on a 2004 study (Rice, 2004).
Box 1. Estimated Benefits of Planting Insect-Resistant Corn
According to a 2004 study (Rice, 2004), planting 10 million acres of corn that produces toxins against the corn rootworm would have these estimated benefits (a rough per-acre breakdown follows the list):
• Intangible benefits to farmers, including reduced exposure to pesticides, ease and use of handling, better pest control
• Tangible economic benefits, estimated at $231 million from yield gains
• Increased yield protection (9-28% better than no insecticide use, 1.5-4.5% better than insecticide use)
• A decrease of about 5.5 million pounds of insecticide (active ingredient) per 10 million acres
• Conservation of 5.5 million gallons of water used in insecticide application
• Conservation of about 70,000 gallons of aviation fuel
• Reductions in farm waste, with about 1 million fewer insecticide containers
• Increased planting efficiency
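Taking the box’s headline numbers at face value, the per-acre arithmetic is straightforward (a rough back-of-the-envelope check, not a figure reported in the study):

\[ \frac{\$231\ \text{million}}{10\ \text{million acres}} \approx \$23\ \text{per acre in yield gains}, \qquad \frac{5.5\ \text{million lb of insecticide}}{10\ \text{million acres}} = 0.55\ \text{lb per acre avoided}. \]

Whether a gain of roughly $23 per acre is worth it to an individual farmer then depends on the seed price premium and on local pest pressure, a point taken up below.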
Although the effects on yield of fertilizers, capital, and labor can be directly measured, other effects of the use of insect- and herbicide-resistant crops must be measured indirectly—that is, by how much they reduce or facilitate the reduction of crop losses. When GE soybeans, corn, and cotton resistant to RoundUp are planted, along with timely RoundUp applications to control weeds, yields are almost always greater than in crop production without weed control.
The benefit of planting insect-resistant crops is more time and location dependent. For example, the use of corn resistant to the European corn borer resulted in annual average yield gains across the United States of 5-10 percent, but the advantage varied greatly. Prior to the introduction of insect-resistant corn, many farmers accepted yield losses rather than incur the expense and uncertainty of chemical control. With the adoption of GE corn, yield differences were most notable in years and places where the pressure on crops from pests was high.
Figure 1. The cost per acre of GE corn, cotton, and soybean seed has risen steadily in the past several years. Any economic benefits that farmers get in terms of higher yields or reduced labor costs must offset the higher prices farmers are paying for genetically engineered seeds.
Because pest pressure varies across regions, not all farmers have realized an equal benefit. In addition, genetically engineered seed is much more expensive than conventional seed (see Figure 1). That means that productivity gains have to offset those additional costs, which may not happen, for example, if a farmer lives in an area where weed and insect pressure is not intense.
The decision to adopt GE crops may have far-reaching effects on other farms. For example, livestock producers, who are the biggest buyers of corn and soybeans, are major beneficiaries of reductions in crop price from better yields in GE crops. However, to date, there have been no quantitative estimates of those savings. Farmers who don’t plant GE crops do benefit from the regional use of GE technology that reduces pest populations. However, those farmers also might suffer from the development of weeds and insects that have acquired pesticide resistance in fields planted with GE crops. Without more research on these issues, the wider effects of GE crops on other farmers are difficult to determine.
Figure 2. The use of RoundUp-resistant crops has reduced the need to till for weed control and contributed to an increase in the amount of conservation tillage, which helps prevent erosion and farm runoff that pollutes rivers with sediments and chemicals.
GE crops can benefit the environment when properly used. One of the biggest potential benefits is improvement of soil and water quality. The use of RoundUp-resistant crops has helped reinforce the growing trend toward conservation tillage, because it eliminates weed control as one of the reasons to use conventional tillage (see Figure 2). With conventional tillage, farmers turn plant stalks and stubble into the soil while at the same time disrupting the growth of weeds. The problem is that conventional tilling can erode and compact soil and form a crust that repels water. The result is increased runoff from farms that carries sediments and agricultural chemicals into rivers and other waterways. With conservation tillage, at least 30% of crop residue remains on top of the field. This includes the practice of “no till,” which involves no tilling at all; the seeds are “drilled” into the ground amidst the stubble of the last crop. The end result is less runoff and better water quality.
A major benefit from the use of insect-resistant crops has been the decreased use of insecticides. Since the advent of GE crops, the amount of insecticide (active ingredient) used per acre has decreased. This benefits the environment because most spray insecticides kill most types of insects, even beneficial ones such as honey bees and natural predators of pests. In contrast, Bt corn and cotton have been used very successfully to target only the specific pests that feed on those crops. To combat the possibility that repeated plantings of Bt crops could lead to the emergence of Bt-resistant insects, the U.S. Environmental Protection Agency mandated a “refuge strategy”: a certain percentage of every Bt field must be planted with non-Bt seed to ensure that a population of insects susceptible to Bt will survive.
The report finds that the reliance on plants that are resistant to only one herbicide (RoundUp) could be problematic. RoundUp does have several environmental advantages over other herbicides because it kills most plants without substantial adverse effects on animals or on soil and water quality. However, repeated applications of it could allow naturally occurring glyphosate-resistant weeds to thrive. Continued and constant exposure to RoundUp can also speed the evolution of resistance in weeds that were previously susceptible. A trend in the occurrence of RoundUp-resistant weeds has already been detected in the United States and abroad (see Figure 3). Combating the growth of herbicide-resistant weeds would require more diverse weed management practices, for example rotating the use of different types of herbicides.
Figure 3. As the use of GE crops resistant to RoundUp has increased, so too has the number of glyphosate-resistant weeds, both in the United States and abroad. More diverse weed management practices, for example rotating the use of different types of herbicides, are needed to combat this trend.
Research on earlier technological developments in agriculture suggests that there are likely to be social impacts from the adoption of GE crops. For example, it’s possible that farmers with less access to credit or those who grow crops for smaller markets would be less able to access or benefit from GE crops. Genetic-engineering technology could affect many aspects of farming, including labor dynamics, farm structure, and farmers’ relationships with each other, but little research has been conducted to date on those social effects.
Another concern is how the market structure of the U.S. seed market may affect access to and the development of GE traits. Today, a handful of large, diversified companies dominate the market. They have invested significantly in the research, development, and commercialization of patent-protected GE traits for the large seed markets of corn, soybean, and cotton, but, so far, they have chosen not to commercialize GE traits in many other crops, either because the market size is insufficient to cover the necessary R&D costs, or due to concerns about consumer acceptance of the crops and their risks. Research to date has found no adverse effects on farmers’ economic welfare from this market structure. However, the trend toward seeds with multiple “stacked” traits is causing concern that access to seeds without GE traits or with traits of particular interest will become increasingly limited.
The public debate over genetic-engineering technology of plants will continue for the foreseeable future as seed companies and farmers seek to produce and use new crops with new combinations of traits, while others continue to raise concerns about contamination of organic crops and potential loss of markets where GE crops are not allowed, among other issues. Also driving this debate will be efforts by the agricultural community to address some of the biggest global challenges, for example, helping to fight global food insecurity by developing plants with improved nutritional qualities and resilience to climate change.
Next year, the division plans to release materials that explain in lay terms more about GE crops and the expert findings from the National Research Council.
Radiation Effects Research Foundation in Hiroshima, Japan
August 2010 marks the 65th year since the 1945 atomic bombings that devastated the cities of Hiroshima and Nagasaki, ending the war with Japan. Those atomic bombs were the first used in wartime and, hopefully, the last.
Many of the survivors of those bombings have generously agreed to become part of the most extensive studies of health effects in a human population ever conducted, making their experiences available for the betterment of humankind. Those studies were begun in 1947 by the Atomic Bomb Casualty Commission (ABCC), which was established by the National Academy of Sciences at the request of President Harry Truman. The studies have been continued by the Radiation Effects Research Foundation (RERF), which was established in 1975 by the governments of Japan and the United States. Through studies of atomic bombing survivors and their children, RERF has examined the links between radiation exposure and disease, cell and genetic damage, and other factors.
I’d like to dedicate this issue to the researchers and survivors involved in that effort and to share the important things that they have learned.
Early Effects of the Atomic Bombs
Most of the deaths caused by the atomic bombings occurred on the days of the bombings due to the overwhelming force and heat of the blasts and in the following days and weeks from injuries and exposure to radiation. In Hiroshima, an estimated 90,000 to 166,000 deaths occurred within two to four months of the bombing out of a total population of 340,000 to 350,000. In Nagasaki, 60,000 to 80,000 died out of a population of 250,000 to 270,000. The precise number of deaths is not known because military personnel records were destroyed, entire families perished leaving no one to report deaths, and unknown numbers of forced laborers were present in both cities.
One thing to understand about the health effects of radiation exposure is that they depend on the dose a person receives. The dose depends on several factors, the most important of which is the distance from the radiation source. Through interviews with survivors shortly after the bombings, researchers estimated the distance from the bomb explosion at which half of the people survived to be 1,000 to 1,200 meters (about two-thirds to three-fourths of a mile) in Hiroshima and 1,000 to 1,300 meters in Nagasaki. The closer people were to the explosion, the greater the radiation dose (see Figure 1) and the more severe the effects of the blast and heat; there is no information classifying the causes of immediate deaths.
Radiation damages organ tissues and can lead to organ failure. Illnesses collectively called “acute radiation syndrome” may occur a few days after exposure to high doses of radiation (of about 1 Sievert or greater, see Figure 1). Principal signs and symptoms are nausea and vomiting, diarrhea from damage to the intestines, reduced blood cell counts and bleeding from damage to bone marrow, hair loss due to damaged hair-root cells, and temporary male sterility.
The immune system is also vulnerable to radiation immediately after exposure. In people who received large doses of radiation, two vital parts of the immune system, lymphocytes and bone marrow stem cells, were severely damaged. Two months after exposure, marrow stem cells recovered and deaths due to infection generally ended.
Figure 1. The chart shows the approximate radiation exposure (in Sieverts) in relation to a person’s distance from the bomb's explosion (the hypocenter), and it provides a comparison with other common radiation exposures.
Delayed Effects: The Study of Survivors
At the heart of RERF’s research programs is a group of about 120,000 atomic bomb survivors who were still living in Hiroshima and Nagasaki in 1950, known as the Life Span Study cohort. About 90,000 of these people were within 10 km (6 miles) of the bombsites, roughly half within 2.5 km (the core group) and the other half between 2.5 and 10 km where radiation exposures were much lower. This group has undergone long-term population health and individual clinical studies that have helped researchers to study the delayed health effects of radiation.
Link to Leukemia
Excess leukemia was the earliest delayed effect of radiation exposure seen in atomic bomb survivors, first noted by a Japanese physician in the late 1940s. A registry of leukemia and related disorders was established to track cases.
Because leukemia is a rare disease, the absolute number of leukemia cases among atomic bomb survivors is relatively small even though the percentage increase in risk is high. Leukemia accounts for only about 3% of all cancer deaths and less than 1% of all deaths. As of 2000, there were 310 leukemia deaths among 49,244 Life Span Study survivors with a bone marrow dose of at least 0.005 Sv. The group experienced 103 more leukemia deaths than expected, which means that about 33% of the cases were attributable to radiation; for those with a bone marrow dose of 2 Sv or more, 95% of the leukemias were radiation-associated.
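The 33% figure follows directly from the counts just given; as a worked check:

\[ \text{attributable fraction} = \frac{\text{excess deaths}}{\text{observed deaths}} = \frac{103}{310} \approx 0.33 = 33\%. \]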
Research on A-bomb-related leukemia showed that the incidence of leukemia rose almost in direct proportion to dose; that the risk for leukemia was much higher for those exposed as children than for those exposed as adults; and that the incidence of radiation-related leukemia peaked at 8-10 years after exposure.
Figure 2. The figure shows how the percentage of survivors who developed leukemia changes with dose. The points are estimates of this percentage for various dose groups, and the vertical bars describe uncertainty in these estimates.
Link to Cancers: Linear but not Large Effects
By about 1956, researchers had found an increase in the rates of many other types of cancer. One of the most important findings is that exposure to radiation increases the rates of most types of cancer roughly in proportion to radiation dose. That’s an important finding, because it means that even a very small radiation exposure is expected to produce a correspondingly small increase in the risk of getting cancer. These results have direct implications for us today.
As of 2003, over 8% of the cancers observed in the population of Life Span Study survivors were attributable to radiation. There were 6,308 solid cancer deaths among the 48,102 Life Span Study survivors with a dose of 0.005 Sv or greater, which was 525 more solid cancer deaths than would have been expected in a similar, but unexposed, population. For the average radiation dose of survivors within 2,500 meters (about 0.2 Sv), there is about a 10% increase above normal age-specific rates.
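To make the “in proportion to dose” idea concrete, here is a purely illustrative sketch. It assumes a linear relationship calibrated so that the average survivor dose of about 0.2 Sv corresponds to the roughly 10% increase in age-specific cancer rates quoted above; the slope, the function name, and the example doses are my assumptions for illustration, not RERF estimates.

```python
# Illustrative linear dose-response sketch, calibrated to the figures above:
# an average dose of ~0.2 Sv corresponds to a ~10% increase in age-specific
# cancer rates. The slope and example doses are assumptions for illustration.
EXCESS_PER_SV = 0.10 / 0.2  # 10% excess at 0.2 Sv implies ~50% excess per Sv

def excess_relative_risk(dose_sv: float) -> float:
    """Fractional increase in cancer rates implied by a purely linear model."""
    return EXCESS_PER_SV * dose_sv

for dose in (0.005, 0.1, 0.2, 1.0):
    print(f"{dose:>5} Sv -> about {excess_relative_risk(dose):.1%} above normal rates")
```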
It is not possible to distinguish whether a cancer in a particular person is caused by radiation or other factors. In contrast to early effects of radiation that damage organ tissues, late radiation effects result from genetic changes in living cells. The exact mechanisms that lead to cancer are not clear, but it is believed that the process requires a series of genetic mutations accumulated over periods of years. Therefore, excess cancers attributable to radiation (except leukemia) are often not evident until decades after exposure.
Radiation exposure increases the risk for the following types of cancers: esophagus, stomach, colon, rectum, liver, gall bladder, pancreas, lung, breast, uterus, ovary, prostate, and bladder.
Small Non-cancer Effects of Radiation
RERF researchers also have analyzed the relationship between radiation exposure and a number of noncancer disorders. Radiation effects found in the Life Span Study survivors include relatively small but statistically significant excess risks for cardiovascular, digestive, respiratory and non-malignant thyroid diseases. In particular, radiation accounts for nearly one-third as many excess cardiovascular-disease deaths as cancer deaths. Studies also show a pattern of growth retardation for survivors who were exposed to the bomb’s radiation in childhood. Investigations of possible accelerated aging have shown some increased risk with radiation exposure for arteriosclerosis.
The considerable differences in the timing and increased risk of radiation-related leukemia, solid cancers and non-cancer diseases are illustrated in Figure 3.
Figure 3. The epidemiological differences among radiation-associated leukemia, solid cancer and non-cancer diseases are evident in this graph showing estimated past and future radiation-associated mortality per year in the Life Span Study cohort by calendar year. There are uncertainties for both observed (solid curves) and projected (dashed curves) excess deaths.
Good News for Children of Survivors
One of the earliest concerns in the aftermath of the atomic bombings was how radiation might affect survivors’ children who were conceived and born after the bombings. Efforts to detect genetic effects caused by radiation damage to sperm and ovarian cells in survivors’ children began in the late 1940s. Recognizing the need for continued follow-up on children of survivors, RERF established the F1 study of 77,000 children, of which about 30,000 have at least one parent who received a radiation dose greater than 0.005 Sv.
So far, no evidence of inherited genetic effects has been found. RERF is now using recent advances in molecular biology to confirm those results at the DNA level. Monitoring of deaths and cancer incidence in the children of survivors continues, and a clinical study is being undertaken to evaluate any potential radiation effects on late-onset genetic disorders.
Using RERF’s Work
RERF’s important work has become the world’s primary guide for radiation-induced health effects, especially cancer. It has been used to develop standards for occupational exposures and to assess risks from medical exposure sources such as CT scans and other diagnostic procedures. The studies have also been vital in illuminating potential health effects in victims of nuclear accidents, current and former workers at nuclear facilities, and other exposed populations.
Many of the survivors who were children during the atomic bombings are still alive today and are now reaching their peak cancer years (see Figure 3). As of 2003, more than 40% of the survivors were alive, but more than 90% of those exposed under the age of 10 were still living. Projections suggest that in 2020 those percentages will be about 20% and 60% respectively. Consequently, RERF’s important mission to track the health of the survivor population and their children will continue for at least another two decades.
You can visit RERF’s website to find a wealth of information about its findings, its history, and general information about radiation, including a recently published Basic Guide to Radiation and Health Sciences.
The National Academy of Sciences (NAS) established the Atomic Bomb Casualty Commission (ABCC) in 1947 with funding from the U.S. Atomic Energy Commission. ABCC initiated extensive health studies on A-bomb survivors in cooperation with the Japanese National Institute of Health of the Ministry of Health and Welfare, which joined the research program in 1948. In April 1975, ABCC was reorganized into the nonprofit, bi-national Radiation Effects Research Foundation. Annual funding for RERF is provided by the Japanese Government through the Ministry of Health, Labour and Welfare and by the U.S. Department of Energy (DOE). The National Research Council’s Nuclear and Radiation Studies Board serves as a liaison to RERF for scientific assistance and support under a cooperative agreement with DOE.
Recent polls present conflicting findings about Americans’ views on climate change. A May 2010 Gallup poll found that concern about climate change had decreased since 2008 and that an increasing number of Americans feel that the seriousness of global warming is overblown. In contrast, a June 2010 poll by Yale and George Mason Universities indicated increasing concern, finding that 61 percent of Americans believe global warming is real, and 50 percent believe it is caused mostly by humans, up from 57 percent and 47 percent, respectively, in January. A June 2010 poll by Stanford University found that 75% of Americans believe the Earth is warming because of human activity, down from 84% for the same poll in 2007. Polling differences aside, it’s clear that Americans’ views will be taken into account as political leaders seek to address climate change, and that the more those views are informed by reliable information, the better.
At the request of the U.S. Congress and the National Oceanic and Atmospheric Administration, the National Research Council recently released three reports (with two more to follow) to help inform the U.S. response to climate change. The reports lay out options for responding to climate change—to better understand it, slow it, and adapt to it—as part of a series called “America’s Climate Choices.” The reports and materials based on them are available to the public at America’s Climate Choices.
The reports cover many points, but I’d like to explore just two that I think are particularly important. First, there is strong, credible, and increasing scientific evidence that Earth is warming and that most of the warming is due to human activities. Second, the total amount of greenhouse gas emissions over time will determine the ultimate magnitude of future climate change, which means the earlier we start to reduce our rate of greenhouse gas emissions, the better our chances of avoiding worst-case climate scenarios.
Advancing the Science of Climate Change lays out evidence from multiple lines of research that convincingly shows climate change is occurring, that it is caused largely by human activities, and that it poses significant risks to human and natural systems. For example, thermometer readings show that the Earth’s average surface temperature has warmed measurably since the beginning of the 20th century, and especially over the last three decades. These observations are corroborated by observations of warming in the oceans, melting glaciers and Arctic sea ice, and shifts in ecosystems.
Most of the observed warming can be attributed to an increase in heat-trapping gases in the atmosphere, especially carbon dioxide emitted by the burning of fossil fuels for energy. Ice core records clearly show that carbon dioxide concentrations have steadily risen since the beginning of the industrial revolution and are higher today than they have been in at least 800,000 years. In addition, scientists can now chemically “fingerprint” carbon dioxide molecules to show that they do, in fact, come from the burning of fossil fuels.
The science is also clear that the warming we’ve seen so far is just the beginning. Model-based projections of future climate change estimate the amount of additional warming that could be expected under different assumptions about future energy production and use. All of these models project continued warming for many decades, and even centuries, unless greenhouse gas emissions are reduced substantially. Some of the consequences of unchecked warming, such as significant sea level rise and more frequent heat waves, floods, and droughts, would be extremely challenging for society to deal with. Yet carbon dioxide emissions continue to rise.
Limiting the Magnitude of Climate Change provides an overview of our options for reducing the magnitude of future climate change by reducing emissions. All of the target levels for greenhouse gas emissions being seriously proposed in national and international policy debates would require a significant reduction from current global greenhouse gas emissions. Meanwhile, greenhouse gas emissions continue to increase as the world’s economy and energy consumption grow.
One of the report’s most valuable contributions is its discussion of the process for establishing goals to limit future climate change. A goal of stabilizing global atmospheric greenhouse gas concentrations at some maximum value (e.g. 450 ppm) is not necessarily the most useful for framing national policy. Global concentrations are the result of global emissions, which of course cannot be determined through any single nation’s efforts alone. Nor does a global concentration goal allow us to directly measure national-scale progress.
Instead, the report suggests that policy makers view the U.S. goal in terms of how much greenhouse gas can be emitted over a specified period of time—in other words, to create a national emissions budget. Determining a specific budget goal involves value judgments, for example what the U.S. share of emissions reductions should be, and economic and social considerations that fall outside the realm of science. All of the proposals being seriously discussed imply a limit on total emissions between the years 2012 and 2050. Unfortunately, all of these budget goals will be exceeded well before 2050 at the current rate of emissions. The report demonstrates in compelling terms that the earlier the U.S. acts to reduce emissions, the less difficult those reductions will be to achieve. That is not a value judgment—it’s simple math.
Figure 1. This figure illustrates the concept of a cumulative emissions budget over time. Meeting the budget is more likely the earlier and more aggressively the nation works to reduce emissions.
I think it’s helpful to think about it like a diet. If you wanted to lose 40 pounds by a certain event in the future, it would be much easier to reach that goal if you begin eating less and exercising more as soon as possible, rather than waiting to start until a time much closer to the event.
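To put some toy numbers on that “simple math,” the sketch below works through a hypothetical cumulative emissions budget. The budget size, the current emissions rate, and the time horizon are placeholder values of my own, not figures from the report; the point is only to show that the longer reductions are delayed, the deeper the average cuts needed to stay within the same budget.

```python
# Purely illustrative cumulative-budget arithmetic. The budget, the current
# emissions rate, and the horizon below are placeholder numbers, not figures
# from the America's Climate Choices reports.
BUDGET = 200.0       # hypothetical cumulative budget (gigatons CO2-equivalent)
CURRENT_RATE = 7.0   # hypothetical current annual emissions (gigatons per year)
HORIZON_YEARS = 38   # e.g., 2012 through 2050

def required_average_cut(delay_years: int) -> float:
    """Fractional cut below today's rate needed, on average, if emissions
    continue at the current rate for `delay_years` before reductions begin."""
    spent = CURRENT_RATE * delay_years
    remaining_budget = BUDGET - spent
    remaining_years = HORIZON_YEARS - delay_years
    if remaining_budget <= 0:
        return float("inf")  # the budget is already exhausted
    allowed_rate = remaining_budget / remaining_years  # average rate that stays on budget
    return 1 - allowed_rate / CURRENT_RATE

for delay in (0, 5, 10, 15):
    print(f"start cutting after {delay:>2} years: average emissions must fall "
          f"about {required_average_cut(delay):.0%} below today's rate")
```

With these placeholder numbers, waiting 15 years to begin cuts roughly doubles the depth of the average reduction required compared with starting immediately, which is the essence of the report’s argument for acting early.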
Meeting the emissions budget won’t be easy. The report concludes we may still fall short of emission budget goals even if the nation aggressively deploys all of its available technical options for reducing greenhouse gas emissions. These actions include maximizing energy efficiency, adopting renewable energy sources, and moving ahead with new nuclear power plants and carbon capture and storage. Therefore, it’s vitally important not only to aggressively pursue available emission reduction opportunities, but also to invest heavily in R&D aimed at creating new opportunities for emission reduction (see Figure 1).
Adding to this challenge is our country’s large existing infrastructure in the power sector, in industry, in transportation (e.g., autos, trucks, and airplanes, with associated fuels and fuel-supply systems), and in housing and other buildings. Substitution of more efficient or non-carbon-based energy technologies will be limited by the speed with which we can modernize such infrastructures.
The report also discusses other benefits that can result from developing and implementing new technologies to increase energy efficiency and reduce greenhouse gas emissions. For instance, strong U.S. action can help influence other countries to move ahead with their own emission reduction efforts, and it can expand energy-related sectors of the U.S. economy.
Hopefully, the information provided by these America’s Climate Choices reports will help the nation and its policy makers to understand that human-induced climate change is real and that the sooner we start to address this problem, the better.
The America’s Climate Choices reports released to date also include Adapting to the Impacts of Climate Change. Two more reports in the series, Informing Effective Responses to Climate Change and a final report will be released in the coming months. For more details, visit America’s Climate Choices to read or download the free summary or Report in Brief or to purchase the report.
A separate but related report, Climate Stabilization Targets: Emissions, Concentrations, and Impacts over Decades to Millennia, will be issued in the coming weeks to inform various national and international policy negotiations on the predicted and possible effects associated with different target levels for stabilizing atmospheric greenhouse gas concentrations.
I’ve always loved the ending of H.G. Wells’ classic novel, The War of the Worlds. The terrifying aliens who invade the Earth are finally felled–not by guns or nuclear warheads–but by the regular old microbes and germs that inhabit the planet.
As Wells’ story illustrates, microbes and humans have been living together through thousands of years of co-evolution. What most people don’t realize is that only a few microbes are harmful (i.e., pathogens). The vast majority of microbes carry out essential functions that make air breathable, help digest food, support and protect crops, and clean up chemicals in the environment, among other services. Indeed, life on Earth wouldn’t even be possible without microbes.
Despite their crucial role, microbes are still not well understood. It wasn’t even known that microbes existed until the 17th century, when Anton van Leeuwenhoek first saw them under a microscope. Until very recently, microbiologists could study only those microbes that could be isolated and cultured in a lab, examining them one at a time. With the advent of modern genomics (DNA studies), scientists have begun to understand just how diverse and ubiquitous microbes are, accounting for about half the world’s biomass. Today, scientists estimate that there are many millions of microbial species, and of those, fewer than 1% can be cultured.
Fortunately, there’s a new science that has recently leaped past the need for lab cultures and has put us on a fast track to a better understanding of microbes. The science of “metagenomics” (sometimes also called “environmental genomics” or “community genomics”) turns the power of genomics and bioinformatics on whole communities of microbes where they live. Scientists can take a sample of virtually anything–seawater, soil, or the contents of a stomach–put it into a gene sequencer, and “see” everything in the sample by analyzing its DNA.
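As a very rough illustration of what “seeing” a sample through its DNA means computationally, the toy sketch below takes a handful of made-up sequencing reads and tallies short DNA “words” (k-mers), the kind of fingerprint metagenomic pipelines use as a first step toward profiling which organisms are present. The reads, the k-mer length, and the whole approach are simplified stand-ins of my own, not an actual analysis pipeline.

```python
from collections import Counter

# Toy illustration of metagenomic profiling: tally short DNA "words"
# (k-mers) across a mixed bag of sequencing reads. The reads below are
# made up, and real pipelines use far more sophisticated methods.
reads = [
    "ATGCGTACGTTAGC",
    "GGCATGCGTACGTA",
    "TTAGCGGCATGCGT",
]

K = 5  # k-mer length (an arbitrary choice for this sketch)

kmer_counts = Counter()
for read in reads:
    for i in range(len(read) - K + 1):
        kmer_counts[read[i:i + K]] += 1

# The most common k-mers act as a crude fingerprint of the community.
for kmer, count in kmer_counts.most_common(5):
    print(kmer, count)
```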
As described in The New Science of Metagenomics (National Research Council, 2006), metagenomics now not only gives scientists access to the many millions of microbes that have not previously been studied, but also begins to provide new information about which microbes are present in a sample and how they work together. It also enables scientists to link other details about the sample–for example, acidity, salinity, and temperature–to the biochemical processes being studied.
Metagenomics can be applied to some of the nation’s toughest challenges. For example, it may lead to the ability to use microbes to break down plant wastes (such as corn stalks) in much the same way as a cow digests hay, providing new sources of renewable energy. Studies have shown a possible link between microbial communities in the stomach lining of mice and whether the mice are fat or thin, a finding that could be of value in understanding obesity. Metagenomics findings are also being applied to cleaning up oil spills, making water drinkable, improving farming, and developing new pharmaceuticals, to name a few examples.
In sum, metagenomics is one of the lesser known, but most important new areas of biology. To learn more, visit a special metagenomics website from the National Research Council. Until next time, don’t forget to be thankful for microbes.