Aijun Ding, Xin Huang, and Congbin Fu
Air pollution is one of the grand environmental challenges in developing countries, especially those with high population density, like China. High concentrations of primary and secondary trace gases and particulate matter (PM) are frequently observed in industrialized and urbanized regions, causing adverse effects on the health of humans, plants, and ecosystems.
Meteorological conditions are among the most important factors influencing day-to-day air quality. Synoptic weather and boundary layer dynamics control the dispersion capacity and transport of air pollutants, while the main meteorological parameters, such as air temperature, radiation, and relative humidity, simultaneously influence the chemical formation of secondary air pollutants. Intense air pollution, especially high concentrations of radiatively important aerosols, can in turn substantially influence meteorological parameters, boundary layer dynamics, synoptic weather, and even regional climate through strong radiative effects.
As one of the main monsoon regions, with the most intense human activities in the world, East Asia is a region experiencing complex air pollution, with sources from anthropogenic fossil fuel combustion, biomass burning, dust storms, and biogenic emissions. A mixture of these different plumes can cause substantial two-way interactions and feedbacks in the formation of air pollutants under various weather conditions. Improving the understanding of such interactions needs more field measurements using integrated multiprocess measurement platforms, as well as more efforts in developing numerical models, especially for those with online coupled processes. All these efforts are very important for policymaking from the perspectives of environmental protection and mitigation of climate change.
Sumit Sharma, Liliana Nunez, and Veerabhadran Ramanathan
Atmospheric brown clouds (ABCs) are widespread pollution clouds that can at times span an entire continent or an ocean basin. ABCs extend vertically from the ground upward to as high as 3 km, and they consist of both aerosols and gases. ABCs consist of anthropogenic aerosols, such as sulfates, nitrates, organics, and black carbon, as well as natural dust aerosols. Gaseous pollutants that contribute to the formation of ABCs are NOx (nitrogen oxides), SOx (sulfur oxides), VOCs (volatile organic compounds), CO (carbon monoxide), CH4 (methane), and O3 (ozone). The brownish color of the cloud (visible when looking at the horizon) is due to absorption of solar radiation at short wavelengths (green, blue, and UV) by organic and black carbon aerosols as well as by NOx. While the local nature of ABCs around polluted cities has been known since the early 1900s, the widespread transoceanic and transcontinental nature of ABCs, as well as their large-scale effects on climate, the hydrological cycle, and agriculture, was discovered inadvertently by the Indian Ocean Experiment (INDOEX), an international experiment conducted in the 1990s over the Indian Ocean. A major discovery of INDOEX was that ABCs caused drastic dimming at the surface: the magnitude of the dimming was as large as 10–20% (based on a monthly average) over vast areas of land and ocean. The dimming was shown to be accompanied by significant atmospheric absorption of solar radiation by black and brown carbon (a form of organic carbon). Black and brown carbon, ozone, and methane contribute as much as 40% to anthropogenic radiative forcing. The dimming by sulfates, nitrates, and carbonaceous (black and organic carbon) species has been shown to disrupt and weaken the monsoon circulation over southern Asia. In addition, the ozone in ABCs leads to significant decreases in agricultural yields (by as much as 20–40%) in polluted regions.
Most significantly, the aerosols in ABCs near the ground lead to about 4 million premature deaths every year. Technological and regulatory measures are available to mitigate most of the pollution resulting from ABCs. The importance of ABCs to global environmental problems led the United Nations Environment Programme (UNEP) to form the international ABC program. This program subsequently led to the identification of short-lived climate pollutants as potent mitigation agents of climate change, and in recognition UNEP formed the Climate and Clean Air Coalition to deal with these pollutants.
Peter Kareiva and Isaac Kareiva
The concept of biodiversity hotspots arose as a science-based framework with which to identify high-priority areas for habitat protection and conservation—often in the form of nature reserves. The basic idea is that with limited funds and competition from humans for land, we should use range maps and distributional data to protect areas that harbor the greatest biodiversity and that have experienced the greatest habitat loss. In its early application, much analysis and scientific debate went into asking the following questions: Should all species be treated equally? Do endemic species matter more? Should the magnitude of threat matter? Does evolutionary uniqueness matter? And if one has good data on one broad group of organisms (e.g., plants or birds), does it suffice to focus on hotspots for a few taxonomic groups and then expect to capture all biodiversity broadly? Early applications also recognized that hotspots could be identified at a variety of spatial scales—from global to continental, national, regional, and even local. Hence, within each scale, it is possible to identify biodiversity hotspots as targets for conservation.
In the last 10 years, the concept of hotspots has been enriched to address some key critiques, including the problem of ignoring important areas that might have low biodiversity but that certainly were highly valued because of charismatic wild species or critical ecosystem services. Analyses revealed that although the spatial correlation between high-diversity areas and high-ecosystem-service areas is low, it is possible to use quantitative algorithms that achieve both high protection for biodiversity and high protection for ecosystem services without increasing the required area as much as might be expected.
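The "quantitative algorithms" mentioned above are often greedy complementarity heuristics of the kind used in systematic conservation planning: at each step, select the candidate site that adds the most not-yet-represented features, whether species or ecosystem services. The following is a minimal sketch, not the specific algorithm of any study cited here; all site and feature names are hypothetical.

```python
def greedy_select(sites, targets):
    """Greedy complementarity selection.

    sites: dict mapping site name -> set of features (species/services) it contains.
    targets: set of features we want represented at least once.
    Returns the chosen sites and the set of features they cover.
    """
    chosen, covered = [], set()
    while covered != targets:
        # Pick the site that contributes the most still-uncovered features.
        best = max(sites, key=lambda s: len(sites[s] - covered))
        gain = sites[best] - covered
        if not gain:  # remaining targets are not present in any site
            break
        chosen.append(best)
        covered |= gain
    return chosen, covered

# Toy example: three candidate reserves, three species, two ecosystem services.
sites = {
    "A": {"sp1", "sp2", "pollination"},
    "B": {"sp2", "sp3"},
    "C": {"sp3", "water_purification"},
}
targets = {"sp1", "sp2", "sp3", "pollination", "water_purification"}
chosen, covered = greedy_select(sites, targets)
# Sites A and C together cover all five targets, so B is never needed.
```

The same machinery illustrates the point made above: scoring sites jointly on species and services can cover both kinds of targets with little or no extra area compared with scoring on species alone.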
Currently, a great deal of research asks what the impact of climate change on biodiversity hotspots will be, and to what extent conservation can maintain high biodiversity in the face of climate change. Two important approaches are detailed models and statistical assessments that relate species distributions to climate, and, alternatively, "conserving the stage" for high biodiversity, whereby the stage comprises regions whose topography or habitat heterogeneity is expected to generate high species richness.
Finally, conservation planning has most recently embraced what is in some sense the inverse of biodiversity hotspots—what we might call conservation wastelands. This approach recognizes that in the Anthropocene epoch, human development and infrastructure are so vast that, in addition to using data to identify biodiversity hotspots, we should use data to identify highly degraded habitats and ecosystems. These degraded lands can then become priority development areas—for wind farms, solar energy facilities, oil palm plantations, and so forth. Conservation plans therefore commonly pair maps of biodiversity hotspots with maps of degraded lands that highlight areas for development. By putting the two maps together, it should be possible to achieve much more effective conservation, because habitat will be provided for species alongside room for economic development—something that can attract broader political support than highlighting biodiversity hotspots alone.
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science. Please check back later for the full article.
Ever since the expansion of early humans across the planet, biodiversity has been impacted by our activities, although the scales of biodiversity impact and primary mechanisms of action have changed over time.
Biodiversity is defined here as variation among living organisms, both within and between species. It is maintained by a balance between processes that generate variation and those that cause its loss. A concern for modern humans is that our activities are driving rapid losses of biodiversity, which outweigh by orders of magnitude the processes of biodiversity generation. The net biodiversity losses could have significant impacts on human wellbeing in both current and future generations.
Within species, biodiversity is reflected in genetic, and consequent phenotypic (e.g. morphological), variation between individuals. Genetic diversity is generated by germ line mutations, genetic recombination during sexual reproduction, and immigration of new genotypes into a population. Across species, biodiversity is reflected in the numbers of different species present and also, by some metrics, in the evenness of their relative abundances. At this level, biodiversity is generated by processes of speciation and by immigration of new species into an area.
In terms of biodiversity losses, there are processes that cause roughly continuous low-level losses, but there is also strong evidence from fossil records for transient events in which exceptionally large losses of biodiversity occurred. These major extinction episodes are thought to have been caused by various large-scale environmental perturbations, such as volcanic eruptions, sea level falls, climatic changes, and asteroid impacts. After each of these events, biodiversity recovered over subsequent calmer periods, although the composition of higher-level evolutionary taxa could be altered significantly.
In the modern period, biodiversity appears to be undergoing another mass extinction event, driven by large-scale human impacts. The primary mechanism of biodiversity loss caused by humans has changed over time. Even in the Pleistocene, early humans are thought to have been partly responsible for species extinctions through hunting of large megafauna, and such exploitation continues into the present era. In addition, clearing of land for agriculture and urbanization has been a major factor driving biodiversity losses through the destruction of species' habitats. Increasingly, additional pressures such as invasive species and climate change have the potential to cause biodiversity losses. It is worth noting that human activities may also lead to increases in biodiversity in some areas through species introductions and climatic changes (particularly in Arctic areas), although these increases in species richness may come at the cost of losing native species.
These changes to biodiversity, comprising in many cases the overall loss of genetic diversity and species richness at a local level, have the potential to have substantial impacts on human wellbeing. A wide range of species may be necessary for the provision of ecosystem services such as pollination, pest control, and decomposition. The importance of biodiversity becomes particularly marked, however, over longer time periods and, in particular, under varying environmental conditions. Here, biodiversity (both genetic and species-level diversity) provides resilience of ecosystem services. Limiting losses of biodiversity is likely to be important for maintaining the wellbeing of humans in current and future generations.
Soil salinity has been causing problems for agriculturists for millennia, primarily in irrigated lands. The importance of salinity issues is increasing, since large areas are affected by irrigation-induced salt accumulation. A wide knowledge base has been collected to better understand the major processes of salt accumulation and to choose the right method of mitigation. Two major types of soil salinity are distinguished because of their different properties and mitigation requirements. The first is caused mostly by high salt concentration and is called saline soil, typically corresponding to Solonchak soils. The second is caused mainly by the dominance of sodium in the soil solution or on the soil exchange complex. This latter type is called "sodic" soil, corresponding to Solonetz soils. Saline soils have homogeneous soil profiles with relatively good soil structure, and their appropriate mitigation measure is leaching. Naturally sodic soils have markedly different horizons and unfavorable physical properties, such as low permeability, swelling, plasticity when wet, and hardness when dry, and their limitation for agriculture is typically mitigated by applying gypsum. Salinity and sodicity need to be chemically quantified before deciding on the proper management strategy. The most complex management and mitigation of salinized irrigated lands involves modern engineering, including calculations of irrigation water rates and reclamation materials, provisions for drainage, and drainage disposal. Mapping-oriented soil classification was developed for naturally saline and sodic soils, and most of the roughly 24 soil classification systems in current use have inherited the categories introduced more than a century ago, such as Solonchak and Solonetz. USDA Soil Taxonomy is one exception, using names composed of formative elements.
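The chemical quantification mentioned above is conventionally done with the electrical conductivity (EC) of the saturation extract and the sodium adsorption ratio (SAR). The sketch below uses the standard SAR formula (cation concentrations in meq/L) and the classical US Salinity Laboratory thresholds (EC > 4 dS/m for saline, SAR > 13 for sodic); the sample values are hypothetical, not data from the article.

```python
import math

def sodium_adsorption_ratio(na, ca, mg):
    """SAR from Na+, Ca2+, and Mg2+ concentrations in meq/L."""
    return na / math.sqrt((ca + mg) / 2)

def classify(ec_dsm, sar):
    """Classify a soil sample by the classical US Salinity Laboratory
    thresholds: saturation-extract EC > 4 dS/m -> saline; SAR > 13 -> sodic."""
    saline = ec_dsm > 4.0
    sodic = sar > 13.0
    if saline and sodic:
        return "saline-sodic"
    if saline:
        return "saline"
    if sodic:
        return "sodic"
    return "non-salt-affected"

# Hypothetical sample: sodium-dominated solution, moderate total salt load.
sar = sodium_adsorption_ratio(na=30.0, ca=4.0, mg=4.0)  # -> 15.0
label = classify(ec_dsm=2.0, sar=sar)                   # -> "sodic"
```

The classification matters for management: a "saline" result points toward leaching, while a "sodic" result points toward gypsum amendment, as described above.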
Confidence in the projected impacts of climate change on agricultural systems has increased substantially since the first Intergovernmental Panel on Climate Change (IPCC) reports. In Africa, much work has gone into downscaling global climate models to understand regional impacts, but there remains a dearth of local-level understanding of impacts and of communities' capacity to adapt. It is well understood that Africa is vulnerable to climate change, not only because of its high exposure, but also because many African communities lack the capacity to respond or adapt to the impacts of climate change. Warming trends have already become evident across the continent, and it is likely that the continent's mean annual temperature will rise more than 2°C above its 2000 level by 2100. Added to this warming trend, changes in precipitation patterns are also of concern: Even if rainfall remains constant, increasing temperatures will amplify existing water stress, putting even more pressure on agricultural systems, especially in semiarid areas. In general, high temperatures and changes in rainfall patterns are likely to reduce cereal crop productivity, and new evidence is emerging that high-value perennial crops will also be negatively impacted by rising temperatures. Pressures from pests, weeds, and diseases are also expected to increase, with detrimental effects on crops and livestock.
Much of African agriculture’s vulnerability to climate change lies in the fact that its agricultural systems remain largely rain-fed and underdeveloped, as the majority of Africa’s farmers are small-scale farmers with few financial resources, limited access to infrastructure, and disparate access to information. At the same time, as these systems are highly reliant on their environment, and farmers are dependent on farming for their livelihoods, their diversity, context specificity, and the existence of generations of traditional knowledge offer elements of resilience in the face of climate change. Overall, however, the combination of climatic and nonclimatic drivers and stressors will exacerbate the vulnerability of Africa’s agricultural systems to climate change, but the impacts will not be universally felt. Climate change will impact farmers and their agricultural systems in different ways, and adapting to these impacts will need to be context-specific.
Adaptation efforts are increasing across the continent, but in the long term these are expected to be insufficient to enable communities to cope with longer-term climate change. African farmers are increasingly adopting a variety of conservation and agroecological practices, such as agroforestry, contouring, terracing, mulching, and no-till. These practices have the twin benefits of lowering carbon emissions while adapting to climate change, as well as broadening the sources of livelihoods for poor farmers, but there are constraints to their widespread adoption, ranging from insecure land tenure to difficulties with knowledge-sharing.
While African agriculture faces exposure to climate change as well as broader socioeconomic and political challenges, many of its diverse agricultural systems remain resilient. As the continent with the highest population growth rate, rapid urbanization trends, and rising GDP in many countries, Africa’s agricultural systems will need to become adaptive to more than just climate change as the uncertainties of the 21st century unfold.
Margarete Kalin and William N. Wheeler
The first treatise on mining and extractive metallurgy, published in 1556, mentioned the side effects of mining, namely dead fish and poisoned water. These same side effects are still with us today, even though our knowledge of extractive techniques and chemical processes has grown tremendously. The dead fish and poisoned water, we now know, are caused by oxidative weathering of minerals, resulting in acidic and metal-laden water. The weathering is exacerbated by microbes that break chemical bonds in pyrite to derive their energy.
To compound the problem, our insatiable appetite for metals and energy, combined with the development of industrial tools, has allowed us to dig mines vastly larger than those envisioned in 1556. This exponentially increases the weathering area available in waste rock and finely ground rock (tailings). Through infiltration of atmospheric precipitation, severely polluted seepage emerges from these mining wastes into surface water and groundwater.
Since metals are essential to society, cost-effective remediation measures need to be developed, and new sustainable approaches to mining need to be established. Currently, engineered covers and dams contain the wastes and reduce the infiltration of atmospheric precipitation, slowing the weathering process. However, weathering will continue for millennia; over that much time, covers will break down and dams will leak. Currently accepted practice is to add neutralizing agents (lime) to wastes or seeps in perpetuity. These and other stop-gap measures bear little resemblance to sustainable mine development and reclamation.
What is needed is a paradigm shift in thinking about mine waste management. Waste rock and tailings need to be thought of as primitive ecosystems, characterized by harsh physical and chemical conditions. These harsh environments are similar to those encountered in the vicinity of hot springs characterized by highly acidic, or alkaline and saline conditions. These ecosystems are populated by thermophilic, acidophilic, and halophilic microbes (as a group called extremophiles), all of which can modify their surroundings. If managed properly, based on ecological principles, mines and these ecosystems will provide the resources of the future.
Ecological engineering utilizes ecological, geo-microbiological, and physical processes to change the conditions within the wastes to favor microbial remediation. To counter oxidative conditions, reductive environments and their microbes are supported by the ecological measures introduced. Reducing conditions can be generated in sediments and at the water-rock or water-sediment interfaces through microbial growth. Gradually, contaminated acidic or alkaline water is cleansed by indigenous biota. These organisms sequester metal ions on or inside their cells and neutralize aquatic waste streams. Eventually, biomass (and metals) are relegated to the sediment, where they are bio-mineralized, forming new biogenic ore bodies. Re-oxidation of bio-mineralized metals is prevented by the introduction of underwater and emergent vegetation, which reduces mixing and consumes oxygen above and at the sediment-water interface. Natural cycles of oxidation and reduction have been operating on the planet for millennia, producing biogenic ore bodies, and they represent an ecologically sound, sustainable approach.
The emergence of the environment as a security imperative could have been avoided: early indications showed that if governments did not pay attention to critical environmental issues, these would move up the security agenda. As far back as the Club of Rome's 1972 report, Limits to Growth, the variables highlighted for policymakers included world population, industrialization, pollution, food production, and resource depletion, all of which affect how we live on this planet.
The term environmental security did not come into general use until the 2000s. It received its first substantive framing in 1977, in Lester Brown's Worldwatch Paper 14, "Redefining Security." Brown argued that the traditional view of national security was based on the "assumption that the principal threat to security comes from other nations." He went on to argue that future threats to security "may now arise less from the relationship of nation to nation and more from the relationship of man to nature."
Of the major documents to come out of the 1992 Earth Summit, the Rio Declaration on Environment and Development is probably the first in which governments tried to frame environmental security. Principle 2 says: "States have, in accordance with the Charter of the United Nations and the principles of international law, the sovereign right to exploit their own resources pursuant to their own environmental and developmental policies, and the responsibility to ensure that activities within their jurisdiction or control do not cause damage to the environment of other States or of areas beyond the limits of national jurisdiction."
In 1994, the UN Development Programme defined human security in terms of distinct categories, including:
• Economic security (assured and adequate basic incomes).
• Food security (physical and affordable access to food).
• Health security.
• Environmental security (access to safe water, clean air and non-degraded land).
By the time of the World Summit on Sustainable Development in 2002, water had begun to be identified as a security issue, first at the Rio+5 conference, and as a food security issue at the 1996 FAO World Food Summit. In 2003, UN Secretary-General Kofi Annan set up a High-Level Panel on "Threats, Challenges, and Change" to help the UN prevent and remove threats to peace. It began to lay down new concepts of collective security, identifying six clusters of threats for member states to consider, including economic and social threats such as poverty, infectious disease, and environmental degradation.
By 2007, health was being recognized as a part of the environmental security discourse, with World Health Day celebrating “International Health Security (IHS).” In particular, it looked at emerging diseases, economic stability, international crises, humanitarian emergencies, and chemical, radioactive, and biological terror threats. Environmental and climate changes have a growing impact on health. The 2007 Fourth Assessment Report (AR4) of the UN Intergovernmental Panel on Climate Change (IPCC) identified climate security as a key challenge for the 21st century. This was followed up in 2009 by the UCL-Lancet Commission on Managing the Health Effects of Climate Change—linking health and climate change.
In the run-up to Rio+20 and the launch of the Sustainable Development Goals, the climate-food-water-energy nexus, or rather the inter-linkages between these issues, was highlighted. The dialogue on environmental security has moved from a fringe discussion to being central to our political discourse, in large part because of the lack of implementation of previous international agreements.
Juha Merilä and Ary A. Hoffmann
Changing climatic conditions have both direct and indirect influences on abiotic and biotic processes and represent a potent source of novel selection pressures for adaptive evolution. In addition, climate change can impact evolution by altering patterns of hybridization, changing population sizes, and altering patterns of gene flow across landscapes. Given that scientific evidence for rapid evolutionary adaptation to spatial variation in abiotic and biotic environmental conditions (variation analogous to the changes brought by climate change) is ubiquitous, ongoing climate change is expected to have large and widespread evolutionary impacts on wild populations. However, phenotypic plasticity, migration, and various kinds of genetic and ecological constraints can preclude organisms from evolving much in response to climate change, and generalizations about the rate and magnitude of expected responses are difficult to make for a number of reasons.
First, the study of microevolutionary responses to climate change is a young field of investigation. While interest in evolutionary impacts of climate change goes back to early macroevolutionary (paleontological) studies focused on prehistoric climate changes, microevolutionary studies started only in the late 1980s. The discipline gained real momentum in the 2000s after the concept of climate change became of interest to the general public and funding organizations. As such, no general conclusions have yet emerged. Second, the complexity of biotic changes triggered by novel climatic conditions renders predictions about patterns and strength of natural selection difficult. Third, predictions are complicated also because the expression of genetic variability in traits of ecological importance varies with environmental conditions, affecting expected responses to climate-mediated selection.
There are now several examples where organisms have evolved in response to selection pressures associated with climate change, including changes in the timing of life history events and in the ability to tolerate abiotic and biotic stresses arising from climate change. However, there are also many examples where expected selection responses have not been detected. This may be partly explainable by methodological difficulties involved with detecting genetic changes, but also by various processes constraining evolution.
There are concerns that the rates of environmental changes are too fast to allow many, especially large and long-lived, organisms to maintain adaptedness. Theoretical studies suggest that maximal sustainable rates of evolutionary change are on the order of 0.1 haldanes (i.e., phenotypic standard deviations per generation) or less, whereas the rates expected under current climate change projections will often require faster adaptation. Hence, widespread maladaptation and extinctions are expected. These concerns are compounded by the expectation that the amount of genetic variation harbored by populations and available for selection will be reduced by habitat destruction and fragmentation caused by human activities, although in some cases this may be countered by hybridization. Rates of adaptation will also depend on patterns of gene flow and the steepness of climatic gradients. Theoretical studies also suggest that phenotypic plasticity (i.e., nongenetic phenotypic changes) can affect evolutionary genetic changes, but relevant empirical evidence is still scarce. While all of these factors point to a high level of uncertainty around evolutionary changes, it is nevertheless important to consider evolutionary resilience in enhancing the ability of organisms to adapt to climate change.
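The haldane figure quoted above can be made concrete with a small worked example: a rate in haldanes is the change in a trait mean, expressed in phenotypic standard deviations, per generation. The trait, numbers, and time span below are hypothetical illustrations, not data from any study discussed here.

```python
def rate_in_haldanes(mean_start, mean_end, pooled_sd, generations):
    """Evolutionary rate in haldanes: change in the trait mean, expressed
    in pooled phenotypic standard deviations, per generation."""
    return (mean_end - mean_start) / pooled_sd / generations

# Hypothetical example: mean breeding date advances from day-of-year 120
# to 117 over 20 generations, with a phenotypic SD of 5 days.
rate = rate_in_haldanes(120.0, 117.0, 5.0, 20)  # -0.03 haldanes

# |rate| = 0.03 is below the ~0.1 haldane ceiling on sustainable evolution
# suggested by the theoretical studies cited above.
sustainable = abs(rate) <= 0.1
```

Comparing such observed rates against the roughly 0.1-haldane ceiling is how the concern about widespread maladaptation is usually framed: projected climate trajectories frequently demand rates above that threshold.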
Fisheries science emerged in the mid-19th century, when scientists volunteered to conduct conservation-related investigations of commercially important aquatic species for the governments of North Atlantic nations. Scientists also promoted oyster culture and fish hatcheries to sustain the aquatic harvests. Fisheries science fully professionalized with specialized graduate training in the 1920s.
The earliest stage, involving inventory science, trawling surveys, and natural history studies, continued to dominate into the 1930s within the European colonial diaspora. Meanwhile, scientists in Scandinavian countries, Britain, Germany, the United States, and Japan began developing quantitative fisheries science after 1900, incorporating hydrography, age-determination studies, and population dynamics. Norwegian biologist Johan Hjort's 1914 finding that the size of a large "year class" of juvenile fish is unrelated to the size of the spawning population created the central foundation and conundrum of later fisheries science. By the 1920s, fisheries scientists in Europe and America were striving to develop a theory of fishing. They attempted to develop predictive models that incorporated statistical and quantitative analysis of past fishing success, as well as quantitative values reflecting a species' population demographics, as a basis for predicting future catches and managing fisheries for sustainability. This research was supported by international scientific organizations such as the International Council for the Exploration of the Sea (ICES), the International Pacific Halibut Commission (IPHC), and the United Nations' Food and Agriculture Organization (FAO).
Both nationally and internationally, political entanglement was an inevitable feature of fisheries science. Beyond substituting their science for fishers' traditional and practical knowledge, many postwar fisheries scientists also brought progressive ideals into fisheries management, advocating fishing for a maximum sustainable yield. This in turn made it possible for governments, economists, and even scientists to use this nebulous target to project preferred social, political, and economic outcomes, while altogether discarding practical conservation measures that would rein in globalized postwar industrialized fishing. These ideals were also exported to nascent postwar fisheries science programs in developing Pacific and Indian Ocean nations and in Eastern Europe and Turkey.
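The maximum-sustainable-yield target can be illustrated with the Schaefer surplus-production model, a standard theory-of-fishing formulation (not named explicitly in the text above): a stock at biomass B grows by rB(1 - B/K) per year, so the largest harvest that growth can replace is rK/4, taken at B = K/2. The parameter values below are hypothetical.

```python
def surplus_production(biomass, r, K):
    """Schaefer logistic surplus production: annual growth available
    for harvest at a given stock biomass (r = intrinsic growth rate,
    K = carrying capacity)."""
    return r * biomass * (1 - biomass / K)

def msy(r, K):
    """Maximum sustainable yield, reached at biomass K/2: MSY = r*K/4."""
    return r * K / 4

# Hypothetical stock: r = 0.4 per year, carrying capacity 1,000,000 tonnes.
r, K = 0.4, 1_000_000
peak_yield = msy(r, K)                       # 100,000 t/yr
check = surplus_production(K / 2, r, K)      # same value, at B = K/2
```

The model's simplicity is also its weakness: it assumes a stable environment and ignores the recruitment variability Hjort identified, which is part of why MSY proved such a "nebulous target" in practice.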
The vision of mid-century triumphalist science, that industrial fisheries could be scientifically managed like any other industrial enterprise, was thwarted by commercial fish stock collapses, beginning slowly in the 1950s and accelerating after 1970, including the massive northern cod crisis of the early 1990s. In the 1980s scientists, aided by more powerful computers, attempted multi-species models to understand the different impacts of a fishery on various species. Daniel Pauly led the way with multi-species models for tropical fisheries, where the need for such was most urgent, and pioneered the global database FishBase, using fishing data collected by the FAO and national bodies. In Canada the cod crisis inspired Ransom Myers to use large databases for fisheries analysis to show the role of overfishing in causing that crisis. After 1980 population ecologists also demonstrated the importance of life history data for understanding fish species’ responses to fishery-induced population and environmental change.
With fishing continuing to shrink many global commercial stocks, scientists have demonstrated how different measures can manage fisheries for species with different life-history profiles. Aside from the need for effective scientific monitoring, the biggest ongoing challenges remain having politicians, governments, fisheries industry members, and other stakeholders commit to scientifically recommended long-term conservation measures.