Fisheries science emerged in the mid-19th century, when scientists volunteered to conduct conservation-related investigations of commercially important aquatic species for the governments of North Atlantic nations. Scientists also promoted oyster culture and fish hatcheries to sustain the aquatic harvests. Fisheries science fully professionalized with specialized graduate training in the 1920s.
The earliest stage, involving inventory science, trawling surveys, and natural history studies, continued to dominate into the 1930s within the European colonial diaspora. Meanwhile, scientists in Scandinavian countries, Britain, Germany, the United States, and Japan began developing quantitative fisheries science after 1900, incorporating hydrography, age-determination studies, and population dynamics. Norwegian biologist Johan Hjort’s 1914 finding that the size of a large “year class” of juvenile fish is unrelated to the size of the spawning population created the central foundation, and conundrum, of later fisheries science. By the 1920s, fisheries scientists in Europe and America were striving to develop a theory of fishing. They attempted to build predictive models that incorporated statistical and quantitative analysis of past fishing success, as well as quantitative values reflecting a species’ population demographics, as a basis for predicting future catches and managing fisheries for sustainability. This research was supported by international scientific organizations such as the International Council for the Exploration of the Sea (ICES), the International Pacific Halibut Commission (IPHC), and the United Nations’ Food and Agriculture Organization (FAO).
Both nationally and internationally, political entanglement was an inevitable feature of fisheries science. Beyond substituting their science for fishers’ traditional and practical knowledge, many postwar fisheries scientists also brought progressive ideals into fisheries management, advocating fishing for a maximum sustainable yield. This in turn made it possible for governments, economists, and even scientists to use this nebulous target to project preferred social, political, and economic outcomes, while altogether discarding practical conservation measures that might have reined in globalized postwar industrialized fishing. These ideals were also exported to nascent postwar fisheries science programs in developing Pacific and Indian Ocean nations and in Eastern Europe and Turkey.
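The “maximum sustainable yield” target discussed above has a classic formalization in the Schaefer surplus-production model, a standard textbook formulation not presented in the original text. A minimal sketch, with entirely hypothetical parameter values, shows why MSY is a single number that managers found so attractive:

```python
# Illustrative sketch of the Schaefer surplus-production model, the
# simplest formalization of "maximum sustainable yield" (MSY).
# Parameter values below are hypothetical, not from the article.

def surplus_production(biomass, r, k):
    """Logistic surplus production: annual growth available for harvest."""
    return r * biomass * (1 - biomass / k)

r = 0.5          # intrinsic growth rate (assumed)
k = 1_000_000    # carrying capacity in tonnes (assumed)

# Production peaks at half the carrying capacity; harvesting exactly
# that surplus each year holds the stock at B_msy indefinitely.
b_msy = k / 2
msy = surplus_production(b_msy, r, k)   # = r * k / 4

print(f"Biomass at MSY: {b_msy:,.0f} t")
print(f"MSY: {msy:,.0f} t/yr")
```

The model’s simplicity is also its weakness: it collapses age structure, recruitment variability (Hjort’s “year class” problem), and environmental change into two fixed parameters, which is one reason the MSY target proved so easy to politicize.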
The vision of mid-century triumphalist science, that industrial fisheries could be scientifically managed like any other industrial enterprise, was thwarted by commercial fish stock collapses, beginning slowly in the 1950s and accelerating after 1970, including the massive northern cod crisis of the early 1990s. In the 1980s scientists, aided by more powerful computers, attempted multi-species models to understand the different impacts of a fishery on various species. Daniel Pauly led the way with multi-species models for tropical fisheries, where the need for such was most urgent, and pioneered the global database FishBase, using fishing data collected by the FAO and national bodies. In Canada the cod crisis inspired Ransom Myers to use large databases for fisheries analysis to show the role of overfishing in causing that crisis. After 1980 population ecologists also demonstrated the importance of life history data for understanding fish species’ responses to fishery-induced population and environmental change.
With fishing continuing to shrink many global commercial stocks, scientists have demonstrated how different measures can manage fisheries for species with different life-history profiles. Aside from the need for effective scientific monitoring, the biggest ongoing challenge remains getting politicians, governments, fishing industry members, and other stakeholders to commit to scientifically recommended long-term conservation measures.
The emergence of the environment as a security imperative could have been avoided. Early indications showed that if governments did not pay attention to critical environmental issues, these would move up the security agenda. As far back as the Club of Rome’s 1972 report, Limits to Growth, the variables highlighted for policy makers included world population, industrialization, pollution, food production, and resource depletion, all of which affect how we live on this planet.
The term environmental security did not come into general use until the 2000s, but it had its first substantive framing in 1977, with Lester Brown’s Worldwatch Paper 14, “Redefining Security.” Brown argued that the traditional view of national security was based on the “assumption that the principal threat to security comes from other nations.” He went on to argue that future threats to security “may now arise less from the relationship of nation to nation and more from the relationship of man to nature.”
Of the major documents to come out of the 1992 Earth Summit, the Rio Declaration on Environment and Development is probably the first in which governments tried to frame environmental security. Principle 2 states: “States have, in accordance with the Charter of the United Nations and the principles of international law, the sovereign right to exploit their own resources pursuant to their own environmental and developmental policies, and the responsibility to ensure that activities within their jurisdiction or control do not cause damage to the environment of other States or of areas beyond the limits of national jurisdiction.”
In 1994, the UN Development Programme divided human security into distinct categories, including:
• Economic security (assured and adequate basic incomes).
• Food security (physical and affordable access to food).
• Health security.
• Environmental security (access to safe water, clean air and non-degraded land).
By the time of the World Summit on Sustainable Development in 2002, water had begun to be identified as a security issue, first at the Rio+5 conference, and as a food security issue at the 1996 FAO World Food Summit. In 2003, UN Secretary-General Kofi Annan set up a High-Level Panel on “Threats, Challenges, and Change” to help the UN prevent and remove threats to peace. It started to lay down new concepts of collective security, identifying six clusters for member states to consider. These included economic and social threats, such as poverty, infectious disease, and environmental degradation.
By 2007, health was being recognized as a part of the environmental security discourse, with World Health Day celebrating “International Health Security (IHS).” In particular, it looked at emerging diseases, economic stability, international crises, humanitarian emergencies, and chemical, radioactive, and biological terror threats. Environmental and climate changes have a growing impact on health. The 2007 Fourth Assessment Report (AR4) of the UN Intergovernmental Panel on Climate Change (IPCC) identified climate security as a key challenge for the 21st century. This was followed up in 2009 by the UCL-Lancet Commission on Managing the Health Effects of Climate Change—linking health and climate change.
In the run-up to Rio+20 and the launch of the Sustainable Development Goals, the inter-linkages of the climate-food-water-energy nexus were highlighted. The dialogue on environmental security has moved from a fringe discussion to the center of our political discourse, largely because of the lack of implementation of previous international agreements.
Aijun Ding, Xin Huang, and Congbin Fu
Air pollution is one of the grand environmental challenges in developing countries, especially those with high population density, such as China. High concentrations of primary and secondary trace gases and particulate matter (PM) are frequently observed in industrialized and urbanized regions, causing negative effects on the health of humans, plants, and ecosystems.
Meteorological conditions are among the most important factors influencing day-to-day air quality. Synoptic weather and boundary layer dynamics control the dispersion capacity and transport of air pollutants, while the main meteorological parameters, such as air temperature, radiation, and relative humidity, influence the chemical transformation of secondary air pollutants at the same time. Intense air pollution, especially high concentration of radiatively important aerosols, can substantially influence meteorological parameters, boundary layer dynamics, synoptic weather, and even regional climate through their strong radiative effects.
As one of the main monsoon regions, with the most intense human activities in the world, East Asia is a region experiencing complex air pollution, with sources from anthropogenic fossil fuel combustion, biomass burning, dust storms, and biogenic emissions. A mixture of these different plumes can cause substantial two-way interactions and feedbacks in the formation of air pollutants under various weather conditions. Improving the understanding of such interactions needs more field measurements using integrated multiprocess measurement platforms, as well as more efforts in developing numerical models, especially for those with online coupled processes. All these efforts are very important for policymaking from the perspectives of environmental protection and mitigation of climate change.
James B. London
Coastal zone management (CZM) has evolved since the enactment of the U.S. Coastal Zone Management Act of 1972, which was the first comprehensive program of its type. The newer iteration of Integrated Coastal Zone Management (ICZM), as applied to the European Union (2000, 2002), establishes priorities and a comprehensive strategy framework. While coastal management was established in large part to address issues of both development and resource protection in the coastal zone, conditions have changed. Accelerated rates of sea level rise (SLR) as well as continued rapid development along the coasts have increased vulnerability. The article examines changing conditions over time and the role of CZM and ICZM in addressing increased climate-related vulnerabilities along the coast.
The article argues that effective adaptation strategies will require a sound information base and an institutional framework that appropriately addresses the risk of development in the coastal zone. The information base has improved through recent advances in technology and geospatial data quality. Critical for decision-makers will be sound information to identify vulnerabilities, formulate options, and assess the viability of a set of adaptation alternatives. The institutional framework must include the political will to act decisively and send the right signals to encourage responsible development patterns. At the same time, as communities are likely to bear higher costs for adaptation, it is important that they are given appropriate tools to effectively weigh alternatives, including the cost avoidance associated with corrective action. Adaptation strategies must be pro-active and anticipatory. Failure to act strategically will be fiscally irresponsible.
Archis R. Ambulkar
Since the industrial revolution, societies across the globe have experienced significant urbanization and population growth. Newer technologies, industries, and manufacturing plants have evolved over this period to provide sophisticated infrastructure and amenities. To achieve this, communities have utilized and exploited natural resources, resulting in sustained environmental degradation and pollution. Among the various adverse ecological effects, nutrient contamination is posing serious problems for water bodies worldwide.
Nitrogen and phosphorus are basic constituents for the growth and reproduction of living organisms and occur naturally in soil, air, and water. However, human activities are disrupting their natural cycles and causing excessive discharges into surface and groundwater systems. Higher concentrations of nitrogen- and phosphorus-based nutrients in water resources lead to eutrophication, reduced sunlight penetration, lower dissolved oxygen levels, altered rates of plant growth and reproduction, and overall deterioration of water quality. Economically, this pollution can affect the fishing industry, recreational businesses, property values, and tourism. Also, using nutrient-polluted lakes or rivers as potable water sources may result in excess nitrates in drinking water, production of disinfection by-products, and associated health effects.
Nutrient contamination in water commonly originates from point and non-point sources. Point sources are specific discharge locations, such as wastewater treatment plants (WWTPs), industries, and municipal waste systems, whereas non-point sources are diffuse dischargers, such as agricultural lands and stormwater runoff. Compared to non-point sources, point sources are easier to identify, regulate, and treat. WWTPs receive sewage from domestic, business, and industrial settings. With growing pollution concerns, nutrient removal and recovery at treatment plants is gaining significant attention. Newer chemical and biological nutrient removal processes are emerging to treat wastewater. Nitrogen removal mainly involves nitrification-denitrification processes, whereas phosphorus removal includes biological uptake, chemical precipitation, or filtration. In regard to non-point sources, authorities are encouraging best management practices to control pollution loads to waterways.
Governments are opting for novel strategies like source nutrient reduction schemes, bioremediation processes, stringent effluent limits, and nutrient trading programs. Source nutrient reduction strategies such as discouraging or banning use of phosphorus-rich detergents and selective chemicals, industrial pretreatment programs, and stormwater management programs can be effective by reducing nutrient loads to WWTPs. Bioremediation techniques such as riparian areas, natural and constructed wetlands, and treatment ponds can capture nutrients from agricultural lands or sewage treatment plant effluents. Nutrient trading programs allow purchase/sale of equivalent environmental credits between point and non-point nutrient dischargers to manage overall nutrient discharges in watersheds at lower costs.
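The economic logic of the nutrient trading programs mentioned above can be sketched in a few lines: a point source facing a required load reduction compares the cost of in-plant removal against buying equivalent credits from non-point sources, typically with a trading ratio greater than 1:1 to offset the uncertainty of non-point reductions. All figures and names below are illustrative assumptions, not from the article:

```python
# Hypothetical sketch of nutrient-trading arithmetic. A WWTP must reduce
# its nitrogen load and chooses the cheaper of two compliance paths:
# upgrading its own treatment, or buying agricultural BMP credits.

def cheapest_option(load_kg, upgrade_cost_per_kg, credit_price_per_kg, trading_ratio):
    """Return (option, total_cost). A trading ratio > 1 means the buyer
    must purchase extra non-point credits to cover estimation uncertainty."""
    upgrade_cost = load_kg * upgrade_cost_per_kg
    credit_cost = load_kg * trading_ratio * credit_price_per_kg
    if upgrade_cost <= credit_cost:
        return ("upgrade", upgrade_cost)
    return ("buy credits", credit_cost)

option, cost = cheapest_option(
    load_kg=10_000,             # required N reduction (kg/yr), assumed
    upgrade_cost_per_kg=40.0,   # in-plant removal cost ($/kg), assumed
    credit_price_per_kg=12.0,   # agricultural credit price ($/kg), assumed
    trading_ratio=2.0,          # 2:1 ratio, common in watershed programs
)
print(option, cost)  # buy credits 240000.0 (vs. a $400,000 upgrade)
```

Even with the 2:1 ratio, the diffuse-source credits are cheaper here, which is precisely how trading programs lower the watershed-wide cost of meeting a total nutrient cap.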
Nutrient pollution impacts are quite evident and documented in many parts of the world. Governments and environmental organizations are undertaking several waterways remediation projects to improve water quality and restore aquatic ecosystems. Shrinking freshwater reserves and rising water demands are compelling communities to make efficient use of the available water resources. With smarter choices and useful strategies, nutrient pollution in the water can be contained to a reasonable extent. As responsible members of the community, it is important for us to understand this key environmental issue as well as to learn the current and future needs to alleviate this problem.
Wim De Vries, Enzai Du, Klaus Butterbach-Bahl, Lena Schulte-Uebbing, and Frank Dentener
Human activities have rapidly accelerated global nitrogen (N) cycling since the late 19th century. This acceleration has manifold impacts on ecosystem N and carbon (C) cycles, and thus on emissions of the greenhouse gases nitrous oxide (N2O), carbon dioxide (CO2), and methane (CH4), which contribute to climate change.
First, elevated N use in agriculture leads to increased direct N2O emissions. Second, it leads to emissions of ammonia (NH3), nitric oxide (NO), and nitrogen dioxide (NO2) and leaching of nitrate (NO3−), which cause indirect N2O emissions from soils and waterbodies. Third, N use in agriculture may also cause changes in CO2 exchange (emission or uptake) in agricultural soils due to N fertilization (direct effect) and in non-agricultural soils due to atmospheric NHx (NH3+NH4) deposition (indirect effect). Fourth, NOx (NO+NO2) emissions from combustion processes and from fertilized soils lead to elevated NOy (NOx+ other oxidized N) deposition, further affecting CO2 exchange. As most (semi-) natural terrestrial ecosystems and aquatic ecosystems are N limited, human-induced atmospheric N deposition usually increases net primary production (NPP) and thus stimulates C sequestration. NOx emissions, however, also induce tropospheric ozone (O3) formation, and elevated O3 concentrations can lead to a reduction of NPP and plant C sequestration. The impacts of human N fixation on soil CH4 exchange are insignificant compared to the impacts on N2O and CO2 exchange (emissions or uptake). Ignoring shorter lived components and related feedbacks, the net impact of human N fixation on climate thus mainly depends on the magnitude of the cooling effect of CO2 uptake as compared to the magnitude of the warming effect of (direct and indirect) N2O emissions.
The estimated impact of human N fixation on N2O emissions is 8.0 (7.0–9.0) Tg N2O-N yr−1, which equals 1.02 (0.89–1.15) Pg CO2-C equivalents (eq) yr−1. The estimated CO2 uptake due to N inputs to terrestrial, freshwater, and marine ecosystems equals −0.75 (−0.56 to −0.97) Pg CO2-C eq yr−1. At present, the impact of human N fixation on increased CO2 sequestration thus largely (on average near 75%) compensates for the stimulating effect on N2O emissions. In the long term, however, effects on ecosystem CO2 sequestration are likely to diminish due to growth limitations by other nutrients, such as phosphorus. Furthermore, N-induced O3 exposure reduces CO2 uptake, causing a net C loss of 0.14 (0.07–0.21) Pg CO2-C eq yr−1. Consequently, human N fixation causes an overall increase in net greenhouse gas emissions from global ecosystems, estimated at 0.41 (−0.01 to 0.80) Pg CO2-C eq yr−1. Even when considering all uncertainties, it is likely that human N inputs lead to a net increase in global greenhouse gas emissions.
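The budget quoted above is a simple sum of three terms, which can be checked directly. The central values are taken from the text; only the summation itself is added here:

```python
# Check of the global budget: net climate effect of human N fixation
# = N2O warming + O3-induced carbon loss - N-stimulated CO2 uptake,
# all in Pg CO2-C equivalents per year (central estimates from the text).

n2o_warming = 1.02   # direct + indirect N2O emissions (warming)
co2_uptake = -0.75   # N-stimulated C sequestration (negative = cooling)
o3_c_loss = 0.14     # reduction of CO2 uptake from N-induced O3 exposure

net = n2o_warming + co2_uptake + o3_c_loss
print(f"Net effect: {net:.2f} Pg CO2-C eq/yr")  # 0.41

# Share of the N2O warming offset by CO2 uptake ("near 75%" in the text)
print(f"Compensation by CO2 uptake: {abs(co2_uptake) / n2o_warming:.0%}")  # 74%
```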
These estimates are based on the most recent science and modeling approaches with respect to: (i) N inputs to various ecosystems, including NH3 and NOx emission estimates and related atmospheric N (NH3 and NOx) deposition and O3 exposure; (ii) N2O emissions in response to N inputs; and (iii) carbon exchange in response to N inputs (C–N response) and O3 exposure (C–O3 response), focusing on the global scale. Apart from presenting the current knowledge, this article also gives an overview of changes in the estimates of these fluxes and C–N response factors over time, including debates on C–N responses in the literature, the uncertainties in the various estimates, and the potential for improving them.
Vincent Moreau and Guillaume Massard
The concept of metabolism has its roots in biology and ecology as a systematic way to account for material flows in organisms and ecosystems. Early applications of the concept attempted to quantify the amount of water and food the human body processes to live and sustain itself. Similarly, ecologists have long studied the metabolism of critical substances and nutrients in ecological succession toward climax. With industrialization, the material and energy requirements of modern economic activities have grown exponentially, together with emissions to the air, water, and soil. From an analogy with ecosystems, the concept of metabolism grew into an analytical methodology for economic systems.
Research in the field of material flow analysis has developed approaches to modeling economic systems by assessing the stocks and flows of substances and materials for systems defined in space and time. Material flow analysis encompasses different methods: industrial and urban metabolism, input–output analysis, economy-wide material flow accounting, socioeconomic metabolism, and more recently material flow cost accounting. Each method has specific scales, reference substances such as metals, and indicators such as concentration. A material flow analysis study usually consists of a total of four consecutive steps: (a) system definition, (b) data acquisition, (c) calculation, and (d) interpretation. The law of conservation of mass underlies every application, which implies that all material flows, as well as stocks, must be accounted for.
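The mass-conservation principle underlying the four steps above can be sketched in a few lines: for every process in the defined system, inputs must equal outputs plus the change in stock. The process and flow values below are hypothetical, chosen only to illustrate the bookkeeping:

```python
# Minimal sketch of the mass balance at the core of material flow
# analysis: inflows = outflows + change in stock for each process.
# The "urban copper" numbers are illustrative assumptions.

def stock_change(inflows, outflows):
    """Conservation of mass: material that enters and does not leave
    accumulates in (or, if negative, is drawn from) the stock."""
    return sum(inflows) - sum(outflows)

# Hypothetical urban copper balance, in kt/yr
inflows = [120.0, 15.0]        # imports, domestic production
outflows = [60.0, 30.0, 25.0]  # exports in products, recycling, landfill

delta_stock = stock_change(inflows, outflows)
print(f"Net stock accumulation: {delta_stock} kt/yr")  # 20.0
```

In a full study this balance is applied to every process in the system definition (step a), which is what forces data gaps to the surface during data acquisition and calculation (steps b and c).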
In the early 21st century, material depletion, accumulation, and recycling are well-established cases of material flow analysis. Diagnostics and forecasts, as well as historical or backcast analyses, are ideally performed in a material flow analysis, to identify shifts in material consumption for product life cycles or physical accounting and to evaluate the material and energy performance of specific systems.
In practice, material flow analysis supports policy and decision making in urban planning, energy planning, economic and environmental performance assessment, the development of industrial symbiosis and eco-industrial parks, the closing of material loops in a circular economy, pollution remediation and control, and material and energy supply security. Although material flow analysis assesses the amount and fate of materials and energy rather than their environmental or human health impacts, a tacit assumption holds that reduced material throughput limits such impacts.
Elisabet Lindgren and Thomas Elmqvist
Ecosystem services refer to the benefits that human societies and well-being obtain from ecosystems. Research on the health effects of ecosystem services has until recently focused mostly on the beneficial effects on physical and mental health of spending time in nature or having access to urban green space. However, nearly all of the different ecosystem services may affect health, either directly or indirectly. Ecosystem services can be divided into provisioning services, which provide food and water; regulating services, which provide, for example, clean air, moderate extreme events, and regulate the local climate; supporting services, which help maintain biodiversity and infectious disease control; and cultural services.
With a rapidly growing global population, the demand for food and water will increase. Knowledge about ecosystems will provide opportunities for sustainable agriculture production in both terrestrial and marine environments. Diarrheal diseases and associated childhood deaths are strongly linked to poor water quality, sanitation, and hygiene. Even though improvements are being made, nearly 750 million people still lack access to reliable water sources. Ecosystems such as forests, wetlands, and lakes capture, filter, and store water used for drinking, irrigation, and other human purposes. Wetlands also store and treat solid waste and wastewater, and such ecosystem services could become of increasing use for sustainable development.
Ecosystems contribute to local climate regulation and are important for climate change mitigation and adaptation. Coastal ecosystems, such as mangroves and coral reefs, act as natural barriers against storm surges and flooding. Flooding is associated with increased risk of deaths, epidemic outbreaks, and negative health impacts from destroyed infrastructure. Vegetation reduces the risk of flooding, including in cities, by increasing permeability and reducing surface runoff following precipitation events.
The urban heat island effect increases city-center temperatures during heatwaves. The elderly, people with chronic cardiovascular and respiratory diseases, and outdoor workers in cities where temperatures soar during heatwaves are particularly vulnerable to heat. Vegetation, and especially trees, helps reduce temperatures in several ways, through shading and evapotranspiration. Air pollution increases mortality and morbidity risks during heatwaves. Vegetation has also been shown to contribute to improved air quality by filtering out gases and airborne particulates, depending on the plant species. Greenery also has a noise-reducing effect, thereby decreasing noise-related illnesses and annoyances. Biological control uses knowledge of ecosystems and biodiversity to help control human and animal diseases.
Natural surroundings and urban parks and gardens have direct beneficial effects on people’s physical and mental health and well-being. Increased physical activities have well-known health benefits. Spending time in natural environments has also been linked to aesthetic benefits, life enrichments, social cohesion, and spiritual experience. Even living close to or with a view of nature has been shown to reduce stress and increase a sense of well-being.
Confidence in the projected impacts of climate change on agricultural systems has increased substantially since the first Intergovernmental Panel on Climate Change (IPCC) reports. In Africa, much work has gone into downscaling global climate models to understand regional impacts, but there remains a dearth of local-level understanding of impacts and of communities’ capacity to adapt. It is well understood that Africa is vulnerable to climate change, not only because of its high exposure, but also because many African communities lack the capacity to respond or adapt to its impacts. Warming trends have already become evident across the continent, and it is likely that by 2100 the continent’s mean annual temperature will rise more than 2°C above its 2000 level. Added to this warming trend, changes in precipitation patterns are also of concern: Even if rainfall remains constant, increasing temperatures will amplify existing water stress, putting even more pressure on agricultural systems, especially in semiarid areas. In general, high temperatures and changes in rainfall patterns are likely to reduce cereal crop productivity, and new evidence is emerging that high-value perennial crops will also be negatively impacted by rising temperatures. Pressures from pests, weeds, and diseases are also expected to increase, with detrimental effects on crops and livestock.
Much of African agriculture’s vulnerability to climate change lies in the fact that its agricultural systems remain largely rain-fed and underdeveloped, as the majority of Africa’s farmers are small-scale farmers with few financial resources, limited access to infrastructure, and disparate access to information. At the same time, as these systems are highly reliant on their environment, and farmers are dependent on farming for their livelihoods, their diversity, context specificity, and the existence of generations of traditional knowledge offer elements of resilience in the face of climate change. Overall, however, the combination of climatic and nonclimatic drivers and stressors will exacerbate the vulnerability of Africa’s agricultural systems to climate change, but the impacts will not be universally felt. Climate change will impact farmers and their agricultural systems in different ways, and adapting to these impacts will need to be context-specific.
Adaptation efforts are increasing across the continent, but in the long term these are expected to be insufficient to enable communities to cope with longer-term climate change. African farmers are increasingly adopting a variety of conservation and agroecological practices, such as agroforestry, contouring, terracing, mulching, and no-till. These practices have the twin benefits of lowering carbon emissions while adapting to climate change, as well as broadening the sources of livelihoods for poor farmers, but there are constraints to their widespread adoption. These challenges vary from insecure land tenure to difficulties with knowledge-sharing.
While African agriculture faces exposure to climate change as well as broader socioeconomic and political challenges, many of its diverse agricultural systems remain resilient. As the continent with the highest population growth rate, rapid urbanization trends, and rising GDP in many countries, Africa’s agricultural systems will need to become adaptive to more than just climate change as the uncertainties of the 21st century unfold.
David E. Clay, Sharon A. Clay, Thomas DeSutter, and Cheryl Reese
Since the discovery that food security could be improved by pushing seeds into the soil and later harvesting a desirable crop, agriculture and agronomy have gone through cycles of discovery, implementation, and innovation. Discoveries have produced predicted and unpredicted impacts on the production and consumption of locally produced foods. Changes in technology, such as the development of the self-scouring steel plow in the 19th century, provided a critical tool needed to cultivate and seed annual crops in the Great Plains of North America. However, plowing the Great Plains would not have been possible without the domestication of plants and animals and the discovery of the yoke and harness. Associated with plowing the prairies were extensive soil nutrient mining, a rapid loss of soil carbon, and increased wind and water erosion. More recently, the development of genetically modified organisms (GMOs) and no-tillage planters has contributed to increased adoption of conservation tillage, which is less damaging to the soil. In the future, the ultimate impact of climate change on agronomic practices in the North American Great Plains is unknown. However, projected increasing temperatures and decreased rainfall in the southern Great Plains (SGP) will likely reduce agricultural productivity. Different results are likely in the northern Great Plains (NGP), where higher temperatures can lead to increased agricultural intensification, the conversion of grassland to cropland, increased wildlife habitat fragmentation, and increased soil erosion. Precision farming, conservation, cover crops, and the creation of plants better adapted to their local environments can help mitigate these effects. However, changing practices requires that farmers and their advisers understand the limitations of their soils, plants, environment, and production systems.
Failure to implement appropriate management practices can result in a rapid decline in soil productivity, diminished water quality, and reduced wildlife habitat.