Along with ceramics production, sedentism, and herding, agriculture is a major component of the Neolithic as it is defined in Europe. The agricultural system of the first Neolithic societies and the dispersal of exogenous cultivated plants to Europe are therefore the subject of many scientific studies. To address these issues, archaeobotanists rely on preserved plant remains (crop seeds, weeds, and wild plants) from archaeological structures such as refuse pits and, less often, storage contexts. To date, no plant with economic value has been identified as having been domesticated in Western Europe, except possibly the opium poppy. The earliest seeds identified at archaeological sites date to about 5500–5200
The Neolithic pioneers settled in an area with a long tradition of hunting and gathering. The Neolithization of Europe followed a colonization model. The Mesolithic groups, although they exploited plant resources such as hazelnut more or less intensively, did not significantly change the landscape; the impact of their settlements and activities is hardly noticeable through palynology, for example. Control over the mode of reproduction of plants certainly increased the prevalence of Homo sapiens, bringing, among other things, demographic growth and the ability to settle in areas that until then had been poorly suited to year-round occupation. The characterization of past agricultural systems, including crop plants, technical processes, and the impact of anthropogenic activities on the landscape, is essential for understanding the interrelation of human societies and the plant environment. This interrelation undoubtedly changed deeply with the Neolithic Revolution.
Worldwide, governments subsidize agriculture at a rate of approximately 1 billion dollars per day. This figure roughly doubles when subsidies for exports and biofuel production and state financing for dams and river-basin engineering are included. These policies guide land use in numerous ways, including growers’ choices of crops and buyers’ demand for commodities. Three types of state subsidy shape land use and the environment: land settlement programs, price and income supports, and energy and emissions initiatives. Together these subsidies have created perennial surpluses in global stores of cereal grains, cotton, and dairy, with production increases outstripping population growth. Subsidies for land settlement, crop prices, and the processing and refining of cereals and fiber can therefore be shown to have independent and largely deleterious effects on soil fertility, freshwater supplies, biodiversity, and atmospheric carbon.
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science. Please check back later for the full article.
In 1945, the Amazon biome was still mostly intact. The scars of ancient cultural developments in the Andes and lowland Amazon had healed, and the impacts of rubber extraction and other resource exploitation were reversible. Very few roads existed, and only on the periphery. In the 1950s, and especially in the 1960s, Brazil and several Andean countries launched ambitious road-building and colonization projects, driven largely by Brazilian geopolitical concerns. Interest in the Amazon intensified in the 1970s as forest loss began to raise worldwide concern. Construction of more and better roads continued at an exponentially growing pace in each following decade, multiplying correlated deforestation and forest degradation throughout the Amazon. A point of no return was reached in the 2000s, when interoceanic roads crossed the borders between Brazil and the Andean countries, exposing the remaining safe havens for indigenous people and nature. It is commonly estimated that today no less than 18% of the forest has been replaced by agriculture and that more than 50% of the remaining forest is significantly degraded. Most deforested land, especially in the Andean countries, is wasted or scarcely used. Oil extraction, mining, and intense urban development, as well as intensive agriculture, have spread serious water and soil contamination throughout the region. Logging, fisheries, and hunting have driven the successive commercial extinction of valuable species.
Theories regarding the importance of biogeochemical cycles had been in development since the 1970s, but it was in the late 1980s that the dominant popular view of the environmental value of the Amazon as the “lungs of the planet” emerged. Confirmation of the Amazon’s role as a carbon sink added some international pressure for its protection. In general, however, the many scientific discoveries regarding the Amazon have not been helpful in improving its conservation. Instead, a combination of new agricultural technologies, anthropocentric philosophies, and economic changes has strongly promoted forest clearing.
From the 1980s to the present day, Amazon conservation efforts have increasingly diversified and now consist of five theoretically complementary strategies: (1) the creation of more, larger, and better-managed protected areas, including biological corridors; (2) the protection of more and larger indigenous territories; (3) the promotion of a series of “sustainable use” options such as “community-based conservation,” sustainable forestry, and agroforestry; (4) the financing of conservation through debt swaps and related financial mechanisms for mitigating climate change; and (5) the use of better legislation, monitoring, and control. Five small protected areas have existed in the Amazon since the early 1960s, but it was in response to the road-building boom of the 1970s that several larger patches of forest were set aside with the aim of conserving viable samples of biological diversity. Today, around 25% of the Amazon is designated as protected areas, but almost half of these areas are categorized in a way that allows human presence and resource exploitation, and there is no effective management. Another 25.3% is designated to indigenous peoples, who may or may not conserve the forest. Excluding areas of overlap, the two types of protected areas together cover 41.2% of the Amazon. Neither strategy has fully achieved its objective, alone or in combination, and development pressures and threats grow as road construction and deforestation continue relentlessly, with increasing funding from multilateral and national banks and pressure from transnational enterprises.
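Taken together, the coverage figures above imply a measurable overlap between the two designations. A minimal arithmetic sketch, using inclusion-exclusion on the percentages given in the text (the figures are approximate, so the result is only indicative):

```python
# Coverage figures for the Amazon, as given in the text (approximate):
protected = 25.0    # % of the Amazon designated as protected areas
indigenous = 25.3   # % designated to indigenous peoples
combined = 41.2     # % covered by either designation, counting overlap once

# Inclusion-exclusion: |A or B| = |A| + |B| - |A and B|, so the overlap is:
overlap = protected + indigenous - combined
print(f"Implied overlap: {overlap:.1f}% of the Amazon")  # prints 9.1
```

In other words, roughly 9% of the Amazon appears to carry both designations at once, which is why the combined figure (41.2%) is smaller than the simple sum of the two coverages.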
The future will be directed by unprecedented agricultural expansion and the corresponding intensification of deforestation and forest degradation. Additionally, the Amazon basin will be impacted by new, larger hydraulic works. Mining will increase and spread. Policy makers of Amazon countries still view the region as the future for expanding conventional development, and the population continues to be indifferent.
The emergence of environment as a security imperative is something that could have been avoided. Early indications showed that if governments did not pay attention to critical environmental issues, these would move up the security agenda. As far back as the Club of Rome 1972 report, Limits to Growth, variables highlighted for policy makers included world population, industrialization, pollution, food production, and resource depletion, all of which impact how we live on this planet.
The term environmental security did not come into general use until the 2000s, but it had its first substantive framing in 1977, with Lester Brown’s Worldwatch Paper 14, “Redefining Security.” Brown argued that the traditional view of national security rested on the “assumption that the principal threat to security comes from other nations.” He went on to argue that threats to future security “may now arise less from the relationship of nation to nation and more from the relationship of man to nature.”
Of the major documents to come out of the 1992 Earth Summit, the Rio Declaration on Environment and Development probably represents the first time governments tried to frame environmental security. Principle 2 states: “States have, in accordance with the Charter of the United Nations and the principles of international law, the sovereign right to exploit their own resources pursuant to their own environmental and developmental policies, and the responsibility to ensure that activities within their jurisdiction or control do not cause damage to the environment of other States or of areas beyond the limits of national jurisdiction.”
In 1994, the UN Development Programme defined human security in terms of distinct categories, including:
• Economic security (assured and adequate basic incomes).
• Food security (physical and affordable access to food).
• Health security.
• Environmental security (access to safe water, clean air and non-degraded land).
By the time of the World Summit on Sustainable Development in 2002, water had been identified as a security issue, first at the Rio+5 conference, and as a food security issue at the 1996 FAO World Food Summit. In 2003, UN Secretary-General Kofi Annan set up a High-Level Panel on “Threats, Challenges, and Change” to help the UN prevent and remove threats to peace. The panel began to lay down new concepts of collective security, identifying six clusters of threats for member states to consider. These included economic and social threats, such as poverty, infectious disease, and environmental degradation.
By 2007, health was being recognized as a part of the environmental security discourse, with World Health Day celebrating “International Health Security (IHS).” In particular, it looked at emerging diseases, economic stability, international crises, humanitarian emergencies, and chemical, radioactive, and biological terror threats. Environmental and climate changes have a growing impact on health. The 2007 Fourth Assessment Report (AR4) of the UN Intergovernmental Panel on Climate Change (IPCC) identified climate security as a key challenge for the 21st century. This was followed in 2009 by the UCL-Lancet Commission on Managing the Health Effects of Climate Change, linking health and climate change.
In the run-up to Rio+20 and the launch of the Sustainable Development Goals, the climate-food-water-energy nexus, or rather the inter-linkages among these issues, was highlighted. The dialogue on environmental security has moved from a fringe discussion to being central to our political discourse, in large part because of the lack of implementation of previous international agreements.
Russian environmental history is a new field of inquiry, with the first archivally based monographs appearing only in the last years of the 20th century. Despite the field’s youth, scholars studying the topic have developed two distinct and contrasting approaches to its central question: How should the relationship between Russian culture and the natural world be characterized? Implicit in this question are two others: Is the Russian attitude toward the non-human world more sensitive than that which prevails in the West; and if so, is the Russian environment healthier or more stable than that of the United States and Western Europe? In other words, does Russia, because of its traditional suspicion of individualism and consumerism, have something to teach the West? Or, on the contrary, has the Russian historical tendency toward authoritarianism and collectivism facilitated predatory policies that have degraded the environment? Because environmentalism as a political movement and environmental history as an academic subject both emerged during the Cold War, at a time when the Western social, political, and economic system vied with the Soviet approach for support around the world, the comparative (and competitive) aspect of Russian environmental history has always been an important factor, although sometimes an implicit one. Accordingly, the existing scholarly works about Russian environmental history generally fall into one of two camps: one very critical of the Russian environmental record and the seeming disregard of the Russian government for environmental damage, and a somewhat newer group of works that draw attention to the fundamentally different concerns that motivate Russian environmental policies. The first group emphasizes Russian environmental catastrophes such as the desiccated Aral Sea, the eroded Virgin Lands, and the public health epidemics related to the severely polluted air of Soviet industrial cities. 
The environmental crises that the first group cites are, most often, problems once prevalent in the West, but successfully ameliorated by the environmental legislation of the late 1960s and early 1970s. The second group, in contrast, highlights Russian environmental policies that do not have strict Western analogues, suggesting that a thorough comparison of the Russian and Western environmental records requires, first of all, a careful examination of what constitutes environmental responsibility.
Fisheries science emerged in the mid-19th century, when scientists volunteered to conduct conservation-related investigations of commercially important aquatic species for the governments of North Atlantic nations. Scientists also promoted oyster culture and fish hatcheries to sustain the aquatic harvests. Fisheries science fully professionalized with specialized graduate training in the 1920s.
The earliest stage, involving inventory science, trawling surveys, and natural history studies, continued to dominate into the 1930s within the European colonial diaspora. Meanwhile, scientists in Scandinavian countries, Britain, Germany, the United States, and Japan began developing quantitative fisheries science after 1900, incorporating hydrography, age-determination studies, and population dynamics. Norwegian biologist Johan Hjort’s 1914 finding that the size of a large “year class” of juvenile fish is unrelated to the size of the spawning population created the central foundation and conundrum of later fisheries science. By the 1920s, fisheries scientists in Europe and America were striving to develop a theory of fishing. They attempted to develop predictive models that incorporated statistical and quantitative analysis of past fishing success, as well as quantitative values reflecting a species’ population demographics, as a basis for predicting future catches and managing fisheries for sustainability. This research was supported by international scientific organizations such as the International Council for the Exploration of the Sea (ICES), the International Pacific Halibut Commission (IPHC), and the United Nations’ Food and Agriculture Organization (FAO).
Both nationally and internationally, political entanglement was an inevitable feature of fisheries science. Beyond substituting their science for fishers’ traditional and practical knowledge, many postwar fisheries scientists also brought progressive ideals into fisheries management, advocating fishing for a maximum sustainable yield. This in turn made it possible for governments, economists, and even scientists, to use this nebulous target to project preferred social, political, and economic outcomes, while altogether discarding any practical conservation measures to rein in globalized postwar industrialized fishing. These ideals were also exported to nascent postwar fisheries science programs in developing Pacific and Indian Ocean nations and in Eastern Europe and Turkey.
The vision of mid-century triumphalist science, that industrial fisheries could be scientifically managed like any other industrial enterprise, was thwarted by commercial fish stock collapses, beginning slowly in the 1950s and accelerating after 1970, including the massive northern cod crisis of the early 1990s. In the 1980s, scientists, aided by more powerful computers, attempted multi-species models to understand the different impacts of a fishery on various species. Daniel Pauly led the way with multi-species models for tropical fisheries, where the need for such models was most urgent, and pioneered the global database FishBase, using fishing data collected by the FAO and national bodies. In Canada, the cod crisis inspired Ransom Myers to use large databases for fisheries analysis to show the role of overfishing in causing that crisis. After 1980, population ecologists also demonstrated the importance of life history data for understanding fish species’ responses to fishery-induced population and environmental change.
With fishing continuing to shrink many global commercial stocks, scientists have demonstrated how different measures can manage fisheries for species with different life-history profiles. Aside from the need for effective scientific monitoring, the biggest ongoing challenge remains getting politicians, governments, fishing industry members, and other stakeholders to commit to scientifically recommended long-term conservation measures.
David E. Clay, Sharon A. Clay, Thomas DeSutter, and Cheryl Reese
Since the discovery that food security could be improved by pushing seeds into the soil and later harvesting a desirable crop, agriculture and agronomy have gone through cycles of discovery, implementation, and innovation. Discoveries have produced predicted and unpredicted impacts on the production and consumption of locally produced foods. Changes in technology, such as the development of the self-scouring steel plow in the 19th century, provided a critical tool needed to cultivate and seed annual crops in the Great Plains of North America. However, plowing the Great Plains would not have been possible without the domestication of plants and animals and the invention of the yoke and harness. Plowing the prairies was associated with extensive soil nutrient mining, a rapid loss of soil carbon, and increased wind and water erosion. More recently, the development of genetically modified organisms (GMOs) and no-tillage planters has contributed to increased adoption of conservation tillage, which is less damaging to the soil. The ultimate impact of climate change on agronomic practices in the North American Great Plains is unknown. However, projected increasing temperatures and decreased rainfall in the southern Great Plains (SGP) will likely reduce agricultural productivity. Different results are likely in the northern Great Plains (NGP), where higher temperatures can lead to increased agricultural intensification, the conversion of grassland to cropland, increased wildlife habitat fragmentation, and increased soil erosion. Precision farming, conservation practices, cover crops, and the breeding of plants better adapted to their local environments can help mitigate these effects. However, changing practices requires that farmers and their advisers understand the limitations of their soils, plants, environment, and production systems.
Failure to implement appropriate management practices can result in a rapid decline in soil productivity, diminished water quality, and reduced wildlife habitat.
Céline Granjou and Isabelle Arpin
The recent implementation of the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES) is a major cornerstone of the transformation of international environmental governance in the early 21st century. Often presented as “the Intergovernmental Panel on Climate Change (IPCC) for biodiversity,” the IPBES aims to produce regular expert assessments of the state and evolution of biodiversity and ecosystems at the local, regional, and global levels. Its creation was promoted in the 1990s by biodiversity scientists and nongovernmental organizations (NGOs), who increasingly came to view the failure to achieve effective conservation of biodiversity and ecosystems as a consequence of the gap between science and policy rather than of a lack of knowledge. Articulating and building new proximities between nature conservation and social development was thus viewed from the beginning as critical to creating the new platform. The IPBES creation process was also rooted in the idea that biodiversity conservation required the implementation of a science-policy interface in which governments would be truly involved, in a similar way as in the IPCC. From 2008 onward, the project was called IPBES, a name that referred to the notion of ecosystem services, defined and popularized by the Millennium Ecosystem Assessment as the services that ecosystems render to people. Its creation was entrusted to the United Nations Environment Programme (UNEP).
The relevance, organization, and missions of the new institution were strongly discussed and debated in a series of multistakeholder meetings convened by the UNEP from 2008 up to the official creation of the IPBES in 2012. Social science scholarship has highlighted two main tensions in the genesis and further implementation of the IPBES. The first opposed various views of what counts as legitimate knowledge on biodiversity and ecosystems: while promoters of a “purified science” standard aimed to achieve a science-based institution drawing on peer-reviewed expert opinions (following the model of academic science), promoters of a more open and inclusive definition of biodiversity knowledge advocated a broader recognition of the relevance of types of knowledge beyond academia, such as “traditional ecological knowledge.” The second tension concerned two contrasting conceptions of nature and human/nature relations, opposing the ecosystem services framework promoted by Western countries and inspired by the Millennium Ecosystem Assessment and the Mother Earth notion promoted by a number of South American countries. While some of the research scrutinizing how the IPBES addressed these tensions insisted on the supremacy of Western utilitarian approaches to nature embodied in the notion of ecosystem services, other social scientists emphasized that the IPBES endeavored to encompass the various approaches to nature and to handle them by experimenting with new inclusive organizational forms and notions. They also emphasized that bridging science and policy is a collective, ongoing, and fragile achievement that requires diplomatic skill from institutional leaders, who must handle the shifting tensions between participants and build a truly inclusive platform.
The IPBES may thus be considered a new, emergent institutional model for organizing science/policy interfaces in the early 21st century, focusing on the production of assessments both scientifically robust and socially inclusive in order to address the unprecedented threats to biodiversity and ecosystems in a time of global change.
Simon Holdaway and Rebecca Phillipps
Northeast Africa forms an interesting case study for investigating the relationship between changes in environment and agriculture. Major climatic changes in the early Holocene led to dramatic changes in the environment of the eastern Sahara and to the habitation of previously uninhabitable regions. Research programs in the eastern Sahara have uncovered a wealth of archaeological evidence for sustained occupation during the African Humid Period, from about 11,000 years ago. Initial studies of faunal remains seemed to indicate early shifts in economic practice toward cattle pastoralism. Although this interpretation was much debated when it was first proposed, the possibility of early pastoralism stimulated discussion concerning the relationships between people and animals in particular environmental contexts, and ultimately led to questions concerning the role of agriculture imported from elsewhere in contrast to local developments. Did agriculture, or indeed cultivation and domestication more generally (sensu Fuller & Hildebrand, 2013), develop in North Africa, or were the concepts and species imported from Southwest Asia? And if agriculture did spread from elsewhere, were just the plants and animals involved, or was the shift part of a full socioeconomic suite that included new subsistence strategies, settlement patterns, technologies, and an agricultural “culture”? And finally, was this shift, wherever and however it originated, related to changes in the environment during the early to mid-Holocene?
These questions refer to the “big ideas” that archaeologists explore, but before answers can be formed it is important to consider the nature of the material evidence on which they are based. Archaeologists must consider not only what they discover but also what might be missing. Materials from the past are preserved only in certain places, and of course some materials can be preserved better than others. In addition, people left behind the material remains of their activities, but in doing so they did not intend these remains to be an accurate historical record of their actions. Archaeologists need to consider how the remains found in one place may inform us about a range of activities that occurred elsewhere for which the evidence may be less abundant or missing. This is particularly true for Northeast Africa where environmental shifts and consequent changes in resource abundance often resulted in considerable mobility. This article considers the origins of agriculture in the region covering modern-day Egypt and Sudan, paying particular attention to the nature of the evidence from which inferences about past socioeconomies may be drawn.
Noa Kekuewa Lincoln and Peter Vitousek
Agriculture in Hawaiʻi was developed in response to the high spatial heterogeneity of climate and landscape of the archipelago, resulting in a broad range of agricultural strategies. Over time, highly intensive irrigated and rainfed systems emerged, supplemented by extensive use of more marginal lands that supported considerable populations. Due to the late colonization of the islands, the pathways of development are fairly well reconstructed in Hawaiʻi. The earliest agricultural developments took advantage of highly fertile areas with abundant freshwater, utilizing relatively simple techniques such as gardening and shifting cultivation. Over time, investments into land-based infrastructure led to the emergence of irrigated pondfield agriculture found elsewhere in Polynesia. This agricultural form was confined by climatic and geomorphological parameters, and typically occurred in wetter, older landscapes that had developed deep river valleys and alluvial plains. Once initiated, these wetland systems saw regular, continuous development and redevelopment. As populations expanded into areas unable to support irrigated agriculture, highly diverse rainfed agricultural systems emerged that were adapted to local environmental and climatic variables. Development of simple infrastructure over vast areas created intensive rainfed agricultural systems that were unique in Polynesia. Intensification of rainfed agriculture was confined to areas of naturally occurring soil fertility that typically occurred in drier and younger landscapes in the southern end of the archipelago. Both irrigated and rainfed agricultural areas applied supplementary agricultural strategies in surrounding areas such as agroforestry, home gardens, and built soils. Differences in yield, labor, surplus, and resilience of agricultural forms helped shape differentiated political economies, hierarchies, and motivations that played a key role in the development of sociopolitical complexity in the islands.