Risk Perception and Its Impacts on Risk Governance

Summary and Keywords

Risk perception is an important component of risk governance, but it cannot and should not determine environmental policies. The reality is that people suffer and die as a result of false information or perception biases. It is particularly important to be aware of intuitive heuristics and common biases in making inferences from information in a situation where personal or institutional decisions have far-reaching consequences. The gap between risk assessment and risk perception is an important aspect of environmental policymaking. Communicators, risk managers, as well as representatives of the media, stakeholders, and the affected public should be well informed about the results of risk perception and risk response studies. They should be aware of typical patterns of information processing and reasoning when they engage in designing communication programs and risk management measures. At the same time, the potential recipients of information should be cognizant of the major psychological and social mechanisms of perception as a means to avoid painful errors.

To reach this goal of mutual enlightenment, it is crucial to understand the mechanisms and processes of how people perceive risks (with emphasis on environmental risks) and how they behave on the basis of their perceptions. Based on the insights from cognitive psychology, social psychology, micro-sociology, and behavioral studies, one can distill some basic lessons for risk governance that reflect universal characteristics of perception and that can be taken for granted in many different cultures and risk contexts.

This task of mutual enlightenment on the basis of evidence-based research and investigations is constrained by complexity, uncertainty, and ambiguity in describing, assessing, and analyzing risks, in particular environmental risks. The idea that the “truth” needs to be framed in a way that the targeted audience understands the message is far too simple. In a stochastic and nonlinear understanding of (environmental) risk there are always several (scientifically) legitimate ways of representing scientific insights and causal inferences. Much knowledge in risk and disaster assessment is based on incomplete models, simplified simulations, and expert judgments with a high degree of uncertainty and ambiguity. The juxtaposition of scientific truth, on one hand, and erroneous risk perception, on the other hand, does not reflect the real situation and lends itself to a vision of expertocracy that is neither functionally correct nor democratically justified. The main challenge is to initiate a dialogue that incorporates the limits and uncertainties of scientific knowledge and also starts a learning process by which obvious misperceptions are corrected and the legitimate corridor of interpretation is jointly defined.

In essence, expert opinion and lay perception need to be perceived as complementing, rather than competing with each other. The very essence of responsible action is to make viable and morally justified decisions in the face of uncertainty based on a range of scientifically legitimate expert assessments. These assessments have to be embedded into the context of criteria for acceptable risks, trade-offs between risks to humans and ecosystems, fair risk and benefit distribution, and precautionary measures. These criteria most precisely reflect the main points of lay perception. For a rational politics of risk, it is, therefore, imperative to collect both ethically justifiable evaluation criteria and standards and the best available systematic knowledge that inform us about the performance of each risk source or disaster-reduction option according to criteria that have been identified and approved in a legitimate due process. Ultimately, decisions on acceptable risks have to be based on a subjective mix of factual evidence, attitudes toward uncertainties, and moral standards.

Keywords: risk perception, risk governance, cognitive biases, semantic images of risk, cultural theory of risk, risk communication, risk management, policy implications of risk perception

Introduction

Within the social sciences the term risk perception has a long tradition (Slovic, 1987). The term denotes the process of collecting, selecting, and interpreting signals about uncertain impacts of events, activities, or technologies (Renn, 2008, pp. 93ff.; Scholz, 2011, p. 179; Slovic, 1987). These signals can refer to direct experience (e.g., witnessing a flood) or indirect experience (e.g., information from others, such as reading about a technical disaster or a heightened level of pollution in the newspaper). Yet risks cannot be “perceived” in the sense of being taken up by the human senses, as are images of real phenomena. Mental models and other psychological mechanisms that individuals use to judge risks (such as cognitive heuristics and risk images) are internalized through social and cultural learning and constantly moderated (reinforced, modified, amplified, or attenuated) by media reports, peer influences, and other communication processes (Morgan, Fischhoff, Bostrom, & Atman, 2001; Zinn & Taylor-Gooby, 2006). Perceptions may differ depending on the type of risk, the risk context, the personality of the individual, and the social context. Various factors such as knowledge, experience, values, attitudes, and emotions influence the thinking and judgment of individuals about the seriousness and acceptability of risks. Perceptions also play a major role in motivating individuals to take action to avoid, mitigate, adapt to, or even ignore a risk. Different schools of psychological risk perception research have worked on shedding more light on the rationales and structures of individual and cultural patterns of risk perception.

Four different approaches to studying risk perception dominate the literature on this subject and are summarized here (cf. Renn, 2008, pp. 98ff.). The purpose of this article is, first, to provide an overview of the different “schools of thought” and their contributions to understanding the psychological and social drivers for perceiving and evaluating environmental risks. The second part draws some lessons from the insights of risk perception studies for normative advice on risk governance and risk communication.

Attention and Selection Filters: The First Step of Information Processing About Risks

Today’s society provides an abundance of information, much more than any individual can digest (OECD, 2002; Renn, 2014, pp. 178ff.). Most information to which the average person is exposed will be ignored. This is not a malicious act but a sheer necessity given the limited amount of information a person can process in a given time. Once information has been received, intuitive mechanisms help the receiver to draw inferences from it. One example of an intuitive strategy for evaluating risks is the mini-max rule for making decisions: choose the option that minimizes the worst possible outcome. Many people tend to apply this rule when making a judgment about the acceptability of a new, unfamiliar technology. The rule also implies that people try to minimize postdecisional regret by choosing the option that has the least potential for a disaster, regardless of probabilities. The use of this rule is not irrational; it has evolved over the long course of human development as a fairly successful strategy for coping with uncertainty (better safe than sorry) (Renn, 2008, p. 105; Wynne, 1984).
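To make the rule concrete, here is a minimal sketch (in Python, with invented probabilities and payoffs) contrasting a mini-max choice with an expected-value choice: the mini-max rule selects the option whose worst outcome is least bad, no matter how improbable that outcome is.

```python
# Mini-max versus expected value, with hypothetical lotteries:
# each option is a list of (probability, payoff) pairs.
options = {
    "new technology": [(0.999, 10), (0.001, -1000)],  # rare disaster
    "status quo":     [(0.9, 2), (0.1, -5)],          # mild downside
}

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

def worst_case(lottery):
    return min(x for _, x in lottery)

# Expected-value reasoning favors the new technology (8.99 vs. 1.3) ...
print(max(options, key=lambda o: expected_value(options[o])))
# ... while the mini-max rule avoids the option with the worst possible
# outcome (-1000), ignoring its tiny probability, and picks the status quo.
print(max(options, key=lambda o: worst_case(options[o])))
```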

However, experience with most contemporary environmental risks to which society is exposed is limited. Individuals rely on information from third parties in order to come to a judgment about the seriousness and acceptability of a given environmental risk. Ordinary consumers do not have a lab in their basement to test emissions from technologies or to verify professionals’ claims of safety or imminent threats. They have hardly any choice but to believe one side or another in a risk debate. Reliance on third-party information is a typical pattern of environmental risk perception. Risk perception is not so much a product of experience or personal evidence as it is a result of social communication (Luhmann, 1986, 1993, 1997).

The main criteria for selecting relevant information about technological risks are ability and motivation (Chaiken & Stangor, 1987). Ability refers to the physical possibility that the receiver can follow the message without distraction; motivation refers to the readiness and interest of the receiver to process messages. If information about uncertain consequences—that could be a risk or a benefit—has passed the initial selection filters, people will draw inferences from the information and compare the content with previously held images and memories. They will evaluate the significance, truthfulness, and personal relevance of the information, construct new beliefs, and form an opinion or an attitude toward the risk and/or its source.

Cognitive Heuristics: Using Rules of Thumb

Once information has been received, common-sense mechanisms process the information and help the receiver to draw inferences. These processes are called intuitive heuristics. They are particularly important for environmental risk perception since they relate to the mechanisms of processing probabilistic information (Breakwell, 2007, pp. 79–82; Kahneman, 2011, pp. 109–197; Thaler & Sunstein, 2008, pp. 31–60). Early psychological studies focused on personal preferences for different compositions of probabilities and outcomes (risk aversion, risk neutrality, and risk proneness) and attempted to explain why individuals do not base their risk judgments on expected values (i.e., the product of probability and magnitude of an adverse effect) (Pollatsek & Tversky, 1970). One of the interesting results of these investigations was the discovery of systematic patterns of probabilistic reasoning. People tend to be risk-averse when outcomes are framed as gains and risk-prone when the same outcomes are framed as losses (Kahneman & Tversky, 1979; Tversky & Kahneman, 1981). Many people balance their risk-taking behavior by pursuing a risk strategy that does not maximize their benefits but ensures a satisfactory payoff and the avoidance of major disasters. That people use rules of thumb rather than calculating expected values has been the main outcome of many empirical studies of how people perceive risks (Boholm, 1998; Breakwell, 2007, pp. 109ff.; Covello, 1983; Sunstein, 2002, pp. 37ff.).
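A worked illustration of this pattern, using the prospect-theory value function of Kahneman and Tversky (1979) with the commonly cited parameter estimates (alpha = beta = 0.88, lambda = 2.25); the stakes are invented for the example.

```python
# Prospect-theory value function: concave for gains, convex and steeper
# for losses (parameters are commonly cited estimates, used here purely
# for illustration).
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# Sure gain of 500 versus a 50% chance of gaining 1000:
print(value(500) > 0.5 * value(1000))    # True: risk-averse for gains

# Sure loss of 500 versus a 50% chance of losing 1000:
print(0.5 * value(-1000) > value(-500))  # True: risk-prone for losses
```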

Exposure and Hazard Perception

One important rule of thumb is to rate risks by exposure and hazard rather than by the probability of harm (Renn, Burns, Kasperson, Kasperson, & Slovic, 1992). Most people treat the potential for harm, expressed in the number of exposed individuals or in the seriousness of the hazard (energy released or degree of toxicity), as the prime or sometimes even the only relevant indicator of risk, underestimating or ignoring the probability that this hazardous potential will actually cause harm. If people assume an exposure above zero or believe that an agent is present that can cause harm, such as cancer, they normally conclude that any disease from which a person exposed to this risk suffers must have been caused by this agent (Kraus, Malmfors, & Slovic, 1992). Such assumptions imply that any exposure is regarded as negative, irrespective of dose. For most people it is not relevant whether the dose of the substance or agent was low or high. Once a risk source is associated with emissions such as ionizing radiation, electromagnetic fields, chemicals in the air, or water pollutants, most people tend to express high concern about this risk even if the concentration is below the threshold of causing harm. One example is the use of phthalates in toys. Analysts are aware that these substances are potentially carcinogenic, but given the known exposure and the dose-response functions, there is little evidence that harmful effects can be expected from exposure to these toys. However, the mere fact that a potentially harmful substance is incorporated in children’s toys has incited a fierce debate about the tolerability of such an ingredient (Klinke & Renn, 2010). Many nongovernmental organizations (NGOs) and consumer organizations have opted for a total ban of this material in toys and successfully lobbied the EU Commission to follow their recommendations (Renn, 2005).
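A hypothetical numerical contrast makes the heuristic visible: the two risk sources below have identical expected harm, yet a ranking driven by hazard magnitude alone, ignoring probability, puts the catastrophic source far ahead.

```python
# Two hypothetical risk sources with the same expected harm (1.0).
risks = {
    "plant accident":      {"probability": 1e-6, "harm": 1_000_000},
    "household accidents": {"probability": 1e-1, "harm": 10},
}

# Expert-style metric: probability times magnitude of harm.
expected = {name: r["probability"] * r["harm"] for name, r in risks.items()}
print(expected)  # both 1.0

# Heuristic ranking by hazard magnitude only, as described above.
print(sorted(risks, key=lambda n: risks[n]["harm"], reverse=True))
```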

Harmonization of Risk and Benefit Estimates

A second rule of thumb refers to the perception of risks and benefits. In most cases, one would assume that an activity leading to high benefits is also associated with high risks (and vice versa). Empirical studies on how people process information about risks and benefits show the opposite pattern: perceived risks and perceived benefits are negatively correlated. For example, the intake of pharmaceuticals or dietary supplements is perceived as offering high benefits and low risks (Alhakami & Slovic, 1994). One explanation for this negative correlation between perceptions of risks and benefits may be that respondents calculate a crude net balance between risks and benefits. If the balance is positive they rate the risks as low and the benefits as high, while a negative balance results in a high perception of risks and a low perception of benefits. This adjustment process avoids the painful inner conflict of making trade-offs between risks and benefits (De Jonge, van Kleef, Frewer, & Renn, 2007).
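The net-balance explanation can be sketched in a few lines: if ratings of benefit and risk are both read off a single latent balance judgment, a negative correlation between the two follows mechanically. All numbers are invented for illustration.

```python
import random

random.seed(1)
for item in ["pharmaceuticals", "dietary supplements", "pesticides"]:
    balance = random.uniform(-1, 1)       # latent net judgment (invented)
    benefit = 5 + 4 * balance             # positive balance -> high benefit
    risk = 5 - 4 * balance                # ... and, by construction, low risk
    print(f"{item}: benefit={benefit:.1f}, risk={risk:.1f}")
```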

Understanding of Uncertainty

A third rule of thumb that deviates from the experts’ perspective on risk concerns the public understanding of uncertainty. The distinction that experts draw when conducting a probabilistic risk assessment (PRA) between a probability distribution and the associated degrees of remaining uncertainty (expressed in confidence intervals or in other forms of uncertainty characterization) is not echoed in risk perception studies (Frewer et al., 2002; Sparks & Shepherd, 1994). There is now a basic understanding among most people that the deterministic worldview of judging a situation as either safe or unsafe cannot be sustained and needs to be replaced by a mental model that differentiates among degrees of certainty. The open space between safe and unsafe is, however, perceived as an indication of bad or incomplete science rather than an indication of (genuine) probability distributions. The more people associate uncertainties with a specific ambient risk, the more they believe that society needs more science and research to reduce these uncertainties (De Jonge et al., 2007; Frewer et al., 2002; Sparks & Shepherd, 1994). For example, in the case of genetically modified organisms (GMOs) in agriculture, most people are unwilling to accept the risk associated with the consumption of GMOs unless they are convinced that there is little or no uncertainty about the potential side effects.
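The distinction at stake can be shown with a minimal example: instead of a safe/unsafe verdict, a probabilistic assessment reports a best estimate together with an explicit uncertainty band; here, an approximate 95% confidence interval for an annual risk rate computed from hypothetical surveillance data.

```python
import math

cases, person_years = 12, 100_000        # hypothetical surveillance data
rate = cases / person_years              # best estimate of annual risk
se = math.sqrt(cases) / person_years     # Poisson-based standard error
low, high = rate - 1.96 * se, rate + 1.96 * se

print(f"estimated annual risk: {rate:.5f}")
print(f"approx. 95% confidence interval: ({low:.5f}, {high:.5f})")
```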

These rules of thumb and examples show that deviations from expert advice are less a product of ignorance or irrationality than an indication of one or several intervening context variables that often make perfect sense if seen in the light of the original context in which the individual decision-maker has learned to use them (Brehmer, 1987; Gigerenzer, 1991, 2000; Lee, 1981). However, there is also ample evidence for clear violations of mathematical or logical rules in common-sense reasoning when it comes to processing probabilistic information. Many studies have identified biases in people’s ability to draw inferences from probabilistic information (Festinger, 1957; Kahneman & Tversky, 1979; Ross, 1977; Simon, 1976, 1987; reviews in Boholm, 1998; Breakwell, 2007, pp. 78ff.; Covello, 1983; Jungermann, Pfister, & Fischer, 2005; Kahneman, 2011). These biases are summarized in Table 1. Risk managers should be aware of them because they shape public risk perception and may be one of the underlying causes for the discrepancy between lay and expert judgments of risk.

Table 1 Intuitive Biases of Risk Perception

Availability: Events that come immediately to people’s minds are rated as more probable than events that are of less personal importance. Example: The risks of nuclear energy are rated as more frequent than risk assessment studies have calculated, since most people have vivid memories of the nuclear disasters in Chernobyl and Fukushima.

Anchoring effect: Probabilities are estimated according to the plausibility of contextual links between cause and effect, not according to knowledge about statistical frequencies or distributions (people “anchor” on information that is of personal significance to them). Example: Toxic substances such as arsenic or mercury tend to be overrated in their potential for harm because most people associate these substances with crime and murder.

Personal experience: Singular events experienced in person or associated with the properties of an event are regarded as more typical than information based on frequencies of occurrence. Example: People who have experienced a lightning strike estimate the frequency of damage by lightning much higher than those who have not had such an experience.

Avoidance of cognitive dissonance: Information that challenges perceived probabilities that are already part of a belief system will be either ignored or downplayed. Example: People who believe that non-ionizing radiation from cellular phones may cause cancer are more likely to search online for sources that confirm their view than people who do not share this belief.

Source: Renn (2008, p. 103).

Evolutionary Strategies for Coping with Risks

The psychometric paradigm conceptualizes risks as subjective estimates of individual fears or expectations about unwanted consequences. Such individual strategies to estimate and handle technological risks have been thoroughly researched in psychology and social psychology (Boholm, 1998; Breakwell, 2007; Knight & Warland, 2005; McDaniels, Axelrod, Cavanagh, & Slovic, 1997; Rohrmann & Renn, 2000; Sjöberg, 1999, 2000b; Slovic, 1987; Slovic, Fischhoff, & Lichtenstein, 1986; Townsend, Clarke, & Travis, 2004). People do not use completely irrational strategies to assess and evaluate information about risks; most of the time they follow relatively consistent patterns of perception. These patterns can be traced back to certain evolutionary traits of hazard deterrence (Marks & Nesse, 1994; Renn, 2014, pp. 248ff.). In dangerous situations, humans rely on four basic reaction strategies:

  • Flight

  • Fight

  • Play dead

  • Experimentation (on the basis of trial and error) or subordination.

These reaction patterns can be visualized by imagining how our ancestors reacted to a predator in the wilderness. In a situation of acute threat, such as coming up against a lion, the victim would not have much time—it would not make sense to conduct a probability analysis as to whether the lion is hungry or not. At this moment, a person who is threatened has only three possibilities: first, to flee and hope to be faster than the lion; second, to believe he or she is strong enough and fight; or, third, to play dead, believing the lion could be duped (Bracha, 2004, p. 679). In this case, the last option—namely, experimentation—is only open to the lion; subordination would not help.

Cultural Patterns and Qualitative Context Variables

In the course of cultural evolution, these basic patterns of perception were increasingly supplemented by cultural patterns. These can be described by so-called qualitative evaluation characteristics and, in the psychometric school, are measured using numerical scaling techniques. This approach to risk research was originally developed by the Oregon Group (see Fischhoff, Slovic, Lichtenstein, Read, & Combs, 1978; Slovic, 1992; Slovic, Fischhoff, & Lichtenstein, 1980, 1986).

Table 2 List of Important Qualitative Risk Characteristics

Personal control: increases risk tolerance
Institutional control: depends upon confidence in institutional performance
Voluntariness: increases risk tolerance
Familiarity: increases risk tolerance
Dread: decreases risk tolerance
Inequitable distribution of risks and benefits: depends upon individual utility; strong social incentive for rejecting risks
Artificiality of risk source: amplifies attention to risk; often decreases risk tolerance
Blame: increases quest for social and political responses

Source: Adapted from Renn (1990).

Psychometric methods provide another empirically driven explanation of why individuals do not base their risk judgments on subjectively expected utilities. The research revealed several contextual characteristics that individual decision-makers use when assessing and evaluating risks (Renn, Schweizer, Dreyer, & Klinke, 2007, p. 78; Rohrmann & Renn, 2000).
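Purely as an illustration of how such characteristics might be operationalized, the sketch below scores a hypothetical risk source on four of the Table 2 characteristics and reverse-codes those that increase risk tolerance. Actual psychometric studies derive this structure from survey data (typically via factor analysis) rather than from a fixed formula; all ratings and the aggregation rule here are invented.

```python
# Hypothetical respondent ratings on 1-7 scales.
ratings = {"dread": 6, "personal control": 2,
           "voluntariness": 1, "familiarity": 3}

# Direction of influence from Table 2: +1 decreases risk tolerance
# (raises concern), -1 increases risk tolerance (lowers concern).
direction = {"dread": +1, "personal control": -1,
             "voluntariness": -1, "familiarity": -1}

concern = 0
for characteristic, score in ratings.items():
    if direction[characteristic] == +1:
        concern += score          # e.g., high dread -> more concern
    else:
        concern += 8 - score      # reverse-code on a 1-7 scale
print(concern)                    # higher totals indicate lower tolerance
```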

Qualitative characteristics of risk can, for example, be applied to the perception of technologies and their environmental impacts (OECD, 2002). First, large-scale technologies such as nuclear power plants, chemical production facilities, and waste disposal installations are associated with negative risk characteristics, such as dread, lack of personal control, and high catastrophic potential. The perception of technologically induced environmental risks is usually linked to an absence of personal control, and the preponderance of dread amplifies the impression of seriousness. These characteristics make people even more concerned about the negative impacts than is warranted by the predicted physical impacts alone. Second, the beliefs associated with the risk source (e.g., industry) center around greed, profit-seeking, and alleged disrespect for public health. Third, the possibility of users of technologies and neighbors of technical facilities being exposed to risks without their consent touches upon serious equity concerns if susceptibility to these risks varies considerably among individuals or rests on probabilistic balancing. An inequitable distribution of risks and benefits makes a risk appear more severe and less acceptable. Finally, the possibility of catastrophic accidents and the sensational press coverage of such accidents invoke negative emotions and may even lead to stigmatization. Nuclear energy and genetically modified organisms already appear to be associated with strong stigma effects (Renn, 2005).

Another option of grouping and classifying contextual variables is to construct typical patterns—so-called semantic images—which serve as orientations for individuals. This topic is explained and discussed in the following section.

Semantic Images: Constructing One’s Own Reality

Research on risk perception has identified a range of perception patterns that constitute discrete manifestations of key risk characteristics depending upon the context in which the risk is embedded. These are called semantic risk images (Jaeger, Renn, Rosa, & Webler, 2001, pp. 105ff.; Renn, 2004, 2014, pp. 265ff.; Renn et al., 2007, pp. 80ff.; Streffer et al., 2003). Although these semantic images have not been directly tested in empirical experiments or surveys, they have been deduced from statistical processing of data from studies of qualitative characteristics. In general, five distinct semantic images have been identified (Renn, 1990) (see Table 3). In addition to these five images, additional images of risk exist for habitual and lifestyle risks that are, however, less clear in their composition and structure.

Table 3 The Five Semantic Images of Risk Perception

1. Emerging danger (fatal threat): artificial risk source; large catastrophic potential; inequitable risk–benefit distribution; possibility of assigning personal or institutional blame; perception of randomness as a threat.

2. Stroke of fate: natural risk source; belief in cycles (not perceived as a random event); belief in personal control (can be mastered by oneself); accessible through human senses.

3. Personal thrill (desired risks): personal control over degree of risk; personal skills necessary to master danger; voluntary activity; noncatastrophic consequences.

4. Gamble: confined to monetary gains and losses; orientation toward variance of distribution rather than expected value; asymmetry between risks and gains; dominance of probabilistic thinking.

5. Indicator of insidious danger (slow killer): (artificial) ingredient in food, water, or air; delayed, noncatastrophic effects; contingent upon information rather than experience; quest for deterministic risk management; strong incentive for blame.

Source: Adapted from Renn (1990).

The semantic images allow individuals to order risks in general, and environmental risks in particular, on the basis of a few salient characteristics. Reducing complexity by creating classes of similar phenomena is certainly a major strategy for coping with information overload and uncertainty. The five semantic images are powerful guides that help individuals to navigate through an abundance of often-contradictory information. They provide an efficient method of balancing the time for collecting and processing information with the personal need for orientation and attitude formation.

Some risk sources evoke more than one semantic image. Combinations of the insidious and emerging danger images are particularly interesting. Most risks from technological threats fall into the category of emerging danger, whereas emissions and other environmental risks are typical of the category of insidious dangers. This has far-reaching implications. Most risks belonging to the category of insidious dangers are regarded as potentially harmful substances that defy human senses and “poison” people without their knowledge. Risks associated with air pollutants, water impurities, and radiation are undetectable to the person exposed; they require warnings by regulators or scientists. There is a widely shared belief that toxicity depends less on the dose than on the characteristics of the substance, and hence that a rigid regulatory approach is necessary when it comes to controlling environmental pollutants (Kraus et al., 1992).

An Integrative Model of Risk Perception

Based on the review of psychological, social, and cultural factors that shape individual and social risk perceptions, Rohrmann and Renn have attempted to develop a structured framework that provides an integrative and systematic perspective on risk perception. Figure 1 illustrates this perspective by pointing toward four distinct context levels (Renn & Rohrmann, 2000; inspired by the generic model in Breakwell, 1994). Each level is further divided into two subsections, representing individual and collective manifestations of risk perceptions, and is embedded in the next-higher level to highlight the mutual contingencies and interdependencies among and between individual, social, and cultural variables.

Figure 1 Four context levels of risk perception. (Adapted from Renn & Rohrmann, 2000)

Level 1: Heuristics of Information Processing

The first level includes the collective and individual heuristics that individuals apply during the process of forming judgments about risks. These heuristics are independent of the nature of the risk in question and of the personal beliefs, emotions, or other conscious perception patterns of the individual. Heuristics represent common-sense reasoning strategies that have evolved over the course of biological and cultural evolution. They may differ between cultures, but most evidence in this field of psychological research shows a surprising degree of universality in the application of these heuristics across different cultures. Improved knowledge and expertise in logical reasoning and inferential statistics, as well as a basic awareness of these heuristics, can help individuals to correct their intuitive judgments or to apply these heuristics only to situations where they are appropriate. Recent research suggests that these heuristics are more appropriate for problem-solving in many everyday situations than previously assumed (Gigerenzer, 2013; Gigerenzer & Selten, 2001). Regardless of the normative value that these heuristics may offer, they represent primary mechanisms of selecting, memorizing, and processing signals from the outside world and preshape judgments about the seriousness of the risk in question.

Level 2: Cognitive and Affective Factors

The second level refers to the cognitive and affective factors that influence the perception of specific properties of environmental risks. Cognition about a risk source—what people believe to be true about a risk—governs the attribution of qualitative characteristics (psychometric variables) to specific risks (e.g., dread or personal control options) and determines the degree to which these qualitative risk characteristics influence the perceived seriousness of risk and the judgment about acceptability. It is interesting to note that different cognitive processes can lead to the same attribution result. In an empirical study, Rosa, Matsuda, and Kleinhesselink (2000) were able to show that Japanese and U.S. samples of respondents assigned identical numerical equivalents when characterizing the catastrophic potential of different hazardous technologies, although they had different mental models about what constitutes catastrophic potential. In the Japanese sample, the arousal of catastrophic images was associated with the degree of individual knowledge of and familiarity with the respective risk, whereas U.S. respondents drew on collective scientific experience and knowledge when framing their estimates of catastrophic potential.

The fact that individuals, within their own culture or by their own agency, are able to choose between different cognitive routes justifies the distinction between the two primary levels: cognitive factors and heuristics.

While cognitive factors have been extensively explored, emotions were neglected in risk perception research for a long time. More recently, however, psychologists have discovered that affect and emotions play an important role in people’s decision processes (Loewenstein, Weber, Hsee, & Welch, 2001; Slovic, Finucane, Peters, & MacGregor, 2002). People’s feelings about what is good or bad in terms of the causes and consequences of risks color their beliefs about the risk and, in addition, influence their process of balancing potential benefits and risks. Affective factors are particularly relevant when individuals face a decision that involves a difficult trade-off between attributes, or where there is interpretative ambiguity as to what constitutes a “right” answer. In these cases, people often appear to resolve problems by focusing on those cues that send the strongest affective signals (see also Kunreuther, 2000; Peters, Burraston, & Mertz, 2004). On the collective level, researchers have identified stigmatization effects referring to risk sources or activities that stimulate highly negative emotional responses (Slovic et al., 2002). Examples are nuclear waste repositories and BSE (“mad cow disease”) in beef.

Empirical studies regarding technological hazards show that emotional and cognitive factors are mutually related (Zwick & Renn, 1998). It is not yet clear whether cognitive beliefs trigger the respective emotional responses or whether emotional impulses act as heuristic strategies to select or develop arguments supporting one’s emotional stance.

Level 3: Social and Political Institutions

The third level refers to the social and political institutions that individuals and groups associate with either the cause of risk or the risk itself. Most studies on this level focus on trust in institutions, personal and social value commitments, organizational constraints, social and political structures, and socio-economic status. One important factor in evaluating risk is the perception of fairness and justice in allocating benefits and risks to different individuals and social groups (Linnerooth-Bayer & Fitzgerald, 1996). Theoretical approaches, such as reflexive modernization or the social arena metaphor, provide plausible explanations of why the debate on equity and justice has become so relevant for risk perception (Knight & Warland, 2005; Rosa, Renn, & McCright, 2014). Other studies have chosen political and social organizations and their strategies of communicating with other organizations and society at large as the prime focus of their attention (Clarke, 1989; Shubik, 1991).

The media, social reference groups, and organizations also shape individual and societal risk experience. Press coverage appears to contribute substantially to a person’s perception of risk, particularly if the person lacks personal experience with the risk and is unable to verify claims of risks or benefits from his or her own experience (Dunwoody, 1992). In contrast to popular belief, however, there is no evidence that the media create opinions about risks or even determine risk perceptions. Studies on media reception rather suggest that people select elements from media reports and use their own frame of reference to create understanding and meaning. Most people reconfirm existing attitudes when reading or viewing media reports (Peters, 1991).

Level 4: Cultural Background

The last level refers to cultural factors that govern or co-determine many of the lower levels of influence. The most specific explanation for cultural differences about risk perceptions comes from the so-called “cultural theory of risk.” This theory claims that there are four or, in some studies, five prototypes of response to risk (Douglas & Wildavsky, 1982; Thompson, 1980; Thompson, Ellis, & Wildavsky, 1990). These prototypes refer to entrepreneurs, egalitarians, hierarchists, atomized individuals, and, as a separate category, hermits. Opinions on the validity of the cultural theory of risk differ widely. Slovic, Flynn, Mertz, Poumadere, & Mays (2000) regard this approach as useful in explaining some of the differences in risk perception; Sjöberg (2001) and Sjöberg et al. (2000a) found the variance explained by cultural prototypes to be so low that they rejected the whole concept. Rohrmann (2000) also expressed a skeptical view, mainly because of methodological considerations about the empirical validity of the claims. All authors agree, however, that specific culture-based preferences and biases are, indeed, important factors in risk perception. The disagreement is about the relevance of the postulated four or five prototypes within the realm of cultural factors.

In addition to the theory of cultural prototypes, two sociological concepts provide plausible explanations for the link between macro-sociological developments and risk perceptions. The theory of reflexive modernization claims that individualization, pluralization, and globalization have contributed to the decline of legitimacy with respect to risk professionals and managers (Marshall, 1999; Mythen, 2005; Renn, 2014, pp. 286ff.; Rosa et al., 2014, pp. 69ff.). Due to this loss of confidence in private and public institutions, people have become skeptical about the promises of modernity and evaluate the acceptability of risks according to the perceived interests and hidden agendas of those who want society to accept these risks (Beck, 1992). The second approach picks up the concept of social arenas, in which powerful groups struggle for resources in order to pursue their interests and objectives. Here, symbolic connotations constructed by these interest groups act as powerful instruments for shaping new beliefs or emotions about the risk or the source of risk (Jaeger et al., 2001, pp. 175f.).

All four levels of influence are relevant for gaining a better and more accurate understanding of risk perception. In spite of many open questions and ambiguities in risk perception research, one conclusion is beyond any doubt: abstracting the risk concept to a rigid formula and reducing it to the two components “probability and consequences” does not match people’s intuitive thinking about what is important when making judgments about the acceptability of risks, in particular technological risks to human health and the environment (Mazur, 1987; Pidgeon, 1997; Wilkinson, 2001). Paul Slovic stated this point quite clearly:

To understand risk perception, one needs to study the psychological, social and cultural components and, in particular, their mutual interactions. The framework of social amplification may assist researchers and risk managers to forge such an integrative perspective on risk perception. Yet, a theory of risk perception that offers an integrative, as well as empirically valid, approach to understanding and explaining risk perception is still missing. (1992, p. 50)

Insights from Cross-Cultural Studies of Risk Perception

Several important insights result from this systematic comparison of risk perception studies performed in many different countries and cultures (Renn & Rohrmann, 2000). First, it is clear that the primary objective of reducing risks to a standard deemed acceptable by the vast majority of the affected people is just as universal as the desire for further economic and personal development. Even in China, where—according to the official version—individual freedom gives way to collective discipline, a clear desire for expansion of individual freedom and personal risk reduction can be seen (Bi, 2006). Despite this, the trust in collective risk management institutions is much higher there than in most industrialized countries (Rohrmann & Chen, 1999).

A second surprising insight is the increasing differentiation of globalizing social subcultures. The bankers, feminists, physicists, civil servants, or environmentalists in this world are becoming increasingly similar, while at the same time they have less and less in common with their fellow citizens. The new information media, the globalization of the economy, and the functionalization of jobs certainly play a major role. Although there are still relevant differences between the representatives of similar groups in different countries, these are less marked than the differences between the groups within a country (Rohrmann & Renn, 2000).

Naturally, social researchers have noted a number of key differences in the perception of risks: for example, the degree of apathy toward environmental risks varies between the countries studied just as much as the extent of the fear of natural risks as opposed to technical or artificial risks (Hofstede, 2001). Cultural factors are certainly one of the drivers of what people choose as the risks they are most afraid of—regardless of the level of the risk. Nevertheless, the degree of agreement between the countries is much higher than one would suppose on the basis of their very different cultures.

What do these findings mean for risk managers involved in governing or managing environmental risks? The stereotypical response has been that cultures differ and that common criteria to judge these risks are not in sight. But the available research disagrees. People of all cultures share a broad agreement about the primary principles of protecting human health, the natural environment, and human accomplishments. When governments or certain interest groups claim that these principles are not valid in their culture or need to be adjusted toward native cultural standards, one should be cautious. This might reflect a partial interest of those who make these claims rather than an empirically proven fact of cultural deviation or diversity.

Beyond this, the increasing professionalization and globalization of subcultures mean that people with similar basic attitudes and valuation backgrounds come together in international networks. There, too, so-called cultural differences are often disregarded for tactical reasons, without any actual empirical evidence that they exist. The values and standards represented in the various cultures are not so diverse factually or with respect to their normative justification that culture-specific criteria for evaluating environmental risks should be developed or taken into account.

This does not mean, of course, that every standard that exists in one country can be transferred to the wider world. Rather, what is important are the primary principles that form the common background for sensible and productive international agreements on environmental standards. Intercultural studies on understanding human responses to environmental risks continue to be important and to provide essential indicators of individual and social human behavior with regard to their natural environment.

Implications for Risk Governance

From a normative perspective, knowledge about individual perceptions of risk cannot be translated directly into environmental policies. If perceptions are based partially on biases or ignorance, it does not seem wise to use them as yardsticks for risk reduction. In addition, risk perceptions vary among individuals and groups: Whose perceptions should be used to make decisions on risk? At the same time, however, these perceptions reflect the real concerns of people and include the undesirable effects that “technical” analyses of risk often miss. It is true that laypeople’s views of risk are intuitive and less formal and precise than experts’ statements. But, as Paul Slovic observed, “their basic conceptualization of risk is much richer than that of experts and reflects legitimate concerns that are typically omitted from expert risk assessments” (Slovic, 1987, p. 282).

In fact, risk judgments indicate more than just the perception of riskiness. They reveal global views on what matters to people, on technological progress, on the meaning of nature, and on the fair distribution of chances, benefits, and risks. Facing this dilemma, how can risk perception studies contribute to improving risk policies? Pertinent benefits may be as follows (de Marchi, 2015; Fischhoff, 1985):

  • They can identify and explain public concerns associated with the risk source.

  • They can elucidate the context of the risk-taking situation.

  • They can enhance understanding of controversies about risk evaluation.

  • They can identify cultural meanings and associations linked with special risk arenas.

  • Based on this knowledge, they can be useful when articulating objectives of risk policies that go beyond risk minimization, such as fairness, procedural equity, and institutional trust.

  • They can indicate how to design procedures or policies that incorporate these cultural values into the decision-making process.

  • They can be useful in the design of programs for participation and joint decision-making.

  • They can provide criteria for evaluating risk management performance and organizational structures for monitoring and controlling risks.

Social science research on risk perception therefore has many implications for risk governance. Even if there are no recipes to be obtained from analytical studies about risk perception, studies on risk perception can provide some insights that might help policymakers improve their performance (Slovic, 2000; Slovic, Fischhoff, & Lichtenstein, 1982).

First, risk perception studies demonstrate what matters to people. In a democratic society, the concerns of people should be the guiding principle for collective action. Context and supporting circumstances of risk events or activities constitute significant concerns. These perception patterns are not just subjective preferences cobbled together: they stem from cultural evolution, are tried and trusted concepts in everyday life, and, in many cases, control our actions in much the same way as a universal reaction to the perception of danger. Their universal nature across all cultures allows collective focus on risk and provides a basis for communication (Renn, 2008, pp. 146–147; Rohrmann & Renn, 2000). From a rational standpoint, it would appear useful to systematically identify the various dimensions of intuitive risk perception (concerns assessment) and to measure, with the best available scientific methods, the extent to which these dimensions are met or violated. Many psychometric variables that matter to people are open to scientific study and scrutiny. In principle, the extent to which different technical options distribute risk across various social groups, the degree to which institutional control options exist, and the level of risk that can be accepted by way of voluntary agreement can all be measured using appropriate research tools. Risk perception studies help to diagnose these concerns. Scientific investigations can determine whether these dimensions are met or violated, and to what degree. This integration of risk expertise and public concerns is based on the view that the dimensions (concerns) of intuitive risk perception are legitimate elements of rational policy, but assessment of the various risk sources must follow robust scientific procedures on every dimension.

Second, designing policies about advancing, supporting, and regulating risks requires trade-offs (i.e., relative weights of the various target dimensions). Such trade-offs depend upon both context and the choice of dimension. Perception research offers important pointers concerning the selection of dimensions for focus. For example, the aspect of fairness that rates highly among people as an evaluation tool for the acceptability of risks plays a significant role in such trade-offs and in weighting the various dimensions. In their roles as risk assessors, experts have no authority to select these dimensions or to specify their relative importance. This is where formal methods such as risk–risk comparisons and other evaluation tools reach their limits. The multidimensionality of the intuitive risk model prevents risk policy from focusing one-sidedly on the minimization of expected impacts. A breach of the minimization requirement, however, implies acceptance of greater damage than is absolutely necessary (although this can be justified in individual cases depending upon the risk situation). The main point here is that trade-offs imply value judgments—and these judgments need to be politically legitimized.
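A small, entirely hypothetical calculation shows why such weights are value judgments rather than technical parameters: the same two options, scored on three dimensions, come out in opposite order under an “expert” weight set stressing expected harm reduction and a “public” weight set stressing fairness.

```python
# Hypothetical option scores (0-10) on three evaluation dimensions.
options = {
    "option A": {"harm reduction": 9, "fairness": 3, "controllability": 4},
    "option B": {"harm reduction": 6, "fairness": 8, "controllability": 7},
}
# Two invented weight sets reflecting different value priorities.
weight_sets = {
    "expert": {"harm reduction": 0.7, "fairness": 0.1, "controllability": 0.2},
    "public": {"harm reduction": 0.3, "fairness": 0.4, "controllability": 0.3},
}

for name, w in weight_sets.items():
    scores = {o: sum(w[d] * v for d, v in dims.items())
              for o, dims in options.items()}
    # The expert weights favor option A; the public weights favor option B.
    print(name, max(scores, key=scores.get), scores)
```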

Lessons for Risk Communication

Risk perception studies are crucial for designing and evaluating technological risk communication programs (Besley & McComas, 2014). Without knowing the concerns of the targeted audience, communication will not succeed. In addition, risk perception studies help communicators to identify points of conflict or disbelief. They can diagnose lack of trust or credibility and suggest more effective ways of restoring trust once trust has been lost or eroded (Engdahl & Lidskog, 2014). The insights from risk perception research will not guarantee the success of risk communication, but they can certainly assist risk communicators in designing more effective and efficient communication programs.

Research on the risk communication process indicates the need for trust and credibility between communicators and their audience. It also reveals a continuous trend toward distrust and suspicion on the side of those who are expected to bear the risks. The credibility of a communication source is closely linked to its perceived past performance record and its openness to public demands (Earle & Cvetkovich, 1999; Löfstedt, 2005). The more institutions comply with the expectations of the public, the more confidence people have in these institutions and the more trust they assign to their messages. Communication efforts alone may successfully correct excessive aspirations or mitigate misperceptions of the actual performance record, but it is not very likely that communication can compensate for poor performance (Löfstedt, 2003).

Furthermore, in a climate of general distrust toward social organizations, it is helpful to accept countervailing powers and public control and to provide public access to all relevant information. On the basis of these structural opportunities for public involvement and control, specific communication programs can be designed, which include elements of information and, possibly, education and persuasion. Risk communication—whether organized as providing information to the public, as a mutual learning process, or as an attempt to reconcile conflicts about risks—is therefore a necessary step in bridging the gap between the conclusions drawn from quantitative risk analysis and inferences based on risk perception.

That said, the goal of risk communication should not be to induce people to accept whatever the communicator thinks is best for them. The ideal communication program envisions an active citizen who processes all the available information to form a well-balanced judgment in accordance with the factual evidence, the arguments of all sides, and his or her own interests and needs (Ad-hoc Commission, 2003). The ultimate goal of risk communication is to reconcile expertise, interests, and public preferences across the cultures within a society and between societies. This goal cannot be achieved without accepting risk perception as a legitimate expression of people’s view of the world and their vision of a “good life” (Ruddat, 2009).

Overcoming the Drift Between Experts’ and Laypeople’s Judgments

The analysis of environmental risk perception has shown major gaps between experts and laypeople in the evaluation of the severity of environmental risks as well as in the urgency of taking protective action. However, to overcome the old assumption that laypersons and experts face each other like two monolithic blocks, one needs a new angle for looking at risk perception and risk assessment. If everything is based on averaged lay and expert findings—something some social science studies tend to do—we will indeed find a deep rift between these two groups. But this rift obscures the fact that within expert circles and within the great mass of laypeople there is an enormous variety of opinions and assessments (Smith, 2013, p. 17). Every lay perception has its opposite; every expert has a counter-expert. One will get different assessments depending on the make-up of a particular group of experts, even if the final decision were left solely to the experts. Ultimately, the only answer would be a scientific supreme court with the authority to decide which expert is right. Some of the more constructivist science theorists would have it that expert opinion is interchangeable. This is not so, but neither does expert opinion provide unequivocal results, and certainly no unequivocal instructions. This is particularly true in the risk debate, where chance rules over many outcomes. In the same way as the number 0 can occur twice in a row during a game of roulette without any manipulation of the ball or the wheel (although there may have been), two large-scale reactor accidents are neither positive nor negative proof of a risk analysis that predicts such an accident happening once in a thousand or even in a million years. It is impossible to predict individual outcomes when only the odds are known, and knowledge alone is a limited resource for the determination of priorities. Varying levels of knowledge are in competition with each other, and determining which among all the competing claims represents the truth is ultimately not feasible. It is thus impossible to expect an unequivocal expert answer to an urgent question of risk, even if we were prepared to use sound science as a guideline for general risk policies (Renn, Klinke, & van Asselt, 2011).
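Both examples in this paragraph can be given rough numbers; the figures below (a fair single-zero roulette wheel, a core-melt frequency of once in 10,000 reactor-years, and a fleet of 450 reactors operating for 40 years) are assumptions chosen purely for illustration.

```python
from math import comb

# Fair single-zero roulette, independent spins: two zeros in a row.
print((1 / 37) ** 2)        # ~0.00073: rare, yet no proof of rigging

# Assumed core-melt frequency of 1e-4 per reactor-year over
# 450 reactors x 40 years = 18,000 reactor-years.
n, p = 450 * 40, 1e-4
p0 = (1 - p) ** n                          # probability of no accident
p1 = comb(n, 1) * p * (1 - p) ** (n - 1)   # probability of exactly one
print(1 - p0 - p1)          # ~0.54: two or more accidents are entirely
                            # consistent with the low assessed frequency
```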

At the same time, it is equally unhelpful to base the determination of priorities in the politics of risk on a generalized lay perception. The differences in risk perception among laypeople are as extensive as among experts, and we find the same problem in deciding which lay opinion will be the dominant one when it comes to judging risk. When people talk about risk, they are driven by personal or professional interest. If truth is replaced by interest, however, bargaining power is going to determine what is regarded as truth. This kind of replacement is a breeding ground for fundamentalism, with one side wanting to abolish any possible risk to the environment (and, in doing so, jeopardizing the economy), while the other side wants to redefine risk as opportunity without taking the ecological risk into account at all. Between those two extreme positions there is hardly any room for compromise other than the strategic kind where the argument ends up with a philosophy of: “You give me your risk and I will give you mine.”

What is needed to establish the necessary priorities? How can we break this deadlock in determining the rationality of the politics of risk? Is it possible to integrate lay and expert findings? Can we even legitimize the politics of risk today (Renn, 2008, pp. 64ff.)?

  1. We have to let go of the postmodern notion that knowledge is an arbitrary social construct and that there are no overriding criteria for truth or quality. The reality is that people suffer and die because of false information. It is particularly important to be quite clear about the limits of legitimate information in a situation of global environmental risks, where environmental decisions have far-reaching consequences and, at the same time, we are aware that our knowledge is severely limited, in particular about secondary and tertiary impacts. It is precisely the fuzziness of environmental risks that demands that we set clear boundaries between what scientific evidence can support and what appears to be nonsense or absurdity. If we have no clear boundaries, there will be room for pseudo-scientific legitimization of practically any fear of risk, no matter how far-fetched. We now have a number of methods and techniques at our disposal, such as meta-analysis or the Delphi technique (see the sketch after this list), which allow a fair overview of the available range of legitimate knowledge without needing to resort to a jury with overall legislative power (Gregory, 2004; Webler, Levine, Rakel, & Renn, 1991). The scientific establishment itself has to delimit the range of legitimate knowledge because it is bound by scientific rigor, has access to the appropriate conflict-resolution procedures, and is thus equipped to evaluate and resolve competing claims to truth.

  2. Expert opinion and lay perception need to be perceived as complementing rather than competing with each other. When we designed public participation exercises about issues of risk acceptance, we never came across lay participants who insisted on their own perception of acceptable risk being used as the standard for making a collective decision (Renn, 2004; Webler, Kastenholz, & Renn, 1995). On the contrary, the first question has usually been about the range of expert assessments and their professional evaluations. Once these questions were answered, the participants addressed the political problem of how to deal with the remaining risks and the uncertainties that could not be resolved. Acceptability cannot be delineated from expertise alone, but the best expertise is one necessary input in order to come to a prudent judgment about acceptability. The very essence of responsible action is to make viable and morally justified decisions in the face of uncertainty about the outcome, based on a range of legitimately varied expert assessments. These assessments have to be embedded into the context of criteria of acceptable and fair risk, risk distribution, and precautionary measures (Klinke, Dreyer, Renn, Stirling, & van Zwanenberg, 2006). It is these criteria that most precisely reflect the main points of lay perception. For a rational politics of risk, it is, therefore, imperative to collect both ethically justifiable evaluation criteria and standards and the best available systematic knowledge that informs us about the performance of each risk source or risk reduction option on the self-chosen criteria.

  3. Ultimately, decisions on acceptable risks have to be based on a subjective integration of factual evidence, attitudes toward uncertainties, and moral standards (Shrader-Frechette, 1998). It is only in a discourse on these three elements that a competent and fair decision is possible. This is what makes the irritating polarization of the two camps, with experts brandishing rationality on one side and counter-experts claiming the moral high ground on the other, particularly damaging. Risk governance is intrinsically bound up with the knowledge and the moral assessment of the expected consequences. There are no purely logical, factual, or normative guidelines on the question of the acceptability of nuclear technology, geo-engineering, or waste disposal by incineration. A discourse without a systematic scientific basis is nothing but empty waffle, while, on the other hand, a discourse that disregards the moral aspect of the available options will aid and abet amoral actions.
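As referenced in point 1, here is a minimal sketch of a Delphi-style aggregation: experts submit estimates, receive the group median and interquartile range as feedback, and may revise in later rounds. The numbers are invented; the point is that convergence is tracked by a shrinking interquartile range rather than by decreeing a single “true” answer.

```python
from statistics import quantiles

rounds = [
    [2, 5, 8, 20, 40, 120],  # round 1: initial expert risk estimates
    [4, 5, 8, 15, 25, 60],   # round 2: revised after seeing feedback
    [5, 6, 8, 12, 18, 30],   # round 3: further convergence
]

for i, estimates in enumerate(rounds, start=1):
    q1, q2, q3 = quantiles(estimates, n=4)   # quartile cut points
    print(f"round {i}: median={q2:.1f}, IQR=({q1:.1f}, {q3:.1f})")
```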

Implications for Public Discourse

Given these principles for integrating perception and assessment, one is still left with the question of how to operationalize this integration in the risk governance context. How can decision-makers deal with competing risk perceptions and standards in their policymaking institutions? An integrated governance process, combining risk assessments and perceptions, implies decision-making processes that include a multitude of actors and value clusters. On a general level, there is the distinction between the risk producers, on the one hand, and those who are exposed to the risks, on the other. Conflicting interests are obviously to be expected between these two groups. Both groups can be further divided into subgroups with distinct interests of their own, the so-called stakeholders. They are defined “as socially organised groups that are or will be affected by the outcome of the event or the activity from which the risk originates and/or by the risk management options taken to counter the risk” (IRGC, 2005, p. 49). In general, risk issues affect four main stakeholder groups in society: political, business, scientific, and civil society representatives (insofar as they are socially organized). Additionally, other groups play a role in the risk governance process: the media, cultural elites and opinion leaders, and the general public, in their role as either the nonorganized affected public or the nonorganized observing public (IRGC, 2005).

As governance aims at reaching acceptance of the outcomes of the decision-making process, the interests of all these different actors have to be met. At the same time, however, the number of options and the procedures through which they are selected have to be restricted, as the time and effort of the participants in the governance process are scarce resources and must therefore be treated with care. Consequently, an inclusive risk governance process, as required when facing complex risks, is characterized by inclusion of all affected parties, on the one hand, and closure concerning the selection of possible options and the procedures that generate them, on the other.

Inclusion concerns what and whom to include in the governance process: not only in the decision-making itself but in the whole process, from framing the problem and generating and evaluating options to coming to a joint conclusion. This goal presupposes that major attempts have been made to meet the following conditions (IRGC, 2005, pp. 49–50; Renn & Schweizer, 2009; Trustnet, 1999; Webler, 1999; Wynne, 2002):

  • Representatives of all four major actor groups have been involved.

  • All actors have been empowered to participate actively and constructively in the discourse.

  • The framing of the risk problem (or the issue) has been co-designed in a dialogue with the different groups.

  • A common understanding of the magnitude of the risk and the potential risk management options has been generated, and a plurality of options that represent the different interests and values of all involved parties has been included.

  • Major efforts have been made to create a forum for decision-making that provides equal and fair opportunities for all parties to voice their opinions and to express their preferences.

  • There exists a clear connection between the participatory bodies of decision-making and the political implementation level.

Compliance with these requirements serves two goals: the actors included have the chance to develop faith in their own competence, and they begin to trust each other and to have confidence in the process of risk management.

While these aims can be achieved in most cases where environmental risks can be governed at a local level, where the different parties are familiar with each other and with the risk issue in question, it is much more difficult to reach these objectives for risks that concern actors at a national or global level, where the risk is characterized by high complexity, or where the effects are, for example, not directly visible or not easily attributed to the corresponding risk agent. Sometimes one party may gain an advantage by sabotaging the process, because it is in its interest to keep the existing risk management strategies in place. Consequently, inclusive governance processes need to be thoroughly monitored and evaluated to prevent such strategic obstruction of the process.

Closure, on the other hand, is needed to restrict the selection of management options in order to guarantee an efficient use of resources, be they financial resources or the time and effort of the participants in the governance process. Closure concerns the generation and selection of risk management options: more specifically, which options are selected for further consideration and which are rejected. Closure therefore concerns the product of the deliberation process. It describes the rules of when and how to close a debate and what level of agreement is to be reached. The quality of the closure process must meet the following requirements (IRGC, 2005, p. 50; Renn & Schweizer, 2009; Webler, 1995):

  • Have all arguments been properly treated? Have all truth claims been fairly and accurately tested against commonly agreed standards of validation?

  • Has all the relevant evidence, in accordance with the current state-of-the-art knowledge, been collected and processed?

  • Were systematic, experimental, and practical knowledge and expertise adequately included and processed?

  • Were all interests and values considered, and was there a major effort to come up with fair and balanced solutions?

  • Were all normative judgments made explicit and thoroughly explained? Were normative statements derived from accepted ethical principles or legally prescribed norms?

  • Were all efforts undertaken to preserve the plurality of lifestyles and individual freedom and to restrict the realm of binding decisions to those areas in which binding rules and norms are essential and necessary to produce the desired outcome?

If these requirements are met, there is at least a real chance of achieving a jointly approved agreement and a common understanding of the preferred risk management options when facing complex environmental choices. The success of stakeholder involvement depends strongly on the quality of the process. Consequently, this process has to be designed specifically for the context and characteristics of the corresponding risk (Renn, 2004). Balancing inclusion and closure is one of the crucial tasks of risk governance.

Coping with the Plurality of Knowledge and Values: Inclusive Governance Formats

The different social groups enter the governance process with very different preconditions regarding their knowledge of the risk characteristics. Earlier in this article, we set out that the perception of risks varies greatly among different actor groups. Even among scientific disciplines, the concepts of risk vary widely. All relevant types of knowledge and the existing plurality of values must be taken into consideration if acceptable outcomes of the risk governance process are to be found. The only way to include these knowledge bases and values is to embed procedures for participation in the governance process.

A report by the U.S. National Research Council on understanding environmental risks concludes that scientifically valid and ethically justified procedures for the collective valuation of risks can be realized only within the context of an analytic-deliberative process (Stern & Fineberg, 1996; U.S. National Research Council, 2008). Analytic means that the best scientific findings about the possible consequences and conditions of collective action are incorporated into the negotiations; deliberative means that rationally and ethically transparent criteria for making trade-offs are used and documented transparently. Moreover, the authors consider fair participation by all groups concerned to be necessary. This is essential to ensure that the various moral and cultural reference systems, which can legitimately exist alongside each other, are also incorporated into the process. Depending on the nature of the risk and the available information about it, the analytic-deliberative approach needs to be further specified. In the context of integrated risk governance, suggestions for the participation of the public and stakeholders within an analytic-deliberative framework have been made depending on the nature of the risk (IRGC, 2005, pp. 51–52). Four types of “discourse,” describing the extent of participation, have been suggested (Klinke & Renn, 2012, 2014).

In the case of simple risk problems with obvious consequences, low remaining uncertainty, and no controversial values implied, such as municipal waste dumps, it seems unnecessary and even inefficient to involve all potentially affected parties in the process of decision-making. An “instrumental discourse” is proposed as the adequate strategy for dealing with these risks. In this first type of discourse, agency staff, directly affected groups (such as waste providers and immediately exposed individuals), and enforcement personnel are the relevant actors. Public interest in the regulation of these types of risk is likely to be very low. However, regular monitoring of the outcomes is important, as the risk might turn out to be more complex, uncertain, or ambiguous than the original assessment suggested (Birkmann, 2011).

In the case of complex risk problems, another type of discourse is needed. An example of a complexity-driven risk problem is the so-called “cocktail effect” of combined pesticide residues in food. While the effects of single pesticides are more or less scientifically established, the cause-and-effect chains of exposure to different pesticides via multiple exposure routes are highly complex. As complexity is a problem of insufficient knowledge about the interrelations of the risk characteristics, which cannot be fully resolved, it is important to create transparency about the subjective judgments and the knowledge elements included, in order to find the best estimates for characterizing the risks under consideration. This “epistemic discourse” aims at bringing together the knowledge of agency staff from different scientific disciplines and of other experts from academia, government, industry, or civil society. The principle of inclusion here is to bring new or additional knowledge into the process and to resolve cognitive conflicts. Appropriate instruments for this discourse are the Delphi, the Group Delphi, or consensus workshops (Gregory, McDaniels, & Fields, 2001; Webler et al., 1991; Wiering & Arts, 2006); a stylized sketch of a Delphi-type convergence procedure follows this paragraph.
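
The convergence logic of a Delphi or Group Delphi can be illustrated with a stylized simulation. The sketch below is an assumption-laden toy model, not a published protocol: the revision rule, the pull factor, and the stopping threshold are all invented. Experts see the group median after each round and revise their estimates toward it until the interquartile range is small.

```python
import statistics

def delphi_round(estimates, pull=0.5):
    """One stylized revision round: every expert moves his or her estimate
    part of the way toward the group median shown in the feedback."""
    median = statistics.median(estimates)
    return [e + pull * (median - e) for e in estimates]

# Hypothetical first-round expert estimates of a risk quantity.
estimates = [12.0, 35.0, 20.0, 50.0, 18.0]

# Iterate until the interquartile range falls below a stopping threshold,
# mimicking the consensus criterion of a facilitated Group Delphi.
for round_no in range(1, 11):
    estimates = delphi_round(estimates)
    q1, _, q3 = statistics.quantiles(estimates, n=4)
    if q3 - q1 < 2.0:
        break

print(f"rounds: {round_no}, median: {statistics.median(estimates):.1f}, "
      f"IQR: {q3 - q1:.1f}")
```

In a real Group Delphi, of course, revision is driven by argument and facilitated discussion rather than a mechanical pull toward the median; the simulation only makes the convergence criterion explicit.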

In the case of risk problems with high unresolved uncertainty, the challenges are even greater. The problem here is: how can one judge the severity of a situation when the potential damage and its probability are unknown or highly uncertain? This dilemma concerns the characterization of the risk as well as the evaluation and design of options for its reduction. Environmental pollution through the accumulation of potential pollutants (such as plastic residues in the oceans) or the carcinogenic effects of exposure to a mix of pollutants are, for example, characterized by high uncertainty. In this case, it is no longer sufficient to include experts in the discourse; policymakers and the main stakeholders should also be included, to find consensus on the extra margin of safety in which they would be willing to invest in order to avoid uncertain but potentially catastrophic consequences. This type is called “reflective discourse,” because it is based on collective reflection about balancing the possibilities of over- and underprotection. For this type of discourse, round tables, open space forums, negotiated rule-making exercises, mediation, or mixed advisory committees are suggested (Beierle & Cayford, 2002; Klinke, 2006; Rowe & Frewer, 2000; Stoll-Kleemann & Welp, 2006).

Finally, problems may arise from high ambiguity, that is, unclear or contested interpretations of what the environmental impacts mean. One prominent example is the intrusion of alien species into a pristine area: Is this a process that needs to be reversed, controlled, or left to natural selection? Resolving ambiguous problems requires the most inclusive strategy, as not only the directly affected groups but also the indirectly affected groups have something to contribute to the debate. If, for example, decisions have to be taken concerning the use or ban of genetically modified crops, the problem goes far beyond a mere risk problem, encompassing principal values, ethical questions, and questions of lifestyle and future visions. A “participatory discourse” must be organized in which competing arguments, beliefs, and values can be openly discussed. This discourse affects the very early steps of risk framing and risk evaluation. The aim of this type of discourse is to resolve conflicting expectations by identifying common values, defining options that allow people to live their own visions of a “good life,” finding equitable and just distribution rules for common resources, and activating institutional means for reaching common welfare so that all can profit from the collective benefits. The means for conducting this normative discourse include citizen panels, citizen juries, consensus conferences, ombudspersons, citizen advisory commissions, and the like (Abels, 2007; Dienel, 1989; Durant & Joss, 1995; Fiorino, 1990; Hagendijk & Irwin, 2006; Renn, 2008, pp. 248ff.).

This typology of discourse presupposes that the categorization of risks as simple, complex, uncertain, or ambiguous is uncontested. Very often, however, this turns out to be complicated: Who decides whether a risk issue can be categorized as simple, complex, uncertain, or ambiguous? For the purpose of categorizing the nature of the discourse, one should initiate a meta-discourse charged with the task of determining where a specific risk is located on the risk classification scheme and, in consequence, to which route of risk assessment and management it is allocated. This discourse is called “design discourse” and is meant to provide stakeholder involvement at this more general level (Renn & Walker, 2008, pp. 356ff.). Allocating a risk to one of the four routes has to be done before assessment starts, but as knowledge and information may change during the governance process, it may become necessary to reclassify the risk. One way to carry out this task involves a screening board consisting of members of the risk and concern assessment teams, risk managers, and key stakeholders; a minimal sketch of such an allocation rule follows this paragraph.
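
For illustration, the allocation rule that a screening board applies in such a design discourse can be written down explicitly. The sketch below is a simplification under stated assumptions: the precedence ordering (ambiguity over uncertainty over complexity) mirrors the escalator logic of Figure 2 but is not a rule prescribed by the framework, and the field names are invented.

```python
from dataclasses import dataclass

@dataclass
class RiskProfile:
    """A screening board's judgment of a risk's dominant characteristic."""
    is_complex: bool = False    # cause-effect chains poorly understood
    is_uncertain: bool = False  # damage and/or probability highly uncertain
    is_ambiguous: bool = False  # contested interpretations or values

def allocate_discourse(risk: RiskProfile) -> str:
    """Map a screened risk to one of the four discourse types."""
    if risk.is_ambiguous:
        return "participatory discourse (citizen panels, consensus conferences)"
    if risk.is_uncertain:
        return "reflective discourse (round tables, mediation)"
    if risk.is_complex:
        return "epistemic discourse (Delphi, expert consensus workshops)"
    return "instrumental discourse (agency staff, directly affected groups)"

# Illustrative allocations for the examples discussed in the text.
print(allocate_discourse(RiskProfile()))                   # municipal waste dump
print(allocate_discourse(RiskProfile(is_complex=True)))    # pesticide cocktail effects
print(allocate_discourse(RiskProfile(is_uncertain=True)))  # marine plastic residues
print(allocate_discourse(RiskProfile(is_ambiguous=True)))  # genetically modified crops
```

Because knowledge may change during the governance process, the screening board would re-run such a judgment whenever new information arrives, which is exactly the reclassification step described above.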

Figure 2 provides an overview of the described discourses depending on the risk characteristics and the actors included. Additionally, it sets out the type of conflict produced by the plurality of knowledge and values and the remedy required to deal with the corresponding risk.

Figure 2 The risk management escalator and stakeholder involvement. (IRGC, 2005, p. 53)

Of course, this scheme is a simplification of real risk problems and is meant to provide an idealized overview of the requirements related to different risk problems. Under real conditions, environmental risks often turn out to be interdependent, and the required measures will vary according to the unique context.

Conclusions

The individual and social factors that shape environmental risk perception demonstrate that the intuitive understanding of risks is a multidimensional concept and cannot be reduced to the product of probabilities and consequences alone, as in technical assessments (Allen, 1987). Although risk perceptions differ considerably among social and cultural groups, two common features appear to be universal: the multidimensionality of risk beyond probability and extent of damage, and the integration of beliefs related to perceived risks, perceived benefits, and the context in which the technology has been introduced and diffused into a holistic judgment (Rohrmann & Renn, 2000). This is not to say that professional risk assessments do not matter for people’s perception, but they are only one element among many that shape the formation of attitudes toward technologies and judgments about their acceptability (Boholm, 1998; IRGC, 2005). Risk perception studies have revealed the various elements that shape the individual and social experience of living with and next to technologies. What lessons can we draw from this review of research insights into the risk perception of technologies?

First, the observed discrepancy between the results of risk assessments conducted by experts and the intuitive assessments of the same risks by nonexperts is not, in the first instance, due to ignorance of statistically derived expected values, nor is it an expression of erratic thought processes; it is rather an indication of a multidimensional assessment process in which anticipated harm is only one factor among many (Breakwell, 2007, p. 3; IRGC, 2005; Mazur, 1987; Slovic et al., 1982; Zinn & Taylor-Gooby, 2006).

Second, individual and social risk experience appears to be influenced by intuitive heuristics, by the perceived characteristics of the risk and the risk situation, and by affective associations and beliefs about the risk source and about the actors involved in the risk-taking activity. It is also worth mentioning that the degree of perceived seriousness of environmental risks is more strongly related to exposure than to actual casualties, upon which most technical risk assessments are based (Renn et al., 1992). An exposure of a few people resulting in several casualties is likely to be less influential with regard to risk perception and public response than an exposure of many people resulting in minor injuries or only a few casualties.

Third, individual perception is widely governed by semantic images. These images constitute tools for reducing complexity by providing easily identifiable cues for ordering new risks into one of five images: emerging danger, insidious danger, stroke of fate, gamble, and personal thrill (Renn, 2008, pp. 110ff.). These images are internalized through cultural and social learning. They cluster around qualitative variables that specify the context and the situation in which the risk manifests itself within each image. These variables allow for a certain degree of abstraction in perceiving and evaluating risks across different risk sources, yet they still provide sufficient contextual specification for distinguishing between negligible, serious, and unacceptable risks. Rather than evaluating technological risk with a single formula, most people use a set of multiple attributes, many of which make normative sense.

Fourth, among these multiple attributes, catastrophic potential, dread, personal control, familiarity, and blame have been shown to be good predictors of the risk perception of environmental impacts caused by large-scale technologies in most countries. This has been confirmed by empirical investigations in, for example, the United States, Germany, France, Canada, Austria, Australia, and Japan (Rohrmann & Renn, 2000). Psychometric variables explain a greater proportion of the variance in risk perception than alternative approaches (Marris, Langford, & O’Riordan, 1998; Sjöberg, 1999, 2000b; Slovic et al., 2000; Zwick & Renn, 2002). The degrees to which these qualitative characteristics are assigned to specific risk sources depend, however, upon cultural context and social amplification effects, partially triggered by extensive media coverage.
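
The "explained variance" claim can be made tangible with a toy computation: the squared Pearson correlation between one psychometric attribute (here, dread) and perceived risk gives the share of variance in perception that a single linear predictor accounts for. All ratings below are invented for illustration and stand in for the multi-predictor regressions used in the cited studies.

```python
def r_squared(x, y):
    """Squared Pearson correlation: the share of variance in y that a
    linear relationship with x accounts for."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov ** 2 / (var_x * var_y)

# Invented ratings from eight respondents on a 1-7 scale.
dread          = [2, 5, 4, 6, 1, 7, 3, 5]
perceived_risk = [3, 5, 4, 7, 2, 6, 3, 6]

print(f"variance in perceived risk explained by dread: "
      f"{r_squared(dread, perceived_risk):.0%}")
```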

Fifth, studies conducted on an international scale show that people everywhere, regardless of their social or cultural background, use virtually universal risk perception criteria in forming their opinions (Hofstede, 2001; Renn & Rohrmann, 2000; Schwartz & Bilsky, 1990; more critical views of this claim can be found in Koné & Mullet, 1994; Schmidt & Wei, 2006). However, the relative effectiveness of these criteria in forming opinions and in judgments about risk tolerance varies considerably between different social groups and cultures. While the above-mentioned qualitative characteristics are accepted (often subconsciously) as intuitive yardsticks for perceiving risks, their relative contribution to a person’s actual opinion or motivation to take action depends upon more than just contextual characteristics (Siegrist, Keller, & Kiers, 2005). In addition, individual lifestyles, threatening environmental factors, worldviews about nature (particularly tampering with nature), technology, and society, and ingrained cultural values play a major role (Scherer & Cho, 2003; Sjöberg, 2000a; Wilkinson, 2001). In assessing technological or environmental risks, people who favor alternative lifestyles tend, more than others, to consider both “reversibility of the consequences of risk” and “congruence between risk bearers and benefactors,” while those with strong material values assess risk more by way of personal control opportunities and trust in institutional risk control (Buss & Craik, 1983).

The conclusion to be drawn from this is that value expectations and cultural background are significant determinants of subjective risk that do not add to the semantic and qualitative factors already described but, in effect, presuppose the existence of those factors in that they use them as heuristics to incorporate and process information on complex properties associated with the risk in question. Internalized value expectations and external circumstances can control the relative effectiveness of intuitive perception processes, but not their existence. This is not a matter of academic hair-splitting: it has direct relevance to communication and conflict management. If we assume that intuitive mechanisms of risk perception and assessment bear virtually universal characteristics that can be more or less reshaped by socio-cultural influences, then they can provide a fundamental basis for communication of which one can avail oneself, regardless of differences between the various standpoints. In addition to the pool of common symbols and rituals (shared meaning), whose importance to social integration is in constant decline in pluralistic societies, a new pool of common mechanisms of risk perception emerges that, along with common sense, signals the existence of supra-individual perception mechanisms.

What does this all mean for risk governance? First, risk perceptions cannot replace or even challenge risk assessments. They provide important information that supplements scientific and technical assessments. They represent public preferences about the desirability of envisioned opportunities and about the degree of risk aversion with respect to potential negative outcomes. Second, any complex decision with environmental consequences implies trade-offs between different types of risks and benefits. One option may cause more environmental damage but fewer negative health effects, while another may be more environmentally friendly but lead to more negative effects on human health. Who is going to determine the trade-offs between these decision options? This is not a scientific task; it should rather be grounded in a genuine revelation of public preferences. Third, modern societies are multifaceted in values, convictions, and lifestyles. There is no single public preference on most risks; there are many different viewpoints and preferences. The only way to reconcile these differences is to organize a structured risk discourse in which factual claims, value judgments, and competing interests are fairly represented and openly discussed. Based on past experiences with such structured discourses, there is sufficient evidence to conclude that such an attempt to include the major stakeholders and to strive for a common agreement (consensus or compromise) is worth the effort and improves both the output and the outcome of environmental risk management (U.S. National Research Council, 2008).
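
The second point, that trade-offs between decision options are a matter of preference rather than science, can be illustrated with a minimal multi-attribute sketch. All weights and scores below are hypothetical; the point is only that the ranking of options flips with the elicited weights, which is why the weights must come from a preference revelation among stakeholders and not from the risk assessment itself.

```python
options = {
    # (environmental quality, health protection); higher = better, scale 0-1.
    "Option A": (0.3, 0.8),  # more environmental damage, fewer health effects
    "Option B": (0.8, 0.4),  # environmentally friendly, more health effects
}

def rank(weights):
    """Score each option as a weighted sum of its attribute values."""
    w_env, w_health = weights
    scores = {name: w_env * env + w_health * health
              for name, (env, health) in options.items()}
    return max(scores, key=scores.get), scores

# Two stakeholder groups with different elicited weights reach different rankings.
for label, weights in [("health-oriented", (0.3, 0.7)),
                       ("ecology-oriented", (0.7, 0.3))]:
    best, scores = rank(weights)
    print(f"{label} group prefers {best}: {scores}")
```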

Further Reading

Beck, U. (1992). Risk society: Toward a new modernity (M. A. Ritter, Trans.). London: Sage.

Boholm, A. (1998). Comparative studies of risk perception: A review of twenty years of research. Journal of Risk Research, 1(2), 135–163.

Breakwell, G. M. (2007). The psychology of risk. Cambridge, U.K.: Cambridge University Press.

Gigerenzer, G. (2000). Adaptive thinking: Rationality in the real world. Oxford: Oxford University Press.

Hofstede, G. (2001). Culture’s consequences (2d ed.). Thousand Oaks, CA: Sage.

Kahneman, D. (2011). Thinking, fast and slow. New York: Penguin Books.

Löfstedt, R. (2005). Risk management in post trust societies. London: Palgrave Macmillan.

Luhmann, N. (1993). Risk: A sociological theory. Berlin: de Gruyter.

Morgan, M. G., Fischhoff, B., Bostrom, A., & Atman, C. J. (2001). Risk communication: A mental models approach. Cambridge, U.K.: Cambridge University Press.

Renn, O. (2008). Risk governance: Coping with uncertainty in a complex world. London: Earthscan.

Renn, O., & Rohrmann, B. (Eds.). (2000). Cross-cultural risk perception: A survey of research results. Dordrecht, The Netherlands: Kluwer.

Rosa, E. A., Renn, O., & McCright, A. M. (2014). The risk society revisited: Social theory and governance. Philadelphia: Temple University Press.

Slovic, P. (1992). Perception of risk: Reflections on the psychometric paradigm. In S. Krimsky & D. Golding (Eds.), Social theories of risk (pp. 117–152). Westport, CT: Praeger.

Stirling, A. (2007). Risk assessment in science: Towards a more constructive policy debate. EMBO Reports, 8, 309–315.

Sunstein, C. (2002). Risk and reason. Cambridge, U.K.: Cambridge University Press.

Taylor-Gooby, P., & Zinn, J. (Eds.). (2006). Risk in social science. Oxford: Oxford University Press.

Thompson, M., Ellis, W., & Wildavsky, A. (1990). Cultural theory. Boulder, CO: Westview Press.

U.S. National Research Council of the National Academies. (2008). Public participation in environmental assessment and decision making. Washington, DC: The National Academies Press.

Wynne, B. (1992). Risk and social learning: Reification to engagement. In S. Krimsky & D. Golding (Eds.), Social theories of risk (pp. 275–297). Westport, CT: Praeger.

References

Abels, G. (2007). Citizen involvement in public policymaking: Does it improve democratic legitimacy and accountability? The case of pTA. Interdisciplinary Information Sciences, 13(1), 103–116.

Ad-hoc Commission. (2003). Neuordnung der Verfahren und Strukturen zur Risikobewertung und Standardsetzung im gesundheitlichen Umweltschutz der Bundesrepublik Deutschland. Final Report of the Risk Commission to the German Government. Munich: Federal Institute of Radiation Protection (BfS).

Alhakami, A. S., & Slovic, P. (1994). A psychological study of the inverse relationship between perceived risks and perceived benefit. Risk Analysis, 14(6), 1085–1096.

Allen, F. W. (1987). Towards a holistic appreciation of risk: The challenge for communicators and policymakers. Science, Technology, and Human Values, 12(3&4), 138–143.

Beck, U. (1992). Risk society: Toward a new modernity (M. A. Ritter, Trans.). London: Sage.

Beierle, T. C., & Cayford, J. (2002). Democracy in practice: Public participation in environmental decisions. Washington, DC: Resources for the Future.

Besley, J. C., & McComas, K. A. (2014). Fairness, public engagement and risk communication. In J. L. Arvai & L. Rivers (Eds.), Effective risk communication (pp. 108–123). New York: Routledge/Earthscan.

Bi, J. (2006). Regional environmental risk analysis and management. Beijing: China Environmental Science Press.

Birkmann, J. (2011). First- and second-order adaptation to natural hazards and extreme events in the context of climate change. Natural Hazards, 58(2), 811–840.

Boholm, A. (1998). Comparative studies of risk perception: A review of twenty years of research. Journal of Risk Research, 1(2), 135–163.

Bracha, S. (2004). Freeze, flight, fight, fright, faint: Adaptationist perspectives on the acute stress response spectrum. CNS Spectrums, 9(9), 679–685.

Breakwell, G. M. (1994). The echo of power: A framework for social psychological research. The Psychologist, 17, 65–72.

Breakwell, G. M. (2007). The psychology of risk. Cambridge, U.K.: Cambridge University Press.

Brehmer, B. (1987). The psychology of risk. In W. T. Singleton & J. Howden (Eds.), Risk and decisions (pp. 25–39). New York: Wiley.

Buss, D., & Craik, K. (1983). Contemporary worldviews: Personal and policy implications. Journal of Applied Social Psychology, 13, 259–280.

Chaiken, S., & Stangor, C. (1987). Attitudes and attitude change. Annual Review of Psychology, 38, 575–630.

Clarke, L. (1989). Acceptable risk? Making decisions in a toxic environment. Berkeley: University of California Press.

Covello, V. T. (1983). The perception of technological risks: A literature review. Technological Forecasting and Social Change, 23, 285–297.

De Jonge, J., van Kleef, E., Frewer, L., & Renn, O. (2007). Perception of risk, benefit and trust associated with consumer food choice. In L. Frewer & H. van Trijp (Eds.), Understanding consumers of food products (pp. 534–557). Cambridge, U.K.: Woodhead.

De Marchi, B. (2015). Risk governance and the integration of different types of knowledge. In U. Fra.Paleo (Ed.), Risk governance: The articulation of hazard, politics and ecology (pp. 149–166). Heidelberg: Springer.

Dienel, P. C. (1989). Contributing to social decision methodology: Citizen reports on technological projects. In C. Vlek & G. Cvetkovich (Eds.), Social decision methodology for technological projects (pp. 133–151). Dordrecht, The Netherlands: Kluwer Academic.

Douglas, M., & Wildavsky, A. (1982). Risk and culture. Berkeley: University of California Press.

Dunwoody, S. (1992). The media and public perception of risk: How journalists frame risk stories. In D. W. Bromley & K. Segerson (Eds.), The social response to environmental risk: Policy formulation in an age of uncertainty (pp. 75–100). Dordrecht, The Netherlands: Kluwer.

Durant, J., & Joss, S. (1995). Public participation in science. London: Science Museum.

Earle, T. C., & Cvetkovich, G. T. (1999). Social trust and culture in risk management. In G. T. Cvetkovich & R. Löfstedt (Eds.), Social trust and the management of risk (pp. 9–21). London: Earthscan.

Engdahl, E., & Lidskog, R. (2014). Risk, communication and trust: Towards an emotional understanding of trust. Public Understanding of Science, 23(6), 703–717.

Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.

Fiorino, D. J. (1990). Citizen participation and environmental risk: A survey of institutional mechanisms. Science, Technology, & Human Values, 15(2), 226–243.

Fischhoff, B. (1985). Managing risk perceptions. Issues in Science and Technology, 2(1), 83–96.

Fischhoff, B., Slovic, P., Lichtenstein, S., Read, S., & Combs, B. (1978). How safe is safe enough? A psychometric study of attitudes toward technological risks and benefits. Policy Sciences, 9, 127–152.

Fischhoff, B., Watson, S. R., & Hope, C. (1984). Defining risk. Policy Sciences, 17, 123–129.

Frewer, L. J., Miles, S., Brennan, M., Kuznesof, S., Ness, M., & Ritson, C. (2002). Public preferences for informed choice under conditions of risk uncertainty. Public Understanding of Science, 11(4), 1–10.

Gigerenzer, G. (1991). How to make cognitive illusions disappear: Beyond “heuristics and biases.” European Review of Social Psychology, 2, 83–115.

Gigerenzer, G. (2000). Adaptive thinking: Rationality in the real world. Oxford: Oxford University Press.

Gigerenzer, G. (2013). Risiko. Wie man die richtigen Entscheidungen trifft. Munich: Bertelsmann.

Gigerenzer, G., & Selten, R. (2001). Rethinking rationality. In G. Gigerenzer & R. Selten (Eds.), Bounded rationality: The adaptive toolbox (pp. 1–12). Cambridge, MA: MIT Press.

Gregory, R., McDaniels, T., & Fields, D. (2001). Decision aiding, not dispute resolution: A new perspective for environmental negotiation. Journal of Policy Analysis and Management, 20(3), 415–432.

Gregory, R. S. (2004). Valuing risk management choices. In T. McDaniels & M. J. Small (Eds.), Risk analysis and society: An interdisciplinary characterization of the field (pp. 213–250). Cambridge, U.K.: Cambridge University Press.

Hagendijk, R., & Irwin, A. (2006). Public deliberation and governance: Engaging with science and technology in contemporary Europe. Minerva, 44, 167–184.

Hofstede, G. (2001). Culture’s consequences (2d ed.). Thousand Oaks, CA: Sage.

IRGC (International Risk Governance Council). (2005). Risk governance: Towards an integrative approach. White Paper. Geneva: IRGC.

Jaeger, C. C., Renn, O., Rosa, E. A., & Webler, T. (2001). Risk, uncertainty and rational action. London: Earthscan.

Jungermann, H., Pfister, H.-R., & Fischer, K. (2005). Die Psychologie der Entscheidung (2d ed.). Heidelberg: Elsevier.

Kahneman, D. (2011). Thinking, fast and slow. New York: Penguin.

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.

Klinke, A. (2006). Demokratisches Regieren jenseits des Staates. Deliberative Politik im nordamerikanischen Große Seen-Regime. Opladen, Germany: Barbara Budrich.

Klinke, A., Dreyer, M., Renn, O., Stirling, A., & van Zwanenberg, P. (2006). Precautionary risk regulation in European governance. Journal of Risk Research, 9(4), 373–392.

Klinke, A., & Renn, O. (2010). Risk governance: Contemporary and future challenges. In M. Gilek, J. Eriksson, & Ch. Ruden (Eds.), Regulating chemical risks: European and global challenges (pp. 9–28). Heidelberg: Springer.

Klinke, A., & Renn, O. (2012). Adaptive and integrative governance on risk and uncertainty. Journal of Risk Research, 15(3), 273–292.

Klinke, A., & Renn, O. (2014). Expertise and experience: A deliberative system of a functional division of labor for post-normal risk governance. Innovation: The European Journal of Social Science Research, 27(4), 442–465.

Knight, A., & Warland, J. (2005). Determinants of food safety risks: A multi-disciplinary approach. Rural Sociology, 70(2), 253–275.

Koné, D., & Mullet, E. (1994). Societal risk perception and media coverage. Risk Analysis, 14(1), 21–24.

Kraus, N., Malmfors, T., & Slovic, P. (1992). Intuitive toxicology: Expert and lay judgments of chemical risks. Risk Analysis, 12, 215–232.

Kunreuther, H. (2000). Insurance as cornerstone for public-private sector partnerships. Natural Hazards Review, 1(2), 126–136.

Lee, T. R. (1981). The public perception of risk and the question of irrationality. In Risk perception (Vol. 376, pp. 5–16). London: The Royal Society.

Linnerooth-Bayer, J., & Fitzgerald, K. B. (1996). Conflicting views on fair siting processes: Evidence from Austria and the US. Risk Issues in Health, Safety and Environment, 7(2), 119–134.

Loewenstein, G., Weber, E., Hsee, C., & Welch, E. (2001). Risk as feelings. Psychological Bulletin, 127, 267–286.

Löfstedt, R. (2003). Risk communication: Pitfalls and promises. European Review, 11(3), 417–435.

Löfstedt, R. (2005). Risk management in post trust societies. London: Palgrave Macmillan.

Luhmann, N. (1986). The autopoiesis of social systems. In R. F. Geyer & J. van der Zouven (Eds.), Sociocybernetic paradoxes: Observation, control and evolution of self-steering systems (pp. 172–192). London: Sage.

Luhmann, N. (1993). Risk: A sociological theory. Berlin: de Gruyter.

Luhmann, N. (1997). Grenzwerte der ökologischen Politik: Eine Form von Risikomanagement. In P. Hiller & G. Krücken (Eds.), Risiko und Regulierung. Soziologische Beiträge zu Technikkontrolle und präventiver Umweltpolitik (pp. 195–221). Frankfurt/Main: Suhrkamp.

Marks, I., & Nesse, R. (1994). Fear and fitness: An evolutionary analysis of anxiety disorders. Ethology and Sociobiology, 15, 247–261.

Marris, C., Langford, I. H., & O’Riordan, T. (1998). A quantitative test of the cultural theory of risk perceptions: Comparison with the psychometric paradigm. Risk Analysis, 18, 635–647.

Marshall, B. K. (1999). Globalization, environmental degradation and Ulrich Beck’s risk society. Environmental Values, Special Issue: Risk, 8(2), 253–275.

Mazur, A. (1987). Does public perception of risk explain the social response to potential hazard? Quarterly Journal of Ideology, 11, 41–45.

McDaniels, T. L., Axelrod, L. J., Cavanagh, N. S., & Slovic, P. (1997). Perception of ecological risk to water environments. Risk Analysis, 17(3), 341–352.

Morgan, M. G., Fischhoff, B., Bostrom, A., & Atman, C. J. (2001). Risk communication: A mental models approach. Cambridge, U.K.: Cambridge University Press.

Mythen, G. (2005). Employment, individualization, and insecurity: Rethinking the risk society perspective. The Sociological Review, 53(1), 129–149.

OECD (Organisation for Economic Co-operation and Development). (2002). Guidance document on risk communication for chemical risk management. Series on Risk Management, Vol. 16. Paris: Environment, Health and Safety Publications, OECD.

Peters, E., Burraston, B., & Mertz, C. K. (2004). An emotion-based model of risk perception and stigma-susceptibility: Cognitive appraisals of emotion, affective reactivity, worldviews, and risk perceptions in the generation of technological stigma. Risk Analysis, 24(5), 1349–1367.

Peters, H. P. (1991). Durch Risikokommunikation zur Technikakzeptanz? Die Konstruktion von Risiko‘wirklichkeiten’ durch Experten, Gegenexperten und Öffentlichkeit. In J. Krüger & St. Ruß-Mohl (Eds.), Risikokommunikation (pp. 11–67). Berlin: Edition Sigma.

Pidgeon, N. F. (1997). The limits to safety? Culture, politics, learning and manmade disasters. Journal of Contingencies and Crisis Management, 5(1), 1–14.

Pollatsek, A., & Tversky, A. (1970). A theory of risk. Journal of Mathematical Psychology, 7, 540–553.

Renn, O. (1990). Risk perception and risk management: A review. Risk Abstracts, 7(1), 1–9, Part 1; 7(2), 1–9, Part 2.

Renn, O. (1992). Concepts of risk: A classification. In S. Krimsky & D. Golding (Eds.), Social theories of risk (pp. 53–79). Westport, CT: Praeger.

Renn, O. (2004). The challenge of integrating deliberation and expertise: Participation and discourse in risk management. In T. McDaniels & M. J. Small (Eds.), Risk analysis and society: An interdisciplinary characterization of the field (pp. 289–366). Cambridge, U.K.: Cambridge University Press.

Renn, O. (2005). Risk perception and communication: Lessons for the food and food packaging industry. Food Additives and Contaminants, 22(10), 1061–1071.

Renn, O. (2008). Risk governance: Coping with uncertainty in a complex world. London: Earthscan.

Renn, O. (2014). Das Risikoparadox. Warum wir uns vor dem Falschen fürchten. Frankfurt/Main: Fischer.

Renn, O., Burns, W., Kasperson, R. E., Kasperson, J. X., & Slovic, P. (1992). The social amplification of risk: Theoretical foundations and empirical application. Journal of Social Issues, Special Issue: Public Responses to Environmental Hazards, 48(4), 137–160.

Renn, O., Klinke, A., & van Asselt, M. (2011). Coping with complexity, uncertainty and ambiguity in risk governance: A synthesis. AMBIO, 40(2), 231–246.

Renn, O., & Rohrmann, B. (2000). Cross-cultural risk perception research: State and challenges. In O. Renn & B. Rohrmann (Eds.), Cross-cultural risk perception: A survey of empirical studies (pp. 211–233). Dordrecht, The Netherlands: Kluwer.

Renn, O., & Schweizer, P. (2009). Inclusive risk governance: Concepts and application to environmental policy making. Environmental Policy and Governance, 19, 174–185.

Renn, O., Schweizer, P.-J., Dreyer, M., & Klinke, A. (2007). Risiko. Über den gesellschaftlichen Umgang mit Unsicherheit. Munich: Ökom Verlag.

Renn, O., & Walker, K. (2008). Lessons learned: A re-assessment of the IRGC framework on risk governance. In O. Renn & K. Walker (Eds.), The IRGC risk governance framework: Concepts and practice (pp. 331–367). Dordrecht, The Netherlands: Springer.

Renn, O., & Zwick, M. M. (1997). Risiko- und Technikakzeptanz. Heidelberg: Springer.

Rohrmann, B. (2000). Cross-national studies on the perception and evaluation of hazards. In O. Renn & B. Rohrmann (Eds.), Cross-cultural risk perception: A survey of research results (pp. 55–78). Dordrecht, The Netherlands: Kluwer.

Rohrmann, B., & Chen, H. (1999). Risk perception in China and Australia: An exploratory cross-cultural study. Journal of Risk Research, 2(3), 219–241.

Rohrmann, B., & Renn, O. (2000). Risk perception research: An introduction. In O. Renn & B. Rohrmann (Eds.), Cross-cultural risk perception: A survey of research results (pp. 11–54). Dordrecht, The Netherlands: Kluwer.

Rosa, E. A., Matsuda, N., & Kleinhesselink, R. R. (2000). The cognitive architecture of risk: Pancultural unity or cultural shaping? In O. Renn & B. Rohrmann (Eds.), Cross-cultural risk perception: A survey of research results (pp. 185–210). Dordrecht, The Netherlands: Kluwer.

Rosa, E. A., Renn, O., & McCright, A. M. (2014). The risk society revisited: Social theory and governance. Philadelphia: Temple University Press.

Ross, L. D. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 10, pp. 173–220). New York: Random House.

Rowe, G., & Frewer, L. J. (2000). Public participation methods: A framework for evaluation. Science, Technology & Human Values, 25(1), 3–29.

Ruddat, M. (2009). Kognitive Kompetenz zur Risikobewertung als Vorbedingung der Risikomündigkeit und ihre Bedeutung für die Risikokommunikation (Cognitive competence as a precondition for risk maturity and its role for risk communication). Ph.D. thesis, University of Stuttgart.

Scherer, C. W., & Cho, H. (2003). A social network contagion theory of risk perception. Risk Analysis, 23(2), 261–267.

Schmidt, M. R., & Wei, W. (2006). Loss of agro-biodiversity, uncertainty, and perceived control: A comparative risk perception study in Austria and China. Risk Analysis, 26(2), 455–470.

Scholz, R. W. (2011). Environmental literacy in science and society. Cambridge, U.K.: Cambridge University Press.

Schwartz, S. H., & Bilsky, W. (1990). Toward a theory of the universal content and structure of values: Extensions and cross-cultural replications. Journal of Personality and Social Psychology, 58(5), 878–891.

Shrader-Frechette, K. (1998). Scientific methods, antifoundationalism, and decision making. In R. Löfstedt & L. Frewer (Eds.), Risk & modern society (pp. 45–55). London: Earthscan.

Shubik, M. (1991). Risk, society, politicians, scientists, and people. In M. Shubik (Ed.), Risk, organizations, and society (pp. 7–30). Dordrecht, The Netherlands: Kluwer.

Siegrist, M., Keller, C., & Kiers, H. A. (2005). A new look at the psychometric paradigm of perceptions of hazards. Risk Analysis, 25(1), 211–222.

Simon, H. A. (1976). Administrative behavior: A study of decision-making processes in administrative organizations (3d ed.). New York: Basic.

Simon, H. A. (1987). Rationality in psychology and economics. In R. M. Hogarth & M. W. Reder (Eds.), Rational choice: The contrast between economics and psychology (pp. 25–40). Chicago: University of Chicago Press.

Sjöberg, L. (1999). Risk perception in Western Europe. AMBIO, 28(6), 543–549.

Sjöberg, L. (2000a). Perceived risk and tampering with nature. Journal of Risk Research, 3, 353–367.

Sjöberg, L. (2000b). Factors in risk perception. Risk Analysis, 20(1), 1–11.

Sjöberg, L. (2001). Limits of knowledge and the limited importance of trust. Risk Analysis, 21, 189–198.

Sjöberg, L., Kolarova, D., Rucai, A.-A., & Bernström, M.-L. (2000). Risk perception in Bulgaria and Romania. In O. Renn & B. Rohrmann (Eds.), Cross-cultural risk perception: A survey of research results (pp. 145–184). Dordrecht, The Netherlands: Kluwer.

Slovic, P. (1987). Perception of risk. Science, 236(4799), 280–285.

Slovic, P. (1992). Perception of risk: Reflections on the psychometric paradigm. In S. Krimsky & D. Golding (Eds.), Social theories of risk (pp. 117–152). Westport, CT: Praeger.

Slovic, P. (2000). Informing and educating the public about risk. In P. Slovic (Ed.), The perception of risk (pp. 182–191). London: Earthscan.

Slovic, P., Finucane, M., Peters, E., & MacGregor, D. (2002). The affect heuristic. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 397–420). Cambridge, U.K.: Cambridge University Press.

Slovic, P., Fischhoff, B., & Lichtenstein, S. (1980). Facts and fears: Understanding perceived risk. In R. Schwing & W. A. Albers, Jr. (Eds.), Societal risk assessment: How safe is safe enough? (pp. 181–214). New York: Plenum.

Slovic, P., Fischhoff, B., & Lichtenstein, S. (1982). Why study risk perception? Risk Analysis, 2, 83–94.

Slovic, P., Fischhoff, B., & Lichtenstein, S. (1986). The psychometric study of risk perception. In V. R. Covello, J. Menkes, & J. Mumpower (Eds.), Risk evaluation and management (pp. 3–24). New York: Plenum.

Slovic, P., Flynn, J., Mertz, C. K., Poumadere, M., & Mays, C. (2000). Nuclear power and the public: A comparative study of risk perception in the United States and France. In O. Renn & B. Rohrmann (Eds.), Cross-cultural risk perception: A survey of research results (pp. 55–102). Dordrecht, The Netherlands: Kluwer.

Smith, K. (2013). Environmental hazards: Assessing risk and reducing disaster. London: Routledge.

Sparks, P., & Shepherd, R. (1994). Public perceptions of the potential hazards associated with food production and food consumption: An empirical study. Risk Analysis, 14, 799–806.

Stirling, A. (2008). “Opening up” and “closing down”: Power, participation and pluralism in the social appraisal of technology. Science, Technology, and Human Values, 33(2), 262–294.

Stern, P. C., & Fineberg, H. V. (1996). Understanding risk: Informing decisions in a democratic society. National Research Council, Committee on Risk Characterization. Washington, DC: National Academy Press.

Stoll-Kleemann, S., & Welp, M. (2006). Stakeholder dialogues in natural resources management: Theory and practice. Heidelberg: Springer.

Streffer, C., Bücker, J., Cansier, A., Cansier, D., Gethmann, C. F., Guderian, R., et al. (2003). Environmental standards: Combined exposures and their effects on human beings and their environment. Berlin: Springer.

Sunstein, C. (2002). Risk and reason. Cambridge, U.K.: Cambridge University Press.

Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven, CT: Yale University Press.

Thompson, M. (1980). An outline of the cultural theory of risk. Working Paper of the International Institute for Applied Systems Analysis (IIASA), WP-80-177. Laxenburg, Austria: IIASA.

Thompson, M., Ellis, W., & Wildavsky, A. (1990). Cultural theory. Boulder, CO: Westview Press.

Townsend, E., Clarke, D. D., & Travis, B. (2004). Effects of context and feelings on perceptions of genetically modified food. Risk Analysis, 24(5), 1369–1384.

Trustnet. (1999). A new perspective on risk governance. Document of the Trustnet Network. Paris: European Union.

Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453–458.

U.S. National Research Council of the National Academies. (2008). Public participation in environmental assessment and decision making. Washington, DC: The National Academies Press.

von Winterfeldt, D., & Edwards, W. (1984). Patterns of conflict about risk debates. Risk Analysis, 4, 55–68.

Webler, T., Levine, D., Rakel, H., & Renn, O. (1991). The group Delphi: A novel attempt at reducing uncertainty. Technological Forecasting and Social Change, 39, 253–263.

Webler, Th. (1995). “Right” discourse in citizen participation: An evaluative yardstick. In O. Renn, Th. Webler, & P. Wiedemann (Eds.), Fairness and competence in citizen participation: Evaluating new models for environmental discourse (pp. 35–86). Dordrecht, The Netherlands: Kluwer.

Webler, Th. (1999). The craft and theory of public participation: A dialectical process. Journal of Risk Research, 2(1), 55–71.

Webler, Th., Kastenholz, H., & Renn, O. (1995). Public participation in impact assessment: A social learning perspective. Environmental Impact Assessment Review, 15, 443–463.

Wiering, M. A., & Arts, B. J. M. (2006). Discursive shifts in Dutch river management: “Deep” institutional change or adaptation strategy? Hydrobiologia, 565(1), 327–338.

Wilkinson, I. (2001). Social theories of risk perception: At once indispensable and insufficient. Current Sociology, 49(1), 1–22.

Wynne, B. (1984). Public perceptions of risk. In J. Aurrey (Ed.), The urban transportation of irradiated fuel (pp. 246–259). London: Macmillan.

Wynne, B. (2002). Risk and environment as legitimatory discourses of technology: Reflexivity inside out? Current Sociology, 50(3), 459–477.

Zinn, J. O., & Taylor-Gooby, P. (2006). Risk as an interdisciplinary research area. In P. Taylor-Gooby & J. Zinn (Eds.), Risk in social science (pp. 20–53). Oxford: Oxford University Press.

Zwick, M. M., & Renn, O. (1998). Wahrnehmung und Bewertung von Technik in Baden-Württemberg. Stuttgart: Stuttgart Center of Technology Assessment.

Zwick, M. M., & Renn, O. (2002). Perception and evaluation of risks: Findings of the Baden-Württemberg Risk Survey 2001. Working Paper, Vol. 203. Stuttgart: Stuttgart Center of Technology Assessment.