Description
The essay will consist of 2 to 3 typed, double-spaced pages (it is acceptable to go over 3 pages if needed), in 12-point Times New Roman with 1-inch margins, detailing a time when you experienced an ethical dilemma. Please note that your paper will not be shared with the class; only the professor will read it, so use an example you are comfortable writing about. Chapter 6 discusses, in a variety of ways, the complexity ethical leaders encounter when choosing the ethical course of action and the difficulty of following through on their choices. Discuss an ethical dilemma you have experienced in the past. What were the circumstances? What were the choices? What did you do? What happened? Analyze your ethical issue using whichever parts or concepts from the chapter best fit your example. Looking back on your situation now, what steps or ideas from this chapter relate directly to it or could have improved how you handled it? For example, does the material help you see the situation more clearly? Discuss however the text material connects to your situation as you look back on it. In closing, would you have changed what you did, or not? Include references to the assigned reading, citing page numbers as they relate to your paper. (You can just use the quote from the PDF I provided and I will add the page number.)
Attached reading (excerpt)
For decades, scholars viewed ethical decision making as a cognitive process. Moral psychologists, ethicists
and ethics educators focused on how individuals consciously use logic and reason to solve ethical problems.
They assumed that leaders reach their conclusions after careful deliberation. Researchers ignored emotions or
treated them with suspicion because feelings could undermine moral reasoning.
In recent years, a growing number of scholars have challenged the cognitive approach to ethical decision
making. One critic is psychologist Jonathan Haidt.1 He argues that we quickly make ethical determinations
and then employ logic after the fact to justify our choices. Haidt points to moral dumbfounding as evidence that
moral decision making is the product of intuition, not deliberation. In moral dumbfounding, leaders and
followers have strong opinions about right or wrong but can’t explain why they feel as they do. For example,
when he asked Americans if eating the family dog for dinner was morally wrong, most people felt disgusted but
were at a loss to explain why they felt this way.
Haidt calls his approach to ethical decision making the Social Intuitionist Model to highlight the role that
intuition and social norms play in moral determinations. He defines intuition as the sudden appearance in
consciousness, or at the fringe of consciousness, of an evaluative feeling (like-dislike, good-bad) about a
person or event without any conscious awareness of having gone through steps of weighing evidence, crafting
evaluative arguments, or inferring a conclusion.
Haidt argues that automatic processes are the elephant and logic is the rider. In most cases, the elephant goes
where it wants to go, though the rider can occasionally steer the pachyderm in a different direction. Our
instantaneous intuitions about right and wrong are the product of social forces like our cultural background. For
example, dogs are routinely eaten in some societies. In these cultures, which don’t treat pets as family
members, respondents would approve of eating a dog for dinner. Haidt doesn’t completely eliminate reason
from his model. Other people may challenge our intuitions, introducing new information and arguments that
change our initial position. Or we may modify our attitudes based on self-reflection. Many Americans used to
immediately condemn interracial couples. As time passed, society recognized that this reaction was biased,
unfounded, and unjust. The nature of the moral dilemma may trigger either an intuitive or a rational response.
Researchers in cognitive neuroscience or neuroethics also challenge the notion that ethical thinking is devoid
of emotion.3 They report that individuals who suffer brain damage may retain their reasoning abilities but make
poor decisions due to emotional deficits. Neuroimaging using MRI scans reveals that ethical decision making
is not localized in one portion of the brain but involves several different regions. Ethical thinking activates both
cognitive and emotional areas of the brain. Other investigators believe that our brains evolved with moral
structures that operate automatically. Some draw a linguistic analogy, arguing that we are born with the ability
to make ethical choices (a moral grammar) in the same way that we are born with the ability to use language
(linguistic grammar).4
Investigators continue to disagree as to whether logic or intuition plays the more important role in moral
decision making. However, nearly all researchers now adopt a dual process approach, agreeing that both logic
and emotion are essential to making good ethical choices.5 Recognizing the role of intuition in decision making
enables us to better direct its use. There are times that we need to regulate our initial intuitive reaction,
controlling our tendency to automatically respond in anger or condemnation. We may need to step back and
reappraise the situation or work on calming down. (You would likely want to control your automatic response if,
while travelling, your foreign hosts serve you dog meat.) We need to recognize how our intuitions are being
shaped by the social situation. Are we accepting corruption because the organization has weakened our
intuitions about right and wrong, for instance? Should we keep our distance from coworkers who might be
corrupting influences? We may want to “train” our intuitions in the same way we develop our character,
working to eliminate automatic prejudices against other groups, for instance, and combating the temptation to
cheat by developing more respect for academic authority.6
We shouldn’t always tamp down our emotions. On some occasions, we may need to give priority to our intuitions.
Researchers have discovered that deliberate reasoning can “crowd out” altruism, for instance.7 Paying
volunteers can significantly reduce volunteering. When a day care center began fining adults who were late
picking up their children, the number of tardy parents actually increased. Why? Because parents no longer felt
guilty for inconveniencing the day care staff.8
Incorporating intuition into ethics training can improve ethical behavior, as in the case of the Johnson &
Johnson company. Instead of emphasizing obligations to customers in general, the firm invokes emotional
images by citing specific groups through its credo: “Our first responsibility is to the doctors, nurses, and
patients, to mothers and fathers and all others who use our products and services.”
One practical strategy for drawing upon both reason and feeling is to record your initial reaction to an ethical
dilemma. When confronted with ethical scenarios like those presented in Case Study 6.1, write down your
initial reaction. Then use the decision-making formats and other cognitive tools to test your immediate
response. When you’re finished, compare your final decision to your initial reaction. Your ultimate conclusion
after following a series of steps may be the same as your first judgment. Or you might come to a significantly
different decision. (You may also want to test your conclusion to see if it “feels” right.) In any case, you should
be more comfortable with your solution because your deliberations were informed by your experiences,
emotions, and intuitions as well as by your conscious reasoning. To assist you in dual processing, I’ll be
introducing research findings from both the cognitive and intuitionist traditions in the next section of the
chapter.
COMPONENTS OF MORAL ACTION
James Rest of the University of Minnesota developed the most widely used model of moral behavior. Rest built
his four-component model by working backward. He started with the end product—moral action—and then
determined the steps that produce such behavior. He concluded that ethical action is the result of four
psychological subprocesses: (1) moral sensitivity (recognition), (2) moral judgment, (3) moral focus
(motivation), and (4) moral character.9
Component 1: Moral Sensitivity (Recognition)
Moral sensitivity (recognizing the presence of an ethical issue) is the first step in ethical decision making
because we can’t solve a moral problem unless we first know that one exists. A great many moral failures
stem from ethical insensitivity. The safety committee at Ford Motor decided not to fix the defective gas tank on
the Pinto automobile (see Chapter 2) because members saw no problem with saving money rather than
human lives. Toshiba overstated earnings for 7 years as managers concentrated on reaching high quarterly
goals instead of on maintaining the company’s previously high ethical standards.10 Many students, focused on
finishing their degrees, see no problem with cheating.
According to Rest, problem recognition requires that we consider how our behavior affects others, identify
possible courses of action, and determine the consequences of each potential strategy. Empathy and
perspective skills are essential to this component of moral action. If we understand how others might feel or
react, we are more sensitive to potential negative effects of our choices and can better predict the likely
outcomes of each option.
A number of factors prevent us from recognizing ethical issues. Sometimes, we frame or interpret ethical
issues as business or legal issues instead, believing that offering bribes is a normal way of doing business or
that a marketing tactic, such as advertising sugary cereals to children, is acceptable because it doesn’t violate
any laws.11 We may be reluctant to use moral terminology—values, justice, right, wrong—to describe our
decisions because we want to avoid controversy or believe that keeping silent will make us appear strong and
capable.12 We may even deceive ourselves into thinking that we are acting morally when we are clearly not, a
process called ethical fading. The moral aspects of a decision fade into the background if we use euphemisms
to disguise unethical behavior, numb our consciences through repeated misbehavior, blame others, and claim
that only we know the “truth.”13
Low levels of moral attentiveness can also reduce our sensitivity to ethical issues.14 Moral attentiveness is a
personality-like trait that describes the amount of attention individuals pay to the moral dimension of
experiences and events. This trait is made up of two elements: (1) perceptual moral attentiveness (the
tendency to notice morality in everyday life), and (2) reflective moral attentiveness (routinely considering ethics
when making choices). Perceptual attentiveness taps into intuition while reflective attentiveness involves
conscious reasoning. Those low in moral attentiveness aren’t as aware of the ethical implications of specific
situations (e.g., bribery, conflicts of interest) and act less ethically as a result. (Complete Self-Assessment 6.1
to determine your level of perceptual and reflective moral attentiveness.)
Component 2: Moral Judgment
Once an ethical problem is identified, decision makers select a course of action from the options generated in
Component 1. In other words, they make judgments about what is the right or wrong thing to do in this
situation.
Moral judgment has generated more research than the other components of Rest’s model. Investigators have
been particularly interested in (1) cognitive moral development, the process by which people develop their
moral reasoning abilities over time, and (2) biases or errors that undermine the decision-making process.
Cognitive Moral Development
Harvard psychologist Lawrence Kohlberg argued that individuals progress through a series of moral stages
just as they do physical ones.16 Each stage is more advanced than the one before. Not only do people
engage in more complex reasoning as they progress up the stages, but they also become less self-centered
and develop broader definitions of morality.
Kohlberg identified three levels of moral development, each divided into two stages. Level I, preconventional
thinking, is the most primitive and focuses on consequences. This form of moral reasoning is common among
children who choose to obey to avoid punishment (Stage 1) or follow the rules in order to meet their interests
(Stage 2). Stage 2 thinkers are interested in getting a fair deal: You help me, and I’ll help you.
Conventional thinkers (Level II) look to others for guidance when deciding how to act. Stage 3 people want to
live up to the expectations of those they respect, such as parents, siblings, and friends, and value concern for
others and respect. Stage 4 individuals take a somewhat broader perspective, looking to society as a whole for
direction. They believe in following rules at work, for example, and obeying the law. Kohlberg found that most adults are
Level II thinkers.
Level III, postconventional or principled reasoning, is the most advanced type of ethical thinking. Stage 5
people are guided by utilitarian principles. They are concerned for the needs of the entire group and want to
make sure that rules and laws serve the greatest good for the greatest number. Stage 6 people operate
according to internalized, universal principles such as justice, equality, and human dignity. These principles
consistently guide their behavior and take precedence over the laws of any particular society. According to
Kohlberg, fewer than 20% of American adults ever reach Stage 5, and almost no one reaches Stage 6.
Critics take issue with both the philosophical foundation of Kohlberg’s model and its reliance on concrete
stages of moral development.17 They contend that Kohlberg based his postconventional stage on Rawls’s
justice-as-fairness theory and made deontological ethics superior to other ethical approaches. They note that
the model applies more to societal issues than to individual ethical decisions. A great many psychologists
challenge the notion that people go through a rigid or “hard” series of moral stages, leaving one stage
completely behind before moving to the next. They argue instead that a person can engage in many ways of
thinking about a problem, regardless of age.
Rest (who studied under Kohlberg), Darcia Narvaez, and their colleagues responded to the critics by replacing
the hard stages with a staircase of developmental schemas.18 Schemas are networks of knowledge organized
around life events. We use schemas when encountering new situations or information. You are able to master
information in new classes, for instance, by using strategies you developed in previous courses. According to
this “neo-Kohlbergian” approach, decision makers rely on more sophisticated moral schemas as they develop.
The least sophisticated schema is based on personal interest. People at this level are concerned only with
what they may gain or lose in an ethical dilemma. No consideration is given to the needs of broader society.
Those who reason at the next level, the maintaining norms schema, believe they have a moral obligation to
maintain social order. They are concerned with following rules and laws and making sure that regulations apply
to everyone. These thinkers believe that there is a clear hierarchy with carefully defined roles (e.g.,
bosses–subordinates, teachers–students, officers–enlisted personnel). The postconventional schema is the
most advanced level of moral reasoning. Thinking at this level is not limited to one ethical approach, as
Kohlberg argued, but encompasses many different philosophical traditions. Postconventional individuals
believe that moral obligations are to be based on shared ideals, should not favor some people at the expense
of others, and are open to scrutiny (testing and examination). Such thinkers reason like moral
philosophers, looking behind societal norms to determine whether they serve moral purposes.
Rest developed the Defining Issues Test (DIT) to measure moral development. Subjects taking the DIT (and its
successor, the DIT-2) respond to ethical scenarios and then choose statements that best reflect the reasoning
they used to come up with their choices. These statements, which correspond to the three levels of moral
reasoning, are then scored. In the best-known dilemma, Heinz’s wife is dying of cancer and needs a drug he
cannot afford to buy. He must decide whether to steal the drug to save her life.
Hundreds of studies using the DIT reveal that moral reasoning generally increases with age and education.19
Undergraduate and graduate students benefit from their educational experiences in general and ethical
coursework in particular. Discussing ethical dilemmas is a particularly effective way to stimulate higher-level
moral reasoning. In addition, moral development is a universal concept, crossing cultural boundaries.
Principled leaders can boost the moral judgment of a group by encouraging members to adopt more
sophisticated ethical schemas.20
Models of cognitive development provide important insights into the process of ethical decision making. First,
contextual variables play an important role in shaping ethical behavior. Most people look to others as well as to
rules and regulations when making ethical determinations. They are more likely to make wise moral judgments
if coworkers and supervisors encourage and model ethical behavior. As leaders, we need to build ethical
environments. (We’ll take a closer look at the formation of ethical groups and organizations in Chapters 9 and
10.) Second, education fosters moral reasoning. Pursuing a bachelor’s, master’s, or doctoral degree can
promote your moral development. As part of your education, focus as much attention as you can on ethics
(e.g., take ethics courses, discuss ethical issues in groups and classes, reflect on the ethical challenges you
experience in internships). Third, a broader perspective is better. Consider the needs and viewpoints of others
outside your immediate group or organization; determine what is good for the local area, the larger society,
and the global community. Fourth, moral principles produce superior solutions. The best ethical thinkers base
their choices on widely accepted ethical guidelines. Do the same by drawing on important ethical approaches
such as utilitarianism, the categorical imperative, altruism, the ethic of care, and justice-as-fairness theory.
Ethical Blind Spots
Harvard professor Max Bazerman and his colleagues believe that unethical choices are often the result of
unconscious distortions. These ethical blind spots cause us to participate in or approve of behaviors we would
normally condemn. Significant biases include the following:21
Overestimating our ethicality. Studies consistently demonstrate that, when it comes to ethics, we have an
inflated opinion of ourselves. We boldly predict, for example, that we will do the right thing when faced with an
ethical dilemma. Unfortunately, we often fall well short of our predictions. This was illustrated by a study of
female college students who were asked how they would respond to inappropriate job interview questions like
whether they had a boyfriend or whether they think it is appropriate for women to wear a bra to work. Sixty to
seventy percent of the participants said they would refuse to answer these questions, challenge the
interviewer, or tell him that such queries were inappropriate. However, when a male interviewer actually asked
them the offensive questions, none refused to answer. At the end of the session, only a few participants asked
the interviewer why he had posed these queries.22 (The Leadership at the Movies case found on the Student
Study site introduces a leader who had an overly high opinion of his ethical strength.)
Our belief in our inherent goodness may blind us to potential conflicts of interest that can undermine our
objectivity and influence our choices. Consider the case of Housing and Urban Development (HUD) Secretary
Ben Carson, for example. HUD granted a contract to his daughter-in-law’s consulting firm. The Secretary
allowed his son on a HUD housing tour despite the fact that his son had business dealings with some of those
invited to meet with HUD officials. Carson defended his family’s involvement, saying: “My family, or people with
relationships with my family, have never influenced any decision at HUD.”23
Forgiving our own unethical behavior. Driven by the desire to be moral and behave ethically, we feel a sense of
psychological tension called cognitive dissonance when we fall short of our ethical standards (e.g., lying when
we believe that we are honest). Our “want self” (our desire for status, money, etc.) overcomes our “should self”
(who we think we ought to be ethically). To relieve the distress generated when our actions and self-images
don’t match, we either change our behavior or excuse what we’ve done. We may convince ourselves that the
objectionable behavior was really morally permissible—see the discussion of moral disengagement in Chapter
2. We blame the boss or claim that we were “just following orders” or that “everyone else is doing it.” The
“everybody is doing it” excuse was used to justify the use of steroids and other performance enhancing drugs
in major league baseball and professional cycling. We may also become “revisionist historians.” Using
selective recall, we remember events in a way that supports our decisions. We recollect the times we stood up
to an unjust boss or told the truth; we forget the times we caved in to pressure from a supervisor or lied to make
a sale or to get a job.
In-group favoritism. Doing favors for people we know who share our nationality, neighborhood, religion, social
class, or alma mater seems harmless. We may ask our neighbor to hire our son or daughter, for example, or
recommend a sorority sister for an overseas program. Trouble comes because, when those in power give
resources to members of their in-groups, they discriminate against those who are different from them.
Caucasian loan officers who relax lending standards for white applicants may end up refusing loans to
better-qualified black applicants, thereby hurting the bank’s bottom line. In-group favoritism can also prompt
us to excuse others’ unethical behavior. For example, when basketball players on our team knock opponents
to the court, they are playing “hard.” When players on the other team knock our team members to the floor,
they are playing “dirty.” We are particularly willing to forgive others’ shortcomings when we benefit from the choices
they have made. Many bank and hedge fund managers funneled money into Bernie Madoff’s fraudulent
investment fund, the biggest swindle in history, even though the returns he promised were statistically
impossible. They likely ignored the danger signs because they were earning generous fees from Madoff.
Implicit prejudice. Implicit prejudice is different from visible or explicit forms of prejudice like racism or sexism.
Individuals are not generally aware of these biases, which are based on our tendency to associate things that
generally go together, like thunder and rain, or wealthy people and luxury cars. However, these associations
are not always accurate. Thunder doesn’t always bring rain and not all high-income individuals drive expensive
vehicles. Unconscious biases can undermine ethical decision making. Take hiring decisions, for instance.
Personnel managers are likely to exclude qualified applicants if they assume that someone with a physical
disability is also mentally challenged or that women can’t fill traditional “masculine” jobs.
Judging based on outcomes, not the process. Two leaders can follow the same process when making a
decision, but we typically judge them differently based on their results. When the decision turns out well, we
consider the leader successful. If the outcome is poor, we believe the leader is a failure. Nevertheless, just
because a poorly made decision had positive consequences in one case doesn’t mean that following the same
process will have positive results the next time. In fact, poor decision-making procedures eventually produce
bad (unethical) results. Officials at the Peanut Corporation of America repeatedly shipped tainted peanut
products. Few, if any, people got sick. However, the last shipment of contaminated products eventually caused
eight deaths and sickened thousands more.24 The CEO was later sentenced to prison.
Bazerman and his colleagues argue that our good intentions and our determination to act ethically won’t be
enough to overcome these biases because we aren’t aware of them. Instead, we need to acknowledge these
blind spots. Admit that you aren’t as ethical and unbiased as you believe, for instance, and that you forgive
your ethical misbehavior and that of other group members. Then take steps to combat these ethical distortions.
Publicly commit yourself to an ethical course of action ahead of time so your “should” self doesn’t get
overwhelmed by your “want” self in the heat of the moment. Focus on the moral principles involved in a choice,
not on the immediate benefit you might receive. Put yourself in environments that challenge your implicit
stereotypes. Include a wider variety of people in the decision-making process; consider a wider variety of job
applicants. Audit your organization to determine if it is trapped by in-group biases and eliminate programs that
perpetuate the hiring and promotion of those of similar backgrounds. Step behind the veil of ignorance (see
Chapter 5) to make more equitable choices. Evaluate the quality of the decision-making process, not the
outcome; don’t condemn those who make good quality decisions only to see them turn out badly.
Component 3: Moral Focus (Motivation)
After concluding what course of action is best, decision makers must be focused (motivated to follow through)
on their choices. Moral values often conflict with other significant values. For instance, an accountant who
wants to blow the whistle on illegal accounting practices at her firm must balance her desire to do the right
thing against her desire to keep her job, provide income for her family, and maintain relationships with her
fellow workers. She will report the accounting abuses to outside authorities only if moral considerations take
precedence over these competing priorities. It’s no wonder, then, that there is often a significant gap between
knowing and doing, between moral judgment (Component 2) and moral action (Components 3 and 4).25
Developing moral potency is one way to address the moral thought-action gap.26 Moral potency is a
psychological state or resource made up of moral ownership, moral courage, and moral efficacy. Moral
ownership occurs when individuals believe that their teams, organizations, and communities are extensions of
themselves. This sense of ownership increases their obligation to behave in an ethical manner. Thus, a project
manager who identifies strongly with his team will view status reports as a symbol of his own ethicality. He has
a strong motivation to see that such reports are accurate. Moral courage provides the impetus to act despite
outside pressures and adversity. Moral efficacy is a leader’s belief or confidence that he or she has the ability
to carry out the plan of action. For instance, a manager may determine that a high-performing salesperson
should be fired for submitting bogus expense reports. Yet, she won’t take action if she doesn’t think she has
the support of top company leaders or if she believes she doesn’t have the necessary skills to confront the
individual.
As a capability or capacity, moral potency can be developed, in ourselves as well as in our followers. (The
“Focus on Follower Ethics” box provides a closer look at how subordinates can exercise their moral potency.)
To foster moral ownership, clarify the ethical responsibilities associated with each organizational role. Identify
and commit to professional codes and values while encouraging others to do the same. To develop moral
courage, look to courageous leaders as role models and seek to be a courageous role model yourself (see
Chapter 3). Build in organizational cues that encourage courageous actions. Cadets at the U.S. Military Academy
at West Point, for instance, must sign an honor code that states they “will not lie, cheat or steal,” but also that
they will not “tolerate those who do.” The second part of the honor code requires cadets to display moral
courage if they witness unethical behavior. They are disciplined or removed from the Academy if they fail to do
so.
To develop moral efficacy, take on increasingly difficult ethical challenges and then debrief them to evaluate
your responses. Learn from how others have handled ethical dilemmas. Prepare for ethical challenges through
simulations, cases, discussions, and training. The moral efficacy of your followers is directly tied to your moral
efficacy as a leader. If you are confident that you can effectively deal with moral problems, your followers will
also be confident they can handle such situations.
Psychologists report that self-interest and hypocrisy undermine moral motivation.27 Sometimes individuals
genuinely want to do the right thing, but their integrity is “overpowered” when they discover that they will have
to pay a personal cost for acting in an ethical manner. Others never intend to follow an ethical course of action
but engage in moral hypocrisy instead. These decision makers “want to appear moral while, if possible,
avoiding the cost of actually being moral.”28 In experimental settings, they say that assignments should be
distributed fairly but then assign themselves the most desirable tasks while giving less desirable chores to
others. Both self-interest and hypocrisy encourage leaders to set their moral principles aside. For instance,
corporate executives might declare that lower-level employees deserve higher wages. However, these
executives may not pay employees more if it means that they will earn less as a result.
Rewards play an important role in ethical follow-through. People are prone to give ethical values top priority
when rewarded through raises, promotions, public recognition, and other means for doing so. Conversely,
moral motivation drops when the reward system reinforces unethical behavior.29 Unfortunately, misplaced
rewards are all too common, as in the case of electronics retailers who reward employees for selling expensive
extended warranties on new products. Such warranties are generally a bad deal for consumers.
Emotions also play a part in moral motivation.30 Researchers report that positive emotions such as joy and
happiness encourage people to live out their moral choices and to help others. Depression, on the other hand,
lowers motivation, and jealousy, rage, and envy contribute to lying, revenge, stealing, and other antisocial
behaviors. Moral emotions are of particular importance to ethical follow-through. Moral emotions are part of our
makeup as humans.31 These feelings are triggered even when we do not have a personal stake in an event.
For example, we may feel sympathy when we see a picture of a refugee living in a squalid camp or contempt
for a public official charged with domestic violence.
Anger, disgust, and contempt are other-condemning moral emotions. They are elicited by unfairness, betrayal,
immorality, cruelty, poor performance, and status differences. Anger can motivate us to redress injustices like
racism, oppression, and poverty. For example, driven by moral anger, some citizens traveled to the
U.S.–Mexican border to try to prevent authorities from separating immigrant children from their families.
Disgust encourages us to set up rewards and punishments to deter inappropriate behaviors, like sexual
deviancy or mistreating animals.32 Contempt generally causes us to step back from others and to feel less
compassion for them.
Shame, embarrassment, and guilt are self-conscious emotions that encourage us to obey the rules and uphold
the social order. Shame and embarrassment are elicited when we violate norms and social conventions,
present the wrong image to others, and fail to live up to moral guidelines. Shame produces a negative
evaluation of the self for being flawed, such as when we are disappointed in ourselves for not helping a
coworker struggling with a computer problem. Embarrassment produces a negative evaluation of a specific
act, such as when we violate cultural norms by failing to present gifts when meeting businesspeople abroad.
Shame and embarrassment can keep us from engaging in further damaging behavior and may drive us to
withdraw from social contact. Guilt arises in relationships when one party causes harm or distress to the other
through lying, betrayal, physical harm, or other behavior. We then want to right the wrong, to help others, and
to treat them well.
Sympathy and compassion are other-suffering emotions. They are activated when we perceive suffering or
sorrow in our fellow human beings. We are most likely to feel compassion for our family, friends, and
community, but sympathy typically extends toward total strangers as well. Such feelings encourage us to
comfort, help, and alleviate the pain of others by, for example, setting up a GoFundMe account for a seriously
ill friend, volunteering at the local food bank, and protesting cuts in social service programs. (See our earlier
discussion of altruism in Chapter 5.)
Gratitude, awe, and elevation are other-praising (positive) emotions that open us up to new opportunities and
relationships. Gratitude is a form of reciprocal altruism, coming into play when someone has done something
on our behalf. We then want to repay their generosity and kindness. Those feeling gratitude generally have a
greater sense of personal and spiritual well-being and are more likely to become engaged in their
communities. Awe