Showing posts with label Communication Skills. Show all posts

Wednesday, February 14, 2024

Progress Report (Hindi to English Learning)

Date | Total | Right | Wrong
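No rows are reproduced here, but as a rough illustration of how a Date/Total/Right/Wrong progress table like this could be tallied from a log of quiz attempts, here is a minimal Python sketch. The attempt-log format and field names are assumptions made for illustration; this is not the blog app's actual code.

```python
# Hypothetical sketch (not the blog app's actual code): tally a
# Date / Total / Right / Wrong progress table from a log of attempts.
from collections import defaultdict

def progress_report(attempts):
    """attempts: iterable of dicts such as {"date": "2024-02-14", "correct": True}."""
    rows = defaultdict(lambda: {"Total": 0, "Right": 0, "Wrong": 0})
    for attempt in attempts:
        row = rows[attempt["date"]]
        row["Total"] += 1
        row["Right" if attempt["correct"] else "Wrong"] += 1
    return dict(sorted(rows.items()))

if __name__ == "__main__":
    log = [
        {"date": "2024-02-14", "correct": True},
        {"date": "2024-02-14", "correct": False},
        {"date": "2024-02-14", "correct": True},
    ]
    for date, row in progress_report(log).items():
        print(date, row["Total"], row["Right"], row["Wrong"])
```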

Thursday, February 8, 2024

Hindi to English Learning (Version 2)

Username: 

Password: 



Try: ashish/monkey

Choose a chapter:

Translate these sentences from Hindi to English:

Sentence:
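To make the exercise flow above concrete, here is a minimal, hypothetical sketch of how a chapter's sentences could be stored and a typed translation checked. The sample data, structure, and normalization rule are assumptions for illustration only and are not taken from the actual app.

```python
# Hypothetical sketch (not the actual app): pick a chapter, show a Hindi
# sentence, and check the user's English translation against stored answers.
CHAPTERS = {
    "Chapter 1": [
        # (Hindi sentence, accepted English translations) -- sample data only
        ("मैं स्कूल जाता हूँ", ["i go to school"]),
        ("वह किताब पढ़ रही है", ["she is reading a book", "she is reading the book"]),
    ],
}

def normalize(text):
    """Lowercase and strip punctuation/extra spaces so minor differences don't count as wrong."""
    return " ".join("".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).split())

def check_answer(chapter, index, user_answer):
    """Return True if the user's translation matches one of the accepted answers."""
    _, accepted = CHAPTERS[chapter][index]
    return normalize(user_answer) in {normalize(a) for a in accepted}

if __name__ == "__main__":
    print(check_answer("Chapter 1", 0, "I go to school."))   # True
    print(check_answer("Chapter 1", 1, "He reads a book."))   # False
```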




Tuesday, February 6, 2024

Hindi to English Learning

Username: 

Password: 



Translate these sentences from Hindi to English

Sentence:




Saturday, February 4, 2023

Index of Guest Interviews

  1. Moni Singh (YOB: 1990, Cook) Year: 2023 (Feb)
  2. Shalu Jain (YOB: 1995, 3D Visualizer) Year: 2023 (Feb)
  3. Tenzin Kunsel (1996, Chef and Co-Manager at 'Tenzin Tibet Kitchen') Year: 2021 (Jul)
  4. Kusang Lhamo (1992, Owner of 'Tenzin Tibet Kitchen') Year: 2021 (Jun)
  5. Anil Dahiya (1986, Software Engineer) Year: 2021
  6. Alisha Gera (1996, Software Engineer, Drawing Artist, Entrepreneur (Handprints Club)) Year: 2020
  7. Priyansha Singh (1992, Technology Analyst) Year: 2020
  8. Salil Bansal (1992, Technology Analyst) Year: 2020
  9. Prateek Kohli (1996, Founder and CEO (Gratitude Systems)) Year: 2020
  10. Shambhavi Choudhary (1990, Entrepreneur) Year: 2020
  11. Sadhana Jain (1965, Rental Property Owner) Year: 2020
  12. Divjot Singh (1988, Senior Data Scientist) Year: 2020
  13. Yajuvendra Gupta (1976, IT Professional) Year: 2020
  14. Ahana Mandal (1991, IT Professional) Year: 2019
  15. Akshita Taneja (1995, Data Science Engineer) Year: 2019
  16. Duhita Dey (1994, Patent Research Analyst) Year: 2019
  17. Anirudh Sharma (1996, Software Engineer) Year: 2019
  18. Akriti Chauhan (1994, Banker) Year: 2019
  19. Diljot Kaur (1988, Human resources professional) Year: 2019
  20. Neha Pal (1992, GIS Developer) Year: 2019
  21. Peeyush Khosla (1989, Software Engineer) Year: 2019
  22. Anil Dahiya (1986, Software Engineer) Year: 2019
  23. Jayeta Sharma (1993, CRM Consultant) Year: 2019
  24. Manish Chauhan (1987, IT Professional / Consultant) Year: 2019
  25. Bhupendra Dixit (1987, Software Engineer) Year: 2018
  26. Akash Saxena (1995, Software Engineer) Year: 2018
  27. Mayank Singh Bisht (1991, Business Consultant and Cricket coach/player) Year: 2018
  28. Anmol Thukral (1995, IT Professional) Year: 2018
  29. Magdalene (1990, Product Management) Year: 2018
  30. Sarthak Bajaj (1992, IT Professional) Year: 2018
  31. Lovanya Chaudhary (1991, Software Engineer) Year: 2018
  32. Himanshu Panwar (1994, Software Engineer) Year: 2018
  33. Deepika Thakur (1994, Programmer) Year: 2018
  34. Srishti Jain (1994, Business Analyst) Year: 2018
  35. Prity Singh (1990, Software Professional) Year: 2018
  36. Shubham Jain (1991, Marine Engineer) Year: 2018
  37. Ankur Singh (1991, Marine Engineer) Year: 2018
  38. Bhimsen Naranjan Ahuja (1947, Civil engineer (Retired)) Year: 2018
  39. Sneha Kiran (1977, Application Consultant, Owner at S&Y soaps) Year: 2018
  40. Geeta Sharma (1993, Software Engineer) Year: 2018
  41. Rohit Sud (1990, Software Professional) Year: 2018
  42. Rahul Mehra (1991, Data Scientist) Year: 2018
  43. Gurarchi Kaur (1991, Law Student and Astrologer) Year: 2018

My Own Answers

  1. 2023
Even exceptionally successful people are not immune to tough times. For example, Steve Jobs was thrown out of the company he founded; Bill Gates was deposed in the United States v. Microsoft antitrust case, in which the trial court ruled that Microsoft should be broken into two separate companies (an order later overturned on appeal); and Elon Musk was removed as chairman of the company he founded and barred from holding that position at Tesla for three years after he posted a tweet that misled investors.

If you have a story to share, please write to us at "ashishjainblogger@gmail.com".

Steve Jobs was forced out of Apple in 1985 after a long power struggle with the company's board and its then-CEO John Sculley. In 1983, Jobs had lured John Sculley away from Pepsi-Cola to serve as Apple's CEO, asking, "Do you want to spend the rest of your life selling sugared water, or do you want a chance to change the world?"

Bill Gates was deposed in the "United States v. Microsoft Corp." antitrust case, in which Microsoft Corporation was accused of holding a monopoly and engaging in anti-competitive practices contrary to sections 1 and 2 of the Sherman Antitrust Act. The trial court ordered that Microsoft be split into two smaller companies (one making operating systems and another for software products such as the web browser), but that breakup order was overturned on appeal on June 28, 2001, and the case was later settled without a breakup.

Elon Musk had to step down as chairman of Tesla, his electric-car company, and pay a $20 million fine to settle charges brought against him by the Securities and Exchange Commission, according to CNN. The SEC said it was suing Musk for misleading investors, an allegation stemming from a controversial tweet he sent out in August. Musk had sent out an early morning tweet on August 7, 2018, declaring he was taking Tesla private at $420 a share. "Am considering taking Tesla private at $420. Funding secured," Musk tweeted, short and simple.
Tags: Psychology,Communication Skills,

Saturday, January 21, 2023

What's focal is causal (And a story about false confessions)

It’s no wonder that we assign elevated import to factors that have our attention. We also assign them causality. Therefore, directed attention gives focal elements a specific kind of initial weight in any deliberation. It gives them standing as causes, which in turn gives them standing as answers to that most essential of human questions: Why? Because we typically allot special attention to the true causes around us, if we see ourselves giving such attention to some factor, we become more likely to think of it as a cause.

Take monetary payments. Because the amount of money is so salient in the exchanges—“I’ll pay you x when you do y”—we tend to infer that the payment spurred the act, when, in fact, it was often some other, less visible factor. Economists, in particular, are prone to this bias because the monetary aspects of a situation dominate their attentions and analyses. Thus, when Harvard Business School economist Felix Oberholzer-Gee approached people waiting in line at several different venues and offered them money to let him cut in, he recognized that a purely economics-based model would predict that the more cash he offered, the more people would agree to the exchange. And that’s what he found: half of everyone offered $1 let him cut in line; 65 percent did so if offered $3, and acceptance rates jumped to 75 percent and 76 percent when he proposed the larger sums of $5 and $10. According to classical economic theory, which enshrines financial self-interest as the primary cause of human behavior, those greater incentives convinced people to take the deal for their own fiscal betterment. How could any observer of the transaction doubt it? The highly visible incentives caused the obtained effects due to their direct links to personal monetary gain, right? Nothing surprising occurred here, right? Well, right, except for an additional finding that challenges all this thinking: almost no one took the money.

“Gee,” Oberholzer-Gee must have said to himself, “that’s odd.” Indeed, a number of oddities appeared in his data, at least for adherents to the idea that the ultimate cause of human action is one’s own financial interest. For instance, although bigger cash incentives upped compliance with the line cutter’s wish, they didn’t increase acceptance of the payment; richer deals increasingly caused people to sacrifice their places in line but without taking the greater compensation. To explain his findings, Oberholzer-Gee stepped away from a consideration of salient economic factors and toward a hidden factor: an obligation people feel to help those in need. The obligation comes from the helping norm, which behavioral scientists sometimes call the norm of social responsibility. It states that we should aid those who need assistance in proportion to their need. Several decades’ worth of research shows that, in general, the more someone needs our help, the more obligated we feel to provide it, the more guilty we feel if we don’t provide it, and the more likely we are to provide it.

When viewed through this lens, the puzzling findings make perfect sense. The payment offers stimulated compliance because they alerted recipients to the amount of need present in the situation. This account explains why larger financial inducements increased consent even though most people weren’t willing to pocket them: more money signaled a stronger need on the part of the requester. (“If this guy is willing to pay a lot to jump ahead of me, he must really need to get to the front fast.”)

It would be naïve to assert that fiscal factors are less than potent determinants of human action. Still, I’d argue that merely because they are so visible (and, therefore, prominent in attention), they are often less determining than they seem. Conversely, there are many other factors—social obligations, personal values, moral standards—that, merely because they are not readily observable, are often more determining than they seem. Elements such as money that attract notice within human exchanges don’t just appear more important, they also appear more causal. And presumed causality, especially when acquired through channeled attention, is a big deal for creating influence—big enough to account for patterns of human conduct that can range from perplexing to alarming.

Taking a Chance

In the first of these categories, consider the most famous case of product tampering of all time. In the autumn of 1982, someone went into supermarkets and drug stores in the Chicago area, injected packaged capsules of Tylenol with cyanide, and then returned the containers to the store shelves, where they were later purchased. Several reasons exist for the incident’s long-standing notoriety. First, seven Chicago residents died from ingesting the poison—four of them family members who had swallowed capsules from the same Tylenol container. Second, their killer has never been found, giving the crime an uncomfortably memorable lack of closure. But, for the most part, the case lives on today not so much for these regrettable reasons as for a pair of favorable ones: it led to the passage of important product safety legislation and to pharmaceutical industry shifts to tamperproof seals and packaging that have reduced risks to consumers. In addition—owing to the rapid, customer-centered steps taken by Tylenol’s maker, Johnson & Johnson, which recalled thirty-one million of the capsules from all stores—it produced a textbook approach to proper corporate crisis management that is still considered the gold standard. (The recommended approach urges companies to act without hesitation to fully inform and protect the public, even at substantial expense to its own immediate economic interests.)

Aside from these high-profile features, another element of the case has gone almost entirely unnoticed but strikes me as remarkable. Early on, after it had been determined that the deaths were linked to bottles of Tylenol but before the extent of the tampering had been established, Johnson & Johnson issued nationwide warnings intended to prevent further harm. One widely communicated sort of warning alerted consumers to the production lot numbers on the affected bottles—numbers that identified where and when a particular batch of capsules had been manufactured. Because they were the first to be identified, two of the numbers received the most such publicity: lots 2,880 and 1,910.

Immediately, and bewilderingly, US residents of states that ran lotteries began playing those two numbers at unprecedented rates. In three states, Rhode Island, New Hampshire, and Pennsylvania, officials announced that they had to halt wagers on the numbers because betting on them shot above “maximum liability levels.”

To know how best to account for this set of events, let’s review the characteristics of the numbers. First, they were ordinary; not inherently memorable in any way. Second, they were associated with grievous misfortune. Moreover, they were intensely connected in American minds to imagery of poison-fed death. Yet many thousands of those minds responded to something about the numbers that lifted expectations of lottery success. What? Our previous analysis offers one answer: Because of all the publicity surrounding them, they had become focal in attention; and what is focal is seen to have causal properties—to have the ability to make events occur.

It turned out that every one of the minds that thought those numbers would provide an advantage over chance was proved wrong by the subsequent lottery results. But I doubt that the losses taught those minds to avoid, in any general way, similar future errors. The tendency to presume that what is focal is causal holds sway too deeply, too automatically, and over too many types of human judgment.

Taking a Life

Imagine that you are in a café enjoying a cup of coffee. At the table directly in front of you, a man and a woman are deciding which movie to see that evening. After a few minutes, they settle on one of the options and set off to the theater. As they leave, you notice that one of your friends had been sitting at the table behind them. Your friend sees you, joins you, and remarks on the couple’s movie conversation, saying, “It’s always just one person who drives the decision in those kinds of debates, isn’t it?” You laugh and nod because you noticed that too: although the man was trying to be diplomatic about it, he clearly was the one who determined the couple’s movie choice. Your amusement disappears, though, when your friend observes, “She sounded sweet, but she just pushed until she got her way.”

Dr. Shelley Taylor, a social psychologist at the University of California at Los Angeles (UCLA), knows why you and your friend could have heard the same conversation but come to opposite judgments about who determined the end result. It was a small accident of seating arrangements: you were positioned to observe the exchange over the shoulder of the woman, making the man more visible and salient, while your friend had the reverse point of view.

Taylor and her colleagues conducted a series of experiments in which observers watched and listened to conversations that had been scripted carefully so that neither discussion partner contributed more than the other. Some observers watched from a perspective that allowed them to see the face of one of the parties over the shoulder of the other, while other observers saw both faces from the side, equally. All the observers were then asked to judge who had more influence in the discussion, based on tone, content, and direction. The outcomes were always the same: whoever’s face was more visible was judged to be more causal.

Taylor told me a funny but nonetheless enlightening story about how she first became convinced of the power of the what’s-focal-is-presumed-causal phenomenon. In setting up the initial study, she arranged for a pair of research assistants to rehearse a conversation in which it was critical for each discussion partner to contribute about equally. Standing alternately behind first one and then the other person, she found herself criticizing whomever she was facing for “dominating the exchange.” Finally, after several such critiques, two of Taylor’s colleagues, who were watching the conversation partners from the side, stopped her in exasperation, asserting that, to them, neither partner seemed to be dominating the conversation. Taylor reports that she knew then, without a single piece of data yet collected, that her experiment would be a success because the rehearsal had already produced the predicted effect—in her.

No matter what they tried, the researchers couldn’t stop observers from presuming that the causal agent in the interaction they’d witnessed was the one whose face was most visible to them. They were astonished to see it appear in “practically unmovable” and “automatic” form, even when the conversation topic was personally important to the observers; even when the observers were distracted by the researchers; even when the observers experienced a long delay before judging the discussants; and even when the observers expected to have to communicate their judgments to other people.
What’s more, not only did this pattern emerge whether the judges were male or female, but also it appeared whether the conversations were viewed in person or on videotape. When I asked Taylor about this last variation, she recalled that the taping was done for reasons of experimental control. By recording the same discussion from different camera angles, she could ensure that everything about the conversation itself would be identical every time she showed it.

When her results were first published, that videotaped interactions could produce the what’s-focal-is-presumed-causal effect was not viewed as an important facet of Taylor’s findings. But circumstances have now changed, because certain kinds of videotaped interactions are used frequently to help determine the guilt or innocence of suspects in major crimes. To register how and why this is so, it is necessary to take an instructive detour and consider a frightening component of all highly developed criminal justice systems: the ability of police interrogators to generate confessions from individuals who did not commit the crime.

Extracted false confessions are unsettling for a pair of reasons. The first is societal and concerns the miscarriages of justice and the affronts to fairness that such manufactured confessions create within any culture. The second is more personal, involving the possibility that we ourselves might be induced to confess by the tactics of interrogators convinced, mistakenly, of our guilt. Although for most of us such a possibility is remote, it is likely to be more real than we think. The idea that no innocent person could be persuaded to confess to a crime, especially a serious one, is wrong. It happens with disquieting frequency. Even though the confessions obtained in the great majority of police interrogations are in fact true and are corroborated by other evidence, legal scholars have uncovered a distressingly large number of elicited false confessions. Indeed, the confessions have often been shown later to be demonstrably false by evidence such as physical traces (DNA or fingerprint samples), newly obtained information (documentation of the suspect’s presence hundreds of miles away from the crime), and even proof that no crime occurred (when a presumed murder victim is discovered alive and well).

The same legal scholars have proposed a long list of factors that can help explain persuaded false confessions. Two strike me as particularly potent. I can relate to the first as an ordinary citizen. If I were asked by authorities to come to the police station to help them resolve the suspicious death of one of my neighbors—perhaps one I’d argued with in the past—I’d be glad to oblige. It would be the civically responsible thing to do. And if during the consequent questioning I began to feel that I was a suspect in police eyes, I might continue on anyway without demanding to be represented by a lawyer because, as an innocent man, I’d be confident that my interrogators would recognize the truth in what I told them. Plus, I wouldn’t want to confirm any doubts they harbored about my innocence by seeming to hide behind a lawyer; instead, I’d want to walk away from the session with all those doubts dismissed.

As a person of interest, my understandable inclinations—to help the police and then to convince them against my involvement—could lead me to ruin, though, for the other potent reason induced false confessions occur.
In this instance, it’s a reason I can relate to as a student of social influence: by deciding to persist through the interview on my own, I might subject myself to a set of techniques perfected by interrogators over centuries to get confessions from suspects. Some of the techniques are devious and have been shown by research to increase the likelihood of false confessions: lying about the existence of incriminating fingerprints or eyewitness testimony; pressing suspects to repeatedly imagine committing the crime; and putting them into a brain-clouded psychological state through sleep deprivation and relentless, exhaustive questioning. Defenders of such tactics insist that they are designed to extract the truth. An accompanying, complicating truth, however, is that sometimes they just extract confessions that are verifiably untrue.

A Story About False Confessions

Eighteen-year-old Peter Reilly’s life changed forever one night in 1973 when he returned home from a youth meeting at a local church to find his mother on the floor, dying in a pool of blood. Though shaken and reeling from the sight, he had the presence of mind to phone for help immediately. By the time aid arrived, however, Barbara Gibbons had died. An examination of the body revealed that she had been murdered savagely: her throat had been cut, three ribs had been broken, and the thigh bones of both legs had been fractured.

At five foot seven and 121 pounds, and with not a speck of blood on his body, clothes, or shoes, Peter Reilly seemed an unlikely killer. Yet from the start, when they found him staring blankly outside the room where his mother lay dead, the police suspected that Peter had murdered her. Some people in their Connecticut town laughed at her unconventional ways, but many others were not amused, describing her as unpredictable, volatile, belligerent, and unbalanced. She appeared to take delight in irritating the people she met—men especially—belittling, confronting, and challenging them. By any measure, Barbara Gibbons was a difficult woman to get along with. So it didn’t seem unreasonable to police officials that Peter, fed up with his mother’s constant antagonisms, would “fly off the handle” and murder her in a spasm of rage.

At the scene and even later when taken in for questioning, Peter waived his right to an attorney, thinking that if he told the truth, he would be believed and released in short order. That was a serious miscalculation, as he was not prepared, legally or psychologically, for the persuasive assault he would face. Over a period of sixteen hours, he was interrogated by a rotating team of four police officers, including a polygraph operator who informed Peter that, according to the lie detector, he had killed his mother. That exchange, as recorded in the interrogation’s transcript, left little question of the operator’s certainty in the matter:

Peter: Does that actually read my brain?
Polygraph operator: Definitely. Definitely.
Peter: Would it definitely be me? Could it have been someone else?
Polygraph operator: No way. Not from these reactions.

Actually, the results of polygraph examinations are far from infallible, even in the hands of experts. In fact, because of their unreliability, they are banned as evidence in the courts of many states and countries.

The chief interrogator then told Peter, falsely, that physical evidence had been obtained proving his guilt. He also suggested to the boy how he could have done it without remembering the event: he had become furious with his mother and erupted into a murderous fit during which he slaughtered her, and now he had repressed the horrible memory. It was their job, Peter’s and his, to “dig, dig, dig” at the boy’s subconscious until the memory surfaced. Dig, dig, dig they did, exploring every way to bring back that memory, until Peter began to recall—dimly at first but then more vividly—slashing his mother’s throat and stomping on her body. By the time the interrogation was over, these imaginations had become reality for both the interrogators and Peter:

Interrogator: But you recall cutting her throat with a straight razor.
Peter: It’s hard to say. I think I recall doing it. I mean, I imagine myself doing it. It’s coming out of the back of my head.
Interrogator: How about her legs? What kind of vision do we get there? Can you remember stomping her legs?
Peter: You say it, then I imagine I’m doing it.
Interrogator: You’re not imagining anything. I think the truth is starting to come out. You want it out.
Peter: I know...

Analyzing and reanalyzing these images convinced Peter that they betrayed his guilt. Along with his interrogators, who pressured him to break through his “mental block,” the teenager pieced together from the scenes in his head an account of his actions that fit the details he’d been given of the murder. Finally, a little more than twenty-four hours after the grisly crime, though still uncertain of many specifics, Peter Reilly confessed to it in a written, signed statement. That statement conformed closely to the explanation that had been proposed by his interrogators and that he had come to accept as accurate, even though he believed none of it at the outset of his questioning and even though, as events demonstrated later, none of it was true.

When Peter awoke in a jail cell the next day, with the awful fatigue and the persuasive onslaught of the interrogation room gone, he no longer believed his confession. But it was too late to retract it convincingly. To virtually every official in the criminal justice system, it remained compelling evidence of his guilt: a judge rejected a motion to suppress it at Peter’s trial, ruling that it had been made voluntarily; the police were so satisfied that it incriminated Peter that they stopped considering other suspects; the prosecuting attorneys made it the centerpiece of their case; and the jury that ultimately convicted Peter of murder relied on it heavily in its deliberations. To a one, these individuals did not believe that a normal person could be made to confess falsely to a crime without the use of threats, violence, or torture. And to a one, they were mistaken: Two years later, when the chief prosecutor died, evidence was found hidden in his files that placed Peter at a time and in a location on the night of the crime that established his innocence and that led to the reversal of his conviction, the dismissal of all charges, and his release from prison.
[Photo: If you admit, we don’t acquit. Peter Reilly surrounded by deputy sheriffs taking him to prison after his conviction.]

There is an old saying that confession is good for the soul. But for criminal suspects, it is bad for just about everything else. Those who confess are much more likely to be charged, tried, convicted, and sentenced to harsh punishment. As the great American jurist Daniel Webster recognized in 1830, “There is no refuge from confession but suicide; and suicide is a confession.” A century and a half later, renowned US Supreme Court Justice William Brennan expanded upon Webster’s assertion with a stunning observation about the criminal justice system: “the introduction of a confession makes other aspects of a trial in court superfluous; and the real trial, for all purposes, occurs as the confession is obtained.”

There is chilling evidence that Brennan was right. An analysis of 125 cases involving fabricated confessions found that suspects who first confessed but then renounced their statements and pled not guilty were still convicted at trial 81 percent of the time—yet these, recall, were all false confessions!

Peter Reilly suffered the same fate as the great majority of individuals persuaded to confess to crimes they didn’t commit, which raises a legitimate question: Why should we spotlight his confession over other more publicized and harrowing cases with the same outcome—for example, those in which multiple suspects were convinced to claim that, as a group, they had perpetrated a crime none of them had committed? Notably, it wasn’t anything that had occurred during his interrogation, trial, conviction, or subsequent legal battles. It surfaced at an event twenty years later where Peter, who had been employed on and off in various low-level sales jobs, was a speaker on a panel considering the causes and consequences of wrongfully obtained confessions and where it was described, not by Peter, but by a man sitting next to him with the ordinary name of Arthur Miller. This, though, was no ordinary Arthur Miller. It was the Arthur Miller, who some view as the greatest-ever American playwright, who wrote what some view as the greatest-ever American drama, Death of a Salesman, and who—if that isn’t enough to draw our notice—was married for five years to the woman some view as the greatest-ever American sex symbol, Marilyn Monroe.
[Photo: Life for a salesman. Arthur Miller and Peter Reilly, who had worked in various sales positions, twenty years after the murder.]

After being introduced to the audience by Peter as one of his key supporters, Miller explained his presence on the panel as due to a long-standing concern with “the business of confessions, in my life as well as in my plays.” During the period of anti-Communist fervor in the United States, in the 1950s, several of Miller’s friends and acquaintances were summoned to appear at hearings before congressional committees. There they were pushed in calculated questioning to confess to Communist Party affiliations as well as to knowing (and then revealing) the names of members of the party prominent in the entertainment world. Miller himself was subpoenaed by the US House Un-American Activities Committee (HUAC) and was blacklisted, fined, and denied a passport for failing to answer all the chairman’s questions. The role of confessions in Miller’s plays can be seen in The Crucible, the most frequently produced of all his works. Although set in 1692 during the Salem witchcraft trials, Miller wrote it allegorically to reflect the form of loaded questioning he witnessed in congressional hearings and that he later recognized in the Peter Reilly case.

Miller’s comments on the panel with Reilly were relatively brief. But they included an account of a meeting he had in New York with a Chinese woman named Nien Cheng. During Communist China’s Cultural Revolution of the 1960s and 1970s, which was intended to purge the country of all capitalistic elements, she was subjected to harsh interrogations designed to get her to confess to being an anti-Communist and a spy. With tear-rimmed eyes, Nien related to the playwright her deep feelings upon seeing, after her eventual release from prison, a production of The Crucible in her native country. At the time, she was sure that parts of the dialogue had been rewritten by its Chinese director to connect with national audiences, because the questions asked of the accused in the play “were exactly the same as the questions I had been asked by the Cultural Revolutionaries.” No American, she thought, could have known these precise wordings, phrasings, and sequencings. She was shocked to hear Miller reply that he had taken the questions from the record of the 1692 Salem witchcraft trials—and that they were the same as were deployed within the House Un-American Activities Committee hearings. Later, it was the uncanny match to those in the Reilly interrogation that prompted Miller to get involved in Peter’s defense.

A scary implication arises from Miller’s story. Certain remarkably similar and effective practices have been developed over many years that enable investigators, in all manner of places and for all manner of purposes, to wring statements of guilt from suspects—sometimes innocent ones. This recognition led Miller and legal commentators to recommend that all interrogations involving major crimes be videotaped. That way, these commentators have argued, people who see the recordings—prosecutors, jury members, judges—can assess for themselves whether the confession was gained improperly. And, indeed, video recording of interrogation sessions in serious criminal cases has been increasingly adopted around the globe for this reason. It’s a good idea in theory, but there’s a problem with it in practice: the point of view of the video camera is almost always behind the interrogator and onto the face of the suspect.
The legal issue of whether a confession had been made freely by the suspect or extracted improperly by an interrogator involves a judgment of causality—of who was responsible for the incriminating statement. As we know from the experiments of Professor Taylor, a camera angle arranged to record the face of one discussant over the shoulder of another biases that critical judgment toward the more visually salient of the two. We also know now—from the more recent experiments of social psychologist Daniel Lassiter—that such a camera angle aimed at a suspect during an interrogation leads observers of the recording to assign the suspect greater responsibility for a confession (and greater guilt). Moreover, as was the case when Taylor and her coworkers tried it, Lassiter and his coworkers found this outcome to be stubbornly persistent. In their studies, it surfaced regardless of whether the observers were men or women, college students or jury-eligible adults in their forties and fifties, exposed to the recording once or twice, intellectually deep or shallow, and previously informed or not about the potentially biasing impact of the camera angle. Perhaps most disturbingly, the identical pattern appeared whether the watchers were ordinary citizens, law enforcement personnel, or criminal court judges. Nothing could change the camera angle’s prejudicial impact—except changing the camera angle itself. The bias disappeared when the recording showed the interrogation and confession from the side, so that the suspect and questioner were equally focal. In fact, it was possible to reverse the bias by showing observers a recording of the identical interaction with the camera trained over the suspect’s shoulder onto the interrogator’s face; then, compared with the side-view judgments, the interrogator was perceived to have coerced the confession. Manifestly here, what’s focal seems causal. Thus, a potential dilemma exists for an innocent person—perhaps you— invited to a police station to help investigators solve a major crime. There is certainly nothing wrong with complying and providing that assistance; it’s what good citizens do. But matters would get more complicated if you began to sense that the session was designed not so much to obtain information from you as to obtain a possible confession from you. The standard recommendation of defense attorneys at this point would be to stop the proceedings and request a lawyer. That choice, though, has its risks. By terminating the session, you might not be able to give your questioners the facts they need to solve the crime quickly and to discount your involvement fully, which would allow you to dispel the specter of suspicion then and there. Being suspected of a serious crime can be a terrifying, nasty, lingering experience that might well be prolonged by the appearance of having something to conceal. But choosing to go on with the increasingly interrogation-like session includes perils of its own. You might be laying yourself open to tactics that have evolved in disparate places over centuries to extract incriminating statements from suspects, including blameless ones. There are ample grounds for caution here because, wherever employed, these are the techniques that have proven themselves to interrogators most able to achieve that end. Suppose, after considering your options, you decide to soldier on through the interview in an earnest attempt to clear your name. 
Is there anything you could do to increase the odds that, should you be somehow tricked or pressured into making falsely incriminating comments, external observers would be able to identify the tricks and pressure as the causes? There is. It comes in two steps, straight from the research of Professors Taylor and Lassiter. First, find the camera in the room, which will usually be above and behind the police officer. Second, move your chair. Position yourself so that the recording of the session will depict your face and your questioner’s face equally. Don’t allow the what’s-focal-is-presumed-causal effect to disadvantage you at trial. Otherwise, as Justice Brennan believed, your trial might already be over.31 By the way, if you ever found yourself in the interview situation I described, and you chose to end the session and demand a lawyer, is there anything you might do to reduce police suspicions that you therefore have something to hide? I have a suggestion: blame me. Say that, although you’d like to cooperate fully on your own, you once read a book that urged you to consider extensive police questioning unsafe, even for innocent individuals. Go ahead, blame me. You can even use my name. What are the police going to do, arrest me on a trumped-up charge, bring me down to the stationhouse, and employ Machiavellian tactics to gain a false confession? They’ll never win a conviction, because I’ll just find the camera and move my chair. Evidence that people automatically view what’s focal as causal helps me to understand other phenomena that are difficult to explain. Leaders, for example, are accorded a much larger causal position than they typically deserve in the success or failure of the teams, groups, and organizations they head. Business performance analysts have termed this tendency “the romance of leadership” and have demonstrated that other factors (such as workforce quality, existing internal business systems, and market conditions) have a greater impact on corporate profits than CEO actions do; yet the leader is assigned outsize responsibility for company results. Thus even in the United States, where worker wages are relatively high, an analysis showed that the average employee in a large corporation is paid one half of 1 percent of what the CEO is paid. If that discrepancy seems hard to account for on grounds of economic or social fairness, perhaps we can account for it on other grounds: the person at the top is visually prominent, psychologically salient, and, hence, assigned an unduly causal role in the course of events.

End Note

In sum, because what’s salient is deemed important and what’s focal is deemed causal, a communicator who ushers audience members’ attention to selected facets of a message reaps a significant persuasive advantage: recipients’ receptivity to considering those facets prior to actually considering them. In a real sense, then, channeled attention can make recipients more open to a message pre-suasively, before they process it. It’s a persuader’s dream, because very often the biggest challenge for a communicator is not in providing a meritorious case but in convincing recipients to devote their limited time and energy to considering its merits. Perceptions of issue importance and causality meet this challenge exquisitely.

If captured attention does indeed provide pre-suasive leverage to a communicator, a related issue arises: Are there any features of information that don’t even require a communicator’s special efforts to draw attention to them because, by their nature, they draw attention to themselves?

Reference: Chapter 4 of Pre-Suasion: A Revolutionary Way to Influence and Persuade by Robert Cialdini.
Tags: Book Summary,Communication Skills,Negotiation,

Monday, January 16, 2023

Pre-suasion (a revolutionary way to influence and persuade) by Robert Cialdini

Part 1: PRE-SUASION: THE FRONTLOADING OF ATTENTION

Ch 2: Privileged Moments

TARGET CHUTING

If I inquired whether you were unhappy in, let’s say, the social arena, your natural tendency to hunt for confirmations rather than for disconfirmations of the possibility would lead you to find more proof of discontent than if I asked whether you were happy there. This was the outcome when members of a sample of Canadians were asked either if they were unhappy or happy with their social lives. Those asked if they were unhappy were far more likely to encounter dissatisfactions as they thought about it and, consequently, were 375 percent more likely to declare themselves unhappy.

There are multiple lessons to draw from this finding. First, if a pollster wants to know only whether you are dissatisfied with something—it could be a consumer product or an elected representative or a government policy—watch out. Be suspicious as well of the one who asks only if you are satisfied. Single-chute questions of this sort can get you both to mistake and misstate your position. I’d recommend declining to participate in surveys that employ this biased form of questioning. Much better are those that use two-sided questions: “How satisfied or dissatisfied are you with this brand?” “Are you happy or unhappy with the mayor’s performance in office?” “To what extent do you agree or disagree with this country’s current approach to the Middle East?” These kinds of inquiries invite you to consult your feelings evenhandedly.

Decidedly more worrisome than the pollster whose leading questions usher you into a less than accurate personal stance, though, is the questioner who uses this same device to exploit you in that moment—that privileged moment. Cult recruiters often begin the process of seducing new prospects by asking if they are unhappy (rather than happy). I used to think this phrasing was designed only to select individuals whose deep personal discontent would incline them toward the kind of radical change that cults demand. But now I’m convinced that the “Are you unhappy?” question is more than a screening device. It’s also a recruiting device that stacks the deck by focusing people, unduly, on their dissatisfactions. (The truth is that cults don’t want malcontents within their ranks; they are looking for basically well-adjusted individuals whose positive, can-do style can be routed to cult pursuits.) As the results of the Canadian study show, after being prompted by the question’s wording to review their dissatisfactions, people become more likely to describe themselves as unhappy. In the unfairly engineered instant after such an admission, the cult’s moment maker is trained to strike: “Well, if you’re unhappy, you’d want to change that, right?”

Sure, cult recruitment tactics can offer provocative anecdotes. But cult members, including recruiters, are known for their willingness to engage in self-delusion; maybe they’re kidding themselves about the effectiveness of this particular practice. What’s the hard proof that such a made moment leads to anything more than a temporarily and inconsequentially altered self-view? Could a pre-suader employ that moment to change another’s willingness to do or concede or provide anything of real value?

Merchandisers value consumer information enormously. Proponents of marketing research say it serves the admirable purpose of giving the sellers the data they need to satisfy likely buyers; and, they are not alone in their high regard for the benefits of such data.
Profitable commercial organizations recognize the advantages of having good information about the wants and needs of their customers or prospective customers. Indeed, the best of them consistently spend princely sums to uncover the particulars. The prevailing problem for these organizations is that the rest of us can’t be bothered to participate in their surveys, focus groups, and taste tests. Even with sizable inducements in the form of cash payments, free products, or gift certificates, the percentage of people agreeing to cooperate can be low, which gives market researchers heartburn because they can’t be sure the data they’ve collected reflect the feelings of the majority of their target group. Could these researchers eliminate their problem by requesting consumer information in the moment following a pre-suasive single-chute question? Consider the results of an experiment performed by communication scientists San Bolkan and Peter Andersen, who approached people and made a request for assistance with a survey. We have all experienced something similar when a clipboard-carrying researcher stops us in a shopping mall or supermarket and asks for a few minutes of our time. As is the case for the typical shopping mall requester, these scientists’ success was dismal: only 29 percent of those asked to participate consented. But Bolkan and Andersen thought they could boost compliance without resorting to any of the costly payments that marketers often feel forced to employ. They stopped a second sample of individuals and began the interaction with a pre-suasive opener: “Do you consider yourself a helpful person?” Following brief reflection, nearly everyone answered yes. In that privileged moment— after subjects had confirmed privately and affirmed publicly their helpful natures—the researchers pounced, requesting help with their survey. Now 77.3 percent volunteered.

ARE YOU ADVENTUROUS ENOUGH TO CONSIDER A REVOLUTIONARY MODEL OF INFLUENCE?

According to this nontraditional—channeled attention—approach, to get desired action it’s not necessary to alter a person’s beliefs or attitudes or experiences. It’s not necessary to alter anything at all except what’s prominent in that person’s mind at the moment of decision. In our example of the new soft drink, it might be the fact that, in the past, he or she has been willing to look at new possibilities. Evidence for precisely this process can be found in an extension of the Bolkan and Andersen research demonstrating that a marketer could greatly increase the chance of finding survey participants by beginning with a particular pre-suasive opener: asking people if they considered themselves helpful.

In a companion study, the two scientists found that it was similarly possible to increase willingness to try an unfamiliar consumer product by beginning with a comparable but differently customized pre-suasive opener—this time asking people if they considered themselves adventurous. The consumer product was a new soft drink, and individuals had to agree to supply an email address so they could be sent instructions on how to get a free sample. Half were stopped and asked if they wanted to provide their addresses for this purpose. Most were reluctant—only 33 percent volunteered their contact information. The other subjects were asked initially, “Do you consider yourself to be somebody who is adventurous and likes to try new things?” Almost all said yes—following which, 75.7 percent gave their email addresses.

Two features of these findings strike me as remarkable. First, of the subjects who were asked if they counted themselves adventurous, 97 percent (seventy out of seventy-two) responded affirmatively. The idea that nearly everybody qualifies as an adventurous type is ludicrous. Yet when asked the single-chute question of whether they fit this category, people nominate themselves almost invariably. Such is the power of positive test strategy and the blinkered perspective it creates. The evidence shows that this process can significantly increase the percentage of individuals who brand themselves as adventurous or helpful or even unhappy. Moreover, the narrowed perspective, though temporary, is anything but inconsequential. For a persuasively privileged moment, it renders these individuals highly vulnerable to aligned requests—as the data of research scientists and the practices of cult recruiters attest.

The other noteworthy feature of the soft-drink experiment is not that a simple question could shunt so many people into a particular choice but that it could shunt so many of them into a potentially dangerous choice. In recent years, if there is anything we have been repeatedly warned to safeguard against by all manner of experts, it’s opening ourselves to some unscrupulous individual who might bombard our computers with spam, infect them with destructive viruses, or hack into them to sting us with the protracted misery of identity theft. (Of course, to be fair, it must be acknowledged that experienced and discerning users are unlikely to be fooled by the offers they receive electronically. I, for instance, have been flattered to learn through repeated internet messages that many Ukrainian virgin prostitutes want to meet me; if that can’t be arranged, they can get me an outstanding deal on reconditioned printer cartridges. Notwithstanding this particular exception, we’d be well advised to regard the authenticity of such solicitations skeptically.)
Indeed, given the mass of negative publicity regarding computer fraud, it makes great sense that two-thirds of Bolkan and Andersen’s first group of subjects turned down the request for their email addresses. After all, this was a complete stranger who advanced on them unintroduced and unbidden. The circumstances clearly called for prudence. What’s significant is that these circumstances applied equally to all those individuals (75.6 percent in Bolkan and Andersen’s second group) who, after being channeled to their adventurous sides by an initial single-chute question, ignored the cues for caution and piled rashly into a potentially foolish choice. Their behavior, bewildering as it is on the surface, confirms this book’s contention that the guiding factor in a decision is often not the one that counsels most wisely; it’s one that has recently been brought to mind. But why? The answer has to do with the ruthlessness of channeled attention, which not only promotes the now-focal aspect of the situation but also suppresses all competing aspects of it—even critically important ones.

Ch 3: The Importance of Attention... Is Importance

Numerous researchers have documented the basic human inclination to assign undue weight to whatever happens to be salient at the time. One of those researchers is Daniel Kahneman, who, for personal and professional reasons, is an excellent informant on the character and causes of human behavior. On the personal side, he’s been able to observe from within a multitude of cultures and roles—having grown up in France, earned degrees in Jerusalem, Israel, and Berkeley, California, served as a soldier and personnel assessor in Israel, and taught in Canada and the United States. More impressive, though, are Kahneman’s credentials as a renowned authority on matters of human psychology. His teaching positions have always been prestigious, culminating with an appointment at Princeton University that included simultaneous professorships in psychology and public affairs. His numerous awards have also been prestigious, but none as noteworthy as the 2002 Nobel Prize in Economic Sciences, the only such Nobel in history given to an individual trained as a psychologist.

It’s no wonder, then, that when Daniel Kahneman speaks on issues of human psychology, he gets hushed attention. I am reminded of a famous television commercial of many years ago for the financial services firm E. F. Hutton that depicts a pair of businessmen in a busy restaurant trying to talk over the din of clanking silverware, loud waiters, and neighboring table conversations. One of the men says to his colleague, “Well, my broker is E. F. Hutton, and E. F. Hutton says . . . ” The place goes silent—waiters stop taking orders, busboys stop clearing tables, diners stop speaking—while everyone in the room turns to take in the advice, and an announcer’s voice intones: “When E. F. Hutton talks, people listen.”

I’ve been to several scientific conferences at which Professor Kahneman has spoken; and, when Daniel Kahneman talks, people listen. I am invariably among them. So I took special notice of his answer to a fascinating challenge put to him not long ago by an online discussion site. He was asked to specify the one scientific concept that, if appreciated properly, would most improve everyone’s understanding of the world. Although in response he provided a full five-hundred-word essay describing what he called “the focusing illusion,” his answer is neatly summarized in the essay’s title: “Nothing in life is as important as you think it is while you are thinking about it.”

The implications of Kahneman’s assertion apply to much more than the momentary status of the caller to a ringing phone. They apply tellingly well to the practice of pre-suasion, because a communicator who gets an audience to focus on a key element of a message pre-loads it with importance. This form of pre-suasion accounts for what many see as the principal role (labeled agenda setting) that the news media play in influencing public opinion. The central tenet of agenda-setting theory is that the media rarely produce change directly, by presenting compelling evidence that sweeps an audience to new positions; they are much more likely to persuade indirectly, by giving selected issues and facts better coverage than other issues and facts. It’s this coverage that leads audience members—by virtue of the greater attention they devote to certain topics—to decide that these are the most important to be taken into consideration when adopting a position.
As the political scientist Bernard Cohen wrote, “The press may not be successful most of the time in telling people what to think, but it is stunningly successful in telling them what to think about.” According to this view, in an election, whichever political party is seen by voters to have the superior stance on the issue highest on the media’s agenda at the moment will likely win. That outcome shouldn’t seem troubling provided the media have highlighted the issue (or set of issues) most critical to the society at the time of the vote. Regrettably, other factors often contribute to coverage choices, such as whether a matter is simple or complicated, gripping or boring, familiar or unfamiliar to newsroom staffers, inexpensive or expensive to examine, and even friendly or not to the news director’s political leanings.

In the summer of 2000, a pipe bomb exploded at the main train station in Düsseldorf, Germany, injuring several Eastern European immigrants. Although no proof was ever found, officials suspected from the start that a fringe right-wing group with an anti-immigrant agenda was responsible. A sensational aspect of the story—one of the victims not only lost a leg in the blast but also the baby in her womb—stimulated a rash of news stories in the following month regarding right-wing extremism in Germany. Polls taken at the same time showed that the percentage of Germans who rated right-wing extremism as the most important issue facing their country spiked from near zero to 35 percent—a percentage that sank back to near zero again as related news reports disappeared in subsequent months.

A similar effect appeared more recently in the United States. As the tenth anniversary of the terrorist attacks of September 11, 2001, approached, 9/11-related media stories peaked in the days immediately surrounding the anniversary date and then dropped off rapidly in the weeks thereafter. Surveys conducted during those times asked citizens to nominate two “especially important” events from the past seventy years. Two weeks prior to the anniversary, before the media blitz began in earnest, about 30 percent of respondents named 9/11. But as the anniversary drew closer, and the media treatment intensified, survey respondents started identifying 9/11 in increasing numbers—to a high of 65 percent. Two weeks later, though, after reportage had died down to earlier levels, once again only about 30 percent of the participants placed it among their two especially important events of the past seventy years. Clearly, the amount of news coverage can make a big difference in the perceived significance of an issue among observers as they are exposed to the coverage.

BACK ROADS TO ATTENTION

It is rousing and worrisome (depending on whether you are playing offense or defense) to recognize that these persuasive outcomes can flow from attention-shifting techniques so slight as to go unrecognized as agents of change. Let’s consider three ways communicators have used such subtle tactics to great effect.

Managing the Background

Suppose you’ve started an online furniture store that specializes in various types of sofas. Some are attractive to customers because of their comfort and others because of their price. Is there anything you can think to do that would incline visitors to your website to focus on the feature of comfort and, consequently, to prefer to make a sofa purchase that prioritized it over cost? You’ve no need to labor long for an answer, because two marketing professors, Naomi Mandel and Eric Johnson, have provided one in a set of studies using just such an online furniture site.

When I interviewed Mandel regarding why she decided on this particular set of issues to explore, she said her choice had to do with two big, unresolved matters within the field of marketing—one relatively recent and one long-standing. The new topic at the time was e-commerce. When she began the research project in the late 1990s, the impact of virtual stores such as Amazon and eBay was only beginning to be seen. But how to optimize success within this form of exchange had not been addressed systematically. So she and Johnson opted for a virtual store site as the context for their study. The other matter that had piqued Mandel’s interest is one that has vexed merchandisers forever: how to avoid losing business to a poorer-quality rival whose only competitive advantage is lower cost. That is why Mandel chose to pit higher-quality furniture lines against less expensive, inferior ones in her study.

“It’s a traditional problem that the business-savvy students in our marketing courses raise all the time,” she said. “We always instruct them not to get caught up in a price war against an inferior product, because they’ll lose. We tell them to make quality the battleground instead, because that’s a fight they’ll most likely win.

“Fortunately for me,” she continued, “the best of the students in those classes have never been satisfied with that general advice. They’d say, ‘Yeah, but how?’ and I never really had a good answer for them, which gave me a great question to pursue for my research project.”

Fortunately for us, after analyzing their results, Mandel and Johnson were in a position to deliver a stunningly simple answer to the “Yeah, but how?” question. In an article largely overlooked since it was published in 2002, they described how they were able to draw website visitors’ attention to the goal of comfort merely by placing fluffy clouds on the background wallpaper of the site’s landing page. That maneuver led those visitors to assign elevated levels of importance to comfort when asked what they were looking for in a sofa. Those same visitors also became more likely to search the site for information about the comfort features of the sofas in stock and, most notably, to choose a more comfortable (and more costly) sofa as their preferred purchase.

To make sure their results were due to the landing page wallpaper and not to some general human preference for comfort, Mandel and Johnson reversed their procedure for other visitors, who saw wallpaper that pulled their attention to the goal of economy by depicting pennies instead of clouds.
These visitors assigned greater levels of importance to price, searched the site primarily for cost information, and preferred an inexpensive sofa. Remarkably, despite having their importance ratings, search behavior, and buying preferences all altered pre-suasively by the landing page wallpaper, most participants, when questioned afterward, refused to believe that the depicted clouds or pennies had affected them in any way.

Soft sell. Visitors to an online furniture website who saw this landing page wallpaper decorated with clouds became more inclined toward soft, comfortable furniture. Those who saw wallpaper decorated with pennies became more inclined toward inexpensive furniture. Courtesy of Naomi Mandel and Oxford University Press

Additional research has found similarly sly effects for online banner ads—the sort we all assume we can ignore without impact while we read. Well-executed research has shown us mistaken in this regard. While reading an online article about education, repeated exposure to a banner ad for a new brand of camera made the readers significantly more favorable to the ad when they were shown it again later. Tellingly, this effect emerged even though they couldn’t recall having ever seen the ad, which had been presented to them in five-second flashes near the story material. Further, the more often the ad had appeared while they were reading the article, the more they came to like it.

This last finding deserves elaboration because it runs counter to abundant evidence that most ads experience a wear-out effect after they have been encountered repeatedly, with observers tiring of them or losing trust in advertisers who seem to think that their message is so weak that they need to send it over and over. Why didn’t these banner ads, which were presented as many as twenty times within just five pages of text, suffer any wear-out? The readers never processed the ads consciously, so there was no recognized information to be identified as tedious or untrustworthy.

These results pose a fascinating possibility for online advertisers: recognition/recall, a widely used index of success for all other forms of ads, might greatly underestimate the effectiveness of banner ads. In the new studies, frequently interjected banners were positively rated and were uncommonly resistant to standard wear-out effects, yet they were neither recognized nor recalled. Indeed, it looks to be this third result (lack of direct notice) that makes banner ads so effective in the first two strong and stubborn ways. After many decades of using recognition/recall as a prime indicator of an ad’s value, who in the advertising community would have thought that the absence of memory for a commercial message could be a plus?

Within the outcomes of the wallpaper and the banner ad studies is a larger lesson regarding the communication process: seemingly dismissible information presented in the background captures a valuable kind of attention that allows for potent, almost entirely uncounted instances of influence. The influence isn’t always desirable, however. In this regard, there’s a body of data on consequential background factors that parents, especially, should take into account. Environmental noise such as that coming from heavy traffic or airplane flight paths is something we think we get used to and even block out after a while. But the evidence is clear that the disruptive noise still gets in, reducing the ability to learn and perform cognitive tasks.
One study found that the reading scores of students in a New York City elementary school were significantly lower if their classrooms were situated close to elevated subway tracks on which trains rattled past every four to five minutes. When the researchers, armed with their findings, pressed NYC transit system officials and Board of Education members to install noise-dampening materials on the tracks and in the classrooms, students’ scores jumped back up. Similar results have been found for children near airplane flight paths. When the city of Munich, Germany, moved its airport, the memory and reading scores of children near the new location plummeted, while those near the old location rose significantly.

Thus, parents whose children’s schools or homes are subjected to intermittent automotive, train, or aircraft noise should insist on the implementation of sound-baffling remedies. Employers, for the sake of their workers—and their own bottom lines—should do the same. Teachers need to consider the potentially negative effects of another kind of distracting background stimuli (this one of their own making) on young students’ learning and performance. Classrooms with heavily decorated walls displaying lots of posters, maps, and artwork reduce the test scores of young children learning science material there.

It is clear that background information can both guide and distract focus of attention; anyone seeking to influence optimally must manage that information thoughtfully.

Inviting Favorable Evaluation

Although communicators can use attention-drawing techniques to amplify the judged importance of a feature or issue, that’s not always wise. Relevant here is Bernard Cohen’s observation about press coverage—that it doesn’t so much tell people what to think as what to think about. Any practice that pulls attention to an idea will be successful only when the idea has merit. If the arguments and evidence supporting it are seen as meritless by an audience, directed attention to the bad idea won’t make it any more persuasive. If anything, the tactic might well backfire. After all, if audience members have come to see an idea as more important to them than before, they should then be even more likely to oppose it when it is a plainly poor one. Indeed, a lot of research has demonstrated that the more consideration people give to something, the more extreme (polarized) their opinions of it become. So attention-capturing tactics are no panacea for would-be persuaders.

Still, if you have a good case to make, there are certain places where those tactics will give your persuasive appeals special traction. One such place is in a field of strong competitors. In modern business, it is becoming increasingly difficult to outpace one’s rivals. Easily copied advances in development technologies, production techniques, and business methods make it hard for a company to distinguish the essence of what it offers—bottled water, gasoline, insurance, air travel, banking services, industrial machinery—from what other contestants for the same market can deliver. To deal with the problem, alternative ways of creating separation have to be tried. Retailers can establish multiple, convenient locations; wholesalers can put big sales staffs into the field; manufacturers can grant broad guarantees; service providers can assemble extensive customer care units; and they all can engage in large-scale advertising and promotional efforts to create and maintain brand prominence. But there’s a downside to such fixes. Because these means of differentiation are so costly, their expense might be too burdensome for many organizations to bear.

Could resolving the dilemma lie in finding an inexpensive way to shift attention to a particular product, service, or idea? Well, yes, as long as the spotlighted item is a good one—a high scorer in customer reviews, perhaps. Critical here would be to arrange for observers to focus their attention on that good thing rather than on rivals’ equally good options. Then its favorable features should gain both verification and importance from the scrutiny.

Already some data show that these twin benefits can produce a substantial advantage for a brand when consumers focus on it in isolation from its competitors. Although the data have come from different settings (shopping malls, college campuses, and websites) and different types of products (cameras, big-screen TVs, VCRs, and laundry detergents), the results all point to the same conclusion: if you agreed to participate in a consumer survey regarding some product, perhaps 35-millimeter cameras, the survey taker could enhance your ratings of any strong brand—let’s say Canon—simply by asking you to consider the qualities of Canon cameras but not asking you to consider the qualities of any of its major rivals, such as Nikon, Olympus, Pentax, or Minolta.
More than that, without realizing why, your intention to purchase a Canon 35mm camera would likely also jump, as would your desire to make the purchase straightaway, with no need to search for information about comparable brands. However, all of these advantages for Canon would drop away if you’d been asked to consider the qualities of its cameras but, before rating those qualities, to think about the options that Nikon, Olympus, Pentax, and Minolta could provide. Thus, to receive the benefits of focused attention, the key is to keep the focus unitary. Some impressive research demonstrates that merely engaging in a single-chute evaluation of one of several established hotel and restaurant chains, consumer products, and even charity organizations can automatically cause people to value the focused-upon entity more and become more willing to support it financially.

One applicable tactic being employed with increasing frequency by various organizations is to request evaluation of their products and services—only their products and services. As a consumer, I am routinely asked by providers to consider and rate business performances of one sort or another. Occasionally I am petitioned through a phone call or direct mail, but typically it is via email. Sometimes I am to evaluate a single experience such as a recent hotel stay, online purchase, or customer service interaction. Periodically, the “How are we doing?” question asks me to assess features of an ongoing partnership with my travel agency, financial services firm, or phone provider. The requests seem innocent enough and acceptable because they appear intended (as I am sure they are) to gather information that will improve the quality of my commercial exchanges. But I’d be surprised if my compliance didn’t also give the petitioners, especially the highly ranked ones, a hidden bonus: my focused attention to their mostly favorable facets with no comparable attention to the mostly favorable facets of their ablest rivals.

Other research has extended these findings to the way that leaders and managers make strategic choices inside their organizations. Individuals assigned the responsibility for reversing a sales slump within a paint manufacturing company took part in a study. Each was asked to evaluate the wisdom of only one of four worthy possible solutions: (1) increasing the advertising budget, which would raise brand awareness among do-it-yourself painters; (2) lowering prices, which would attract more price-sensitive buyers; (3) hiring additional sales representatives, who could press for more shelf space in retail stores; or (4) investing in product development, to boost quality so that the brand could be promoted to professional painters as the best in the market. It didn’t matter which of the four ideas the decision makers evaluated: the process of targeting and evaluating one, by itself, pushed them to recommend it among the options as the best remedy for the company to adopt.

But surely the typical highly placed decision maker wouldn’t settle on an important course of action without evaluating all viable alternatives fully, and he or she certainly wouldn’t make that choice after evaluating just one strong option, right? Wrong and wrong, for a pair of reasons.
First, a thorough analysis of all legitimate roads to success is time-consuming, requiring potentially lengthy delays for identifying, vetting, and then mapping out each of the promising routes; and highly placed decision makers didn’t get to their lofty positions by being known as bottlenecks inside their organizations. Second, for any decision maker, a painstaking comparative assessment of multiple options is difficult and stressful, akin to the juggler’s task of trying to keep several objects in the air all at once. The resultant (and understandable) tendency is to avoid or abbreviate such an arduous process by selecting the first practicable candidate that presents itself. This tendency has a quirky name, “satisficing,” a term coined by economist and Nobel laureate Herbert Simon as a blend of the words satisfy and suffice. The combination reflects two simultaneous goals of a chooser when facing a decision—to make it good and to make it gone—which, according to Simon, usually means making it good enough. Although in an ideal world one would work and wait until the optimal solution emerged, in the real world of mental overload, limited resources, and deadlines, satisficing is the norm.

But even courses of action selected in this manner should not be allowed the unfair advantages of a different sort of unitary assessment—one focused only on upsides. In the excitement of a looming opportunity, decision makers are infamous for concentrating on what a strategy could do for them if it succeeded and not enough, or at all, on what it could do to them if it failed. To combat this potentially ruinous overoptimism, time needs to be devoted, systematically, to addressing a pair of questions that often don’t arise by themselves: “What future events could make this plan go wrong?” and “What would happen to us if it did go wrong?” Decision scientists who have studied this consider-the-opposite tactic have found it both easy to implement and remarkably effective at debiasing judgments. The benefits to the organization that strives to rid itself of this and other decision-making biases can be considerable. One study of over a thousand companies determined that those employing sound judgment-debiasing processes enjoyed a 5 percent to 7 percent advantage in return on investment over those failing to use such approaches.

Shifting the Task at Hand

Issues that gain attention also gain presumed importance. Research demonstrates that if people fail to direct their attention to a topic, they presume that it must be of relatively little importance. With those basic human tendencies in mind, think of the implications of the embedded reporter program for US public opinion toward the invasion of Iraq. The dispatches of journalists in the program carried the kinds of content—vivid firsthand accounts of combat and emotionally charged human interest stories of combatants—that the media love to pitch and the public loves to catch. That content dominated public attention and thereby defined for the public which factors to consider more and less important about the invasion, such as those related to individual actions and battlefield outcomes versus those related to initial justifications and geopolitical ends. Because frontline combat factors represented a prime strength of the war, whereas larger strategic ones represented a prime weakness, the effect of the embedded reporter program was to award center-stage importance to the main success, not the main failure, of the Bush administration’s Iraq campaign. The focusing illusion ensured it.

There is nothing to suggest that this topically imbalanced coverage was part of the grand design for the program on the part of administration and military officials, who seem to have been interested in it mostly for traditional information warfare purposes, such as gaining more control over the screening, training, and review of reporters, as well as putting them in an eyewitness position to counter enemy propaganda. Similarly, there is no evidence that the media chiefs who helped forge the program anticipated the full span of its public relations benefits to the Bush administration. Instead, it was only in retrospect, after the results of news story analyses started surfacing in academic journals, that this realization began to form.

Ironically, then, the major public relations effect of the embedded reporter program appears to have been a side effect—a hidden one. It was an unexpected by-product of a decision to make the task of the most visible journalists covering the war molecular rather than molar in scope.

Ch 4: What’s Focal Is Causal

It’s no wonder that we assign elevated importance to factors that have our attention. We also assign them causality. Therefore, directed attention gives focal elements a specific kind of initial weight in any deliberation. It gives them standing as causes, which in turn gives them standing as answers to that most essential of human questions: Why? Because we typically allot special attention to the true causes around us, if we see ourselves giving such attention to some factor, we become more likely to think of it as a cause.

Take monetary payments. Because the amount of money is so salient in the exchanges—“I’ll pay you x when you do y”—we tend to infer that the payment spurred the act, when, in fact, it was often some other, less visible factor. Economists, in particular, are prone to this bias because the monetary aspects of a situation dominate their attentions and analyses. Thus, when Harvard Business School economist Felix Oberholzer-Gee approached people waiting in line at several different venues and offered them money to let him cut in, he recognized that a purely economics-based model would predict that the more cash he offered, the more people would agree to the exchange. And that’s what he found: half of everyone offered $1 let him cut in line; 65 percent did so when offered $3; and acceptance rates jumped to 75 percent and 76 percent when he proposed the larger sums of $5 and $10. According to classical economic theory, which enshrines financial self-interest as the primary cause of human behavior, those greater incentives convinced people to take the deal for their own fiscal betterment. How could any observer of the transaction doubt it? The highly visible incentives caused the obtained effects due to their direct links to personal monetary gain, right? Nothing surprising occurred here, right? Well, right, except for an additional finding that challenges all this thinking: almost no one took the money.

“Gee,” Oberholzer-Gee must have said to himself, “that’s odd.” Indeed, a number of oddities appeared in his data, at least for adherents to the idea that the ultimate cause of human action is one’s own financial interest. For instance, although bigger cash incentives upped compliance with the line cutter’s wish, they didn’t increase acceptance of the payment; richer deals increasingly caused people to sacrifice their places in line but without taking the greater compensation. To explain his findings, Oberholzer-Gee stepped away from a consideration of salient economic factors and toward a hidden factor: an obligation people feel to help those in need. The obligation comes from the helping norm, which behavioral scientists sometimes call the norm of social responsibility. It states that we should aid those who need assistance in proportion to their need. Several decades’ worth of research shows that, in general, the more someone needs our help, the more obligated we feel to provide it, the more guilty we feel if we don’t provide it, and the more likely we are to provide it.

When viewed through this lens, the puzzling findings make perfect sense. The payment offers stimulated compliance because they alerted recipients to the amount of need present in the situation. This account explains why larger financial inducements increased consent even though most people weren’t willing to pocket them: more money signaled a stronger need on the part of the requester.
(“If this guy is willing to pay a lot to jump ahead of me, he must really need to get to the front fast.”)

It would be naïve to assert that fiscal factors are less than potent determinants of human action. Still, I’d argue that merely because they are so visible (and, therefore, prominent in attention), they are often less determining than they seem. Conversely, there are many other factors—social obligations, personal values, moral standards—that, merely because they are not readily observable, are often more determining than they seem. Elements such as money that attract notice within human exchanges don’t just appear more important; they also appear more causal. And presumed causality, especially when acquired through channeled attention, is a big deal for creating influence—big enough to account for patterns of human conduct that can range from perplexing to alarming.
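As a quick aid to seeing the pattern in Oberholzer-Gee's line-cutting data, here is a minimal Python sketch that simply tabulates the figures reported above. It is an illustration, not anything from the study itself; the near-zero "took the money" column is an assumption standing in for the report that almost no one accepted the payment.

```python
# Hypothetical sketch: tabulate the line-cutting results reported above.
# The compliance percentages come from the passage; the "took the money"
# figures are an assumption standing in for "almost no one took the money."

results = [
    # (offer in dollars, % who let him cut in, % who took the cash)
    (1, 50, 0),
    (3, 65, 0),
    (5, 75, 0),
    (10, 76, 0),
]

print(f"{'Offer':>6}  {'Let him cut in':>15}  {'Took the money':>15}")
for offer, complied, accepted in results:
    print(f"${offer:>5}  {complied:>14}%  {accepted:>14}%")

# The compliance column rises with the size of the offer, yet the payment
# itself is rarely pocketed -- the pattern the helping-norm account explains.
```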

Taking a Chance

In the first of these categories, consider the most famous case of product tampering of all time. In the autumn of 1982, someone went into supermarkets and drug stores in the Chicago area, injected packaged capsules of Tylenol with cyanide, and then returned the containers to the store shelves, where they were later purchased. Several reasons exist for the incident’s long-standing notoriety. First, seven Chicago residents died from ingesting the poison—four of them family members who had swallowed capsules from the same Tylenol container. Second, their killer has never been found, giving the crime an uncomfortably memorable lack of closure. But, for the most part, the case lives on today not so much for these regrettable reasons as for a pair of favorable ones: it led to the passage of important product safety legislation and to pharmaceutical industry shifts to tamperproof seals and packaging that have reduced risks to consumers. In addition—owing to the rapid, customer-centered steps taken by Tylenol’s maker, Johnson & Johnson, which recalled thirty-one million of the capsules from all stores—it produced a textbook approach to proper corporate crisis management that is still considered the gold standard. (The recommended approach urges companies to act without hesitation to fully inform and protect the public, even at substantial expense to their own immediate economic interests.)

Aside from these high-profile features, another element of the case has gone almost entirely unnoticed but strikes me as remarkable. Early on, after it had been determined that the deaths were linked to bottles of Tylenol but before the extent of the tampering had been established, Johnson & Johnson issued nationwide warnings intended to prevent further harm. One widely communicated sort of warning alerted consumers to the production lot numbers on the affected bottles—numbers that identified where and when a particular batch of capsules had been manufactured. Because they were the first to be identified, two of the numbers received the most such publicity: lots 2,880 and 1,910.

Immediately, and bewilderingly, US residents of states that ran lotteries began playing those two numbers at unprecedented rates. In three states, Rhode Island, New Hampshire, and Pennsylvania, officials announced that they had to halt wagers on the numbers because betting on them shot above “maximum liability levels.”

To know how best to account for this set of events, let’s review the characteristics of the numbers. First, they were ordinary, not inherently memorable in any way. Second, they were associated with grievous misfortune. Moreover, they were intensely connected in American minds to imagery of poison-fed death. Yet many thousands of those minds responded to something about the numbers that lifted expectations of lottery success. What? Our previous analysis offers one answer: because of all the publicity surrounding them, the numbers had become focal in attention; and what is focal is seen to have causal properties—to have the ability to make events occur.

It turned out that every one of the minds that thought those numbers would provide an advantage over chance was proved wrong by the subsequent lottery results. But I doubt that the losses taught those minds to avoid, in any general way, similar future errors. The tendency to presume that what is focal is causal holds sway too deeply, too automatically, and over too many types of human judgment.

Taking a Life

The legal issue of whether a confession had been made freely by the suspect or extracted improperly by an interrogator involves a judgment of causality—of who was responsible for the incriminating statement. As we know from the experiments of Professor Taylor, a camera angle arranged to record the face of one discussant over the shoulder of another biases that critical judgment toward the more visually salient of the two. We also know now—from the more recent experiments of social psychologist Daniel Lassiter—that such a camera angle aimed at a suspect during an interrogation leads observers of the recording to assign the suspect greater responsibility for a confession (and greater guilt).

Moreover, as was the case when Taylor and her coworkers tried it, Lassiter and his coworkers found this outcome to be stubbornly persistent. In their studies, it surfaced regardless of whether the observers were men or women, college students or jury-eligible adults in their forties and fifties, exposed to the recording once or twice, intellectually deep or shallow, and previously informed or not about the potentially biasing impact of the camera angle. Perhaps most disturbingly, the identical pattern appeared whether the watchers were ordinary citizens, law enforcement personnel, or criminal court judges. Nothing could change the camera angle’s prejudicial impact—except changing the camera angle itself. The bias disappeared when the recording showed the interrogation and confession from the side, so that the suspect and questioner were equally focal. In fact, it was possible to reverse the bias by showing observers a recording of the identical interaction with the camera trained over the suspect’s shoulder onto the interrogator’s face; then, compared with the side-view judgments, the interrogator was perceived to have coerced the confession. Manifestly here, what’s focal seems causal.

Thus, a potential dilemma exists for an innocent person—perhaps you—invited to a police station to help investigators solve a major crime. There is certainly nothing wrong with complying and providing that assistance; it’s what good citizens do. But matters would get more complicated if you began to sense that the session was designed not so much to obtain information from you as to obtain a possible confession from you. The standard recommendation of defense attorneys at this point would be to stop the proceedings and request a lawyer. That choice, though, has its risks. By terminating the session, you might not be able to give your questioners the facts they need to solve the crime quickly and to discount your involvement fully, which would allow you to dispel the specter of suspicion then and there. Being suspected of a serious crime can be a terrifying, nasty, lingering experience that might well be prolonged by the appearance of having something to conceal.

But choosing to go on with the increasingly interrogation-like session includes perils of its own. You might be laying yourself open to tactics that have evolved in disparate places over centuries to extract incriminating statements from suspects, including blameless ones. There are ample grounds for caution here because, wherever employed, these are the techniques that have proven themselves to interrogators most able to achieve that end. Suppose, after considering your options, you decide to soldier on through the interview in an earnest attempt to clear your name.
Is there anything you could do to increase the odds that, should you be somehow tricked or pressured into making falsely incriminating comments, external observers would be able to identify the tricks and pressure as the causes? There is. It comes in two steps, straight from the research of Professors Taylor and Lassiter. First, find the camera in the room, which will usually be above and behind the police officer. Second, move your chair. Position yourself so that the recording of the session will depict your face and your questioner’s face equally. Don’t allow the what’s-focal-is-presumed-causal effect to disadvantage you at trial. Otherwise, as Justice Brennan believed, your trial might already be over.

By the way, if you ever found yourself in the interview situation I described, and you chose to end the session and demand a lawyer, is there anything you might do to reduce police suspicions that you therefore have something to hide? I have a suggestion: blame me. Say that, although you’d like to cooperate fully on your own, you once read a book that urged you to consider extensive police questioning unsafe, even for innocent individuals. Go ahead, blame me. You can even use my name. What are the police going to do, arrest me on a trumped-up charge, bring me down to the stationhouse, and employ Machiavellian tactics to gain a false confession? They’ll never win a conviction, because I’ll just find the camera and move my chair.

Evidence that people automatically view what’s focal as causal helps me to understand other phenomena that are difficult to explain. Leaders, for example, are accorded a much larger causal position than they typically deserve in the success or failure of the teams, groups, and organizations they head. Business performance analysts have termed this tendency “the romance of leadership” and have demonstrated that other factors (such as workforce quality, existing internal business systems, and market conditions) have a greater impact on corporate profits than CEO actions do; yet the leader is assigned outsize responsibility for company results. Thus, even in the United States, where worker wages are relatively high, an analysis showed that the average employee in a large corporation is paid one half of 1 percent of what the CEO is paid. If that discrepancy seems hard to account for on grounds of economic or social fairness, perhaps we can account for it on other grounds: the person at the top is visually prominent, psychologically salient, and, hence, assigned an unduly causal role in the course of events.

Ch 5: Commanders of Attention 1: The Attractors

When I was first sending around the manuscript of my book Influence to possible publishers, its working title was Weapons of Influence. An acquisitions editor phoned to say that his house would be interested in publishing the book but with an important modification. To ensure that bookstore aisle browsers would notice and reach for it, he recommended changing the title to Weapons of Social Seduction. “Then,” he pointed out, “they’d register both sex and violence in the same glance.” Although I didn’t accept his suggestion, I can see some of its logic. Certain cues seize our attention vigorously. Those that do so most powerfully are linked to our survival. Sexual and violent stimuli are prime examples because of their connections to our fundamental motivations to reproduce on the one hand and to avoid harm on the other—life and death, literally.
Lighting up will bring you down. Frightening cigarette pack images like these have reduced smoking around the world. HHS.gov. US Department of Health & Human Services
Conditioning interruptus. One of Pavlov’s dogs is pictured with the saliva collection tube used to show how its salivation response to food could be conditioned (shifted) to the sound of a bell. When some new stimulus in the lab drew the dog’s attention, the conditioned response vanished. Courtesy of Rklawton

Ch 6: Commanders of Attention 2: The Magnetizers

THE SELF-RELEVANT

There is no question that information about the self is an exceedingly powerful magnet of attention. The ramifications for pre-suasive social influence are significant. In the province of personal health, when recipients get a message that is self-relevant because it has been tailored specifically for them (for example, by referencing the recipient’s age, sex, or health history), they are more likely to lend it attention, find it interesting, take it seriously, remember it, and save it for future reference—all of which leads to greater communication effectiveness, as reflected in arenas as diverse as weight loss, exercise initiation, smoking cessation, and cancer screening. The continuing emergence of large-scale electronic databases, digitized medical records, and personal contact devices such as mobile phones makes individualized message customization and delivery increasingly possible and economical. Purely from an effectiveness standpoint, any health communicator who has not fully investigated the potential use of such tools should be embarrassed.

The focus-fixing impact of self-relevance applies to commercial appeals, too. Suppose you are a persuasion consultant approached to help market a new underarm antiperspirant to NASCAR dads. Let’s call it Pit Stop. Suppose further that the product has concrete, convincing scientific evidence of its superior effectiveness, which the manufacturer’s advertising agency plans to feature in its launch ads. But the agency is unsure about what to say first to draw audience attention to the rest of the ad and its compelling case. That’s why it has come to you, to get your opinion on the lead-in lines of ad copy, which read: “After all these years, people might accept that antiperspirants just aren’t gonna get any better. They might even accept the ugly stains on clothes from hot days and hard work. They won’t have to anymore.”

What seemingly minor wording change could you suggest to improve the odds that the Pit Stop campaign will be a big success, the ad agency will be delighted, and your reputation as a wizard of influence will be burnished? It would be to replace the externalizing words people and they in the opener with the personalizing pronoun you.
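To make the suggested change concrete, here is a minimal Python sketch of that pronoun swap applied to the quoted opener. The replacement rules are an illustration written for this one piece of copy, not a general-purpose rewriter.

```python
# Hypothetical sketch: swap the externalizing "people"/"They" in the Pit Stop
# opener for the personalizing "you", as suggested above.

import re

opener = (
    "After all these years, people might accept that antiperspirants just "
    "aren't gonna get any better. They might even accept the ugly stains on "
    "clothes from hot days and hard work. They won't have to anymore."
)

# Minimal replacement rules written for this specific piece of copy.
rules = [
    (r"\bpeople might\b", "you might"),
    (r"\bThey\b", "You"),
]

personalized = opener
for pattern, replacement in rules:
    personalized = re.sub(pattern, replacement, personalized)

print(personalized)
# "After all these years, you might accept that antiperspirants just aren't
#  gonna get any better. You might even accept the ugly stains on clothes
#  from hot days and hard work. You won't have to anymore."
```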

THE UNFINISHED

The widely acknowledged father of modern social psychology is Kurt Lewin, who before emigrating to the United States taught for a decade at the University of Berlin and who, as an early champion of women’s role in higher education, gave the field several noteworthy academic daughters. One, a gifted young Lithuanian woman named Bluma Zeigarnik, was in a collection of students and research assistants who met regularly with Lewin at a local beer garden restaurant to discuss ideas when, one evening, the talk turned to a remarkable talent of a veteran waiter there. Without keeping any written record, he could remember and distribute perfectly the food and drink selections of large tables of diners. As the university group’s conversation progressed, Lewin and Zeigarnik developed a plan to explore the limits of the man’s impressive memory. After he had served all of the group members (once again flawlessly), they covered their plates and glasses and asked him to return to the table and recall what each had ordered. This time, though, he couldn’t do it; he couldn’t even come close.

What accounted for the difference? A length of time had passed, of course; but that seemed an unlikely cause, as it was only long enough for the diners to hide their plates and glasses under napkins. Lewin and Zeigarnik suspected a different reason: as soon as the waiter correctly placed the last dish in front of the last diner at the table, his task of serving the group changed from unfinished to finished. And unfinished tasks are the more memorable, hoarding attention so they can be performed and dispatched successfully. Once a task is completed, attentional resources are diverted from it to other pursuits; but while the initial activity is under way, a heightened level of cognitive focus must be reserved for it. To test this logic, Zeigarnik performed an initial set of experiments that she, Lewin, and numerous others have used as the starting point for investigating what has come to be known as the Zeigarnik effect.

For me, two important conclusions emerge from the findings of now over six hundred studies on the topic. First (and altogether consistent with the beer garden series of events), on a task that we feel committed to performing, we will remember all sorts of elements of it better if we have not yet had the chance to finish, because our attention will remain drawn to it. Second, if we are engaged in such a task and are interrupted or pulled away, we’ll feel a discomforting, gnawing desire to get back to it. That desire—which also pushes us to return to incomplete narratives, unresolved problems, unanswered questions, and unachieved goals—reflects a craving for cognitive closure.

The first of these conclusions—that not completing an activity can make everything about it more memorable—helps explain certain research results I never would have understood otherwise. In one set of studies, people either watched or listened to television programming that included commercials for soft drinks, mouthwash, and pain relievers. Later, their memory for the commercials was tested. The greatest recall occurred for details of ads that the researchers stopped five to six seconds before their natural endings. What’s more, better memory for specifics of the unfinished ads was evident immediately, two days later, and (especially) two weeks later, demonstrating the holding power that a lack of closure possesses.
Perhaps even more bewildering at first glance are findings regarding college women’s attraction to certain good-looking young men. The women participated in an experiment in which they knew that attractive male students (whose photographs and biographies they could see) had been asked to evaluate them on the basis of their Facebook information. The researchers wanted to know which of these male raters the women, in turn, would prefer at a later time. Surprisingly, it wasn’t the guys who had rated them highest. Instead, it was the men whose ratings remained yet unknown to the women.

An additional piece of information allows us to understand this puzzling result. During the experiment, the men who kept popping up in the women’s minds were those whose ratings hadn’t been revealed, confirming the researchers’ view that when an important outcome is unknown to people, “they can hardly think of anything else.” And because, as we know, regular attention to something makes it seem more worthy of attention, the women’s repeated refocusing on those guys made them appear the most attractive.

THE MYSTERIOUS

Teaching at a university is a really great job for all kinds of reasons. Yet there are inherent difficulties. They surface not only in the ongoing challenges of proper topic coverage within one’s courses, consistently updated lectures, and reliably fair examination/grading procedures, but also in a more basic way: in getting students to devote their full attention to the lecture material so that they comprehend the concepts involved. It’s a traditional problem because, first of all, the average class period lasts upward (sometimes far upward) of forty-five minutes, which is a long time to count on concentrated focus. Besides, these are college students at or near their peaks of sexual attractiveness and sexual inclination. How could we expect them to deny systematic attention to the eye-catchingly outfitted, viscerally stimulating romantic possibilities all around them in favor of the physically fading academic at the front of the room whose unfashionable “look” is relentlessly similar from session to session?

A number of years ago, while looking elsewhere, I came across an effective way to reduce the problem. It involves employing a combination of the Zeigarnik effect and what Albert Einstein proclaimed as “the most beautiful thing we can experience” and simultaneously “the source of all true science and art.”

I was preparing to write my first book for a general audience. Before beginning, I decided to go to the library to get all the books I could find that had been written by academics for nonacademics. My strategy was to read the books, identify what I felt were the most and least successful sections, photocopy those sections, and arrange them in separate piles. I then reread the entries, looking for particular qualities that differentiated the piles. In the unsuccessful segments, I found the usual suspects: lack of clarity, stilted prose, use of jargon, and so on. In the successful group, I found pretty much what I expected, too: the polar-opposite traits of the weak sections plus logical structure, vivid examples, and humor. But I also found something I had not anticipated: the most successful of the pieces each began with a mystery story. The authors described a state of affairs that seemed perplexing and then invited the reader into the subsequent material as a way of dispatching the enigma.

In addition, there was something about this discovery that struck me as more than a little curious—something I’ll tee up, unashamedly, as a mystery: Why hadn’t I noticed the use of this technique before, much less its remarkably effective functioning in popularized scholarship? After all, I was at the time an avid consumer of such material. I had been buying and reading it for years. How could the recognition of this mechanism have eluded me the whole while? The answer, I think, has to do with one reason the technique is so effective: it grabs readers by the collar and pulls them into the material. When presented properly, mysteries are so compelling that the reader can’t remain an aloof outside observer of story structure and elements. In the throes of this particular literary device, one is not thinking of literary devices; one’s attention is magnetized to the mystery story because of its inherent, unresolved nature. I saw evidence of the force of the craving for closure born within mystery stories after I began using them in my classroom lectures.
I was still inexperienced enough that on one particular day I got the timing wrong, and the bell rang, ending the lecture before I’d revealed the solution to a puzzle I’d posed earlier. In every college course I’d ever taught, about five minutes before the scheduled end of a class period, some students start preparing to leave. The signs are visible, audible, and, consequently, contagious: pencils and notebooks are put away, laptops closed, backpacks zipped. But in this instance, not only were there no such preparations but also, after the bell rang, no one moved. In fact, when I tried to end the lecture there, students pelted me with protests. They would not let me stop until I had given them closure on the mystery. I remember thinking, “Cialdini, you’ve stumbled onto dynamite here!”

Besides mystery stories being excellent communication devices for engaging and holding any audience’s interest, I encountered another reason to use them: they were instructionally superior to the other, more common forms of teaching I had been using, such as providing thoroughgoing descriptions of course material or asking questions about the material. Whereas descriptions require notice and questions require answers, mysteries require explanations. When I challenged students to engage in the process of providing explanations to account for states of affairs that otherwise wouldn’t make sense, their test scores went up. Why? Because that process also provided them the best chance to understand the lecture material in a meaningful and enduring way.

An example is in order. A little-recognized truth I often try to convey to various audiences is that, in contests of persuasion, counterarguments are typically more powerful than arguments. This superiority emerges especially when a counterclaim goes beyond refuting a rival’s claim as mistaken or misdirected in the particular instance and instead shows the rival communicator to be an untrustworthy source of information generally. Issuing a counterargument demonstrating that an opponent’s argument is not to be believed because its maker is misinformed on the topic will usually succeed on that singular issue. But a counterargument that undermines an opponent’s argument by showing him or her to be dishonest in the matter will normally win that battle plus future battles with the opponent.

In keeping with the holding power of puzzles, I’ve learned that I can arrange for an audience to comprehend those teaching points more profoundly if I present them in mystery-story format. Of course, there are various ways to structure a mystery-story-based case for the potency of counterarguments. One that has worked well in my experience involves supplying the following information in the following sequence:

1. Pose the Mystery. Most people are familiar with legendary cigarette advertising campaign successes featuring Joe Camel, the Marlboro Man, and Virginia Slims’s “You’ve come a long way, baby.” But perhaps the most effective marketing decision ever made by the tobacco companies lies buried and almost unknown in the industry’s history: after a three-year slide of 10 percent in tobacco consumption in the United States during the late 1960s, Big Tobacco did something that had the extraordinary effect of ending the decline and boosting consumption while slashing advertising expenditures by a third. What was it?

2. Deepen the Mystery. The answer also seems extraordinary.
On July 22, 1969, during US congressional hearings, representatives of the major American tobacco companies strongly advocated a proposal to ban all of their own ads from television and radio, even though industry studies showed that the broadcast media provided the most effective routes to new sales. As a consequence of that unprecedented move, tobacco advertising has been absent from the airwaves in the United States since 1971.

3. Home In on the Proper Explanation by Considering (and Offering Evidence Against) Alternative Explanations. Could it be that American business interests, sobered by the 1964 Surgeon General’s report that detailed the deadly denouement of tobacco use, decided to forgo some of their profits to improve the well-being of fellow citizens? That appears unlikely, because representatives of the other major US business affected by the ban—the broadcast industry—filed suit in the US Supreme Court to overturn the law one month after it was enacted. Thus, it was only the tobacco industry that supported the restriction on its ads. Could it have been the tobacco company executives, then, who became suddenly concerned with the health of the nation? Hardly. They didn’t reduce their concentrated efforts to increase tobacco sales one whit. They merely shifted their routes for marketing their products away from the broadcast media to print ads, sports sponsorships, promotional giveaways, and movie product placements. For instance, one tobacco company, Brown & Williamson, paid for product placements in twenty-two films in just a four-year period.

4. Provide a Clue to the Proper Explanation. So, by tobacco executives’ logic, magazines, newspapers, billboards, and films were fair game; only the airwaves should be off-limits to their marketing efforts. What was special about the broadcast media? In 1967, the US Federal Communications Commission (FCC) had ruled that its “fairness doctrine” applied to the issue of tobacco advertising. The fairness doctrine required that equal advertising time be granted on radio and television—solely on radio and television—to all sides of important and controversial topics. If one side purchased broadcast time on these media, the opposing side must be given free time to counterargue.

5. Resolve the Mystery. That decision had an immediate impact on the landscape of broadcast advertising. For the first time, anti-tobacco forces such as the American Cancer Society could afford to air counterarguments to the tobacco company messages. They did so via counter-ads that disputed the truthfulness of the images displayed in tobacco company commercials. If a tobacco ad featured healthy, attractive, independent characters, the opposing ads would counterargue that, in fact, tobacco use led to diseased health, damaged attractiveness, and slavish dependence. During the three years that they ran, those anti-tobacco spots slashed tobacco consumption in the United States by nearly 10 percent. At first the tobacco companies responded predictably, increasing their advertising budgets to try to meet the challenge. But, by the rules of the fairness doctrine, for each tobacco ad, equal time had to be provided for a counter-ad that would take another bite out of industry profits. When the logic of the situation hit them, the tobacco companies worked politically to ban their own ads, but solely on the air where the fairness doctrine applied—thereby ensuring that the anti-tobacco forces would no longer get free airtime to make their counterargument.
As a consequence, in the year following the elimination of tobacco commercials on air, the tobacco companies witnessed a significant jump in sales coupled with a significant reduction in advertising expenditures.

6. Draw the Implication for the Phenomenon Under Study. Tobacco opponents found that they could use counterarguments to undercut tobacco ad effectiveness. But the tobacco executives learned (and profited from) a related lesson: one of the best ways to enhance audience acceptance of one’s message is to reduce the availability of strong counterarguments to it—because counterarguments are typically more powerful than arguments.

At this stage in the sequence, the teaching point about the superior impact and necessary availability of counterarguments is an explanation. As such, it produces more than recognition of basic facts (for example, “US tobacco companies argued successfully for a ban of their ads from TV and radio”) or answers to related questions (“What was the result? The companies witnessed a jump in sales and a reduction in advertising costs”). It produces an understanding of how certain psychological processes associated with the prepotency of counterarguments brought about both of those otherwise baffling events.

Notice that this type of explanation offers not just any satisfying conceptual account. Owing to its intrigue-fueled form, it carries a bonus. It’s part of a presentational approach constituted to attract audiences to the fine points of the information—because to resolve any mystery or detective story properly, observers have to be aware of all the relevant details. Think of it: we have something available to us here that not only keeps audience members focused generally on the issues at hand but also makes them want to pay attention to the details—the necessary but often boring and attention-deflecting particulars—of our material. What more could a communicator with a strong but intricate case want?

Oh, by the way, there’s a telling answer to the question of what Albert Einstein claimed was so remarkable it could be labeled as both “the most beautiful thing we can experience” and “the source of all true science and art.” His contention: the mysterious.
Mysterious attraction. Considered the most famous painting of all time, da Vinci’s Mona Lisa has raised unanswered questions from the start. Is she smiling? If so, what does the smile signify? And how did the artist produce so enigmatic an expression? Despite continuing debate, one thing is clear: the unresolved mysteries account for a significant portion of the attention the painting receives.

Part 2: PROCESSES: THE ROLE OF ASSOCIATION

Ch 7: The Primacy of Associations: I Link, Therefore I Think

In the family of ideas, there are no orphans. Each notion exists within a network of relatives linked through a shared system of associations. The physiology and biochemistry of the links—involving the brain’s neurons, axons, dendrites, synapses, neurotransmitters and the like—have been a source of fascination to many scientists. Alas, not to me. I’ve been less interested in the internal workings of these neuronal processes than in their external consequences—especially their consequences for the ways in which a precisely worded communication can alter human assessment and action.

THINKING IS LINKING

Still, for those like me intrigued by the persuasive properties of a communication, there is a crucial insight to be gained from the underlying structure of mental activity: the brain’s operations arise fundamentally and inescapably from raw associations. Just as amino acids can be called the building blocks of life, associations can be called the building blocks of thought.

In various influence training programs, it’s common to hear instructors advise participants that to convince others to accept a message, it is necessary to use language that manages the recipients’ thoughts, perceptions, or emotional reactions. That strikes me as partially right. We convince others by using language that manages their mental associations to our message. Their thoughts, perceptions, and emotional reactions merely proceed from those associations.

Nowhere are the implications for effective messaging starker than in a relatively recent research program designed to answer the question “What is language principally for?” The leader among the group of researchers pursuing this line of inquiry is the renowned psycholinguist Gün Semin, whose conclusion, in my view, comes down to this: the main purpose of speech is to direct listeners’ attention to a selected sector of reality. Once that is accomplished, the listeners’ existing associations to the now-spotlighted sector will take over to determine the reaction.

For issues of persuasion, this assertion seems to me groundbreaking. No longer should we think of language as primarily a mechanism of conveyance, as a means for delivering a communicator’s conception of reality. Instead, we should think of language as primarily a mechanism of influence, as a means for inducing recipients to share that conception or, at least, to act in accord with it. When describing our evaluation of a film, for instance, the intent is not so much to explain our position to others as to persuade them to it. We achieve the goal by employing language that orients recipients to those regions of reality stocked with associations favorable to our view.

Especially interesting are the linguistic devices that researchers have identified for driving attention to one or another aspect of reality. They include verbs that draw attention to concrete features of a situation, adjectives that pull one’s focus onto the traits (versus behaviors) of others, personal pronouns that highlight existing relationships, metaphors that frame a state of affairs so that it is interpreted in a singular way, or just particular wordings that link to targeted thoughts. We’ll benefit by considering the last, and simplest, of these devices first.

Speak No Evil, Leak No Evil

Not long ago, I came across an organization that, more self-consciously than any other I’ve encountered, has sought to shape the elements of its internal language to ensure that the mental associations to those language elements align with its corporate values. The company, SSM Health—a not-for-profit system of hospitals, nursing homes, and related entities—had asked me to speak at its annual leadership conference. I agreed, in part because of SSM’s stellar reputation. I knew it as the first health care provider to be designated a Malcolm Baldrige National Quality Award winner. The Baldrige Awards, traditionally presented each year by the president of the United States and determined by the nation’s Commerce Department, honor organizations that demonstrate stratospheric levels of performance and leadership in their fields. I wondered how SSM operated to attain such excellence and was glad to accept the invitation as a way to find out.

At the conference, I learned, for example, that the company’s website claim that “Employees drive success” was much more than a claim. Despite being subjected to a rigorous vetting process and imported from a thousand miles away, I was not a conference keynote speaker. On the day I spoke, the keynote presentation, labeled “Our People Keynote,” was delivered by seven employees who, one after another, described how they had participated in something exceptional on the job during the previous year. I also learned that on two additional days of the conference, fourteen other employees delivered similar “Our People Keynote” speeches. Of course, I realized that the practice of elevating twenty-one employees to keynote speaker status is unusual; and installing the practice as a follow-through on a stated belief in employee-driven exceptionalism is even more unusual. But by then, I wasn’t surprised to see it, as I’d already experienced how relentlessly SSM people walk their talk—literally, their talk.

A month earlier, on a call with organizers of the leadership conference intended to help me prepare my remarks, I spoke not to the usual one or two informants that organizations normally assign to the task but to six SSM employees. Although each contributed valuably, the spokesperson for the team was the conference chairperson, Steve Barney. Steve was amiable and warm throughout the process until the end, when his tone turned stern and he issued an admonition: “Your presentation is not to include bullet points, and you are not to tell us how to attack our influence problems.” When I protested that removing these elements would weaken my talk, Steve responded, “Oh, you can keep them in; you just have to call them something else.” My cleverly phrased comeback—I believe it was, “Uh . . . what?”—got Steve to elaborate: “As a health care organization, we’re devoted to acts of healing, so we never use language associated with violence. We don’t have bullet points; we have information points. We don’t attack a problem, we approach it.”

At the conference, I asked one of the participants, a physician, about the nonviolent-language policy. He responded with even more examples: “We’ve replaced business targets with business goals. And one of those goals is no longer to beat our competition; it’s to outdistance or outpace them.” He even offered an impassioned rationale: “Can’t you see how much better it is for us to associate ourselves with concepts like ‘goal’ and ‘outdistance’ than ‘target’ and ‘beat’?” In truth, I couldn’t.
I was skeptical that such small wording shifts would affect the thinking and conduct of individuals within the SSM system in any meaningful fashion. But that was then. I’m a convert now. My response to SSM’s strict language policy transformed from “Geez, this is silly” to “Geez, this is smart.” The conversion occurred after I undertook a concentrated review of an astounding body of research findings.

Incidental (but Not Accidental) Exposure to Words

He who wants to persuade should put his trust not in the right argument, but in the right word. —Joseph Conrad

Staying within the realm of violent language for the moment, consider the results of an experiment that exposed people to hostile words and then measured their subsequent aggressiveness. In the study, subjects completed a task requiring them to arrange thirty sets of scrambled words to make coherent sentences. For half of the subjects, when the words they were given were arranged correctly, they resulted mostly in sentences associated with aggression; for example, “hit he them” became “he hit them.” For the other half of the subjects, when the words they were given were arranged correctly, they resulted mostly in sentences with no connections to aggression; for example, “door the fix” became “fix the door.” Later, all the subjects participated in another task in which they had to deliver twenty electric shocks to a fellow subject and got to decide how painful the required shocks would be. The results are alarming: prior exposure to the violence-linked words led to a 48 percent jump in selected shock intensity.

In light of such findings, nonviolent language requirements make perfect sense for SSM. As a health care organization, it should operate within the bounds of the fundamental principle of medical ethics, “Above all, do no harm.” But note that, as a high-performance health care organization, SSM did not prohibit the use of achievement-related words. Instead, it replaced such words possessing menacing associations (target, beat) with comparable words that did not (goal, outdistance). Perhaps this practice reveals the belief of SSM’s leadership that, just as violence-laden language could lead to elevated harm doing and therefore should be eliminated, achievement-laden language could lead to elevated performance and therefore should be retained. If SSM leaders do hold that belief, they’d be right. Multiple studies have shown that subtly exposing individuals to words that connote achievement (win, attain, succeed, master) increases their performance on an assigned task and more than doubles their willingness to keep working at it.

Evidence like this has changed my mind about the worth of certain kinds of posters that I’ve occasionally seen adorning the walls of business offices. Call centers appear to be a favored location. The signs usually carry a single word in capital letters (OVERCOME, SUCCEED, PERSEVERE, ACHIEVE) designed to spur employees toward greater accomplishments. Sometimes the word is presented alone; sometimes it’s accompanied by a related image such as a runner winning a race; sometimes just the image is presented. In any of their forms, I’d always thought it only laughably likely that signs of this sort would work. But once again—this time thanks to some Canadian researchers—that was then. I’ve since become aware of a project those researchers undertook to influence the productivity of fund-raisers who operated out of a call center. At the start of callers’ work shifts, all were given information designed to help them communicate the value of contributing to the cause for which they were soliciting (a local university). Some of the callers got the information printed on plain paper. Other callers got the identical information printed on paper carrying a photo of a runner winning a race. It was a photo that had previously been shown to stir achievement-related thinking.
Remarkably, by the end of their three-hour shifts, the second sample of callers had raised 60 percent more money than their otherwise comparable coworkers. It appears, then, that initial incidental exposure either to simple words or simple images can have a pre-suasive impact on later actions that are merely associated with the words or images. Let’s explore some influence-related implications, beginning with words of a special kind.

Winners incite winning. This photo increased both the achievement-related thoughts and the productivity of individuals exposed to it. John Gichigi/Getty Images

Metaphor Is a Meta-Door (to Change)

If you want to change the world, change the metaphor. —Joseph Campbell

Since Aristotle’s Poetics (circa 350 BCE), communicators have been advised to use metaphor to get their points across. They’ve been told that an effective way to convey a somewhat elusive concept to an audience is to describe it in terms of another concept that the audience can recognize readily. Long-distance runners, for instance, recount the experience of being unable to continue a race as “hitting the wall.” Of course, there is no real wall involved. But certain characteristics of a physical barrier—it blocks further passage, it can’t be dispatched easily, it can’t be denied—have enough in common with the runners’ bodily sensations that the label delivers useful meaning.

Yet the use of metaphor has its critics, who complain that it is often misleading. They point out that when one thing (such as the inability to take another step in a race) is understood in terms of another (like a wall), some genuine overlap between the two might be revealed, but the correspondence is usually far from perfect. For instance, a physical wall normally owes its presence to the actions of someone other than the person who hits it, whereas a runner’s wall normally owes its presence to the actions of the runner—whose training (or lack thereof) and race pacing led to the problem. So runners employing the wall metaphor might be doing more than choosing a frame designed to communicate the feeling of motoric collapse. For strategic purposes, they might be choosing a frame designed to depict the failing as external to them, as not of their doing, and, thus, not their fault.

Recall that new psycholinguistic analysis suggests that the main function of language is not to express or describe but to influence—something it does by channeling recipients to sectors of reality pre-loaded with a set of mental associations favorable to the communicator’s view. If so, we can see why metaphor, which directs people to think of one thing in terms of their associations to a selected other thing, possesses great potential as a linguistic device. Indeed, for well over a half century, researchers have been documenting the superior impact of metaphor, applied properly. More recently, though, emphasis on the transfer of associations inherent in metaphor has generated an eye-opening array of persuasive effects.

Suppose, for instance, that you are a political consultant who has been hired by a candidate for mayor of a nearby city to help her win an election in which a recent surge in crime is an important issue. In addition, suppose that this candidate and her party are known for their tough stance on crime that favors policies designed to capture and incarcerate lawbreakers. The candidate wants your counsel on what she could do to make voters believe that her approach to the problem is correct.
With an understanding of the workings of metaphoric persuasion, your advice could be swift and confident: in any public pronouncements on the topic, she should portray the crime surge as a wild beast rampaging through the city that must be stopped. Why? Because to bring a wild beast under control, it’s necessary to catch and cage it. In the minds of her audiences, these natural associations to the proper handling of rampaging animals will transfer to the proper handling of crime and criminals.

Now imagine instead that the candidate and her party are known for a different approach to the problem: one that seeks to halt the growth of crime by treating its societal causes such as joblessness, lack of education, and poverty. In this instance—still on the basis of an understanding of metaphoric persuasion—your advice could also be swift and confident: in all her public pronouncements on the topic, the candidate should portray the crime surge as a spreading virus infecting the city that must be stopped. Why? Because to bring a virus under control, it’s necessary to remove the unhealthy conditions that allow it to breed and spread. These disease-related associations should now frame the way citizens think about how best to deal with their crime problem.

If other advisers within the candidate’s campaign scoff at the metaphor-based rationale for your plan, calling it simplistic, you might ask them to consider some relevant data: Stanford University researchers exposed a randomly chosen set of online readers to a news account of a three-year rise in city crime rates that depicted crime as a ravaging beast. Other randomly chosen readers saw the same news account and statistics except for one word: the criminality was depicted as a ravaging virus. Later, the survey asked them all to indicate their preferred solutions. In the most precise analysis of the results, readers who initially saw crime portrayed as a beast recommended catch-and-cage solutions rather than remove-unhealthy-conditions solutions. But the opposite pattern emerged among readers who initially saw crime portrayed as a virus. Remarkably, the size of the difference due to the change of a single word (22 percent) was more than double the size of preferred-solution differences that were naturally due to the readers’ gender (9 percent) or political party affiliation (8 percent). When predicting voter preferences, political campaigns routinely take into account demographic factors such as gender and party affiliation. Rarely, though, do they consider the potentially greater predictive power of a pre-suasively deployed metaphor.

If the mayor’s other advisers are of the sort that dismisses findings from controlled scientific research as irrelevant to real-world settings, you could offer them a form of evidence from the real world. Maximally effective salespeople understand the power of metaphor. You might ask the advisers to consider the case of high school dropout Ben Feldman, who, despite never doing business outside a sixty-mile radius of his little hometown of East Liverpool, Ohio, became the greatest life insurance salesman of his time (and perhaps of all time). Indeed, at his peak, in the 1970s and 1980s, he sold more life insurance by himself than 1,500 of the 1,800 insurance agencies in the United States.
In 1992, after he had been admitted to a hospital because of a cerebral hemorrhage, his employer, New York Life, decided to honor the great salesman’s fifty years with the company by declaring “Feldman February”—a month in which all its agents would compete to get the largest total in new sales. Who won? Ben Feldman. How? By calling prospects from his hospital bed, where the eighty-year-old closed $15 million in new contracts in twenty-eight days.

That kind of relentless drive and commitment to the job accounts for some, but not all, of the man’s phenomenal success. According to chroniclers of that success, he never pressured reluctant prospects into a sale. Instead, he employed a light (and enlightened) touch that led them smoothly toward a purchase. Mr. Feldman was a master of metaphor. In his portrayal of life’s end, for instance, people didn’t die, they “walked out” of life—a characterization that benefitted from associations to a breach in one’s family responsibilities that would need to be filled. He was then quick to depict life insurance as the (metaphorically aligned) solution: “When you walk out,” he would say, “your insurance money walks in.” When exposed to this metaphoric lesson in the moral responsibility of buying life insurance, many a customer straightened up and walked right.

Although metaphors require a language-based link between two things to work, once that link is in place, metaphoric persuasion can be triggered nonverbally. For instance, in English and many other languages, the concept of weight—heaviness—is linked metaphorically to the concepts of seriousness, importance, and effort. For that reason, (1) raters reading a job candidate’s qualifications attached to a heavy (versus light) clipboard come to see the applicant as a more serious contender for the job; (2) raters reading a report attached to a heavy clipboard come to see the topic as more important; and (3) raters holding a heavy object (requiring more effort of them) put more effort into considering the pros and cons of an improvement project for their city. This set of findings raises the specter that manufacturers’ drive to make e-readers as light as possible will lessen the seeming value of the presented material, the perceived intellectual depth of its author, and the amount of energy readers will be willing to devote to its comprehension.

Comparable findings have appeared in studies of another arena of human judgment: personal warmth, where individuals who have held a warm object briefly—for example, a cup of hot (versus iced) coffee—immediately feel warmer toward, closer to, and more trusting of those around them. Hence, they become more giving and cooperative in the social interactions that follow shortly afterward. It’s evident, then, that powerful metaphoric associations can be pre-suasively activated without a word; touch is enough.

More Hot Stuff

Because negative associations can be transferred as easily as positive ones, spontaneously shared meaning can be as much a nightmare as a dream for communicators. A few years ago, a white government official received so much criticism that he resigned his position after using the word niggardly to describe how he planned to handle his office’s tight budget. The word means “miserly” or “reluctant to spend,” but, plainly, another family of associations provoked the negative reaction. For a fundamentally related reason, used-car salespeople are taught not to describe their cars as “used”—which links to notions of wear and tear—but to say “preowned,” which bridges to thoughts of possession and ownership. Similarly, information technology providers are counseled against telling customers the “cost” or “price” of their offerings, which are terms associated with the loss of resources; rather, they are to speak of the “purchase” or “investment” amount involved—terms that make contact with the concept of gain. The pilot and flight attendant training programs of some commercial airlines now include hints on how to avoid death-related language when communicating to passengers before or during a flight: the scary-sounding “your final destination” is to be trimmed to “your destination,” and “terminal” is to be replaced with “gate” whenever possible.

It goes without saying that savvy marketers not only want to avoid coupling their products and services to elements with negative associations; they want to play both defense and offense by eliminating connections to factors that carry the most unfavorable connotations while maximizing connections to those with the most favorable ones. What are those most intensely evaluated elements? There is much to say in chapter 13 about the concept that people respond to with the greatest passion on the negative side of the ledger, so its coverage will be deferred until then. But to reduce any resulting Zeigarnik-effect tensions, a brief advance notice is in order: the concept pre-loaded with associations most damaging to immediate assessments and future dealings is untrustworthiness, along with its concomitants, such as lying and cheating.

Our hotties, ourselves.

On the upside of things, though, the factor with the most favorable impact in the realm of human evaluation is one we have encountered before: the self, which gains its power from a pair of sources. Not only does it draw and hold our attention with nearly electromagnetic strength, thereby enhancing perceived importance; it also brings that attention to an entity that the great majority of us shower with positive associations. Therefore, anything that is self-connected (or can be made to seem self-connected) gets an immediate lift in our eyes.

Sometimes the connections can be trivial but can still serve as springboards to persuasive success. People who learn that they have a birthday, birthplace, or first name in common come to like each other more, which leads to heightened cooperativeness and helpfulness toward each other. Potential customers are more willing to enroll in an exercise program if told they have the same date of birth as the personal trainer who’ll be providing the service. Learning of such connections online offers no immunity: young women are twice as likely to “friend” a man who contacts them on Facebook if he claims to have the same birthday.
The small-business loans to citizens of developing nations brokered through a microfinance website are significantly more likely to be offered by loan providers to recipients whose names share their initials. Finally, researchers studying this general tendency to value entities linked to the self (called implicit egoism) have found that individuals prefer not just people but also commercial products—crackers, chocolates, and teas—with names that share letters of the alphabet with their own names.

To take advantage of this affinity, in the summer of 2013 the British division of Coca-Cola replaced its own package branding with one or another of 150 of the most common first names in the United Kingdom—doing so on 100 million packs of their product! What could justify the expense? Similar programs in Australia and New Zealand had boosted sales significantly in those regions the year before. When finally tried in the United States, it produced the first increase in Coke sales in a decade.

Even organizations can be susceptible to the tendency to overvalue things that include elements of their names. In 2004, to celebrate the fiftieth anniversary of rock and roll, Rolling Stone magazine issued a list of the five hundred greatest songs of the rock era. The two highest-ranked songs, as compiled and weighted by Rolling Stone editors, were “Like a Rolling Stone” by Bob Dylan and “(I Can’t Get No) Satisfaction” by the Rolling Stones. At the time of this writing, I checked ten comparable lists of the greatest rock-and-roll songs, and none listed either of Rolling Stone’s picks as its number one or number two choice.

I Am We, and We Are Number One.

When considering the persuasive implications of implicit egoism, there’s an important qualification to be taken into account. The overvalued self isn’t always the personal self. It can also be the social self—the one framed not by the characteristics of the individual but by the characteristics of that individual’s group. The conception of self as residing outside the individual and within a related social unit is particularly strong in some non-Western societies whose citizens have a special affinity for things that appear connected to a collectively constructed self. An analysis of two years of magazine ads in the United States and South Korea found that (1) in South Korea, the ads attempted to link products and services mostly to the reader’s family or group, whereas in America it was mostly to the individual reader; and (2) in terms of measured impact, group-linked ads were more effective in South Korea, while ads linked to the individual were more effective in the United States.

The recognition of what Eastern-world audiences value furnished South Korea’s government with a wise negotiating tactic to use in dealing with Afghan militants. It was a tactic that, although simple, had been almost absent from the approach of Western negotiators in Afghanistan up to that point and is still underused there by Western powers. In July 2007 the Afghan Taliban kidnapped twenty-one South Korean church-sponsored aid workers, holding them hostage and killing two as a savage initial show of will. Talks designed to free the remaining nineteen went so badly that the kidnappers named the next two hostages they planned to murder, prompting the head of the South Korean National Intelligence Service, Kim Man-bok, to fly in to try to salvage the negotiations.

He brought a plan. It was to connect the South Korean bargaining team to something central to the group identity of the militants: their language. Upon his arrival, Kim replaced his head negotiator, whose appeals had been transmitted through an Afghan translator, with a South Korean representative who spoke fluent Pashtun. According to Kim, who won the hostages’ swift release, “The key in the negotiations was language.” However, it was not because of any greater precision or lucidity of the verbal exchanges involved but because of something more primitive and pre-suasive. “When our counterparts saw that our negotiator was speaking their language, Pashtun, they developed a kind of strong intimacy with us, and so the talks went well.”

“Easy” Does It.

Besides the self, there is another concept possessed of decidedly positive associations worth examining because of how communicators can fumble the opportunity to harness those associations effectively. It is “easy.” There is much positivity associated with getting something with ease, but in a particular way. When we grasp something fluently—that is, we can picture or process it quickly and effortlessly—we not only like that thing more but also think it is more valid and worthwhile. For this reason, poetry possessing rhyme and regular meter evokes something more than greater favor from readers; it also evokes perceptions of higher aesthetic value—the opposite of what the proponents of free verse and the gatekeepers of modern poetry journals appear to believe. Researchers in the field of cognitive poetics have even found that the fluency-producing properties of rhyme lead to enhanced persuasion. The statement “Caution and measure will win you riches” is seen as more true when changed to “Caution and measure win you treasure.” There’s a mini-lesson here for persuasive success: to make it climb, make it rhyme.

Within the domain of general attraction, observers have a greater liking for those whose facial features are easy to recognize and whose names are easy to pronounce. Tellingly, when people can process something with cognitive ease, they experience increased neuronal activity in the muscles of their face that produce a smile. On the flip side, if it’s difficult to process something, observers tend to dislike that experience and, accordingly, that thing. The consequences can be striking. An analysis of the names of five hundred attorneys at ten US law firms found that the harder an attorney’s name was to pronounce, the lower he or she stayed in the firm’s hierarchy. This effect held, by the way, independent of the foreignness of the names: a person with a difficult-to-pronounce foreign name would likely be in an inferior position to one with an easy-to-pronounce foreign name. A similar effect occurs when observers encounter hard-to-pronounce drugs or food additives; they become less favorable toward the products and more wary of their potential risks. So why do nutritional supplement and pharmaceutical companies give their products names that are difficult to pronounce and spell, such as Xeljanz and Farxiga? Maybe they are trying to communicate the family of plants or chemicals the product comes from. If so, it seems a poor trade-off.

A lack of fluency in business communication can be problematic in additional ways. I can’t count the number of times I’ve sat at restaurant tables struggling to read extended descriptions of menu items presented in almost illegible, flourish-filled fonts or in inadequate light—or both. You would think that restaurateurs, hoping to tempt us, would know better, as research has revealed that the food detailed in difficult-to-process descriptions is seen as less tempting and that difficult-to-read claims are, in general, seen as less true. But perhaps the most damaging failure of business professionals to heed these effects occurs within stock exchanges. One analysis of eighty-nine randomly selected companies that began trading shares on the New York Stock Exchange between 1990 and 2004 found that although the effect dwindled over time, those companies with easier-to-pronounce names outperformed those with difficult-to-pronounce names.
A comparable analysis of easy-to-pronounce three-letter stock ticker codes (such as KAR) versus difficult-to-pronounce codes (such as RDO) on the American Stock Exchange produced comparable results.

When names are easy to pronounce, early profits are easy to announce.

On US stock exchanges, the initial value of a company’s shares was greater if the company’s name (top graph) or stock ticker code (bottom graph) was easy to pronounce. Courtesy of Adam Oppenheimer and the National Academy of Sciences, U.S.A.

If it seems from this evidence that we are relegated to a discomforting pawnlike status in many ordinary situations, much of the research covered so far in this book indicates that there is good reason for the concern. Must we resign ourselves, then, to being moved around haphazardly on the chessboard of life by the associations to whatever words, symbols, or images we happen to encounter? Fortunately, no. Provided that we understand how associative processes work, we can exert strategic, pre-suasive control over them. First, we can choose to enter situations that possess the set of associations we want to experience. When we don’t have such available choices, we can frontload impending situations with cues carrying associations that will send us in personally gainful directions. We consider how next.

Ch 8: Persuasive Geographies: All the Right Places, All the Right Traces

WHAT’S ALREADY IN US

It’s easy for some feature of the outside world to redirect our attention to an inner feature—to a particular attitude, belief, trait, memory, or sensation. As I’ve reported, there are certain consequential effects of such a shift in focus: within that moment, we are more likely to grant the focal factor importance, assign it causal status, and undertake action associated with it.

Have you ever attended an arts performance and been disturbed by another audience member’s loud coughs? In addition to the distracting noise, there’s another reason that performers of all sorts—stage actors, singers, musicians, dancers—hate the sound of even one cough: it can become contagious. Although there is solid scientific proof of this point, the most dramatic testament comes from the ranks of the artists. The author and playwright Robert Ardrey described how the offensive sequence operates in a theater: “One cougher begins his horrid work in an audience, and the cough spreads until the house is in bedlam, the actors in a rage, and the playwright in retreat to the nearest saloon.”

This kind of contagion isn’t confined to gatherings of playgoers. In one case, two hundred attendees at a newspaper editorial writers’ dinner were overcome by coughing fits after the problem began in one corner of the room and spread so pervasively that officials had to evacuate everyone, including the attorney general of the United States at the time, Janet Reno. Despite rigorous testing of the room, no physical cause for the coughing spasms could be found. Every year, thousands of similar incidents, involving sundry symptoms besides coughing, take place around the world. Consider a few representative examples:

• In Austria, the news media reported several sightings of a poisonous variety of spider whose bite produced a combination of headache and nausea. Residents flooded hospitals certain that they had been bitten. Those who were wrong outnumbered those who were right by 4,000 percent.

• When a Tennessee high school teacher reported that she smelled gas in her classroom and felt dizzy and nauseous, an array of individuals—including students, other teachers, and staff—started experiencing the same symptoms. A hundred people from the school went to hospital emergency rooms that day with symptoms associated with the gas leak, as did seventy-one more when the school reopened five days later. No gas leak was found on either day—or ever.

• Citizens of two small Canadian towns located near oil refineries learned from an epidemiological study that cancer rates in their communities were 25 percent higher than normal, which led residents to begin perceiving escalations in a variety of health problems associated with exposure to toxic chemicals. However, the validity of these perceptions was undercut when the study’s authors issued a retraction months later. The elevated incidence of cancer in the communities had initially been reported in error due to a statistical miscalculation.

• In Germany, audience members listening to a lecture on dermatological conditions typically associated with itchy skin immediately felt skin irritations of their own and began scratching themselves at an increased rate.
This last example offers the best indication of what’s going on, as it seems akin to the well-known occurrence of “medical student syndrome.” Research shows that 70 percent to 80 percent of all medical students are afflicted by this disorder, in which they experience the symptoms of whatever disease they happen to be learning about at the time and become convinced that they have contracted it. Warnings by their professors to anticipate the phenomenon don’t seem to make a difference; students nonetheless perceive their symptoms as alarmingly real, even when experienced serially with each new “disease of the week.” An explanation that has long been known to medical professionals tells us why. As the physician George Lincoln Walton wrote in 1908: “Medical instructors are continually consulted by students who fear that they have the diseases they are studying. The knowledge that pneumonia produces pain in a certain spot leads to a concentration of attention upon that region [italics added], which causes any sensation there to give alarm. The mere knowledge of the location of the appendix transforms the most harmless sensations in that region into symptoms of serious menace.”

What are the implications for achieving effective influence—in this case effective self-influence? Lying in low-level wait within each of us are units of experience that can be given sudden standing and force if we just divert our attention to them. There are the constituents of a cough in all of us, and we can activate them by concentrating on the upper half of the lungs, where coughs start. Try it; you’ll see. The same applies to the constituents of dizziness, nausea, or headache, which we can activate by focusing on a spot in the middle of the brain or at the top of the stomach or just above the eyes, respectively. But those units of experience waiting within us also include advantageous attitudes, productive traits, and useful capacities that we can energize by merely channeling attention to them instead.

Let’s explore how that might work for our most coveted unit of experience: felt happiness. Although cherished for its own sake, happiness provides an additional benefit. It doesn’t just flow from favorable life circumstances, it also creates them—including higher levels of physical health, psychological well-being, and even general success. There’s good justification, then, for determining how to increase our joyfulness through self-influence. But to do so, first we have to unravel a mystery from the arena of happiness studies.

The Positivity Paradox

Suppose that following an extensive physical checkup, your doctor returned to the examination room conveying undeniable news of a medical condition that will impair your health in multiple ways. Its relentless advance will erode your ability to see, hear, and think clearly. Your enjoyment of food will be undercut by the combination of a dulled sense of taste and a compromised digestive system that will limit your dietary choices to mostly bland options. You’ll lose access to many of your favorite activities as the condition saps your energy and strength, eventually making you unable to drive or even to walk on your own. You’ll become increasingly vulnerable to an array of other afflictions, such as coronary heart disease, stroke, atherosclerosis, pneumonia, arthritis, and diabetes.

You don’t have to be a health professional to identify this progressive medical condition. It’s the process of growing old.
The undesirable outcomes of aging vary from person to person, but, on average, elderly individuals experience significant losses in both physical and mental functioning. Yet they don’t let the declines undermine their happiness. In fact—and here’s the paradox—old age produces the opposite result: the elderly feel happier than they did when younger, stronger, and healthier. The question of why this paradox exists has intrigued camps of lifespan researchers for decades. After considering several possibilities, one set of investigators, led by the psychologist Laura Carstensen, hit upon a surprising answer: when it comes to dealing with all the negativity in their lives, seniors have decided that they just don’t have time for it, literally. They’ve come to desire a time of emotional contentment for their remaining years, and they take deliberate steps to achieve it—something they accomplish by mastering the geography of self-influence.

The elderly go more frequently and fully to the locations inside and outside themselves populated by mood-lifting personal experiences. To a greater extent than younger individuals, seniors recall positive memories, entertain pleasant thoughts, seek out and retain favorable information, search for and gaze at happy faces, and focus on the upsides of their consumer products. Notice that they route their travels to these sunny locales through a highly effective mental maneuver we’ve encountered before: they focus their attentions on those spots. Indeed, the seniors with the best “attention management” skills (those good at orienting to and staying fixed on positive material) show the greatest mood enhancement. Those lacking such skills, however, can’t use strong attentional control to extricate themselves from their tribulations. They are the ones who experience mood degeneration as they age. I’d bet that they are also the ones who account for the misguided stereotype of the elderly as irascible and sour—because the grumpy are just more conspicuous than the contented.

I once asked Professor Carstensen how she first got the idea that many elderly have decided to make the most of their remaining days by concentrating on the positive over the negative. She reported having interviewed a pair of sisters living in a senior home and asking them how they dealt with various negative events—for example, the sickness and death they witnessed regularly all around them. They replied in unison, “Oh, we don’t have time for worrying about that.” She recalled being puzzled by the answer because, as retirees with no jobs, housekeeping tasks, or family responsibilities, they had nothing but time in their days. Then, with an insight that launched her influential thinking on the topic of life span motivation, Carstensen recognized that the “time” the sisters referred to wasn’t the amount available to them in any one day but in the rest of their lives. From that perspective, allocating much of their remaining time to unpleasant events didn’t make sense to the ladies.

What about the rest of us, though? Must we wait for advanced age to bring about a happy outlook on life? According to research in the field of positive psychology, no. But we do have to alter our tactics to be more like seniors’. Luckily for us, someone has already prepared a list of ways to go about it pre-suasively. Dr. Sonja Lyubomirsky is not the first researcher to study happiness.
Yet, in my view, she has made noteworthy contributions to the subject by choosing to investigate a key question more systematically than anyone else. It’s not the conceptual one you might think: “What are the factors associated with happiness?” Instead, it’s a procedural one: “Which specific activities can we perform to increase our happiness?”

As a child, Lyubomirsky came to the United States as part of a family of Russian immigrants who, in the midst of meager economic circumstances, had to deal with the relentless additional problems of fitting into an unfamiliar and sometimes challenging culture. At the same time, this new life brought many favorable and gratifying features. Looking back on those days, she wondered what actions family members could have taken to disempower the dispiriting emotions in favor of the uplifting ones. “It wasn’t all bleak,” she wrote in her 2013 book, The Myths of Happiness, “but if I knew then what I know now, my family would have been better positioned to make the best of it.”

That statement made me curious to know what she does know now on the matter. I phoned and asked what she could say with scientific confidence about the steps people can take to make their lives better, emotionally. Her answer offered good news and bad news to anyone interested in securing a gladness upgrade. On the one hand, she specified a set of manageable activities that reliably increase personal happiness. Several of them—including the top three on her list—require nothing more than a pre-suasive refocusing of attention:

1. Count your blessings and gratitudes at the start of every day, and then give yourself concentrated time with them by writing them down.
2. Cultivate optimism by choosing beforehand to look on the bright side of situations, events, and future possibilities.
3. Negate the negative by deliberately limiting time spent dwelling on problems or on unhealthy comparisons with others.

There’s even an iPhone app called Live Happy that helps users engage in certain of these activities, and their greater happiness correlates with frequent use. On the other hand, the process requires consistent work. “You can make yourself happier just like you can make yourself lose weight,” Dr. Lyubomirsky assured me. “But like eating differently and going to the gym faithfully, you have to put in the effort every day. You have to stay with it.”

That last comment seemed instructive about how the elderly have found happiness. They don’t treat the most hospitable places of their inner geographies the way visitors or sightseers would. Instead, they’ve elected to stay in those vicinities mentally. They’ve relocated to them psychologically for the same reason they might move physically to Florida or Arizona: for the warming climates they encounter every morning.

When I asked Dr. Lyubomirsky why, before attaining senior status, most people have to work so hard at becoming happier, she said her team hadn’t uncovered that answer yet. But I think it might be revealed already in the research of Professor Carstensen, who, you’ll recall, found that the elderly have decided to prioritize emotional contentment as a main life goal and, therefore, to turn their attentions systematically toward the positive. She also found that younger individuals have different primary life goals that include learning, developing, and striving for achievement.
Accomplishing those objectives requires a special openness to discomforting elements: demanding tasks, contrary points of view, unfamiliar people, and owning mistakes or failures. Any other approach would be maladaptive. It makes sense, then, that in early and middle age, it can be so hard to turn our minds away from tribulations. To serve our principal aims at those times, we need to be receptive to the real presence of negatives in order to learn from and deal with them. The problem arises when we allow ourselves to become mired in the emotions they generate; when we let them lock us into an ever-cycling loop of negativity. That’s where Dr. Lyubomirsky’s activities list can be so helpful. Even if we’re not ready to take up full-time residence in our most balmy psychological sites, we can use those attention-shifting activities to visit regularly and break the sieges of winter.

The field of happiness studies has shown us that relatively simple attention-based tactics can help manage our emotional states. Can we use similar methods to manage other desirable states, such as those involving personal achievement and professional success?

When I first went to graduate school, I was among an incoming class of six students who had been recruited as part of an established social psychology doctoral program. A sweet guy named Alan Chaikin inspired early awe in the rest of us because word had spread of his remarkable achievement on the Graduate Record Examination (GRE)—the standardized test all students have to take before applying to most graduate programs. He scored in the top 1 percent of all test takers around the world in each of the three sections of the exam: verbal aptitude, mathematical proficiency, and analytical reasoning. Moreover, he scored in the top 1 percent of all psychology students in knowledge of that subject. Some of us had hit such scores on one or two of the sections, but rarely three and never all four. So we were ready to be consistently surprised by the level and range of Alan’s intellect. And indeed we were, although not in the manner we’d expected.

Alan was a smart man. But it became apparent after a while that he wasn’t any smarter than the rest of us, in a general sense. He wasn’t any better at generating good ideas or spotting flawed arguments or making perceptive comments or offering clarifying insights. What he was better at was taking standardized tests—in particular the Graduate Record Exam. I shared an office with him during our first year and grew close enough to him to ask how he had done so stunningly well compared with the rest of our class. He laughed, but when I assured him it was a serious question, he told me without hesitation that he thought his relative success had to do with two main differentiators.

First, he was a speed reader. He had taken a course the year before that taught him how to move rapidly through written material without missing its important features. That gave him a sizable advantage on the GRE because, at the time, it was scored in terms of the raw number of items a student answered correctly. Alan realized that by harnessing his speed-reading skills, he could zip through all of the large set of items in any given section of the test and, on a first pass, immediately answer each whose solution was simple or already known to him. After piling up every easy point this way, he could then go back and address the tougher items.
Other students would almost always advance from one item to the next, getting bogged down in the difficult ones, which carried the twin penalties of producing incorrect answers and preventing contact with easier questions they would then never reach before time ran out. Most standardized tests, including the GRE, have since been redesigned in ways that no longer allow Alan’s speed-reading technique to provide a competitive edge. Hence, it won’t benefit current test takers. But that’s not the case for his other (pre-suasive) tactic.

Alan told me that just prior to taking any standardized exam, he’d spend systematic time “getting psyched up” for it. He described a set of activities that could have come from a modified version of Dr. Lyubomirsky’s list. He didn’t take up the minutes before the exam room doors opened as I always had: notes in hand, trying to cram every piece of information I was unsteady about into my brain. He knew, he said, that focusing on material that was still vexing him would only elevate his anxieties. Instead, he spent that crucial time consciously calming his fears and simultaneously building his confidence by reviewing his past academic successes and enumerating his genuine strengths. Much of his test-taking prowess, he was convinced, stemmed from the resultant combination of diminished fear and bolstered confidence: “You can’t think straight when you’re scared,” he reminded me, “plus, you’re much more persistent when you’re confident in your abilities.” I was struck that he could create an ideal state of mind for himself not just because he understood where, precisely, to focus his attention but also because, as a savvy moment maker, he understood how to do it pre-suasively, immediately before the test.

So Alan was smarter than the rest of us in a meaningful way. His was a particular type of smart: a kind of tactical intelligence that allowed him to turn common general knowledge—for example, that fear worsens test takers’ performance but earned confidence improves it—into specific applications with desirable outcomes. That’s a useful sort of intelligence. Let’s follow Alan’s lead and see how we can do the same—this time to move others rather than ourselves toward desired outcomes.

Ch 9: The Mechanics of Pre-Suasion: Causes, Constraints, and Correctives

If/When-Then Plans

The recognition that pre-suasive associations are manufacturable can lead us to much personal profit, even if we are not savvy advertising copywriters or renowned Russian scientists. From time to time we all set objectives for ourselves, targets to hit, standards to meet and exceed. But too often, our hopes go unrealized as we fail to reach the goals. There’s a well-studied reason for the difficulty: although generating an intention is important, that process alone isn’t enough to get us to take all the steps necessary to achieve a goal. Within health, for instance, we translate our good intentions into any type of active step only about half the time. The disappointing success rates have been traced to a pair of failings. First, besides sometimes forgetting about an intention—let’s say, to exercise more—we frequently don’t recognize opportune moments or circumstances for healthy behaviors, such as taking the stairs rather than the elevator. Second, we are often derailed from goal strivings by factors—such as especially busy days—that distract us from our purpose.

Fortunately, there is a category of strategic self-statements that can overcome these problems pre-suasively. The statements have various names in scholarly usage, but I’m going to call them if/when-then plans. They are designed to help us achieve a goal by readying us (1) to register certain cues in settings where we can further our goal, and (2) to take an appropriate action spurred by the cues and consistent with the goal. Let’s say that we aim to lose weight. An if/when-then plan might be “If/when, after my business lunches, the server asks if I’d like to have dessert, then I will order mint tea.”

Other goals can also be effectively achieved by using these plans. When epilepsy sufferers who were having trouble staying on their medication schedules were asked to formulate an if/when-then plan—for example, “When it is eight in the morning, and I finish brushing my teeth, then I will take my prescribed pill dose”—adherence rose from 55 percent to 79 percent. In one particularly impressive demonstration, hospitalized opiate drug addicts undergoing withdrawal were urged to prepare an employment history by the end of the day to help them get a job after their release. Some were asked to form an if/when-then plan for compiling the history, whereas those in the control group were not asked to do so. A relevant if/when-then plan might be, “If/when lunch is over and space has become available at the lunchroom table, then I will start writing my employment history there.” By day’s end, not one person in the control group had performed the task, which might not seem surprising—after all, these were drug addicts in the process of opiate withdrawal! Yet at the end of the same day, 80 percent of those in the relevant if/when-then treatment group had turned in a completed job résumé.

Additionally impressive is the extent to which if/when-then plans are superior to simple intention statements or action plans such as “I intend to lose five pounds this month” or “I plan to lose weight by cutting down on sweets.” Merely stating an intention to reach a goal or even forming an ordinary action plan is considerably less likely to succeed. There are good reasons for the superiority of if/when-then plans: the specific sequencing of elements within the plans can help us defeat the traditional enemies of goal achievement.
The “if/when-then” wording is designed to put us on high alert for a particular time or circumstance when a productive action could be performed. We become prepared, first, to notice the favorable time or circumstance and, second, to associate it automatically and directly with desired conduct. Noteworthy is the self-tailored nature of this pre-suasive process. We get to install in ourselves heightened vigilance for certain cues that we have targeted previously, and we get to employ a strong association that we have constructed previously between those cues and a beneficial step toward our goal.

There are certain strongly motivating concepts that communicators don’t initially need to make ready to influence an audience through pre-suasion. These concepts have been previously primed for influence. By analogy, think of almost any computer program you use. It is likely to contain transfer links (to desired sources of information) that you need to click twice: once to ready the link and once to launch it. But the program also likely contains links that launch with just one click, because they have already been readied—that is, hyperlinked to the desired information. The effect of hyperlinking to a location has been labeled by Web browser engineers as “prefetching it.” Just as the designers of our information technology software have installed rapid access to particular sources of information within our computers’ programming, the designers of our lives—parents, teachers, leaders, and, eventually, we ourselves—have done the same within our mental programming. These prefetched sources of information have already been put on continuing “standby” in consciousness so that only a single reminder cue (click) will launch them into action.

This recognition highlights the potential usefulness of if/when-then plans for accomplishing our main goals. These goals exist as prefetched sources of information and direction that have been placed on standby, waiting to be launched into operation by cues that remind us of them. Notice again that the form of if/when-then plans puts the specification of those reminders in our own hands so that we are likely to encounter them at a time and under a set of circumstances that work well for us (“when it is eight in the morning, and I finish brushing my teeth . . .”). Even seemingly intractable bad habits can be improved as a result. Chronically unsuccessful dieters eat fewer high-calorie foods and lose more weight after forming if/when-then plans such as “If/when I see chocolate displayed in the supermarket, then I will think of my diet.” Especially for goals we are highly committed to reaching, we’d be foolish not to take advantage of the pre-suasive leverage that if/when-then plans can provide.
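To make the cue-and-action pairing concrete, here is a minimal Python sketch of the idea; it is not from the book, and the IfWhenThenPlan class, the respond_to function, and the example wordings are illustrative assumptions. The point it shows is simply that all the deliberation happens when the plan is written: each plan is stored ahead of time as a cue mapped to a prepared response, so that recognizing the cue launches the response in a single step, much like the prefetched links described above.

from dataclasses import dataclass
from typing import Optional

@dataclass
class IfWhenThenPlan:
    """A pre-registered pairing of a situational cue with a prepared action."""
    cue: str     # the time or circumstance to watch for
    action: str  # the goal-consistent response launched when the cue appears

# Plans are written in advance, in the planner's own words (wording here is illustrative).
plans = [
    IfWhenThenPlan(cue="it is eight in the morning and I have finished brushing my teeth",
                   action="take my prescribed pill dose"),
    IfWhenThenPlan(cue="the server asks whether I'd like dessert after a business lunch",
                   action="order mint tea"),
]

def respond_to(observed_cue: str) -> Optional[str]:
    """Return the prepared action if the observed situation matches a registered cue."""
    for plan in plans:
        if plan.cue == observed_cue:
            # One 'click': the previously constructed cue-action link fires directly,
            # like a prefetched hyperlink, with no fresh deliberation required.
            return plan.action
    return None  # no plan registered for this situation

if __name__ == "__main__":
    print(respond_to("the server asks whether I'd like dessert after a business lunch"))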

Part 3: BEST PRACTICES: THE OPTIMIZATION OF PRE-SUASION

Ch 10: Six Main Roads to Change: Broad Boulevards as Smart Shortcuts

We’ve seen how it’s possible to move others in our direction by saying or doing just the right thing immediately before we want them to respond: If we want them to buy a box of expensive chocolates, we can first arrange for them to write down a number that’s much larger than the price of the chocolates. If we want them to choose a bottle of French wine, we can expose them to French background music before they decide. If we want them to agree to try an untested product, we can first inquire whether they consider themselves adventurous. If we want to convince them to select a highly popular item, we can begin by showing them a scary movie. If we want them to feel warmly toward us, we can hand them a hot drink. If we want them to be more helpful to us, we can have them look at photos of individuals standing close together. If we want them to be more achievement oriented, we can provide them with an image of a runner winning a race. If we want them to make careful assessments, we can show them a picture of Auguste Rodin’s The Thinker.

Notice that whatever is just the right thing to say or do in a situation changes depending on what we want of others there. Arranging for them to hear a French song might get them to purchase French wine, but it isn’t going to get them to become more achievement oriented or helpful. And asking if they are adventurous might get them to try an untested product, but it isn’t going to make them more willing to select a highly popular item or make careful assessments. This specificity fits with the way that successful openers operate for a communicator. They pre-suasively channel recipients’ attention only to those concepts that are associated favorably with the communicator’s particular goal.

But isn’t there an overarching goal common to all would-be persuaders: the goal of assent? After all, any persuasive communicator wants to spur audiences toward “Yes.” Are there concepts that are aligned especially well with the broad goal of obtaining agreement? I believe so. In my book Influence, I argued that there are six such concepts that empower the major principles of human social influence. They are reciprocation, liking, social proof, authority, scarcity, and consistency. These principles are highly effective general generators of acceptance because they typically counsel people correctly regarding when to say yes to influence attempts.

To take the principle of authority as an example, people recognize that in the great majority of circumstances, they are likely to be steered to a good choice if that choice fits with the views of experts on the topic. This recognition allows them a valuable decision-making shortcut: when they encounter the presence of solid authoritative data, they can cease further deliberation and follow the lead of authorities in the matter. Therefore, if a message points to authority-based evidence, the odds of persuasive success will jump.

In recognition of the mounting behavioral science evidence for pre-suasion, though, I’d like to extend my earlier contention. Let’s stay with the principle of authority to illustrate the expanded point: communicators stand to be more effective by highlighting the idea of authority not just inside their message but inside the moment before their message. In this pre-suasive way, audiences will become sensitized to (and thus readied for) the coming authoritative evidence in the message, making them more likely to pay attention to it, assign it importance, and, consequently, be influenced by it.

THE ROADS OFT TAKEN

If it is indeed the case that directing attention (both before and during a message) to the concepts of reciprocation, liking, social proof, authority, scarcity, and consistency can influence recipients toward assent, it makes sense for us to review and update the information on how each concept operates. Accordingly, this chapter is not designed to focus primarily on the process of pre-suasion. Instead, we take a step back and explore the specifics of why these six concepts possess such sweeping psychological force.

Reciprocation

People say yes to those they owe. Not always, of course—nothing in human social interaction works like that—but often enough that behavioral scientists have labeled this tendency the rule for reciprocation. It states that those who have given benefits to us are entitled to benefits from us in return. So valuable is it to the functional health of societies that all human cultures teach the rule from childhood and assign socially punishing names—freeloader, user, taker, parasite—to those who don’t give back after receiving. As a result, children respond to the rule before they are two years old. By the time they are adults, its pre-suasive power influences all aspects of their lives, including their buying patterns. In one study, shoppers at a candy store became 42 percent more likely to make a purchase if they’d received a gift piece of chocolate upon entry. According to sales figures from the retail giant Costco, other types of products—beer, cheese, frozen pizza, lipstick—get big lifts from free samples, almost all accounted for by the shoppers who accept the free offer.

Much more worrisome is the impact of the rule on the voting actions of legislators. In the United States, companies making sizable campaign contributions to lawmakers who sit on tax policy–making committees see significant reductions in their tax rates. The legislators will deny any quid pro quo. But the companies know better. And so should we.

Requesters who hope to commission the pre-suasive force of the rule for reciprocation have to do something that appears daring: they have to take a chance and give first. They must begin an interaction by providing initial gifts, favors, advantages, or concessions without a formal guarantee of compensation. But because the tendency to reciprocate is so embedded in most people, the strategy frequently works better than the traditional approach to commercial exchange, in which a requester offers benefits only after an action has been taken: a contract signed, a purchase made, a task performed. Dutch residents who received an advance letter asking if they would take part in a long survey were much more likely to agree if the proposed payment was sent to them before they decided to participate (the money accompanied the letter) than if it was to be paid, as is normally the case, after they had participated. Similarly, hotel guests in the United States encountered a card in their rooms asking them to reuse their towels. They read in addition either that the hotel had already made a financial contribution to an environmental protection organization in the name of its guests or that it would make such a contribution after guests did reuse their towels. The before-the-act donation proved 47 percent more effective than the after-the-act one.

Still, supplying resources up front without the traditional guarantee of agreed-upon compensation can be risky. Returns might not be forthcoming at adequate levels—or at all—because certain recipients might resent being given something they didn’t invite, while others might not judge what they got as beneficial to them. Others (the “freeloaders” among us) might not feel compelled by the rule. It makes sense to inquire, then, if there are specific features of an initial gift or favor that increase significantly the chance that it will be returned at high levels of recompense. There are three main features of this sort: in order to optimize the return, what we give first should be experienced as meaningful, unexpected, and customized.

Meaningful and Unexpected.

The first two of these optimizing features have been shown to affect the size of tips that food servers receive. Some diners in a New Jersey restaurant were offered a piece of chocolate at the end of their meals, one per person, from a basket carried to the table by the waitress. Her tips went up 3.3 percent compared with those from guests who weren’t offered chocolate. However, when other diners were invited to take two chocolates from the basket, the waitress’s tips rose by 14.1 percent. What could account for the dramatic difference? For one, the second chocolate represented a meaningful increase in the size of the gift—a doubling. Plainly, meaningful is not the same as expensive, as the second chocolate cost only pennies. Providing a costly gift can often be meaningful, but costliness isn’t necessary. Of course, the receipt of two chocolates was not only twice that of one chocolate but also more unexpected. The clear-cut impact of a gift’s unexpectedness became evident when the waitress tried yet a third technique. After offering guests one chocolate from her basket and turning to walk away, she unexpectedly returned to the table and offered a second chocolate to each diner. As a result, her average tip improved by 21.3 percent.

There’s a lesson in these multiple findings that goes well beyond informing restaurant servers of how to enrich their gratuities: requesters of various sorts can elevate the likelihood that they will receive high levels of benefit from others if they first deliver benefits viewed by the others as meaningful and unexpected. But besides these features, there’s a third element in the reciprocity-optimizing triumvirate that, in my opinion, is more influential than the other two combined.

Customized.

When a first favor is customized to the needs, preferences, or current circumstances of the recipient, it gains leverage. Consider as evidence what happened in a fast-food restaurant where visitors were greeted as they entered and given one of two equally priced gifts. If the gift was not food related (a key chain), the amount they then spent increased by 12 percent compared with visitors who were greeted without being given a gift. But if the gift was food related (a cup of yogurt), their increased outlay climbed to 24 percent. From a purely economic perspective, this finding is puzzling. Giving restaurant visitors free food before they order should make them likely to purchase less, because they won’t need to spend as much on a meal. Although the obtained (reverse) outcome doesn’t make good logical sense, it makes good psycho-logical sense: Visitors went to the restaurant because they were hungry. An upfront gift of food activated not only the rule for reciprocation but a more muscular version, which states that people should feel especially obligated to reciprocate a gift designed to meet their particular needs.

If a gift, favor, or service incorporates all three features of meaningfulness, unexpectedness, and customization, it can become a formidable source of change. But might we be asking too much to expect it to make a difference in the struggle against hard-core terrorists? Perhaps not, for a pair of reasons. First, the rule for reciprocation is a cultural universal taught in all societies, including those from which terrorists spring. Second, accounts from within that struggle shed light on the singular power of favors that combine the three optimizing features.

Take the case of Abu Jandal, Osama bin Laden’s former chief bodyguard, who, following his capture, was questioned in a Yemeni prison in the days after 9/11. Attempts to get him to reveal information about Al Qaeda’s leadership structure appeared hopeless, as his responses consisted only of screeds against the ways of the West. But when interrogators noticed that he never ate the cookies he was served with his food and learned that the man was diabetic, they did something for him that was meaningful, unexpected, and customized: At the next interrogation session, they brought him sugar-free cookies to eat with tea. According to one of those interrogators, that was a key turning point: “We had shown him respect, and we had done this nice thing for him. So he started talking to us instead of giving us lectures.” In subsequent sessions, Jandal provided extensive data on Al Qaeda operations, as well as the names of seven of the 9/11 hijackers.

But as any veteran of the battles with terrorism knows, sometimes winning those battles requires winning allies to the cause. US intelligence officers in Afghanistan frequently visited rural territories to gain the assistance of tribal chiefs against the Taliban. These interactions were challenging because the leaders were often unwilling to help, owing to a dislike of Westerners, a fear of Taliban retribution, or both. On one such visit, a Central Intelligence Agency (CIA) operative noted a patriarch’s exhaustion from the duties of heading his tribe and his immediate family, which included four younger wives. On the following visit, the CIA man came prepared with a fully optimized gift: four Viagra tablets, one per wife. The “potency” of this meaningful, unexpected, customized favor became manifest during the CIA agent’s next trip, when the beaming leader responded with a wealth of information about Taliban movements and supply routes.

Liking

Back when I was infiltrating the training programs of various sales organizations, I heard an assertion made repeatedly with great confidence: “The number one rule for salespeople is to get your customer to like you.” That was the case, we trainees were assured, because people say yes to those they like—something that was so undeniable that it never seemed interesting to me. What did interest me, though, was what we were told to do to arrange for customers to like us. Being friendly, attractive, and humorous were mentioned frequently in this regard. Accordingly, we were often given smiling lessons, grooming tips, and jokes to tell. But by far, two specific ways to create positive feelings got the most attention. We were instructed to highlight similarities and provide compliments. There’s good reason why these two practices would be emphasized: each increases liking and assent.

Similarities.

We like those who are like us. It’s a tendency that’s part of the human experience almost from the start: infants smile more at adults whose facial expressions match their own. And the affinity can be activated by seemingly trivial similarities that might nonetheless generate big effects. Parallels in language style (the types of words and verbal expressions conversation partners use) increase romantic attraction, relationship stability, and, somewhat amazingly, the likelihood that a hostage negotiation will end peacefully. What’s more, this influence occurs even though the overlap of styles typically goes unnoticed by the conversation partners. In addition, the consequences of the basic tendency are visible within helping decisions. People are massively more willing to help an emergency victim if they share a nationality or even a favorite sports team. The tendency also operates in educational settings. The factor that plays the largest role in the success of youth mentoring programs is the initial similarity of interests between student and mentor.

But it is in the business arena where the impact on assent seems most direct. Waitresses coached to mimic the verbal style of customers doubled their tips. Negotiators coached to do the same with their opponents got significantly better final outcomes. Salespeople who mimicked the language styles and nonverbal behaviors (gestures, postures) of customers sold more of the electronic equipment they recommended. Even seemingly unimportant matches can lead to greater rapport.

Compliments.

“I can live for two months,” confessed Mark Twain, “on a good compliment.” It’s an apt metaphor, as compliments nourish and sustain us emotionally. They also cause us to like and benefit those who provide them; and this is true whether the praise is for our appearance, taste, personality, work habits, or intelligence. In the first of these categories, consider what happened in one hair salon when stylists complimented customers by saying, “Any hairstyle would look good on you.” Their tips rose by 37 percent. Indeed, we seem so charmed by flattery that it can work on us even when it appears to have an ulterior motive. Chinese college students who received a preprinted flier from a clothing store saying “We’re contacting you because you’re fashionable and stylish” developed positive attitudes toward the store and were more likely to want to shop there. Other researchers found that individuals who worked on a computer task and received flattering task-related feedback from the computer developed more favorable feelings toward the machine, even though they were told that the feedback had been preprogrammed and did not reflect their actual task performance at all. Nonetheless, they became prouder of their performances after receiving this hollow form of praise.

The Real Number One Rule for Salespeople.

I am hesitant to disagree with knowledgeable professionals that the number one rule for salespeople is to get your customer to like you and that similarities and compliments are the best routes to that end. But I’ve seen research that makes me want to rethink their claims for why these statements are true. The account I heard in traditional sales training sessions always went as follows: similarities and compliments cause people to like you, and once they come to recognize that they like you, they’ll want to do business with you. Although this kind of pre-suasive process no doubt operates to some degree, I am convinced that a more influential pre-suasive mechanism is at work. Similarities and compliments cause people to feel that you like them, and once they come to recognize that you like them, they’ll want to do business with you. That’s because people trust that those who like them will try to steer them correctly. So by my lights, the number one rule for salespeople is to show customers that you genuinely like them. There’s a wise adage that fits this logic well: people don’t care how much you know until they know how much you care.

Social Proof

In John Lennon’s song “Imagine,” he proposes a world without hunger, greed, possessions, or countries—one characterized by universal brotherhood, peace, and unity. It’s a world different from today’s and, indeed, any other day’s in the long track of human history. While conceding that his vision seems that of a dreamer, he tries to convince listeners to accept it with a single follow-on fact: “But I’m not the only one.” Lennon’s trust in this lone argument is a testament to the projected power of the principle of social proof. The principle asserts that people think it is appropriate for them to believe, feel, or do something to the extent that others, especially comparable others, are believing, feeling, or doing it. Two components of that perceived appropriateness—validity and feasibility—can drive change.

Validity.

After receiving information that multiple, comparable others have responded in a particular way, that response seems more valid, more right to us, both morally and practically. As regards the first of these dimensions, when we see evidence of the increased frequency of an action, it elevates our judgments of the act’s moral correctness. In one study, after learning that the majority of their peers supported the military’s use of torture to gain information, 80 percent of group members found the practice more acceptable and demonstrated greater support for it in their public pronouncements and, more revealingly, their private opinions. Fortunately, besides increasing the acceptability of what might be undesirable, the responses of others can do the same for desirable behavior. Working professionals who were told that the great majority of people try to overcome their stereotypes became more resistant to stereotypes of women in their own work-related conduct.

In addition to clarifying what’s right morally, social proof reduces uncertainty about what’s right pragmatically. The crowd isn’t correct every time, but it is usually right about the wisdom of actions, making the popularity of an activity a stand-in for its soundness. As a result, we typically follow the lead of those around us who are like us. The upshots can be remarkable, creating simple, almost costless solutions to traditional influence challenges. Restaurant managers can increase the demand for particular dishes on their menus without the expense of upgrading the recipes with more costly ingredients, the kitchen staff with new personnel, or the menu with flowery descriptions of the selected items. They have only to label the items as “most popular” dishes. When this entirely honest yet rarely employed tactic was tried in a set of restaurants in Beijing, China, each dish became 13 percent to 20 percent more popular. Restaurateurs aren’t the only ones who can use social proof to affect food choices. Instead of bearing the cost of assembling and communicating extensive nutritional information regarding the health benefits of eating fruit, a school can lift its students’ fruit intake by stating, contrary to what students think, that the majority of their schoolmates do try to eat fruit to be healthy. This kind of information increased the fruit consumption of Dutch high schoolers by 35 percent—even though, in classic adolescent fashion, they claimed no intention to change.

Many governments expend significant resources regulating, monitoring, and sanctioning companies that pollute our air and water; these expenditures often appear wasted on some of the offenders, who either flout the regulations altogether or are willing to pay fines that are smaller than the costs of compliance. But certain nations have developed cost-effective programs that work by firing up the (nonpolluting) engine of social proof. They initially rate the environmental performance of polluting firms within an industry and then publicize the ratings, so that all companies in that industry can see where they stand relative to their peers. The overall improvements have been dramatic—upward of 30 percent—almost all of which have come from changes made by the relatively heavy polluters, who recognized how poorly they’d been doing compared with their contemporaries.

Feasibility.

With a set of estimable colleagues leading the way, I once did a study to see what we could best say to get people to conserve household energy. We delivered one of four messages to their homes, once a week for a month, asking them to reduce their energy consumption. Three of the messages contained a frequently employed reason for conserving energy: the environment will benefit; or it’s the socially responsible thing to do; or it will save you significant money on your next power bill. The fourth message played the social-proof card, stating (honestly) that most of your fellow community residents do try to conserve energy at home. At the end of the month, we recorded how much energy was used and learned that the social-proof-based message had generated 3.5 times as much energy savings as any of the other messages. The size of the difference surprised almost everyone associated with the study—me, for one, but also my fellow researchers, and even a sample of other home owners. The home owners, in fact, expected that the social-proof message would be least effective.

When I report on this research to utility company officials, they frequently don’t trust it because of an entrenched belief that the strongest motivator of human action is economic self-interest. They say something like, “C’mon, how are we supposed to believe that telling people their neighbors are conserving is three times more effective than telling them they can cut their power bills significantly?” Although there are various possible responses to this legitimate question, there’s one that’s nearly always proven persuasive for me. It involves the second reason, besides validity, that social-proof information works so well: feasibility. If I inform home owners that by saving energy, they could also save a lot of money, it doesn’t mean they would be able to make it happen. After all, I could reduce my next power bill to zero if I turned off all the electricity in my house and curled up on the floor in the dark for a month; but that’s not something I’d reasonably do. A great strength of social-proof information is that it destroys the problem of uncertain achievability. If people learn that many others like them are conserving energy, there is little doubt as to its feasibility. It comes to seem realistic and, therefore, implementable.

Authority

For most people, the way to make a message persuasive is to get its content right: to ensure that the communication possesses strong evidence, sound reasoning, good examples, and clear relevance. Although this view (“The merit is the message”) is certainly correct to an extent, some scholars have argued that other parts of the process can be just as important. The most famous of these contentions is embodied in the assertion of the communication theorist Marshall McLuhan that “The medium is the message”—the idea that the channel through which information is sent is a form of consequential messaging itself, which affects how recipients experience content. In addition, persuasion scientists have pointed to compelling support for yet a third claim: “The messenger is the message.”

Of the many types of messengers—positive, serious, humorous, emphatic, modest, critical—there is one that deserves special attention because of its deep and broad impact on audiences: the authoritative communicator. When a legitimate expert on a topic speaks, people are usually persuaded. Indeed, sometimes information becomes persuasive only because an authority is its source. This is especially true when the recipient is uncertain of what to do. Take as evidence the results of a study in which individuals had to make a series of difficult economic decisions while hooked up to brain-scanning equipment. When they made choices on their own, related activity jumped in the areas of the brain associated with evaluating options. But when they received expert advice on any of these decisions (from a distinguished university economist), they not only followed that advice, they did so without thinking about the inherent merits of the options. Related activity in the evaluating sectors of their brains flatlined. Tellingly, not all brain regions were affected in this way; the sectors associated with understanding another’s intentions were activated by the expert’s advice. The messenger had become the focal message.

As should be plain from this illustration, the kind of authority we are concerned with here is not necessarily someone who is in authority—someone who has hierarchical status and can thereby command assent by way of recognized power—but someone who is an authority and can thereby induce assent by way of recognized expertise. Moreover, within this latter category, there is a type—the credible authority—who is particularly productive. A credible authority possesses the combination of two highly persuasive qualities: expertise and trustworthiness. We’ve already considered the effects of the first. Let’s concentrate on the second.

Trustworthiness.

If there is one quality we most want to see in those we interact with, it is trustworthiness. And this is the case compared with other highly rated traits such as attractiveness, intelligence, cooperativeness, compassion, and emotional stability. In a persuasion-focused interaction, we want to trust that a communicator is presenting information in an honest and impartial fashion—that is, attempting to depict reality accurately rather than to serve self-interest.

Over the years, I’ve attended a lot of programs designed to teach influence skills. Almost to a one, they’ve stressed that being perceived as trustworthy is an effective way to increase one’s influence and that it takes time for that perception to develop. Although the first of these points remains confirmed, a growing body of research indicates that there is a noteworthy exception to the second. It turns out to be possible to acquire instant trustworthiness by employing a clever strategy. Rather than succumbing to the tendency to describe all of the most favorable features of an offer or idea up front and reserving mention of any drawbacks until the end of the presentation (or never), a communicator who references a weakness early on is immediately seen as more honest. The advantage of this sequence is that, with perceived truthfulness already in place, when the major strengths of the case are advanced, the audience is more likely to believe them. After all, they’ve been conveyed by a trustworthy source, one whose honesty has been established (pre-suasively) by a willingness to point not just to positive aspects but also to negative ones.

The effectiveness of this approach has been documented (1) in legal settings, where a trial attorney who admits to a weakness before the rival attorney points it out is viewed as more credible and wins more often; (2) in political campaigns, where a candidate who begins with something positive to say about an opponent gains trustworthiness and voting intentions; and (3) in advertising messages, where merchandisers who acknowledge a drawback before highlighting strengths often see large increases in sales. The tactic can be particularly successful when the audience is already aware of the weakness; thus, when a communicator mentions it, little additional damage is done, as no new information is added—except, crucially, that the communicator is an honest individual.

Another enhancement occurs when the speaker uses a transitional word—such as however, or but, or yet—that channels the listeners’ attention away from the weakness and onto a countervailing strength. A job candidate might say, “I am not experienced in this field, but I am a very fast learner.” An information systems salesperson might state, “Our set-up costs are not the lowest; however, you’ll recoup them quickly due to our superior efficiencies.”

Elizabeth I of England employed both of these enhancements to optimize the impact of the two most celebrated speeches of her reign. The first occurred at Tilbury in 1588, when, while addressing her troops massed against an expected sea invasion from Spain, she dispelled the soldiers’ concern that, as a woman, she was not up to the rigors of battle: “I know I have the body of a weak and feeble woman; but I have the heart of a king, and a king of England, too!” It is reported that so long and loud were the cheers after this pronouncement that officers had to ride among the men ordering them to restrain themselves so that the queen could continue.
Thirteen years later, perhaps recalling the success of this rhetorical device, she used it again in her final formal remarks to Parliament members, many of whom mistrusted her. Near the completion of those remarks, she proclaimed, “And though you have had, and may have, many mightier and wiser princes sitting in this seat, yet you have never had, nor shall have, any that will love you better.” According to British historian Richard Cavendish, audience members left the hall “transfigured, many of them in tears” and, on that very day, labeled her oration the queen’s “Golden Speech”—a label that has endured ever since.

Notice that Elizabeth’s bridging terms, but and yet, took listeners from perceived weaknesses to counteracting strengths. That their leader possessed the heart of a king, once accepted, filled the troops with the confidence they lacked—and needed—before battle; similarly, that she loved her subjects transcendently, once accepted, disarmed even her wary opponents in Parliament. This feature of the queen’s pre-suasive assertions fits with scientific research showing that the weakness-before-strength tactic works best when the strength doesn’t just add something positive to the list of pros and cons but, instead, challenges the relevance of the weakness. For instance, Elizabeth didn’t seek to embolden the troops at Tilbury by saying there is no one “that will love you better,” as her fighters had to be assured of a stout-hearted commander, not a soft-hearted one. She understood that to maximize its effect, an initially deployed weakness should not only be selected to preestablish the trustworthiness of one’s later claims but also to be undercut by those claims. Her “weak and feeble” woman’s body became inconsequential for battlefield leadership if, in the minds of her men, it carried “the heart of a king, and a king of England, too.”

Scarcity

We want more of what we can have less of. For instance, when access to a desired item is restricted in some way, people have been known to go a little crazy for it. After the chain of pastry shops Crumbs announced in 2014 that it would be closing all of its locations, its signature cupcakes, which had been priced at around $4, began commanding up to $250 apiece online.

The effect isn’t limited to cupcakes. On the morning of the retail release of the latest iPhone, my local TV news channel sent a reporter to interview individuals who had been waiting all night to secure one. A woman who was twenty-third in line disclosed something that fits this well-established point, but it still astounded me. She had started her wait as twenty-fifth in line but had struck up a conversation during the night with number twenty-three—a woman who admired her $2,800 Louis Vuitton shoulder bag. Seizing her opportunity, the first woman proposed and concluded a trade: “My bag for your spot in line.” At the end of the woman’s self-satisfied account, the understandably surprised interviewer stammered, “But . . . why?” and got a telling answer. “Because,” the new number twenty-three replied, “I heard that this store didn’t have a big supply, and I didn’t want to risk losing the chance to get one.”

Although there are several reasons that scarcity drives desire, our aversion to losing something of value is a key factor. After all, loss is the ultimate form of scarcity, rendering the valued item or opportunity unavailable. At a financial services conference, I heard the CEO of a large brokerage firm make the point about the motivating power of loss by describing a lesson his mentor once taught him: “If you wake a multimillionaire client at five in the morning and say, ‘If you act now, you will gain twenty thousand dollars,’ he’ll scream at you and slam down the phone. But if you say, ‘If you don’t act now, you will lose twenty thousand dollars,’ he’ll thank you.”

But the scarcity of an item does more than raise the possibility of loss; it also raises the judged value of that item. When automobile manufacturers limit production of a new model, its value goes up among potential buyers. Other restrictions in other settings generate similar results. At one large grocery chain, brand promotions that included a purchase limit (“Only x per customer”) more than doubled sales for seven different types of products compared with promotions for the same products that didn’t include a purchase limit. Follow-up studies showed why. In the consumer’s mind, any constraint on access increased the worth of what was being offered.

Consistency

Normally, we want to be (and to be seen as) consistent with our existing commitments—such as the previous statements we’ve made, stands we’ve taken, and actions we’ve performed. Therefore, communicators who can get us to take a pre-suasive step, even a small one, in the direction of a particular idea or entity will increase our willingness to take a much larger, congruent step when asked. The desire for consistency will prompt it.

This powerful pull toward personal alignment is used in a wide range of influence settings. Psychologists warn us that sexual infidelity within romantic relationships is a source of great conflict, often leading to anger, pain, and termination of the relationship. Fortunately, they’ve also located a pre-suasive activity that can help prevent the occurrence of this toxic sequence: prayer—not prayer in general, though, but a particular kind. If one romantic partner agrees to pray for the other’s well-being every day for an extended period of time, he or she becomes less likely to be unfaithful while doing so. After all, such behavior would be inconsistent with the daily, actively made commitment to the partner’s welfare.

Influence practitioners have frequently found the human tendency for consistency with one’s prior (pre-suasive) words and deeds to be of service. Automobile insurance companies can reduce policyholders’ misreporting of odometer readings by putting an honesty pledge at the beginning of the reporting form rather than at the end. Political parties can increase the chance that supporters will vote in the next election by having arranged for them (through various get-out-the-vote activities) to vote in the previous one. Brands can deepen the loyalty of customers by getting them to recommend the brand to a friend. Organizations can raise the probability that an individual will appear at a meeting or event by switching from saying at the end of a reminder phone call, “We’ll mark you on the list as coming then. Thank you!” to “We’ll mark you on the list as coming then, okay? [Pause for confirmation.] Thank you.” One blood services organization that made this tiny, commitment-inducing wording change increased the participation of likely donors in a blood drive from 70 percent to 82.4 percent.

Sometimes practitioners can leverage the force of the consistency principle without installing a new commitment at all. Sometimes all that’s necessary is to remind others of a commitment they’ve made that fits with the practitioners’ goals. Consider how the legal team arguing to the US Supreme Court for marriage equality in 2013 structured a months-long national PR campaign with one man as its primary target, Supreme Court Justice Anthony Kennedy. (Public opinion had already moved in favor of same-sex marriage.) Despite the operation’s nationwide scope before the court hearings, the campaign most wanted to influence Kennedy for two reasons. First, he was widely considered to be the one to cast the deciding vote in both of the companion cases the court was considering on the issue. Second, he was a frequent fence-sitter on ideological matters. On the one hand, he was a traditionalist, holding that the law should not be interpreted in a way that drifted far from its original language. On the other hand, he believed the law to be a living thing with meanings that evolved over time.
This foot-in-each-camp position made Kennedy a prime candidate for a communication approach designed not to change one of his contrasting points of view but, rather, to connect only one of them to the issue of marriage equality. The media campaign provided just such an approach by employing a set of concepts, and even wordings, that Kennedy had used in prior court opinions: “human dignity,” “individual liberty,” and “personal freedoms/rights.” As a consequence, wherever Kennedy went in the weeks and months before oral arguments in the cases, he would likely hear the relevant issues linked in the media campaign to that selected set of three of his stated views. The intent was to get him to perceive his prior pertinent legal stances as associated with the pro-marriage-equality position.

The intent was enacted much more explicitly once the hearings began. Legal team members repeatedly developed their in-court arguments from the same Kennedy-tied language and themes. Did this tactic contribute to the court’s 5-to-4 rulings in favor of marriage equality? It is difficult to know for certain. But members of the legal team think so, and they point to an affirming piece of evidence: in his written opinions, Kennedy leaned heavily on the concepts of dignity, liberty, and freedoms/rights—all of which they had labored to prioritize within his marriage-equality-related thinking both before and during the formal hearings. It is perhaps testament to the durability of properly evoked commitments that in another marriage-equality case two years later, these same three concepts figured prominently once again in Justice Kennedy’s majority opinion.

WHAT ELSE CAN BE SAID ABOUT THE UNIVERSAL PRINCIPLES OF INFLUENCE?

After presenting the six principles of social influence to a business-focused audience, it is not unusual for me to hear two questions. The first concerns the issue of optimal timing: “Are different stages of a commercial relationship suited better to certain of the principles?” Thanks to my colleague Dr. Gregory Neidert, I have an answer, which is yes. Moreover, I have an explanation, which comes from what Dr. Neidert has developed as the core motives model of social influence. Of course, any would-be influencer wants to effect change in others but, according to the model, the stage of one’s relationship with them affects which influence principles to best employ.

At the first stage, the main goal involves cultivating a positive association, as people are more favorable to a communication if they are favorable to the communicator. Two principles of influence, reciprocity and liking, seem particularly appropriate to the task. Giving first (in a meaningful, unexpected, and customized fashion), highlighting genuine commonalities, and offering true compliments establish mutual rapport that facilitates all future dealings.

At the second stage, reducing uncertainty becomes a priority. A positive relationship with a communicator doesn’t ensure persuasive success. Before people are likely to change, they want to see any decision as wise. Under these circumstances, the principles of social proof and authority offer the best match. Pointing to evidence that a choice is well regarded by peers or experts significantly increases confidence in its wisdom.

But even with a positive association cultivated and uncertainty reduced, a remaining step needs to be taken. At this third stage, motivating action is the main objective. That is, a well-liked friend might show me sufficient proof that experts recommend (and almost all my peers believe) that daily exercise is a good thing, but that might not be enough to get me to do it. The friend would do well to include in his appeal the principles of consistency and scarcity by reminding me of what I’ve said publicly in the past about the importance of my health and the unique enjoyments I would miss if I lost it. That’s the message that would most likely get me up in the morning and off to the gym.

The second question I am frequently asked about the principles is whether I’ve identified any new ones. Until recently, I’d always had to answer in the negative. But now I believe that there is a seventh universal principle that I had missed—not because some new cultural phenomenon or technological shift brought it to my attention but because it was hiding beneath the surface of my data all along.
Tags: Book Summary, Communication Skills, Negotiation