💎 Consumers are far more likely to splurge windfall money than expected income (gamblers beware)

Payday is not the only moment when customers spend more. Any time consumers receive a windfall, such as birthday money or a bonus, they increase their spending. Three Ohio University psychologists, Hal Arkes, Cynthia Joyner and Mark Pezzo, ran an experiment in 1994 exploring this phenomenon. When they recruited students for the experiment, half were told a week beforehand that they would be paid $3, while the rest expected to be given course credits. However, when the participants arrived at the experiment they were all given the same $3 incentive.

The participants were given the chance to gamble their cash on a simple dice game. Those who had been given cash in the windfall condition gambled $2.16 on average, while those who had been fully expecting the money frittered away only $1.

Excerpt from: The Choice Factory: 25 behavioural biases that influence what we buy by Richard Shotton

💎 Group polarisation and the danger of surrounding yourself with people who share similar opinions (How correct am I?)

But they won’t. Decades of research have proved that groups usually come to conclusions that are more extreme than the average view of the individuals who make up the group. When opponents of a hazardous waste site gather to talk about it, they will become convinced the site is more dangerous than they originally believed. When a woman who believes breast implants are a threat gets together with women who feel the same way, she and all the women in the meeting are likely to leave believing they had previously underestimated the danger. The dynamic is always the same. It doesn’t matter what the subject under discussion is. It doesn’t matter what the particular views are. When like-minded people get together and talk, their existing views tend to become more extreme.

In part, this strange human foible stems from our tendency to judge ourselves by comparison with others. When we get together in a group of like-minded people, what we share is an opinion that we all believe to be correct, and so we compare ourselves with others in the group by asking ‘How correct am I?’ Inevitably, most people in the group will discover that they do not hold the most extreme opinion, which suggests they are less correct, less virtuous, than others. And so they become more extreme. Psychologists confirmed this theory when they put people in groups and had them state their views without providing reasons why – and polarization still followed.

Excerpt from: Risk: The Science and Politics of Fear by Dan Gardner

💎 Uncertain rewards can often be more motivating than certain rewards (when caught up in the process)

Imagine that you are participating in an auction that involves chocolate coins as a reward. You can bid on a lot containing five coins or on a mystery lot that contains either three or five coins; you won’t know which until after your bid is accepted. Logically, the lot with five coins is worth more.

But it wasn’t. Researchers at the University of Chicago staged just this auction and found that the average bid for the guaranteed five-coin lots was $1.25. The average bid for the mystery lot was $1.89. When asked, participants said the uncertain auction was more exciting. It didn’t increase the actual value of the reward. It just made the game more fun. Participants paid more to play and said they wanted to participate in the auction again. (The secret, though, was getting caught up in the process. When participants planned their bid in advance, they preferred the certain reward.)
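To make the gap concrete, here is a minimal sketch of the expected-value arithmetic. The 50/50 odds for the mystery lot are our assumption (the excerpt does not state the probabilities); the bid figures are the averages reported above.

```python
# Value of each lot in chocolate coins.
certain_lot = 5                        # guaranteed five coins
mystery_lot = 0.5 * 3 + 0.5 * 5        # assumed 50/50 odds: 4.0 coins expected

# Average bids observed in the study.
bid_certain = 1.25   # dollars, for the guaranteed lot
bid_mystery = 1.89   # dollars, for the mystery lot

# Bidders paid a premium for the lot that is worth less in expectation.
premium = bid_mystery / bid_certain - 1
print(mystery_lot)          # 4.0
print(round(premium, 2))    # 0.51, i.e. roughly a 51% higher bid
```

Even under the most generous assumption about the odds, the mystery lot is worth no more than the certain one, yet it attracted the higher average bid.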

Excerpt from: Good Habits, Bad Habits: The Science of Making Positive Changes That Stick by Wendy Wood

💎 Misrepresenting reality (so as to better reflect reality)

Our perceptional apparatus makes mistakes (distortions) in order to lead us to more precise actions: ocular deception, it turns out, is a necessary thing. Greek and Roman architects misrepresented the columns of their temples, tilting them inward, in order to give us the impression that the columns are straight. As Vitruvius explains, the aim is to “counteract the visual reception by a change of proportions.” A distortion is meant to bring about an enhancement of your aesthetic experience. The floor of the Parthenon is curved in reality so we can see it as straight. The columns are in truth unevenly spaced, so we can see them lined up like a marching Russian division in a parade.

Should one go lodge a complaint with the Greek Ministry of Tourism claiming that the columns are not vertical and that someone is taking advantage of our visual mechanisms?

Excerpt from: Skin in the Game: Hidden Asymmetries in Daily Life by Nassim Nicholas Taleb

💎 George Orwell’s rules for writing (never…)

i. Never use a metaphor, simile, or other figure of speech which you are used to seeing in print.
ii. Never use a long word where a short one will do.
iii. If it is possible to cut a word out, always cut it out.
iv. Never use the passive where you can use the active.
v. Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.
vi. Break any of these rules sooner than say anything outright barbarous.

Excerpt from: Words That Work: It’s Not What You Say, It’s What People Hear by Frank Luntz

💎 Checklist for evaluating creative strategy (11 steps)

Checklist for evaluating creative

1. Come prepared
2. Expect to be surprised, maybe even made a little nervous
3. React to the idea as a whole
4. Add what’s important and incremental
5. Make sure it’s on strategy, not on checklist
6. If it doesn’t connect emotionally, it doesn’t connect
7. Remember what the work is trying to accomplish
8. Don’t just talk about what’s not working for you
9. See problems? Don’t offer solutions, explain the problem
10. Remember you don’t have to find something wrong
11. A creative idea needs creative direction, not group consensus

Excerpt from: Strategy Scrapbook by Alex Morris

💎 Our picture memory is superior to our verbal memory (and our memory of vivid imagery is greater than that of more routine images)

In 1973, Standing conducted a range of experiments exploring human memory. The participants were shown pictures or words and instructed to pay attention to them and try to memorize them for a test on memory. Each picture or word was shown once, for five seconds.

The words had been randomly selected from the Merriam-Webster dictionary and were printed on 35mm slides – words like ‘salad’, ‘cotton’, ‘reduce’, ‘camouflage’ and ‘ton’.

The pictures were taken from 1,000 snapshots – most of them from holidays (beaches, palm trees, sunsets) – volunteered by the students and faculty at McMaster University in Ontario, Canada, where Standing taught at the time. Some of the pictures, though, were more vivid – a crashed plane, for instance, or a dog holding a pipe. But remember, this was the seventies – all dogs smoked pipes back then.

Two days later the participants were shown a series of two snapshots or two words at a time, one from the stack of snapshots they had seen before and one new, and were asked which one looked more familiar.

The experiment showed that our picture memory is superior to our verbal memory. When the learning set is 1,000 words selected from the dictionary, we remember 62 per cent of them, while 77 per cent of the 1,000 selected snapshots were remembered. The bigger the learning set, the smaller the recognition rate: when the learning set for pictures was increased to 10,000, the recognition rate dropped to 66 per cent. Even then, we remember snapshots better than we do words. That may be why you might be better at remembering faces than names. So, if you are introduced to Penelope, it might help you remember her name if you picture Penelope Cruz standing next to her.

In addition, if more vivid pictures were presented, rather than the routine snapshots, recognition jumped to 88 per cent for 1,000 pictures.

Excerpt from: The Art of Making Memories: How to Create and Remember Happy Moments by Meik Wiking

💎 Inhibiting desire can backfire (why breaking habits is hard)

Psychologist Daniel Wegner and his colleagues devised an experiment to demonstrate the ironic effect of inhibiting our desires. Participants were instructed in a simple task: not thinking of a white bear. Who spends much time thinking of white bears, anyway? Participants sat alone in a lab room for five minutes and rang a bell every time they failed to suppress this thought. On average, they rang the bell about five times, almost once per minute. No surprise that our thoughts wander, even to forbidden topics, when we are alone and bored. What is interesting is what happened when the same participants later sat for five minutes trying to think of a white bear. After the suppression task, they rang the bell almost eight times. In contrast, participants instructed to try to think of a white bear for five minutes, but without the initial task of not doing so, rang the bell fewer than five times. It was as if the act of trying to suppress a thought gave it a special energy to emerge later. After the participants tried not to think about white bears, thoughts of them returned again and again. When rating their experience, participants who had initially suppressed thoughts of white bears reported feeling preoccupied with them.

Excerpt from: Good Habits, Bad Habits: The Science of Making Positive Changes That Stick by Wendy Wood

💎 After an event has occurred, people become overconfident about their ability to have predicted it (“I knew-it-all-along”)

…termed the hindsight bias, or the “I knew-it-all-along” effect. As you may recall from our discussion in Chapter 1, once we know the outcome of an event, we have a strong tendency to believe that we could have predicted it in advance. In the Fischhoff experiments, subjects were given a test assessing their knowledge of historical events. The subjects’ task was to indicate the likelihood that four possible outcomes of each event could actually have occurred. Some of the subjects were told that one of the four possibilities had actually happened but were asked to make the estimates that they would have made had they not first been told the “right” answers. The results showed that subjects could not ignore this information; they substantially overestimated their prior knowledge of correct answers. In other words, even though subjects really didn’t know the answers to the test, once they were told an answer, they believed that they knew it all along and that their memories had not changed.

Excerpt from: The Social Animal by Elliot Aronson and Joshua Aronson

💎 Why people often think they’re the hero (moral superiority)

Everyone who’s psychologically normal thinks they’re the hero. Moral superiority is thought to be a ‘uniquely strong and prevalent form of positive illusion’. Maintaining a ‘positive moral self-image’ doesn’t only offer psychological and social benefits; it’s actually been found to improve our physical health. Even murderers and domestic abusers tend to consider themselves morally justified, often the victims of intolerable provocation. When researchers tested prisoners on their hero-maker biases, they found them to be largely intact. The inmates considered themselves above average on a range of pro-social characteristics, including kindness and morality. The exception was law-abidingness. There, sitting in prison, serving sentences precisely because they’d made serious contraventions of the law, they were only willing to concede that, on law-abidingness, they scored about average.

Excerpt from: The Science of Storytelling by Will Storr

💎 Even in an era of efficiency, there’s a role for extravagance in advertising (advertising works)

John Kay, an economist at Oxford University, argues that advertising doesn’t work because of its explicit messages. He suggests that one context is particularly important: that of waste. By waste he means spending more on adverts than is necessary to functionally communicate the explicit message. That could be a 90-second ad, acres of white space on a double-page spread or extravagant production values.

Advertising known to be expensive signals the volume of the resources available to the advertiser. As Kay says in his landmark paper:

The advertiser has either persuaded lots of people to buy his product already, a good sign, or has persuaded someone to lend him lots of money to finance the campaign.

Advertising works, not despite its perceived wastage, but because of it.

Excerpt from: The Choice Factory: 25 behavioural biases that influence what we buy by Richard Shotton

💎 Our tendency to set different burdens of proof according to whether evidence agrees with our existing viewpoint or not (Must I believe this?)

As psychologist Thomas Gilovich noted, “When examining evidence relevant to a given belief, people are inclined to see what they expect to see, and conclude what they expect to conclude… For desired conclusions … we ask ourselves, ‘Can I believe this?’, but for unpalatable conclusions we ask, ‘Must I believe this?’”

Excerpt from: Catalyst by Jonah Berger

💎 Exposure to different views doesn’t make people more moderate (they become more extreme)

To test this possibility, Bail set up a clever experiment. He recruited more than 1,500 Twitter users and had them follow accounts that exposed them to opposing viewpoints. For a month they saw messages and information from elected officials, organizations, and opinion leaders from the other side. A liberal might see tweets from Fox News or Donald Trump. A conservative might see posts from Hillary Clinton or Planned Parenthood.

It was a digital version of reaching across the aisle. A simple intervention that could have big effects for social policy.

Then, at the end of the month, Bail and his team measured users’ attitudes. How they felt about various political and social issues. Things like whether government regulation is beneficial, whether homosexuality should be accepted by society, and whether the best way to ensure peace is through military strength.

It was a huge undertaking. Years of preparation and thousands of hours of work. The hope was that, as thousands of pundits, columnists, and other talking heads have argued, connecting with the other side would bring people closer together.

But that’s not what happened. Exposure to the other side didn’t make people more moderate.

In fact, just the opposite. Exposure to opposing views did change minds, but in the opposite direction. Rather than becoming more liberal, Republicans exposed to liberal information became more conservative, developing more extreme attitudes toward social policies. Liberals showed similar effects.

Excerpt from: Catalyst by Jonah Berger

💎 Approaches for coming up with big ideas (David Ogilvy)

“Stuff your conscious mind with information, then unhook your rational thought process. You can help this process by going for a long walk, or taking a hot bath, or drinking half a pint of claret. Suddenly, if the telephone line from your unconscious is open, a big idea wells up within you.”

David Ogilvy, adman

Excerpt from: How to Have Great Ideas: A Guide to Creative Thinking and Problem Solving by John Ingledew

💎 Why psychologists believe that focus groups are far less insightful than some marketers think (Head cannot look into Gut)

‘The heart has its reasons,’ Blaise Pascal wrote more than three centuries ago, ‘which reason knows nothing of’. So it is with the conscious and unconscious minds. Head cannot look into Gut and so it has no idea how Gut assembles its judgments, which is why psychologists believe that focus groups are far less insightful than some marketers think. If you put people together in a room, show them a car commercial, and ask them how they feel about the car, you will get clear answers. ‘I don’t care for it,’ a man may say. Fine. Why not? He frowns. ‘Um, the styling on the front is ugly. And I want a more powerful engine.’ That looks like good insight, just the sort of thing a company can use to design and market its products. But it’s not. This man’s snap judgment – ‘I don’t like that car’ – came from Gut. But the interviewer is talking to Head. And Head doesn’t have a clue why Gut doesn’t like the car. So Head rationalizes. It looks at the conclusion and cobbles together an explanation that is both plausible and quite possibly wrong.

Excerpt from: Risk: The Science and Politics of Fear by Dan Gardner

💎 We often selectively interpret evidence to fit our prior beliefs (and use it to further cement them)

In 1979 – when capital punishment was a top issue in the United States – American researchers brought together equal numbers of supporters and opponents of the death penalty. The strength of their views was tested. Then they were asked to read a carefully balanced essay that presented evidence that capital punishment deters crime and evidence that it does not. The researchers then retested people’s opinions and discovered that they had only gotten stronger. They had absorbed the evidence that confirmed their views, ignored the rest, and left the experiment even more convinced that they were right and those who disagreed were wrong.

Excerpt from: Risk: The Science and Politics of Fear by Dan Gardner