We are wonderful creatures, blessed with an abundance of senses to help us understand our world. But we are also terribly biased: there are more than 100 documented cognitive biases, ranging from decision-making and social biases to memory and behavioural ones.
The question to ask is: when a team is trying to understand marketing and business data, how does it account for these biases?
At Gallantway we are continually looking for ways to improve the decision-making outcomes of our digital marketing activity and analysis.
We have found that simply being aware of our biases is a big step towards managing them. But there are also strategies for actively accounting for these biases when analysing data or building strategic digital campaigns. Let’s take a look at some of the well-known ones and talk about what you can do in your business.
Confirmation bias is defined as ‘the tendency to interpret new evidence as confirmation of one’s existing beliefs or theories.’
People tend to develop confirmation bias when they gather or remember information in a selective manner. In sectors like finance, confirmation biases can lead investors to be overconfident and ignore evidence that their strategies will lose money.
Similarly, in digital marketing a team may be reluctant to change strategy because of overconfidence and selectively gathered information suggesting that continuing with a particular strategy will lead to success.
Our recommendation for combating confirmation bias is for teams to adopt contrary thinking: imagine what an unsuccessful campaign, strategy or execution would look like, and why it might happen.
The Streetlight Effect is also known as Observational Bias.
The Streetlight Effect can be defined as ‘a bias that occurs when researchers only look where they think they will find positive results, or where it is easy to record observations.’
This is especially true in data-heavy organisations, and it is becoming more common as the amount of data available for interpretation grows: it is easy to become selective about what you look for and what you record.
There is a well-known parable: a policeman sees a drunk man searching for something under a streetlight and asks what he has lost. The man says he lost his keys, and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the man replies no, he lost them in the park. The policeman asks why he is searching here, and the man replies, “this is where the light is.”
Our recommendation is to ensure that data is being recorded and evaluated fairly. This also includes the representation and communication of data back to the business from marketing teams. This means that it is important not to cherry-pick charts or parts of charts, and to take the view that a negative result is an opportunity to learn, iterate and develop new paths to a business outcome.
An outcome bias is defined as ‘an error made in evaluating the quality of a decision when the outcome of that decision is already known.’
Put simply, we tend to judge past decisions based on their outcomes. In reality, we should be judging the quality of the decision that was made based on the information available at the time of the decision.
Our recommendation when reviewing digital marketing decisions is to evaluate a strategic decision while ignoring information collected after the fact, focusing on what the right answer was at the time the decision was made, given the information that was available.
Overconfidence is a very well-known bias in which a person’s subjective confidence in his or her judgments is reliably greater than the objective accuracy of those judgments, especially when confidence is relatively high.
When researchers ask people how confident they are in the accuracy of their beliefs or answers to particular questions, the data show that confidence consistently exceeds accuracy; that is, people are more confident that they are right than they reasonably should be.
Overconfidence manifests itself in a number of ways.
The planning fallacy describes the tendency for people to overestimate their rate of work or to underestimate how long it will take them to get things done. It is strongest for long and complicated tasks and disappears or reverses for simple tasks that are quick to complete. Our suggestion is that teams strive to record and measure their time accurately as this can deliver more accurate forecasting.
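One simple way to act on this recommendation is to keep a log of estimated versus actual hours and use the historical overrun to calibrate future forecasts. The sketch below is a minimal illustration of that idea; the task names and hours are hypothetical.

```python
# A minimal sketch of tracking estimates against actuals to counter
# the planning fallacy. Task names and hours are hypothetical examples.
tasks = [
    ("Landing page copy",  {"estimated_hours": 4, "actual_hours": 7}),
    ("Campaign reporting", {"estimated_hours": 2, "actual_hours": 3}),
    ("Audience research",  {"estimated_hours": 6, "actual_hours": 11}),
]

# Overrun factor: how much longer work actually took than estimated.
total_estimated = sum(t["estimated_hours"] for _, t in tasks)
total_actual = sum(t["actual_hours"] for _, t in tasks)
overrun = total_actual / total_estimated

print(f"Average overrun factor: {overrun:.2f}x")  # 21 / 12 = 1.75x

def calibrated_estimate(raw_hours: float) -> float:
    """Scale a fresh gut-feel estimate by the team's measured overrun."""
    return raw_hours * overrun
```

A new task estimated at 4 hours would then be forecast at `calibrated_estimate(4)`, i.e. 7 hours, which is what the historical record suggests it will actually take.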
Overplacement is a judgment of your performance compared to another. This subsection of overconfidence occurs when people believe themselves to be better than others, or more often “better-than-average”. It is the act of placing yourself or rating yourself above others. Overplacement more often occurs on simple tasks, ones we believe are easy to accomplish successfully.
In diffusion of innovation theory, pro-innovation bias is defined as the belief that an innovation should be adopted by the whole of society without the need for alteration.
Pro-innovation biases see us driving around in flying cars, living underwater and driving on solar roadways. They come about when an innovation’s key stakeholders hold such a strong bias in favour of the innovation that they cannot see its weaknesses or limitations.
Our suggestion for avoiding this bias is to consider reasons why society might not wish to adopt an innovation, and then to consider what changes would be required to help those people adopt it.
The gambler’s fallacy is the mistaken belief that if something happens more frequently than normal during some period, it will happen less frequently in the future, or that if something happens less frequently than normal during some period, it will happen more frequently in the future (presumably as a means of balancing nature).
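The fallacy is easy to check empirically with independent events. The sketch below simulates a large number of fair coin flips and measures how often heads follows a streak of three tails; the gambler’s fallacy predicts more than half the time, but independence predicts roughly 50%.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Simulate a long sequence of fair coin flips (1 = heads, 0 = tails).
flips = [random.randint(0, 1) for _ in range(1_000_000)]

# Collect every flip that immediately follows three tails in a row,
# then measure how often that next flip came up heads.
after_streak = [flips[i] for i in range(3, len(flips))
                if flips[i - 3:i] == [0, 0, 0]]

rate = sum(after_streak) / len(after_streak)
print(f"P(heads after three tails) ≈ {rate:.3f}")  # stays close to 0.500
```

However long the losing streak, the next independent event is unaffected by it, which is exactly why “we’re due for a win” is not a strategy.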
Often used by salespeople during negotiation, anchoring is a cognitive bias describing the common human tendency to rely too heavily on the first piece of information offered when making decisions; individuals then use that initial piece of information to make subsequent judgments.
In addition to recognising our biases, debiasing the way we work is an important step to reducing the influence of bias on our business and teams. There are several ways that one can introduce debiasing into decision-making and analysis.
By Changing Incentives
By changing the way that we incentivise teams and organisations, we can make teams more accountable for their decisions. This can be done by increasing the appropriate incentives and, as a consequence, increasing the extent to which teams invest cognitive resources in decision-making.
Incentives can be calibrated to shift preferences toward more beneficial behaviour. Price cuts on healthy foods increase their consumption in school cafeterias, and soda taxes appear to reduce soda consumption by the public. People are often willing to use incentives to change their own behaviour through a commitment device: shoppers, for example, were willing to forego a cash-back rebate on healthy food items if they did not increase the percentage of healthy foods in their shopping baskets.
Incentives can also produce negative results when they are poorly calibrated or weaker than the social norms that were preventing the undesirable behaviour. It is also worth noting that large incentives can lead people to choke under pressure.
Changing the way information is presented, or the way decisions are evoked, is another step towards debiasing. The idea behind nudges is that people may choose more preferable outcomes if they are better able to understand the analysis. For example, people may choose healthier foods if they can better understand their nutritional contents.
Nudge Theory itself is a vast topic; a nudge is considered to be ‘any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates. Putting fruit at eye level counts as a nudge. Banning junk food does not.’
An obvious but sometimes overlooked way of debiasing teams and individuals is through training.
Training can be considered debiasing when it provides people with personalised feedback on the degree and direction in which they exhibit bias. Debiasing training can also include teaching statistical reasoning, standard models and rules to people who were previously unfamiliar with them.
Finally, encourage teams to take the perspective of a person who will experience the consequences of their decision.
Thanks for reading. If you’re interested in hearing about how we can help you acquire more customers, develop better digital marketing capabilities and accelerate your knowledge of digital marketing, please contact us.