
In-Depth: How Easily Biases Distort What We Believe (In The Workplace)

August 17, 2020


In this series of ‘In-Depth’ posts, we take the time to consider the bigger picture. This series is not about easy answers or practical tips, but about developing a more complete understanding of what may be going on.

If you take the time to fully read and digest this post, you will (hopefully):

  • Understand how easily your thinking, and the thinking in groups, is biased, tainted, or distorted by eight very common cognitive and social biases. The less susceptible you think you are, the more susceptible you usually are.
  • Learn how the impact of eight common biases — like False Causality, Confirmation Bias and the Anchoring Bias — can be reduced by simply becoming aware of them.
  • Realize how biases can lead to very dangerous and wrong conclusions about what is happening in and around organizations.
  • See how approaches like the Scrum Framework and Liberating Structures can prevent some of these biases by bringing in diverse voices and by validating your assumptions against reality.

The writing and research for this post was made possible by our patrons. If you like it, and you’d like more of it, please consider supporting us too.

Setting the Stage

I have always been fascinated by cognitive and social biases. As we’ll see in this post, we often vastly overestimate our ability to arrive at sound, rational conclusions that adhere to the facts. While this is already a bias itself (called the “Bias blind spot”), it has big ramifications for our work in organizations too. Because what does it mean about our work when our reasoning is often so flawed? When the beliefs we have, and the assumptions we make, are shaped and distorted by bias?

For me personally, it's one of the reasons why I like the Scrum Framework, as it gives guide-rails to help us think and validate our assumptions with data. It's also why I like Liberating Structures and how they purposefully include different perspectives and voices to reduce bias. Neither makes you invulnerable to bias, but they hopefully reduce its impact.

This post is about eight biases and how they manifest in the workplace. My hope is that after reading it, you'll be more aware of them and can reduce their impact. It was made possible by our dear patrons.

When is something a bias?

A listener of the science podcast Radiolab once called in to share a weird observation. He had noticed how he would often run into the same combination of vehicles at the intersections he crossed: one cyclist, one car, and one truck. He judged the odds of that happening to be so low that he became suspicious. What was going on here? A statistician explained that the odds are actually quite high when you consider the number of intersections you cross and the number of vehicles that pass. This was further exaggerated because the listener only remembered those instances where the combination was what he expected. Nothing suspicious was going on. Instead, the listener had experienced two biases: sampling bias (underestimating the odds) and confirmation bias (only remembering confirmations).
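
To get a feel for the statistician's point, here is a minimal simulation sketch in Python. The vehicle mix, probabilities, and the number of intersections are made-up assumptions for illustration, not data from the episode:

```python
import random

# Made-up assumptions: at each intersection you notice three passing
# vehicles, each independently a car, cyclist, or truck with these odds.
VEHICLES = ["car", "cyclist", "truck"]
WEIGHTS = [0.6, 0.3, 0.1]

def sees_combo_today(intersections: int) -> bool:
    """Did at least one intersection show exactly one of each vehicle?"""
    for _ in range(intersections):
        passing = sorted(random.choices(VEHICLES, WEIGHTS, k=3))
        if passing == sorted(VEHICLES):
            return True
    return False

# Estimate the odds of running into the combination during a commute
# that crosses 30 intersections.
trials = 10_000
hits = sum(sees_combo_today(30) for _ in range(trials))
print(f"Chance of seeing the 'rare' combination at least once: {hits / trials:.0%}")
```

With these assumed numbers, the combination turns up on the vast majority of simulated commutes. What feels like a rare coincidence becomes almost inevitable once you account for how many intersections you actually cross.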

Although this is an innocent example of biases, their effects are not always as benign. Biases are at the root of many sociological problems, like racism, sexism, political rifts, and general inequality. 


So what do we mean by “bias”? A naive interpretation would be to assume that biases are inherently wrong. For a long time, behavioral scientists would have agreed with this view. More recently, however, biases have come to be understood as shortcuts that reduce the cognitive capacity we need to make decisions (e.g. Gigerenzer, 2000). Even though they are distortions, they can sometimes result in the right conclusions. But the fact remains that biases can easily lead to wrong beliefs that actively hurt yourself and/or others, especially when you’re unaware of them.

1. Confirmation bias

The first, and most researched, bias is confirmation bias. Initially coined by Wason (1960), this bias manifests when we only look for confirmation of our beliefs. It happens, for example, when we don’t trust a certain person or group of people, and then only see behavior that fits with our belief (or even interpret it as such), without considering observations that don’t fit (Oswald & Grosjean, 2004). This is further compounded by how our beliefs shape our understanding of their intentions in the first place. This creates a self-fulfilling prophecy where the belief strengthens itself.

This bias often manifests as a “positive test strategy” (Klayman & Ha, 1987) where people only test assumptions by looking for what confirms them, but not by also considering what would falsify them. It is one of the primary mechanisms behind “echo chambers” on the internet, where people constantly reaffirm and strengthen beliefs that are in conflict with the facts. 

Examples in the workplace

  • When organizations start initiatives or change programs and only see, or look for, evidence that supports the belief that they are working, but fail to see where they don’t work or even cause damage.
  • Confirmation bias can easily lead to sexism and racism in the workplace, as certain behaviors are seen (or interpreted) to reinforce a stereotype, while behavior that contradicts the stereotype goes unnoticed.
  • When people are opposed to an idea, they are likely to only look for what confirms their opposition. For example, they may start looking for blog posts that support their belief (but ignore many others that reject it). Or they may look for people who are also opposed.

How to reduce this bias

Like all other biases, this bias can be reduced by being aware of when it happens. So whenever you or a group you are with needs to validate an assumption or a belief, also consider what you’d need to see to challenge that belief. The Liberating Structure Myth Turning is a good example of this strategy. This is also a reason to actively look for information that conflicts with your beliefs, or surround yourself with people with different beliefs.

2. Fundamental attribution error

In our day-to-day life, we often attribute the behaviors of others to their inherent traits — like personality, experience, and skills. But as it turns out, our behavior is determined to a much larger degree by the situation than by inherent traits. Ross & Nisbett (2011) offer an extensive overview of research in this area. 

This naive psychology where we attribute behavior to inherent traits is an example of a bias called the fundamental attribution error. Initially coined by social psychologist Lee Ross (1977), it happens when people underestimate the influence of the situation on the behavior of others while overestimating the influence of their personal traits and beliefs (Berry & Frederickson, 2015). This bias is also known as the ‘correspondence bias’.

Examples in the workplace

  • When you attribute a mistake that someone in your team makes to their lack of skill, their inherent clumsiness, or their overall intelligence, without considering situational factors like time pressure, the novelty of the problem, and the (lack of) support that this person received from others.
  • When you explain a colleague’s grumpiness as indicative of their character, without considering situational factors — like their home situation or the work pressure they are experiencing.
  • During a job interview, an applicant may be considered introverted or shy (traits) because they don’t talk much and seem nervous, where it is likely that this person would behave very differently if not for the pressure of the situation (e.g. with friends or co-workers).

For each of these examples, taking a situational view might have resulted in a different response. The problem with the fundamental attribution error is that it puts responsibility entirely with the other person (and their personality, skills, and experience). Even worse, it can lead us to blame the other person or get angry at them.

How to reduce this bias

More recent analyses have shown that this bias isn’t as fundamental as previously thought (Malle, 2006). For example, the bias mostly disappears when people are made aware of how situational factors influence the behavior (e.g. Hazlewood & Olson, 1986). So one way to reduce the influence of this bias is to ask: “How can I explain the behavior through the situation instead of their personality or other traits inherent to them?”.

3. False causality

When two events or activities happen together, people often conflate them by seeing one as causing the other where no connection exists in reality. This is called a “false causality”, and it is captured in the maxim that “correlation is not causation”. When two events happen at the same time, meaning that they are “correlated”, it does not mean that one causes the other. There are many examples of this:

Examples in the workplace

  • The interpretation of marketing metrics — like conversion rates and customer satisfaction — can easily lead to false causalities. This happens when a current marketing activity is seen as causing the changes in scores where they are only correlated in reality.
  • Big data, in general, is highly susceptible to the false causality bias. When datasets contain many variables and observations, statistical noise alone will produce apparent correlations where no real connection exists (the sketch after this list illustrates this with pure noise). Furthermore, the correlations that do exist are likely just that: correlations, not causal links.
  • Recently, the rise of COVID-19 has been linked to the spread of 5G networks by certain fringe groups. These groups believe that 5G causes Corona-like symptoms, where no such link exists according to extensive and repeated research by the World Health Organisation and health professionals. This is an example of false causality.
Marketing metrics are particularly prone to false causalities, where rises or drops in metrics are attributed to running campaigns.
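
The point about large datasets is easy to demonstrate with pure noise. The sketch below is a hedged illustration with arbitrary numbers: it generates fifty “metrics” that are nothing but random values and still finds pairs that appear strongly correlated.

```python
import random
from itertools import combinations

random.seed(42)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Fifty 'metrics', each just 20 random observations: pure noise, no real links.
metrics = {f"metric_{i}": [random.gauss(0, 1) for _ in range(20)] for i in range(50)}

# With 50 metrics there are 1,225 possible pairs; some will look strongly
# correlated by chance alone.
all_pairs = [(a, b, pearson(metrics[a], metrics[b])) for a, b in combinations(metrics, 2)]
strong = [(a, b, r) for a, b, r in all_pairs if abs(r) > 0.6]

print(f"Pairs correlating above 0.6 in pure noise: {len(strong)}")
print(f"Strongest correlation found: {max(abs(r) for _, _, r in all_pairs):.2f}")
```

None of these correlations mean anything. The more metrics an organization tracks, the more of these phantom relationships it will find.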

How to reduce this bias

One way to disprove a false causality is to look for cases where one event happens but not the other. And when two things always happen at the same time, there might be a third variable causing both. For example, the incidence of violent crime tends to rise and drop along with ice-cream consumption. Does eating ice-cream cause violent crime? Or do people eat more ice-cream because of crime? Of course not. Instead, it is well known that both violent crime and the consumption of ice-cream increase as it gets warmer.
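
To make the “third variable” idea concrete, here is a small sketch with made-up numbers in which temperature drives both ice-cream sales and violent incidents, so the two correlate strongly even though neither causes the other:

```python
import random

random.seed(1)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient (same helper as the earlier sketch)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up model: daily temperature drives both variables; neither causes the other.
temps = [random.uniform(0, 35) for _ in range(365)]
ice_cream_sales = [20 + 3.0 * t + random.gauss(0, 15) for t in temps]
violent_incidents = [5 + 0.5 * t + random.gauss(0, 4) for t in temps]

print(f"Correlation ice cream vs incidents: {pearson(ice_cream_sales, violent_incidents):.2f}")
# Prints a strong positive correlation, produced entirely by the shared cause.
```

Whoever only looks at the two correlated variables will miss the lurking variable entirely, which is exactly how false causalities take hold.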

4. Regression fallacy

This fallacy is caused by a statistical effect called regression to the mean. It implies that an extreme score on a variable is likely to be followed by one that is closer to the average, provided that nothing has profoundly changed in between measures. The fallacy happens when we attribute the change in the second score to anything other than this statistical effect, like skill, a particular intervention, beginner’s luck, or time.
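
As a quick illustration, here is a sketch with arbitrary numbers that are not tied to any real team or metric. It repeatedly takes a team that just posted an exceptionally high score and shows that the next score falls back toward the average, with no intervention at all:

```python
import random

random.seed(7)

TRUE_AVERAGE = 50   # the underlying average score of a team
NOISE = 10          # scores vary randomly around that average

def sprint_score() -> float:
    return random.gauss(TRUE_AVERAGE, NOISE)

follow_ups = []
for _ in range(10_000):
    first = sprint_score()
    if first > TRUE_AVERAGE + 15:           # only look at exceptionally high scores
        follow_ups.append(sprint_score())   # nothing changed in between measures

print(f"Average score right after an extreme high: {sum(follow_ups) / len(follow_ups):.1f}")
# Prints roughly 50: the 'drop' is regression to the mean, not a failed intervention.
```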

Examples in the workplace

  • When a Scrum Team scores much higher than their average on a metric of their choosing (e.g. happiness, velocity, defects), and if nothing has profoundly changed in-between measures, the second score is likely to be much closer to the average (and thus, lower). 
  • This fallacy can lead people to conclude that punishment is more effective than rewards. When an employee scores exceptionally high on a metric and is rewarded for that, it is statistically more likely that the next score will be closer to the average (and thus, lower). But when an employee scores exceptionally low on a metric and is then punished for it, the next score is likely to be closer to the average (and thus, higher). The fallacy happens when we conclude from this that punishment works, and rewards don’t (Defulio, 2012).

People often fail to understand this fallacy because they underestimate how much of their behavior and their outcomes are influenced by randomness (Taleb, 2007).

How to reduce this bias

The best way to avoid this fallacy is to be cautious when you interpret a single extreme score followed by one that is closer to the average. Instead of attributing the difference between the first and the second score to an intervention, to skill, or to time, consider that it may simply be regression to the mean.

5. Anchoring bias

This is a cognitive bias where recently acquired information influences the decision of a person more than it should (Tversky & Kahneman, 1974).

Examples in the workplace

  • When teams estimate work, hearing an initial estimate is likely to “anchor” further estimates. So when people are asked how much time something will take, and they are offered an initial estimate of 20 days, their own estimates will gravitate towards that number. 
  • It explains why people often return to the first option after exploring many others. This is a subtle example of the anchoring bias where the first option is used as a reference for all the other ones and thus remains the most attractive one.
  • In a subtle way, and I’m noticing this while writing, the anchoring bias also explains why it is difficult to write something original when you’ve just read something relevant. It is difficult not to simply replicate what you’ve read.
  • Anchoring often happens in negotiations, where an initial offer anchors the other offers. Research (Janiszewski & Uy, 2008) even shows that precise offers (e.g. 267.200) anchor more than rounded ones (e.g. 300.000).

How to reduce this bias

The anchoring bias explains why Planning Poker requires participants to show their estimates at the same time. It also explains why Liberating Structures often start with giving people a few minutes of silent thinking before moving into group interactions. It may not prevent anchoring bias entirely, but it hopefully dampens it.

I’ve personally found it helpful to distance myself from a decision for a while and revisit it with fresh eyes. The influence of the initial anchor is smaller then, especially when I take care not to anchor myself again.

6. Survival bias

Survival bias happens when failures are ignored while you are evaluating whether a process or decision is the right one (Shermer, 2014). It is a more specific form of confirmation bias.

A famous example of survival bias comes from the Second World War, when Allied planes were reinforced in those areas where ground crews observed many bullet holes. It seemed like a good idea, until the mathematician Abraham Wald pointed out that this was the damage on the planes that survived, and that these holes were apparently not critical enough to bring the planes down. Instead, he recommended reinforcing the areas without bullet holes (Mangel & Samaniego, 1984).

You’ll end up with the wrong conclusions about safety and maintenance if you only look at the planes that survived the trip back.
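
The logic of Wald’s argument can be illustrated with a toy simulation. The hit areas and loss probabilities below are made-up assumptions, not historical data; the point is only that planes hit in a truly critical area rarely make it back, so the returning sample under-represents exactly the damage that matters most:

```python
import random

random.seed(3)

AREAS = ["wings", "fuselage", "tail", "engine"]
# Made-up assumption: a hit on the engine is usually fatal, other hits rarely are.
LOSS_CHANCE = {"wings": 0.05, "fuselage": 0.05, "tail": 0.10, "engine": 0.80}

observed_hits = {area: 0 for area in AREAS}   # what ground crews get to see
actual_hits = {area: 0 for area in AREAS}     # what actually happened

for _ in range(10_000):
    hit = random.choice(AREAS)                # each plane takes one hit somewhere
    actual_hits[hit] += 1
    if random.random() > LOSS_CHANCE[hit]:    # the plane survives and returns
        observed_hits[hit] += 1

print("Hits seen on returning planes:", observed_hits)
print("Hits on all planes:           ", actual_hits)
# Engine hits are common in reality, but nearly invisible on returning planes.
```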

Examples in the workplace

  • HR departments can conclude that their recruitment process is working well because it is producing suitable candidates. But the fact that some candidates “survived” the process isn’t enough to conclude that it works. How many suitable candidates were (wrongly) rejected? How many candidates did the process miss that would’ve been more suitable? Without that data, survival bias is likely.
  • Survival bias can lead management teams to copy practices from other organizations that appear successful to them, without considering where those practices didn’t work or even caused damage. This is a significant problem with popular business books, like “Good to Great”, and stories about successful CEOs, which only focus on the successes but ignore the failures (Shermer, 2014).
  • When comparing different approaches to developing software, survival bias can lead someone to conclude that plan-driven approaches work well based on a few success stories where they did, even though the (admittedly limited) research in this area suggests that Agile approaches are three times more likely to result in a successful outcome (e.g. Standish Group, 2015).

How to reduce this bias

The best way to reduce this bias is to be skeptical of copying what made someone or something successful (i.e. a “survivor”) without also considering the failure rate. Search for examples of where it didn’t work.


Personally, this is why I’m always very skeptical of “best practices” and success stories I hear at conferences. Although their success may be real, it doesn’t mean that the practices they used were the cause of it. 

7. Illusory superiority

In a study among academic teachers, 94% rated themselves as above average in terms of their teaching skills (Cross, 1977). Most drivers consider their own driving skills above average (Roy & Liersch, 2013). And people overestimate the contribution of their country to world history (Zaromb et al., 2018). In short, most people have an inflated and overly optimistic view of their own abilities and contributions, or of those of the group they are part of, compared to others.

This self-serving optimism also manifests in other biases. The “Optimism bias”, for example, happens when we underestimate our chance of misfortune and disaster compared to others. Weinstein (1980) found that people consistently rate their own chance of developing health problems, like alcohol addiction, as much lower than that of others.

Another variation is the “Dunning-Kruger effect”, where the less experienced people are at a skill, the more likely they are to overestimate their ability (Kruger & Dunning, 1999). Similarly, the less people know about a field of expertise, the more confident their opinions about that field tend to be, even when those opinions are wrong.

Examples in the workplace

  • This bias easily leads to frustration when people feel they are contributing more to the team than others, even when that is not true in reality. Because people can’t see the whole system, and how much everyone else contributes to it, they tend to overestimate their own contribution.
  • Product Owners can overestimate the degree to which they are familiar with the needs of their stakeholders. Through the Dunning-Kruger effect, this becomes more pronounced the less Product Owners actually know about their product or their stakeholders.
  • The Dunning-Kruger effect manifests when people without any experience with software development make strong claims about how easy a particular change should be. Many forms of micro-management are examples of this effect as well.
  • In a very real sense, this bias can easily lead to overly optimistic estimates when little is known about the work yet. We tend to overestimate both our knowledge of the problem and our ability to resolve it.

Whatever the case, this bias shows that most people are (overly) optimistic about their own abilities and (overly) confident in their beliefs.

How to reduce this bias

Illusory superiority is difficult to overcome, as the bias exists because people are unaware of it. In general, it helps to encourage diversity in opinions and viewpoints in groups and to create space for people to voice their views without fear of being judged for it. 

Personally I’ve found the Liberating Structure Conversation Cafe a great way to do this. Once there is openness in groups, people can learn to see that others may have more experience with something than they do, and start trusting them.

8. Social conformity

Another class of biases is related to our social nature. For tens of thousands of years, our ancestors had to depend on others to survive. So being able to fit in with a group was a core survival strategy. 

One example of this is our susceptibility to follow the beliefs of the majority in our group, even if that belief is objectively wrong. The psychologist Solomon Asch (1955) demonstrated this with his famous “conformity studies”. A group of people, with only one real participant among them, had to judge out loud which of a set of lines of different lengths matched a reference line. The real participant was unaware that the other members were all confederates of the researchers. After some initial rounds of this simple task, the confederates would collectively pick the same wrong line. The researchers found that 74% of the participants followed the opinion of the majority in at least one round, even though it was objectively wrong. While most participants knew it was the wrong answer and went along because of the social pressure, some participants actually made themselves believe they’d picked the right answer. The findings of this study have been frequently reproduced in other cultures, environments, and groups (e.g. Bond & Smith, 1996).

Social conformity can also cause groups to reject people with different perspectives.

Social conformity plays a big role in another social bias called “Groupthink” (Janis, 1972). Here, our desire for social conformity with a group takes precedence over critical reflection — even when the decisions of the group are unethical or dangerous.

Examples in the workplace

  • When decisions need to be made in a group, the opinion of the majority will likely be followed — even if that opinion is factually wrong or at least questionable. This effect becomes more pronounced as the majority increases and starts appealing to social norms (e.g. “Don’t be so difficult all the time” or “Let's just get along”).
  • People with diverging opinions are often rejected by groups based on their unwillingness to socially conform (e.g. Schachter, 1951).
  • Social conformity can make negative sentiments and gossip spread through the organization. If the majority gossips, the minority is likely to follow. When the majority expresses cynicism and negativity, the minority will follow. I should note that the (positive) reverse is also true.
  • Social conformity is one of the processes by which the “status quo” in an organization is maintained, both in what people believe (or say they believe) and how they behave. This explains why it is often so hard to change how work is done in organizations. 

Our social nature makes it hard to prevent social conformity, and in many cases conformity is actually useful. Still, it’s important to be aware of it, and of how powerfully it can distort beliefs and decisions.

How to reduce this bias

We know from research (Asch, 1955) that social conformity decreases as the minority increases, becomes more visible, or is purposefully given space.

Closing words

This post captures only a handful of cognitive, social, and logical biases. It demonstrates how flawed our reasoning and thinking can be. Then again, “flawed” may be too strong a word. A milder perspective is to understand biases as shortcuts that our brains have evolved to reduce the processing capacity we need and to make snap decisions.

But although biases may serve a helpful function, they can easily lead to dangerously wrong beliefs. They are often the foundation of racism, of intolerance, and fear of others. On a smaller scale, they impact the decisions we take in our workplaces, with our colleagues, and within our teams. 

Perhaps approaches like the Scrum Framework and Liberating Structures can help here too — that is my hope — but it starts with recognizing that biases exist and that they distort our ability to arrive at solid conclusions and well-grounded beliefs.

References

  • Asch, S. E. (1955) “Opinions and social pressure”. Readings about the social animal.
  • Baron, J. & Hershey, J. C. (1988). Outcome Bias in Decision Evaluation. Journal of Personality and Social Psychology, 54(4), pp. 569–579.
  • Berry, Z. & Frederickson, J. (2015). The Journal of Integrated Social Sciences. Vol 5(1) 2015.
  • Cross, P. K. (1977). Not Can But Will College Teachers be Improved?, New Directions for Higher Education, 17, pp. 1–15.
  • Defulio, A. (2012). “Quotation: Kahneman on Contingencies”. Journal of the Experimental Analysis of Behavior. 97 (2): 182.
  • Gigerenzer, G. (2000). Adaptive Thinking: Rationality in the Real World. Oxford: Oxford University Press.
  • Janis, I. L. (1972). Victims of groupthink; a psychological study of foreign-policy decisions and fiascoes. Boston: Houghton, Mifflin.
  • Janiszewski, C. & Uy, D. (2008). Precision of the Anchor Influences the Amount of Adjustment, Psychological Science, 19(2), pp. 121–127.
  • Hazlewood, J. D. & Olson, J. M. (1986). Covariation information, causal questioning, and interpersonal behavior. Journal of Experimental Social Psychology, Vol 22 (3), pp. 276–291.
  • Klayman, J., & Ha, Y.-w. (1987). Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review, 94(2), 211–228.
  • Kruger, J. & Dunning, D. (1999). Unskilled and Unaware of it: How difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments. Journal of Personality and Social Psychology, 77 (6). pp. 1121–1134.
  • Malle, B. F. (2006). The Actor-Observer Asymmetry in Attribution: A (Surprising) Meta-Analysis. Psychological Bulletin, Vol 132 (6), pp. 895–919.
  • Mangel, M. & Samaniego, F. (June 1984). Abraham Wald’s work on aircraft survivability. Journal of the American Statistical Association. 79 (386): 259–267.
  • Ross, L. & Nisbett, R. E. (2011). The Person and the Situation, 2nd ed. Pinter & Martin Ltd.
  • Oswald, M. E. & Grosjean, S. (2004), “Confirmation Bias”, in Pohl, Rüdiger F. (ed.), Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement, and Memory, Hove, UK: Psychology Press, pp. 79–96
  • Ross, L. (1977). The intuitive psychologist and his shortcomings. Advances in Experimental Social Psychology, 10, 173–220.
  • Roy, M. M. & Liersch, M. J. (2013). I Am a Better Driver Than You Think: Examining Self-Enhancement for Driving Ability, Journal of Applied Social Psychology, 43(8). Retrieved online at April 22, 2020 from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3835346/
  • Schachter, S. (1951). “Deviation, Rejection, and Communication”. The Journal of Abnormal and Social Psychology. 46(2): 190–207.
  • Shermer, M. (2014). How the survivor bias distorts reality, Scientific American, 311(3). Retrieved online at April 23, 2020 from https://www.scientificamerican.com/article/how-the-survivor-bias-distorts-reality
  • Standish Group, 2015. Chaos Report 2015. Retrieved online at April 22, 2020 from https://standishgroup.com/sample_research_files/CHAOSReport2015-Final.pdf.
  • Taleb, N. N. (2007). Fooled by Randomness, Penguin.
  • Tversky, A. & Kahneman, D. (1974). “Judgment under Uncertainty: Heuristics and Biases”. Science, 185(4157), pp. 1124–1131.
  • Wason, P. (1960). “On The Failure to Eliminate Hypotheses in a Conceptual Task”. Quarterly Journal of Experimental Psychology. 12 (3): 129–140.
  • Weinstein, N. D. (1980). Unrealistic optimism about future life events. Journal of Personality and Social Psychology, 39(5), pp. 806–820. Retrieved online at April 22, 2020 from https://psycnet.apa.org/record/1981-28087-001.
  • Zaromb, F. M., Liu, J. H., Paez, D., Hanke, K, Putnam, A. L., Roediger III, H. L. (2018). We Made History: Citizens of 35 Countries Overestimate Their Nation’s Role in World History, Journal of Applied Research in Memory and Cognition, 7(4), pp. 521–528.
