Mental Models: Decision-making isn’t as logical as Lego

By Gareth Lock

What do the following all have in common?

  • When I started to dive a drysuit, I recognized that the suit compressed as I descended, and the pressure came off as I ascended. At the same time, I was using my depth gauge to confirm whether I was ascending or descending. As I got more experienced, I no longer needed to look at my depth gauge to know if I was moving up or down in the water column; I could tell I had moved 30 cm/1 ft because the suit pressure changed.
  • When I descended through the clear waters of Angelita Cenote in Mexico, I saw a layer below me where the hydrogen sulfide was captured in the halocline. I knew the visibility was going to be bad as we descended into it, but I wasn’t expecting it to be that bad.
  • When I was doing my advanced trimix (GUE Tech 2) training, the early dives used a twinset and three stages, all full of 32%. The first time I pulled the dump valve to stabilize at the first stop, far more gas escaped than I expected because I was carrying much more weight than on previous dives, and I sank back down! With practice, I worked out how long to hold the dump valve open to stabilize correctly.
  • When I started to dive on a rebreather, I remember breathing in to arrest my descent after slowing it most of the way with the wing inflator. Unlike on open circuit, inhaling from the loop doesn’t add buoyancy, so I didn’t stop, and I hit the bottom.

All of them represent mental models that I had, or had developed, as I learned to dive in different environments. These mental models allowed me to ‘not think’ about what I was doing; the actions were driven by subconscious processing. However, it took time to develop them because the early models were limited or flawed, and as a consequence, I either made mistakes or was task loaded trying to solve the problem, which meant I wasn’t efficient.

Gareth exiting from Ponderosa Cenote, near Tulum, Mexico. Credit: Ellen Cuylaerts


But mental models don’t just relate to the physical or technical skills we develop; they also apply to the decisions we make and the risks we take. We are ultimately biased in our decision-making, and we need to be aware of this. The 175+ biases listed on the cognitive bias Wikipedia page can be reduced to four broad categories. Ironically, this reduction is itself a result of one of the biases!

  • Too much information. There is just too much information in the world; we have no choice but to filter almost all of it out. Our brain uses a few simple tricks to pick out the bits of information that are most likely to be useful in some way.
  • Not enough meaning. The world is very confusing, and we end up seeing only a tiny sliver of it, yet we need to make some sense of it in order to survive. Once the reduced stream of information comes in, we connect the dots, fill in the gaps with what we think we already know, and update our mental models of the world.
  • A need to act fast. We’re constrained by time and information, and yet we can’t let that paralyze us. Without the ability to act fast in the face of uncertainty, we surely would have perished as a species long ago. With every piece of new information, we need to do our best to assess our ability to affect the situation, apply it to decisions, simulate the future to predict what might happen next, and otherwise act on our new insight.
  • What should we remember? There’s too much information out there. We can only afford to keep around the bits that are most likely to prove useful in the future. We need to make constant bets and trade-offs around what we try to remember and what we forget. For example, we prefer generalizations over specifics because they take up less space. When there are lots of irreducible details, we pick out a few standout items to save and discard the rest. What we save here is what is most likely to inform our filters related to ‘Too much information’, as well as to inform what comes to mind during the processes mentioned in ‘Not enough meaning’ around filling in incomplete information. Unfortunately, it’s all self-reinforcing.

Although we don’t like to admit it, many of the decisions we make are based on the assumption that something will happen the way it did last time, because this allows us to reduce the mental energy we consume. We often talk of complacency as a bad thing, or say that ‘making assumptions’ makes an ass out of you and me, but in reality, these shortcuts are what allow us to operate at the pace we do.

Lanny descending into the hydrogen sulfide filled halocline at Angelita Cenote, near Tulum, Mexico. Credit: Gareth Lock


As divers (and humans), we need to change our attitude towards complacency and assumptions and recognize that we can’t stop using these tools. Rather, we should identify when such automatic behaviors, and the mental models behind them, will get us into serious trouble if we remain on autopilot, and then slow down or stop to take a little more time to look at the situation. Sometimes this is difficult given the way our brains work, because we don’t want to miss out on an awesome dive or because there are pressures to complete it. In fact, the more we are invested in a dive (e.g., time, money, a long transit or road journey to get there, peer pressure, or not wanting to let our buddies/teammates down), the harder it is to say, “We are not doing this dive” or “I’m thumbing it now.” While anyone can thumb a dive at any time for any reason, sometimes it’s not that simple in the moment, especially if your previous experiences of doing so were negative.

In addition to targeted training, simple ways to reduce the impact of biases include:

  • Using briefs and checklists to ensure that the assumptions the dive leader/diver has are aligned with the reality of the situation and what the team/diver thinks is happening.
  • Using debriefs to understand whether the assumptions made prior to the dive were indeed valid. In addition to the four key topics I teach divers to cover in a debrief (What did I do well? Why? What do I need to improve on? How? What did the team do well? Why? What does the team need to improve on? How?), I also get teams to consider ‘What was the greatest risk we took during that dive/task?’ as a way of learning how close they got to the failure line and what they could, or will, do differently next time.

Mental models are essential to high-performance teams. They reduce the mental workload involved, which means team members can focus on the goal of the dive. However, if those mental models are not closely aligned with reality, accidents will happen. Most errors happen because of poor perception rather than poor decisions based on good information. As a good friend said to me recently, “No surprises, no accidents.”


Karim Hamza delivering a debrief to three divers at the performance EDGE event in January 2017 in Seattle.


Gareth Lock is an OC and CCR technical diver with the personal goal of improving diving safety and diver performance by enhancing the knowledge, skills, and attitudes towards human factors in diving. Although based in the UK, he runs training and development courses across the globe as well as via his online portal https://www.thehumandiver.com. He is the Director of Risk Management for GUE and has been involved with the organization since 2006 when he completed his Fundamentals class.
