From Wikimedia (created by Buster Benson): a proposed organization of human cognitive biases.

In the book ‘The Undoing Project’, author Michael Lewis dives into the history and work of Daniel Kahneman and Amos Tversky. Their work in decision science and behavioral economics earned Kahneman a Nobel prize, as it helped shape how people think about decision-making under uncertainty (Tversky had died before the prize was awarded, and Nobel prizes are only given to the living; otherwise he would likely have been honored alongside his longtime collaborator).

From their work, the concept of errors and biases in human decision-making sprang forth. They started with a few simple cases, but other researchers have helped the field explode, uncovering dozens of different phenomena.

*Note: It is likely that some or many of these effects appear only in controlled lab experiments and may not generalize well to everyday human decision-making. That’s a different discussion.

Many people use these errors and biases to label humans as error-prone and not to be trusted, and claim that decision-making should be handed to technology (variously described as algorithms, analytics, automation, or AI; I group them all under the term ‘technology’ here to avoid confusion from the differing terms) as often as possible. The belief is that technology will act ‘rationally’, ignoring the situational context or framing and looking only at the facts of the problem.

But Kahneman and Tversky weren’t studying these phenomena from the perspective of a weak human. They didn’t go in to identify human shortcomings; they were simply trying to figure out how the mind works.


Much like perceptual psychologists were able to understand how the eye works by studying optical illusions, Kahneman and Tversky hoped to better understand the human mind by knowing its limitations.

Indeed, while many consider these errors and biases to be signs of human frailty, they are in fact important characteristics that have allowed humans to be successful over many millennia. They are shortcuts that let us navigate a complex, poorly defined world. They help us avoid spending excess cognitive capacity on problems that don’t need it and give us the best chance at survival.

Further, while we think of errors and biases as distinctly human, they actually have equivalents in the technologies we build that are intended to help make our lives easier.

Just as errors, biases, and optical illusions reveal conditions where the human mind struggles, there are clear conditions where software-based technologies will fail consistently.

Simple modifications can make a sign unintelligible to autonomous vehicles. Adding pixelated text to the sign makes it unrecognizable as a stop sign.

Take autonomous vehicles, for example. Clever people have shown many times that it is quite possible to fool them. Researchers have found that slight alterations to a stop sign (like those in the image above) will confuse the vehicle’s analytics: rather than stopping, vehicles that encounter the altered sign interpret it as a 45 mph speed-limit sign. People have also used simple lines of paint or salt to confuse autonomous vehicle sensors, and have found ways to fool facial recognition technologies.

This doesn’t mean that these technologies are poor or that we should abandon them completely (although ethically we really need to consider how we use them). It simply shows that any technology has weak spots that need to be compensated for in the system design. These technologies fail because they are built as rule-based systems: rule-based systems work when their preconditions are met, and they have a chance to fail when those preconditions are not.
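To make that framing concrete, here is a minimal Python sketch of a rule-based classifier in the spirit of the stop-sign example. Everything in it is hypothetical: the feature names, thresholds, and labels are invented for illustration and bear no relation to any real perception system. It only shows how a rule that depends on preconditions stops firing once those preconditions are disturbed.

```python
# A minimal illustrative sketch (hypothetical; not any real perception stack)
# of a rule-based classifier. Feature names and thresholds are invented.

def classify_sign(features: dict) -> str:
    # Rule: an octagonal, mostly red sign reading "STOP" is a stop sign.
    if (features.get("shape") == "octagon"
            and features.get("red_fraction", 0.0) > 0.8
            and features.get("text") == "STOP"):
        return "stop"
    # Rule: a rectangular sign whose text is a number is a speed-limit sign.
    if (features.get("shape") == "rectangle"
            and str(features.get("text", "")).isdigit()):
        return "speed_limit_" + str(features["text"])
    return "unknown"

# Clean input satisfies the preconditions, so the intended rule fires.
print(classify_sign({"shape": "octagon", "red_fraction": 0.95, "text": "STOP"}))
# -> stop

# A few stickers shift the extracted features just enough that no
# precondition holds, and the classifier falls through to "unknown".
print(classify_sign({"shape": "octagon", "red_fraction": 0.55, "text": "ST0P"}))
# -> unknown
```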

Once problems are identified, technology developers will generally build a new rule to handle the case. However, given the complexity of the real world, new issues can always be discovered that skirt the built-in rules. It’s a never-ending game of whack-a-mole: new issues arise, developers bolt solutions on, yet another issue appears, countered by another new rule. This can go on until the end of time, as sketched below.
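As a sketch of that whack-a-mole dynamic (again purely hypothetical, continuing the invented example above), each newly discovered failure tends to get its own bolted-on rule, and the list only grows while the next unanticipated variation still slips through:

```python
# Hypothetical sketch of patch-a-failure development: every discovered
# exploit gets its own special-case rule appended to the list.
PATCH_RULES = [
    # (description of the known failure, predicate on the sign text, corrected label)
    ("zero substituted for O", lambda text: text == "ST0P", "stop"),
    ("sticker covering the T", lambda text: text == "S OP", "stop"),
    # ...each new issue found in the field adds another entry here.
]

def classify_with_patches(text: str) -> str:
    if text == "STOP":
        return "stop"
    for _description, matches, label in PATCH_RULES:
        if matches(text):
            return label   # a bolted-on fix for one known failure
    return "unknown"       # the next unanticipated variation lands here
```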

Yes, obvious issues should be dealt with. But technology developers should never assume that they can create enough rules to make the system error-proof (no system will ever truly be error-proof).

You can never create enough rules to make a system error-proof

Interestingly, humans operate in a completely different way, which system designers should take note of. Humans haven’t evolved more complex rules to handle problem/edge cases. Instead, humans have developed the capacity to apply more deliberate thinking (what Kahneman dubbed System 2) to a problem that can help us overcome the gaps in our fast-twitch, error-prone, rule-based thinking (labeled System 1) — hence the title of Kahneman’s book ‘Thinking, Fast and Slow’.

This is extremely helpful for handling the ambiguous, novel problems that people encounter all the time. We can again look at autonomous vehicles to see the difference. Whereas autonomous vehicles are programmed to stop when they get confused (for safety; that’s a good thing), humans are capable of making a judgment about the world based on their goals and the decision-making context, and can continue driving.

This gets us into trouble some of the time, but frequently gets us out of trouble too. We just rarely hear of situations where humans prevent accidents, since the news doesn’t often report near misses or close calls. That’s why we think of humans as error-prone (coincidentally, this has been labeled the ‘availability heuristic’ and fits neatly among the errors and biases above).

People who build technology into systems need to understand the benefits that humans bring to those systems. Some part of the system needs to be able to use context and the system’s goals to make decisions when the rules stop applying. To my knowledge, given the technologies available today, only humans can fulfill this role.
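One way to read that principle is as a simple hand-off: the automation decides when its rules clearly apply, and everything else is routed to a person who can weigh context and goals. The sketch below is hypothetical (the function names, features, and thresholds are invented), but it shows the teaming structure in miniature.

```python
from typing import Optional

def automated_decision(features: dict) -> Optional[str]:
    """Return an action when the rules clearly apply; otherwise None."""
    if features.get("sign") == "stop" and features.get("confidence", 0.0) > 0.9:
        return "brake"
    return None  # the rules do not cover this situation

def ask_human(features: dict) -> str:
    """Stand-in for handing the decision to a person with full context."""
    print(f"Operator judgment needed for: {features}")
    return input("Action? ")

def decide(features: dict) -> str:
    decision = automated_decision(features)
    if decision is not None:
        return decision          # the technology handles the routine case
    return ask_human(features)   # human judgment covers the gap
```

The point is not the plumbing; it is that the hand-off path has to be designed in from the start rather than bolted on after the rules run out.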

We can never completely rid a system of errors. Something new, unexpected, or outside the boundaries of operating conditions will always occur. What we can do is figure out ways to minimize their impact. The design challenge is to define how humans and the technologies we provide team (not simply interact) to accomplish the overall system goal. When we treat technology as infallible or the people in the system as weak, we set the entire system up for failure. We must understand and design for the strengths and weaknesses of every team member to put the system in the best place to succeed.


