The Human Factor and the Highway Cloverleaf
Truly man is a marvelously vain, diverse, and undulating object. It is hard to found any constant and uniform judgement on him.
—Michel de Montaigne
One may understand the cosmos, but never the ego; the self is more distant than any star.
—G.K. Chesterton
Over the years, I’ve found myself fascinated with highway cloverleafs. For those unfamiliar with my terminology, “highway cloverleafs” are cloverleaf interchanges (the four-leaf clover, or two-leaf clover, of on-ramps and off-ramps on major highways) that allow the more seamless, unregulated coming and going of traffic from one highway to another. Now, mind you, in the midst of rush-hour traffic and distracted drivers, highway cloverleafs can be a bit maddening (and potentially dangerous). With incessant speeding up and slowing down, slipping in or sliding out of the preferred lane, it is a wonder there aren’t endless reports of sideswiping and fender-bending.
And yet, there aren’t.
Anecdotal though it is, in over three decades of driving, I have never witnessed (or been involved in) a cloverleaf accident. Oh, I know they happen, but not nearly as often as I would expect. But why would that be?
Because of the human factor.
The human factor is something that transcends driving laws, traffic cops, and metered lights. It involves well-timed activation of the turn signal, judicious slowing down or speeding up, knowing when to make eye contact, wave, or nod. The human factor involves intuition and instinct, experience and judgment, and it all comes to a temporal pinpoint as you ease your car off (or onto) the cloverleaf into the opening space in traffic. Somehow, without overzealous management and inflexible regimentation, it works.
We live in an age of artificial intelligence (AI) that promises, and has delivered, certain wonders to our modern world. The ways in which AI has streamlined complicated information or simplified complex work is extraordinary. But if you listen closely, there are times we are told that all humanity has to offer is inefficiency, fumbling, and error, which AI will magnanimously and patiently correct. In a recent interview, Matthew Crawford, author of Why We Drive, made this observation:
There was this one incident where a Google [driverless] car came up to an intersection and it was a four-way stop. And so it stopped and it waited for the other cars to come to a complete stop before it went through, but of course, that’s not what people do. And so the Google car just froze and got sort of paralyzed and melted down, I mean, software meltdown. And what the chief engineer who was in charge of this project said he had learned from it is that human beings need to be less idiotic, by which he meant, of course, they need to behave more like robots. And that’s an inference that comes very easily if you think that the mind is basically an inferior version of a computer, namely following the rules. That’s the picture of reason that they have here. That reason consists of following rules and we don’t do it very well. But what do you see at an intersection? Well, you see people make eye contact, maybe one person waves the other through in those ambiguous cases of right of way.
There’s almost a kind of body language of driving. Here’s a form of intelligence that is socially realized by people together. They’re cooperating. They’re working things out on the fly. It’s a little bit improvisational. It’s a little bit messy, but for the most part, it works just fine, but that kind of social intelligence is very hard to replicate with machine processes. The conclusion is, well, either humans need to become more like computers, which is not gonna happen or we need to clear the road of the humans to make the machines operate smoothly according to their own kind of method. That’s the basic problem. . . . Artificial intelligence and human intelligence are . . . so different in kind that they don’t play very well together.
In clinic, when I open the door to see my next patient, I need to “read the room.” Is my patient more quiet than usual? Did they bring a family member when they normally come alone? What is the mood of the family member? When I reviewed the chart beforehand, was their complaint about intractable pain, and yet they are relaxed and laughing at something on their phone when I open the door? In our conversation, did they pause unexpectedly, blanch at a question, or choke up almost imperceptibly? A veteran surgeon opening and exploring a traumatized abdomen has a similar experience. She reacts almost automatically—unconsciously (many surgeons I have worked with have described their hands, almost mysteriously, leading them)—to stop bleeding, identify injury, triage the order of repair. Likewise for a seasoned soldier (think of Colonel Dick Winters from Band of Brothers) who draws from instinct in addition to well-honed intuition when he encounters inevitable contingency on the battlefield. Peter Parker has his ineffable “Spider-sense.” Velma (from Scooby-Doo) has her quirky hunches.
This human factor—this “sixth sense”—defies mere data and knowledge. But because of its uncharacterizable looseness and our frustrating inability to pin it, like a moth, dissected and labeled, to a corkboard, the scientific community often dismisses it or shoos it into a corner as an embarrassing hole in its theory. Once, they said they couldn’t weigh in on it (it goes beyond the bounds of the scientific method); now, many simply deny its value or its existence. And yet Winston Churchill’s words about incontrovertible truth haunt them: “In the end, there it is.” Naturally, scoffers can point out that the human factor—this mysterious intangible—can be a source of great error. Of course it can. The bad forever wants to shoehorn in with the good. But we must remember that the human factor is often the fire behind grand epiphanies and unparalleled breakthroughs. I, for one, will forever choose Han Solo to lead the mission over C-3PO. Will AI correct our very human shortcomings? Don’t be so sure. After all, let’s not forget, who created AI? Error-prone, broken, fumbling humanity, that’s who.
The tantalizing ingredients that make up the stew of humanity (the human factor) include common sense and intuition, experience and judgment, alertness and adjustment. Even more, for greater complexity we sprinkle in our hopes and regrets, our memories and plans. These components are what make us so damned interesting and, ultimately, unfathomable. Whatever film we watch or novel we read, we are intrigued by the events that transpire, but we are transfixed by how the ever-wily or confoundingly oblivious characters react . . . What do they do? And why? Would I do the same?
Making our way onto and off of the cloverleaf is not just a matter of data and calculations. It isn’t solely the knowledge of how to drive and the physics of the car. It isn’t simply theory; it is brilliant reality. The human factor, for all its shortcomings and all of its elusiveness, makes it work—with a slowdown or a speedup, a nod and a wave. And hopefully sometimes—just sometimes—with a smile as well.