As software programs become more like the human mind, they acquire the human mind's fallibility. Like us, they must learn from mistakes, which they can do only if we teach them. This interdependence is changing our relationship with software and prompting a conversation with these systems that will develop its own rules and etiquette. The way we collectively decide to communicate with our software will define the tone of our daily lives as synthetic intelligence permeates our world.
A common feature of the recent breakthroughs in deep learning is that they all come with a rate of error. The errors are benign in some domains (speech recognition, captioning a photo) but potentially catastrophic in others (self-driving cars, medicine). What is common about these errors, though, is that they invite us to ask why they happened. We expect an explanation from the underlying system, not so we can fix the problem like a programmer, but so we can reassure ourselves that the mistakes are somehow reasonable. If the mistakes are reasonable, they can be corrected with additional inputs.
This mindset is rare when people approach software today. When a desktop application crashes, we do not ask why. We assume that somewhere in a dull, poorly lit, meager little cubbyhole, some careless coder did something wrong. We do not care what it was, because it was arcane and, teleologically speaking, meaningless. The most we feel is a surge of anger at the company that let one of its errant employees ruin our afternoon. The flaw is in our fellow human.
In contrast, users of speech recognition on desktops are accustomed to inspecting errors via the process of training. You say, "correct 'the cart was right,'" and a list of other possible matches is displayed. The user can see that the system was considering other possibilities. Often, the user sees the right phrase in the list and simply says "choose three" to change the phrase to "Descartes was right." The user has both gained confidence in the system's intelligence and offered input to make the system better.
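A minimal sketch of that correction loop, assuming a toy Recognizer whose n_best and reinforce methods are invented for illustration (this is not any real speech API), might look like this:

```python
# Illustrative sketch only: the recognizer keeps an n-best list of hypotheses,
# shows them to the user, and treats the chosen alternative as training feedback.
from collections import defaultdict


class Recognizer:
    def __init__(self):
        # Counts how often a phrase has been confirmed by the user.
        self.confirmations = defaultdict(int)

    def n_best(self, audio):
        # A real system would decode audio; here we return canned hypotheses.
        return ["the cart was right", "the card was right", "Descartes was right"]

    def reinforce(self, phrase):
        # User feedback: make the confirmed phrase more likely next time.
        self.confirmations[phrase] += 1


recognizer = Recognizer()
candidates = recognizer.n_best(audio=None)
print("I thought you said:", candidates[0])
for i, phrase in enumerate(candidates, start=1):
    print(f"  {i}. {phrase}")

chosen = 3                                  # the user says "choose three"
recognizer.reinforce(candidates[chosen - 1])
print("Corrected to:", candidates[chosen - 1])
```

The important part is not the bookkeeping but the shape of the exchange: the system exposes what it was considering, and the user's correction becomes input rather than a dead end.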
This mode of interaction is best understood as a conversation. The system says, "I thought you said . . ." and the user replies, "No, actually . . ." This is the aspect missing from many of the new systems making their way into our lives today. How many of us still see items we purchased months ago advertised back to us from our browsers? For me, these are bird-watching binoculars I bought for my wife. How often does our newly sentient economy expect us to buy binoculars? Imagine the accuracy boost that would occur, in both deep and shallow channels, if ads had "like" and "dislike" buttons. By opening a conversation with the recommendation engine, humans would graduate from victim to participant in advertising, while the deep-learning backend would learn more deeply. (Google appears to be experimenting with a similar feedback mechanism, but it is not yet pervasive.)
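To make the suggestion concrete, here is a small illustrative sketch of how such feedback might flow back into a recommendation score. The categories, the learning rate, and the feedback function are all invented for illustration; no real ad platform's interface is implied.

```python
# Hypothetical like/dislike feedback: each press nudges a per-category score,
# so the engine stops re-advertising things the user has already rejected.
scores = {"binoculars": 0.9, "hiking boots": 0.4, "bird guides": 0.6}
LEARNING_RATE = 0.3


def feedback(category, liked):
    """Move the category's score toward 1.0 on a like, toward 0.0 on a dislike."""
    target = 1.0 if liked else 0.0
    scores[category] += LEARNING_RATE * (target - scores[category])


feedback("binoculars", liked=False)   # "I already bought these."
feedback("bird guides", liked=True)

# Ads are now ranked by the updated scores.
for category, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{category}: {score:.2f}")
```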
The more appealing vessel for such functionality is a personal "agent," whose interactions and knowledge we control. But the first step, simply opening up a conversation between user and system, is what we need right now to make the experience better. It is the model we should adopt as the sophistication of software grows.
Our current habit is to design only the surface of an experience, striving to make it as simple as possible, which can shut the user out. I am reminded of an experience with a smartwatch that presented a "service not available" message when the watch could not communicate with the phone. It turned out that the web service was not actually down, but since the device was neither working nor giving me any useful feedback, I was left to make serendipitous discoveries about how close the devices needed to be, whether to switch from Bluetooth to cellular, and so on, until the deep learning in my brain modified my behavior to accommodate the watch's limitations. How much easier this would have been if my watch had simply told me what was wrong, like a mature companion, rather than making me guess.
"Deep design" encompasses the full product, from its surface down into its core capabilities. Rather than hiding errors, we should design learning systems that reveal details and ask the user for guidance. "I thought you liked scotch," the personal assistant of tomorrow might say. "Yes, but not for breakfast!" we will answer.
Of course, training a neural network is not the only form deep design can take. A recent project at frog considered how to show users the possible futures of a complex dataset. Rather than show the user a best guess or a set of likely futures, we decided that the more effective approachâboth for visualization and for calculationâwas to show all possibilities as probability distributions. This greatly simplified the visual elements that needed to be displayed, while also increasing the information conveyed. We invited the user to view all of the bad guesses alongside the good ones. We were revealing the errors, too, and inviting the user to interpret them, enabling many new interactions, such as checking that a scenario was possible and refining the probabilities via human input.
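As a rough illustration of the approach, and assuming a simple random-walk model that stands in for the real project's dataset, one can simulate many possible futures and present each time step as a distribution of outcomes rather than a single prediction:

```python
# Illustrative sketch only: the model, parameters, and percentile bands are
# stand-ins, not frog's actual project or data.
import random

N_SCENARIOS = 1000
N_STEPS = 12
START_VALUE = 100.0

scenarios = []
for _ in range(N_SCENARIOS):
    value = START_VALUE
    path = []
    for _ in range(N_STEPS):
        value *= 1.0 + random.gauss(0.01, 0.05)  # uncertain monthly change
        path.append(value)
    scenarios.append(path)

# For each step, report the spread of outcomes rather than one prediction.
for step in range(N_STEPS):
    outcomes = sorted(path[step] for path in scenarios)
    low = outcomes[int(0.05 * N_SCENARIOS)]    # 5th percentile
    median = outcomes[N_SCENARIOS // 2]
    high = outcomes[int(0.95 * N_SCENARIOS)]   # 95th percentile
    print(f"month {step + 1:2d}: {low:7.1f}  {median:7.1f}  {high:7.1f}")
```

Showing the low and high percentiles alongside the median puts the bad guesses next to the good ones, which is precisely the point: the user sees the system's uncertainty and can push back on it.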
As research into the human brain continues to reveal the fallibility of our own thinking, leading in some circles to a belief that knowledge is only a momentary viewpoint within "a state space of possible world views" and even to suggestions that human choice should join the four humors as an antiquated notion, we may find ourselves much more tolerant of fallible machines. In the 1920s, tolerance for automobile fatalities was nearly unthinkable; 60% of fatalities were children, often playing outside their homes. Drivers in all kinds of accidents were charged with manslaughter and paraded through the streets in "safety parades." We will no doubt face a similarly strident response to self-driving cars in the near future, but just as traffic lights, crosswalks, and the notion of the "accident" helped society balance risk and reward in the automobile era, so will we balance risk and reward as we integrate machine intelligence into all aspects of our lives.
We will succeed by understanding that our machinesâ errors make sense in some way, which we can do only by entering into a conversation with them. Via deep learning, some systems have already begun the conversation. They are the first generation of a new order of things being born everywhere from the walls of our houses to the clothes on our bodies. For our own sake, letâs raise them well.
Sheldon is Senior Solution Architect at frog in Austin. Having studied math and English at MIT and Harvard, he enjoys cross-disciplinary creative projects. He builds award-winning software, creates software architectures for businesses, writes futurist fiction, and writes about technology.