Why We Need to Tame Our Algorithms Like Dogs

Long ago we tamed wolves and got dogs. Now we are living with another non-human species that is far more dangerous and powerful than canines ever were: algorithms.
Xiulung Choy / Smart Design

There is a theory among evolutionary anthropologists that dogs evolved from beasts to pets because the canines that continued to survive were those that gained social intelligence. The wolves that thousands of years ago hung around the edges of human settlements began to interpret human intentions and moods. In other words, their brains began to be wired to tune into people's brains. Over time, this meant their behavior and even their appearance changed to become less fierce, more attuned to human emotions, and more symbiotic. In other words, they became dogs.

I mention the evolution of dogs because we’re at the point now where we’re living with another non-human species that is far more dangerous and powerful than canines ever were: algorithms. The UK government just announced £220 million for "big data and algorithm" research. What you see on Facebook is determined by algorithms. Amazon’s (and Spotify’s and Netflix’s et al.) recommendation engines are all algorithms. An algorithm now controls the temperature in my house through my Nest thermostat. If you interact with the digital world at all—and who doesn’t?—you are coming into contact with an algorithm. We need to ensure that these coded systems understand our needs and intentions in order to create products that feel human and humane.

#### Dan Saffer

##### About

A Creative Director at [Smart Design](http://smartdesignworldwide.com/), Dan leads teams in creating new interaction paradigms across a wide range of products, spanning both digital and physical. He has written four books on design including his latest, [Microinteractions](http://smartdesignworldwide.com/news/microinteractions-designing-with-details-by-dan-saffer/). You can follow him on Twitter at [@odannyboy](https://twitter.com/odannyboy).

#### The Perfunctory Brain

Algorithms, as described by Christopher Steiner, author of *Automate This: How Algorithms Came to Rule Our World*, are “giant decision trees composed of one binary decision after another...a set of instructions to be carried out perfunctorily to achieve an ideal result. Information goes into a given algorithm, answers come out.”
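Steiner’s description can be sketched in miniature. The rules below are purely illustrative—a hypothetical thermostat, not Nest’s actual logic—but they show the shape he describes: binary decisions chained together, information in, an answer out.

```python
def thermostat_decision(occupied: bool, temp_f: float, target_f: float) -> str:
    """A toy 'decision tree': one binary decision after another.
    Sensor readings go in, an action comes out.
    Thresholds here are invented for illustration, not any real product's logic."""
    if not occupied:
        return "eco mode"      # nobody home: save energy
    if temp_f < target_f - 1.0:
        return "heat on"       # below the comfort band
    if temp_f > target_f + 1.0:
        return "cooling on"    # above the comfort band
    return "hold"              # within the comfort band

print(thermostat_decision(occupied=True, temp_f=65.0, target_f=70.0))  # heat on
```

Every branch is a yes/no question, which is what makes such systems so fast—and, as discussed below, so blind to anything outside their inputs.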

>Here’s the thing about the domestication and evolution of dogs: we also evolved to live with them.

Now certainly, algorithms aren’t alive in a traditional sense, and they are also man-made. But like those early dogs, we don’t always understand them, nor are they usually coded to respond in human-centric ways. Algorithms that interact with humans (and arguably any human systems like the stock market) should evolve to be not just useful, but also understandable.

But here’s the thing about the domestication and evolution of dogs: we also evolved to live with them. They changed us, as well. They became part of the human ecosystem. There’s evidence that dogs and humans co-evolved brain processes and chemicals such as serotonin. Given enough time, algorithms might have such an impact on us as well, changing how we think. And while (unlike dogs) algorithms might not change us at a genetic level, they are changing our behavior.

#### What Algorithms Do Best

There are five tasks that algorithms seem especially capable of performing: rapidly executing repetitive tasks, logically evaluating between multiple choices, predicting the future, evaluating the past, and finding the overlooked. All of these are things that humans are mostly bad at.


If your job is competing against an algorithm to, for example, quickly trade stocks, you’ll probably lose. Algorithms work at an inhuman time scale. Their slowest decisions are so far ahead of ours as to be practically instantaneous. They operate in milliseconds, hummingbird time. Much has been written about the fortunes that have been made by shaving off fractions of a second in trading. For instance, New York and Chicago exchanges will shortly be connected at close to the speed of light: a 15-millisecond round trip. That’s the kind of speed only an algorithm can use effectively.

>Algorithms operate in milliseconds, hummingbird time.

This kind of rapid processing allows algorithms to decide between different options. These decisions are often predictions of the future based on logical analysis of data---i.e. this set of conditions typically leads to this outcome. These predictions aren’t always correct, of course. Predictions are only as good as the incoming data and the programmed courses of action that respond to it. But because an algorithm can take in so much more data and so much more rapidly than a person, it can make predictions faster---and act on them.

Algorithms are also good at evaluating past events and past data sets, in order to both improve predictions about the future and suggest possible courses of action. Now that we’re generating so much data--both big data from large systems and small data from personal, quantified self activities--we need to rely on algorithms to help make sense of it all, to tell us what the data might mean and why it’s valuable.

While all of these are algorithms’ strengths, they can also be their weaknesses when humans come into contact with them.

#### Awkward Algorithm Interactions

Algorithms can create new, disorienting experiences, which I’ve named here. The first of these is when the algorithm simply works. It can be like magic: you get just the right recommendation, the fastest route home from work. You feel as though there is a powerful spirit working on your behalf: __The Genie Reaction__.

The flip side is the __FAIL__: frustration at the stupidity of the algorithm, frequently caused by context blindness. There is something about the environment or subject matter that the algorithm’s data doesn’t capture or that it lacks the nuance to parse. The navigation system that steered you into a traffic snarl had no idea there was an accident, for example. TiVo famously had this problem in 2002 when it erroneously kept guessing straight viewers were gay.

>Playing “Beat the Algorithm” can be a fascinating new pastime, albeit one that can lead to pangs of regret.

But even more than good or poor guesses, there are strange moments that arise when living with algorithms. While attacking the Death Star near the end of Star Wars: A New Hope, Luke switches off his targeting computer and instead uses The Force. We too can __Trust Our Feelings__ and deliberately decide not to use an algorithm to assist us. This can be an uncomfortable, yet sometimes exhilarating, feeling as you ignore a recommendation or driving directions. Playing “Beat the Algorithm” can be a fascinating new pastime, albeit one that can lead to pangs of regret. What if Luke had missed the target? What if that iTunes Genius recommendation is awesome? What if that other route home really is faster?


Algorithms can push humans into uncomfortable, inhuman situations. That turn that looks so reasonable in the programmed map is actually across three lanes of roaring traffic. It’s doable—barely. It’s __Barely Possible__. And something a human would be unlikely to choose. Also, few people are going to choose to be the guinea pig for an algorithm’s experiment, yet that happens occasionally, or seems to, as algorithms test new strategies for doing an activity faster.

>What an algorithm values may not be at all what a human values.

Likewise, there can be a __Rift of Values__: what an algorithm values may not be at all what a human values. Most algorithms rank efficiency and speed over meaning or ease of use. For example, if a navigation algorithm thinks it can shave a minute off your arrival time, it’ll usually have you veer off onto many side streets instead of staying on a main road, whether or not you’re familiar with the area and irrespective of the difficulty of multiple turns versus driving straight. Sometimes the extra minute isn’t worth it, yet conveying that feeling to an algorithm is impossible.

#### Aliens in Our Midst

As Ian Bogost wrote in his book *Alien Phenomenology*, we don’t have to go to other planets to find aliens. They live among us as algorithms. Because algorithms aren’t human, they don’t naturally know, care about, or respond to human intentions and emotions unless, like ancient wolves, they evolve to meet human needs.


But unlike wolves, we don’t have thousands of years to wait for algorithms to evolve. The consequences of their running amok are too great. The Flash Crash of 2010, in which algorithms caused a mini stock market crash by tanking the Dow Jones about 1,000 points in a few minutes, is just one example. Imagine a similar event happening with the power grid. Or self-driving cars.

#### Hurrying Evolution Along

One way of speeding up this evolution is providing a means of telling algorithms what we need and value. We need to insert an awareness of human feelings and human limitations into the code. This could be through some Asimovian *I, Robot*-style rules, or simply having the means to tell the algorithm what the environment, our intent, and our mood are---or have the algorithm detect them via behavior (past and present). For instance: If I’ve never driven this route before, keep me on major thoroughfares; if I seem agitated, don’t overwhelm me with many options. We’ll also need a way to let the algorithm know when it’s guessed wrong, that this isn’t the kind of music I like or the kind of experience I want to have.
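The route-choosing rules above can be sketched in a few lines. This is a minimal illustration, assuming hypothetical `routes` and `driver` data structures—no real navigation system works exactly this way—but it shows how human context can outrank raw speed in the decision:

```python
def choose_route(routes, driver):
    """Pick a route using human context, not just travel time.
    Mirrors the rules in the text: an unfamiliar area means prefer
    major roads; an agitated driver means prefer simplicity (fewest
    turns) over a minute saved. All data shapes are hypothetical."""
    candidates = routes
    if not driver["knows_area"]:
        major = [r for r in candidates if r["major_roads_only"]]
        if major:
            candidates = major  # unfamiliar territory: stay on thoroughfares
    if driver["agitated"]:
        # simplest route first, travel time only as a tiebreaker
        return min(candidates, key=lambda r: (r["turns"], r["minutes"]))
    return min(candidates, key=lambda r: r["minutes"])  # default: fastest

routes = [
    {"name": "side streets", "minutes": 22, "turns": 11, "major_roads_only": False},
    {"name": "main road",    "minutes": 23, "turns": 3,  "major_roads_only": True},
]
print(choose_route(routes, {"knows_area": False, "agitated": False})["name"])  # main road
```

The point is not the specific rules but where they live: the human’s situation is an input to the decision, not something the algorithm is free to ignore.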

Algorithms also have to adjust their feedback to deal with our human cognitive capacity. We can’t take in as much input or match the speed of these coded systems. I don’t need to know all data points, just the meaningful ones. Telling me about an accident 20 miles away that’s not on my route isn’t helpful, even though it’s part of the algorithm’s calculations and might be affecting the speed of traffic.
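Filtering feedback down to human scale can be sketched the same way. Again, the data shapes here are invented for illustration: the idea is simply that the algorithm keeps its full calculation internally but surfaces only the data points that matter to this person, on this route, right now.

```python
def meaningful_alerts(alerts, route_roads, max_items=3):
    """Reduce an algorithm's full alert stream to what a human needs:
    only incidents on roads along the current route, worst first,
    capped at a handful of items. Hypothetical data, for illustration."""
    on_route = [a for a in alerts if a["road"] in route_roads]
    on_route.sort(key=lambda a: a["delay_min"], reverse=True)  # worst delays first
    return on_route[:max_items]

alerts = [
    {"road": "I-95",    "delay_min": 25},  # 20 miles away, not on my route
    {"road": "Main St", "delay_min": 8},
    {"road": "Oak Ave", "delay_min": 2},
]
print(meaningful_alerts(alerts, route_roads={"Main St", "Oak Ave"}))
```

The 25-minute delay on I-95 stays inside the calculation but never reaches the driver—it’s part of the algorithm’s world, not the human’s.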

These coded aliens, these ghosts in the machines, are becoming incomprehensible even to their creators. With algorithms starting to take on oversight and control of our critical systems, we need to ensure that, as with dogs, we become comprehensible to them. If so, perhaps in the future we’ll think of them as Man’s Best Friend.