You've said in the past that IBM's Jeopardy-playing computer, Watson, isn't deserving of the term artificial intelligence. Why?

Well, artificial intelligence is a slippery term. It could refer to just getting machines to do things that seem intelligent on the surface, such as playing chess well or translating from one language to another on a superficial level: things that are impressive if you don't look at the details. In that sense, we've already created what some people call artificial intelligence. But if you mean a machine that has real intelligence, one that is actually thinking, that's inaccurate. Watson is basically a text-search algorithm connected to a database, just like Google search. It doesn't understand what it's reading. In fact, read is the wrong word. It's not reading anything because it's not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there's no intelligence there. It's clever, it's impressive, but it's absolutely vacuous.

Do you think we'll start seeing diminishing returns from a Watson-like approach to AI?

I can't really predict that. But what I can say is that I've monitored Google Translate, which uses a similar approach, for many years. Google Translate is developing and it's making progress because the developers are inventing new, clever ways of milking the quickness of computers and the vastness of its database. But it's not making progress at all in the sense of understanding your text, and you can still see it falling flat on its face a lot of the time. And I know it'll never produce polished [translated] text, because real translating involves understanding what is being said and then reproducing the ideas that you just heard in a different language. Translation has to do with ideas, it doesn't have to do with words, and Google Translate is about words triggering other words.

So why are AI researchers so focused on building programs and computers that don't do anything like thinking?

They're not studying the mind and they're not trying to find out the principles of intelligence, so research may not be the right word for what drives people in the field that today is called artificial intelligence. They're doing product development.

I might say, though, that 30 to 40 years ago, when the field was really young, artificial intelligence wasn't about making money, and the people in the field weren't driven by developing products. It was about understanding how the mind works and trying to get computers to do things that the mind can do. The mind is very fluid and flexible, so how do you get a rigid machine to do very fluid things? That's a beautiful paradox and very exciting, philosophically.

In the '70s and then again in the '80s, a lot of AI researchers were pushed away from the type of work you advocate. What happened?

[It was] the result of a lot of hype that didn't materialize. In the first case, a lot of AI researchers were basically saying that intelligent machines were just around the corner. They were predicting in the '60s that there would be a world-champion chess-playing computer within a decade. That wasn't even close, and so I think in the '70s a lot of that hype was looked upon skeptically by government funding agencies, and the money dried up for a while.

Something else happened in the early '80s. There was something called the fifth generation of computers, which used a [programming] language called Prolog, based on very rigid, deductive logic, and the claim was that all of human knowledge was going to be encoded in databases with Prolog. Lots of books were written about this; lots of people wrote grants; the grants were funded, and then . . . nothing happened.

Prolog was one of the silliest approaches I've ever heard of, and it fell to the ground in shambles. And again people in the government said, "You haven't produced anything. We're not going to give you any money." So certain companies like Apple started to invest instead of the government, and when computers started getting much, much bigger, with better memory and faster processing, people found that you could brute-force problems in a way you couldn't before. That sort of revived an excitement about developing products that could do incredibly impressive things, even though behind the scenes the computers weren't doing anything resembling thinking.

So what will get us closer to creating real AI?

I think you have to move toward much more fundamental science, and dive into the nature of what thinking is. What is understanding? How do we make links between things that are, on the surface, fantastically different from one another? In that mystery is the miracle of human thought.

The people in large companies like IBM or Google, they're not asking themselves, what is thinking? They're thinking about how we can get these computers to sidestep or bypass the whole question of meaning and yet still get impressive behavior.

How do we even begin to answer those giant questions?

[Here's one example:] My research group has focused on building programs that look at letter strings and are able to perceive them at abstract levels. For example, I could ask you the question: if the letter string ABC changed to ABD, how could the string PPQQRR have the same thing happen to it? Well, you could say that PPQQRR would also change to ABD. Now that's the dumbest answer, but it's defensible. You could be less rigid, but still somewhat rigid, and say it changes to PPQQRD, where the last letter changes to a D. But even more sophisticated is to notice that PPQQRR is just like the first three letters of the alphabet, ABC, but doubled, and so you say PPQQSS, where the last two letters change to their successors.

Now this isn't an example of Einsteinian thinking, but it's an example of thought: of stripping away everything and looking at the essence of the situation. This is what we try to get our programs to do, not only to make abstract perceptions but to favor them.
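To make the three readings concrete, here's a minimal Python sketch (my illustration for this article, not the research group's actual program; all function names are invented) that mechanically produces each of the answers described above.

```python
# Toy sketch of the puzzle "ABC -> ABD; what happens to PPQQRR?"
# Each function encodes one of the three readings at a different
# level of abstraction. Uppercase letters only.

def successor(ch: str) -> str:
    """Return the next letter of the alphabet (Z wraps to A)."""
    return chr((ord(ch) - ord('A') + 1) % 26 + ord('A'))

def literal_answer(target: str) -> str:
    # Dumbest reading: "ABC changed to ABD," so everything becomes ABD.
    return "ABD"

def rigid_answer(target: str) -> str:
    # Rigid reading: the last letter was replaced with a literal D.
    return target[:-1] + "D"

def abstract_answer(target: str) -> str:
    # Abstract reading: perceive runs of repeated letters as groups,
    # then take the successor of every letter in the rightmost group.
    groups = []
    for ch in target:
        if groups and groups[-1][0] == ch:
            groups[-1] += ch          # extend the current group
        else:
            groups.append(ch)         # start a new group
    groups[-1] = successor(groups[-1][0]) * len(groups[-1])
    return "".join(groups)

for answer in (literal_answer, rigid_answer, abstract_answer):
    print(answer.__name__, "->", answer("PPQQRR"))
# literal_answer -> ABD
# rigid_answer -> PPQQRD
# abstract_answer -> PPQQSS
```

The point of the sketch is that the most satisfying answer depends on perceiving structure, the doubled letter groups, that simply isn't visible at the level of individual characters.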

Do you think interest in fundamental AI science is being rekindled?

One of my recent graduate students, Abhijit Mahabal, came to my research group because he's very interested in trying to understand how people perceive patterns. And when he finished his Ph.D., Google snatched him up. Now he's working for Google, and while I know what still drives him is trying to understand the mind, he's in a corporation, and in a corporation what counts are profits, products, and the bottom line. But at the same time Google wants to encourage bright people, and Abhijit is extremely bright, so they give him some leeway to explore his own ideas.

In an environment where there's lots of money floating around, there's always some extra that can be used to indulge in luxuries, you know? And there will always be people interested in the mysteries of thinking.

William Herkewitz
Science & Technology Reporter
William Herkewitz is a science and technology journalist based in Berlin, Germany. He writes about theoretical physics, AI, astronomy, board games, brewing and everything in between.