Music site chatbot wins AI Loebner contest

Image caption: Steve Worswick has spent nine years refining the code behind the Mitsuku chatbot

A chatbot called Mitsuku has won an annual contest to see if computers can convincingly imitate humans.

The chatbot took top prize in the Loebner contest that puts the artificially intelligent programs through their paces.

The contest involves the programs trying to convince judges they are human by answering questions put to them via an instant message system.

Briton Steve Worswick, who wrote Mitsuku, won $4,000 (£2,500) for his creation.

World knowledge

Created by US businessman Hugh Loebner, the annual competition is an attempt to stage a real-world test of a question posed by mathematician Alan Turing in the 1950s.

Turing suggested that if the responses a computer gave to a series of questions were as convincing as those from a human it could reasonably be said to be thinking.

Mr Loebner has offered a prize of $100,000 (£62,000) for the computer program that meets Turing's standard for artificial intelligence, but in the 22 years the competition has been running, the cash has gone unclaimed.

The four finalists in the 2013 contest went through a series of rounds which saw them chat via text with the competition judges. After four rounds of questioning the Mitsuku chatbot was declared to be the most convincing.

Mr Worswick said he started programming chatbots as a way to engage visitors to a website that showcased his dance music.

The first iteration of his home-grown chatbot was a teddy bear, he told the BBC.

"Eventually I found that visitors were wanting to talk to the teddy bear rather than listen to the music," he said.

His work on chatbots got a bigger boost in 2004 when he was commissioned by a games company to write one called Mitsuku. This also lived on a website, and the many conversations it has had with visitors have helped Mr Worswick refine its conversational abilities.

That helped during the final, he said, because some of the questions the chatbots get asked are designed to catch them out.

Image caption: The Turing Test was dreamt up as a way to assess the intelligence of computer programs

Tricky questions include: "How many plums can I fit in a shoe?" and "Which is bigger, a big lion or a small mountain?"

Answering those involves writing a program that does much more than just grab canned responses from a long list of possible answers, said Mr Worswick.

"The difficulty is trying to teach these things about the world because they have no sensory input," he said. Mitsuku has been built upon the Pandorabots open-source chatbot code and tools.
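The distinction Mr Worswick draws can be illustrated in miniature. The sketch below (a toy illustration in Python, not Mitsuku's actual code, which uses the AIML pattern language) pairs a table of canned responses with a hand-coded size table standing in for "world knowledge". The `SIZES` values and patterns are invented for this example; note how a question with adjectives, like "a big lion or a small mountain", would already defeat the naive lookup, which is exactly the kind of trick the contest judges rely on.

```python
# Toy chatbot sketch: canned responses plus a crude hand-coded
# "world knowledge" table. Illustrative only; values are invented.

SIZES = {"plum": 1, "shoe": 2, "lion": 3, "mountain": 5}  # rough relative sizes

PATTERNS = {
    "hello": "Hi there!",
    "how are you": "I'm doing well, thanks for asking.",
}

def reply(message):
    text = message.lower().strip("?!. ")
    if text in PATTERNS:                    # the "long list of canned responses" path
        return PATTERNS[text]
    if text.startswith("which is bigger"):  # hand-coded world knowledge
        words = [w for w in text.replace(",", " ").split() if w in SIZES]
        if len(words) == 2:
            bigger = max(words, key=lambda w: SIZES[w])
            return f"A {bigger} is bigger."
    return "I'm not sure about that."
```

A comparison the table covers works, but anything outside it, such as the plums-in-a-shoe question, falls straight through to the fallback, which is why encoding knowledge about the world by hand is the hard part.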

Although he has entered the Loebner contest before, 2013 was the first year he made it to the final. Winning, he said, was a big surprise.

"I was thinking I'd use this year as a learning experience to prepare for a win next year. I thought I'd probably come second or third," he said. "Winning is a dream come true."
