There’s been a bit of kerfuffle the last few days between “AI optimists” like Mark Zuckerberg and “pessimists” like Elon Musk.
Personally, I think AI pessimism should refer to the belief that AI is overhyped, not the fear that “Skynet is going to enslave/exterminate mankind in the foreseeable future.” Using the former definition, I’d call myself a slight AI pessimist. Despite the amazing research advances of the last few years (especially in deep learning), I think many people underestimate how much work is still needed to make AI systems useful or accessible in the wider world. Yes, I strongly believe that within the next decade, and for many years after that, AI will prove its value as a foundational technology. But for at least the next few years, AI systems will only be effective in very limited domains. They may be shockingly effective in those domains, like 19x19 Go or language translation, but I’m not worried about the need for a Butlerian Jihad anytime soon.
This mini-debate between Musk and Zuckerberg did inspire this thoughtful TechCrunch article by Ron Miller. The most interesting aspect is this quote:
Pascal Kaufmann, founder at Starmind, a startup that wants to help companies use collective human intelligence to find solutions to business problems, has been studying neuroscience for the past 15 years. He says the human brain and the computer operate differently and it’s a mistake to compare the two. “The analogy that the brain is like a computer is a dangerous one, and blocks the progress of AI,” he says.
Further, Kaufmann believes we won’t advance our understanding of human intelligence if we think of it in technological terms. “It is a misconception that [algorithms] works like a human brain. People fall in love with algorithms and think that you can describe the brain with algorithms and I think that’s wrong,” he said.
One of the first things that really struck me when I started studying machine learning more seriously over the last year was how fundamentally different computer and biological systems are. While neurobiology inspired the first perceptrons and neural networks decades ago, modern neural networks bear little resemblance to actual networks of neurons in animals. There may be interesting cross-pollination in terms of learning and thinking, but ultimately, computer systems are best understood by building and studying them as systems in their own right. And biologists do their best work by studying the model animals in question, not by overapplying lessons from neural networks to organisms.
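To make the gap concrete, here’s a minimal sketch (in Python, with illustrative names of my own choosing, not from any particular library) of what a “neuron” in a modern network actually computes: a weighted sum of its inputs followed by a simple nonlinearity. There are no spikes, no neurotransmitters, no timing dynamics, just arithmetic.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """One unit in a modern neural network: weighted sum + nonlinearity."""
    # Weighted sum of the inputs plus a bias term.
    pre_activation = np.dot(weights, inputs) + bias
    # ReLU nonlinearity: negative values are clamped to zero.
    return max(0.0, pre_activation)

# Example: three inputs with fixed weights and bias.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
b = 0.2

print(artificial_neuron(x, w, b))  # a single floating-point number
```

That’s essentially the whole building block; everything interesting comes from stacking millions of these and tuning the weights with gradient descent, which is nothing like how biological neurons adapt.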