“I’ll wager $6,435,” Watson said in his pleasant electronic voice.
“I won’t ask,” said host Alex Trebek, wondering with everybody else where that figure came from.
Another step forward for AI, as IBM’s supercomputer tramples the organic competition. Perhaps what’s most interesting about Watson is that it included a copy of Wikipedia — which came into existence only ten years ago but is now an incredible compendium of human knowledge, the equivalent of thousands of Libraries at Alexandria and far more accessible — and built almost entirely by voluntary labor. Yesterday the branching scenarios of chess, today the detail and nuance of Jeopardy, tomorrow… consciousness?
No, almost certainly not. First computers will have to beat us at Go, which is probably a better measure of raw computing power. Humans, not surprisingly given the challenges posed by prehistoric life, are tuned by evolution for general-purpose problem-solving and social cognition. Even a pocket calculator can beat 99.9% of us at certain tasks, but we’re still a few iterations of Moore’s Law from having the horsepower for an analogue of a human brain.
Perhaps a bigger problem is the programming effort. Outside of sci-fi, consciousness is not going to just spontaneously emerge from complexity; creating a human analogue personality in silicon will be an extremely challenging and precise effort to replicate aspects of human intelligence. As impressive as they are, assemblages of hardware and software like Deep Blue and Watson are barely comparable to severely autistic humans. What we think of as human consciousness arises not just from the interplay of our ten trillion synaptic connections and the ten thousand chemical triggers that tell us we’re hungry, horny, harried, or hung over, but also something much more difficult than chess or Jeopardy or Go: social modelling.
Modelling other humans is generally the most complex task our grey matter is asked to do (this probably explains why the brains of social animals seem to grow faster than those of nonsocial animals). Clumps of recently evolved brain matter called mirror neurons help mammals understand what other animals are thinking/emoting so we can predict their behavior and optimize our responses for success. This faculty has to be exquisitely tuned in modern humans because we must navigate exceedingly complex and subtle social variations on a daily basis.
Of course, at some point in the next couple decades, after a lot of hard work and some luck, we probably will see a human-personality-capable machine… and from there, they will proceed to exceed human capabilities in the social cognition realm while still enjoying the specialized advantages we’ve already built for them today (try to imagine the wittiest, most socially successful person you know, able to motivate like Tony Robbins and innovate like Steve Jobs, and then imagine he’s also the world’s greatest chess player and has a Wikipedic knowledge of… well, almost everything). And what happens after that, my friends, is beyond the veil of the Singularity…
Comments
23 responses to “Humankind’s Dominance In Jeopardy”
Slow down dude. Just because Watson won on long-prepared ground gives no grounds for assuming that computers will exceed human capabilities in social cognition.
How much of social cognition is non-verbal? How much of that non-verbal component is unique to an individual? How can one ever program a deterministic system to convincingly emulate a chaotic system?
Watson, just like other IBM publicity stunts (Deep Blue) was narrow-bounded. It might have passed a Turing Test within the strict bounds of Jeopardy, but that’s a long way from a general Turing Test.
Human consciousness is analog (I think). Computer pseudo-consciousness is digital. Until you can make a logical digital system “think” in non-logical & non-linear analog terms, our wetware has little to fear at present.
Eh, it’s Jeopardy, who cares?
It’s all “crossword puzzle knowledge”.
Did they have the weird categories that take some thinking?
Now if a computer can win at Match Game, then I’d be afraid.
Very afraid.
A LOT of the signals humans send out are in the form of smell. We know this because we have a language construct for it: “the smell of fear”. Now how about all the more subtle signals?
The human analog is going to have to have an excellent nose.
Cap’n Ned: It won’t happen soon, but it probably will happen someday. Computers can have eyes as well as ears — in fact, we can give them a vastly superior sensory apparatus for most of our senses. But I’m not sure why people assume there’s anything in a neuron that can’t be modelled, or that we can’t build a chaotic system.
M Simon: True, but otoh people who have no sense of smell (I’ve known one or two) seem to get by fine.
Social cognition is coming. I predict that soon (like within a decade) there will be really functional sex robots (yeah, coming out of Japan) that will intelligently interact with humans, interpreting body language and emotion. Also, some work is coming out of designing robots for human care for the elderly (Japan’s demographic cliff is pushing technology). Add sex as an economic incentive and all bets are off.
Add a decade of Moore’s Law and better batteries and, well, it will be interesting.
Coleridge wrote the definitive debunking of artificial intelligence long ago.
It’s in chapters V thru IX of Biographia Literaria.
Summarized in the motto: Matter has no inwards.
If you want an off-the-wall speculation, suppose consciousness has to do with quantum entanglement.
Then no computer analog will work.
The most successful computer programs happened long ago, Eliza and grep.
Since then it’s been decades of promise. In fact, artificial intelligence sets the record for longest running promise, I think; even longer than the war on cancer.
rhardin:
AI has already delivered much of what was supposed to be impossible, such as voice recognition, beating the greatest chess players, etc. I’ll argue the Singularity will be here when AI first authors, essentially alone, an entire piece of major popular entertainment.
But I think the question is really more one of definition: what is consciousness? Perhaps I’ll explore that one further.
Well, if nothing else, it’s certainly a fun demo of a powerful question-answering system. Certainly a powerful combination of the successful techniques from NLP/computational linguistics.
There’s nothing essential about chess or voice recognition; and nobody said it was impossible.
Chess just requires brute force for a few levels and a halfway decent position scoring system at the bottom. The more brute force and the better the halfway decent, the better it does. Ken Thompson and Joe Condon had a decent machine, Belle, in the 70s I think.
Even the Vic-20 played a decent game.
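That recipe — brute force for a few levels, a halfway decent score at the bottom — is just minimax. Here is a generic sketch; `legal_moves`, `apply_move`, and `score` are hypothetical stand-ins for a real engine’s move generator and evaluation function, not any actual engine’s interface:

```python
def minimax(state, depth, maximizing, legal_moves, apply_move, score):
    """Search every line of play to a fixed depth, then fall back on a
    'halfway decent' static score at the leaves. More depth (brute
    force) or a better score function both improve play."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return score(state)
    children = (minimax(apply_move(state, m), depth - 1, not maximizing,
                        legal_moves, apply_move, score) for m in moves)
    return max(children) if maximizing else min(children)
```

The same shape scaled up, with pruning and special-purpose hardware added, is the core of engines like Belle and Deep Blue.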
Voice recognition is just speech to text. You have to figure out what features to use.
The AI problem has always started with what to do with text, not in getting to text.
Actually, quite a few people said various VR applications were impossible before they were accomplished. Kurzweil’s book has some examples of this.
Every aspect of human intelligence is either a question of brute force or programming. We just have more brute force and better programming (for some tasks) than computers right now, so we’re better at an ever-shrinking number of tasks.
I was doing spoken word recognition on a PDP11/35 (slow machine) in 1980 without thinking much about it, so the “can’t be done” meme couldn’t have been very strong.
I could see it was going to take some more work but nothing essentially different, just as a statistical problem needing more signatures.
The text-to-text AI problem can be thought of like this: whatever you type in, the computer’s response could have been from an enormous set of if-then-else statements, and nothing more.
if(how are you)then(fine, and you?)
else if(hello) then (hi, what’s up?)
else …
and so on for everything you type.
That obviously isn’t intelligent, just exhaustively large.
But every other kind of program can be reduced to that. It’s just encoded a little differently.
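Encoded a little differently — for example, as a lookup table. A minimal Python sketch, using just the canned pairs from the example above:

```python
# A responder as nothing but an exhaustive lookup table -- the
# if-then-else chain above, just encoded a little differently.
CANNED = {
    "how are you": "fine, and you?",
    "hello": "hi, what's up?",
}

def respond(utterance):
    # Every input maps to a fixed output; no understanding anywhere.
    return CANNED.get(utterance.strip().lower(), "...")
```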
That obviously isn’t intelligence.
If you want to say that brains are the same way, you’d have to show that.
It’s already shown for computers.
If a sufficiently large if/then system cannot be distinguished from “true” intelligence in practical terms, then I’m not sure one really has a basis for saying it is not, in fact, intelligence, albeit in a different form.
Of course, you’d need yottabytes of code for that method to work for handling even the available possibilities in simple conversations, and of course our brains did not evolve to work in a way that inefficient. We’re designed by nature more for pattern recognition and function approximation, and humanlike AI will be built that way as well.
http://en.wikipedia.org/wiki/Artificial_neural_network
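For contrast with the lookup table, here is a minimal sketch of the pattern-recognition approach: a single artificial neuron trained by simple error correction (the classic perceptron rule — a toy illustration, not any particular library’s API):

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """One artificial neuron: weighted sum, threshold, and a simple
    error-correction update. It approximates a function from examples
    instead of storing every answer verbatim."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Learn logical AND from four examples.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
```

Trained on the four AND examples, the returned classifier reproduces the truth table — the “knowledge” lives in three numbers (two weights and a bias), not in an exhaustive table.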
(PDP11/34, not 35)
The point of expanding the computer into its if-then-else equivalent is that it makes the workings of a computer obvious; and it having been made obvious what a computer is, the question of intelligence doesn’t even come up. It’s the wrong kind of thing to have intelligence.
The move to say that a neural network is like a computer is not justified.
As I say, to highlight the difficulty ignored rather than to propose a solution, emulated neural networks do not have the same quantum entanglement.
The Coleridge I cited notices that the machinery is always laid out and then consciousness is added to finish the picture; when the machinery was originally supposed to explain consciousness.
The grammar of explanation has not changed in the intervening 200 years.
“and then consciousness is added to finish the picture”
Today’s form is “emergent properties.”
That seems a bit silly. So, if we explain the workings of neurons, people can’t have intelligence either? You might as well define intelligence as mysticism!
There’s another problem with this line of thought as well. Is a cat intelligent? It certainly can solve some problems, but not as many as a human. Is an insect intelligent? It can solve an even more limited range of problems. And so forth, down to bacteria, which basically execute if/then statements. At some point we draw a fairly arbitrary line.
In any case, we can build neural networks that function like biological ones, so whether you feel the comparison is justified is irrelevant.
You have to explain the workings of brains; including quantum entanglement, taking that as a stand-in for something that cannot be correctly emulated by anything other than itself.
As for animals, why yes they are intelligent.
That’s not even a question except to psych 101 people and below.
For slow humor, look to the rhinoceros, who has to be taught the same trick over again every day.
Vicki Hearne is the writer who’s done the best job of remarking on the scientific blindness involved in this, say in Adam’s Task, or a Harper’s article “What’s Wrong with Animal Rights.”
(I think that’s where the rhinoceros was, anyway.)
What’s wrong with Animal Rights is here (.pdf), a surprise since Harpers seems to take them down pretty fast.
rats, it’s truncated. sorry
If we can explain the workings of neurons and glia (and we pretty much can), then you’re left with “emergent properties” which you’ve already called a dodge.
The only real mysteries left about intelligence are those that arise from scale — we can’t yet create an artificial neural network as powerful as the human brain, but this will become increasingly likely to occur as Moore’s Law progresses.
It’s unlikely that quantum entanglement has anything to do with biological brains — there’s certainly little to no physical evidence for this. Most likely quantum systems decohere too quickly in biological systems for this theory to hold any water (see Tegmark).
Well, first, Moore’s law doesn’t help with the problems. Exponential remains exponential and undoable at any speed.
Second, on physical evidence, does your consciousness of stuff count, or not?
If things appear to you, are you just an observer, or are you a participant?
It certainly appears (to you?) you’re a participant.
Why that appearing-to? It should be unnecessary, on the mechanistic view. It would be very inefficient to put it in.
As to decohering, quantum mechanics is what stuff is made of. You’re saying it doesn’t matter based on some intuition?
This confidence has been the same for 200 years. I guess every generation of young males learns about it anew.
Males make progress by abstracting from stuff.
That same move has been made a million times in AI, though.
Well, Moore’s Law solves half the problem, the processing power half. The architecture and programming will still be very challenging.
You appear to yourself as a participant because you model reality in your brain. Autistic people tend to have difficulty modelling themselves and others, probably due to problems with their mirror neurons.
Again, see Tegmark on the quantum question. It’s very unlikely that a quantum effect can propagate far enough to have any macro-effect in a biological system, because decoherence happens at very tiny scales, much smaller than the scale at which neurons function.
1. Moore’s law doesn’t solve the processing power part, because an exponential problem can’t be done with speed increases. If you double the speed, you can go one unit further into the problem. (As in weather, where the scaling is such that doubling the computation gets you ten minutes more forecast; Moore’s law doesn’t get you very much.)
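A quick back-of-envelope version of that point: in a game tree with branching factor b, searching one ply deeper multiplies the work by b, so a hardware speedup of s buys only log_b(s) extra plies (b ≈ 35 is a commonly cited rough figure for chess):

```python
import math

def extra_plies(speedup, branching_factor):
    """How much deeper a brute-force search can go when the machine
    gets `speedup` times faster: b**(d + x) = speedup * b**d, so
    x = log base b of the speedup."""
    return math.log(speedup, branching_factor)
```

Doubling the hardware against chess-like branching buys only about a fifth of a ply; even a 1000x speedup buys fewer than two extra plies.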
2. Who do you appear to when “You appear to yourself to be a participant…” There’s a thought problem here, which is what I’ve been trying to point out from the beginning.
That’s the you that is added to the machinery at the end, that was what was to be explained by the machinery itself.
Matter has no inwards, as Coleridge (copying from Schelling) put it, as the essential problem.
It always comes up. Anticipating it isn’t that hard, if you look ahead a little.
Quantum is meant more as a reminder that nobody knows what the hell is going on. Classical sizes are fine for computers, but nothing suggests that that’s all that matters if you’re not a computer.
That might have something to do with the problem I’m pointing to, as well.
When considering the Deep-Blue precedent, please note that computers have not made humans obsolete in chess.