Uncommon Descent Serving The Intelligent Design Community

Google Brain founder on why our neurons are not like a computer neural network


Stanford professor Andrew Ng, former “Google Brain” leader, Coursera founder, and current chief scientist at Chinese web giant Baidu, told Backchannel:

[Caleb Garling] Often people conflate the wiring of our biological brains with that of a computer neural network. Can you explain why that’s not accurate?

[Andrew Ng] A single neuron in the brain is an incredibly complex machine that even today we don’t understand. A single “neuron” in a neural network is an incredibly simple mathematical function that captures a minuscule fraction of the complexity of a biological neuron. So to say neural networks mimic the brain, that is true at the level of loose inspiration, but really artificial neural networks are nothing like what the biological brain does.

Today machines can recognize, say, a dog jumping. But what if someone is holding a piece of meat above the dog? We recognize that that’s a slightly different concept, a dog trick. And the piece of meat isn’t just a piece of meat, it’s a treat—a different linguistic idea. Can we get computers to understand these concepts?

Deep learning algorithms are very good at one thing today: learning input and mapping it to an output. X to Y. Learning concepts is going to be hard.
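Ng’s contrast can be made concrete. The entire “neuron” of an artificial network is a weighted sum passed through a fixed nonlinearity; the sketch below is illustrative (the sigmoid choice and all weight values are assumptions for the example, not taken from any particular system):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One 'neuron' of an artificial neural network: a weighted sum of
    its inputs passed through a simple nonlinearity, nothing more."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes z into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# Example: two inputs, illustrative weights. This single function is the
# "X to Y" mapping unit Ng describes; a biological neuron is a living
# cell whose complexity this does not begin to capture.
output = artificial_neuron([1.0, 0.5], [0.2, -0.4], 0.1)
```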

It may well be impossible. Most concepts involve emotional responses, which stem from experiencing life as a self. Thoughts?

See also: Darwin’s “horrid doubt”: The mind

Follow UD News at Twitter!

Comments
PaV: “What can anyone say, except: ‘Hope springs eternal!’ We humans are much more than ‘mapping X to Y,’ no matter how ‘high level’ they may be.”

Some people are not as willing to label a problem as insurmountable. Two hundred years ago we were using oxen and wagons; perhaps there are still things yet to discover. Patience.

velikovskys
February 13, 2015 at 7:31 PM PDT
Velikovsky: “Seems nobody told Baidu that they were wasting their time, or that ‘their premises (religious/metaphysical all) are more important to them than conclusions that match up with reality.’”

What can anyone say, except: “Hope springs eternal!” We humans are much more than “mapping X to Y,” no matter how “high level” they may be.

PaV
February 13, 2015 at 6:03 PM PDT
Thank you, Axel, for your kind words.

Barry Arrington
February 12, 2015 at 1:23 PM PDT
Although I agree with Andrew Ng that current AIs are not anything like the brain, I take exception to this:
[Andrew Ng] A single neuron in the brain is an incredibly complex machine that even today we don’t understand. A single “neuron” in a neural network is an incredibly simple mathematical function that captures a minuscule fraction of the complexity of a biological neuron. So to say neural networks mimic the brain, that is true at the level of loose inspiration, but really artificial neural networks are nothing like what the biological brain does.
It is not true, IMO, that the biological complexity of a neuron has much to do with its function at the intelligence level. A neuron is a living structure, and its complexity is almost entirely due to the need to keep it alive. That abstraction level is irrelevant to intelligence. At the signal-processing level, the level where a neuron’s firing is what’s important, the neuron is indeed simple. In fact, most cortical neurons don’t do much more than detect concurrent input signals. We know this because we can attach probes to the inputs and the output of a neuron and figure out how it fires.

P.S. to News: Andrew Ng is not a Google co-founder, AFAIK.

Mapou
February 12, 2015 at 11:19 AM PDT
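Mapou’s signal-processing claim above, that a cortical neuron mostly “detects concurrent input signals,” can be caricatured in a few lines. This is a toy model under that assumption only, not an established neuroscience result; the threshold and the spike representation are arbitrary choices for illustration:

```python
def coincidence_detector(spikes, threshold=3):
    """Toy model of Mapou's claim: fire (return 1) only when at least
    `threshold` synaptic inputs are active in the same time window;
    otherwise stay silent (return 0)."""
    # spikes: list of 0/1 values, one per synapse, for one time window
    return 1 if sum(spikes) >= threshold else 0

# Three concurrent inputs cross the threshold; a lone input does not.
fired = coincidence_detector([1, 1, 1, 0])
silent = coincidence_detector([1, 0, 0, 0])
```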
There was a debate on AI not long ago in this thread: https://uncommondescent.com/intelligent-design/how-does-the-mind-arise-from-the-brain-novel-idea/#comments

Dionisio
February 12, 2015 at 10:57 AM PDT
news: “Deep learning algorithms are very good at one thing today: learning input and mapping it to an output. X to Y. Learning concepts is going to be hard.”

To continue the quote: “One thing Baidu did several months ago was input an image, and the output was a caption. We showed that you can learn these input-output mappings. There’s a lot of room for improvement but it’s a promising approach for getting computers to understand these high level concepts.”

Seems nobody told Baidu that they were wasting their time, or that “their premises (religious/metaphysical all) are more important to them than conclusions that match up with reality.”

velikovskys
February 12, 2015 at 8:58 AM PDT
‘Their red-faced insistence on this in the teeth of all logic and evidence is a wonder to behold.’

I started this response, Barry, to commend you on your witty metaphor, ‘red-faced insistence’; but then, reflecting on your whole post, phrase by phrase, it occurred to me that, humorously couched though it is, it is actually in no wise hyperbolical. Their red-faced insistence on this truly is a wonder to behold. It truly is a wonder to behold. Inexplicable other than in quite inchoate and purely emotional, fundamentalist terms; this, mark you, on the part of people with a tertiary education, doubtless highly formally accredited for their analytical intelligence and academic achievements.

‘Because it is a conclusion absolutely compelled by their premises.’

The nub of the matter.

‘That the conclusion bears not the faintest glimmer of hope of having any connection with reality does not seem to faze them.’

Both statements, likewise, absolutely true.

‘Because their premises (religious/metaphysical all) are more important to them than conclusions that match up with reality. The irony of their making fun of fundies about their “irrational faith commitments” is palpable.’

Beginning with an iteration of the precedence accorded to their ‘deposit of faith’, and its subsequent, pristine preservation, to the exclusion of any ineluctable inferences from the facts that might undermine that ‘deposit of faith’. Finally, you made reference to the irony of their making fun of fundies about their “irrational faith commitments” as being palpable and truly astonishing, almost ‘beyond belief’.

Sorry for the repetitions, but I am trying to bring out the fact that the humour when arguing with atheists is sometimes wonderfully pungent, side-splitting even, but there is always a tragic edge to it and its deceptive appearance of improbability, even impossibility.

Axel
February 12, 2015 at 8:26 AM PDT
Searle’s Chinese room has never been answered. Many attempts have been made. None has been even remotely successful. Still, the materialists prattle on about there being no essential difference between a silicon chip and the meat computer in our head. Their red-faced insistence on this in the teeth of all logic and evidence is a wonder to behold.

Why do they insist? Because it is a conclusion absolutely compelled by their premises. That the conclusion bears not the faintest glimmer of hope of having any connection with reality does not seem to faze them. Because their premises (religious/metaphysical all) are more important to them than conclusions that match up with reality. The irony of their making fun of fundies about their “irrational faith commitments” is palpable.

Barry Arrington
February 12, 2015 at 6:53 AM PDT
