Challenge: Can ID detect meaning vs. machine-spewed babble?

Recently, I wrote a piece for Mercatornet on how even a machine can get a degree if no one reads any more and machines both write and grade essays. Or, as one tech mag put it, today an "Essay generator can spew out BS, still get you an 'A.'"

Someone has asked: could ID principles be used to measure the quality of thought? To determine whether an essay is just portentous gunk around a topic, or is actually someone trying to put an idea into words?

Mug's eye view: One difficulty could be that many students may believe that they must sound like that to impress the grading machine. How do we get around that? Failure to sound like a machine?

– O’Leary for News

Follow UD News at Twitter!

Comments
Mung nailed it. The only thing ID would say is that necessity and chance did not produce the paper.
Joe, May 7, 2014, 04:44 AM PDT
Ah, a lovely question. This, of course, is a variation on the Turing problem. The real question is how much "intelligence" the machine has. If the machine just assembled letters without "thought," it would be really easy to detect using standard ID methodologies. If the machine assembled "words," it would be a bit more difficult. If the machine applied rules of grammar to assemble those "words" into sentences, the challenge of detection would be significantly higher. The next step for the computer would be to create "meaningful paragraphs." But at that point, we are getting to the question of how "smart" a computer is.
Moose Dr, May 6, 2014, 09:05 PM PDT
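A minimal illustrative sketch, assuming nothing beyond the Python standard library, of the three tiers Moose Dr describes above. The word lists and templates are hypothetical, chosen only to contrast letters assembled without "thought," bare "words," and grammatical but meaningless text.

import random

# Hypothetical demo of the three tiers of machine text discussed above.
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
WORDS = ["the", "student", "machine", "essay", "meaning", "grades", "writes", "reads"]

def random_letters(n=60):
    """Tier 1: letters assembled without any 'thought'."""
    return "".join(random.choice(ALPHABET) for _ in range(n))

def random_words(n=10):
    """Tier 2: real words, but no grammar binding them together."""
    return " ".join(random.choice(WORDS) for _ in range(n))

def grammatical_babble(sentences=3):
    """Tier 3: a crude subject-verb-object template; grammatical, but empty."""
    subjects = ["The student", "The machine", "The essay"]
    verbs = ["writes", "grades", "reads"]
    objects = ["the essay.", "the student.", "meaning."]
    return " ".join(
        f"{random.choice(subjects)} {random.choice(verbs)} {random.choice(objects)}"
        for _ in range(sentences)
    )

if __name__ == "__main__":
    print(random_letters())      # easy to flag: gibberish strings
    print(random_words())        # harder: real words, no syntax
    print(grammatical_babble())  # harder still: syntax without meaning

Each tier preserves more surface structure than the last, which is the commenter's point: the more grammar the generator applies, the harder its output is to tell apart from text that carries an actual idea.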
Challenge: Can ID detect meaning vs. machine-spewed babble? Wrong question! How does machine-spewed meaningless babble come to be?
Mung, May 6, 2014, 04:10 PM PDT
O'Leary, the following article, just posted on ENV, seems to somewhat address that very question:

What Is a Mind? More Hype from Big Data - Erik J. Larson - May 6, 2014
Excerpt: In 1979, University of Pittsburgh philosopher John Haugeland wrote an interesting article in the Journal of Philosophy, "Understanding Natural Language," about Artificial Intelligence. At that time, philosophy and AI were still paired, if uncomfortably. Haugeland's article is one of my all time favorite expositions of the deep mystery of how we interpret language. He gave a number of examples of sentences and longer narratives that, because of ambiguities at the lexical (word) level, he said required "holistic interpretation." That is, the ambiguities weren't resolvable except by taking a broader context into account. The words by themselves weren't enough. Well, I took the old 1979 examples Haugeland claimed were difficult for MT, and submitted them to Google Translate, as an informal "test" to see if his claims were still valid today... Translation must account for context, so the fact that Google Translate generates the same phrase in radically different contexts is simply Haugeland's point about machine translation made afresh, in 2014.
Erik J. Larson - Founder and CEO of a software company in Austin, Texas
http://www.evolutionnews.org/2014/05/what_is_a_mind085251.html
bornagain77, May 6, 2014, 02:30 PM PDT