Talk:Natural language understanding

From Wikipedia, the free encyclopedia
To-do list for Natural language understanding:

Here are some tasks awaiting attention:



Steps of NLU:

as if there were a canonical NLU methodology. rm


  • Rule-based
  • Learning based

it's equally vacuous. When someone gives the article attention the first will be mentioned and as for the second ... (rhetorical ellipsis). —Preceding unsigned comment added by (talk) 20:29, 10 July 2008 (UTC)

This comment from 7 years ago seems to have some interesting criticisms, but not one of them is explicit. "Vacuous" is about as meaningless a criticism as is possible. It would be helpful, when an expert in the field visits here, if they could make some actual improvement in the article. One paper on the Web mentions "greedy decoding with the averaged perceptron" modified by Brown clustering and case normalization, without any explanation of what these terms mean. If there is good current research in natural language understanding, it would be helpful if an expert expanded the article to explain the details. One excellent application, not currently listed in the article, is in language translation. David Spector (talk) 15:51, 30 March 2015 (UTC)

Speech Recognition

FTR, this really has relatively little to do with the subject of this article. Understanding proceeds indifferently from spoken or written speech, except insofar as non-verbal communications are concerned and of course these are outside the normal scope of either NLP or SR. (talk) 19:57, 10 July 2008 (UTC)

Disagree. Speech is full of ambiguity, as is text/writing. The full understanding or interpretation of the semantics of speech (even down to the identification of phonemes) requires very much the same kinds of analysis as does the understanding of written language. David Spector (talk) 15:59, 30 March 2015 (UTC)

Article rewrite

This article can be kindly described as "hopeless". It has a few irrelevant short paragraphs and needs a 99.9999% rewrite. If no one objects, I will rewrite from scratch. There is nothing here that can be used. And there is NO point in merging with Natural language processing since this is "a field unto itself" and the merger will be the blind leading the blind, for the other article is no gem either. History2007 (talk) 20:40, 18 February 2010 (UTC)

Good work, the comments above are all by me, glad to see someone's done something with this. (talk) 17:48, 20 August 2010 (UTC)
Thank you. History2007 (talk) 18:27, 20 August 2010 (UTC)
But also, yes. It is a really thorough article now! :) Pixor (talk) 17:07, 17 June 2012 (UTC)
It doesn't seem thorough to me at all. It doesn't say anything specific about how to analyze natural language text for its semantics. I can't write even the most primitive computer program to do this analysis based on this article. I also don't see any real explanation of any academic topics in the field. David Spector (talk) 16:03, 30 March 2015 (UTC)
I hope that this article will be more thorough, so that many interested students are properly guided. I don't think that it would be a good starting point for newcomers to NLU. Yijisoo (talk) 13:40, 18 September 2016 (UTC)

Dubious: Understanding is more difficult than Generation

The second paragraph of the opening section makes the (albeit well-argued) unsupported claim that understanding is more complex than generation. While this might be true, it isn't cited or referenced.

I'm inclined to believe that this isn't true, though. In a recent computational linguistics course I took, my professor repeatedly mentioned that good generation is much more difficult, because in understanding all of the information is there and only needs to be picked apart, whereas in generation, the computer has to make many "decisions" based on little else but context.

Anyway, I would consider removing this section until a source is found? I'm not sure if it adds a lot to the article, anyway. Thoughts? Pixor (talk) 17:06, 17 June 2012 (UTC)

Disagree strongly. It is almost obvious that generation of natural language text can be very easy. I've programmed this myself (for a medical history generator), and I have no background in formal NLP. It is equally obvious that determining the meaning of actual natural language text ranges from complex to impossible, depending on how much context information is needed. But, as to removing any section of the article, you haven't built a case for such an extreme action. Sources are certainly needed throughout the article, but deletions aren't a solution for a lack of sources; they are just an avoidance. If a section has been removed, someone with the time to research this change should restore it. David Spector (talk) 16:13, 30 March 2015 (UTC)

While the argumentation given is true, whether NLG is more difficult than NLU depends on the representation from which language is generated, and how much variation you want in the generation. I would not consider filling in slots in a template to be proper NLG.
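The kind of trivial slot-filling "generation" discussed in the two comments above can be sketched in a few lines. This is a minimal illustration, not code from any real system; the template, field names, and medical-report scenario are all hypothetical, loosely inspired by the medical history generator mentioned earlier.

```python
# Hypothetical sketch of template slot-filling "generation": a fixed
# template whose blanks are filled from a structured record. All names
# here are illustrative, not taken from any real NLG system.

TEMPLATE = ("The patient, {name}, is a {age}-year-old {sex} "
            "presenting with {complaint}.")

def generate_report(record: dict) -> str:
    """Fill the template's slots from a structured record."""
    return TEMPLATE.format(**record)

record = {
    "name": "Jane Doe",
    "age": 54,
    "sex": "female",
    "complaint": "shortness of breath",
}

print(generate_report(record))
# Prints: The patient, Jane Doe, is a 54-year-old female presenting
# with shortness of breath.
```

No linguistic decisions are made here (no word choice, agreement, or sentence aggregation), which is exactly why this kind of generation is easy, and why the commenter above would not count it as proper NLG.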

Assessment comment

The comment(s) below were originally left at Talk:Natural language understanding/Comments, and are posted here for posterity. Following several discussions in past years, these subpages are now deprecated. The comments may be irrelevant or outdated; if so, please feel free to remove this section.

* It's a stub! (talk) 20:14, 10 July 2008 (UTC)

Last edited at 20:14, 10 July 2008 (UTC). Substituted at 00:57, 30 April 2016 (UTC)

Dubious Searle reference

Why is Searle's POV mentioned here specifically in respect to Watson? The placement of the citation gives the impression that he believes that *that* set of algorithms running on Watson failed to understand, rather than his more general epistemological view that no matter what algorithms were implemented it *couldn't* understand, which he posited decades before Watson. I don't believe any NLU researcher, or the Watson team, would claim they are in any way addressing the Chinese room argument in their work. The word 'understanding' simply implies that they are working at the semantic level, rather than surface syntax or morphology, for example. The problem they are addressing is technical, not philosophical, and I don't think Searle's position is relevant except to demonstrate that claims that a machine understands as people do rest on very shaky ground. It may be more useful to mention Searle in a section clarifying what 'understanding' means in this context. — Preceding unsigned comment added by (talk) 12:27, 1 August 2016 (UTC)

The commenter above is bringing up an excellent point. While the WSJ article may be difficult to access, here is a topical line from the article:

Watson revealed a huge increase in computational power and an ingenious program. I congratulate IBM on both of these innovations, but they do not show that Watson has superior intelligence, or that it's thinking, or anything of the sort. Computational operations, as standardly defined, could never constitute thinking or understanding for reasons that I showed over 30 years ago with a simple argument.

It might be better and a more neutral POV to say something more along the lines of: "However, despite its apparent success using machine learning, even Watson is still not demonstrative of true Natural Language Understanding as understood by experts such as John Searle". (just a rough draft proposal) Cuevasclemente (talk) 19:40, 28 August 2017 (UTC)