Talk:Symbol grounding problem

WikiProject Philosophy (Rated Start-class, Mid-importance)
This article is within the scope of WikiProject Philosophy, a collaborative effort to improve the coverage of content related to philosophy on Wikipedia. If you would like to support the project, please visit the project page, where you can get more details on how you can help, and where you can join the general discussion about philosophy content on Wikipedia.
This article has been rated as Start-Class on the project's quality scale.
This article has been rated as Mid-importance on the project's importance scale.
WikiProject Linguistics (Rated Start-class)
This article is within the scope of WikiProject Linguistics, a collaborative effort to improve the coverage of linguistics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Start-Class on the project's quality scale.
This article has not yet received a rating on the project's importance scale.
This article is supported by the Philosophy of language task force.

Who is Stevan Harnad?

And did he write this entire article himself?

Harnad links: [1][2]
  • It seems like that guy named Harnad wrote this article by himself and extensively cited his own work. In addition, this article is a little hard to read, and its view is too dependent on one particular viewpoint, in my opinion. But that is better than no article at all. If someone could improve the article, it would be great. --
  • I don't usually like to see things like this in my pedia -Niubrad (talk)
Niubrad (talk) 22:20, 20 November 2014 (UTC)
  • To be honest, I swear the only reason nobody's changing this is because nobody has any better ideas, because it's a pretty obscure thing. As it is, I think it gives a really good introduction to the whole thing, but it should cite and incorporate more of other peoples' ideas. —Preceding unsigned comment added by (talk) 14:38, 27 October 2009 (UTC)
  • Stevan Harnad is currently Professor in Electronics and Computer Science at the University of Southampton, and Canada Research Chair in Cognitive Science at Université du Québec à Montréal. Founder and Editor of Behavioral and Brain Sciences. Past President of the Society for Philosophy and Psychology. Author and contributor to over 300 publications. I did PhD research in artificial intelligence, and I regard his symbol grounding hypothesis as a profoundly important and insightful contribution. This article deserves its place here. And no, unfortunately, I do not have anything approaching the time required to update it. By the way I have recast the question in neutral terms. The anonymous individual who asked the original, extremely rude and inappropriate question needs to learn some basic courtesy. Just because you haven't heard of somebody doesn't entitle you to try to diminish them. Rubywine (talk) 17:51, 31 August 2010 (UTC)
  • Thank you for providing some background. While of course Wikipedia values expert input, editors need to avoid letting a WP:Conflict of interest affect articles. It is best in such cases for people with a conflict of interest to propose changes on the talk page rather than directly editing the article. We strive to write for a general audience and maintain a WP:neutral point of view, so the main references should be from reliable secondary sources, not research articles. More perspective and background is in the WP:No original research policy. ★NealMcB★ (talk) 13:35, 16 September 2014 (UTC)
  • @Rubywine I'm not condoning any prior discourtesy, but Wikipedia is not a scientific society or publication: an editor's external credentials are largely irrelevant here. The relevant criterion for evaluating a Wikipedia contribution is the extent to which it respects the rules and guidelines of Wikipedia, and enhances the content. This article flouts some basic rules and leaves a lot of editing for others. "I do not have anything approaching the time required to update it." - yes, exactly. Neither does anybody else. I could be totally overreacting, but this reminds me of somebody coming by Goodwill after hours to drop off a dining room table that's missing one leg. Spike0xff (talk) 16:57, 6 November 2014 (UTC)
  • The article reads to me like a college essay on "The Symbol Grounding Problem"; it is not in the style of a Wikipedia article. A Wikipedia article is not an essay. Also, the first section does not define or explain "symbol grounding". Either the first section, first paragraph, and first sentence should focus on defining and explaining "symbol grounding" (the subject of the article) for a general audience, as well as possible, or the article should be renamed "The Symbol Grounding Problem" - except that if we do that, then we have an article named "The Symbol Grounding Problem" without having an article on "symbol grounding". Unless the phrase "symbol grounding" is only commonly used in the context of discussing the symbol grounding problem? That does not seem to be the case, as a quick Google search suggests. For example, there is a book named "Symbol Grounding" which gives this definition: "This process of connecting symbols with sensorimotor experiences is known as symbol grounding." Spike0xff (talk) 16:57, 6 November 2014 (UTC)

Recommending a little rework

In case I'm not the only one of this opinion, I want to mention that the article currently seems a little too biased toward suggesting that properly identifying consciousness in others is theoretically impossible, and that this is relevant. I don't want to contest here the full implication of whatever might be precisely meant by the phrase containing "proper", but I think we can recognize that if consciousness were slightly reconceptualized, we'd attain a higher-quality article.

'Meaning' itself is still a somewhat problematic notion in philosophy, so invoking consciousness as its co-operating associate probably does not contribute much to our understanding of symbol grounding. Consciousness only seems as important as it does – not that it isn't important – because those of us claiming to have it have never experienced other cognitive agents consistently making exact and seemingly uncanny predictions about our behaviors, not to mention our fates. In terms of our potential mercies or decision expectations, we're usually in full control of our robots, from start states to end states, and therefore we don't tend to attribute volitions to them. But suppose we attribute volitions to them anyway for a moment. Then, in terms of volition, the only significant difference we might notice between them and us is that whereas we perceive their range of actions as strictly delimited, we don't perceive ours that way. Theoretical engineering barriers notwithstanding, we would likely re-evaluate our view of ourselves as 'non-robotic, with a mysterious consciousness' if other cognitive agents suddenly began exhibiting behaviors compelling us to believe that they could predict us as reliably as we can predict that our microwaves will stop microwaving when the programmed time runs out.

One possible objection to those hypothetical cognitive agents, I imagine, is that we can simply choose a set of acts different from the set we are told we are predicted to perform. Crucially, however, no ultimate law requires that a puppeteer always tell us exactly what it knows at exactly the appropriate times; it could still know, without error, which set of acts we are to perform. Perhaps early-adopter atheists sometimes overlook this possibility, so demonstrations involving another human would need to do the compelling. While one human subject is given the deviant role of whimsically either doing X or not doing X when told it will do X, a second human subject – preferably a skeptic – kept apart from the first gets to witness the actual predictions Y made by the super-agent. Of course, the super-agent can't depend on the mock predictions X being error-free, especially since the first subject is probably geared to want so much to be "free" and never practically deterministic. (Hopefully, uncanny demonstrations in general will never become threatening, since then the so-called super-agents would be no more interesting than human conspirators.) Appeals to the problem of consciousness, especially as it's associated with humans, can go only so far in explaining the problem of symbol grounding, and hence they're inadequate from the present perspective.

I should leave such a reworking for a later time if no one interested in doing it arrives, or if it becomes clear that such a modification won't cause a big controversy. In what I hope was a contribution I made earlier today, I included the term 'metamodel', whose unpacking may serve as a heuristic in the proposed direction of departing from consciousness concerns and perhaps going "more technical". Vbrayne 22:35, 6 April 2007 (UTC)

  • Simply, if one accepts that there are varying degrees of consciousness, with human-level sentience as one particular segment of those degrees, then when we say that the problem of meaning is related to the problem of consciousness, we're not really referring to the problem of human-level sentience but to a broader class of sentience. Some changes were made to achieve coherence. I studied Stevan's papers more thoroughly, then aimed to keep with and not contradict him. Valeria B. Rayne 21:59, 9 April 2007 (UTC)
  • I find your comments rambling, obscure and convoluted. You haven't attempted to address the content of the article. There is no evidence here that you have grasped the symbol grounding problem. I don't think there's any argument here to be answered. Rubywine (talk) 00:47, 1 September 2010 (UTC)
I wrote that several years ago. That day I was in a writing mood and perhaps made the post more ornate than it should have been. The part of the problem I addressed was the way, and the extent to which, Harnad, or the article at the time, mystified consciousness. I gave indications of how it could be mystified less, while not being eliminativist. An earlier form of the problem, as expressed in this article, seemed more strongly concerned that, while meaning is associated with symbol grounding, consciousness is associated with meaning. ValRayne (talk) 15:27, 5 April 2011 (UTC)

The Non-Definition of Symbol

The definition of symbol given in the Formulation of Symbol Grounding Problem section is circular. We are told that a "symbol is any object that is part of a symbol system", and that a "symbol system is a set of symbols and syntactic rules for manipulating them...". Perhaps it would be better not to mention the 'definition' and to just delete that paragraph. —Preceding unsigned comment added by Spaecious (talkcontribs) 01:33, 21 May 2008 (UTC)

The non-definition of symbol is at the root of the symbol-grounding problem -- look it up in any dictionary and you will be sent in circles upon circles as to what a 'symbol' means, or what 'meaning' means. But you don't need a dictionary to understand what 'meaning' means, right? -- Right? --Quetzilla (talk) 06:34, 26 January 2010 (UTC)
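The circularity worry above can at least be made operational: in the formal sense the article uses, a "symbol system" is just a set of arbitrary tokens plus rules that rewrite them by shape alone, and that much is easy to exhibit directly. A minimal sketch, with tokens and rules invented purely for illustration:

```python
# Illustrative sketch of a formal symbol system: arbitrary tokens plus
# rules applied purely by token shape. Nothing in the program consults
# what "A", "B", or "C" mean -- which is exactly the grounding problem.

rules = {
    ("A", "B"): ("B", "A"),   # swap the pair, based solely on shape
    ("B", "B"): ("C",),       # collapse two Bs into a C
}

def rewrite_once(tape):
    """Apply the first matching rule to the leftmost matching pair."""
    for i in range(len(tape) - 1):
        pair = (tape[i], tape[i + 1])
        if pair in rules:
            return tape[:i] + list(rules[pair]) + tape[i + 2:]
    return tape  # no rule applies: the tape is in normal form

tape = ["A", "B", "B"]
while True:
    nxt = rewrite_once(tape)
    if nxt == tape:
        break
    tape = nxt
print(tape)  # -> ['C', 'A']
```

The definition is circular as a dictionary entry, but as a mechanism it is perfectly definite; what stays undefined is any connection between the tokens and the world.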
And the article neglects non-symbolic computation, which is currently revolutionizing AI. See: "deep learning". This is a big, big hole. (talk) 13:46, 10 October 2015 (UTC)

Let it be whatever it may connote

It may be that the tag "Tony Blair" denotes or explicates Tony Blair (1) indeed, and also connotes or implicates UK's former Prime Minister (2) and Cherie Blair's husband (3), and many others in need.

Likewise, the tag "Cherie Blair's husband" denotes or explicates Cherie Blair's husband (3) indeed, and also connotes or implicates Tony Blair (1) and UK's former Prime Minister (2), and many others in need.

Let "Mark Twain" tag or denote Mark Twain indeed, and tug or connote Samuel Clemens, the 6th child of John Clemens, the husband of Olivia Clemens, the creator of Tom Sawyer, an admirer of Helen Keller, the coiner of "miracle worker" for Anne Sullivan, or whatever he was, in need.

Regardless of what ought to be the meaning of words or names, such implication is our inborn associative intelligence, which even artificial intelligence can easily imitate. It may be that a tag or name stands for its bearer as a whole, hence to the maximal effect. It may be that meanings are brainstorming in the head anyway. This may be why we have more to do with psychologism than literalism. This may be too simple to be realized!

Judging from such a sense of the words denote and connote, problematic is John Stuart Mill's argument that proper names such as "Tony Blair" have no connotation but only denotation. Gottlob Frege also found Mill's view problematic.

Thus, Frege argues in effect that the sense (Sinn) of "Hesperus" is Hesperus itself while its essence (Bedeutung) is Hesperus in itself, that is, Venus! Then, what is the essence of "Venus" in turn? Should it be Venus itself, then where should Venus in itself in contrast be found? This question suggests an endless regress from essence to essence.

To be straightforward, why not tag (or denote) "Venus" for Venus, and "Hesperus" for Hesperus? Why not let what is tagged "Venus" be Venus and the brightest star, and "Hesperus" be Hesperus and Venus seen in the evening (hence, the evening star) and so forth as far as my knowledge goes? Tagging and tugging may be too diffused to be confused.

--KYPark (talk) 15:01, 29 October 2008 (UTC)

Symbol manipulation according to meaning

In the first paragraph of the article the author states: "... computation in turn is just formal symbol manipulation: symbols are manipulated according to rules that are based on the symbols' shapes, not their meanings."

It seems to me that it is the meaning that we are dealing with, not the symbol itself. Do symbols not merely refer to a meaning that has some concrete basis? If the meaning is not understood, then the calculation is being performed by memorization. Regardless of whether the meaning is understood, it is inherent to the system of which it is a part. Is it not?

By what argument do we isolate symbols from their meanings and the ideas to which they refer?

There is nothing controversial here. This is the meaning of "formal" or "computational", and the distinction is called the difference between syntax and semantics. Stevan Harnad 14:47, 29 December 2012 (UTC) (talk) 04:46, 29 December 2012 (UTC)Courtney Gardner

Additionally, is the author failing to consider the implicit assumptions that we invoke when finding meaning? It seems that the focus on rules or algorithms for finding meaning is inappropriate. A Connectionist approach would better account for and describe the mental processes involved. — Preceding unsigned comment added by (talk) 04:54, 29 December 2012 (UTC)

The respective contributions of computation and connectionism (neural nets) to cognition are part of what is at issue in the symbol grounding problem.
A neural net can be simulated by a symbol system (computation). See the critique of Searle's "Chinese Gym Argument." Stevan Harnad 14:47, 29 December 2012 (UTC) This article elucidates the flaw in the author's thinking. (talk) 06:04, 29 December 2012 (UTC)Courtney Gardner

  • Where the flaw, if any, lies must be left to the reader to judge... Stevan Harnad 14:47, 29 December 2012 (UTC)
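On the neural-net point above: the claim that a neural net can be simulated by a symbol system is uncontroversial in the computational sense, since a net's forward pass is itself rule-governed arithmetic on tokens. A minimal illustrative sketch (the weights, bias, and threshold below are invented for the example, not taken from any cited work):

```python
# Minimal sketch: a one-neuron "net" simulated by a purely formal program.
# The arithmetic manipulates numerals by rule; nothing in the computation
# depends on what the inputs are taken to mean.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0   # simple threshold unit

# An AND gate realized by the threshold unit (weights chosen by hand):
print(neuron([1, 1], [1.0, 1.0], -1.5))  # -> 1: fires only when both inputs are 1
```

Whether such a simulation suffices for cognition is, of course, exactly what the symbol grounding problem disputes; the sketch only shows why the simulability claim itself is not at issue.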
Stevan, all your talk page signatures point to a Wikipedia article about you rather than to your user talk page at User talk:Harnad the way a standard 4-tilde signature does. Please follow the WP:SIGNATURE policy, which says "Signatures must include at least one direct internal link to your user page, user talk page, or contributions page; this allows other editors easy access to your talk page and contributions log. The lack of such a link is widely viewed as obstructive." I also wish you would use the normal "leading colons" method of indenting comments. Using bullet items makes me think the previous commentator is making some bullet points. ★NealMcB★ (talk) 02:52, 17 September 2014 (UTC)
Neal, sorry about the bullets. Back then I had not yet figured out the meaning of the various indentation options. But I still haven't figured out how to fix my signature. I always click the signature and time-stamp tag and it always gives 2 dashes and 4 tildes. I'm not sure how to redirect it to where it is supposed to go, instead of to the WP article --User:Harnad 11:25, 17 September 2014 (UTC)
Thanks, Stevan - I hadn't remembered that I can click the signature icon in the editing toolbar :) Perhaps you changed your default signature. I think you can change it back under Preferences > User Profile: --★NealMcB★ (talk) 16:25, 17 September 2014 (UTC)

Link to theory of meaning

I'm not sure what Wikipedia equivalent article would be to this one: . Now, I don't have academic degrees (though I might in the future!), but this subject is something I've been studying, and I would like to see a criticism-like section as well as the original text of the article, if possible. I think there are not one but several distinct claims that can be objected to. For instance, most AI researchers from the computer science camp don't agree with (or even take seriously) Searle's Chinese Room thought experiment (I'm fairly certain that if you search through the works of Marvin Minsky you'd find good quotes to support that claim). But I think Wikipedia's own article fairly reflects that point. Secondly, very few AI researchers in that same camp regard the Turing test as an indication of anything. For more in-depth discussion you can read Aaron Sloman: . Another, maybe useful, link: I searched for this article as a result of reading a paper by Tom Froese and Tom Ziemke in the general context of situated AI, which looks like it could be yet another good link from this page, because embodied / situated AI makes claims about (potentially) solving this problem. So, to put it shortly, most objections would go towards the description of what consciousness is and whether the broad argument against functionalism has to be accepted without a doubt. (talk) 22:13, 7 November 2014 (UTC)

How about a comp-sci definition???

Is there a Wikipedia article that defines the standard comp-sci definition of symbol grounding? This article has absolutely nothing to do with the "symbol grounding problem" as defined in standard comp-sci textbooks, which is a completely different and unrelated concept ... (i.e. it's the satisfiability problem, and a grounding is the set of things that satisfy a set of clauses). (talk) 19:36, 24 March 2015 (UTC)

Interesting - thanks! I agree that the current article isn't well grounded (heh) in the literature. But I don't see much use of the term "symbol grounding" in the satisfiability realm either - the closest I found was this article: [3]. Do you have some better refs? ★NealMcB★ (talk) 13:47, 5 April 2015 (UTC)
I don't have any refs handy; however, I am trying to write up a certain robot-architecture design for some software programmers to implement, and I have to use the word "grounding" a lot. I was hoping to tell the coders to "go read the Wikipedia article". I may as well say "oh, go study model theory and logic for a few years and you'll get it". The Wittocx paper you point at is very typical of the model-theoretic definition -- there must be hundreds of similar articles (I mean, I don't know what that particular paper says, but the definitions they provide -- "here's a language, here's a theory, here's a grounding" -- are generic for the industry.) (talk) 00:15, 14 September 2016 (UTC)
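For readers unfamiliar with that model-theoretic usage: "grounding" there means instantiating the variables of a first-order clause with every tuple of domain constants, turning first-order formulas into propositional ones that a SAT solver can handle. A hedged sketch, with the predicate name, variables, and domain invented purely for illustration:

```python
# Sketch of model-theoretic grounding: expand a universally quantified
# clause over a finite domain into its propositional (ground) instances.
from itertools import product

domain = ["a", "b"]  # a toy two-element domain

def ground(clause_template, variables):
    """Return every ground instance of the clause over the domain."""
    instances = []
    for values in product(domain, repeat=len(variables)):
        binding = dict(zip(variables, values))
        instances.append([lit.format(**binding) for lit in clause_template])
    return instances

# "forall X, Y: edge(X, Y) -> edge(Y, X)" written as the clause
# "-edge(X,Y) or edge(Y,X)" (symmetry of the edge relation):
clauses = ground(["-edge({X},{Y})", "edge({Y},{X})"], ["X", "Y"])
print(clauses)  # 4 ground clauses, one per (X, Y) pair from the domain
```

A grounding in the commenter's sense -- "the set of things that satisfy a set of clauses" -- is then an assignment to these ground atoms, which is indeed unrelated to Harnad's problem.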