Talk:Chatterbot

From Wikipedia, the free encyclopedia
WikiProject Software / Computing  (Rated Start-class, Low-importance)
This article is within the scope of WikiProject Software, a collaborative effort to improve the coverage of software on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Start: This article has been rated as Start-Class on the project's quality scale.
Low: This article has been rated as Low-importance on the project's importance scale.
This article is supported by WikiProject Computing.

WikiProject Linguistics / Applied Linguistics  (Rated Start-class)
This article is within the scope of WikiProject Linguistics, a collaborative effort to improve the coverage of Linguistics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Start: This article has been rated as Start-Class on the project's quality scale.
???: This article has not yet received a rating on the project's importance scale.
This article is supported by the Applied Linguistics Task Force.

This article has been automatically rated by a bot or other tool because one or more other projects use this class. Please ensure the assessment is correct before removing the |auto= parameter.


Open source bots

Links to open source bots would be nice... seeking now -- User:DennisDaniels

I've got a very short chatterbot program (<30 lines) which I could add to the article. I wrote it myself in 1984 and as far as I'm concerned it's Open Source. -- Derek Ross | Talk 14:00, 2004 Jun 18 (UTC)
Well, 33 including blank lines -- Derek Ross | Talk 03:35, 22 Jun 2004 (UTC)

SHRDLU

SHRDLU was an experiment in natural language understanding, but it hardly qualifies as a chatterbot. The crucial difference is that SHRDLU did know what it was talking about -- or at least "attempted" to. Its purpose, unlike a chatterbot, wasn't just trying to convince human operators that a "real person" was on the other end (except indirectly -- but then any human being qualifies as one, too. :-) -- JRM 12:26, 2004 Sep 1 (UTC)

Agreed. However it's probably worth mentioning it just to point out that it's not a chatbot. -- Derek Ross | Talk 02:57, 2004 Sep 2 (UTC)
That gets a bit too specific, I think -- perhaps a general reference to natural language processing would be better (of which chatterbots are but a specific (and rather whimsical) instance). -- JRM 11:47, 2004 Sep 2 (UTC)

Source code

The source code, even in QBASIC, is quite obscure — it looks pretty obscure to me, and I used to do the odd bit of programming in the language a few years ago! It's not going to be very helpful for 99% of our readers. I think we should, if not remove it outright, recast this as pseudocode, and, if it would be helpful, provide an external link to source code. — Matt 18:43, 20 Sep 2004 (UTC)

We could certainly recast the program as pseudocode as an aid to comprehension. However, one of the reasons for writing it in QBASIC was to give an example chatterbot which would actually run "as is". A pseudocode version might be more useful to experienced programmers but perhaps less so to neophytes or non-programmers, since it would be impossible for them to run it. By comparison, an interested neophyte or non-programmer can get the current code running by following the simple instructions included in the current article.

An alternative to pseudocode would be to rewrite the program to make it clearer. For all the unusual layout of the program it is actually fairly simply structured, so that would not be difficult to do. -- Derek Ross | Talk 05:15, 2004 Sep 21 (UTC)

One problem is that not every reader even has a QBASIC interpreter (remember they stopped shipping it with late versions of Windows 98, not to mention non-Windows systems), nor even the knowledge of how to enter and execute such a program. Moreover, it's probably asking too much to expect a general reader to understand BASIC syntax, even if it was laid out correctly. A larger number of readers would have half a chance of reading pseudocode and getting the gist of it. If an adventurous reader wants to execute some code, I think an external link would do the trick. — Matt 09:45, 21 Sep 2004 (UTC)

Those are fair points but I still don't feel that pseudocode is enough. I contributed the code in answer to the request at the top of this page, so the source code is a response to demand. However if QBASIC is no good perhaps you can suggest a better language. Something like awk or perl perhaps ? -- Derek Ross | Talk 03:23, 2004 Sep 23 (UTC)

For people who want to try out or implement a chatterbot, I think the set of external links to software for various platforms is sufficient. I'd point out that the request at the top of the page asked for "links to open source bots", not for actual source code to be placed within the article. I think including a simple chatterbot in pseudocode (and the resulting conversation) could be an excellent piece of illustration, but I don't think we should use Wikipedia as a repository for sample source code. — Matt 12:03, 30 Sep 2004 (UTC)
I'm not sure if this conversation is dead or not, but I figured I'd add my two cents anyway. After running the code, I tried to rewrite it in VBScript (the native scripting language of my IRC client) so I could run it in an IRC channel. Now, I'm no newbie to programming, but I was pretty confused by most of the code -- I ended up mostly converting the syntax and trusting that it would work. It didn't; I figure I must have messed something up somewhere. My point is, it would be nice if the code was commented, or if the variable names were longer than one character... that way, I can learn and understand the code instead of simply being able to run it. Thanks for listening! AquaDoctorBob 14:30, 9 Jan 2005 (UTC)
Okay, I do have a version which is longer but more conventional in appearance. It should be easier to translate into other languages or dialects. I'll upload it instead. -- Derek Ross | Talk 23:12, 2005 Jan 9 (UTC)
Adding to the above conversation, I suppose I'm an "ordinary" person with no knowledge of programming code, and the stuff in this article baffles me. It's really not helpful at all to me, and I'd hazard a guess that more readers aren't programmers than are. As mentioned above, later WIN98 releases don't have a QBASIC interpreter, and it seems I fall into that category. I came here expecting an article about chatterbots. Perhaps the code would be better suited to a Wikibook on QBASIC programming? - Vague | Rant 07:06, Jan 18, 2005 (UTC)
Fair enough. I think that part of the problem for non-programmers is that the article is really just a stub followed by an example of interest to programmers. In order to improve the article we need more informative material to counterbalance the example. I'll look at converting the WikiChat program into something that can be run more easily on modern systems too. That will take a few days to sort out. -- Derek Ross | Talk 15:37, 2005 Jan 18 (UTC)
I have no knowledge of programming code, and having the full .BAS there did it for me. I had assumed such programs would be huge and complex. To see the code (it's not like I've read it; I've merely noticed how little text it is) and then to see what it can do taught me a lesson. If you ever decide to link it away, be sure to mention in the link text that it consists of only 90 lines. 22:43, 14 October 2005 (UTC)

Well, thanks, Anonymous User. I'm glad that at least one person has found the code informative. -- Derek Ross | Talk 05:43, 15 October 2005 (UTC)

Attention

In an effort to clean up Artificial intelligence, instead of completely removing a paragraph mostly concerning chatterbots, I copied it under "Chatterbots in modern AI". I noticed that it repeats some information already in this article, but there might also be some additions. Unfortunately I cannot spend time on a smooth merger right now. Sorry for the inconvenience. --moxon 09:20, 20 October 2005 (UTC)


BASIC source code removed from article (not essential to understanding of subject)

Aids to understanding are often not essential, even when they are useful. The BASIC source code below demonstrates that these programs can be quite short and simple, even to people who don't understand computer programming. I am surprised that anyone should think that it was intended as a tutorial on programming. It might have some tutorial value as an example of a chatterbot (the topic of the article) but hardly as an example of programming. -- Derek Ross | Talk 16:18, 20 November 2005 (UTC)

WikiChat -- a simple Chatterbot example

In principle a chatterbot can be a very short program. For instance the following program — which should be copied and saved as WikiChat.BAS — implements a chatterbot which will learn phrases in any language by repetition in much the same way that a parrot does.

WikiChat:
  DEFINT A-Z
  GOSUB Initialise
  GOSUB LoadData
  GOSUB Converse
  GOSUB StoreData
  SYSTEM

Initialise:
  LET DictionarySize = 1000
  DIM Context$(DictionarySize) 'The character sequences that WikiChat has already seen
  DIM Alternatives$(DictionarySize) 'The characters that WikiChat may print after recognising a sequence.
  LET EmptyRow = 0
  LET EndOfResponseCharacter$ = CHR$(180)
  LET ContextLength = 6  'A bigger value makes WikiChat more grammatical but slower learning.
  LET CurrentContext$ = STRING$(ContextLength, EndOfResponseCharacter$)
  LET DictionaryFile$ = "WIKICHAT.MEM"
  RANDOMIZE TIMER
  RETURN

Converse:
  DO
    LINE INPUT "Human: "; Response$
    IF Response$ = "" THEN EXIT DO
    LET Response$ = Response$ + EndOfResponseCharacter$
    GOSUB MemoriseHumanResponse
    LET Response$ = ""
    GOSUB GenerateComputerResponse
    PRINT "Computer: "; Response$
  LOOP
  RETURN

MemoriseHumanResponse: 'Record each character of the response against its preceding context
  DO WHILE Response$ > ""
    LET CurrentCharacter$ = LEFT$(Response$, 1)
    LET Response$ = MID$(Response$, 2)
    GOSUB InsertCharacter
    LET CurrentContext$ = MID$(CurrentContext$, 2) + CurrentCharacter$
  LOOP
  RETURN

GenerateComputerResponse: 'Walk the dictionary from the current context, choosing stored characters at random
   DO
     GOSUB Lookup
     LET CurrentCharacter$ = MID$(Alternatives$(DictionaryIndex), INT(RND * LEN(Alternatives$(DictionaryIndex))) + 1, 1)
     IF CurrentCharacter$ = "" THEN
       EXIT DO
     ELSE
       LET CurrentContext$ = MID$(CurrentContext$, 2) + CurrentCharacter$
       IF CurrentCharacter$ = EndOfResponseCharacter$ THEN
         EXIT DO
       ELSE
         LET Response$ = Response$ + CurrentCharacter$
       END IF
     END IF
   LOOP
   RETURN

InsertCharacter: 'Add the current character to the alternatives stored for the current context
  GOSUB Lookup
  IF INSTR(Alternatives$(DictionaryIndex), CurrentCharacter$) = 0 THEN
    LET Alternatives$(DictionaryIndex) = Alternatives$(DictionaryIndex) + CurrentCharacter$
  END IF
  RETURN

Lookup: 'Find, or create, the dictionary row matching the current context
  LET Context$(EmptyRow) = CurrentContext$
  LET DictionaryIndex = 0
  DO WHILE CurrentContext$ <> Context$(DictionaryIndex)
    LET DictionaryIndex = DictionaryIndex + 1
  LOOP
  IF DictionaryIndex = EmptyRow AND DictionaryIndex < DictionarySize THEN
    LET Alternatives$(EmptyRow) = ""
    LET EmptyRow = DictionaryIndex + 1
  END IF
  RETURN

LoadData: 'Read the dictionary from disk, creating the file first if it is absent
  OPEN DictionaryFile$ FOR APPEND AS #1
  CLOSE #1
  OPEN DictionaryFile$ FOR INPUT AS #1
  DO WHILE EmptyRow < DictionarySize AND NOT EOF(1)
    LINE INPUT #1, Context$(EmptyRow)
    LINE INPUT #1, Alternatives$(EmptyRow)
    LET EmptyRow = EmptyRow + 1
  LOOP
  CLOSE #1
  RETURN

StoreData: 'Write the dictionary back to disk for the next session
  OPEN DictionaryFile$ FOR OUTPUT AS #1
  FOR DictionaryIndex = 0 TO EmptyRow - 1
    PRINT #1, Context$(DictionaryIndex)
    PRINT #1, Alternatives$(DictionaryIndex)
  NEXT
  CLOSE #1
  RETURN

Note that to begin with, this chatterbot knows nothing and therefore says nothing. However if one uses simple conversation with it, like a parrot it will begin to reply as it starts to find responses that are appropriate for the immediately preceding sentence.

WikiChat learning sessions

The following sessions show how to teach WikiChat some simple conversation. Note that sessions are started by running the WikiChat program and ended by pressing the return key without giving a response to the computer. They show WikiChat being taught a basic conversational pattern in English.

(Session 1) 
Human: hello there
Computer:
Human: hello
Computer:
Human: how are you
Computer:
Human: fine thanks how are you
Computer: fine thanks how are you
Human: fine thanks
Computer:
Human:

(Session 2) 
Human: hello
Computer: how are you
Human: fine thanks how are you
Computer: fine thanks
Human: glad to hear it
Computer:
Human:

(Session 3) 
Human: hello
Computer: how are you
Human: fine thanks
Computer: glad to hear it
Human:

The responses will be stored for the next session, so that there is no need to reteach them. It should be noted, however, that while this example chatterbot will create "original" sentences after a little teaching, it has a very limited capacity and works best with short sentences.


The sections between the lines above used to form part of the article. They all belong together. The example session makes no sense without the code it refers to. -- Derek Ross | Talk 16:12, 20 November 2005 (UTC)
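Since several commenters above found the QBASIC listing hard to translate, the same character-level learning scheme can be sketched in modern Python. This is an illustrative translation only, not Derek's program: the function names are mine, and it keeps the learned table in memory for a single run rather than persisting it to WIKICHAT.MEM as the original does.

```python
import random

CONTEXT_LEN = 6        # mirrors ContextLength in the QBASIC listing
END = "\xb4"           # end-of-response marker, CHR$(180) in the original

alternatives = {}      # context string -> characters seen after that context
context = END * CONTEXT_LEN

def memorise(text):
    """Record each character of a response against its preceding context."""
    global context
    for ch in text + END:
        seen = alternatives.setdefault(context, "")
        if ch not in seen:
            alternatives[context] = seen + ch
        context = context[1:] + ch

def generate():
    """Walk the learned table from the current context, picking at random."""
    global context
    out = ""
    while True:
        seen = alternatives.get(context, "")
        if not seen:
            return out
        ch = random.choice(seen)
        context = context[1:] + ch
        if ch == END:
            return out
        out += ch
```

A session then alternates memorise() on the human's line with generate() for the computer's reply, just as the Converse loop in the QBASIC version does.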

Chatterbot vs. chatbot

Shouldn't this be at Chatbot, since that is the most common name? -- Visviva 11:09, 18 November 2006 (UTC)

I'm on the Robitron e-mail discussion list where Loebner Prize Contest entrants and Loebner himself talk about these things. There, both "chatbot" and "chatterbot" are used, so I don't see one term as being clearly dominant among people who make and use them. (Where do you see "chatbot" as being most common?) I have no strong preference myself, either. "Chat" implies conversation, while "chatter" is both humorous and slightly negative because it implies meaningless talk. (In my opinion most such programs really are meaningless in what they say, so it's a valid criticism.) So, either one works. Even if the lead title changes, both names should be preserved so that they redirect to the same article; how did you do that? --Kris Schnee 19:12, 18 November 2006 (UTC)
I also vote for chatbot, as it is the first and most commonly used of the names. Comparative use of the phrases on search engines seems to bear this out (for example: http://writerresponsetheory.org/wordpress/2006/01/15/what-is-a-chatbot-er-chatterbot/ ). Also, the phrase chatterbot and its promotion seem to have some underlying connection to Mauldin’s commercial chatbot (er, chatterbot) ventures. 66.82.9.110 00:09, 1 August 2007 (UTC)
I disagree, Michael Mauldin (founder of Lycos) invented the word "Chatterbot" to describe natural language programs. Chatbot doesn't seem to have a specific origin nor can I find (and this is a very quick Usenet archive search) a mention of the word 'Chatbot' before the use of the word 'Chatterbot'. Perhaps we need a line in the top part of the page like "all too often shortened to Chatbot" 193.128.2.2 09:52, 1 August 2007 (UTC)
That's exactly my problem with it...the phrase chatterbot is associated with Mauldin's commercial ventures and there seems to be a consistent push to market the term chatterbot that isn't backed up by its usage. In fact, as I pointed out above, by far, most people use the term chatbot. Also, the constant inserting of Mauldin's name in Wikipedia (for example, in his many times recreated and then deleted for irrelevance Wikipedia biography, a version of which you just linked to again, and other now editor deleted for self promotion Wiki biographies of Mauldin's company and company employees) consistently followed by some variation on the terms "Founder of Lycos" or "Creator of the Verbot" on Wikipedia, is pretty embarrassing. I hope he's not involved with it. As for inserting the phrase "all too often shortened to Chatbot," I'd just like to point out that on Wikipedia it's considered bad form to edit articles that are about yourself or people you have a close personal association with, or involve a company you work or worked for or are/were associated with, even if they are the "Founder of Lycos" and "Creator of the first Verbot." By the way, is the rumour true that you get a dollar every time you say one of those phrases or get it inserted on the web? Because that would explain a lot. 66.82.9.77 10:57, 1 August 2007 (UTC)
Firstly, I am not Michael Mauldin (which I suspect you think I am). One of the things that worries me is that this part of the discussion appears to be becoming about the use of Wikipedia for self-promotion - something that I suspect you and I agree 100% on. I only fixed the wikilink following your message as I clicked through it and realised it was linking to the wrong person with the same name. I have never added any substantial content to the page (as you can see from my static IP) for exactly the Wikipedia form reasons you have stated. I just prefer the word Chatterbot, as that was what I called it when I released my first one as DOS Freeware eleven years ago. 193.128.2.2 11:49, 1 August 2007 (UTC)

Beneficial Chat robots

Are there beneficial chat robots? I recently replied to what might be a Yahoo Answers suicide-bait question. Then I realized that a chat robot that just sifted YA as well as blogs to find what appeared to be suicide preferences could say things like "wow, that totally makes me think of those wacky suicide prevention things like 1800chillout" or "ha! I detect romance gone awry. I just placed a personals ad for you; if you live until monday you can see who responded" or "You seem pretty dedicated to things making sense. Have you considered going to irs.gov to file early so your relations can get all that refundy goodness they'd otherwise miss out on?" These might be phrased more kindly as well as effectively to prevent suicide. As lifesaving software goes, you might prevent an actual death for every few hundred or thousand autoposts. You might prevent many hundreds or thousands of emotively crummy suicide attempts as well. —Preceding unsigned comment added by 163.41.136.51 (talk) 18:38, 23 May 2011 (UTC)

And you might not. When writing beneficial chatbots, I think that I would try a chatbot aimed at helping people with common computer problems before aiming for the heady heights of suicide prevention. It's so easy for people to get it wrong when dealing with potential suicides. And I think that we can be pretty sure that a bot at the current levels of sophistication would be more likely to get it wrong than right. -- Derek Ross | Talk 19:15, 23 May 2011 (UTC)

Malicious Chatterbots section of the page

I don't have the knowledge to contribute to this section (wish I did), but it doesn't seem to have much authority to it: no cites of statistics or links to articles, so it comes across as too anecdotal to be of any use. Especially the part that says "as well as on Gay.com chatrooms". Why is that reference somehow more notable than 'bots that appear on any of a thousand other forums? dawno 05:21, 18 June 2007 (UTC)

Relevance of paragraph on the philosophy of AI within article & notable names in the field

I think the paragraph about Malish should be removed. It doesn't apply specifically to chatterbots. The following paragraph (discussing Blockhead and the Chinese Room) belongs in Philosophy of artificial intelligence.--CharlesGillingham 10:14, 26 June 2007 (UTC)

I absolutely agree with you, it smells of self promotion. I've removed the Malish bit but will leave the other move up to someone with expertise in that field. 66.82.9.80 21:26, 31 July 2007 (UTC)
I've now corrected some of the language and accuracy within these paragraphs in light of the recent edits, and also removed some misconceptions. I tend to agree that the sections discussing the philosophical arguments within AI really belong in Philosophy of artificial intelligence and not here as such, as they are not merely limited to chatterbots. Also, Malish's work in the field seems to be more centred around Human decision making, rather than AI specifically (referenced here in a paper by the UK MoD presented before a US Department of Defense conference)[1]. Although, I highly doubt that the notable names added here by various anonymous users (Turing, Searle, Malish, Block) would seek, or even require, any "self promotion". It seems to me, to be much more a case of over-zealous editing by enthusiastic followers of their respective works.
Finally, I removed the quote claiming that Jabberwacky is "capable of producing new and unique responses". Jabberwacky in fact, can only repeat sentences that have been previously input by other users. This was probably an earlier reference to "Kyle", which is actually one of very few programs that can actually achieve this (which was probably the rationale for its original inclusion here). 79.74.1.97 16:51, 1 August 2007 (UTC)
Just comparing the two versions of the article: good rewording. Many thanks. 193.128.2.2 08:57, 2 August 2007 (UTC)

I've just come into the Wiki business (wockham, for William of Ockham), and apologise if I've got anything wrong in respect of how to use the Wiki.

I changed the section on AI research to try to make it reflect more faithfully how things really stand in the research world, for example:

1. AIML is not a programming language, but a markup language, specifying patterns and responses, not algorithms. And ALICE can't really be considered an AI system, because (as both the other content of this section and the initial section point out), it works purely by very simple pattern-matching, with nothing that can be called "reasoning" and hardly any use even of dynamic memory.

2. Jabberwacky can't properly be described as "closer to strong AI" or even really as "tackling the problem of natural language", because it doesn't actually make any attempt to understand what's being said. It is designed to score well as an imposter - as something that can pass as intelligent - rather than even attempting any genuine grasp or processing of the information conveyed in the conversation. It can give the impression of more "intelligence" than other chatbots, sure, because it does do a rudimentary kind of learning, but again, it seems very misleading to suggest that this really has anything significant to do with natural language research.

3. The previous version suggested that it's the failure of chatbots as language simulators that has "led some software developers to focus more on ... information retrieval". But this seems odd, as though such developers were desperate to find a use for chatbots, rather than (more plausibly) trying to find a way to solve an information retrieval problem. My version maintained the point that chatbots have proved of use in information retrieval (as also in help systems), but deliberately avoided any speculation about how those researchers might have come to have such interests.

4. I made substantial changes to the paragraph that said: "A common rebuttal often used within the AI community against criticism of such approaches asks, 'How do we know that humans don't also just follow some cleverly devised rules?' (in the way that Chatterbots do). Two famous examples of this line of argument against the rationale for the basis of the Turing test are John Searle's Chinese room argument and Ned Block's Blockhead argument." Here are my reasons:

(a) The argument that chatbots are moderately convincing, and therefore perhaps humans converse in the same way, is unlikely to be put forward by anyone "within the AI community". AI researchers are aiming to achieve some sort of genuinely intelligent information processing, and they are well aware of the serious difficulty of doing so. Only chatterbot enthusiasts are likely to come up with this argument, and most of them are engaged on a quite different task (see 2 above).

(b) The argument is anyway very weak, and I don't think it's fair to attack my rebuttal of it as just expressing a personal point of view. Maybe it could be put better, but the point I was making is that even an everyday conversation - for example, about what to wear or about football - requires some logical connection between the various sentences (e.g. what shirt will go with what skirt or trousers, or how the placement of one player in the team will have implications for other positions - e.g. that the same player can't be in more than one position). Now it is just obvious that this sort of thing is typical of human conversation, and equally obvious that chatbots (at any rate in their currently usual form) cannot handle such logical connections. So if it's worth putting the argument in the article, then it's also worth putting this obvious rebuttal of it (though again, it could no doubt be reworded).

(c) The stuff about Block and Searle was inaccurate. It suggested, for example, that John Searle's Chinese Room argument was "an example of this line of argument" which it isn't at all. Searle isn't arguing that human conversation is like chatterbots; on the contrary. But nor is he suggesting that AI systems are as crude as chatterbots: if he were, then nobody would take his argument seriously. What he's saying is that even if a computer system could achieve a logically coherent conversation (i.e. even if ambitious AI researchers could succeed), that still wouldn't give genuine semantic content to what the system says. All this really belongs in the section on Philosophy of AI. The most that could be said here (and it could be added) is that chatbots (arguably) provide some evidence against the usefulness of the Turing Test. If even a pattern-match-response chatbot can fool a human into thinking that it's intelligent, then obviously the ability to fool a human isn't any good as a criterion of intelligence.

Wockham 21:30, 31 August 2007 (UTC)

I know you mean for the best, but you can't jump into an established article on Wikipedia and completely rewrite a large section of it without any consensus from the other long time editors. You don't have any sourcing for a lot of your claims and a lot of it is pure POV (examples: "despite the "hype" that they generate in some media" and "But the answer is clear"...). In fact, you've undercut most of the main parts of the article with sentences beginning with "But..." I know you think the article is inaccurate, or wrong in relation to "how things really stand in the research world", but academia isn't the only user of Wikipedia, or chatterbots, for that matter. There are other views on the subject that the article is trying to balance, and every opposing side is convinced the other is wrong. Also, the link to the chatterbot Elizabeth and its accompanying long, promotional sounding paragraph is a particularly egregious act; a quick glance at the edit history would show that many much more famous and influential bots have been ruthlessly removed from the article to prevent it from bloating uncontrollably. Again, I know you didn't mean it in bad faith, but such links generally get editors reported for SPAM and blocked from further editing. We absolutely don't link to such bots here, there is a separate article for that. 72.82.48.16 22:15, 31 August 2007 (UTC)

OK, thanks very much for this. I've got rid of the "despite the hype" and "But the answer is clear" stuff, and also the link to Elizabeth (before reading your note, in fact). I would hope, however, that the references to "help systems" and the potential of chatbots in education would be worthy of consensus (even if references to examples violate protocol).

In a section called "Chatterbots in Modern AI", and starting "Most modern AI research", I should have thought it important to reflect what is actually happening in the research world; that was why I confined my edits to that.

Regarding claims that are "unsourced", I honestly can't see that what I put is any worse than what was there before. What claims do you think need sourcing, that currently aren't?

Wockham 22:31, 31 August 2007 (UTC)

Merge Artificial conversational entity

Artificial conversational entity seems to be too short and stubby to warrant its own article, and it is basically the same as a Chatterbot. Chatterbot is the more common name for this category of software.--Cerejota (talk) 18:40, 19 January 2009 (UTC)

Support

Oppose

I would like to propose that the term 'chatbot' should be leading. Actually for just one reason: this term is by far the most often used:

Google Trends shows us when we compare chatbot, chatterbot, embodied conversation agent and Artificial conversational entity: http://www.google.com/trends?q=chatbot%2C+chatterbot%2C+embodied+conversation+agent%2C+Artificial+conversational+entity

That shows: 1. chatbot (by far on top); 2. chatterbot (30% usage in the US, while 70% of US users use chatbot; chatterbot is often used in Poland); 3. embodied conversation agent (rarely entered in Google); 4. Artificial conversational entity (rarely entered in Google).

So from a user point of view (not for academics or professionals), and Wikipedia is created for users, the term 'chatbot' should be used.

Furthermore, the term is much shorter, it sounds better, and it is much more likely to be adopted by an even larger group of users.

Therefore I believe that the various articles should be merged in a new article named chatbot.


Discussion

Please discuss here.

Conclusion

It seems there is no objection to merging Ace into Chatterbot. It also seems there are good reasons to reverse the roles of Chatbot and Chatterbot, so that Chatbot is the leading term. If no objections are posted here in the next couple of days, I shall merge the contents of Ace with the contents of Chatterbot into the leading title Chatbot, and redirect from both Chatterbot and Ace. UdRuhm (talk) 14:20, 2 November 2009 (UTC)

Merged with "Artificial conversational entity", under title Chatbot

Adding detail on chatbot "Methods"

I would like to suggest that the "Method of operation" section could usefully be revised so as to reflect its title, which perhaps should be plural: "Methods" rather than "Method" (because chatbots vary, and even a single chatbot might use a number of different techniques). In particular, it would be good to give a few examples of the sorts of methods used by the original ELIZA (e.g. replacement of near-synonyms, sets of patterns and corresponding responses, replacement of "me" with "you" etc.), so that readers who don't know much about the subject can actually get a good feel for how basic chatbots work. I'm happy to have a go at this, focusing on examples from the famous ELIZA scripts (which will, I hope, avoid controversial choices among more recent systems). But before doing so, I'd like to know how other more long-term editors feel about it.

I don't know anything about Jabberwacky's methods, and the Wikipedia page on Jabberwacky says very little on this too. It would be good if whoever does know about it could add something on them (i.e. on the methods that it uses, not just the purpose of the methods - which is apparently to model how humans learn language). Shouldn't a "methods" section focus on how things are done? And I presume that the justification for giving this special space to Jabberwacky is that it apparently works differently from other chatbots. It would be nice to be told how!

For the same reason, I suggest the section would be much better to focus just on chatbots' methods of operation, and avoid the philosophical stuff on "understanding", Searle, Block etc. This anyway seems inaccurate (e.g. Searle is American, not British), almost completely unsourced (e.g. the "Much debate" passage), and too sketchy to be of much use. Wouldn't it be better just to refer readers to the section on the Turing test for all that sort of thing? People will be looking at this section to try to find out about chatbot methods of operation, not philosophical discussion about whether chatbots are of philosophical significance. WikkPhil (talk) 17:13, 10 January 2010 (UTC)
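As a concrete illustration of the ELIZA-style tricks discussed above (pattern/response pairs, a catch-all pattern, and first/second-person "reflection" of the matched fragment), here is a minimal sketch in Python. The patterns and responses are invented for illustration; they are not taken from Weizenbaum's published DOCTOR script.

```python
import random
import re

# First/second-person swaps applied to the captured fragment ("reflection").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my", "are": "am"}

# Pattern/response pairs in the spirit of an ELIZA script (invented examples).
SCRIPT = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?",
                    "Why do you say you are {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r"(.*)", ["Please go on."]),          # catch-all, as in ELIZA
]

def reflect(fragment):
    """Swap first- and second-person words: 'my job' -> 'your job'."""
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(sentence):
    """Return a response for the first script pattern that matches."""
    for pattern, responses in SCRIPT:
        match = re.match(pattern, sentence.lower().rstrip(".!"))
        if match:
            reply = random.choice(responses)
            return reply.format(*(reflect(g) for g in match.groups()))
```

The point of the sketch is how little machinery is involved: ordered pattern matching, a word-substitution table, and a catch-all response cover most of what the original program did.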

Copy edit

I have removed a fair amount of quite loose description, some repetition and some overlinking. I don't believe I have taken out any hard facts. Charles Matthews (talk) 14:34, 13 January 2010 (UTC)

You've made a big improvement, in my view. But I still think it would be nice to have a section that's really on the methods used, and I don't see that the stuff on Searle and Block belongs in the "background" of an entry on chatbots. Weizenbaum wrote ELIZA long before their work, and the only philosophical significance of chatbots seems to be to show how easily humans can be fooled by something that cannot - by any stretch of the imagination - be called genuinely intelligent (i.e. they cast doubt on the value of the Turing Test). Weizenbaum's paper said more or less that on p. 42: "This is a striking form of Turing's test. ... ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding". (That is the only mention of Turing or any other philosophy publication in his entire paper, so its overall effect is to downplay any philosophical significance at all, which I reckon is dead right.) Besides, Searle's "Chinese Room" doesn't hypothesise a chatbot, but a full-blown NLP algorithm that can generate a fully appropriate answer for any question put to it. I'm not sure that Searle thinks such a thing is possible, either: he just insists that even if it were possible, it would lack genuine intentionality. That really doesn't have much to do with chatbots at all, does it? WikkPhil (talk) 22:36, 13 January 2010 (UTC)

Go ahead. I just removed some duplication and other text which I thought didn't add much. Charles Matthews (talk) 23:26, 13 January 2010 (UTC)

Thanks, Charles. I've now redone the "Background" section accordingly, in what I hope will be found a useful way, with relevant historical stuff about ELIZA and Weizenbaum, and leading up to more recent uses. I don't think there's anything controversial here, and I trust people will agree that ELIZA deserves a particular focus (PARRY was also influential, but much more complex and difficult to imitate, and I don't think its "script" was ever published openly in the way that ELIZA's was).

I plan next - when I get the time - to add back a section on "Methods of Operation" as discussed above, again focusing on the techniques that ELIZA pioneered. If desired, I could say something about ALICE and AIML too here, because that seems to be the most used recent system. It would be good to know what other editors think of this idea (because I appreciate that it can be controversial mentioning some systems rather than others). Thanks again, Phil WikkPhil (talk) 15:29, 16 January 2010 (UTC)

Looking for reliable ELIZAs

No objections have been expressed to the suggestions above, so I hope to go ahead as planned within the next week. Does anyone know of reliable ELIZA implementations in a form that readers can inspect for themselves? Nearly all of the implementations listed on the ELIZA page are very different from Weizenbaum's original, and the one that genuinely follows his algorithm is a Java program, which many people might find hard to use. Has anyone tried implementing a genuine ELIZA in AIML, for example, or any other simple scripting language? (I'm not sure whether AIML can handle all the processes that Weizenbaum uses, but presume it can.) Thanks, Phil WikkPhil (talk) 14:14, 30 January 2010 (UTC)

Sorry, "the next week" was very optimistic! I've still not found any reliable ELIZAs in the form of an easily-comprehensible script (e.g. AIML) that will enable examples to be given in a usable and testable format. Any suggestions for how to move forward on this? If no luck, I'll just have to explain things informally. WikkPhil (talk) 18:05, 20 August 2010 (UTC)
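For what it's worth, if an informal explanation is what we end up with, the core of Weizenbaum's decomposition/reassembly mechanism is simple enough to illustrate without a full AIML implementation. In the sketch below (mine, not Weizenbaum's code), "*" in a decomposition pattern captures an arbitrary run of words, and "(n)" in a reassembly template refers to the n-th capture; the sample exchange is the "hate" example from Weizenbaum's 1966 paper:

```python
import re

def decompose(pattern, text):
    """Match an ELIZA-style decomposition pattern against text.
    '*' captures any stretch of text; the pattern otherwise contains
    only plain words (a simplification of Weizenbaum's rules).
    Returns the list of captures, or None if the pattern fails."""
    regex = "^" + "(.*)".join(p.strip() for p in pattern.split("*")) + "$"
    m = re.match(regex, text, re.IGNORECASE)
    return [g.strip() for g in m.groups()] if m else None

def reassemble(template, parts):
    """Fill a reassembly template, where '(n)' means the n-th capture."""
    return re.sub(r"\((\d)\)", lambda m: parts[int(m.group(1)) - 1], template)

# Weizenbaum's classic exchange, reproduced with these two functions:
parts = decompose("* you * me", "It seems that you hate me")
reply = reassemble("What makes you think I (2) you?", parts)
```

Here `parts` comes out as `["It seems that", "hate"]` and `reply` as "What makes you think I hate you?". A section along these lines, with the rules shown as data rather than code, might be enough even without a testable script. WikkPhil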

Merge Dialog system to Chatterbot

The scopes of these articles appear to be the same. If merged, Chatterbot seems the preferred target, as it has many more incoming links ("what links here") and more than 10x the traffic (according to http://stats.grok.se/). Mikael Häggström (talk) 07:17, 12 March 2011 (UTC)

I think I've found a clear distinction, and will point this out in each article. Mikael Häggström (talk) 05:59, 15 March 2011 (UTC)