Talk:Marvin Minsky

Possible Misquote

I checked my copy of 2001: A Space Odyssey, and the quote says 1980's instead of 1960's. Could anyone confirm, and if necessary correct this? —Preceding unsigned comment added by 84.197.0.162 (talk) 19:29, 21 July 2009 (UTC)

Marvin the Robot

Was this character from The Hitchhiker's Guide to the Galaxy named after Mr. Minsky? If so, an interesting reference.

-- Interesting if true, but not. See Marvin the Paranoid Android -- Paulscrawl (talk) 05:36, 25 November 2010 (UTC)

His Picture

The page used to contain a picture of him. Why was it removed?

Yeah, that's interesting... And what's more, I can't understand it... I uploaded http://en.wikipedia.org/wiki/Image:Marvin_Minsky.png and added the code to the top of the page, but the image didn't appear! Strange... --Yuriy Lapitskiy 17:51, 2 September 2006 (UTC)
OK, reuploaded it to jpg... --Yuriy Lapitskiy 18:02, 2 September 2006 (UTC)

Meaning of the Koan

The Koan really belongs in the trivia section.

Could somebody explain what this koan is about? It isn't very understandable to a random reader. --Taw

A randomly wired neural net will still have some preconceptions of how to play the game -- you just won't know what they are. In the same sense, entering a room with closed eyes will not make the room (the preconceptions of how to play the game) go away; you just won't know how the room looks -- Ulfalizer

I have no actual knowledge. But it seems by analogy that the point is that just as closing your eyes does not make a room empty, randomly wiring a neural net does not cause it to lack preconceptions of how to play. Presumably, it has random preconceptions, but preconceptions nonetheless. There may be some deeper lesson here in that we cannot build an artificial intelligence that is "pure" in the sense of "not depending on hardware or software". --Jimbo Wales

The whole point of a koan is not that it has an answer, but that it clarifies the problem. It's not supposed to have an explanation, really. It is just supposed to get you asking the right questions. --Dmerrill


I think it may have to do with the role that our preconceptions play in our own understanding of the universe. Look around a room with your eyes open, see what is in it, then close your eyes. Is everything still there? Almost everyone would say yes, because that's what their experience has led them to believe. But, deprived of that experience, we would not know enough to say yes.

As for the neural net part, it could simply be there for the sake of the koan or it could be that Marvin was saying that a randomly wired neural net would not only have no preconceptions about Tic-tac-toe but also no experience with logic itself, in which case it would have to be taught logic first, which would either (1) defeat the purpose of the experiment or (2) turn one problem (teaching the net to play tic-tac-toe) into two (teaching the net logic and tic-tac-toe). Of course, I know very little about neural nets and could be wrong on this part. --J. Jensen

Another possibility is that the koan is meant to draw attention to the assumptions that are inherent in the design and implementation of the network, not the wiring. --Spazzm 21:05, 2005 Feb 26 (UTC)
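The point several commenters make above — that random wiring doesn't remove preconceptions, it only hides them from the builder — can be sketched in a few lines. This is my own illustration, not anything from Minsky; the move-scoring scheme is invented purely for the example:

```python
import random

# A "randomly wired" evaluator for tic-tac-toe moves: each of the 9
# squares gets a random weight fixed at construction ("wiring") time.
# The builder did not choose these preferences, but the machine has
# them nonetheless -- we just don't know what they are until we look.
def random_evaluator(seed):
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in range(9)]

    def preferred_move(free_squares):
        # Deterministically picks the free square with the highest weight.
        return max(free_squares, key=lambda sq: weights[sq])

    return preferred_move

evaluator = random_evaluator(seed=42)
# Same wiring, same position -> same preferred move, every time.
# Closing your eyes (randomizing) didn't empty the room.
move = evaluator(range(9))
```

The analogy is loose, of course: real neural nets learn, and this one doesn't. But it shows the sense in which "random" is not the same as "preconception-free".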

First neural networks?

I'm modifying the sentence about Minsky building the first neural network, since it is almost certainly not true, according to this site: http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol1/cs11/article1.html "First Attempts: There were some initial simulations using formal logic. McCulloch and Pitts (1943) developed models of neural networks based on their understanding of neurology. These models made several assumptions about how neurons worked. Their networks were based on simple neurons which were considered to be binary devices with fixed thresholds. The results of their model were simple logic functions such as "a or b" and "a and b". Another attempt was by using computer simulations. Two groups (Farley and Clark, 1954; Rochester, Holland, Haibit and Duda, 1956)."

SNARC was the first randomly wired neural network learning machine, not the first neural network - apparently. Please correct me if I'm wrong.

This seems to be wrong. Minsky built an actual neural network machine (made of vacuum tubes) in 1951. PushSingh

Also, I removed from the bibliography section the comments on neural networks and the suggestion that "Network theorists should read it again [...]" as this seems a little on the opinionated side.

Removed "Developed the modern theory of computational geometry and established fundamental limitations of loop-free connectionist learning machines." from the description of "Perceptrons".

While the book does discuss the subject of computational geometry, it does not "develop the modern theory" thereof. The claimed 'fundamental limitations' have since been proven incorrect; see the XOR problem.

XOR problem

Added a mention of Minsky and Papert's incorrect assumption on the XOR problem. --Spazzm 21:00, 2005 Feb 26 (UTC)

This is odd. Minsky and Papert were correct that Perceptrons cannot represent XOR, although of course multilayer nets can. I deleted the contribution that suggested different. --PushSingh
Minsky's book killed any hope of government funding for research into artificial neural networks. He was right about what a perceptron could not do, but wrong about what a network of two or more layers of perceptrons could do - i.e. model just about any continuous function. Minsky biased his views in the book in order to acquire continued military funding for MIT for research into knowledge-based systems. The article must incorporate this tragedy more formally. --Amit 06:49, 2 March 2006 (UTC)
Preferably without the POV. Cmouse 16:10, 2 March 2006 (UTC)
Please cite references for these assertions, so that we can document them properly. --Fjarlq 20:55, 2 March 2006 (UTC)
The stated assertions are common knowledge, and I don't see any POV bias in them. Reviewing any lengthy archived discussion of Marvin Minsky, such as on Slashdot (or book reviews for Perceptrons), should yield pointers to several references. One reference in particular that comes to mind is the book The Brain Makers by HP Newquist. --Amit 04:53, 3 March 2006 (UTC)
I am deeply sorry, but the "common knowledge" is plain wrong in this respect. Please, let us take great care with this subject. One of the reasons Wikipedia is criticized is that it may carry "popular" conceptions that are not necessarily true. Let's use this tool to finally put an end to the enduring misconceptions about the history of neural networks and of Minsky's work. Also, it's funny, if not tragic, that someone might use Slashdot as a source to settle a Wikipedia dispute. That is not at all the righteous path.
The book Perceptrons suffers from the "Necronomicon syndrome": it is a book that is often mentioned but was actually read by only a few people. I do not wish to speculate about why so many researchers continue to spread the misconceptions regarding this book, but the fact is that one learns the tale from another, cites them, and the myth goes on, when all we have to do is simply read the original book. The book should stand by itself as a reference. There is no need at all to quote other books as a source of explanation for a book that can be read directly by anyone. And it is not even an old, strange, or inaccessible book. It is relatively recent, and had an edition published in the early 1980s, with a new preface in which Minsky seems confused by the strange things people had started to say about it.
The XOR problem concerns only what a single neuron can do, and doesn't tell us much about Perceptron networks. It tells us about as much as the fact that a single AND or OR gate cannot implement the XOR function by itself, while both are routinely used in standard electronic circuits. The fact that networks of Perceptrons are needed to perform minimally interesting computation was known not only to Minsky, but much earlier to Rosenblatt, to whom the book was dedicated (if I am not much mistaken). This is the first misconception that must be cleared up. Some people talk as if the book came to destroy Rosenblatt's earlier, historic research. Minsky and Papert's feelings toward him were pretty much the opposite; they speak wonders of his work.
The fact that networks of neurons are able to implement any kind of Boolean function was known to McCulloch and Pitts themselves. In fact, in the 1943 article I believe they even mention that we can combine these neurons with memory units and build Turing-machine-like devices. Minsky and Papert also knew that; everyone knew that. Some people say this only started to be proved in the 1980s... If you don't believe me regarding these books and articles, the originals, you can also look at the 1964 book by Michael A. Arbib, which discusses Boolean functions and state machines built from Perceptrons in a very explicit way. Even if anyone insists on disputing the contents of the works of Minsky, McCulloch, and Rosenblatt, the Arbib book is enough to refute the claim that the (Boolean) generality of neural networks was only discovered in the 1980s. One may (must) also read the exceptional book by Minsky called Computation: Finite and Infinite Machines. It is from 1967, so it even predates Perceptrons, and it is a book on basic computation theory, covering finite state machines and Turing machines. The important thing is that Minsky uses Perceptrons to describe the construction of the machines throughout the book!
As for the findings in the Perceptrons book, what it contains are never-disproved theorems regarding the limits of Perceptron networks, and how the sizes needed grow with problem complexity. It is general, and goes much further than the XOR case. In fact, the proof for the XOR function is easy, and is not the subject of a book, perhaps half a chapter... The real proof is much larger, talks about feed-forward Perceptron networks in general, and states limits on the functions they can implement. The best example of the kind of inherent limitation they proved such networks to have is the drawing on the cover of the book.
That is what amazes me the most. Everybody talks about the book and about the trivial XOR-function/single-perceptron problem. But the most important and big problem is the one in the drawing on the cover. People not only have not read the book, they don't even know its cover!!
I will explain the problem and the cover drawing now. But I urge you to instead go to your library and read the book, because I will spoil your experience... It is as pleasurable as watching a movie, or reading any other good book! So, "spoiler warning". Here it goes. The problem is that of identifying whether a colored area in a drawing is a single area or two separate areas, i.e. determining the connectedness of a drawing. The proof shows that a network of limited size can only solve the problem for drawings of limited size. This limitation exists to this day, will never be overcome, and is in fact the routine problem posed to an engineer who needs to build an MLP to solve a certain problem: determining the number of neurons needed... Because the best we can do is build a network as large as the requirements of the problem. You can't build a fixed-size network to solve every instance. (And this is OK!... It's just like selecting the order of a linear filter to do the desired job.)
Another common misconception is that people had no idea of the so-called backpropagation algorithm in the 1960s, but if you think about it, it's just a fairly standard optimization algorithm. And they did such things back then!...
I would be glad to have a deeper discussion with anyone interested... I shall start to edit this article this weekend unless anyone objects. I only hope to do a job even minimally as good as Push might have done. It is a great shame for Wikipedia that he and Minsky himself dedicated some of their time to this page, yet to this day it still contains inaccuracies. I hope we can all do justice to Wikipedia and to Minsky. -- NIC1138 (talk) 05:54, 21 March 2008 (UTC)

Edits made by the subject of the article

Will we accept the correction of quotes on 1 January 2006 by someone claiming to be Marvin Minsky? (This is the dilemma of anonymous editing of a wiki.) Were these quotes published contemporaneously elsewhere? GUllman 22:08, 6 January 2006 (UTC)

Considering that, according to IPLocation.com, the editor's IP is located at "MASSACHUSETTS BROOKLINE RCN CORPORATION", and that minsky@media.mit.edu also posted to the newsgroup comp.ai.philosophy from the same IP address, I think we can incorporate these quotes as his own recollections of the incidents. GUllman 01:33, 7 January 2006 (UTC)

Yes -- I talk to Marvin Minsky regularly (he was my thesis advisor) and those comments are certainly his. (I had recently suggested he make some fixes to his Wikipedia entry.) Pushsingh 01:56, 8 January 2006 (UTC)

While I blanch from my own temerity, I'd still like to ask you, Pushsingh, to contact Minsky and ask him to register onwiki and comment on talk. We must rewrite his edits, merging them into the article yet, perhaps, continuing to cite them as direct quotes. It's all a mess and I'd like the warmth and comfort of User:Marvin Minsky hovering nearby. John Reid 02:23, 11 April 2006 (UTC)
Sadly, Push passed away a few weeks ago. I will try to answer your comments as soon as I can - however I don't seem Marvin nearly as often. Cmouse 04:29, 11 April 2006 (UTC)
Anyone can email Minsky. You don't need a .mit.edu domain to get him to answer. Personally, I think no one should be above wikipedia convention, though anyone can help create conventions. --Jaibe 08:38, 12 September 2006 (UTC)

Technically, we can't accept previously unpublished accounts or explanations, even if we have ironclad proof that it's Minsky adding them personally. It needs to have appeared in some other published source first. Stan 23:26, 23 June 2006 (UTC)

Well, we can treat his edits as pointers to parts of the article that may be false (or that he wants expunged from the article for vanity reasons, perhaps). I took the liberty of removing the Jurassic Park edit because of this - since having a conversation on a beach about something that didn't appear in the film is perhaps too trivial an occurrence --82.35.240.214 21:54, 12 August 2006 (UTC)

Cleanup needed

I added the cleanup tag. The biography section needs to be structured better, it just kind of goes from one topic to another without any natural progression. Some of it could probably be moved to the trivia section; maybe the parts about his contributions to Jurassic Park and 2001 could be moved to their own section. Also, as discussed above the edits on the page which appear to be from Minsky himself seem to be legit... these are very informative comments and need to be integrated into the article.

Letslip 01:26, 4 April 2006 (UTC)

Marvin Monroe

Is the Simpsons psychologist based on him? Family Guy Guy 05:40, 2 July 2006 (UTC)

Year of confocal microscopy patent added

The page on Confocal microscopy gives 1957 as the year of Minsky's confocal patent. I believe that date to be 1961, for three reasons, but boy, it would be nice if Dr. Minsky read this and confirmed I was correct. So I added the year here, to be consistent with his other patent, which has the year attached to it. Now the three reasons: 1) I could find no support for 1957 in the Confocal microscopy article. 2) I found support for the 1961 date in an August 1994 Scientific American article by Dr. Jeffrey Lichtman (et alia?) entitled "Confocal Microscopy." 3) Based on the Lichtman article, I searched the patent records, and while very limited data for old patents is available, a search for the last name Minsky reveals no patents at all in 1957, but one patent in 1961. December 19th specifically. 3,013,467; US Classification 356/432. A further search shows:

US Class 432 FOR LIGHT TRANSMISSION OR ABSORPTION:
This subclass is indented under the class definition. Subject matter wherein visible radiation is passed directly or with internal reflection through solid, liquid, or gaseous substances or any mixture thereof including coated solids, and detected visually or photoelectrically after it has passed through the substance for the purpose of determining the intensity, the change of intensity, the extinction of the radiation, or the outline of the radiation source or image.
(1) Note. As between this Class 356 and Class 250, the claimed combination of a light source, a support for a substance to be tested by a transmission test, and a photosensitive detector with or without indicating structure is classified in this Class 356 providing there is the disclosure of an indicator responsive to the detector not provided for elsewhere. If an indicator of the quantitative type such as a meter is present in the claimed combination, classification is in Class 356 regardless of the claiming of the support.
(2) Note. The patents claiming a light source with the transmission of this light through a substance and detected (usually quantitatively) are in subclasses 432+. If no light source is claimed and only the light intensity of a specific location or locations is involved and not the amount of light attenuated in the passage of the light through a medium or a substance, or only the intensity or a light source is desired without regard to the attenuation of the light in its passage through a medium, see subclasses 213+ on photometry.

Based on the above, I am very confident the date is 1961, not 1957. Hence I added it here, and will correct it in Confocal microscopy. --SafeLibraries 03:31, 6 August 2006 (UTC)

With regards to the above, 1961 was the date that the US patent #3,013,467 was issued, however it was filed in 1957 and hence its term started from that year. --Colin E. 09:37, 25 August 2006 (UTC)
Great to see others involved. In this case, I will change the date back to 1961. Patent terms nowadays (since June 8, 1995) run from the date of issue but last 20 years from the date of filing. However, for this older patent, the term ran from the date of issue for 17 years. In either case, patent terms run from the date of issue, not the date of filing. And in our case, that would be 1961. To prove this to you, look here: http://bpmlegal.com/howtoterm.html. Note also: "In any event, the 20 year term laws take effect June 8 (barring any change in legislation between now and then). The patent term will begin on the issue date and end 20 years from the date of filing." Thanks again for your input. --SafeLibraries 11:49, 25 August 2006 (UTC)
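The two regimes described in this thread amount to simple arithmetic. A sketch of that calculation (the function and its name are mine, encoding only the two rules as summarized in the comments above, ignoring transitional cases and extensions):

```python
def patent_expiry(year_filed, year_issued, old_rule=True):
    # Pre-June-8-1995 rule: the term ran 17 years from the date of issue.
    if old_rule:
        return year_issued + 17
    # Post-1995 rule: the term lasts 20 years from the date of filing.
    return year_filed + 20

# Minsky's confocal patent: filed 1957, issued December 19, 1961.
# Under the old rule its term ran from issue: 1961 + 17 = 1978.
minsky_expiry = patent_expiry(1957, 1961)
```

Under either regime the filing-vs-issue distinction matters, which is exactly the 1957/1961 confusion discussed above.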

I put an excerpt of the SciAm Lichtman article on the Confocal microscopy Talk page. By the way, if Dr. Minsky is reading this, may I have your autograph with a short message to children of the future? --SafeLibraries 03:49, 6 August 2006 (UTC)

"old man minsky"

I have no first-hand knowledge, but I found several non-wikipedia references to him as such, most notably this 2006 Discover interview. Tvoz | talk 07:35, 18 March 2007 (UTC)

http://donbot.com/DesignBot/Bibliography/Bib02MinskyDiscoverInterview.html

Personal communication

Please don't add information or remove challenge tags based on uncited "personal communication". Such edits will be reverted on sight. Superm401 - Talk 07:33, 17 April 2008 (UTC)

I'm sorry about that one, but where can I find a list of what information definitely needs a reference? For example, do people's names and dates of birth need to be referenced? Why does religion? If verification is so important, the field should perhaps remain blank instead of "pending". I imagine this is debated somewhere already; can someone point me to a link? -- NIC1138 (talk) 07:20, 3 June 2008 (UTC)

Participation in the Victim of the Brain film

Does anybody have reliable sources on this? I've heard people say that it might be true, and there is indeed a character introduced as Marvin Minsky in a fictional part of the film (here at 1:45). But Marvin Minsky is not in the official credits, which is hard to explain. The character that is supposed to be him seems to have an English accent. Julien.dutant (talk) 14:33, 11 May 2008 (UTC)

Good reference to mine for more details

In Honor of Marvin Minsky's Contributions on his 80th Birthday, AI Magazine Vol. 28, No. 4, Winter 2007. Model summary sections on Minsky's Research and Contributions and Minsky's Personal History, in addition to anecdotes from John McCarthy and several former students, including Danny Hillis, Patrick Winston, et al. Now free access; added as a named reference for easy re-use. --Paulscrawl (talk) 05:11, 25 November 2010 (UTC)