Wikipedia:Reference desk/Archives/Science/2010 September 18
Science desk
< September 17 | << Aug | September | Oct >> | September 19 >
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
September 18
Symbolic suicide to end a delusional state
In many pieces of fiction, Vanilla Sky and Life on Mars (TV series) being good examples, a character is told that he is living in a dream world, and that to leave that world he must take a "definitive step" and commit suicide in it. Sometimes, within the fiction of the world, this is a deception by outside forces trying to convince him that his surroundings are not real. But sometimes, in the fiction, it is not.
Is there any factual basis to this idea? Any real psychological delusional condition in which a person is living in a fantasy world, but can snap out of it and return to reality if they end their life there? Or is this notion entirely made up? gnfnrf (talk) 02:55, 18 September 2010 (UTC)
- Broadly speaking, the sort of condition where one has fantastic delusions of this type is called schizophrenia. There may be some delusions in which people believe the scenario you describe; however, if these people kill themselves they are still dead. A mental illness like schizophrenia isn't cured because the sufferer commits a drastic act during a delusional state. --Jayron32 03:02, 18 September 2010 (UTC)
- To be clear, in the scenario presented the subject is so delusional that he isn't actually taking any of the actions he imagines he is taking. He is really in a coma or other non-responsive state, and the world is in his mind. I'm asking whether committing suicide in that fantasy (without any real action) can ever break it and return a person to reality. Obviously committing suicide in the real world, even if surrounded by delusions, will result in being dead, not cured. gnfnrf (talk) 03:46, 18 September 2010 (UTC)
- According to coma, people in a coma do not undergo standard sleep cycles, so it is unlikely they dream. I have little personal experience with anyone in a coma, but it is my understanding that they have little sense of time passing and no memory of dreams or delusions; if and when they wake up, the time spent in the coma is just "lost". --Jayron32 04:03, 18 September 2010 (UTC)
- The answer is simple: no. There is no real condition that is anything like that. Looie496 (talk) 05:44, 18 September 2010 (UTC)
- It's a literary/dramatic device. See Dave (Lost) for a different ending. ;) WikiDao ☯ (talk) 05:49, 18 September 2010 (UTC)
- Well, an epiphany (feeling), particularly an epiphany about your own situation, might trigger some kind of tripped-out mental imagery such as imagining your own suicide, and emerging from a delusion is presumably a kind of epiphany. Delusional disorder says "Reports have shown successful use of insight-oriented therapy", sometimes, but this gradual process doesn't really square with your "take a definitive step", which sounds ritualistic and more like a part of the delusion than a way of beginning to make sense. It might be a good symbol for the falling of the scales from the eyes, though, for narrative purposes. 81.131.11.153 (talk) 10:31, 18 September 2010 (UTC)
- My own OR here- I often have "bad dreams" which I realize to be dreams, but I'm not able to wake myself up. In such a case, the way I've learned to wake myself up is to kill myself. Jump off a cliff, throw myself into the jaws of the monster, etc. This fits perfectly the description of a "delusional condition in which a person is living in a fantasy world, but can snap out of it and return to reality if they end their life there". (Only difference I guess is that I know for sure the dream is a delusion.) Staecker (talk) 12:29, 18 September 2010 (UTC)
- That doesn't seem a very safe thing to have trained yourself to believe. Much safer to decide that pinching yourself or saying some set phrase will always wake you up. 86.164.78.91 (talk) 16:06, 18 September 2010 (UTC)
- Why not? I have never thought I was dreaming when I wasn't. Never even once considered throwing myself into the jaws of the monster in real life. Staecker (talk) 16:14, 18 September 2010 (UTC)
- I've had enough dreams where I 'wake up' multiple times, each time seeming more convincingly real than the last, that I no longer assume I'll always be able to spot what is and isn't a dream, so I tend to consider it bad practice to establish a rule like that. You might never jump into the jaws of a real monster, but cliffs and jumping off them are real things, and mindstates exist in which stressed people find the real world taking on a dreamlike quality. But, everyone assesses their own risks: I'd always lead myself to believe something safer. 86.164.78.91 (talk) 17:12, 18 September 2010 (UTC)
- Do you habitually sleep on the edge of cliffs? :) Matt Deres (talk) 14:15, 20 September 2010 (UTC)
- I've definitely had dreams that I couldn't wake up from, even when I was pretty sure they were dreams, and I've had various waking states that I was unable to distinguish from dreaming. --Mr.98 (talk) 18:49, 18 September 2010 (UTC)
- Killing yourself to prove a dream sounds like a bad plot device, and it's hard to picture any way that it could be unique psychologically. After all, there's nothing to prevent you from continuing with the dream, flying up to the pearly gates, or visiting one of Hell's many fine bordellos. (Your pick ;)) I should say, though, that I do think situations requiring a certain kind of rational analysis tend to cause waking, and it is possible that if deciding what comes after death is such a thing, then it might do so. But, for example, most frustratingly, I've found that simply looking at something pretty in a dream and trying to make a note of the exact design and measurements so that I could try to draw it later is the sort of thing guaranteed to cause wakefulness. I don't think the exact limits are absolutely hardwired - for example, sometimes it is marginally possible to read in a dream, and other times the effort fails and risks waking. Wnt (talk) 04:47, 19 September 2010 (UTC)
- Have you given lucid dreaming a try? WikiDao ☯ (talk) 04:59, 19 September 2010 (UTC)
- I haven't pursued it seriously. I'm almost always aware at some level that a dream is a dream, and to be honest, the technical quality usually isn't very good - until a mental reorganization around age 17, I actually dreamed in black and white most of the time; reds and yellows are still not very strong, and tactile or scent stimuli are extremely rare. So I should admit that I don't directly share the OP's perspective. Sometimes I take on the perspective of more than one character in a dream. The plots are sometimes rather elaborate, and most distinctive is that the dreams often have a huge, very strange background, all of which is taken for granted (e.g. that I'm a cyclops living underground in a post-nuclear society, or a pampered child in a future society who can call out the names of my pets to transform them from plastic sculptures of words into snuggly animals). So I suppose I don't quite know what to do with lucid dreaming. Deviating from the script would only seem to degrade the experience. Remembering them better might be nice, but I think the whole point of the dream is to sort memory without such crude interference. Wnt (talk) 05:42, 19 September 2010 (UTC)
- I think the OP's question boils down to: is there any factual basis to the existence of an afterlife? And the weight of evidence seems mostly against at this point. WikiDao ☯ (talk) 04:59, 19 September 2010 (UTC)
- what evidence would that be? --Ludwigs2 05:27, 19 September 2010 (UTC)
- For an afterlife: religious teaching, wishful thinking, and the like. Not very hefty, concrete data. Against an afterlife: every time anyone dies, they seem for all intents and purposes to remain quite dead (these days). The balance of evidence seems clear. I do not mean to disrespect anyone's beliefs, or dispute with them about their belief in an afterlife. But the answer to the OP's question seems to me ultimately to be that there is no quote "factual basis" to support anything further happening to someone when that someone dies. As far as anyone knows, you do not "wake up" from what you delusionally-or-not firmly believe to be your "life" when you "die". Even if you are delusional, and you die: you're still dead. That's all I meant. WikiDao ☯ (talk) 05:42, 19 September 2010 (UTC)
- It is putting the cart before the horse to speak of what you experience in the afterlife, when who can say why you experience things now? Replacing all or part of the brain with a robotic replica would seem to abolish experience, but who can give the technical explanation? Who could be trusted to accurately report the experiment if it were done? Is there any logic to the notion that a person "feels" what one human body experiences, but never "feels" what another experiences, when "a person" logically should be definable as some process involving atoms which may continually exchange positions with one another and have no distinct identity? [I speak of feeling, obviously not of remembering when in one brain what has been experienced in another] (see also Atman (Hinduism))
- But beyond this is a separate question, that the afterlife is "after" life in what dimension of time? In the physical time dimension of the universe moving toward its end — or in a spiritual dimension of time as a Creator tears down one cosmos and builds another more perfected? Because no measurement you take in physical time to determine natural laws can tell you what laws link the progression of spiritual time. Wnt (talk) 05:52, 19 September 2010 (UTC)
- Hence: no evidence. The data is either non-existent or non-attainable at present. WikiDao ☯ (talk) 05:58, 19 September 2010 (UTC)
- "Absence of evidence is not evidence of absence". Rational thought is rightly valued, and yet the mind demands to think in other ways; perhaps this is not a joke or a mistake, but also a meaningful manner of perception. Wnt (talk) 06:19, 19 September 2010 (UTC)
- Just wanted to add to your initial list of movies. Total Recall is another example that contains the situation mentioned by the OP. 10draftsdeep (talk) 14:26, 20 September 2010 (UTC)
- "Absence of evidence is not evidence of absence". Rational thought is rightly valued, and yet the mind demands to think in other ways; perhaps this is not a joke or a mistake, but also a meaningful manner of perception. Wnt (talk) 06:19, 19 September 2010 (UTC)
- Hence: no evidence. The data is either non-existent or non-attainable at present. WikiDao ☯ (talk) 05:58, 19 September 2010 (UTC)
- For an afterlife: religious teaching, wishful thinking, and the like. Not very hefty, concrete data. Against an afterlife: every time anyone dies, they seem for all intents and purposes to remain quite dead (these days). The balance of evidence seems clear. I do not mean to disrespect anyone's beliefs, or dispute with them about their belief in an afterlife. But the answer to the OP's question seems to me ultimately to be that there is no quote "factual basis" to support anything further happening to someone when that someone dies. As far as anyone knows, you do not "wake up" from what you delusionally-or-not firmly believe to be your "life" when you "die". Even if you are delusional, and you die: you're still dead. That's all I meant. WikiDao ☯ (talk) 05:42, 19 September 2010 (UTC)
- what evidence would that be? --Ludwigs2 05:27, 19 September 2010 (UTC)
- Back to the beginning: "Is there any factual basis to this idea? Any real psychological delusional condition in which a person is living in a fantasy world, but can snap out of it and return to reality if they end their life there? Or is this notion entirely made up?"
- Just to be clear: you are asking about a case where someone is so delusional that their entire fantasy world seems completely real to them, right? And there is no way for them to know that it is all just a delusion except by dying and seeing if they then wake up in "reality". Right? This sort of thing gets portrayed in many books and movies. You want to know if it has ever really happened. And I do not find any formal studies to provide a "factual basis" for that having happened before. And I do not imagine it is in common practice in the psycho-therapeutic treatment of delusional people, either. So my best guess is that it is an entirely made up notion. Hope this helps. :) WikiDao ☯ (talk) 15:31, 20 September 2010 (UTC)
- Basically, yes. Thank you all for the interesting discussion, though I'm not sure how we got to "is there an afterlife?" from my original question. gnfnrf (talk) 00:56, 21 September 2010 (UTC)
Physics question
Hi, I'm having some trouble with a problem on my Physics homework. This might be a bit rudimentary considering some of the other questions you get here, but bear with me. The question is "A motorboat travelling on a straight course slows down at a constant deceleration from 70 kph to 35 kph in a distance of 50 m. Find the acceleration." In the textbook we are given an equation, vfinal² = vo² + 2as. Plugging in I get 1225 kph = 4900 kph + 2a(0.05) and from there -3675 kph = 2a(0.05), but that gives me a huge number that obviously can't be right. What did I do wrong? Thanks. —Preceding unsigned comment added by 24.92.78.167 (talk) 03:11, 18 September 2010 (UTC)
- OK, we are not really allowed to solve homework here, but we are allowed to give hints, so I'll give you a hint: in what units do you want to express a? --Dr Dima (talk) 03:27, 18 September 2010 (UTC)
- Why can't it be right? You haven't defined your acceleration units; if we assume hours to be your time unit and kilometers to be your distance unit, then your acceleration would be in units of km/hr². I don't have an intuitive sense of how big that answer should be in those units, so I don't see why the size of the answer would surprise me. --Jayron32 03:28, 18 September 2010 (UTC)
- Here is another hint. When you multiply 35 times 35 you get the square of 35. When you multiply kph by kph you should get ... (But what did you get?) Dolphin (t) 04:27, 18 September 2010 (UTC)
- Something that I found makes problems a lot easier: convert your initial speeds to metres per second and your distance to metres so everything uses SI units. I've run through the calculation with those and it spits out a reasonable answer. Brammers (talk/c) 08:00, 18 September 2010 (UTC)
- "If your question is homework, show that you have attempted an answer first, and we will try to help you past the stuck point. If you don't show an effort, you probably won't get help. The reference desk will not do your homework for you." That is the reference desk guideline for homework questions. The original poster did show that they have attempted an answer first. The only restriction against solving homework questions is where the original poster only posted a question without showing that they are doing any work. --Chemicalinterest (talk) 11:26, 18 September 2010 (UTC)
- That isn't the only restriction. Even if they've tried, we don't solve it for them: we help them solve it themselves. Accordingly, people above have been giving hints to help the question asker to solve this problem themselves: the value of homework lies in solving the problem and doing the work. The guideline above (in full) does a pretty good job of capturing this consensus view. 86.164.78.91 (talk) 15:58, 18 September 2010 (UTC)
Before plugging in numbers, rearrange the textbook equation so that it states the acceleration directly: a = (vfinal² − vo²) / (2s).
Then plug in the numbers, paying attention to the units you use. These conversions can be handy:
1 kilometer = 1000 meters.
1 m/s = 3.6 kph.
Deceleration is the same as acceleration with a negative value. Cuddlyable3 (talk) 17:24, 18 September 2010 (UTC)
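Since this page is an archive and the assignment is long past due, here is the recipe above worked as a minimal Python sketch (convert everything to SI units first, then apply the rearranged equation); the numbers are the ones from the question:

```python
# a = (v_final^2 - v_0^2) / (2 * s), with everything converted to SI units.
KPH_TO_MS = 1000.0 / 3600.0   # 1 kph = 1000 m per 3600 s

v0 = 70.0 * KPH_TO_MS   # initial speed: 70 kph -> about 19.44 m/s
vf = 35.0 * KPH_TO_MS   # final speed:   35 kph -> about 9.72 m/s
s = 50.0                # stopping distance, metres

a = (vf**2 - v0**2) / (2.0 * s)
print(f"a = {a:.2f} m/s^2")   # about -2.84 m/s^2; the minus sign means deceleration
```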
Phonetic equipment
User:Textorus has suggested that Wikipedians who patrol the science desk could have some information about good sound-sampling equipment for spectrogram analysis regarding my question over on the language desk. Thank you--el Aprel (facta-facienda) 04:06, 18 September 2010 (UTC)
- Need to break this down into discrete parts. (1) Are you satisfied that the laptop's sound card is up to recording at the quality that you want? No mic will make a poor card sound good. If not, an external sound card would cheaply get around the problem. I don't know about the differences but others might, so what is the laptop and what is the existing sound card? Something like this, I guess, has the right sort of specs for what you need in a good external USB sound card. [1] You can plug an ordinary mic straight in without the need for getting a USB mic. Ambient noise when recording outside an echo-free and soundproofed studio can be a big problem, so to understand the jargon and principles behind the different methods of filtering, here are two articles on how to mitigate unwanted noise: noise-canceling microphone, Active noise control. Yes, dynamic mics are much better than condenser types. Someone else might be able to expand on this, but off the top of my head I would think that an external audio interface and two microphones would allow for good noise rejection without losing dynamic response or frequencies, though I don't have practical experience of how well the software works. Having one mic very close to the speaker certainly helps a lot in getting a clearer signal.--Aspro (talk) 08:23, 18 September 2010 (UTC)
- You can download Visual Analyzer free and immediately see your laptop perform as a spectrum analyzer. Cuddlyable3 (talk) 12:52, 18 September 2010 (UTC)
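For anyone who prefers scripting the analysis to a GUI tool, the standard scientific-Python stack will also draw a spectrogram in a few lines. A minimal sketch, assuming a WAV recording (the filename is a placeholder):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("speech_sample.wav")  # placeholder filename
if samples.ndim > 1:                  # fold a stereo recording down to mono
    samples = samples.mean(axis=1)

f, t, Sxx = spectrogram(samples, fs=rate, nperseg=1024)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")  # dB scale
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.show()
```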
Harmful to handle frogs, then release them?
What does the last sentence in Gigging#Frog_gigging mean? Does it really harm frogs to catch them, hold them, take some pictures of them, and release them? diff--Chemicalinterest (talk) 12:43, 18 September 2010 (UTC)
- This guide cautions "Don't wear any sunscreen or bug spray on your hands. Not only will it make your hands slippery, it will harm the frogs because the[sic] will absorb the chemicals through their skin." Cuddlyable3 (talk) 13:04, 18 September 2010 (UTC)
- I like to catch frogs and take pictures of them. I do not use any chemicals on my hands. I was wondering when I read the article whether handling them really harmed the frogs, other than the temporary sluggishness they get from repeatedly trying to escape and failing. --Chemicalinterest (talk) 15:23, 18 September 2010 (UTC)
- As long as you handle them carefully (remembering that frogs absorb air and chemicals through their skin and are not the most physically robust creatures ever created), the frogs should be fine. Don't disturb them during breeding periods, obviously, don't hang onto them for more than a couple of hours, and make sure you put them back more or less where you found them, and it should be fine. —Preceding unsigned comment added by Ludwigs2 (talk • contribs)
- OK. I'll make the gigging article less ambiguous. --Chemicalinterest (talk) 20:18, 18 September 2010 (UTC)
Inquire about semen and its impact on the face
This question may be inappropriate for the Reference Desk, and may precipitate non-productive arguments or disputes. Please restrict responses to neutral, factual, sourced statements.
Dear All, Have a nice day,
I have read in several forums, websites & here at wiki that the semen of a man is good for women.
Could it be that a woman may apply freshly emitted semen to her skin for a short time?
And does it also help build up the body, since it contains a quantity of protein and other material, if the woman drinks it?
Thanks for co-operation Ahmed atoon (talk) 12:57, 18 September 2010 (UTC)
- The Wikipedia article about Semen has the information we can give. Cuddlyable3 (talk) 13:48, 18 September 2010 (UTC)
- You may find this article interesting - [2]. Exxolon (talk) 17:18, 19 September 2010 (UTC)
Downloading a brain to a computer (part 1)
You know the idea that eventually we'll be able to download our brains/personalities to computers, to achieve physical immortality? Are there currently any theories as to how you could actually 'capture' the brain to do the transfer? Spoonfulsofsheep (talk) 15:03, 18 September 2010 (UTC)
- Yes, I know the idea. Wikipedia has articles about Digital immortality and Mind uploading. Cuddlyable3 (talk) 16:57, 18 September 2010 (UTC)
- Thanks, the mind uploading link was particularly useful! Spoonfulsofsheep (talk) 14:51, 19 September 2010 (UTC)
Downloading the brain (part 2)
Any theories about what areas of the brain would need to be downloaded? I know this would be very speculative, but are there any papers on what is needed to retain 'you'? Spoonfulsofsheep (talk) 15:11, 18 September 2010 (UTC)
- Downloading creates a copy of the original. The original is still there in the brain; "you" is not retained in the downloaded version, it's another "you" entirely. 82.44.55.25 (talk) 18:14, 18 September 2010 (UTC)
- Well, so says one theory of things. A lot of it depends on what you define as "you". It's heady philosophical territory. --Mr.98 (talk) 18:45, 18 September 2010 (UTC)
- The articles recommended to you in (part 1) are also probably good places to start for this part, too. But see also the recent film Moon for a dramatic treatment of this sort of thing (I mean the film, not the article, unless you want the spoiler first). WikiDao ☯ (talk) 19:27, 18 September 2010 (UTC)
- The film Moon was moderately entertaining, but the (unrelated) novella Rogue Moon was much better. Wnt (talk) 04:28, 19 September 2010 (UTC)
- That's debatable, 82.44: if I make a copy of a digital file on my hard drive, is one the original and one a copy? I would say no. I would say that they're both the same file; it's simply in two places now.
- If I knew for a fact that next week my mind would be copied, and that one of the copies would be destroyed, then right now I'm cool with that. The person I am now will survive. (After the copy has been made, I'm sure that each of me will have some rather strong opinions on which one should live and which should be destroyed. But to me right now, I don't care. The pre-copy me survives either way.) APL (talk) 20:05, 18 September 2010 (UTC)
- The original file's creation and modification dates would be different from the copied file's, yes? They are identical in every way, but the original is still the original, occupying the same clusters on the hard drive it did before the copy was made. The copy is taking up a part of the drive that was empty before the copy was made. They might be identical, but they're not the same file. I'm confused about the second part of your comment. Are you talking about 2 copies being made and then one of the copies being killed, or are you talking about one copy being made and the choice of who is killed being between original you and the copy? If it's the first one, then I agree; I don't care what happens to two copies of me. But if it's just one copy, then you're basically saying next week there's a 50% chance you'll be killed, and a copy will take your life. Maybe I think differently, but I would not be OK with that at all. 82.44.55.25 (talk) 20:24, 18 September 2010 (UTC)
- Certain file copy tools will keep the file creation and modification times. Also, in terms of your cluster point, what happens when you defragment and the files are moved around? Sure, you can track which version goes where, but why is either one the original? In your philosophy the original is lost when you defragment, since the file is read into memory, written to another location, then deleted or overwritten in the original location. But why are you even calling that the original? If I write a file, very often it will be written to memory first before going to disk. But is even the copy in memory the original? What about whatever is going on in the CPU to make the file?
- In other words, I agree with APL here. For the pre-copy self, there's no reason why either version is more me than the other version. I'm not saying I would be happy with either me being killed; I guess I'm self-centred in that way :-P But if one does have to be killed, and both are exact copies, there's no real reason why me, the pre-copy self, should feel more strongly about either version. Of course, either way, whoever is going to murder me should go to jail for the rest of their lives. In the Star Trek world apparently everyone is like APL, hence teleporters.
- Nil Einne (talk) 21:07, 18 September 2010 (UTC)
- Yea, exactly. I'd like to go to the Moon; if the only way for me to get there was to make a copy of me on the Moon and destroy the original here on Earth, I'd be fine with it. My only condition is that the original is destroyed more or less immediately after the copy is made, so that it doesn't have time to think about its fate as the non-surviving me. APL (talk) 22:23, 18 September 2010 (UTC)
- I would not be fine with dying and a copy of me living my life, at all. I also would never use a transporter. But that's just me, I guess 82.44.55.25 (talk) 22:43, 18 September 2010 (UTC)
- Are you fine with a machine that constantly replaces every molecule in your body with new ones? That's what's happening all the time: very few of the molecules you had when you were born are still inside your body. --140.180.0.120 (talk) 03:00, 19 September 2010 (UTC)
- That's very different to being copied, killed, and your copy going to the moon. 82.44.55.25 (talk) 11:13, 19 September 2010 (UTC)
- Perhaps clarifying slightly: I consider the important part of "Me" to be my mind and memories. That's just data. For data, "original" and "copy" have little useful meaning. (If I create a file and save it, is the file on disk the original? Or was the original in RAM and now destroyed? If I open the file again, change one letter, then re-save it, the file is loaded into RAM, wiped from the disk, then re-saved to the disk, possibly not even in the same physical location. Is the file still 'original'? Was it ever?)
- If my mind were transferred or copied into a different physical body I would still consider myself to be the same "me" I always was. APL (talk) 22:38, 18 September 2010 (UTC)
- Where a distinction between original and copy becomes important is a situation where the copy has inadvertently omitted some vital piece of information from the original. Most people with a religion will believe that they possess a soul, so do you believe that the soul has been copied into the copy and that two souls now exist where before there was only one? Even putting that aside (which as an atheist I am obliged to do, but I would still find the whole idea of destroying the original very frightening), there is still the possibility that the copying process has failed to copy some vital piece of information stored in an unexpected place. Copying a data stick, for instance, might be considered fairly safe, until you realise that the password was written in pencil on the original, which, of course, will not have been copied by the computer. Destroying the original will have made the data unusable forever. It is possible that some aspect of human identity is likewise stored in a place completely unsuspected by medical science, for instance, I don't know, my left big toe. Copying data from the brain only is going to miss this, and destroying the original will make the mistake unrecoverable. SpinningSpark 23:19, 18 September 2010 (UTC)
- Well, I wasn't imagining someone saying "Here's my new human duplicator! Time to test it out for the first time ever, and then immediately murder the test subject!" There would doubtless be all sorts of subtleties to the process that would need to be worked out. APL (talk) 00:17, 19 September 2010 (UTC)
- The Transhumanism article has a general discussion of some issues related to your question (both parts) that you may find interesting. WikiDao ☯ (talk) 21:45, 18 September 2010 (UTC)
- Your knowledge and memories are most likely stored mainly in the cerebral cortex and hippocampus. Your emotional memories are probably stored at least in part in the amygdala; your habits and desires are stored at least in part in the basal ganglia. Your personality is modulated by a bunch of small subcortical areas. In short, a whole bunch of brain areas come into play, although the cerebral cortex is almost certainly by far the most important. Looie496 (talk) 22:22, 18 September 2010 (UTC)
- I mentioned a paper here recently in a question of my own that discusses this question a bit: "Are you living in a computer simulation?". See especially the second section, "The assumption of substrate-independence", which argues that one would have to basically model on the computer the structure and activity of the human brain down to at least individual synapses. You need a whole human brain for a ("normal") whole human mind, so the answer to your question is really: all of the brain, down to each and every one of the 1000 trillion (10^15) synaptic connections. WikiDao ☯ (talk) 01:50, 19 September 2010 (UTC)
- But is reproduction of every physical detail the same as preserving identity? Plutarch observed that is a controversial question. Cuddlyable3 (talk) 11:59, 19 September 2010 (UTC)
- From your wl: "Plutarch thus questions whether the ship would remain the same if it were entirely replaced, piece by piece." This is (very weakly) similar to the notion of "substrate-independence". The thing here is that there is no discontinuity: the underlying substrate may change, but you are the same person from moment to moment. Then Hobbes gets closer by wondering "what would happen if the original planks were gathered up after they were replaced, and used to build a second ship." Here, there is a big "discontinuity" and probably also a different ship (by convention) unless it is exactly reproduced with the same material. Above, we are talking about reproducing ("modeling") the structure and action of your 10^15 synapses as software code that can be run on a different substrate altogether: some really serious but theoretically possible future hardware. It's not your great^x-grandfather's question anymore. ;) Would "your" subjective experience as "you" be replicatable by such a process? Consensus these days seems to be: yes, it would. And see the paper for more... WikiDao ☯ (talk) 14:39, 19 September 2010 (UTC)
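To give a feel for the scale of that 10^15 figure, here is a toy back-of-the-envelope in Python; the bytes-per-synapse figure is purely an illustrative assumption, not something taken from the paper:

```python
# Toy estimate: raw storage for one synaptic-resolution brain snapshot.
SYNAPSES = 1e15          # ~1000 trillion connections, the figure quoted above
BYTES_PER_SYNAPSE = 8    # assumed: one 8-byte weight per synapse (illustration only)

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE
print(f"{total_bytes / 1e15:.0f} PB")  # 8 petabytes for a bare connection-weight dump
```

Connectivity information (which neuron each synapse joins to which) and any state beyond a single weight would multiply that several-fold.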
(OP here) Thanks for all the answers and links everyone. Just had a quick flick through so far but there's a lot of interesting information for me to digest :-) Spoonfulsofsheep (talk) 15:28, 19 September 2010 (UTC)
- You're welcome! And come back any time! WikiDao ☯ (talk) 15:37, 19 September 2010 (UTC)
Don't forget to read this article. Count Iblis (talk) 03:05, 20 September 2010 (UTC)
- Also, a downloaded brain might not successfully simulate falsifiable consciousness. ~AH1(TCU) 02:01, 21 September 2010 (UTC)
Judean date palm from 2000 year old seed
Is there any news about this? I read somewhere that it may fruit in 2010, if it is female. Has its gender been identified yet? It would be nice to green the desert with it, as it was in historical times. Why did it die out? The article does not say. Thanks. Edit: The Reuters link says it is female, but the article implies that its gender is not known. On looking more closely, the article says it died out due to being no longer cultivated. 92.28.255.54 (talk) 15:33, 18 September 2010 (UTC)
Bzzzzt!
The AC adapter for my laptop says that its output is 4.5A at 20V. What will happen to me if I put the output end in my mouth while the input end is plugged in?
Thanks, —Mark Dominus (talk) 17:46, 18 September 2010 (UTC)
- You will get a Darwin Award. 92.15.24.80 (talk) 17:50, 18 September 2010 (UTC)
- Okay, but I won't get a Darwin Award for licking the terminals of a 9-volt battery, so what's the cutoff? Is 14 volts enough to get me the award? If not, how about 17? —Mark Dominus (talk) 19:11, 18 September 2010 (UTC)
- Current, be it AC or DC, is generally acknowledged as a more dangerous parameter than voltage. We have an article on the lethality of electric shock. — Lomn 19:50, 18 September 2010 (UTC)
- I disagree. I don't think that 20 V, localized in the mouth, is going to kill you (I could be wrong; don't try this at home). If you let the current pass through your heart, then sure, it's possible to kill yourself with 20 V (or even 9 V, as this real life recipient of a Darwin Award did [3]). But you stick your tongue over a potential difference of 20 V, and I bet the current mostly just passes through the tongue, between the electrodes, giving yourself a nasty shock (and possibly burns) (again, please don't test this yourself). Buddy431 (talk) 19:59, 18 September 2010 (UTC)
- I call out that Darwin Award as a hoax. What is the maximum short circuit current from that ohmmeter? What is the name of the person who was killed, and where is a reference to an actual accident investigation report? It smacks of a pious fraud intended to prevent students from doing foolish things with electricity. Edison (talk) 02:11, 19 September 2010 (UTC)
- See Electroshock weapon for a good point-of-reference here. WikiDao ☯ (talk) 20:40, 18 September 2010 (UTC)
- I think you may get a mild shock, but most of the current will flow through the saliva that gets into the connector and shorts the connection -- most likely the main thing that will happen is that you will blow the fuse in the adapter. Looie496 (talk) 22:16, 18 September 2010 (UTC)
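For a rough sense of the currents being discussed above, an Ohm's-law sketch in Python; the resistance figures are illustrative assumptions only, since real skin resistance varies enormously with moisture and contact area:

```python
# I = V / R for a 20 V source across two assumed body paths.
V = 20.0  # adapter output, volts

for path, ohms in [("dry skin, hand to hand", 100_000),
                   ("wet tongue, contact to contact", 1_000)]:
    i_ma = V / ohms * 1000.0
    print(f"{path}: {i_ma:.1f} mA")

# ~0.2 mA through dry skin is barely perceptible; ~20 mA across a wet tongue
# is painful, and currents of that order are dangerous mainly when the path
# crosses the heart (see the lethality figures in the linked article).
```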
How many molecules in a cubic metre of iron?
[edit]At room temperature and pressure. Thanks 92.15.24.80 (talk) 17:49, 18 September 2010 (UTC)
- Iron doesn't come in molecules; that's not how metallic bonding works. There'll be about 8.5×10^28 atoms, though. Algebraist 17:53, 18 September 2010 (UTC)
- See also Molecule#History and etymology for clarification on that point. WikiDao ☯ (talk) 18:01, 18 September 2010 (UTC)
- And to explain how Algebraist (might have) derived his answer: 1 cubic meter contains 1,000,000 cubic centimeters, which has a mass of 7,874,000 g (nearly 8 tonnes, or over 8.5 tons if you live in North America) (see Iron, where the density is listed as 7.874 g/cm³ near room temperature). We then divide by the molar mass of iron (55.845 g/mol) to get about 141,000 moles of iron atoms. Multiply this by Avogadro's number, 6.022×10^23, to get the total number of atoms, about 8.49×10^28, as Algebraist already said. Buddy431 (talk) 23:48, 18 September 2010 (UTC)
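The same derivation as a short Python check, using the constants quoted above:

```python
# Atoms in 1 m^3 of iron: volume -> mass -> moles -> atoms.
DENSITY = 7.874        # g/cm^3, near room temperature
MOLAR_MASS = 55.845    # g/mol
AVOGADRO = 6.022e23    # atoms per mole

mass_g = DENSITY * 100**3     # 1 m^3 = 10^6 cm^3, so about 7,874,000 g
moles = mass_g / MOLAR_MASS   # about 141,000 mol
atoms = moles * AVOGADRO
print(f"{atoms:.3g} atoms")   # about 8.49e28
```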
- Thanks, I used that to help estimate the answer to the Bonding with the Rocket question above. 92.15.17.68 (talk) 16:37, 19 September 2010 (UTC)
- Using the definition of a molecule rather loosely, you could argue that the answer to the question is "1". ←Baseball Bugs What's up, Doc? carrots→ 18:15, 19 September 2010 (UTC)
- Bugs, do you have any reference or source for any reliable metallurgy or chemistry resource that uses that terminology? That is strictly incorrect usage of the word molecule. A block of solid iron could be a polycrystalline lattice or a single monolithic crystal lattice, but it is not correct to say it's "one molecule." Nimur (talk) 04:42, 20 September 2010 (UTC)
- I would counter that explicitly “loose” usages don’t call for references, nor indeed do they need to be very correct at all, let alone “strictly” so. ;) Odysseus1479 (talk) 00:46, 21 September 2010 (UTC)
Flash
Is it safe to use a third-party flash with a different camera, even if the flash is made for that camera? I do know that a film flash with a digital camera will fry it. Is that risk still plausible here? --The High Fin Sperm Whale 18:42, 18 September 2010 (UTC)
- As long as the third-party flash is specced to the camera in question, it's fine -- and I seriously doubt that "film flashes" intrinsically fry digital cameras. That's certainly not the case with mine (though I've found combinations that simply don't work together). — Lomn 19:48, 18 September 2010 (UTC)
- Sorry, I can't answer your question, but do you have a cite for that thing about older flashes "frying" a digital camera? I've never heard that before. My father is a professional photographer and he certainly didn't replace all his lighting equipment when he switched over to digital. APL (talk) 19:59, 18 September 2010 (UTC)
- It is the trigger voltage that has to be closely checked. Canon, for instance, doesn't like anything over 6 V (from memory, so please check). Older flash guns often meter around 24 V, and this can damage some cameras. Modern cameras have more voltage-sensitive equipment inside that talks to the gun. If the third-party gun is of a reputable make and the trigger voltages are within spec, then it ought to be OK. With flash, I tend to keep to the camera's brand and high guide numbers and say to hell with the cost, because at the end of the day it saves so much hassle to let the electronics do all the work. However, that doesn't mean it results in photos that are any better. --Aspro (talk) 20:45, 18 September 2010 (UTC)
- As stated above, you need to be aware of the trigger voltage; there is a table here - http://www.botzilla.com/photo/strobeVolts.html - that lists many major strobe units, and the sync/trigger voltage can go above 200V on older units.
- Your assumption that a "film flash on a digital camera will fry it" is incorrect as I have used a Nikon SB28 on my D80 quite safely, but the SB28 does not support the newer iTTL system.
- However, given that different manufacturers use differing pin configurations for communication between strobe and body, you can't be sure that using say a Canon strobe on a Nikon will not cause problems. Dlegros (talk) 21:21, 18 September 2010 (UTC)
- Even if it doesn't damage the camera, you WILL lose all the automatic functions in the camera/flash, and you'll have to set all the flash settings on the flash yourself. Usually it's not worth the hassle to use an off-brand flash. --antilivedT | C | G 05:15, 20 September 2010 (UTC)
wood chips
Are those pine wood chips that you put in hamster cages made of real wood, or are they made from something like recycled particle board or chipped-up pallets? —Preceding unsigned comment added by Kj650 (talk • contribs) 19:35, 18 September 2010 (UTC)
- Well if you are in Australia it would be fresh wood chips made from timber useless for boards, made in a wood chip mill. The original wood comes from a pine plantation. You can also get pine bark chips or fines, and hardwood chips, that take longer to rot. Recycled timber is much more likely to contain preservative, nails or paint, and so be unsuitable for animals, (or gardens). Graeme Bartlett (talk) 20:25, 18 September 2010 (UTC)
I'm in the USA, but I think the pine chips come from China. —Preceding unsigned comment added by Kj650 (talk • contribs) 21:42, 18 September 2010 (UTC)
why isn't Iron(III) nitrate just as dangerous as nitric acid or iron(III) chloride?
Like Cl−, NO3− isn't that great of a conjugate base. AFAIK NO3− doesn't like to coordinate to much of anything, making it more of a spectator than Cl− (except in redox reactions). In addition, surely Fe(III) nitrate is a Lewis acid analogue of nitric acid...? Or is it that Fe(III) nitrate (or a solution of it) isn't very skin-permeable, whereas nitric acid is a skin-permeable oil? John Riemann Soong (talk) 21:33, 18 September 2010 (UTC)
- What makes you think that iron(III) nitrate isn't as dangerous as iron(III) chloride? Physchim62 (talk) 23:46, 18 September 2010 (UTC)
- Look at the safety diagrams given for each chemical. Fe(NO3)3: 1 for health, and 1 for flammability. FeCl3: 3 for health, and 2 for reactivity. John Riemann Soong (talk) 00:08, 19 September 2010 (UTC)
- Compare the pH of a concentrated Fe(NO3)3 solution with a FeCl3 solution. The nitrate ion is an oxidizing agent; do not forget that. --Chemicalinterest (talk) 00:25, 19 September 2010 (UTC)
- I don't have an approximate pKa value for Fe(III)'s Lewis acidity, but it shouldn't change much. Cl− and NO3− are both very weak conjugate bases, and the differences in basicity only appear at very high H+ concentrations. John Riemann Soong (talk) 00:38, 19 September 2010 (UTC)
- (edit conflict) NFPA diamonds are a joke: hopefully we'll be able to get rid of them from Wikipedia in a few months' time. The pH of a 1 M Fe(NO3)3 solution and a 1 M FeCl3 solution are both around zero, and this is the hazard that naïve students ignore: [Fe(H2O)6]3+ is a pretty strong Brønsted acid. There is an additional hazard from anhydrous FeCl3, in that it will give off HCl gas on contact with the moisture in the air and so it can be classified as a respiratory irritant (Japan does, New Zealand doesn't from the official classifications I've found). Physchim62 (talk) 00:42, 19 September 2010 (UTC)
- And the additional hazard from Fe(NO3)3 would be its oxidizing capability. --Chemicalinterest (talk) 10:43, 19 September 2010 (UTC)
- I've updated the safety info on iron(III) chloride and iron(III) nitrate. Both sets of NFPA values were wrong: Fe(NO3)3 is not flammable, for example. For FeCl3, I found a collection of NFPA values which all agreed on 2-0-0, so that's what I put. For Fe(NO3)3, there is a fairly typical ridiculousness in the range of published values, from 1-0-1 [4][5] to 2-0-0 [6][7] to 3-0-0 [8] to 3-0-1 [9] to 2-0-3 [10]. This is why NFPA 704 is simply "not fit for purpose" in laboratory safety. Physchim62 (talk) 22:36, 19 September 2010 (UTC)
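For readers wondering where a pH that low comes from, here is a rough weak-acid estimate in Python for the first hydrolysis step only, assuming the commonly quoted textbook value pKa1 ≈ 2.2 for [Fe(H2O)6]3+; later hydrolysis steps and polymerization push real solutions more acidic still:

```python
import math

# First hydrolysis of 1 M Fe(III): [Fe(H2O)6]3+ <=> [Fe(H2O)5(OH)]2+ + H+
PKA1 = 2.2   # assumed textbook value for hexaaquairon(III)
C = 1.0      # total iron(III) concentration, mol/L

ka = 10 ** (-PKA1)
# Solve h^2 + Ka*h - Ka*C = 0 for [H+] rather than using the sqrt(Ka*C) shortcut:
h = (-ka + math.sqrt(ka**2 + 4 * ka * C)) / 2
print(f"pH = {-math.log10(h):.2f}")   # about 1.1 from this step alone
```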
- Ok thanks for the clarification. But I notice nitric acid is really dangerous. Won't ferric nitrate dissolved in water produce nitric acid? AFAIK 1M nitric acid is a thing much worse than 1M HCl. John Riemann Soong (talk) 02:24, 20 September 2010 (UTC)
- It probably will produce an equilibrium between nitric acid and ferric hydroxide. It is used in etching, don't forget, so it must be quite corrosive. --Chemicalinterest (talk) 11:06, 20 September 2010 (UTC)
- You have to distinguish between “corrosive to metals” and “corrosive to skin”: the two are often related, but not always. There are many different ways of causing skin damage, but the one that is most important here is the hydrolysis of proteins and fats, which, for acids, is roughly proportional to the hydrogen ion concentration. So the four compounds, at equal concentrations and in fairly dilute solution, are equally (and not particularly) corrosive to the skin: they would usually be classed as irritants rather than truly corrosive for skin contact at 1 M. Corrosion of metals, on the other hand, is not just an acid–base reaction, but also a redox reaction. Most metals are able to reduce acid protons to hydrogen, but this is often a slow reaction. Dilute nitric acid is more corrosive to metals than dilute HCl because the nitrate ion can “help out” in the oxidation of the metal, although this is also quite slow in practice for dilute solutions. Iron(III) salts are much more corrosive to metals than the corresponding acids, because the iron(III) ion is a strong enough oxidizer to oxidize most metals and it does so very rapidly. So, take the choice of a jeweller who wants to etch silver: they can choose concentrated nitric acid – dangerous to handle and will give off fumes of nitrogen oxides when it reacts – or iron(III) nitrate, which is just as effective but far safer and doesn’t fume (the reduced product is iron(II) nitrate). Physchim62 (talk) 07:55, 21 September 2010 (UTC)
Inverted leaves on plant
I was looking over some old photos and found this odd plant whose leaves appear inverted. Could someone please tell me which plant this is and why its leaves appear inverted? Smallman12q (talk) 19:56, 18 September 2010 (UTC)
- Inverted? WikiDao ☯ (talk) 22:11, 18 September 2010 (UTC)
- It looks like a courgette or marrow or cucumber. The inverting might be due to a breeze. 92.15.24.80 (talk) 22:13, 18 September 2010 (UTC)
- There's no breeze in the photograph...?Smallman12q (talk) 22:35, 18 September 2010 (UTC)
- How could you tell? The lower leaves may be wilting. Sometimes lower leaves of plants wilt and wither in dry conditions. 92.15.24.80 (talk) 22:54, 18 September 2010 (UTC)
- You can also look up the plant on sites such as the New York Flora Atlas and the New York Natural Heritage Program Plant Guides. ···日本穣? · 投稿 · Talk to Nihonjoe · Join WikiProject Japan! 22:49, 18 September 2010 (UTC)
- Initial examination indicates a member of the cucumber family but if you look closer at the flower you will notice some tightly enclosed green bracts at the base of the petals. These are not characteristic of the cucumber family although the male flower does sometimes show some twisted residual bracts. It looks more like a member of the mallow family to me. Richard Avery (talk) 11:52, 19 September 2010 (UTC)
- I've never heard or seen anything in the mallow family having yellow flowers. 92.15.17.68 (talk) 16:40, 19 September 2010 (UTC)
Viruses in food manufacture
The manufacture of many foods, such as yoghurt, natto and kimchi, involves bacteria in an essential way. But what about viruses? Of course, viruses are much simpler, so the mechanism would have to be different. But are there, for example, foods made from virus-diseased plants? --196.215.14.166 (talk) 21:04, 18 September 2010 (UTC)
- Viruses have been used to develop genetically modified food such as through virus-induced gene silencing.Smallman12q (talk) 21:30, 18 September 2010 (UTC)
- Viruses turn living cells into factories to make more viruses. At best, the cells return to normal; at worst they become just dead versions of live cells. Therefore, a virus strain which infected veg/meat/etc. might result in a slight change in taste, but it would not confer any useful change to the food - as far as I can see. Also, and here is the killer: as viruses need living cells, it would be an impossible process to control. The would-be food (being alive) would strive to become immune. Bacteriophages, however, are beginning to be used to control the bacteria that can spoil food. --Aspro (talk) 23:29, 18 September 2010 (UTC)
- Not food, but the Tulip breaking virus was historically used to make variegated tulips, which were actually quite highly prized (see Tulip Mania). The diseased tulips actually sold for much higher prices than ordinary tulips, due to their unusual coloring (and the fact that the disease made it harder to grow the tulips). I don't think people intentionally grow infected Tulips anymore (they have healthy, multicolored ones now), but it's at least a historical example of something similar to Mr. 196's query. Buddy431 (talk) 23:36, 18 September 2010 (UTC)
- You have to understand a bit about why bacteria are so useful in foodmaking in this way, while viruses are not. Some bacteria (and some yeasts, and some molds, and some fungi, etc., etc.) produce waste products which are tasty. Generally these are things like lactic acid and ethanol. Bacteria produce these by eating the food in question, and then eliminating the tasty byproducts as waste (not to be crude, but it's essentially bacterial poop). The deal with viruses is, they don't eat. They don't poop. Viruses, while composed of living material, are not really proper living organisms. They are basically little nuggets of genetic material which insert themselves into cells and reproduce themselves that way. They do not do anything on their own; they contain no energy-producing bits; they consume nothing and produce nothing. So they aren't terribly useful in making food taste better. --Jayron32 03:18, 19 September 2010 (UTC)
- To elaborate on that a little more: viruses usually have a small number of genes (not always - smallpox being an exception, carrying a Hollywood diva's worth of baggage). These genes are usually pared down to bare essentials, because a virus with more genes will replicate a little more slowly and not keep up with its lighter fellows. By "essentials" I mean proteins and perhaps some small RNAs. These build new viruses and fool with the host cell. The problem is, by and large, when you taste something, you're tasting some small chemical, not a protein. I suppose it's kind of hard in a cell made up of proteins for a receptor to be very sensitive to new proteins from the environment - even the immune system has a lot of trouble spotting new ones and telling them from old. So it's hard to have a virus where a potato infected with it suddenly tastes like cinnamon - it doesn't have the wherewithal to make cinnamaldehyde, etc. Now it is theoretically possible to have a virus that would intensify or alter a plant or animal's production of some flavorful compound, but I can't think of any example - it'd be a rather odd thing for it to do, and domestication of a plant should strive to achieve the same effect all the time by its own genetics. Wnt (talk) 06:39, 19 September 2010 (UTC)
Thermal vest?
I was wondering whether thermal vests are actually much better than a regular t-shirt at retaining heat, and came to Wikipedia to find no article on thermal vests?? Are they known by another name? --178.98.60.118 (talk) 22:29, 18 September 2010 (UTC)
- If you have a wp:rs, you can create an article. I created a blank for you. --Chemicalinterest (talk) 22:50, 18 September 2010 (UTC)
- We have thermal underwear which redirects to "long johns" but at least covers the concept. SpinningSpark 23:28, 18 September 2010 (UTC)
- Neither is there an article on string vests. These were worn under a shirt, and had been scientifically designed (during WW2?) to be warmer than a normal vest, despite being full of holes. I think men stopped wearing them in the 70s. 92.15.12.54 (talk) 23:34, 19 September 2010 (UTC)
Electron shells
If the shell theory has been discredited in favor of the cloud theory, then why can you still predict the behavior and interactions of atoms and molecules with concepts such as "few electrons in the outer shell makes it more likely to lose an electron"? Thank you 24.92.78.167 (talk) 22:50, 18 September 2010 (UTC)
- Meh, it's not about having a few electrons in a shell. It's about effective nuclear charge. Sodium doesn't hold on to its electrons very tightly because it has an ENC of about +1. Fluorine has +7, while the noble gases have about +8. Now, ENC combined with how far away the valence shell is, is vaguely analogous to electronegativity. John Riemann Soong (talk) 23:26, 18 September 2010 (UTC)
- I wouldn't say the shell model is "discredited"; it's just recognized as a fairly coarse approximation. JRS makes a good point about effective nuclear charge but, if you calculate ENC by Slater's rules (which is what most people do), you are still working in the shell model. If you want to discuss the finer points of chemical bonding, the shell model isn't good enough, but it is fine for discussing basic inorganic chemistry when you just want a simple scheme to let students arrange the various bits of information about different compounds. Physchim62 (talk) 01:47, 19 September 2010 (UTC)
- Indeed, just to reemphasize the points above; don't think of the various models of the atom (the Bohr model, the shell model, the quantum model, etc.) as if the later ones disproved or discredited the earlier ones. Look at the later models as more detailed and finer, but harder to work with and conceptualize. Look at the earlier models as more approximate, but easier to work with. None of them are wrong, they just each have their particular uses. --Jayron32 03:13, 19 September 2010 (UTC)
- The shell model is helpful in visualizing the effects of the number of electrons in the valence shell on the atomic radius and therefore the first ionization energy of a valence electron. ~AH1(TCU) 01:58, 21 September 2010 (UTC)
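Since Slater's rules came up above, here is what they give for the sodium-versus-fluorine comparison earlier in the thread; a sketch that hard-codes the two configurations rather than implementing the rules in general:

```python
# Z_eff = Z - S, with Slater screening for an ns/np valence electron:
# 0.35 per other electron in the same (ns, np) group,
# 0.85 per electron in the n-1 shell, 1.00 per electron deeper than that.

def z_eff(z, same_group, n_minus_1, deeper):
    return z - (0.35 * same_group + 0.85 * n_minus_1 + 1.00 * deeper)

# Na (Z=11), 3s electron; groups (1s2)(2s2 2p6)(3s1):
print("Na 3s:", z_eff(11, same_group=0, n_minus_1=8, deeper=2))  # 2.2
# F (Z=9), 2p electron; groups (1s2)(2s2 2p5):
print("F 2p: ", z_eff(9, same_group=6, n_minus_1=2, deeper=0))   # 5.2

# The naive "core charge" counting used earlier (Z minus all inner-shell
# electrons) gives +1 for Na and +7 for F; Slater screening is weaker than
# total, so both values come out higher, but the trend is identical.
```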