Wikipedia:Reference desk/Science: Difference between revisions
With Christmas season here, I had an idea... Many wireless chargers use inductors to "transmit" electricity from a base unit to a device. Does anyone make that sort of thing that transmits electricity from inside the house to outside? I'm not considering a high-powered device. I'm considering the transmit/receive devices to be within an inch of each other on opposite sides of a window. -- [[User:Kainaw|<font color='#ff0000'>k</font><font color='#cc0033'>a</font><font color='#990066'>i</font><font color='#660099'>n</font><font color='#3300cc'>a</font><font color='#0000ff'>w</font>]][[User talk:Kainaw|™]] 21:46, 7 December 2009 (UTC)
:I think normal wireless rechargers should be able to transmit through glass. [[Special:Contributions/74.105.223.182|74.105.223.182]] ([[User talk:74.105.223.182|talk]]) 23:55, 7 December 2009 (UTC)
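For a rough sense of why the glass pane itself is a non-issue: glass is non-conductive and non-magnetic, so it barely affects the magnetic coupling; what matters is the total gap. Here is a back-of-the-envelope sketch (all coil values are made-up illustrative numbers, and the dipole formula is only a rough approximation when the gap is smaller than the coil radius):

    import math

    MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

    def mutual_inductance(n1, n2, r1, r2, d):
        # Coaxial-coil dipole approximation:
        #   M = mu0 * pi * n1 * n2 * r1^2 * r2^2 / (2 * (r1^2 + d^2)^(3/2))
        # Only order-of-magnitude accurate once d shrinks below the coil radius.
        return MU0 * math.pi * n1 * n2 * r1**2 * r2**2 / (2 * (r1**2 + d**2) ** 1.5)

    # Hypothetical 20-turn, 2 cm radius coils; compare a 3 mm pane to a 25 mm gap.
    for gap_m in (0.003, 0.025):
        m = mutual_inductance(20, 20, 0.02, 0.02, gap_m)
        print(f"gap {gap_m * 1000:4.0f} mm: M ~ {m * 1e6:.1f} uH")  # ~15 uH vs ~4 uH

The coupling drops by roughly a factor of four between a window-pane thickness and a full inch for these assumed coils, so the scheme should work through glass, but the coils will want to be pressed close to the pane on both sides.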
Revision as of 23:55, 7 December 2009
This page is the science section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
December 3
Medical experiment that went terribly wrong
I remember reading a news story from a few years ago about a medical experiment that went terribly wrong. My memory is foggy but I'll try as best I can to explain what I remember. The researchers were testing an experimental drug. It was the first trial on humans. The test subjects had a terrible reaction to the drug. I can't recall if any of the test subjects died, but a couple might have. I remember that part of the controversy was that the scientists administered the drug to the test subjects back to back, rather than waiting to make sure that the first person didn't have a negative reaction. They may have violated medical protocols. This happened maybe 2 or 3 years ago. It received some mainstream media attention. Again, my memory is foggy, but I think I read about it at BBC News. Does anyone know what I'm talking about? A Quest For Knowledge (talk) 05:16, 3 December 2009 (UTC)
- Sounds a lot like the trials of TGN1412 — Zazou 05:34, 3 December 2009 (UTC)
- I think you've got it. It happened in England in March 2006; six patients got the drug at 10-minute intervals and it only took an hour before they began suffering one after the other. Nobody died, but they were all severely affected. --Anonymous, 08:28 UTC, December 3, 2009.
- Presuming this is what you mean, and it sounds to me like it is: while the 10-minute interval generated a lot of controversy amongst other things, and did seem like a dumb thing to do to many, I don't believe it was a violation of protocols or particularly unusual. In fact, as this ref [1] suggests, giving sufficient time for a reaction to be observed is a new recommendation arising from the trial. Nil Einne (talk) 10:40, 3 December 2009 (UTC)
- They gave systemic doses of a previously untested drug instead of giving it topically to begin with. It was a drug designed to boost the immune system, they gave it to healthy patients, and it resulted in a cytokine storm. This was definitely predictable and as an immunologist noted "not rocket science". Fences&Windows 14:40, 3 December 2009 (UTC)
- Maybe, but that doesn't mean it violated the protocols of the time, which was the point I was addressing. To put it a different way, they may have screwed up badly, but that doesn't mean they ignored established protocols; more likely they didn't think properly about whether the protocols were appropriate in this specific instance. On the other hand, this [2] does suggest it's normal to try hazardous agents on one patient first, so such staging may not have been uncommon, contrary to the earlier ref. However, it isn't peer reviewed. There is of course still research ongoing as a result of the case. E.g. [3] [4] Nil Einne (talk) 15:44, 3 December 2009 (UTC)
- The protocol-design issue is basically this: when you don't anticipate any problems, how long do you wait for problems to develop before you decide that it's enough? When they chose 10 minutes, they were probably imagining that the only possible rapidly manifesting problem would be something like anaphylactic shock, which comes on faster than that. In retrospect that was clearly a bad idea. But what if they'd waited an hour, only to find that after six hours people started getting sick? What if they'd waited a day, only to find that it took a week? With no data on the sort of problems to be expected, it really is a judgement call. Of course, if Fences is correct that this sort of reaction was to be expected, that's a different story. But that's not how it was reported in newspapers at the time, and I'm no immunologist, so I can't comment. --Anonymous, 08:55 UTC, December 5, 2009.
- I was wondering about the case at the time, in particular the final reports. It was one of those many things the media forgot about once the final report was released, and so did our editors, so our article was never updated and it's difficult to find an overview of the final report. But the final report [5] is linked from our article talk page, along with an interesting comment on this blog [6]:
- It’s worth pointing out that the final report of the Expert Scientific Group on Clinical trials published in November 2006 (http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_063117) found that the pre-clinical animal tests done by TeGenero were not adequate.
- which then goes on to more detail. This is perhaps a key point. At the time, like others, I didn't really understand how this had happened, since even as someone who hasn't done much immunology but had some knowledge of molecular biology, it seemed to be something they should have anticipated and properly tested for.
- And the general impression I got was that they had done what they thought was sufficient testing, including what they thought were adequate tests to account for the specific effects of their drug on humans, according to established protocols etc. So my presumption was that the protocols and testing were thought to be adequate, and would have been thought adequate by most in the field (but were not). Perhaps combined with hindsight being 20/20, and people focused on certain aspects and established practices sometimes losing sight of the bigger picture and common sense (whereas for an outsider looking in, it's easy to see things that you think should be obvious but aren't to the people actually doing the work).
- All these may very well still be the case, but it appears they did screw up and not do their work properly.
- BTW, from a quick skim of the report, it's still not clear to me that the time interval thing came under particular criticism. They did recommend the time interval be adequate for the reactions expected, so perhaps it's a moot point, since they didn't test properly and so didn't know what to expect. So I feel my point still stands: while they did make mistakes, it's not clear the time interval was a violation of the protocols. Re-reading my earlier comments, perhaps I didn't explain this very well. (My point was that whatever they did or did not do wrong, it doesn't mean the time interval was a violation of the protocols, or that, had they done proper testing with their information still the same, the time interval would have come under criticism from experts at the time.)
- I also can't find any suggestion that topical application was necessary or recommended (I'm not saying it wouldn't have helped). My impression from the blog comment and what I know suggests there are far better ways to test the drug, which I had presumed they'd done but evidently hadn't.
- Nil Einne (talk) 14:59, 26 April 2010 (UTC)
- Perhaps the X-linked severe combined immunodeficiency gene therapy trial? Or less likely the gene therapy trial that killed Jesse Gelsinger. 75.41.110.200 (talk) 06:40, 3 December 2009 (UTC)
Yes, that's it. Thanks! A Quest For Knowledge (talk) 23:44, 3 December 2009 (UTC)
Butterfly sensation from infatuation.
There's a girl I've recently become infatuated with and I think she reciprocates my affections at least to some degree. Sometimes, I'll go many minutes without thinking of her and then suddenly, in a flash, I'll remember her-- infectious laughter, her supple contour, her stellar character, her daring wit, & her infinite, limpid, brown eyes... Accompanying these thoughts, I often experience a sinking sensation in my stomach or heart -- butterflies, I think it's sometimes called. What is the cause of this delicious sinking feeling? What are the biological and physical reasons for it? —Preceding unsigned comment added by 66.210.182.8 (talk) 05:41, 3 December 2009 (UTC)
- Wikipedia has an article on everything. Looking at that article, it seems that the main component is due to anxiety, possibly due to adrenalin. Vimescarrot (talk) 09:52, 3 December 2009 (UTC)
- and good luck! --pma (talk) 13:31, 3 December 2009 (UTC)
- Well, we really ought to have an article on the neurobiology of love; there is enough of a literature. In the absence of an article, here is a pointer to a recent paper with a lot of information, a bit technical though. Looie496 (talk) 17:22, 3 December 2009 (UTC)
- Also Esch, Tobias (2005). "The Neurobiology of Love" (PDF). Neuroendocrinology Letters. 26 (3). Fences&Windows 23:25, 3 December 2009 (UTC)
- And a video about how the key to love is oxytocin. Fences&Windows 23:29, 3 December 2009 (UTC)
- This would somewhat overlap the existing article on limerence. 67.117.130.175 (talk) 06:58, 4 December 2009 (UTC)
Physical fallacies
Hi, I posted this question about speed-of-light calculations a few months ago. Is there an article discussing such physical fallacies? If so, can anyone volunteer to explain where the wrong use of physical laws was made on that website? I think such information would be of the same interest as the mathematical fallacy article.--Email4mobile (talk) 09:26, 3 December 2009 (UTC)
- It seems to me you got a good answer last time. What else do you want to know? Second, the paragraph "Variable Speed of Light" is not true. Not at all. It's completely contrary to the theory of relativity. And third, scientists have NOT confirmed the existence of Dark Energy. Why do you want to learn anything at all from a website that does not understand science? If you want to theorize on changes to science, go for it. But don't think for a second that what they say is correct by current theories. Unlike some, I don't mind speculating on changes to current thinking (historically the accepted scientific thinking of the day has been wrong quite often, and I see no reason to believe we are in a unique period today) - but it's always important to note when your speculations differ from current understanding. Ariel. (talk) 11:07, 3 December 2009 (UTC)
- I agree with Ariel: it is usually pretty pointless entering a scientific discussion with fundamentalists. The fundamentalist position starts from the premise that all truth emanates from their holy book (of whatever religion). It is intolerable to them that anyone else can obtain "truth" from another source, hence the strong desire to "prove" that their holy reference manual contains that truth, though it was previously somehow overlooked by everyone. I guarantee that no-one interpreted that passage in the Koran as referring to the speed of light until long after science came up with an accurate measurement of it. Science starts from a radically different position, one mutually incompatible with the fundamentalist view. The scientific position is that truth (the laws of nature) is the simplest possible interpretation consistent with the experimental results. This means that science will modify its laws in the light of new evidence. The fundamentalist can never do this; contradictory evidence will only cause the reasoning to become ever more contrived in order to make the holy book remain true.
- I like the postulate on that site that Angels travel at the speed of light. If that is true, it means they are inside our own light cone and exist in our universe, not in some other ethereal existence. In principle then, they are scientifically detectable - but it is strange that no experiment, so far, has found them. SpinningSpark 14:03, 3 December 2009 (UTC)
- Perhaps, but then we have never found any dark matter either. Googlemeister (talk) 14:22, 3 December 2009 (UTC)
- Angels are photons, God is a singularity, and Satan is the heat death of the universe. Fences&Windows 14:31, 3 December 2009 (UTC)
- That analogy doesn't work. In Christian and Jewish versions of the story, Satan is an angel who "turns to the dark side", so I don't see how the heat death of the universe is also analogous to a photon. The information content of a singularity is restricted to its mass and maybe its spin... which doesn't bode well for something that's supposed to be all-knowing and therefore containing an infinite amount of information!
- Anyway - these kinds of websites are nonsense. It's very easy to come up with similar nonsense - it doesn't prove anything - the best you can do is ignore them. You can find approximate coincidences in ratios of numbers everywhere - it doesn't prove anything. Precise relationships are more interesting - but even then may not mean much. Let's look at one "fact" from that page:
- "But 1400 years ago it was stated in the Quran (Koran, the book of Islam) that angels travel in one day the same distance that the moon travels in 1000 lunar years, that is, 12000 Lunar Orbits / Earth Day. Outside the gravitational field of the sun 12000 Lunar Orbits / Earth Day turned out to be the local speed of light!!!" - Well, how far does the moon travel in 1000 "lunar years"? What the heck is a "lunar year" anyway? If it's the time it takes the moon to orbit the sun - then that's almost exactly the same as a regular year - and the distance the moon travels over that time (relative to the earth) is 1.022km/s x 1000 x 365.25 x 24 x 60 x 60 = 32,251,000,000km - the distance light travels in a day is 1,079,000,000 km/hr x 24 = 25,896,000,000 km. So these supposed angels are travelling at about 25% faster than the speed of light. I'm not sure what the gravitational field of the sun has to do with it - the speed of light is constant and the sun's gravity can't change that, it can distort time a bit - but nothing like 25%. Now, you might consider the distance travelled by the moon relative to the sun...that's a bit tougher to calculate but it's got to be a lot more than it moves relative to the earth - so that just makes the situation worse. So this guy has an error of 25% in his calculations - that's simply not acceptable in any kind of scientific argument. The errors in our measurements of the speed of light and the speed of the moon are tiny TINY fractions of a percent. So this argument must be incorrect...period. SteveBaker (talk) 17:43, 3 December 2009 (UTC)
- Not really related, but Satan in Jewish thought is NOT an angel that went to the dark side. Satan is more akin to a prosecutor, who works for God and has no free will! Ariel. (talk) 20:19, 3 December 2009 (UTC)
- While I agree with Steve's overall sentiment, he is a bit overzealous with regard to numerical accuracy in astrophysics. For a lot of parameters, 25% error is acceptable in astrophysics... for example, look at some of the tolerances on the parameters of a typical exoplanet, CoRoT Exo B, as documented by the ESA. Its density is quoted with a 30% error bar. I've seen much more speculative numbers with worse uncertainty in other publications. Stellar physics publications are lucky if they can estimate some numbers to within a factor of 10. But these parameters are not the speed of light, which is well known to better than one part in a billion. In general, a "high level of accuracy" is context-specific. In any case, the above argument is making an outlandish claim, so a greater burden of proof is in order. While I can stomach a 50% uncertainty about whether an exoplanet is iron- or silicate-core, I don't have the same tolerance for the "angels are photons" argument. Because those claims are much more unbelievable, I would expect a much higher standard of accuracy before giving them even the slightest little bit of credibility. I guess my point can be summarized as follows: the above claims are false - but not simply because the numerical error is very large. Numerical error is acceptable, if the scientific claims are qualitatively correct. The above claims about "lunar years" are simply wrong, so it's useless to even bother analyzing their accuracy. Nimur (talk) 17:52, 3 December 2009 (UTC)
- No, I'm not being overzealous. Errors that big are acceptable only when the data you're working from has error bars that big. The error bar on the speed of light is a very small fraction of a percent - and so is the speed of the moon, the length of a year and all of the other things that made up that calculation. The numbers I calculated for the distance travelled by the moon over 1000 years and the distance travelled by light in a day are accurate to within perhaps one part in a thousand. The discrepancy between them is 25%!! There is no way that those numbers back up that hypothesis - and no respectable scientist would say otherwise. Since our confidence in the speed of the moon, etc. is very high, the hypothesis that the Koran is correct about the nature of angels is busted. It flat out cannot be true. (Well, technically, the number "1000 years" has unspecified precision. I suppose that if the proponents of this theory are saying "1000 years plus or minus 50%", and therefore only quoting the number to one significant digit, then perhaps we have to grant that it is possible (not plausible - but possible). But I'm pretty darned certain that the proponents of this theory would tell us that when this holy book says 1000, it means 1000.0000000000000000000000... not 803.2, which would be the number required to make the hypothesis look a little more credible! Hence, probably, the necessity of muddying the water by dragging the sun's gravitational field into the fray - the hope being that anyone who tries the naive calculation above can be bamboozled into accepting the result as being 100% correct once general relativity has been accounted for... but sadly, that's not the case, because none of the bits of the solar system involved are moving anything like fast enough relative to each other and the sun's gravitational field simply isn't that great.) SteveBaker (talk) 18:26, 3 December 2009 (UTC)
- For the purposes of establishing the actual facts of this claim, I looked up the quoted passage and got;
- He regulates the affair from the heaven to the earth; then shall it ascend to Him in a day the measure of which is a thousand years of what you count. (The Adoration 32:5)
- I was going to post just the quote and leave it at that. However, I was intrigued by the lack of mention of the moon in the passage, or indeed, in the entire book (or chapter or whatever the Koran calls its subdivisions). Apparently we must read "the measure of what you count" as meaning a lunar year. So looking a bit further I found this;
- To Him ascend the angels and the Spirit in a day the measure of which is fifty thousand years. (The Ways of Ascent 70:4)
- Sooo, to be consistent we must interpret that the same way, and now have angels travelling at 50c; and if the interpretation that angels travel at the speed of light or slower is to be maintained, we must conclude that the Koran would have the speed of light be at least 1.5×10^10 m/s. I think that pretty much rules out the Koran as a potential reliable source for Wikipedia purposes. SpinningSpark 19:07, 3 December 2009 (UTC)
- At the core of the issue, it's difficult/impossible to assess the scientific merits of an unscientific line of reasoning. This theory, and others like it, are very inconsistent, are not based on empirical observation, and do not draw logical conclusions from experimental data. Therefore any assertions that it makes are categorically unscientific. It doesn't matter what the error-bars are on its numeric results. A lot of numerology finds exact values via convoluted procedures. That "accuracy" does not mean the methods are sound or scientific. In the same way, the inaccuracy of the above numbers is irrelevant - the method is simply wrong. Nimur (talk) 19:09, 3 December 2009 (UTC)
- Also, I object to SpinningSpark's comment, "that pretty much rules out the Koran as a potential reliable source for Wikipedia purposes." The Koran is a reliable source for information about Islam. It is a very reliable source for Wikipedia's purposes when those purposes are related to Islam. It'd be hard to find a more reliable source for our article about Islam, for example. But the Quran is not a scientific book, and sourcing scientific claims from it would be invalid. Since this is the science desk, we should never source our references from the Quran or any other "holy book"; nor should we source scientific claims from history books, poetry books, or other non-scientific references. However, that doesn't mean that these are unreliable sources - it's just the wrong source for the Science Desk or science-related issues. Nimur (talk) 19:16, 3 December 2009 (UTC)
- Quite so, I had intended to qualify that with "...for scientific articles" or some such, but typed the more general "Wikipedia" instead. SpinningSpark 19:32, 3 December 2009 (UTC)
- On second thoughts, no you cannot use the Koran as a reliable source about Islam, at least not on its own. The only thing it is a reliable source for is what the Koran says. SpinningSpark 09:03, 4 December 2009 (UTC)
- The issue is not that people believe the Quran - that's entirely their own problem - it's that some people are attempting to portray what it says as somehow reliably relevant and applicable to modern science. Plainly, it's not...or at least not as that website explains it. But if he can't get his science right and he can't quote the Quran accurately then it's really no use to anyone. SteveBaker (talk) 19:44, 3 December 2009 (UTC)
- A lunar year is 12 lunar months, which is about 354 days. That makes it a little closer than your calculation gave, but not by much. --Tango (talk) 22:37, 3 December 2009 (UTC)
- Well, I'm an Arab and a Muslim too, though I see no reason to connect religion and science. Unfortunately, many Muslims do. I'm afraid to say the one who tried to prove this fallacy was, as I heard, originally a professor. If I am just an engineer, how can I convince the many people who are spreading such information, not only on that website but in schools and universities? How can I make them believe this information is a total mess unless I can verify it from reliable sources? I believe in Wikipedia because it gives either reliable sources or proofs. On the other hand, I still believe this problem is not just in Muslim countries; almost all religions have some extremists who would like to convince others by any means. Anyhow, thank you very much for this wonderful interaction.--Email4mobile (talk) 20:38, 3 December 2009 (UTC)
- I agree - it's certainly not just the Quran that makes these kinds of error. The Christian Bible says that pi is 3 and that bats are a species of bird. This is what happens when you try to take written material that's several thousand years old and apply it to everything we've learned in the meantime. The fact is that we shouldn't expect this stuff to be halfway reasonable - the problem isn't the books - it's that people are still trying to apply them to modern situations. SteveBaker (talk) 00:53, 4 December 2009 (UTC)
- Steve, I know you're not a big fan of the Bible, fine, but don't say nonsense about it. Nowhere does it say pi is 3. It says someone made a "molten sea" that was 10 cubits across and 30 cubits round about. From there to "pi == 3" there are a couple of large logical jumps. --Trovatore (talk) 00:59, 4 December 2009 (UTC)
- The argument Steve mentions has indeed been made, but its main flaw (it seems to me) is to assume that exactly 10 and 30 (i.e. 10.0 and 30.0) cubits were meant. If the figures were actually rounded to the nearest cubit, which seems perfectly reasonable in the context, then the description is entirely consistent with the true value of pi: for example, 9⅔ and 30⅓ would come very close at 3.138. 87.81.230.195 (talk) 02:16, 4 December 2009 (UTC)
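(As a quick check of that arithmetic, treating the rounded-to-the-nearest-cubit figures as exact fractions:)

    from fractions import Fraction

    diameter = Fraction(29, 3)        # 9 2/3 cubits
    circumference = Fraction(91, 3)   # 30 1/3 cubits
    ratio = circumference / diameter  # = 91/29
    print(float(ratio))               # 3.1379..., within 0.2% of pi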
- But that's the point! Did they mean 1000 lunar years, or did they mean 1000 lunar years to one significant digit, just like the "molten sea" thing? If they really meant 803 lunar years - rounded to the nearest 1000 - then this is indeed a valid "prediction" of the fastest speed anything can possibly move. But was it ever intended as a prediction of relativity? My bet is no. No more than the Bible is talking about the geometry of circles. We're generally led to believe that the words in these books are to be taken "as gospel". But we can't judge that by modern standards. Nobody measured the speed of an angel or the circumference of the "molten sea" thing to modern precision levels. We must avoid a double standard here. It's precisely as wrong to claim that the Quran predicts the speed of light as it is to claim that the Bible predicts the value of pi - neither was ever intended by the original authors - it's just modern hindsight trying to extract miracles where there is nothing but simple literary verbiage that's been blown out of all proportion. (Although it is pretty clear on that bat==bird thing - and on a whole bunch of other biological 'oopsies' in the dietary laws.) SteveBaker (talk) 04:08, 4 December 2009 (UTC)
- What's the error with the bat-bird thing? You define a bird as a creature with feathers. The Bible doesn't; it uses a Hebrew word, commonly translated as "bird", that means "flying creature". During creation, for example, it even says flying creature[7][8]. And a bat flies, so what's the problem? And complaining about the basin is really stupid, since that part isn't even the word of God - it was a person recording what he saw - the basin was a physical object. You can't argue with that any more or any less than with any other ancient document. And for the record, the speed of light thing is nonsense. Ariel. (talk) 05:02, 4 December 2009 (UTC)
- There seems to be some cherry-picking going on here. My King James Bible says,
- And God said, Let the waters bring forth abundantly the moving creature that hath life, and fowl that may fly above the earth in the open firmament of heaven. (Genesis 1:20)
- The American Standard Version does not say flying creature either. SpinningSpark 13:58, 4 December 2009 (UTC)
- Back to Steve: A lunar year is a year in the lunar calendar, i.e. in this case likely the Islamic calendar. It consists of 12 lunar months, i.e. 354 or 355 days, depending on how the fractions work out. That's how the original author arrives at the 12000 (12 months times 1000 years). So the discrepancy comes out a few percentage points smaller than your result, but still far too large. --Stephan Schulz (talk) 20:50, 3 December 2009 (UTC)
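Re-running the earlier check with a lunar year instead of a solar year (a sketch; 354.37 days per lunar year assumed):

    # Same check as before, but with a lunar year of 12 synodic months.
    lunar_year_days = 354.37
    moon_km = 1.022 * 1000 * lunar_year_days * 86400   # km in 1000 lunar years
    light_km = 299_792.458 * 86400                     # km light travels in a day
    print(f"Discrepancy: {moon_km / light_km - 1:.0%}")  # ~21%: closer, still far off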
- Yes, Stephan, I think I already mentioned that in the previous discussion, but I'm not sure SteveBaker noticed. I accepted this step of the calculation, but was surprised when he then used another kind of conversion to arrive at cos(26.92952225°) in order to reach a 0.01% error. That was the point I found hard to swallow, because I couldn't understand how it was done (see the details here).--Email4mobile (talk) 21:02, 3 December 2009 (UTC)
- Apart from that other verse talking about 50,000 years a day, let's first verify the 1000 years a day calculation, Spinningspark.--Email4mobile (talk) 21:21, 3 December 2009 (UTC)
Lines of little circles of light on camera
How come when a camera shoots something very bright like a brief shot of the sun, you often see little circles, usually as if they were strung together along a line? 20.137.18.50 (talk) 12:56, 3 December 2009 (UTC)
- It's caused by light reflecting back and forth between the surfaces of the lenses. Cameras with high quality lenses don't do it nearly so much. The dots you see in the 'flare' aren't always circles - sometimes they are pentagonal or hexagonal. In this photo they seem to be 7-sided. SteveBaker (talk) 17:16, 3 December 2009 (UTC)
- Fixed your link. APL (talk) 17:22, 3 December 2009 (UTC)
- Almost certainly images of the leaves of the lens aperture. See bokeh. --Phil Holmes (talk) 09:54, 4 December 2009 (UTC)
Rainbow ham?
What causes the rainbow color that I sometimes see in ham and other cured meats? This says it's a "chemical reaction" (not telling much more), this says it's birefringence, which is a nicer word, but our article on birefringence doesn't mention this effect at all. (If it is birefringence, this is probably one of the most common effects of birefringence encountered in the typical life of citizens of the western world. Probably deserves a mention.) Staecker (talk) 17:35, 3 December 2009 (UTC)
- A lot of cured meats are soaked in a brine, saline solution, or other liquid to add volume and flavor to them. The birefringence or other optical effects are often the result of these saline liquids suspended in the interstitial spaces of the meat. Nimur (talk) 17:46, 3 December 2009 (UTC)
- There are several possibilities - one is that we're seeing an "oil on water" effect because oils from the meat are mixing with water - another is that we're seeing some kind of Dichroism effect - yet another is some kind of coherent scattering - similar to the thing that makes the colorless scales of a butterfly's wing show up in such vivid, iridescent colors. There are a lot of related effects and this could easily be any one of them - or even some complicated combination of them. Without some kind of expert study - I don't think we should speculate. SteveBaker (talk) 18:07, 3 December 2009 (UTC)
- We can, however, point to prior research, e.g. Prediction of texture and colour of dry-cured ham by visible and near infrared spectroscopy using a fiber optic probe, Journal of Meat Science, 2005. Virtually everything that can possibly be observed, and many things that can't, has already been studied and published somewhere. Nimur (talk) 18:10, 3 December 2009 (UTC)
- Darn! How did I miss that? I'm such an avid reader of the Journal of Meat Science! SteveBaker (talk) 19:38, 3 December 2009 (UTC)
Does such a disease exist?
Is there a disease where the neurons of the brain spontaneously form synapses with all their neighboring neurons at an accelerated rate, essentially forming one very deeply interconnected mess? 20.137.18.50 (talk) 18:27, 3 December 2009 (UTC)
- Never heard of anything like that. If there were a mutation that did that, it seems likely to me that it would be fatal at a pretty early stage of embryonic development. Looie496 (talk) 20:41, 3 December 2009 (UTC)
- Relevant articles are Synaptogenesis and Synaptic pruning. Landau–Kleffner syndrome and continuous spikes and waves during slow sleep syndrome, related to epilepsy, both involve too much synaptogenesis during childhood due to electrical activity that strengthens the synapses.[9] Fences&Windows 23:12, 3 December 2009 (UTC)
- There is a great variety of proteins that participate in axonal guidance and/or affect synaptogenesis. See, for example, FMR1, Thrombospondin, semaphorins, and Amyloid precursor protein. I am not familiar with the specific pathology you refer to, though. --Dr Dima (talk) 00:48, 4 December 2009 (UTC)
cheesewring stones
It does not say in the article, but is the Cheesewring a natural formation, or is it man made like Stonehenge? Googlemeister (talk) 20:26, 3 December 2009 (UTC)
- Looks natural to me. In southern Arizona there are hundreds of rock formations that look like that -- made of sandstone rather than granite though. Looie496 (talk) 20:37, 3 December 2009 (UTC)
- The article states "Geological formation", which implies a natural rather than man-made origin. In Southwestern Utah there are formations called Hoodoos (you've seen them in the old Wile E. Coyote cartoons). Geology + psychology is capable of producing some remarkable-looking formations. I remember taking some college friends to Northern New Hampshire to see the Old Man of the Mountain (RIP), and they kept asking "No really, who carved that? Was it the Indians?" I kept trying to tell them it was just a natural formation. Other fun natural formations which have been mistaken for manmade include the Giant's Causeway in Ireland, the Pingos of northern Canada, the Badlands Guardian of Alberta, the Cydonia face on Mars, etc. --Jayron32 21:07, 3 December 2009 (UTC)
- Apparently we are lucky that it still exists. Looie496 (talk) 21:22, 3 December 2009 (UTC)
- For interest, there is currently an artist in the UK who makes somewhat similar, though smaller, piles of rocks on public beaches, often featuring apparently impossible balancing. Google-searching turns up the name Ray Tomes who has done something similar but I don't think he's the artist I have previously encountered. 87.81.230.195 (talk) 02:00, 4 December 2009 (UTC)
- I think you might be referring to Andy Goldsworthy. Richard Avery (talk) 10:30, 5 December 2009 (UTC)
- Speaking of apparently impossible balancing, I have seen a lot of amazing stuff in Utah, but nothing as amazing as the picture on the right. Looie496 (talk) 17:52, 4 December 2009 (UTC)
- Yes, the strength of that Millstone grit at Brimham Rocks is remarkable. This shape is supposed to be a result of natural sand-blasting of the somewhat softer layer at the base. Mikenorton (talk) 13:26, 5 December 2009 (UTC)
- The Cheesewring is a rather extreme example of a tor, a type of rock outcrop that typically forms in granite. They are a result of weathering, which has acted particularly on pre-existing sub-horizontal joints to produce the unlikely shape (see Haytor for a less extreme example). Mikenorton (talk) 11:06, 4 December 2009 (UTC)
North Korea's closed-circuit speaker system
In this article and at least one other at the Wall Street Journal, they say that the North Korean authorities notified the citizenry of the replacement of the North Korean won by means of "a closed-circuit system that feeds into speakers in homes and on streets, but that can't be monitored outside North Korea."
Speakers in homes? Really? Do we have a Wikipedia article on this system? Is this cable TV but without the TV? How many homes are equipped with this technology? I have a raft of questions. Tempshill (talk) 21:54, 3 December 2009 (UTC)
- Is this science? Anyhow... from the New York Times: "Every North Korean home has a speaker on the wall. This functions as a radio with just one station -- the voice of the Government -- and in rural areas speakers are hooked up outside so that peasants can toil to the top 40 propaganda slogans. Some of the speakers are hooked directly into the electrical wiring, so that residents have no way of turning them off; they get up when the broadcasts begin and go to sleep when the propaganda stops. In some homes, however, the speakers have a plug, and people pull the plug when they want some quiet."[10] Just like in 1984. Something similar but less scary in Australia: "loudspeakers are sprouting like mushrooms on Sydney streets, peering down from the tops of traffic lights. The State Government has begun to put in place a permanent public address network that will, in some unspecified emergency, tell people what to do."[11] Fences&Windows 22:47, 3 December 2009 (UTC)
- You might find this link interesting: [12]. It has a photo of a similar hard-wired radio(?) in Russia. Ariel. (talk) 00:03, 4 December 2009 (UTC)
- Back in the 70s and 80s -- and probably well before that -- there was a ubiquitous contraption called "radiotochka" (radio spot) in USSR households. IIRC the radio signal was transmitted via the electric wires of the power grid and not over the air. I do not know how the signal was modulated, but I am pretty sure it was separated in frequency from the 50 Hz AC current the wires were carrying. There was only one station. Yes, it was government-controlled, but so was the TV, anyway; and it could be turned off or unplugged any time you liked, of course :) . I doubt that it transmitted anything back, but in principle I guess it could double as a bug for the Bolsheviks to eavesdrop on you. --Dr Dima (talk) 00:08, 4 December 2009 (UTC)
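The frequency-separation guess is easy to illustrate. A sketch (all frequencies are illustrative choices, not the actual Soviet system's parameters) of an AM programme carrier sharing wires with 50 Hz power and being recovered with a simple high-pass filter:

    import numpy as np
    from scipy import signal

    fs = 200_000                               # sample rate, Hz
    t = np.arange(0, 0.1, 1 / fs)

    mains = 230 * np.sin(2 * np.pi * 50 * t)   # 50 Hz power waveform
    audio = np.sin(2 * np.pi * 400 * t)        # a 400 Hz "programme" tone
    am = (1 + 0.5 * audio) * np.sin(2 * np.pi * 30_000 * t)  # AM carrier at 30 kHz

    line = mains + am                          # both ride on the same wires

    # A high-pass filter cutting well above 50 Hz leaves only the carrier,
    # which an ordinary AM envelope detector could then turn back into audio.
    b, a = signal.butter(4, 5_000, btype="highpass", fs=fs)
    recovered = signal.filtfilt(b, a, line)
    print(np.max(np.abs(recovered - am)))      # small: mains removed, carrier kept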
- (I hadn't seen Ariel's post when I edited mine, but I didn't get the EC screen either. Weird.) Anyway, Ariel, yes, that's it in the picture. It had only one station, though, not three; or maybe it had three in some places. Or maybe the other two were added after I emigrated :). --Dr Dima (talk) 00:13, 4 December 2009 (UTC)
- Would you mind creating an article on it? It's ok if you don't know everything about it, just get it started and put in what you do know. (I know nothing about it. But maybe I can ask the person who posted the photo to contribute.) Ariel. (talk) 00:34, 4 December 2009 (UTC)
- Last time I've seen a radiotochka was about 20 years ago. I do not think my memory from back then is accurate enough for me to write a Wikipedia article about it now. Sorry. --Dr Dima (talk) 01:07, 4 December 2009 (UTC)
- I don't think it's that weird. The software is getting better and better at resolving edit conflicts. It's obviously fairly annoying for editors when you make extensive edits to a page (or even fairly minor but spread-out ones) and have an edit conflict, then have to resolve that, then try again and have another conflict, etc. It's particularly a problem for high-traffic pages. I'm not sure, but it's also possible that this page is treated like an article and the software is more fussy on talk pages, in recognition of the fact that edit conflicts could in some instances lead to confusing discussions. This is of course the kind of thing that people don't tend to notice, since unless you actually get an edit conflict, you may not realise people have edited while you were editing. But to use an example I just encountered, see [13]. I didn't actually look at the time when I started editing, but I'm pretty sure it was before the two Madman2001 edits, maybe even before the Derlinus one. These were not that hard for the software to resolve, but I strongly suspect that several years ago I would have gotten an EC. Nil Einne (talk) 07:09, 5 December 2009 (UTC)
- Incidentally, there was a similar device in use in the 1950s in the US. It was to be used for civil defense, and would also get the signal through the electrical wiring. It would be always left on, to sound the alarm in an emergency (even if the homeowners had the radio and TV turned off). It never really caught on, though, and the plan was canceled after a few years. StuRat (talk) 05:44, 4 December 2009 (UTC)
- That US device was on a History's Mysteries segment I saw a week or two ago. Clever device, tested and worked for transmitting an alarm. But the system was scrapped when they recognized that there was just the alarm, no ensuing instructions on how to respond to the situation. DMacks (talk) 07:34, 4 December 2009 (UTC)
- Correction: History Detectives. The webpage for that segment might have some good information for an article about this type of system. DMacks (talk) 07:52, 4 December 2009 (UTC)
- For example, National Emergency Alarm Repeater. DMacks (talk) 07:54, 4 December 2009 (UTC)
- In some places in the U.S. the emergency sirens can broadcast announcements as well as their really annoying screech. Which would be useful if that tornado ever happens to come at 1 o'clock on the first Saturday of any month (monthly system test time). Rmhermen (talk) 14:44, 4 December 2009 (UTC)
- Thanks for the responses - I'll add "radiotochka" to Wikipedia:Requested articles. Tempshill (talk) 07:25, 5 December 2009 (UTC)
- See also http://en.wikipedia.org/wiki/Cable_radio !
- The system in NK is so pervasive that it's also on trains. According to some references I read in 1970s NK books about the Great and Beloved Leaders, there was, as a matter of course, a tiny audio studio with a human speaker in every train. If radio is so mistrusted, this might well still be the case today.
- Czechoslovakia had a nationwide PA system. It was used rather sparingly under communism. After all, everybody had radios and listened to western media anyway; they were definitely not completely shut out. Right after the Velvet Revolution it became a sort of public-access system: you could informally ask the authorities to let you tell people of innocuous community things like concerts or village parties. I don't know if the system is still up or whether it's being used.
- The radiotochka system in Russia is better understood as a soft propaganda medium with a general-interest program, more soothing than hypnotizing or inflammatory. It was not necessarily a wire system, and was generally radio-delivered over long distances, often with less than the last mile by wire. End-receivers in homes and offices had volume control. Very similar to local cable TV systems fed by satellite. You still find plenty of old, good-quality, semi-professional broadcast receivers that were used to get a signal off the air and pipe it into wires. AFAIK it was not necessarily a single-channel system either. Places like hotels, resorts etc. had a selection of channels. The main radio channel was indeed also put on the PA all day in many places of manual work, not totally unlike Muzak in the US or BRMB in Birmingham, but it did provide a degree of entertainment. After the fall of Communism many factories turned the system off - and there are reports that some workers complained. I wish someone could update us on that, and clarify whether there was ANY degree of wire transport other than from a local building/neighborhood/resort media office.
- Switzerland pioneered audio wire-casting as a cheaper-than-FM way of delivering the national radio and TV audio channels, and some extra content, at medium fidelity (up to 7 kHz) into all the tiny valleys: NF-TR = Niederfrequenz-Telefonrundspruch. It worked with AM carriers over phone wires, in the frequency range where DSL now travels. The service was discontinued in the late 1980s. http://de.wikipedia.org/wiki/Telefonrundspruch
- Italy still has a similar system called "filodiffusione" - literally wire-distribution.
- http://it.wikipedia.org/wiki/Filodiffusione
- http://www.radio.rai.it/filodiffusione/index.htm
- It carries 6 channels, all from the national broadcaster RAI: the three national on-air radio channels, plus one for light music and two (stereo) for classical, with audio up to 10 kHz, delivered by the quasi-monopolist Telecom Italia. On any given line, if DSL is activated, wirecasting can't work. It is a pay-for service; the two wire-only programs are advertising-free. Before the advent of private FM radio, satellite, and then the internet, "filodiffusione" was often piped into retail outlets and sometimes into offices, just like Muzak.
- As I personally don't care for music or stereo, I modded an early-1970s Italian filodiffusione receiver into a mono amplified computer speaker. It's the size of a large bible set on its side, and looks much like a larger mains-powered transistor radio from that period. It had been used for some 15 years in an office.
- I removed the radio-frequency board, which contained six separate sets of two RF resonating circuits, one set per channel, and two AM infinite-impedance (transistor) detectors - one for channels 1-5, the other for channel 6 only. Channels were selected by pushbuttons. For stereo you would press 5 and 6 together, and a complicated mechanism ensured that only those last two buttons could be depressed at the same time.
- I left only the power supply and the audio amplifier with its volume and tone controls. The unit contains a single mono audio amp and one speaker, for receiving the 4 mono channels or the 2 stereo channels mixed into mono. A line-level audio output socket was provided to feed audio from the two separate detectors to an external stereo amp.
- It's the speaker connected to the PC I am writing from - I have a radiotochka right on my desk!
Environmental Impact of ebooks vs paper books
I've seen some e-book distributors advertising ebooks as environmentally friendlier than the 'dead tree' version. On the face of it this seemed reasonable; no trees, no chemicals for paper and ink making, no distribution of heavy books, no bricks and mortar stores (and all the energy to run them), but then I started thinking about the computing required to deliver ebooks. So, which is more environmentally friendly? I'll leave it to you to decide how much of the production / distribution / consumption chain to include, also what constitutes 'environmentally friendly'. Scrotal3838 (talk) 22:02, 3 December 2009 (UTC)
- Hmm, well this page and this page outline some perceived problems with paper. See also Pulp (paper). On the other hand, Electronic waste is often portrayed as a fairly serious problem, and factories that produce Kindles or computers or whatever of course also pollute. On balance, however, I'd say that electronic distribution is much more environmentally friendly. It could (theoretically) replace a huge amount of printed material, and I just don't think there's any way the pollution generated making a Kindle could add up to the pollution generated making a piece of paper for every page a Kindle electronically displays. As far as the energy to run servers and the devices themselves, I really doubt you could quantify ebooks as anything but a marginal energy use. I don't see why ebook distribution would take up any more energy than a regular website, which on an energy-per-unit-of-information basis is extremely efficient.
- However, the argument should be taken with a grain of salt, in my opinion. People were predicting similar improvements with the advent of email replacing memos. But paper use actually increased over the period when email became widespread, because modern printers made it much easier to produce documents and (ironically?) people printed out their work emails to keep a paper copy. I forget where I read that last bit; I think it was in The Economist. Regardless, I think ebooks could be portrayed as better for the environment if it can be demonstrated that the user in fact uses less paper, and doesn't just use the same amount of paper plus an electronic device that has an environmental impact in its creation, operation and disposal. TastyCakes (talk) 23:26, 3 December 2009 (UTC)
- To read an ebook you need to turn your computer on (assuming it was off), and that requires electric power which consumes energy producing CO2, CO, NO, NO2, SO2, etc... Dauto (talk) 01:50, 4 December 2009 (UTC)
- Well first, you might live in an area that gets its electricity from hydro or nuclear or some other generator that doesn't produce pollution. And second, turning wood into paper requires significant electricity as well, along with chemicals and the logging of forests. And then you have to fuel the trucks that distribute books and other paper to stores or distribution centres, which also uses energy and produces pollution. I don't think anyone would argue that ebooks have zero environmental consequences. But again, taking everything into account it looks like they have less impact than printed books, which was the question. TastyCakes (talk) 02:43, 4 December 2009 (UTC)
- I don't think there is much doubt that this is not a question of energy use. After all, the Kindle runs for a heck of a long time on battery power - and the 60 watt light bulb you are reading it by is consuming at least 100 times more energy than the eBook itself. It's more a question of environmental damage during manufacture and ultimate disposal. That comes down to how long books and eBook readers last. Books seem to be almost immortal. I don't think I know anyone who throws them away...it seems almost sacrilegious to do so - and burning a book is just such a taboo (especially for Ray Bradbury fans!) that I doubt anyone does it routinely. However, if eBook readers are going to be regularly obsoleted like laptops and fancy phones are - with a lifetime of just a few years - then dumped onto landfills - then we can probably say that the eBook is doing more environmental damage. Paper books lock in carbon - and if you dump them into landfill, they compost nicely and their carbon is sequestered - that's a net win if the manufacturing process wasn't too nasty. Most books are read by many people before they eventually go wherever it is they go. Since an eBook reader has no moving parts (well, except, perhaps, for the switches) - it could last a long time. If they aren't obsoleted, then in all likelihood the battery will be the thing that finally kills them. Most batteries die because their lives get shorter and shorter over the years - and that's a real problem for an eBook which really needs to be cable-free and to run for MANY hours without a recharge. If things settle down enough technologically - and the battery life is good enough - then perhaps there is a chance of the eBook being a better choice - but I kinda doubt it right now.
- (Dear Santa: Steve would like a Kindle for Xmas please - I have carefully sequestered the lump of carbon you sent me last year.)
- Who uses a 60 watt light bulb to read? A 15W or even 12W or heck even 8W CFL does fine. Nil Einne (talk) 08:23, 4 December 2009 (UTC)
- OK - even an 8W CFL uses vastly more power than a Kindle. The beautiful thing about ePaper is that the image stays there when you remove the power source. Hence a well-designed ePaper-based eBook reader can turn itself completely off and consume literally zero power while you're actually reading. You wake it up by pushing a button to turn the page or something - the on-board computer grabs the next page, formats it, sends it to the ePaper - then turns itself off again about a tenth of a second later. They use truly microscopic amounts of power when you are using them as intended. Of course if you surf the web with them using the wireless link or continually flip back and forth between pages - then it's going to eat more power - but for simply reading a novel or something - their power consumption is almost completely negligible. SteveBaker (talk) 19:18, 4 December 2009 (UTC)
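- As a rough illustration of how lopsided that comparison is, here is a back-of-the-envelope Matlab sketch. The energy per ePaper page refresh is an assumption for illustration only; the 8 W CFL figure is taken from the comment above:
% Reading-light energy vs ePaper page-turn energy over one hour of reading.
% ASSUMPTION: ~1 J per page refresh is illustrative, not a measured figure.
cfl_watts = 8;                           % CFL wattage from the comment above
light_joules = cfl_watts * 3600;         % one hour of reading light = 28,800 J
page_turn_joules = 1;                    % assumed energy per ePaper page refresh
reader_joules = page_turn_joules * 60;   % ~60 page turns in that hour
fprintf('Light %.0f J vs reader %.0f J - roughly %.0f:1\n', ...
        light_joules, reader_joules, light_joules/reader_joules);
- Even with a generous allowance per page turn, the reading light dominates by a few hundred to one under these assumptions.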
- Also, how many of us read ebooks with the Kindle? I had never even heard of it. Most people will use a desktop computer or a laptop, and these consume more energy than the Kindle. Most people don't turn the lights off when using a computer either, so there really isn't any saving. Finally, the environmental cost for the production of a paper book happens only once, while the power consumption for reading an ebook happens every time you read it. 169.139.217.77 (talk) 14:27, 4 December 2009 (UTC)
- See Amazon Kindle and electronic paper. It is true, at this point the Kindle and other electronic paper readers have a negligible share of the overall book market. But the real question is whether the average Kindle owner's paper "usage" goes down enough to offset how much environmental damage the Kindle does through its creation, use and disposal. I don't know the numbers (I'm not sure anyone does), so I'll make them up to explain. Say the production of a Kindle produces the same "environmental impact" or "environmental footprint" or whatever as 1000 books. I don't know if that's an accurate number or not. But if the owner of the Kindle only reads 100 books on the Kindle over the course of its life, the Kindle has not been better for the environment than the paper equivalent. If, however, they read 5000 books, it is a great improvement. As I state above, I suspect, on average, the Kindle is better for the environment than its paper equivalent over the course of its life, but that is just from a vague feeling of how much damage the paper industry causes compared to the electronics industry. TastyCakes (talk) 17:41, 4 December 2009 (UTC)
- I'd like to agree with you - but the niggling problem I have is that paper books are often read by multiple people - when I'm done reading my books, I either lend them to other people to read - or take them to my local "Half Price Books" store and sell them - or give them away to some local charity or something. I can't ever recall tossing a book into the trash. Most of the books I read are second-hand anyway - so I think it's possible that a typical book is read maybe a dozen times before it finally falls apart or something. That skews things in favor of paper books. If we assume that an average Kindle is used to read 1000 books (that seems like a very high number to me) - then if paper books are each read by 10 different people (or even by the same person 10 times) - then the Kindle has to be more environmentally friendly than 100 paper books - not 1000. I can't help suspecting that the average Kindle will only last at most maybe 10 years...probably more like 5. SteveBaker (talk) 19:18, 4 December 2009 (UTC)
- Hmm, that's true, multiple readers are left out of my simplistic calculation (a toy version of which is sketched below). And there is the added complication that the Barnes & Noble Nook allows users to lend e-books to others, and this capability could become the norm. I guess the easiest (and fairest?) way to measure it would be if electronic readers became more commonplace (say, 10% of the market for new books) and then measure how much paper production per capita decreases in the same market over the same period. Then if you could get a reasonable estimate of how long the average Kindle will last (or how long until its obsolescence) you could estimate how much paper the average Kindle displaces over its lifetime. Of course that's making the big assumption that ebook readers are the only thing affecting paper sales per capita over that period, and it seems likely that a greater percentage of people will read a greater percentage of things on phones and computers over the same period... TastyCakes (talk) 01:27, 5 December 2009 (UTC)
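- As a minimal sketch of the break-even arithmetic above, in Matlab - every number is made up for illustration, exactly as in the comments it follows:
% Toy break-even model for an e-reader vs paper books.
% ALL NUMBERS ARE ILLUSTRATIVE ASSUMPTIONS taken from the discussion above.
reader_footprint = 1000;       % device impact, measured in "book-equivalents"
reads_per_paper_book = 10;     % each paper book serves ~10 readers (guess above)
titles_read_on_device = 1000;  % titles read on the device over its whole life
% Each title read on the device displaces only a tenth of a physical book:
books_displaced = titles_read_on_device / reads_per_paper_book;
if books_displaced > reader_footprint
    disp('e-reader wins under these assumptions')
else
    disp('paper wins under these assumptions')
end
- With these particular made-up numbers the device displaces only 100 book-equivalents against a footprint of 1000 - which is exactly the point above about shared books skewing things in paper's favour.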
- Why would it be so bad to take the carbon from trees and store it in a form (paper) that won't contribute to the CO2 percentage for probably hundreds of years? I never understood why transforming trees into stored carbon should be bad, as long as trees are grown again afterwards. ----Ayacop (talk) 18:07, 4 December 2009 (UTC)
- Oh, that's not bad - it's good. But the environmental impact of printing a paper book is a lot more than just the wood pulp it's made from (which - as you say - is a positive benefit to the environment because it's sequestering carbon). Making paper from wood pulp requires diesel fuel to power the lumber trucks, gasoline for chainsaws, electricity for the pulp-making machine, and water (lots of it). Most paper is also bleached - presumably with some nasty toxic chemicals. The ink is laced with antimony and other nasty heavy metals. There is glue in the binding. Many paperback thrillers have the title embossed and coated with a thin metal foil. More gasoline is burned in getting the book from the printer to the bookstore - and for the eventual purchaser to go to the bookstore and back. So paper books certainly do have an environmental footprint. We just don't have the information to compare the size of that footprint to an eBook reader's. Gut feeling says that a single book is much less destructive than a single eBook reader - but then we don't know how many books are replaced by that reader over its lifetime - maybe it's a lot - maybe very few, because books are so well recycled across many readers. That makes this a tough question to answer. SteveBaker (talk) 19:18, 4 December 2009 (UTC)
Stars
How are we able to see stars if they are so far away? jc iindyysgvxc (my contributions) 22:02, 3 December 2009 (UTC)
- They are bright. --Jayron32 22:09, 3 December 2009 (UTC)
- There's not a lot in the way. Light doesn't just fade away over long distances -- it has to go through plenty of interstellar dust before becoming indiscernible. Vranak (talk) 22:12, 3 December 2009 (UTC)
- It does spread out, though. The brightness of nearby stars is determined more by the inverse square law than extinction. --Tango (talk) 22:15, 3 December 2009 (UTC)
A more interesting question might be: "How are we able to look at any of the night sky and not see stars?" See Olbers' paradox. Dragons flight (talk) 23:22, 3 December 2009 (UTC)
- The previous answers are missing a critical point - and (sadly) it's a somewhat complicated explanation.
- The sun is a star - a pretty normal, boring kind of star just like many others in the sky. It's so bright that you can't look at it for more than the briefest moment without wrecking your eyesight. Most of the other stars out there are at least that bright - and space is pretty empty - interstellar gases and dust make very little difference. So the only real effect is that of distance.
- As others have pointed out, that's driven by the "inverse square law" - when one thing is twice as far away as another similar thing - it's four times dimmer - four times further away means 16 times dimmer and so on. The sun is only 93 million miles away - that's 8 light-minutes. The nearest star is 4 light-years away. Let's consider Vega (which is one of the brightest stars in the sky) - if you were 93 million miles away from it - it would be about 37 times brighter than our sun and you'd need some pretty good sunglasses and a good dollop of SPF-50! But fortunately, it's 25 light years away. So, Vega is 25x365x24x60/8...about one and a half million times further away. Which means that even though it's 37 times brighter when you're up close, it's 1.5Mx1.5M/37 times dimmer from where we're standing (73 billion times dimmer) because of that inverse-square law thing.
- Our eyes are able to see a range of brightnesses from the maximum (which is about where the sun's brightness is) to a minimum of about 10 billion times dimmer than that. On that basis, Vega ought to be about 7 times too dim for us to see - but it's not. It's actually pretty bright. So you can tell right away that that inverse square law that everyone is going on about ISN'T the whole story.
- There is obviously something else going on - and that is that the total amount of light from the sun is spread over that large disk you see in the sky - and while Vega is 73 billion times dimmer, all of that light is collected into one tiny dot. It gets hard to calculate the effect that has - but it's actually rather significant because the apparent size of the sun compared to that of Vega is gargantuan. In fact, the apparent area of an object obeys the same inverse-square law as the brightness does - so when you double the distance to something, it looks four times smaller (in area, that is). That concentration of light from a perceptually large object into progressively smaller areas of our retina exactly counteracts the inverse-square law.
- Someone's going to complain about that - but think about it...that's why you can see something quite clearly when it's 200 feet away and it's not 40,000 times dimmer than when it's 1 foot away!
- That means that until you are so far away that the sun is just a speck comparable to the resolution of your retina - it's not really any dimmer to look at than it is up close. The total amount of light is much less - but the light hitting each cell in your retina is exactly the same - until the projected image of the sun on the back of your eye starts to get smaller than the size of a single cell. So if you were out at the orbit of (say) Pluto - where the sun casts almost no heat and very little light - staring at the sun's tiny disk would still ruin a very small patch of your eyeball.
- But still, 73 billion is a big number - Vega is still a heck of a lot dimmer - as you'd expect. However: remember that the sun is bright enough to literally blind you - and that your eyes are really sensitive - we can see things that are 10 billion times dimmer than the sun - so it's actually quite easy to see Vega even in very light-polluted cities. Much dimmer stars are also visible to the naked eye.
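- A quick Matlab sanity check of the arithmetic above - a minimal sketch using only the round figures already quoted (25 light-years vs 8 light-minutes, Vega at 37 solar luminosities, an eye dynamic range of 10 billion):
% Sanity-check of the Vega brightness arithmetic quoted above.
distance_ratio = 25*365*24*60/8;    % light-years over light-minutes, ~1.64e6
luminosity_ratio = 37;              % Vega vs the Sun, figure quoted above
dimming = distance_ratio^2 / luminosity_ratio;   % inverse-square law, ~7.3e10
eye_range = 1e10;                   % quoted dynamic range of the human eye
fprintf('Vega appears %.2g times dimmer than the Sun,\n', dimming);
fprintf('i.e. %.1f times below the quoted naked-eye floor.\n', dimming/eye_range);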
- I understand that an interesting question is why the night sky is not bright white rather than black, as an infinite number of stars would lead to the former. 89.242.105.246 (talk) 01:13, 4 December 2009 (UTC)
- I believe the answer to that question, known as Olbers' paradox (which remarkably was first hinted at by Edgar Allan Poe in his essay Eureka: A Prose Poem), is that the Universe is not infinitely old, so light from the more distant stars has not yet had time to reach us. Attenuation due to red shift may also play a part. 87.81.230.195 (talk) 01:44, 4 December 2009 (UTC)
- The Olbers' paradox article is pretty good - it lays out all of the possible reasons for this. SteveBaker (talk) 03:32, 4 December 2009 (UTC)
- Aha. I've actually caught the most excellent SteveBaker in a misstatement. In his first response to the OP, referring to the brightness of the Sun, he stated, "Most of the other stars out there are at least that bright." In reality, the vast majority of stars are far dimmer than the Sun. They are, in fact, so dim that we don't see them. So what was meant was that of the stars we see, most are at least as bright as the Sun. (Just a tiny correction.) B00P (talk) 08:17, 5 December 2009 (UTC)
- Indeed - I believe about 90% of the stars in the galaxy are red dwarfs, few (if any) of which can be seen with the naked eye. --Tango (talk) 11:49, 5 December 2009 (UTC)
The most useless particle
Say you had to choose one type of subatomic particle to be completely rid of: every single particle of that kind would completely disappear and no process would ever produce them ever again. Which would make the least difference to the Universe? Vitriol (talk) 22:37, 3 December 2009 (UTC)
- I strongly suspect there is no answer to this - they are all absolutely 100% necessary. Take any one away (if that's even possible - string theory says "No") then the universe would be a dramatically different place - probably life as we know it wouldn't exist. But there is no "marginally less useful" particle. SteveBaker (talk) 23:04, 3 December 2009 (UTC)
- String theory in its current form doesn't say anything useful about the Standard Model. The current thinking is that there are a huge number of string theory vacua with different effective physical laws in each one. There might be one that looks like the Standard Model with a particle missing. -- BenRG (talk) 02:46, 4 December 2009 (UTC)
- Oh, I don't know. A universe without a top quark might not differ much. Top is very hard to create and decays in ~5×10^−25 s. Now there might be secondary effects on the rest of the standard model if one removed the top, and I'm not sure how to predict what modifications to the larger theory might be necessary, but the top by itself seems of little importance. Dragons flight (talk) 23:14, 3 December 2009 (UTC)
- There is no particle that could be removed from the Standard Model without either making it inconsistent or making life impossible. However, we could remove a whole group of particles, such as the third generation of the standard model (which comprises the tau, tau neutrino, top quark, and bottom quark). This is the only one of the three generations with no practical applications. 74.14.108.210 (talk) 23:13, 3 December 2009 (UTC)
- Not to hijack the question but could you elaborate on that a little? Why would it be inconsistent or non-life sustaining if, for example, the top quark didn't exist? Maybe not so many pleasing symmetries would exist but where are the serious effects? 129.234.53.144 (talk) 23:56, 3 December 2009 (UTC)
- In physics the math always balances. If the top quark were missing, some physical interaction would not balance, which is impossible. So some other particle or effect would, nay MUST, happen instead. Which would then have implications, etc, etc. Make any change, and everything else changes too. Ariel. (talk) 00:01, 4 December 2009 (UTC)
- The up-type and down-type quarks couple via the weak interaction, and I think there's a loss of unitarity if you don't have the same number of particles of each type. On the other hand, there are preon theories with nine quarks, four of one type and five of the other, that don't have unitarity problems as far as I know. The story is the same on the lepton side. I don't think there's any known reason why there have to be the same number of quarks as leptons, though, so you can get rid of just two quarks or just a charged lepton and a neutrino without trouble. (This is not quite "getting rid of the top and bottom" or "getting rid of the tauon and tau neutrino" because you would also have to rejigger the CKM matrix or PMNS matrix, which alters the nature of the leftover particles as well.) One problem with dropping a generation is that there can be no CP violation in the weak interaction with two generations, and CP violation of some kind is needed to explain why there's more matter than antimatter. But I don't think the CP violation in the weak force can be used to explain that anyway. My vote for the most useless particles goes to the right-handed neutrinos, unless they turn out to be an important component of dark matter. -- BenRG (talk) 02:46, 4 December 2009 (UTC)
- BenRG, the symmetry between the number of leptons and quarks is necessary in order to cancel the gauge anomalies that would otherwise destroy gauge symmetry and spoil renormalization. Dauto (talk) 07:13, 4 December 2009 (UTC)
- Oh. Oops. -- BenRG (talk) 17:53, 4 December 2009 (UTC)
- More importantly, per Murray Gell-Mann, "that which is not forbidden is mandatory" in particle physics. The existence of the top and bottom quarks is necessitated by the symmetry in the Standard Model. The entire system predicts the existence of said particles, therefore they are ALL equally vital. We have a psychological sense that particles like electrons are more vital because we tend to work with them more often, but the entire system of particles is not separable; you must take them all, because the laws that created the top quark also created the electron; you could not create a universe with one and not the other. You can think of the Standard Model like a house of cards. If you remove any part of it, the whole system does not stand. See also anthropic principle for more on this. --Jayron32 00:16, 4 December 2009 (UTC)
- Here's an interesting article for you: Weakless Universe - they imagine a universe where something is missing. But as you see, they had to change various other things too to make it work. Ariel. (talk) 00:31, 4 December 2009 (UTC)
- Excuse me, but that's a totally nonsensical answer. If there were no top quark, the standard model would be seriously broken, I agree. But that's still just a human model of physical reality. If the universe had no top quark, then that would imply physicists need to discover a theory of particle physics that is different from the standard model, and one in particular where top quark formation is forbidden. However, because the top quark is almost never involved in interactions at human scales, more likely than not one could invent a new theory (perhaps much less elegant) that still gave the same predictions for human life as we have now. The Standard Model might be a "house of cards", but physical reality need not adhere to your sense of aesthetic beauty in determining its laws. For another example, the Higgs boson has been long sought after but not yet found. Most physicists seem to believe the Higgs will eventually be found, but one can just as well replace the Standard Model with one of several Higgsless models and our physical reality would look the same. Dragons flight (talk) 00:35, 4 December 2009 (UTC)
- The Standard Model doesn't predict the number of generations; there's no known reason why there should be three. I don't know of any anthropic reason either. "Everything not forbidden is compulsory" is not about particle content. It's a statement that any interaction or decay that's not forbidden by a conservation law has a nonzero probability of occurring in quantum mechanics (classically forbidden transitions can happen in quantum mechanics because of tunneling). -- BenRG (talk) 02:46, 4 December 2009 (UTC)
- If string theory is to be believed - then all of these particles are just modes of vibration on a string - getting rid of one mode of vibration is an entirely unreasonable proposition - so it's very possible that these things are no more removable from the universe than the color yellow or objects weighing exactly 17.2kg. SteveBaker (talk) 03:29, 4 December 2009 (UTC)
- Strings have vibrational modes (harmonics), and those vibrational modes are particles, but the modes are quantized in multiples of roughly the Planck mass. All observed particles have masses far smaller than that, so they all belong to the ground state of string vibration. They're supposed to be distinguished by their behavior in the extra dimensions, but there's no reason to believe that the shape of the extra dimensions is unique. You can say a similar thing about quantum field theory. "Particles are just vibrational modes of the vacuum" is an accurate enough statement about QFT. It doesn't make sense to get rid of one vibrational mode, so you're stuck with a certain set of particles—for a given vacuum. But this doesn't answer anything; it just rephrases the question about the particle content as a question about the vacuum.
- There was some speculation in the earlier days of string theory that it would turn out to have a unique vacuum which would have the Standard Model as a low-energy approximation, but the current thinking is that there are lots of vacuum states and only some of them match the Standard Model. Whether there are vacuum states corresponding to slight variations of the Standard Model isn't known. It isn't even known that there's a vacuum state corresponding to the Standard Model, though obviously they hope that there is. -- BenRG (talk) 05:13, 4 December 2009 (UTC)
- I could definitely do without [fat electrons] being sent down the electricity supply and clogging up my computer:) Dmcq (talk) 06:43, 4 December 2009 (UTC)
December 4
Storks
Why do you get storks in places like Germany and Holland but not in Britain? Germany has a more severe winter than Britain, so that cannot be the reason. 89.242.105.246 (talk) 01:08, 4 December 2009 (UTC)
- Storks are occasionally seen in the UK [14]. However, the UK is rather far west and north-west of their habitat (Central and Eastern Europe). They migrate south by one of three routes, AFAIK: a western route over France, a central route over Italy, or an eastern route over Israel. If they were to spend the summer in the UK, they would have to fly east to France and then south, which, I guess, they usually don't. --Dr Dima (talk) 01:42, 4 December 2009 (UTC)
name of physics book
What is the name of the physics book depicted here? In case you are wondering, that is Tiger Woods's car. Thanks. 67.117.130.175 (talk) 01:24, 4 December 2009 (UTC)
- It is “Get a Grip on Physics” by John Gribbin. Edit: the above link does not work for those of us for whom Google automatically redirects to Google.co.uk, so here is a UK version of it: [15] 78.149.192.188 (talk) 11:24, 4 December 2009 (UTC)
- Took a minor liberty and Wikilinked the name in your post, 78.149, as I'm a fan of Gribbin. 87.81.230.195 (talk) 12:55, 4 December 2009 (UTC)
Cat and dog ear fold.
What is the function of the small pleat on the ventral/posterior exterior margin of a cat or dog auricle? Presumably other Carnivora have this feature as well. It looks like this: ----==-===--- where the skin doubles, and the interior fold is divided. The structure is visible in this image: http://en.wikipedia.org/wiki/File:Terrier_mixed-breed_dog.jpg. -Craig Pemberton 01:43, 4 December 2009 (UTC)
- I'm going to go out on a limb and say that this fold has no appreciable function. My guess is that it is a vestigial trait left over from some ancestral characteristic. I also have to admit that I am not at all qualified to make such an assumption, so if someone else has evidence to the contrary you can safely ignore my answer. Presumably, since some bats have quite gnarly ears, the folds play some role in sensing direction or attenuating certain sounds, but a lot of bats also have smooth ears and hear just fine, which seems to suggest that this characteristic doesn't play a major role. Similarly, it's hard to imagine the folds in human ears play a significant "functional" part in our hearing, if any part at all. Vespine (talk) 05:26, 4 December 2009 (UTC)
- I seem to remember reading somewhere that the folds in human ears modify the frequency ranges of sounds coming from different directions, allowing a person to know where the sound is coming from (not just left or right but in front or behind, above or below). I must, like you, point out that I have no idea what I'm talking about really, though. 213.122.24.221 (talk) 17:58, 7 December 2009 (UTC)
- I've always assumed that it has to do with turning the ears. That is, the fold is present when the attached muscle is relaxed, while it's straightened out and the ear turns when the muscle tenses. StuRat (talk) 05:28, 4 December 2009 (UTC)
Rechargeable Batteries
Just curious: which is faster - draining a battery OR charging it? Or can they be completed in roughly equal periods of time? I'm thinking draining a battery could potentially be faster because (I think?) batteries don't heat up when they lose power, only when they're charged, so the absence of a thermal consideration would allow for a faster rate of flow? Thanks! 218.25.32.210 (talk) 02:00, 4 December 2009 (UTC)
- I guess you've never used a smart phone or semi-smart phone then. Draining a battery can definitely result in it heating up. This happened even with my Panasonic VS2, which wasn't a particularly fancy phone, when using GPRS or when taking many photos nearly continuously. In fact, from some quick Googling, I see it can happen with continuous talking too, which makes sense - so I wonder if you could notice this even with completely non-smart mobile phones in some circumstances, and you may be able to try it yourself if you have a mobile phone (although it's obviously going to cost money). I'm thinking here of lithium-ion batteries obviously, but I'm pretty sure this would apply to most rechargeable batteries. Obviously, when it comes down to it, it depends on the conditions. You could discharge or charge a battery at a very high rate, but it may damage the battery or, in some cases - particularly lithium-ion batteries - result in explosions. For example, you can get fast chargers for NiMH batteries that are supposed to charge in about 15 minutes, but as the batteries get hot and it isn't particularly good for them to be charged so fast, many have a switch to allow slower charging (which would still be faster than most traditional chargers, and I suspect even extremely fast charging is probably significantly better for the battery than the overcharging that can happen with old unsmart chargers). Similarly, most lithium-ion batteries have temperature sensors, I believe, and these help limit the rate of discharging and charging to prevent the battery getting too hot. Nil Einne (talk) 02:39, 4 December 2009 (UTC)
- Though I own a smartphone, 99% of my usage is text messages, so I had indeed never experienced what you relate. In terms of my original question, I'm really more interested in laboratory/theoretical conditions and the physics behind the results - though I thank you for your long and detailed response! :-) 218.25.32.210 (talk) 02:55, 4 December 2009 (UTC)
The question is relevant to designing an electric bus to run a fixed urban route. The goal is to have two buses, one always charging and the other always in motion. My assumptions are that the bus design sets no limit on the battery size or weight, that the battery type (to be defined) cannot accept charge as fast as it discharges in use, and that any charging arrangement can be made available at the bus terminus. It seems that if the bus is equipped with X times as much battery capacity as needed to complete its route until the buses swap at the terminus, then the battery can be charged at 1/X times the current at which it discharged. That will be achieved by switching the battery cells from parallel for driving to series for charging. Am I right? X must be a smallish integer. Cuddlyable3 (talk) 14:33, 4 December 2009 (UTC)
- What's the use of a bus if it's always in motion? Also are we talking a dedicated busway here or something? Or just completely empty roads? Nil Einne (talk) 15:59, 4 December 2009 (UTC)
- There are services for swapping car batteries, you might want to look at something similar:[16][17]. Fences&Windows 16:06, 5 December 2009 (UTC)
- By one bus being "in motion" I mean "in service", meaning it follows a predetermined circular route, stopping when needed to take on or unload passengers. I don't see how a distinction between dedicated busway or empty road affects the calculation. I don't see a future for the battery-swapping machine, which is an expensive investment, needs a critical interaction of man and machine, and has a lot of ways to go wrong. A bus that is dimensioned to carry 40-60 people and serves a route of say 10-40 km can carry its batteries, to be changed only when they wear out after some hundreds of charge cycles. Cuddlyable3 (talk) 23:12, 5 December 2009 (UTC)
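- For what it's worth, the energy bookkeeping behind the two-bus scheme can be sketched in a few lines of Matlab. Every figure below is an illustrative assumption, not a real bus specification:
% Toy energy bookkeeping for the two-bus scheme above.
% ALL NUMBERS ARE ILLUSTRATIVE ASSUMPTIONS.
route_kwh = 30;      % assumed energy to complete one lap [kWh]
lap_hours = 1;       % assumed lap time; the other bus charges for this long
X = 4;               % battery capacity as a multiple of one lap's energy
pack_kwh = X * route_kwh;
% Whatever the capacity, one lap's energy must be replaced in one lap's time:
charge_kw = route_kwh / lap_hours;    % average charging power, here 30 kW
c_rate = charge_kw / pack_kwh;        % charge rate relative to pack capacity
fprintf('Average charge power %.0f kW, a gentle %.2fC on the pack\n', ...
        charge_kw, c_rate);
- Note that with a fixed swap cadence, oversizing the pack by X doesn't lower the average charging power - one lap's energy still has to go back in within one lap's time - it lowers the charge rate relative to capacity (the C-rate), which is presumably what the 1/X figure is really getting at.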
Hair
Why do girls typically have longer hair than boys? jc iindyysgvxc (my contributions) 05:34, 4 December 2009 (UTC)
- For the same reasons they wear lipstick when most men don't - cultural norms. Note that men having shorter hair than women on average is not universal to all cultures. In societies in which men traditionally wear turbans, their hair will be very long, perhaps even perpetually uncut such as with Sikhs. In those situations you may very well have most women, even with "long" hair, walking around with less hanging off their skulls than men! 218.25.32.210 (talk) 05:44, 4 December 2009 (UTC)
- No, I'm pretty sure that girls are capable of growing longer hair than boys. It's not just a matter of how they cut it. So the OP's question stands. Some googling found that estrogen and androgen have an effect on how long hair stays in the anagen phase. Google for hormone and hair. Ariel. (talk) 06:32, 4 December 2009 (UTC)
- Can you point to any WP:RELIABLE sources for your claim, or just a Googled bunch of blog entries from random Internet people? Tempshill (talk) 07:55, 4 December 2009 (UTC)
- I have no real ref. But as evidence I point to the fact that women grow and retain a lot more hair when pregnant (and then it falls out a month or two after they give birth), and men get bald while women don't. It's clear that hair responds to hormones. Ariel. (talk) 23:52, 5 December 2009 (UTC)
- I don't see any reason to expect Sikh men to have more hair than Sikh women. Both are expected to observe Kesh (Sikhism) AFAIK. Women may not wear turbans, but unless that increases hair growth or reduces it falling out or something, it's fairly irrelevant. Now, if you include facial hair and body hair, and perhaps because men tend to be slightly larger on average, you could argue that men would on average have more hair, but a more sensible interpretation wouldn't include those factors IMHO, so it seems they would have roughly equal amounts of hair. Also, since no one has done so yet, I might as well link to Long hair. Nil Einne (talk) 08:14, 4 December 2009 (UTC)
- As anecdotal evidence contradicting Ariel's contention, consider that I (a British white male) have in two separate periods of my life (ages ca 19-23 and 45-50) grown to and maintained my hair at near waist length with no difficulty: I've also encountered plenty of other adult males with hair as long or longer. There may be weak statistical trends in 'hair-length potential' attributable to sex, but the overwhelming bulk of the generally observed length differences is purely down to fashion. 87.81.230.195 (talk) 12:51, 4 December 2009 (UTC)
- I found some data from the 1950s that suggests that women's head hair grows ever so slightly faster than men's - though only by about 0.02mm/day.[18] Fences&Windows 15:25, 4 December 2009 (UTC)
- This may be a crazy idea: what if the reason females grew hair faster was that the hormones that signal hair growth are spread out over more places in the male body than in the female? Mac Davis (talk) 16:36, 5 December 2009 (UTC)
Green Laser Pointer
Recently I bought a green laser pointer, thinking of high power etc., but when I switched it on I saw that it NEVER focuses to a single point, but into nearly fifty or so points, dividing the power all over. I thought there would be some adjustment that could be removed, but no. There is a lens-type thing on the front that can be slightly rotated (causing the points to dance here and there), but there is no way I can focus it to a single point. What should I do to it to make it a SINGLE point?
Jon Ascton (talk) 07:56, 4 December 2009 (UTC)
- You need to hold a suitable (separate) convex lens in front of it. MER-C 11:39, 4 December 2009 (UTC)
- See the image in the article Speckle pattern. Cuddlyable3 (talk) 14:07, 4 December 2009 (UTC)
Restyl tablets
Question removed due to request for medical advice. If you are concerned for your health, please call emergency services.
activation energy and pri/sec/tert advantages
On paper (resonance structures), the benzylic/allylic site looks quite reactive, but from the bond-energies table I see that a benzylic or allylic C-H bond is only about 15 kcal/mol weaker than a "normal" C-H bond. The same goes for C-X bonds (halide). Are there other effects at play, besides (I guess) bond weakness? In fact, choosing iodide as a leaving group over chloride seems to give a much bigger energy advantage than going benzylic/allylic!
Btw, is iodide catalysis "true catalysis"? (Where in an alkyl halide SN2 substitution reaction you put some iodide in solution to speed it up.) What I understand is that iodide is a good leaving group but iodide is not that solvated (so you start out with higher energy, allowing it both to react and leave), but chloride is more solvated (so it ends up in lower energy) so in fact you've increased the energy gap between the reactants and the products, altering the equilibrium. John Riemann Soong (talk) 15:17, 4 December 2009 (UTC)
Organizing chemicals (practical)
I have been tasked with organizing the chemical room of a foodservice D.C. The chemicals are those you'd expect restaurants and other institutions to order - cleansers, chafing fuel, de-greasers, detergents, sanitizers, de-limers, soaps, etc. While they've never had a serious spill, there's no harm in taking some extra precautions, so I'd like to organize the room in such a way as to minimize the risks associated with the accidental mixture of chemicals. As the first order of business, I've separated the strong alkalis, acids (of which there are only a few), and flammables (again, only a few) away from each other, keeping more innocuous items like soaps in between. What else ought I keep in mind? Should oxidisers like sodium hypochlorite be given special treatment? Would it be better to have the oxidisers near the alkalis or the acids - or completely separate? Let me emphasize: the actual risk of accidental mixture is very low, the substances are securely packaged and carefully stacked, and we're talking about commercial and light industrial mixtures here, not weapons-grade stuff :). Matt Deres (talk) 16:14, 4 December 2009 (UTC)
- Follow the normal rules of H&S, i.e. keep heavy items on lower shelves if not on the floor, and items which are frequently used closer to hand than items which are infrequently used. Make sure the shelves are properly labelled with the item which is to be kept in that position, so all your plans are not set to nought by people who think they know better! --TammyMoet (talk) 16:24, 4 December 2009 (UTC)
- I'm not worried about that stuff (though I appreciate the advice); I organize and design distribution centres as part of my job. I just want to make the room as safe as it can be, keeping potential chemical reactions in mind, as well as the normal stuff. And, to be honest, you don't necessarily want heavy stuff on the floor; something around hip height is best - would you rather pick up a frozen turkey from floor height or from counter height? Matt Deres (talk) 16:43, 4 December 2009 (UTC)
- (ec) Even with "low grade" or light commercial chemicals, serious hazards may exist. For example, never mix bleach with ammonia or acid. Be sure to check the MSDS for any chemical - these safety sheets have storage guidelines that will outline any safety issues associated with storage. Typically, acids and bases are stored in separate cabinets. Oxidizers are never stored near fuels. Pressurized gas falls into its own category, and gas cylinders have entirely separate safety requirements (often mandatory outdoor storage, depending on conditions). Above all, consult the MSDS sheets - these are very informative and will spell out any potential hazards in plain English. Some "benign" chemicals may have storage details that you did not know about. Nimur (talk) 16:44, 4 December 2009 (UTC)
- The MSDSs are, of course, the ultimate guide. The problem there is that when you're dealing with hundreds of chemicals, it may be more useful to start with general guidelines and then work your way down, so to speak. It's a shame that the word "bleach" has more than one meaning; hydrogen peroxide is both a bleach and an acid, so according to that poster I should keep it away from itself :-). Matt Deres (talk) 18:12, 4 December 2009 (UTC)
- As important as the location is the manner of storage: Flammables in a fire cabinet, liquid acids on a spill tray, gas canisters behind chains, etc. On the other hand, my local supermarket keeps liquid acid drain cleaner right above the liquid base drain cleaner and right near the bread! Rmhermen (talk) 19:20, 4 December 2009 (UTC)
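- If it helps the planning, segregation rules of this kind are easy to encode and sanity-check. Here is an illustrative Matlab sketch; the groups and incompatibility pairs are examples only, and the MSDS of each actual product remains the authority:
% Illustrative storage-segregation check.
% GROUPS AND PAIRS ARE EXAMPLES, not a substitute for the actual MSDS data.
bad_pairs = {'acid','alkali'; 'acid','oxidizer'; 'oxidizer','flammable'};
shelf = {'acid','neutral','alkali','neutral','oxidizer','neutral','flammable'};
for k = 1:numel(shelf)-1
    a = shelf{k}; b = shelf{k+1};
    clash = any( (strcmp(bad_pairs(:,1),a) & strcmp(bad_pairs(:,2),b)) | ...
                 (strcmp(bad_pairs(:,1),b) & strcmp(bad_pairs(:,2),a)) );
    if clash
        fprintf('Warning: %s stored next to %s\n', a, b);
    end
end
% With the 'neutral' buffers above nothing is flagged; remove one of them
% from the shelf list and the corresponding warning appears.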
December 5
causes of internal ache and chokeness along the throat
Removed request for medical advice. The only advice Wikipedia can give is to call a doctor and have a face-to-face meeting with him/her. Only a medical professional can give responsible medical advice.
Wikipedia does not give medical advice
Wikipedia is an encyclopedia anyone can edit. As a result, medical information on Wikipedia is not guaranteed to be true, correct, precise, or up-to-date! Wikipedia is not a substitute for a doctor or medical professional. None of the volunteers who write articles, maintain the systems or assist users can take responsibility for medical advice, and the same applies for the Wikimedia Foundation.
If you need medical assistance, please call your national emergency telephone number, or contact a medical professional (for instance, a qualified doctor/physician, nurse, pharmacist/chemist, and so on) for advice. Nothing on Wikipedia.org or included as part of any project of Wikimedia Foundation Inc., should be construed as an attempt to offer or render a medical opinion or otherwise engage in the practice of medicine.
Please see the article Wikipedia:Medical disclaimer for more information.
Pyruvic Acid vs. Pyruvate as end product of Glycolysis
Most sources I've seen (incl. wiki) say that pyruvate is the end product of glycolysis, except I was reviewing some biology in the Schaum's Outlines and it said pyruvic acid. According to Wikipedia the formula for pyruvic acid is C3H4O3 (I don't know how to do subscripts) and pyruvate is C3H3O3, which makes sense given that pyruvate is the ionized form. In the glycolysis article it says pyruvate is the end product, but if you look at the picture (glycolysis overview) the end product has 4 hydrogens, which would make it pyruvic acid, not pyruvate. This makes more sense because after glycolysis, if fermentation occurs, the end product, supposedly pyruvate, is reduced twice by the two NADH to make a 6-hydrogen compound, which doesn't make sense because pyruvate reduced twice would only have five hydrogens. So my question is: is the end product of glycolysis pyruvate or pyruvic acid? Thanks, 76.95.117.123 (talk) 02:19, 5 December 2009 (UTC)
- They are the same thing. Pyruvic acid is C3H4O3, and pyruvate is the anion C3H3O3-. If you read the article on pyruvic acid, the second sentence of the lead tells you just that. Pyruvate is the form used by the Citric Acid Cycle. ~ Amory (u • t • c) 02:50, 5 December 2009 (UTC)
- They are the same thing, and which form prevails basically depends on the pH in the cell. In this case it's most likely pyruvate - any pyruvic acid generated would have dissociated into pyruvate and proton anyway. Tim Song (talk) 02:59, 5 December 2009 (UTC)
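- To make the pH point concrete: plugging pyruvic acid's pKa of about 2.5 and an assumed cytosolic pH of about 7.4 into the Henderson-Hasselbalch relation gives
ratio = 10^(7.4 - 2.5)   % ~8e4: about 80,000 pyruvate ions per pyruvic acid molecule
- so at physiological pH essentially all of it is pyruvate.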
- So I could say either as the answer to the question? But if the pyruvic acid disassociated into pyruvate then fermentation wouldn't produce a 6 H compound. The only reason I'm curious is that I do Science Bowl and the question sometimes comes up. Which answer would be more correct? I kinda said that pyruvate is the ionized form of pyruvic acid in my question btw...66.133.196.152 (talk) 03:09, 5 December 2009 (UTC)
- Say pyruvate because that is the form it will be in given the conditions. Also, pyruvate and H+ are among the reactants in anaerobic respiration. I saw you said that about the ions, apologies if you felt slighted. I just wanted to set up the proper subtext and background. ~ Amory (u • t • c) 03:25, 5 December 2009 (UTC)
- If they say it's wrong on those grounds you can always appeal. Pyruvate may be temporarily protonated in an enzyme at the active site, but usually what happens is that the COOH group has to be deprotonated. This gives the COO- system the electron it needs to expel the weak carbonyl-carbonyl bond and cleave as carbon dioxide. It can't cleave if it's protonated. ;-) The two-carbon molecule remaining (acetaldehyde) is further oxidised and attacked by the thiol sulfur of CoA to become acetyl-CoA. John Riemann Soong (talk) 03:28, 5 December 2009 (UTC)
You can think of it like this:
The proton of pyruvic acid helps supply protons to the proton pump in the electron transport chain. Note that NADH (reduced form of NAD+) carries 2 electrons but only one proton. The other "lost" proton has to come from deprotonating pyruvic acid. ;-) (As you might know, carboxylate is a weak base so it's not very good at taking back the lost proton.)
Decarboxylation (loss of CO2) donates a pair of energetic electrons (to NAD+) that will be used for the electron transport chain. The thermodynamic stability of CO2 helps drive the donation.
Acetyl-CoA is a useful anabolic building block (if you want to build sugars or fatty acids), but if you want to oxidise it all the way (use all its energetic electrons), it's kinda hard to pull electrons out of a molecule down to nothingness via evolving CO2 (converting acetaldehyde to formaldehyde and then formic acid would be a pretty bad idea), so it goes through the citric acid cycle instead. John Riemann Soong (talk) 03:57, 5 December 2009 (UTC)
- Ok, thanks John Riemann Soong! The first explanation you gave helped me a lot. And if I challenge, I'll say the wiki ref desk told me :-)
66.133.196.152 (talk) 04:11, 5 December 2009 (UTC)
heat modelling
This code is the outcome of modelling a spot welding process. I have arrived at equation (1). We take the initial heat (due to atmospheric conditions) at each point as unity; this is coded in the initialisation section. Now dq is sent to ode45 for solving over a prescribed time domain, with initial condition y0 = 0.
function dq = heat(t,q)
p = 5;  % number of variables
% ---------------------------
% generation of const matrix
% ---------------------------
A = [5 4 3 2 1]';  % arbitrarily chosen constants A, B, C, D
B = [5 4 3 2 1]';
C = [5 4 3 2 1]';
D = [5 4 3 2 1]';
dq = zeros(p,1);
% ----------- initialisation -----------
for i = 1:p
    q(i) = 1;
end
dq(1) = A(1)*q(2) + B(1)*q(1) + D(1);
for i = 2:p-1
    dq(i) = A(i)*q(i+1) + B(i)*q(i) + C(i)*q(i-1) + D(i);  % ----- (1)
end
Here 'i' represents the weld number. The code considers the contribution from the point before and the point after point 'i', and the contribution of heat added at the next point. Now my problem is that I want to optimise this process, i.e. minimise dq; I need the weld to cool as fast as possible. So what parameter should I consider for optimisation, and what method should I adopt? SCI-hunter (talk) —Preceding undated comment added 03:01, 5 December 2009 (UTC).
- See this duplicate inquiry at WP:RD/Math. Take your pick, but not both. You are in a little maze of twisty passages. hydnjo (talk) 03:49, 5 December 2009 (UTC)
- I have formatted the code for readability. Nimur (talk) 04:45, 5 December 2009 (UTC)
- The code seems rather nonsensical - the first 'for' loop sets all members of q(1..p) to 1. So surely the second loop sets every element of dq(n) to A(n)+B(n)+C(n)+D(n) ? Why so much complication? You don't say what language this is written in - but what C-like programming language has arrays that start from index 1? This suggests that whatever this code is intended to do...it's not doing it. SteveBaker (talk) 16:16, 5 December 2009 (UTC)
- It was subtly implied that it was Matlab code; between the syntax and the reference to ode45; the OP might want to read our guide on asking for help with code. Nimur (talk) 19:11, 5 December 2009 (UTC)
It's written in Matlab and it's approximately functioning correctly. Please help now. 220.225.98.251 (talk) —Preceding undated comment added 16:28, 5 December 2009 (UTC).
- What do you mean by "minimize dq" ? dq is a function (vector in your code).
- More broadly, try the following steps:
- Formulate a clear mathematical statement of the physical problem you are trying to solve.
- Derive (or pick) a mathematical solution/algorithm for the problem (or its discretized/approximate version)
- Write Matlab code for the algorithm. Test and debug it.
- Right now, you seem to be at step 3, and it is not clear (at least to us) if you have followed the previous steps. As such, your code does what it does, but we cannot determine if it actually implements the algorithm derived in step 2, and if the algorithm solves the problem in step 1 (remember GIGO).
- PS: You should consult fellow students for tips on better Matlab coding; your current code is pretty poor. For example, the function takes in inputs t and q, and then doesn't use either. Instead it simply defines q. Also your first loop can be replaced by q = ones(p,1). Note that this review is intended to guide, not criticize. Hope it helps. Abecedare (talk) 16:52, 5 December 2009 (UTC)
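For illustration, here is a minimal sketch of the function with those tips applied: q is treated as the state supplied by ode45 instead of being overwritten, the initial condition moves into the caller, and a boundary term for the last point is guessed (that last term is an assumption, not something in the original code):

function dq = heat(t, q)
% Right-hand side for ode45; q is the state passed in, not overwritten.
p = numel(q);
A = [5 4 3 2 1]'; B = A; C = A; D = A;  % arbitrary constants, as in the original
dq = zeros(p, 1);
dq(1) = A(1)*q(2) + B(1)*q(1) + D(1);  % first point: no left neighbour
for i = 2:p-1
    dq(i) = A(i)*q(i+1) + B(i)*q(i) + C(i)*q(i-1) + D(i);  % equation (1)
end
dq(p) = B(p)*q(p) + C(p)*q(p-1) + D(p);  % assumed boundary: no right neighbour
end
% usage, with the "initial heat of unity at each point" as the initial condition:
% [t, q] = ode45(@heat, [0 10], ones(5, 1));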
- Abecedare, dq is not a function. That is the syntax for declaring a return value. The function, heat(t,q), returns a vector whose local name is dq. This is standard MATLAB code style. What is unclear is why the code overwrites q, which is an input; and why it does that overwrite in such an inefficient and convoluted way. I suspect the OP used "pseudocode" or dummy assignments instead of writing a comment or actually implementing the correct physics. If the OP reviews Abecedare's and others' suggestions, and our software help guidelines, it will greatly help us answer the problem. I'm also going to posit that the simulated annealing article may be conceptually helpful, as well as the heat equation article. Nimur (talk) 19:40, 5 December 2009 (UTC)
- I meant to indicate that dq is not scalar valued, so it doesn't make sense to try and minimize it. My language was ambiguous though; thanks for pointing it out. Abecedare (talk) 19:54, 5 December 2009 (UTC)
- I guess if not otherwise specified, minimizing a vector implies minimizing its L2 norm. Nimur (talk) 22:30, 5 December 2009 (UTC)
Bohr Magneton Number for Copper Sulphate.
I'm currently trying to calculate the dimensionless Bohr Magneton number peff for CuSO4·5H2O. The formulae I have are:
and
Where all the symbols have usual meanings and values. From this, peff should be:
However, the formula I have been given for the dimensionless Bohr magneton number is:
Where the fundamental constant of magnetism of an electron is squared in the denominator, how can this be? Thanks for any help 188.221.55.165 (talk) 13:32, 5 December 2009 (UTC)
- When you substituted, you forgot about that square root. Outside the square root use but if you move it inside the parenthesis you have to square it.. Graeme Bartlett (talk) 21:30, 5 December 2009 (UTC)
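(Spelled out, assuming the usual SI Curie-law form of the susceptibility, which appears to be what the question uses:
<math>\chi = \frac{N \mu_0 \mu_{\text{eff}}^2}{3 k_B T} \quad\Rightarrow\quad p_{\text{eff}} = \frac{\mu_{\text{eff}}}{\mu_B} = \frac{1}{\mu_B}\sqrt{\frac{3 k_B T \chi}{N \mu_0}} = \sqrt{\frac{3 k_B T \chi}{N \mu_0 \mu_B^2}},</math>
i.e. the single factor of <math>\mu_B</math> outside the square root becomes <math>\mu_B^2</math> once it is moved inside.)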
- Oh, yeah....simple....thanks Alaphent (talk) 08:26, 6 December 2009 (UTC)
- And I thought I would have to understand Bohr Magnet(r)on number, but actually only algebra was needed! Graeme Bartlett (talk) 05:58, 7 December 2009 (UTC)
Does rice water react chemically with a mineral-water plastic bottle?
I collected some rice water after washing rice, to use for watering plants.
Because it kept raining these few days, I kept the rice water in plastic bottles to water the plants later.
But I found that after about two weeks, the plastic bottles had hardened and bloated.
The base of the bottle also bloated until it could hardly stand on a flat surface.
I'm wondering, is there any chemical reaction between the rice water and the plastic bottle?
I'm curious and wish to know more about this condition, and also the reason why the bottle becomes like this.
Can anyone help to find out the reason?
These are the problem statements I wish to answer:
1. What are the factors causing the bottle to bloat and harden?
2. What are the effects (positive and negative)?
3. Does the chemical reaction bring harm to humans?
4. Does it bring harm to plants if I water them with this rice water?
These are the conditions in which I kept the rice water for about two weeks:
1. Dates I kept the rice water in the plastic bottles: 21/11/2009 to 5/12/2009 (I discovered the condition on 5/12/2009)
2. Temperature: about 27 degrees Celsius to 33 degrees Celsius (sometimes in air conditioning at 24 degrees Celsius)
3. Place I kept it: in a cupboard in my room
4. Not exposed to sunlight.
And these are a few pictures of the bottles' condition:
-
This picture shows the difference between the base of the original plastic bottle and the bottle with rice water inside.
-
This picture shows that the plastic bottle with rice water inside can hardly stand on a flat surface.
-
This picture shows that the upper part of the bottle has also bloated.
-
This picture shows that I kept the bottles with rice water inside in a bookshelf with glass windows.
--perfection is not intact.. (talk) 19:26, 5 December 2009 (UTC)
- We can't look at the pictures unless you put them somewhere that we all have access to - upload to Commons or somewhere equally accessible. Mikenorton (talk) 17:14, 5 December 2009 (UTC)
- Your links are inaccessible as well. bibliomaniac15 18:00, 5 December 2009 (UTC)
- Thanks, that's much better. Mikenorton (talk) 18:51, 5 December 2009 (UTC)
- I'm sorry about the previous condition. I'm still a newbie on Wikipedia; that's why I keep looking for the instructions and ways to fix those problems. And thank you for your help in guiding me. Just now, I was still finding out how to reply.--perfection is not intact.. (talk) 19:26, 5 December 2009 (UTC)
- Perhaps you are inadvertently making rice wine? 75.41.110.200 (talk) 18:06, 5 December 2009 (UTC)
- I have to agree that fermentation of rice starch in the water, creating carbon dioxide, is a likely cause of this. Mikenorton (talk) 18:58, 5 December 2009 (UTC)
- But I just kept the rice water after I washed the rice. At first my motive was just to water the plants later, because the past few days were rainy. Only yesterday did I find out that the shape of the bottle had changed and hardened. Err.. does it mean that I accidentally made rice wine, which produced carbon dioxide, and the carbon dioxide hardened the bottle?--perfection is not intact.. (talk) 19:26, 5 December 2009 (UTC)
- Two weeks at that sort of temperature is certainly enough to ferment the starch. The bottle should have a bit of pressure inside; when you open the top it will outgas. There should be some nasty smell associated with it; your result is probably not drinkable. But plants may be able to tolerate it. Graeme Bartlett (talk) 21:25, 5 December 2009 (UTC)
- Yes!! I just tried to open the cap and it released gas.. so is it carbon dioxide being released? I don't have lime water at home, so I'm unable to test it. And there is a nasty smell too! But after I opened the cap and the gas was released, the bottle went back to its softness. So, can I conclude that rice water under these temperature conditions will ferment and build up pressure? Is it part of an anaerobic respiration reaction? But there is no yeast inside. Besides that, can bacteria inside the bottles replace the yeast? --perfection is not intact.. 06:28, 6 December 2009 (UTC)
- I doubt that the bottle itself has changed its hardness. It's just that the contents are under such high pressure that they're pushing out against the walls of the bottle really hard.
- Incidentally, it's only a matter of time before those bottles burst and spray that rice water all over the place. It might not be a good idea to store them so close to a bunch of books. APL (talk) 21:52, 5 December 2009 (UTC)
- Thanks for reminding me. =) I have placed the bottle in a bucket. I had a crazy idea.. I wonder, if I keep the rice water in that bottle, how long will it take to burst, or will it burst at all? It would test the "toughness" of the bottle too..haha xD --perfection is not intact.. 06:28, 6 December 2009 (UTC)
- The yeast naturally arrives from the air. 75.41.110.200 (talk) 18:15, 6 December 2009 (UTC)
- Indeed it does: in Belgium, Lambic beers are fermented by allowing wild yeasts (and some bacteria) to drift in and 'infect' the wort, rather than by adding cultivated yeasts as in more conventional brewing. On a similar note, I find that if I partly consume a carton of pure orange juice, but then leave it in my refrigerator for a couple of weeks, it begins to ferment, adding a not-unpleasant tang to the taste. 87.81.230.195 (talk) 00:50, 7 December 2009 (UTC)
Eye water
What is the substance composed of that wets and lubricates the human eye? Mac Davis (talk) 16:26, 5 December 2009 (UTC)
- See tears. The standard "wetness" is referred to as "basal tears" and according to the article it contains water, mucin, lipids, lysozyme, lactoferrin, lipocalin, lacritin, immunoglobulins, glucose, urea, sodium, and potassium. Matt Deres (talk) 16:38, 5 December 2009 (UTC)
- It is also called lachrymal fluid. Googlemeister (talk) 15:54, 7 December 2009 (UTC)
classical music and emotions
I find it very strange which songs trigger strong emotions in myself — e.g., I get flushing waves of "tingles" whenever I hear Pachelbel's Canon, even though I can't recall having any strong memories associated with the song. Bits of Wagner hit me similarly. I would generalize that it is probably only classical music that affects me in this particular way (the waves of "tingles," whatever that is), but I'm not a particularly big fan of classical at all (and haven't spent long amounts of time listening or playing it or anything along those lines), and generally do not think of myself as a terribly sentimental person (nor someone who is unusually appreciative of or interested in music). What causes this? Is it just some sort of long-lost association to music playing in stores around Christmastime when I was a child? Some property of this type of music itself—mathematical "problems" being proposed and solved? Just a sign of how complicated and weird the human brain is? I know there has been a lot written and researched on music and the brain, but I'd love a summary, if someone out there has thought about it much. --Mr.98 (talk) 16:45, 5 December 2009 (UTC)
- Anything in the music psychology article give you any clues? Between cultural conditioning and a biological predisposition to perceive rhythm, tonal scales, and harmonics, music can inspire a strong psychological response. It's pretty much impossible to pinpoint what exactly triggers this response for you, but a lot of research has been done on music and psychology. Nimur (talk) 19:15, 5 December 2009 (UTC)
- I have heard the term aural orgasm used to describe this, although I can't find any particularly reliable sources that define it. Mitch Ames (talk) 02:57, 6 December 2009 (UTC)
- Isn't this sensation the basic meaning of the word "thrill"? --Anonymous, 04:55 UTC, December 6, 2009.
- The experimenting (or torturing) physician in A Clockwork Orange was surprised at the strong reaction the young thug Alex had to classical music. Stanley Kubrick seemed to be making the same point: Alex epitomised unsentimentality and was not particularly well educated, but he responded to Beethoven, not pop, with bliss. BrainyBabe (talk) 19:37, 6 December 2009 (UTC)
Showering with contact lenses
Why do most manufacturers of soft contact lenses warn against showering with them in or using tap water to rinse out the lens case? What negative effects could showering with them in have on the lenses? Thanks! --98.108.36.186 (talk) 20:27, 5 December 2009 (UTC)
- It's to do with contaminating the lenses. Normal tap water carries bacteria that, in the normal way of things, aren't a problem for most people. However, if they get on your lenses the bacteria will be in contact with your eyes for hours at a time, and your tears can't wash them away properly. If these are lenses you wear for more than one day, the bacteria will continue to breed and grow, feeding on bits you haven't properly washed off the lens. And they'll still be there, more plentiful than ever, when you next put the lenses on. It can potentially blind the lens wearer. Here [19]. 86.166.148.95 (talk) 21:51, 5 December 2009 (UTC)
- Thank you! —Preceding unsigned comment added by 98.108.32.19 (talk) 01:45, 7 December 2009 (UTC)
Raccoon
I was just walking back across campus and I saw a small animal on the path ahead. I assumed it was a cat, but the nose was the wrong shape, so I assumed it was an opossum. No biggie; opossums are vicious, but they aren't likely to have rabies. Then when I was close enough to clearly make out the raccoon's markings (it was night), I noticed that it was so intent on drinking the contents of the puddle that it didn't notice me. I must have passed within five feet of it. It is a college campus, so perhaps it's just abnormally tame, but isn't an early sign of rabies an intense thirst? I did look at the article, but I can't tell whether the thirst comes before or after the animal is unable to drink. Just as a note, I did report it to campus police. Falconusp t c 23:31, 5 December 2009 (UTC)
- You seem to have been checking it out quite intently -- perhaps you're into raccoon drinking-voyeurism? :) I'm just saying that it's very easy to jump from "it was drinking and didn't see me" to "it must have a rabid thirst for it to not have seen me." DRosenbach (Talk | Contribs) 00:50, 6 December 2009 (UTC)
- Well, I thought it odd, as most wild animals will at least look at you when you walk within a few feet. Falconusp t c 00:56, 6 December 2009 (UTC)
- Have you ever spent some time around raccoons, even wild ones? Most that live close to people generally behave exactly as you describe, in my experience. When I was growing up, it was not uncommon to have raccoons in my yard picking through the trash. They frequently didn't even pay me any attention, even if I yelled, threw rocks, whatever. I had to get close enough to grab them, and they would move far enough away for me to pick up all the trash. Then they went back to picking through it as soon as I walked away. They simply don't seem to pay humans much mind, and they certainly weren't much afraid of me. --Jayron32 01:08, 6 December 2009 (UTC)
- My experience with urban raccoons is that they don't care very much about humans and regularly ignore them. (Dogs are another matter.) They are tough animals that nobody hunts near cities. --Mr.98 (talk) 01:34, 6 December 2009 (UTC)
- Well, I guess I made a big deal out of nothing. I have just never seen an animal do that. Falconusp t c 01:52, 6 December 2009 (UTC)
- A little known fact about raccoons is that they wear raccoon coats not for fashion reasons. Bus stop (talk) 02:02, 6 December 2009 (UTC)
- I often see raccoons around the house, and they are pretty intelligent creatures. They aren't likely to regard you with anything more than peripheral vision unless you do something problematic. Vranak (talk) 04:15, 6 December 2009 (UTC)
- I don't know, I frequently see raccoons peering in through the bottom panel of my glass door at night. They're pretty curious. Looie496 (talk) 17:45, 6 December 2009 (UTC)
- That's because you've got stuff they'd like to eat and rummage through, not because they care about you. --Mr.98 (talk) 20:10, 6 December 2009 (UTC)
December 6
Vacuoles, vacuolation, vacuolisation and vacuolization
Is vacuolation the same as vacuolization? As it stands, the former currently redirects to a section in the article for vacuoles in which it states that this is a process in which vacuoles form pathologically, while the latter is its own article that might seem to indicate pathosis but doesn't necessarily spell it out nicely. Vacuolisation, which appears to me to be nothing more than a (perhaps British) spelling variant of vacuolization, redirects to the main article on vacuoles. This is what I think -- correct me if I'm wrong:
- Vacuolisation and vacuolization are spelling variants of the same thing
- Vacuolation and the aforementioned spelling variants are variant words for the same thing -- sort of like dilation and dilatation.
- The mini-section on this concept within the article on vacuoles should make a statement or two about it and include a link to the article that will delve into it deeper.
Let me know if there's any disagreement on the definitions, etc. before I go ahead and do it. Thanx! DRosenbach (Talk | Contribs) 00:47, 6 December 2009 (UTC)
- It may be best if this discussion happened on the talk page of the articles in question (pick one to have the discussion, and leave notices on the other talk pages). Since this involves a question which stands to have a material impact on the content of the article space, the discussion should probably happen on those talk pages, since editors who edit and patrol those articles would likely be interested in it. --Jayron32 01:04, 6 December 2009 (UTC)
- I didn't imagine that the talk pages of any of these articles were nearly as high-volume as this page. Additionally, the editors of the aforementioned articles obviously have left this crucial point unmanaged for some time. DRosenbach (Talk | Contribs) 02:12, 6 December 2009 (UTC)
- OK -- I placed notes on both the article talks to see here. Now we can discuss it here. DRosenbach (Talk | Contribs) 02:15, 6 December 2009 (UTC)
- Wouldn't WikiProject Biology be the place to discuss such article issues? Fences&Windows 16:13, 6 December 2009 (UTC)
- I agree with Fences - discussion of article content belongs in article Talk space, where it will be archived along with the article, or in a linked wikiproject created for such a purpose. -- Scray (talk) 18:08, 6 December 2009 (UTC)
Why does increasing CO2 concentration matter?
Doesn't 350 ppm CO2 absorb the same amount of infrared from the miles of Earth's atmosphere as 400 ppm? I can see how the difference between 350 and 400 ppm CO2 in air would change how much infrared could be absorbed by a test tube's width, but for the vast depth of Earth's atmosphere, I just can't understand how it could change the total absorption. Are there any articles or sources that discuss this? I've looked at greenhouse gas, global warming, radiative forcing, and their talk pages, but maybe I overlooked something? 99.56.139.149 (talk) 01:13, 6 December 2009 (UTC)
- 350 to 400 ppm represents an increase of 14.2%, so all other things being equal, this will increase the "greenhouse effect" contributed by the CO2 by 14.2%, which is a significant and measurable amount. Regardless of the size of the sample, a 14.2% increase is a 14.2% increase. --Jayron32 01:25, 6 December 2009 (UTC)
- Unfortunately not, Jayron - changing the concentration at the bottom of the atmosphere by 14% does not increase the total absorption by 14%. See optical depth for the physical mechanism of increased gas concentration on total atmospheric absorption. For a gas of uniform density, the transmitted intensity falls off exponentially with optical depth, which scales with concentration. Compound this by the fact that the atmospheric profile is also roughly exponentially decaying with altitude. Nimur (talk) 02:23, 6 December 2009 (UTC)
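(A toy Matlab illustration of the saturation point under discussion; the optical depth of 20 at 350 ppm is an invented number, chosen only to show the shape of the effect:

% Beer-Lambert: transmitted fraction T = exp(-tau), where the optical
% depth tau scales linearly with concentration. In a band that is
% already saturated, 350 -> 400 ppm changes the transmission by almost nothing.
tau350 = 20;                % assumed optical depth of the band at 350 ppm
tau400 = tau350 * 400/350;  % optical depth scales with concentration
T350 = exp(-tau350);        % about 2e-9 of the band transmitted
T400 = exp(-tau400);        % about 1e-10
extra = T350 - T400         % extra fraction absorbed: about 2e-9

As the rest of the thread discusses, this saturation argument is not the whole story.)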
- Also keep in mind that greenhouse effect is only one effect. Atmospheric chemistry and climate are extremely complicated subjects. It is probably a great misrepresentation to say that CO2 is harmful primarily because of its contribution to a greenhouse effect. As you correctly point out, the albedo change and the difference in optical depth between 350 ppm and 400 ppm are very small. I would go so far as to call them negligible, and I can find a lot of planetary science references to back me up on that. However, and this is critical - the greenhouse effect is only one of many ways that a changing atmospheric composition affects climate. You may want to read climate change, which discusses some of the mechanics, and atmospheric chemistry, which will broaden the view of how carbon and other atmospheric constituents affect conditions on Earth. Nimur (talk) 01:57, 6 December 2009 (UTC)
- You can use a web-based radiative transfer model to help you see that the amount of radiation absorbed at 350ppm is not the same as at 400ppm. It's actually easier to see the difference if you use the pre-industrial value for the concentration of CO2 of ~280ppm and a value we're likely headed towards ~450ppm. -Atmoz (talk) 01:59, 6 December 2009 (UTC)
- Spaceborne measurements of atmospheric CO2 by high resolution NIR spectrometry of reflected sunlight, published in GRL in 1997, is a good quantitative overview of the Carbon Dioxide near-infrared spectrum in an experimental, in-situ, atmospheric context. Nimur (talk) 02:01, 6 December 2009 (UTC)
- The other paper I like to point out in discussions of "global warming" and atmospheric chemistry is this 2003 Eos publication: Are Noctilucent Clouds Truly a “Miner’s Canary” for Global Change. This paper points out some very interesting atmospheric effects - notably, it provides the novice atmospheric scientist with a reminder about conservation of energy. Unless the net power from the sun is changing (which is experimentally not the case), then for any "global warming," there must be some "global cooling" somewhere else - in this case, the mesosphere [20]. Observations of mesospheric weather therefore would be a good indicator of climate change - probably a better indicator than (say) average temperature measurements or atmospheric chemical content. "Of the infrared radiatively most important gases (CO2, O3, O, and perhaps H2O), none can currently be measured with sufficient accuracy at mesopause altitudes to establish its abundance there within anything like percent accuracy, not to speak of any significant long-term change." Therefore, these numbers about Atmospheric Carbon Content are sort of useless - remember, all the quoted numbers are for the troposphere, and almost all the data comes from surface measurements. The actual total carbon content of the atmosphere, per the opinions of the scientists of these papers, is actually very poorly known. On top of this, our only method to probe it is via NIR optical density measurements - and the first paper I linked will give you some idea of the quantitative measurement accuracy for that. Unfortunately, these statements and this line of reasoning sparked huge controversy back in 2003, because it does not toe the simplistic "more carbon ppm = evil" line. But in reality, it's simply establishing an actual scientific context for evaluating the meaning of one particular surface measurement - atmospheric carbon concentration at the surface level. Changing the tropospheric carbon content will certainly result in a different chemistry mechanism in the upper atmosphere, and again, we have extremely complex, non-greenhouse-effect climate-change consequences. Nimur (talk) 02:07, 6 December 2009 (UTC)
- I'm not following the conservation of energy argument. In order for global warming to demand global cooling, the earth would have to be treated as a "closed" system (with a constant and equal input and output). The net input from the sun is assumed constant, but isn't the fundamental argument of the greenhouse effect that the amount of energy radiated from the Earth is decreasing? If energy in is constant and energy out is decreasing, net energy in the system is increasing. Compartmentalizing the system might change the amount of energy locally (at surface or at mesosphere) but the system as a whole can still experience a net increase. Open systems need not obey conservation of energy, and the earth is not a closed system. SDY (talk) 02:31, 6 December 2009 (UTC)
- If the planet temperature increases, its blackbody spectrum will change and it will radiate power faster according to the Stefan-Boltzmann law. Surface temperature may change as a result of greenhouse effect, but planet effective temperature cannot. Nimur (talk) 02:34, 6 December 2009 (UTC)
- According to our articles, Stefan's Law relies on emissivity, and the effective temperature is also a function of albedo. Again, emissivity is arguably what is changing, and changes in albedo (ice has very high albedo, melted ice has less) are also a concern. Why must effective temperature be constant? SDY (talk) 03:25, 6 December 2009 (UTC)
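(For reference, the zero-dimensional balance behind both posts, equating absorbed solar power with emitted thermal power:
<math>\frac{S(1-A)}{4} = \varepsilon\sigma T_{\text{eff}}^4 \quad\Rightarrow\quad T_{\text{eff}} = \left(\frac{S(1-A)}{4\varepsilon\sigma}\right)^{1/4},</math>
with S ≈ 1361 W/m² the solar constant, A the Bond albedo and ε the emissivity; A ≈ 0.3 and ε = 1 give T_eff ≈ 255 K. On this account T_eff moves only if A, S or ε moves, which is exactly the albedo question raised above.)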
- Nimur, you are complicating things much more than necessary. It is true that the effective temperature does not change. But that does not require cooling of the mesosphere (though some cooling is possible). None of that is required to understand the basic idea behind global warming which is what the question is about. Dauto (talk) 03:17, 6 December 2009 (UTC)
- Ok. What I said above is not entirely accurate. The effective temperature CAN change if earth's albedo changes. But it will not change as a (direct) consequence of the increase in atmospheric CO2. Dauto (talk) 04:31, 6 December 2009 (UTC)
- The original questioner noted that a change of 350ppm to 400 ppm does not significantly change the transparency of the entire atmosphere (integrated over the full height) to infrared wavelengths. This is a scientific fact, set forth in the articles I linked. The complexity comes in because climate change can still occur even though the additional CO2 is not adding to the cumulative greenhouse effect. So, the logical question is - "if climate change is not strictly the result of greenhouse effect, then what is it an effect of?" And, again, the answer is "very complex atmospheric chemistry changes which may result in a different energy distribution in the troposphere." Sorry that this is not a simple answer - but "more carbon = more greenhouse effect" is an overly simplistic and scientifically incomplete picture. Let me succinctly rephrase: adding CO2 may still cause climate changing effects, even if the total change in atmospheric IR absorption is negligible, because other effects come into play. Nimur (talk) 05:27, 6 December 2009 (UTC)
- That's a bunch of nonsense. The additional CO2 IS adding to the greenhouse effect, and that's why the earth's mean temperature is increasing. Dauto (talk) 14:06, 6 December 2009 (UTC)
- I'm sure that is what you read about in high school science textbooks and the newspaper, but I would suggest moving to a geophysics or planetary science journal to get a more accurate scientific picture. Here is a nice, albeit old, piece from Science: Cloud-Radiative Forcing and Climate: Results from the Earth Radiation Budget Experiment, (1989). Again, experimental and quantitative results suggest that carbon dioxide induced "greenhouse effect" is not the most relevant effect. It may play a role, and anthropogenic carbon may be a root cause of some other changes, but the climate change is not due only to greenhouse effect: "Quantitative estimates of the global distributions of cloud-radiative forcing have been obtained from the spaceborne Earth Radiation Budget Experiment (ERBE) launched in 1984.... "The size of the observed net cloud forcing is about four times as large as the expected value of radiative forcing from a doubling of CO2. The shortwave and longwave components of cloud forcing are about ten times as large as those for a CO2 doubling. Hence, small changes in the cloud-radiative forcing fields can play a significant role as a climate feedback mechanism." Do you really intend to stick to your simplistic model of greenhouse insulation, when experimental observation has repeatedly shown it to be 10 times smaller than other atmospheric physics effects?[21][22] Even these are small compared to massive climate-scale energy redistributions, e.g. Does the Trigger for Abrupt Climate Change Reside in the Ocean or in the Atmosphere? (2003). To reiterate: the carbon dioxide in the atmosphere is present; it is probably anthropogenic; and its biggest impact on climate is probably not actually related to greenhouse warming, but to other effects that CO2 can induce. Nimur (talk) 15:11, 6 December 2009 (UTC)
- Nimur, I do not deny that there are many complex positive and negative feedback effects that must be taken into account in order to reach a precise quantitative description of climate change. But none of that is necessary to give the OP an answer that makes sense. You said "the additional CO2 is not adding to the cumulative greenhouse effect". And that's just not true. Dauto (talk) 15:45, 6 December 2009 (UTC)
- It seems that my efforts to link to scientific papers are not getting my point across. Let me make an analogy, which maps quite closely onto the situation (except that CO2 is blocking "upgoing" photons which are re-radiated from the earth... I'm only concerned with the opacity of the atmospheric window, though, so direction doesn't matter). Imagine that you are building a roof, and for some reason you are trying to block light from the sun, and you use thick steel plates to block the sunlight. Each steel plate is 1 inch thick, and blocks most of the photons. For your purposes, you want to really block the sunlight, so you build a giant structure and you put 350 steel plates between you and the sunlight. Now, along comes an upstart engineer, who says he has 50 more steel plates in the scrap-yard, and he's going to add them to your structure, whether you want them or not. Two questions: (1) how much more sunlight are those extra 50 steel plates going to block? Probably none. (2) Are there other problems that those extra steel plates will produce? Absolutely. Your structure wasn't designed for 400 steel plates on its roof.
- How does this correspond to the atmospheric carbon situation? Well, the carbon dioxide molecules are narrow-band absorbers of photons. They really only affect a small part of the total solar energy spectrum. And by the time we have 350 ppm, they are pretty much blocking all the sunlight in that particular part of the infrared spectrum. Dauto, you are absolutely correct, in that adding more carbon will increase the absorption - in the same way that adding more steel plates to a roof will block more photons. Because of the way that exponential functions work, this change is negligible. So, if we really have a problem with adding excess carbon, it isn't because of the greenhouse effect or because those carbon molecules will be blocking any extra solar energy. Other effects are the real potential problem - and we need to understand those effects to make sure our roof doesn't collapse under the weight of 50 extra steel plates. Nimur (talk) 16:08, 6 December 2009 (UTC)
- No, Nimur, that is not a good analogy. There is a very good article by Spencer Weart on RealClimate here. Yes, the direct opacity of the atmosphere is not significantly changing when adding more CO2. But what does happen is that the "final" emission layer moves further up the atmosphere, providing more chances for re-emission towards the ground. This is a physical effect, not a chemical process. And while this also is an exponential decay, it still is quite significant - that's why doubling CO2 without feedbacks gives us a ≈1℃ increase in temperature. --Stephan Schulz (talk) 16:30, 6 December 2009 (UTC)
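(The back-of-the-envelope version of that ≈1℃ figure, using the commonly cited logarithmic fit for CO2 radiative forcing (Myhre et al., 1998) together with the Planck response at T_eff ≈ 255 K:
<math>\Delta F \approx 5.35\,\ln\frac{C}{C_0}\ \mathrm{W\,m^{-2}} \approx 3.7\ \mathrm{W\,m^{-2}}\ \text{for a doubling},\qquad \Delta T \approx \frac{\Delta F}{4\sigma T_{\text{eff}}^3} \approx \frac{3.7}{3.8} \approx 1\ \mathrm{K}.</math>)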
- That is a good read, Stephan. And, as you say, the absorption profile is very relevant as well. Changing the concentration will change the relevant scale height for the near infrared spectral effects. I still disagree with your unsourced assertion that doubling carbon would yield a 1 degree celsius increase in surface temperature. I stand by the references I linked earlier - most importantly, the quantitative analyses of total radiative effects and energy balance experiments - but at this point I think it's moot to argue. Nimur (talk) 16:40, 6 December 2009 (UTC)
- Thanks Schulz. That's finally putting us into the path towards giving the OP a sensible answer to the question asked. The point of the greenhouse effect is not how much of the earth's radiation gets absorbed by the atmosphere. How much of that energy finds its way back to the surface IS the relevant question. Dauto (talk) 16:39, 6 December 2009 (UTC)
- I think you somewhat misrepresented that Eos review: it is about Noctilucent clouds and whether they are signals of climate change, and does not mention anything about conservation of energy. Indeed, the article states that "The temperature will be affected by any anthropogenic changes of the CO2 and/or O3 abundances". Your comment that "Unless the net power from the sun is changing (which is experimentally not the case), then for any "global warming," there must be some "global cooling" somewhere else" is wrong as Earth is not a closed system; the cooling occurs in space. Of course atmospheric content affects surface temperature, which is why Mars is freezing, Earth is warm and Venus is toasty. Why are you quoting from 10-20 year old papers about how CO2 affects climate when there are newer articles on the topic? e.g. [23][24][25] Fences&Windows 16:54, 6 December 2009 (UTC)
- I usually quote papers I've read - sometimes I read them 10 years ago. There's no shortage of new material. But, given that everybody is trying to establish a long-term-change, don't you think it may be worth checking primary source data from previous decades before making bold claims about massive changes in recent years? In any case, ERBE was a great experiment on a great spacecraft, and a hallmark of empirical data collection for global climate studies. It should be cited more often. Nimur (talk) 16:56, 6 December 2009 (UTC)
- That picture above should help make clear why adding more CO2 to the atmosphere increases the surface temperature even after saturation is achieved. The amount of energy cycling between the earth's surface and the atmosphere can be (and is) much larger than the amount of energy coming from the sun. More CO2 increases the energy being fed back to the surface warming it up. Dauto (talk) 22:12, 6 December 2009 (UTC)
Original questioner here. I understand what's been said, but nobody has addressed my actual question: why does so slight a concentration change, in bands where the atmosphere is almost completely opaque, make any substantial difference in the total amount of absorption? Or to put it another way, the diagram on the right shows the transmission spectra of 300 ppm and 600 ppm. The total amount of energy difference represented by the difference between the blue and green lines isn't anywhere near 14%, is it? What is the actual amount of energy forcing between the actual Earth's atmosphere at 350 ppm and 400 ppm?
Update: I take it back! Stephan Schulz addressed my question correctly at 16:30 above. Thanks Stephan! 99.62.185.148 (talk) 00:53, 7 December 2009 (UTC)
Size of average Caucasian female head
What is the average size of a Caucasian female head? --I dream of horses (T) @ 02:25, 6 December 2009 (UTC)
- Slightly in theme, the circumference of the head of a cat is equal to the length of his tail. So, when a cat goes to a hat shop, he only has to let the clerk measure his tail. --pma (talk) 09:40, 6 December 2009 (UTC)
- That sounds problematic for a tailless cat. moink (talk) 11:58, 6 December 2009 (UTC)
- Seems problematic for a headless cat as well. Dauto (talk) 14:01, 6 December 2009 (UTC)
- The problem is moot for a headless cat. --Tango (talk) 15:03, 6 December 2009 (UTC)
- Note however that neither tailless nor headless cats wear hats. --pma (talk) 16:21, 6 December 2009 (UTC)
- Are you sure? I don't think that is entirely true. SpinningSpark 16:44, 6 December 2009 (UTC)
- You mis-read the part about headless and tailless. Nimur (talk) 16:52, 6 December 2009 (UTC)
- If I decapitate a hat-wearing cat, the hat might stay on the head, but you (Tango and pma) are saying that "the cat is not wearing his hat" (...because he is not wearing his head). If we assume the essence of being of a cat is based on his brain, we have just proven that a cat's brain is below his neck rather than being in its head. Wanna co-author the paper? DMacks (talk) 18:01, 6 December 2009 (UTC)
- From an answer to a question a few months ago, I have to sadly report that this paper already exists. Essence or no, decerebrate cats are quite alive and do have most of their normal respiratory and gastric functionality intact, because these functions are controlled by the spinal cord and brain stem. Nimur (talk) 18:06, 6 December 2009 (UTC)
- I haven't looked at the paper, but there is a difference between decerebrated and decapitated. The cerebrum is just one part of the brain. --Tango (talk) 23:07, 6 December 2009 (UTC)
- I would define a cat as the combination of a cat's head and a cat's body. If you cut the head of a cat, you no longer have a cat. (The definition is chosen primarily so that you will be wrong, but it is a justifiable definition!). --Tango (talk) 23:07, 6 December 2009 (UTC)
- We're really drifting off topic from the OP's question. Sorry for my contribution to that effect. Per the guidelines, let's stay on topic for the OP. Nimur (talk) 18:11, 6 December 2009 (UTC)
Human nervous system latency
I am looking for the human nervous system latency time. I couldn't find it in the article nervous system. Basically, I am looking for the number of milliseconds or microseconds between two events.
Situation: the human is driving a car and a child crosses the road 20 meters ahead. The driver has to turn the wheel to avoid the child.
Event 1: the light reflected by the child (visual signal) enters the eye of the driver.
Event 2: the hand of the driver starts moving to turn the wheel.
Assume that the human is normal, awake, has not been drinking, and attempts to react as fast as possible. I am basically looking for the total latency necessary for these operations: the visual signal to reach the brain, brain processing of the signal and recognising danger, the brain making the decision to turn the wheel, the brain instructing the hand to turn the wheel, the message transiting from the brain to the hand and arm muscles, and finally, the muscles beginning to contract. I don't need the split between the operations, just the total number of milliseconds between event 1 and event 2. Could you please help? This is not a homework question :-) --Lgriot (talk) 03:27, 6 December 2009 (UTC)
- Reaction time is probably the best we have; it cites 180-200 milliseconds to detect a simple boolean visual stimulus; your instance is a much more involved problem and so you can, I think, expect the reaction time to be greater. --Tagishsimon (talk) 03:37, 6 December 2009 (UTC)
- Thanks, that is exactly what I was after. --Lgriot (talk) 04:48, 6 December 2009 (UTC)
- In the Highway Code in the UK there is a set of stopping distances for a variety of speeds. In that publication, stopping distance is given as the sum of thinking distance and braking distance. Thinking distance is invariably given as a distance in feet equal to the speed in mph. So the time to see something happen and shift a foot to the brake pedal (similar to seeing something and steering) is estimated by the UK driving authorities as about 0.7 seconds. --Phil Holmes (talk) 11:25, 6 December 2009 (UTC)
- That's not exactly the same, though, even though I doubt it would matter much in practice. You hopefully always drive with your hands on the steering wheel, but I'm guessing the same can't be said for having a foot on the brake all the time. -- Aeluwas (talk) 14:18, 6 December 2009 (UTC)
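(The 0.7 s figure can be recovered from the rule of thumb itself: a thinking distance of v feet at a speed of v mph corresponds to a fixed time, since 1 mph ≈ 1.47 ft/s:
<math>t = \frac{v\ \mathrm{ft}}{v \times 1.47\ \mathrm{ft/s}} \approx 0.68\ \mathrm{s},</math>
comfortably above the 180-200 ms laboratory figure, presumably to allow for the more complex recognition-and-decision task of real driving.)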
Interferomics possible editorial problems
I recently came across this article, and after doing some corrections I noted that the term was coined by researcher Gaurav Rana. When I reviewed the history, the article had been created by User:Gauravsjbrana. A subsequent Google search revealed this article [jmd.amjpathol.org/cgi/reprint/9/4/431.pdf], which mentioned the emerging field of interferomics in 2005 but makes no attribution to Gaurav Rana. I see a number of issues here: firstly, it is a specialized field, so inaccurate editing may remain undetected; secondly, it appears to be self-promotion; thirdly, it could be false representation. I am hoping someone with more experience in these matters can take a look. Matt (talk) 03:57, 6 December 2009 (UTC)
- edit - oops it appears I may have asked this question in the wrong place
- Please ignore this; I have posted a welcome to this user and a short note regarding the possible issue with the article. Matt (talk) 04:21, 6 December 2009 (UTC)
my son (10 years old)
Removed request for medical advice. Wikipedia cannot give medical advice. Only a medical professional can give responsible medical advice.
Wikipedia does not give medical advice
Wikipedia is an encyclopedia anyone can edit. As a result, medical information on Wikipedia is not guaranteed to be true, correct, precise, or up-to-date! Wikipedia is not a substitute for a doctor or medical professional. None of the volunteers who write articles, maintain the systems or assist users can take responsibility for medical advice, and the same applies for the Wikimedia Foundation.
If you need medical assistance, please call your national emergency telephone number, or contact a medical professional (for instance, a qualified doctor/physician, nurse, pharmacist/chemist, and so on) for advice. Nothing on Wikipedia.org or included as part of any project of Wikimedia Foundation Inc., should be construed as an attempt to offer or render a medical opinion or otherwise engage in the practice of medicine.
Please see the article Wikipedia:Medical disclaimer for more information.
William Thompson
What substantial contribution did William Thompson make in the field of physics? Kittybrewster ☎ 12:34, 6 December 2009 (UTC)
- I suspect you're thinking of Lord Kelvin. - Nunh-huh 12:45, 6 December 2009 (UTC)
- Indeed. Thank you. Kittybrewster ☎ 13:13, 6 December 2009 (UTC)
- Pub quiz question? Fences&Windows 13:30, 6 December 2009 (UTC)
- Science test paper. "Homework for Grown-ups". Kittybrewster ☎ 13:59, 6 December 2009 (UTC)
- Maybe we should put the disambig link to the Thomson page a bit higher on that page? It seems like a pretty easy mistake to make. --Mr.98 (talk) 14:36, 6 December 2009 (UTC)
- Thought the same thing... --Stephan Schulz (talk) 15:34, 6 December 2009 (UTC)
- I couldn't see him anywhere on that page! I'll go and add him. Dmcq (talk) 16:20, 6 December 2009 (UTC)
- Well, the issue is that he doesn't have a p in his name, so ol' Lord Kelvin himself doesn't belong on that page... I've done something like what I think might be useful (putting the non-P see-also at the top, rather than at the bottom). --Mr.98 (talk) 16:43, 6 December 2009 (UTC)
- If we want to increase the usefulness of disambiguation pages, we shouldn't insist on exact spelling. Similar spelling or similar pronunciations should be enough; the pages are so that articles that could be confused with each other can be found and distinguished. There should be one Thomson/Thompson disambiguation page for each Thomson/Thompson, with appropriate redirects pointing to it. - Nunh-huh 23:10, 6 December 2009 (UTC)
- Not to mention Tomson or even Tompson (although there are only two of those). Mikenorton (talk) 23:21, 6 December 2009 (UTC)
How long are viruses active?
When you are suffering from the Common cold you are spewing cold viruses all around your home or workplace when you cough and sneeze. How long can these viruses stay active and possibly infect someone else? And what eventually happens to them -- do their molecules eventually disintegrate, or do they just spread out so much that they can no longer cause an infection? —Preceding unsigned comment added by Fletcher (talk • contribs) 10:37, 6 December 2009
- A quick search of the reference desk archive (box at the top of this page) for 'virus "outside the body"' yields a link to a relevant discussion. Also, our Common cold article has some relevant info, though I would agree that those resources don't answer your question directly (I did not search the RefDesk exhaustively, so others may find a really good answer to what seems like it would be a frequently asked question). Virus survival in the environment varies widely based on environmental conditions. How those conditions affect viral infectivity depends on viral characteristics. For example, the most common cause of a cold is one of the many serotypes of Rhinovirus. Rhinoviruses are picornaviruses, which have a RNA genome, making them more susceptible (than DNA viruses) to genetic damage (which would render them noninfectious); making them much more resilient, though, is their lack of lipid coat such that they survive complete drying. Additional issues include the amount of virus shed, since a heavily-shed virus (relative to its infectious dose) will remain infectious longer. It seems clear that cold viruses can remain infectious for days (PMID 6261689, full text here), at least under some conditions (keep in mind that virus from your nose would never be in "buffered water", and drying in the presence of albumin, as they did in some experiments that showed more prolonged infectivity, is closer to the normal situation). This article references earlier studies on environmental persistence of infectious rhinoviruses, and the efficacy of various disinfection measures. There's also an interesting study of flu virus viability relative to environmental conditions. -- Scray (talk) 17:11, 6 December 2009 (UTC)
- Thanks, very helpful! Fletcher (talk) 17:58, 6 December 2009 (UTC)
Burns and clothing vs. bare skin
Are burns more or less severe when the burn area of the victim is covered by clothing? That is, does clothing, as opposed to bare skin, alleviate or exacerbate the severity of burns? I assume that it probably depends on the type of burn, so could the question be answered for the various types of burns (chemical burns, electrical burns, hot oil burns (that is, cooking oil), open flames, radiation burns, and steam burns)? —Lowellian (reply) 17:07, 6 December 2009 (UTC)
- First Aid for Soldiers[26] distinguishes between natural and synthetic materials (roughly). "Caution - Synthetic materials, such as nylon, may melt and cause further injury." They also distinguish between cases where fire and flames are still burning on the clothing and cases where the flames have been extinguished. After the situation is safe, the general instructions are to expose the burn by cutting and gently lifting clothing away, but leaving in place any cloth or material which is stuck to the burn area. They also have special caveats for cases of chemical burns and blisters. Following treatment, the entire area is re-covered in sterile field dressing, to protect the burn area. Nimur (talk) 17:38, 6 December 2009 (UTC)
- Thanks, but I wasn't asking how to treat clothing burns; I was asking whether burns are less or more severe against clothing or against bare skin. That is, does clothing have protective value against burns, or do they only exacerbate burns? —Lowellian (reply) 17:45, 6 December 2009 (UTC)
- Of course, it depends on the clothing. As noted above, nylon will melt and exacerbate the burn. Conversely, an asbestos apron or flame-retardant PPE will not burn and will also insulate the victim from heat. Materials like (real, non-synthetic) leather will probably serve a pretty good protective role. The more uncertain cases are fibers like cotton or wool, which will burn. These will probably exacerbate a burn and may increase the contact time with the flame/heat source, but it depends on conditions. In some cases, the flame may actually carry more heat away than it produces, but in general I think the direct exposure is a bad thing. Nimur (talk) 17:51, 6 December 2009 (UTC)
- (ec) Clothing is a blessing and a curse. It acts as a physical block, keeping some of the "burning agent" from getting to the skin. For example, fabric insulates or slows heat transfer, absorbs small hot-oil drops so they cool before soaking through (if they soak all the way through at all), keeps some of a splash of concentrated sulfuric acid from reaching one spot, etc. And certain fabrics are well designed to block penetration by specific burning agents. But once the cause is removed, the clothing keeps the burning agent (what's left of it) close to the skin, leading to prolonged burning. For example, the soaked fabric is still transferring heat or sulfuric acid to the skin. And the fabric's exposure to the burning agent can have additional effects beyond "whatever the burn itself is" (see Nimur's comment about synthetic fabrics melting). So the "cause of the burn" isn't removed until the fabric is. DMacks (talk) 17:54, 6 December 2009 (UTC)
Greenhouse Gases
How is it that CO2 rises in the atmosphere when it is heavier than air? Taskery (talk) 18:10, 6 December 2009 (UTC)
- Convection, or mixing of gases, is the dominant process in the troposphere. This means that, because of uneven heating and turbulent fluid motion, things like wind and updrafts occur, resulting in a "well-mixed" gas distribution. At higher altitudes (notably, first the stratosphere, stratified on temperature; and above it, the mesosphere, layered based on chemical content), gases separate out based on their velocity or molecular mass, but this is not the case in the lowest regions of the atmosphere. Note that there are some exotic mechanisms which can carry CO2 even higher than its equilibrium altitude. Middle Atmosphere Dynamics is a good book if you are interested in some other ways CO2 can "float" its way up. Nimur (talk) 18:16, 6 December 2009 (UTC)
- The region of atmosphere that is well-mixed is called the turbosphere or homosphere, and is separated from the heterosphere by the turbopause, which is usually well above the stratosphere. --Stephan Schulz (talk) 18:31, 6 December 2009 (UTC)
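To put rough numbers on why "heavier than air" doesn't matter here: if each gas did settle out under gravity alone (which, below the turbopause, it does not), each would follow its own barometric scale height H = RT/(Mg). A minimal sketch in Python, assuming an idealized isothermal atmosphere at 250 K (a simplification for illustration only):

```python
# Toy barometric scale heights H = R*T / (M*g) for an isothermal atmosphere.
# Below the turbopause, turbulent mixing overwhelms this diffusive settling,
# which is why CO2 stays well mixed despite being heavier than N2 or O2.
R = 8.314   # J/(mol*K), universal gas constant
T = 250.0   # K, rough mid-troposphere temperature (assumed for illustration)
g = 9.81    # m/s^2, gravitational acceleration

for gas, M in {"N2": 0.028, "O2": 0.032, "CO2": 0.044}.items():  # kg/mol
    H = R * T / (M * g)  # e-folding height of partial pressure, in metres
    print(f"{gas:>3}: scale height ~ {H / 1000:.1f} km")
```

CO2's smaller scale height (about 4.8 km versus 7.6 km for N2) shows it would pool lower if diffusion acted alone; in the homosphere, turbulence mixes far faster than gases can settle, so the proportions stay uniform.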
Knives and electrical sockets
I have heard it said that it is dangerous to stick the tip of a knife into an electrical socket. But as long as the knife is non-metal (e.g. plastic knife) or has a non-metal handle, I don't see why this would be any more dangerous than sticking an ordinary electric plug into the socket. I don't mean digging deep into and actually cutting up the socket; I mean just sticking the tip in as far as it will go without forcing. A plastic knife wouldn't even conduct, right? And wouldn't the wooden or plastic handle on a metal knife serve as insulation from the metal blade the same way that the plastic or rubber base on an electric plug serves as insulation from the metal prongs? —Lowellian (reply) 18:34, 6 December 2009 (UTC)
- An electric appliance provides an electrical path from the live wire to the return wire. If you stick things in the outlet, you are the easiest electrical path to ground - so even if you are insulated, it's still less safe than plugging in an appliance cord. Nimur (talk) 18:42, 6 December 2009 (UTC)
- A plastic or wooden knife won't conduct unless it is wet. Most knives are metal, though, and even for ones that have a plastic or wooden handle, you will usually see, if you look at them closely, that they aren't easy to grasp in a way that avoids touching any metal. Looie496 (talk) 18:49, 6 December 2009 (UTC)
- The pins of the electrical plug are of a standardized length - it's possible that a longer blade might short out the wires behind the socket too. But I agree that 110v (or even 240v) isn't going to arc through the plastic handle of a knife any more than placing your finger against the plastic housing of the electrical socket is going to result in electricity jumping into your body. But these kinds of advice are put out there for the average knucklehead who hasn't noticed that the plastic side-plates of his knife are held on with a couple of brass rivets - and touching one of them might result in a shock. I've stuck electrical screwdrivers and volt-meter probes into electrical sockets dozens of times - but it always has to be a matter of thinking out each move carefully before you do it. Where are you going to be putting your fingers - where will the current flow? The trouble is that 99% of people don't do that - so "Don't stick knives into electrical outlets!" is very good advice. SteveBaker (talk) 20:57, 6 December 2009 (UTC)
- Actually I'd hope that the plastic housing of the electrical socket would be explicitly designed as a good insulator, whereas the plastic knife probably would not be. Thus it might be easier for the electricity to arc through the knife than the socket. Mitch Ames (talk) 11:59, 7 December 2009 (UTC)
- Why would you even want to do that? Do you stick beans up your nose? — Preceding unsigned comment added by 79.75.87.13 (talk • contribs) 16:46, 6 December 2009 (UTC)
More climate questions
One of the basic arguments of the Kyoto Protocol is that more carbon = bad. What is the relationship between increased CO2 and increased adverse effects? I'd think average ocean pH, average global temperature, and average temperature in the polar regions have been talked about enough that some sort of estimate could be given. Are these generally linear, exponential, logarithmic, sigmoid, or other mathematical relationships? Do these relationships work differently for other known bad actors (e.g. methane)? SDY (talk) 19:11, 6 December 2009 (UTC)
- I don't think there is the simple relationship you are looking for. For temperature, we have a fairly good idea that doubling CO2 implies approximately 3 ℃ of warming in the limit, i.e. when equilibrium has been reached. See climate sensitivity. The basic relationship (logarithmic warming) is the same for all greenhouse gases, AFAIK. However, the practical effect differs, since methane decomposes quickly into CO2 and water (with water raining out), while CO2 keeps accumulating. "Average ocean pH" will take a long time to equalize - ocean overturn times are on the order of millennia. Surface water acidifies a lot faster. Ocean acidification has some estimates for surface acidification. --Stephan Schulz (talk) 19:31, 6 December 2009 (UTC)
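For illustration, that logarithmic relationship can be turned into numbers. A minimal sketch, assuming the roughly 3 ℃-per-doubling central estimate quoted above (the real value is uncertain; see climate sensitivity) and the usual 280 ppm pre-industrial baseline:

```python
import math

# Equilibrium warming for a given CO2 level: dT = S * log2(C / C0).
# S = 3.0 C per doubling is only the rough central estimate quoted above;
# C0 = 280 ppm is the commonly used pre-industrial concentration.
S = 3.0
C0 = 280.0

for C in (387.0, 560.0, 1120.0):  # roughly the 2009 level, then 2x and 4x pre-industrial
    dT = S * math.log2(C / C0)
    print(f"{C:6.0f} ppm -> ~{dT:.1f} C equilibrium warming")
```

Because the response is logarithmic, each successive doubling (280 to 560 to 1120 ppm) adds the same increment of warming.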
- Have there been any estimations of how close the system is to that limit currently? What physical condition is that limit consistent with (saturation of CO2 in the upper atmosphere, perhaps)? SDY (talk) 19:45, 6 December 2009 (UTC)
- This paper sets forth the required measurement accuracy needed to estimate how close we are to a particular equivalent CO2 concentration in the atmospheric column. Nimur (talk) 19:59, 6 December 2009 (UTC)
- The limit is reached when the radiative forcing is zero, i.e. when the imbalance caused by extra CO2 is balanced by the greater emission from a warmer planet. A simple analogy is a pot of water on a stove. As long as the stove is off, the temperatures of the stove and pot will tend to be equal. Put the stove on at a low setting, and the stovetop will heat up quickly, while the temperature of the water lags. Steady state is reached when the water does not heat up any more. --Stephan Schulz (talk) 20:14, 6 December 2009 (UTC)
- (edit conflict) The limit is associated with the Earth system reaching thermal equilibrium and is limited primarily by the thermal inertia of the oceans. Given an energy imbalance of a few W/m2 created by an enhanced greenhouse effect, the oceans will continue to gradually warm for a century or two. The surface warms fastest, but that heat gets mixed downward over time, and it takes a long time to reach a practical equilibrium given the sheer mass of the ocean. This is generally referred to as "warming in the pipeline" or "already committed warming". In rough numbers, after the thermal inertia is overcome, the ultimate change in average surface air temperature may be roughly double what has been observed in the short term. Dragons flight (talk) 20:16, 6 December 2009 (UTC)
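A crude way to picture that "already committed warming" is a single-time-constant relaxation toward equilibrium. This is a toy sketch only; the 3 ℃ equilibrium value and the 50-year time constant are illustrative assumptions, not output from any real ocean model:

```python
import math

# Toy "warming in the pipeline": surface temperature relaxing toward an
# assumed equilibrium, with one time constant standing in for ocean inertia.
T_eq = 3.0   # C, assumed eventual equilibrium warming for a fixed forcing
tau = 50.0   # years, crude stand-in for ocean thermal inertia

for t in (10, 30, 100, 200):
    T = T_eq * (1 - math.exp(-t / tau))
    print(f"after {t:3d} yr: {T:.1f} C realized, {T_eq - T:.1f} C still in the pipeline")
```

The point is qualitative: early on, only a fraction of the eventual warming has been realized, consistent with the "roughly double" rule of thumb above.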
- It really depends who you ask, and what you consider "consensus". Here are a few good articles in Science: Climate Impact of Increasing Atmospheric Carbon Dioxide (1981), Anthropogenic Influence on the Autocorrelation Structure of Hemispheric-Mean Temperatures (1998), Detecting Climate Change due to Increasing Carbon Dioxide (1981), Where Has All the Carbon Gone? (2003), and (to pre-refute any claims that I am linking old science), here's The Climate in Copenhagen (December 4, 2009). As you can see, even the qualitative patterns are hard to establish, let alone quantitative estimates. What is generally agreed is that excess carbon does yield negative results. But few quantitative predictions seem to agree. Nimur (talk) 19:34, 6 December 2009 (UTC)
- The problem with coming up with a simple mathematical model is that there are a lot of effects - some with positive feedback terms - adding together. So (for example) the increased greenhouse effect causes a temperature rise that (presumably) has a simple relationship to the amount of CO2 in the upper atmosphere...Great! (you might say)...but it doesn't end there. That temperature rise causes melting of sea ice - which results in a change in albedo from bright white snow to dark ocean. Ocean doesn't reflect the sun's heat away as well as snow. That causes yet more heat to be absorbed than would be predicted by the greenhouse effect alone. So our simple relationship is not quite right - we have to correct for the albedo change. But the relationship between global temperature rise and local temperature rise at (for example) the North Pole is complicated. The weather systems that cause the temperature to change at the pole on a day-to-day basis are chaotic (mathematically chaotic) - the "butterfly effect" and all that. So a tiny error in our measurement of global temperature rise can cause a much larger error in the assessment of polar ice temperatures.
- But the trouble with that is that if the temperature remains 0.1 degree below freezing - then the ice stays frozen. If it's 0.1 degree above freezing then the ice starts melting...that's such a 'knife-edge' effect that an error in our temperature math of even a tiny fraction of a degree makes the difference between ice - and no ice. When you consider that the resulting temperature is dependent on how much ice melted - you have something that's unpredictable on a year-by-year basis. We know about general trends - more CO2 means more heat, no question about that - more heat means less ice, no question about that - and less ice means more heat absorbed, that's for 100% sure. But as for the precise shape of the CO2-to-final-temperature curve - all we can say is that, generally, more CO2 means more heat...but putting a simple mathematical curve to that is tough. As bad as that is, it's only one of maybe a hundred other interacting effects. More heat melts glaciers too - but the meltwater flows down under the glacier, lubricating its contact with the rock and soil beneath - causing it to slide downhill faster and enter the warmer ocean prematurely. As ocean levels rise, light-colored land gets covered by darker water - and the albedo changes. As the oceans warm up, they expand (water expands as it warms) - so ocean levels get yet deeper. But then, warmer oceans MIGHT promote algal growth which would absorb more CO2, helping things a little...but then as CO2 dissolves into the oceans, it makes the water more acidic - and that might kill off more algae.
- This whole mess is a tangled maelstrom of interactions covering many, MANY subsystems. We can say a lot about the trends - but putting any kind of mathematical function to the effect is very tough. One worrying aspect of this is that many of the effects (like the potential for deep-ocean methane clathrates to melt, dumping ungodly amounts of another greenhouse gas, methane, into the air) are extremely poorly understood. Another worrying thing is that we keep finding new and subtle effects that are making matters worse.
- But the trend is inexorably up - that much we know for sure. We also know that CO2 persists in the upper atmosphere for thousands of years - the amount we have put up there already isn't going away anytime soon. Current arguments are mostly about limiting the rate of increase in the amount we're adding every year! Only a few countries are talking about reducing the amount we produce - and none are talking about not producing any more CO2 at all. So the upward trend is there and we're not able to stop that. The best that we can do is to buy ourselves more time until we can figure out what (if anything) we can do about this mess. SteveBaker (talk) 20:43, 6 December 2009 (UTC)
- The knife-edge thing doesn't really work for me. There is X amount of energy in the system, and some of that energy is used for the phase change, and the 0.1 degree temperature change at a phase change is not a minor investment of energy (i.e. 0.1 C to -0.1 C is nothing like 0.3 C to 0.1 C), and I would assume that any decent model takes that into account and that an estimate for the amount of melted ice is not absurd to attempt. SDY (talk) 21:07, 6 December 2009 (UTC)
- I think everyone is trying to tell you it is more complicated than this. Initially, warmer poles may mean more airborne moisture, hence more snow and faster ice accumulation, for example. People talk of thickening ice in the centre but melting at the edges. Given that accurate weather forecasts more than about 10 days ahead seem to elude mankind, you are asking for a precision or simplicity which just isn't there. --BozMo talk 21:14, 6 December 2009 (UTC)
- SDY, if the temperature of the air in contact with the ice is above freezing, the ice will melt. It may take some time because, as you said, the energy investment can be high. But it will melt. Dauto (talk) 22:19, 6 December 2009 (UTC)
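The size of that energy investment is easy to quantify with textbook constants (latent heat of fusion of ice ~334 kJ/kg; specific heat of water ~4.19 kJ/(kg·K)). A quick sketch:

```python
# Energy per kilogram: melting ice vs. warming the resulting water slightly.
L_FUSION = 334.0  # kJ/kg, latent heat of fusion of ice
C_WATER = 4.19    # kJ/(kg*K), specific heat of liquid water

melt = L_FUSION        # melt 1 kg of ice at 0 C
warm = C_WATER * 0.2   # warm 1 kg of water by 0.2 K

print(f"Melting 1 kg of ice:         {melt:.0f} kJ")
print(f"Warming 1 kg of water 0.2 K: {warm:.2f} kJ")
print(f"Ratio: about {melt / warm:.0f} to 1")
```

So the phase change absorbs roughly 400 times the energy of a 0.2 K warming, which is why ice melt lags air temperature rather than responding instantly.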
- I'm not expecting exact numbers, but there's more to the relationship than a knife edge. A 0.1 degree change in atmosphere temperature will not immediately melt all of the ice in the world, since there's an interaction between the ice and the atmosphere (and the ocean). I'm expecting that it's a question of "how much ice will melt and how rapidly will it melt" instead of "all ice immediately melts when this point is crossed." That's what I mean with the "knife-edge thing not working for me." Am I totally off base when I expect that climate change will cause the world to end "not with a bang but a whimper?" SDY (talk) 23:21, 6 December 2009 (UTC)
- OK - try not to focus on that one specific thing (although it's enough that we can debate the answer at all!). The point I'm trying to make is that there are easily a hundred things like that that we know can cause temperature change, either as a feedback effect or directly because of CO2 concentrations. If even a few of those are not well understood - or are hard to calculate accurately from imprecise data (sensitive dependence on initial conditions...chaos theory) - then we cannot make accurate predictions. When there are sharply non-linear effects - and many of them - and each affects all of the others - then you have no way to provide a simple formulation of the consequences.
- I don't think anyone thinks this will cause a literal end to the world. It could wipe out a lot of important species - humans may face starvation in alarming numbers if crops fail and invasive species run amok. The consequences for the health, well-being and lifestyle of everyone on the planet will be significant. But the planet will definitely survive - there will be life - and mankind will survive and probably still be on top. But the consequences are potentially severe. Whether it's a "Bang" or a "Whimper" depends on your definition. If this fully unfolds over a couple of hundred years then in terms of the history of life on the planet - it's a very brief "bang" - but from a human perspective, it'll be a long, drawn-out decline stretching over many lifetimes...I guess you could call that a 'whimper'. Was the Permian–Triassic extinction event a bang...or a whimper? At its worst, this could be kinda similar in extent (80% of vertebrates going extinct) and recovery time (30 million years). SteveBaker (talk) 00:23, 7 December 2009 (UTC)
(undent) The original answer wasn't very clear, probably because my original statement wasn't very clear. I clarified, and did not get a response that really helps. My conclusion, based on the response, is that it is a very poorly understood network of systems and drawing any sort of conclusion at this point about the interaction between ice melting and air temperatures is preliminary. Is that an honest approximation of the current state of the art?
Fundamentally, I guess the stance I'm coming from is that of a rational skeptic. Climate science makes extraordinary claims (e.g. predictions of mass extinction events), but is there extraordinary evidence to support them? My impression, given the inability to answer what seem like pretty basic questions, is that the extraordinary evidence does not exist. Is this also an honest approximation of the current state of the art, or am I simply reading the wrong sources? SDY (talk) 07:20, 7 December 2009 (UTC)
- I would call that a reasonable assessment of the state of the art. Quantitative models vary widely in their predictions, and no global atmospheric model that I am aware of has accurately predicted numerical values for either CO2 concentration or ice melt rates over long-range timescales. Other quantitative models do exist, but there is huge disagreement about values of parameters, etc., because the ways that these complex networks of interrelated systems actually connect together (via thermal physics, optics, chemistry, etc.) are still uncertain. Simplified models that are not global atmospheric simulations also exist, and in the limited scope of estimating a specific parameter or a specific local region, these models can be very accurate. But again, I am not aware of any global climate simulation which accurately models the entire atmosphere/oceans/etc., and also predicts ice melt rates. Nimur (talk) 07:41, 7 December 2009 (UTC)
- I would also go out on a limb and agree with you that, in general, many bold claims are made about global climate change. Often, for no particular reason, these bold claims are deemed "part of the great scientific consensus and backed by overwhelming evidence." That is silly. Certain specific claims about climate change are scientific consensus. Certain specific claims about climate change do have overwhelming evidence. Those particular claims are easy to find reputable publications and quantitative data for. But I notice a very disappointing trend to attribute such overwhelming certainty to every claim about climate science. Even ludicrous claims about catastrophic consequences are sometimes asserted to be "consensus" viewpoints, which is counter to reality. Runaway global warming, for example, seemed to be the reported opinion of a nonexistent "consensus" for a long time in many pop-science magazines. Real claims should be backed by specific references and specific experimental or modeling data. Asserting "consensus" is moot - scientific fact is not subject to a majority vote. Data are either valid or invalid; conclusions are either logical deductions from valid data, or not. Nimur (talk) 07:50, 7 December 2009 (UTC)
- I would suggest the OP reads the actual IPCC reports, especially the IPCC Fourth Assessment Report SPM, which contains both projections and certainties for many parameters and claims. The popular press likes dramatizing, and the right-wing blogosphere is completely useless. The climate sensitivity in the range of 2-4.5℃/doubling is fairly solid. But that is still a large range. Regional predictions are still very uncertain, and in the end regional effects are what matter for many purposes. From the global predictions we know some regions will be hit hard, and others a lot less, but we cannot yet reliably predict which regions are hit how. Science also cannot predict how much greenhouse gas we will release, as that is a political/economic question. Mass extinction, however, is not an extraordinary claim at all. There is no doubt that we are already in a mass extinction event, and by the rate of species disappearing, one of the worst in history. We do that even without climate change, simply by taking over nearly all ecosystems, and doing things like shipping rats and dogs to New Zealand. --Stephan Schulz (talk) 08:49, 7 December 2009 (UTC)
Science games for kids
Christmas is rolling up and I will be spending some of it in the company of little ones, say toddler-ish to ten years old or so. I would like to have ideas of what to do with them, to test their scientific knowledge and cognitive development in (as the kids say) the funnest way possible. These need to be simple things to do, without fancy equipment. I am thinking of things like pouring liquid from a tall thin glass to a short wide glass, asking them which glass holds more, and seeing at what age the kid "gets" that the volume is the same. Any ideas for how I can approach this? Needless to say, my young relatives and friends' kids are very brainy babes. BrainyBabe (talk) 19:26, 6 December 2009 (UTC)
- This may be a bit tough for 10 year-olds - but kids vary a lot: [27] (from my personal Wiki). SteveBaker (talk) 20:06, 6 December 2009 (UTC)
- I was unable to get your human hair width measurement experiment to work at all. I made an honest effort, but it was very hard to get diffraction fringes. I was able to get diffraction fringes by shining the laser through two razor blades closely spaced, but not around a human hair. Nimur (talk) 20:11, 6 December 2009 (UTC)
- Ah Christmas: Standard keeping-quiet ones for brainies of that age are 142857 (get them to multiply it by 2, 3, 4, 5, 6 and guess what 7x is; see the quick check below), which day of the week has an exact anagram, lateral thinking games (hanged man and puddle, dwarf and lift, Antony and Cleopatra, surgeon "that's my son" etc.), which two-digit prime has a square which looks the same upside down and in a mirror, wire through an ice block... I am sure other people know squillions... --BozMo talk 20:10, 6 December 2009 (UTC)
- Balancing two forks and a cocktail stick on the edge of a wine glass... what's the algorithm to find the dud ball out of ten with a balance and only three weighings... fox, chicken and grain with a river boat... the various logic puzzles with black and white hats Prisoners_and_hats_puzzle Hat_puzzle... --BozMo talk 20:13, 6 December 2009 (UTC)
- Surely 9 balls and two weighings would be more challenging? - Jarry1250 [Humorous? Discuss.] 21:38, 6 December 2009 (UTC)
- In the version I know, you are not told if the dud is too heavy or too light. That makes nine balls and two weighings impossible, and ten and three hard (especially for people who start off by putting five on each side). But I am sure there are loads of variants. --BozMo talk 21:45, 6 December 2009 (UTC)
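For anyone wanting to verify the 142857 trick mentioned above before springing it on the kids, a quick check (142857 is the repeating block of 1/7, so multiplying by 2 through 6 merely rotates its digits, and multiplying by 7 gives all nines):

```python
# 142857 is the repeating block of the decimal expansion of 1/7.
n = 142857
for k in range(1, 8):
    print(f"{n} x {k} = {n * k}")
```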
- There's a book called "Physics for Entertainment" ("Zanimatel'naya fizika" in the Russian original) by Yakov Perelman. I remember enjoying it immensely when I was a kid. It was written in the 1930s, so it does not rely on modern technology; but that does not make it any less fun. I know it has been translated into English, although I am only familiar with the Russian version. You can try finding the English version in the library. --Dr Dima (talk) 21:09, 6 December 2009 (UTC)
- The trick using slaked cornflour never ceases to amaze... Make up a goo using cornflour (cornstarch for our American cousins) and water to the consistency of runny double cream. If you bang the container on the table and invert it over the head of the nearest brat, it won't spill if you do it properly! --TammyMoet (talk) 10:24, 7 December 2009 (UTC)
- These all sound great for the older kids, but I said brainy, not genius! I think several-digit multiplication is beyond the toddler set. There are some tempting phrases to google here: I'd never heard of slaked cornflour, so thank you all, and keep 'em coming! BrainyBabe (talk) 22:54, 7 December 2009 (UTC)
- Google "red cabbage juice indicator". Red cabbage juice is a good acid base indicator and basically will go through the entire rainbow of colors depending on pH. Kids can have fun slowly adding acid or base to red cabbage juice (basically the water that red cabbage has been cooked in) and watching the colors change. Not sure what kind of game this would make, but little kids like all of the pretty colors. --Jayron32 23:20, 7 December 2009 (UTC)
Diesel automobiles without diesel particulate filters
I want a list of the currently produced diesel automobiles which aren't available with a diesel particulate filter. --84.62.213.156 (talk) 20:34, 6 December 2009 (UTC)
Cartography - spherical map - globe section
Encyclopaedia Britannica (CD, 2006) - Student library - Maps and Globes: "A useful compromise between a map and a globe, provided that not too much of the Earth has to be shown, is the spherical map, or globe section. This is a cutaway disk having the same curvature as a large globe. It is usually large enough to show an entire continent. A spherical map shows the shape of the Earth accurately but is much cheaper to produce and much easier to carry and store than a globe." Are there really such things? I asked a German cartographer; he has never heard of this. I could not find a single link on Google regarding these globe sections. What is the exact scientific term for such a map? I am writing the German article Globe (de:Globus). --Politikaner (talk) 21:31, 6 December 2009 (UTC)
- I found one old reference here [28] (there are a few of similar vintage [29]) to the use of spherical maps that were segments of globes; presumably that is what the Britannica is talking about, but no modern references yet. Mikenorton (talk) 21:44, 6 December 2009 (UTC)
- Slightly more recent reference, (1956), [30] (look at the bottom of page 317) which refers to the "design and production of spherical map sections displaying a portion of the globe at a scale of 1:1,000,000". Mikenorton (talk) 21:53, 6 December 2009 (UTC)
- Have you tried Map projection as a starting point? --BozMo talk 23:22, 6 December 2009 (UTC)
- If I'm interpreting it correctly, we're talking about maps shown on segments of spheres, so no projection is necessary. --Tango (talk) 23:32, 6 December 2009 (UTC)
- Or do they mean like this: [31]? --BozMo talk 23:29, 6 December 2009 (UTC)
Cholesterol and sodium
Does exercise get rid of cholesterol and sodium in addition to fat? --75.33.216.153 (talk) 21:38, 6 December 2009 (UTC)
- Sodium levels are controlled by osmotic systems. Being ionic and very water-soluble, excess sodium is easy to get rid of (simply drink more water and urinate more). Cholesterol, though, is largely an endogenous thing: ingested cholesterol accounts for only a small fraction of the cholesterol supply; the rest is produced by a biosynthetic pathway related to fatty acid synthesis. Thus the only practical way to control cholesterol beyond diet is medication that inhibits that pathway; see HMG-CoA reductase, the enzyme targeted by statins. John Riemann Soong (talk) 01:03, 7 December 2009 (UTC)
- Vigorous exercise will generally cause you to sweat, and sweat contains sodium - so yes, your sodium levels will decrease (sometimes to pathologic levels, see hyponatremia). Cholesterol is a cell-membrane component; I don't think it features in any primary catabolic (energy-producing) pathways, though if you exercise to the point of muscle wasting you'll probably be burning cholesterol along with everything else. However, reading the cholesterol article, total fat intake plays a role in serum cholesterol levels, which suggests to me that if exercise helps to burn off circulating lipids in the blood serum before the liver can synthesize cholesterol, less cholesterol will be produced - but I'm not positive on the timeframes involved. Franamax (talk) 01:41, 7 December 2009 (UTC)
- I'll defer here to the two responses below, which while not directly sourced seem to me entirely reasonable. Absolute levels of sodium will decrease with exercise, but relative concentration apparently may not, and I don't know enough about ion transporters in the cell membrane to have an opinion on how (primarily) nervous system function is affected. And if HDL itself is catabolized (rather than being recycled as a marker molecule), that too I am unfamiliar with. So best to just ignore my whole post maybe. :) Franamax (talk) 04:02, 7 December 2009 (UTC)
- Sweating doesn't reduce your sodium levels: it's hypotonic fluid; its sodium concentration is lower than the rest of your extracellular fluid. When you sweat, you're losing more water than sodium. It's replacing sweat with hypotonic fluid of an even lower sodium concentration (water), instead of an isotonic or hypertonic replacement (e.g. Gatorade), that would lower your sodium levels. - Nunh-huh 01:57, 7 December 2009 (UTC)
- Exercise can be beneficial to blood cholesterol levels, which is all anyone cares about. Your body is full of cholesterol, but it's only the stuff that clogs your arteries that makes any difference. Cholesterol ends up in the blood in two forms, HDL cholesterol and LDL cholesterol; these are respectively "good" cholesterol and "bad" cholesterol. People often miss why we test for blood cholesterol. It's not the cholesterol per se which is always bad; it's that the cholesterol is a marker for things that are going on in your body. HDL and LDL are used as chemical tags attached to molecules in your body that tell them where to go. HDL is associated with catabolic processes, that is, molecules tagged with HDL are basically heading to be broken down. LDL is associated with anabolic processes, that is, those molecules are heading somewhere to be added to your body. In general, having an excess of LDL means your body is growing, so LDL can indicate an excess of caloric intake, high blood sugar levels, and a general trend of increasing fat storage. Higher HDL levels are generally associated with lower blood sugar levels, breaking down fat, and lower overall caloric intake. There are lots of other factors involved, but exercise in itself can improve cholesterol ratings because exercise increases catabolic processes in your body by using up stuff for energy, especially fat stores. --Jayron32 03:16, 7 December 2009 (UTC)
- Well LDL is also "inherently" bad in that it promotes (i.e. it's more than a marker) lipid accumulation in blood vessels right? Otherwise why would people use statins? John Riemann Soong (talk) 05:31, 7 December 2009 (UTC)
- Because LDL-tagged lipid molecules tend to drift around the blood until they glom onto each other. HDL-tagged molecules are heading to the liver to be "eaten up", so they don't hang around a long time. They essentially get filtered out. LDL-tagged molecules are basically saying "We're ready to be used to build new cells", and if there isn't anywhere in the body that needs lots of new cells, these molecules just hang around until they accumulate in vessels and cause a mess. So yes, LDL cholesterol can, of itself, cause problems, but the underlying concern can still be addressed, in many people, by some amount of behavioral modification; i.e. exerting control over those processes which decrease anabolism (lower blood sugar) and increase catabolism (more exercise). There's also been some interesting research out that people who use cholesterol-lowering drugs like statins may not have significantly better health outcomes; that is, while their cholesterol numbers may be significantly lower, they don't have significantly lower incidence of cardiovascular disease. It seems somewhat like a case of covering up a problem rather than fixing it. It doesn't mean a whole lot to lower your cholesterol if doing so doesn't have an effect on the quality or length of your life. The only people for whom statins actually show positive outcomes are those who actually have active heart disease. For people with no known heart disease symptoms, while statins do lower cholesterol numbers, they don't actually seem to reduce the risk of emerging heart disease. See this article from CBS news which explains some of the controversy, and this article (subscription required) from a peer-reviewed journal. The CBS article actually makes some good points about advertising used in the case of Lipitor; the ad claims percentage improvements based on some pretty shoddy statistics. In the case cited, those taking Lipitor saw an incidence of 2 heart attacks per 100 people, and those taking placebo saw an incidence of 3 heart attacks per 100 people. The question then becomes whether widespread prescription of Lipitor results in net positive health outcomes for most people, given that it isn't preventing that many heart attacks among the general, healthy population and that it does have documented side effects which need to be taken into account. --Jayron32 18:15, 7 December 2009 (UTC)
Why aren't rugby players overwhelmed with injuries?
In gridiron football, the players wear helmets and padding. Even with that protective equipment, concussions, broken bones, torn tendons, and other serious injuries occur frequently. So how is it possible that rugby players, who also play a full-contact sport and don't wear any protective equipment at all, aren't overwhelmed with more frequent and severe injuries? —Lowellian (reply) 21:52, 6 December 2009 (UTC)
- Rugby players do wear some protective equipment (Rugby union#Equipment). It is against the rules to tackle above the shoulders, and I think there are rules about how many people can tackle a player at once. I don't know if similar rules exist in other forms of football. --Tango (talk) 22:29, 6 December 2009 (UTC)
- Yep. Also see Rugby_union_equipment#Body_protection. There is an injury rate - I think it's about 300 injuries per 100,000 hours played, though I could easily be a factor of ten out - anyway, about ten times higher than most other sports, as I seem to recall; my mum was a school doctor and it was in one of her books. But a large number of rules also evolved over a long time to minimise injury: not just no neck tackles, but no tackling in the air, no collapsing scrums, no playing when down, etc. I am not sure that many players wear anything other than a mouthguard as protection, despite some light padding being allowed (sometimes there is a bit in a Scrum cap to protect Cauliflower_ears). And of course it is not a complaining culture: even at school level, players with broken bones (fingers, for example) sometimes carry on to the end of a match, which is allowed as long as there is no blood flowing, but is obviously a bit daft. --BozMo talk 22:43, 6 December 2009 (UTC)
- Come on! We all know the real reason is that Americans are weak sissies who wouldn't survive half an inning of a real Englishman's game like rugby or contract bridge! --Stephan Schulz (talk) 23:31, 6 December 2009 (UTC)
- I know they slam you around in bridge, especially when you're most vulnerable ... but innings? Clarityfiend (talk) 23:46, 6 December 2009 (UTC)
- Well, yes. Aside from their padded version of Rugby (which I played in school and is unbelievably dangerous!), American sports are mostly British 'girly' games that have been adapted into tamer forms for American men to play. Baseball is really just "rounders" - which is played almost exclusively by girls in the UK (and has been since Tudor times) - and Basketball is really just "netball" (since 1890 at least). Ice hockey is really field hockey with padding and a nice slippery surface so you can't get a decent grip before you whack your opponent with a bloody great stick - but, again, it's mostly a girly game in the UK. That leaves golf...'nuff said? (And you can get some wicked paper-cuts in contract bridge when the play gets rough!) :-) SteveBaker (talk) 23:50, 6 December 2009 (UTC)
- Baseball, basketball, hockey (what you 'pedians call "Ice hockey" for some reason) and gridiron football are all Canadian inventions. We came up with them to distract attention from lacrosse, which is basically just hitting each other with sticks for 60 minutes. :) Franamax (talk) 01:49, 7 December 2009 (UTC)
- That's an odd statement. Baseball (aka Rounders) has been around in the UK since 1745; that makes the sport 122 years older than Canada (est. 1867). Basketball (aka Netball) was first played in the UK in the 1890s - no mention of Canada there. Our article on Gridiron football says the gridiron started out at Syracuse University (New York, USA). I guess we're going to have to demand some references here. Lacrosse...yeah - that's a pretty serious sport. Hockey - but without the rule about the stick not being higher than shoulder-height - allowing some pretty decent head-shots. Yeah. The only trouble is that I can't think about lacrosse sticks without thinking about the delicate young ladies of St. Trinians. SteveBaker (talk) 03:26, 7 December 2009 (UTC)
- Sources, could be a problem for sure, that's why I confined myself to small print. Abner Doubleday codified the current rules of baseball (more or less), and I read somewhere he came from Ontario (could be wrong); James Naismith set up modern basketball, an expat Canadian; football, again my reading has been that it was codified as a Canadian university sport, then watered down to give the wimpy Americans one extra try with the fourth down :) ; hockey - oh yeah, we still get the head-shots, the euphemisms are "combing his hair" and "laying on the lumber a bit", although the incidence has decreased drastically. Lacrosse, well, all due respect to the ladies, but when the Toronto Rock take the field, wearing shorts and short-sleeved shirts, i.e. no body protection whatsoever in those areas - yeah, it's a scene man... :) (But I will retract anything I can't source immediately, which is mostly everything I just said) Franamax (talk) 03:45, 7 December 2009 (UTC)
- Canada has a perfectly legitimate claim to both Basketball and Gridiron Football. Basketball was invented in the U.S. (we luckily have documentation on that one) by James Naismith in about 1891 (Netball came a few years later, and is specifically a derivative of it). Naismith was in the U.S. when he invented the sport, but he was Canadian by birth. Gridiron Football was essentially invented by Walter Camp, but his changes basically modified the existing forms of football in the U.S., which were predominantly recognizable as Rugby. Since rugby was introduced to U.S. colleges by McGill University, one can claim that McGill had a large role to play in introducing modern Gridiron codes of football. However, there is not one inventor of either of these games, just that Canadians played a prominent role in both of them. --Jayron32 05:21, 7 December 2009 (UTC)
- (Without evidence) There are several possible reasons for this. One is that if you have all of that padding and protection, you can tackle harder without risk of hurting yourself - therefore it may be that well-padded players are simply hitting with harder surfaces (helmets, for example) and with more power than an unprotected player could. Secondly, there is a phenomenon where people are prepared to accept a certain level of risk in what they do - and improving the protection simply makes them take larger risks. When seatbelts were first mandated for cars, the number of injuries in car accidents decreased sharply - but it has gradually crept back up as people now feel safer and are therefore increasing the risk back to where it was before the seatbelt laws. Perhaps, with less risk to self, the guy with the odd-shaped ball is less careful about avoiding situations where he might be tackled. SteveBaker (talk) 23:50, 6 December 2009 (UTC)
- I'm sure I heard somewhere that, because of this, rugby has more injuries but American football has more fatalities. Vimescarrot (talk) 01:32, 7 December 2009 (UTC)
- This article [32] lists a bunch of fatalities amongst Fijian rugby players - so rugby players clearly do die during professional play. The only recent references I could find to fatalities in American football were heat-related fatalities amongst young players - not due to collisions during play. But PubMed [33] says that close to 500 people have died from brain-related injuries, and that was 70% of all fatalities - but mostly amongst children. So we should guess around 710 fatalities since the 1940s - which works out to maybe 10 or so fatalities per year. The Canadian public health people [34] found zero rugby fatalities between 1990 and 1994. But it's really hard to gather fair statistics because rugby is played in a LOT of countries - and American football in just a handful - but the number of people involved seems higher in the US and Canada than in other countries. SteveBaker (talk) 03:43, 7 December 2009 (UTC)
- I don't have any hard statistics (although I did come across this, which hints at it [35], in relation to discussion of Max Brito, who suffered a spinal cord injury at the 1995 Rugby World Cup and became tetraplegic), but it's my understanding that most serious rugby injuries occur at the lower levels of play, i.e. not involving the top professionals, and that this isn't just because of the obviously much greater numbers, but because of the greater experience and knowledge of how to avoid suffering or causing serious injuries, and also because top players tend to be fitter. At a guess I would presume it's the same for American football/gridiron, but I don't really know. There are a variety of articles here [36] [37] [38] [39] [40] [41] which I came across dealing with rugby injuries, particularly spinal cord ones, which may have some useful statistics. Nil Einne (talk) 11:35, 7 December 2009 (UTC)
- As a single data point, I can offer a distant uncle who died tragically young from a broken neck. Rugby at university, after winning a scholarship. The first of his family to go. So, at that level, people certainly have died. 86.166.148.95 (talk) 20:03, 7 December 2009 (UTC)
- I think some good points have already been made, but a key point which hasn't been mentioned yet is that in Gridiron/American football, players are tackled (or blocked) regardless of whether they have the ball. While I don't play nor have any interest in the sport, it's my understanding that one of the key expectations is that every player tries to take down a rival player. This compares to rugby, where you only tackle the player with the ball. Also, just repeating what was said above, a number of rules in rugby have evolved to try and reduce injury. I came across [42], which may be of interest, with comments from two people who've played both sports. Nil Einne (talk) 11:27, 7 December 2009 (UTC)
- Several sites seem to indicate that a lot of rugby injuries happen in the scrum...not so much during tackles. SteveBaker (talk) 14:35, 7 December 2009 (UTC)
- Precisely. ~ Amory (u • t • c) 16:35, 7 December 2009 (UTC)
- This is also mentioned in our Scrum (rugby union)#Safety of course. I didn't mention this earlier, but it's probably the key area of focus in the attempts to reduce the risk of injury (the 2007 rule change mentioned in Scrum (rugby union)#Safety for example). Some of course suggest an end to contested scrums a la Scrum (rugby)#Rugby league but, as the earlier article says, there's little support for that. There is, I think, a simple code of ethics that if anyone yells 'neck' during a scrum, you stop pushing (but probably don't pull out of the scrum, since if the guy's neck is injured that's not going to help), which is described here [43]. (Such codes of ethics aren't of course that uncommon in sport when they're called for.) That site also mentions a number of injuries, primarily in American football players. BTW I also came across this story [44] on Gareth Jones (rugby player), a professional Welsh rugby player who died last year, just to emphasise it isn't just in countries like Fiji that they happen. Nil Einne (talk) 17:54, 7 December 2009 (UTC)
- I'll also add (I don't think it was mentioned above) that since you cannot pass forward in Rugby, there is a much larger emphasis on speed, agility, and overall athleticism. In (American) Football, there is an emphasis on either throwing far or being really, really big. You want players who can run like the devil, but the emphasis still comes from the throws, not avoiding getting hit. ~ Amory (u • t • c) 16:35, 7 December 2009 (UTC)
Are there any other types of radiation besides electromagnetic radiation?
Hi! I would like to ask if there are any other types of radiation besides electromagnetic radiation? By electromagnetic radiation I mean all of its subcategories from gamma rays to infrasound. Thank you for your help! JTimbboy (talk) 23:19, 6 December 2009 (UTC) —Preceding unsigned comment added by JTimbboy (talk • contribs) 23:18, 6 December 2009 (UTC)
- There's gravitational waves. Of the four fundamental forces, only electromagnetism and gravity work over long distances, so only they really radiate in a noticeable way. There are also cosmic rays, but they are actually particles. --Tango (talk) 23:27, 6 December 2009 (UTC)
- What definition of radiation are we using here? The standard one contains lots of non-EM radiation (e.g., alpha particle, beta particles, neutrons, etc.). Sound is "acoustic radiation". --Mr.98 (talk) 00:24, 7 December 2009 (UTC)
- Two points. First: Infrasound is not a kind of electromagnetic radiation. Second: Being made of particles is not a reason to exclude cosmic rays from the list of radiations since light is also made of particles. Dauto (talk) 04:22, 7 December 2009 (UTC)
- Out of curiosity, if light is ambiguously photon/energy, do the same particles exist for different wavelengths? I.e. are there "photons" in X-rays and in the infrared? SDY (talk) 17:01, 7 December 2009 (UTC)
- Yes. All types of electromagnetic radiation are essentially the same; there's nothing special about the stuff our eyes happen to be able to see. Shorter-wavelength radiation corresponds to higher-energy photons. Algebraist 17:05, 7 December 2009 (UTC)
- The relationship between quantization energy and wavelength is defined by the Planck constant. SpinningSpark 17:08, 7 December 2009 (UTC)
- (ec) All wavelengths of light are made up of photons. Different wavelengths are made up of photons of different energies (E=hf - the energy per photon is the Planck constant times the frequency). --Tango (talk) 17:08, 7 December 2009 (UTC)
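To put numbers on E = hf (equivalently E = hc/λ for wavelength λ), a short sketch using standard constants; the example wavelengths are just representative picks for each band:

```python
# Photon energy E = h*c / wavelength for a few representative wavelengths.
H_PLANCK = 6.626e-34  # J*s, Planck constant
C_LIGHT = 2.998e8     # m/s, speed of light
EV = 1.602e-19        # joules per electron-volt

for name, wavelength in [("infrared, 10 um", 10e-6),
                         ("visible, 500 nm", 500e-9),
                         ("X-ray, 0.1 nm", 0.1e-9)]:
    energy_ev = H_PLANCK * C_LIGHT / wavelength / EV
    print(f"{name}: ~{energy_ev:.3g} eV per photon")
```

This spans roughly 0.1 eV for the infrared photon up to around 12 keV for the X-ray, which is why X-rays ionize while infrared merely warms.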
Thank you for your advice! Sorry, I mixed up infrasound with extremely low radio frequencies, because their frequency ranges partly coincide. Could anyone give me examples of particle-only radiation (I hope radiation is the right word to describe it)? And examples of radiation that works over short distances too? JTimbboy (talk) 15:24, 7 December 2009 (UTC)
- Alpha radiation, beta radiation and cosmic rays are all forms of particle radiation (I should say radiation of massive particles since, as Dauto points out, light can also be thought of as being made of particles). When I mentioned short-range forces I was talking about the strong nuclear force and the weak nuclear force. They only work on the scale of atoms and smaller, and the particles involved are almost always virtual particles, so can't really be thought of as radiation. --Tango (talk) 15:39, 7 December 2009 (UTC)
- There is also neutrino radiation which is of some importance to astronomy. See Cosmic neutrino background and Neutrino astronomy. There is a large burst of neutrino radiation from supernova explosions. SpinningSpark 16:51, 7 December 2009 (UTC)
Total acres in California?
How many acres are in the entire state of California? —Preceding unsigned comment added by 98.248.194.191 (talk) 23:32, 6 December 2009 (UTC)
- 104,765,440. --Tango (talk) 23:33, 6 December 2009 (UTC)
- 104,765,165, from US Census Bureau figures rather than the rounded off ones in the Wikipedia article. SpinningSpark 02:07, 7 December 2009 (UTC)
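The conversion behind both figures is plain arithmetic (1 square mile = 640 acres); the two answers differ only in the area figure fed in. A quick check (the square-mile values below are back-computed from the acre figures in this thread, so they are illustrative):

```python
# 1 square mile = 640 acres.
for label, sq_mi in [("Wikipedia (rounded)", 163696.00),
                     ("US Census Bureau", 163695.57)]:
    print(f"{label}: {sq_mi * 640:,.0f} acres")
```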
- Any place with a coastline will have a somewhat indeterminate exact area because of the fractal nature of the coastline - so going to that degree of precision is probably too much anyway. SteveBaker (talk) 02:59, 7 December 2009 (UTC)
- The fractal nature of the coastline causes a problem for determining the length of the coast but not so much for the area enclosed. The USCB quotes the figure in square miles to two decimal places, so they must think they have it nailed to within a handful of acres. The precision I have used roughly corresponds to the precision the source has used and I am sure we can take the USCB as being expert in these matters. SpinningSpark 03:47, 7 December 2009 (UTC)
- The difference between the two answers is 0.0002%. I would imagine that is well within the difference of the size of the state between high tide and low tide, so SteveBaker's answer is sort of correct; it probably isn't the fractal nature of the coastlines, it's that the actual size of the state is fluctuating, and the amount of that fluctuation is larger than the difference between these two measurements. That wouldn't necessarily apply to a state like Colorado; however, differences in measuring the land area may also arise. For example, the land area may be calculated from a map projection in one or both cases, and EVERY single map projection will introduce errors, just differing amounts. Plus, does one or both calculations assume a perfectly flat topography, or are they taking changes in elevation into account? --Jayron32 03:57, 7 December 2009 (UTC)
- Out of curiosity, I did the math. The coast of California is at least 850 miles long (much greater including small bays and inlets), and the difference between the two figures given is 275 acres. Along an 850-mile coast, we can lose 275 acres of land surface by submerging a bit less than three feet of coastline. Neat. TenOfAllTrades(talk) 04:28, 7 December 2009 (UTC)
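That arithmetic is easy to check; a minimal sketch in Python, assuming 43,560 square feet per acre and 5,280 feet per mile:

```python
# How wide a strip along an 850-mile coast holds 275 acres?
SQFT_PER_ACRE = 43_560
FT_PER_MILE = 5_280

strip_width_ft = (275 * SQFT_PER_ACRE) / (850 * FT_PER_MILE)
print(strip_width_ft)  # ~2.7 ft -- "a bit less than three feet"
```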
- Which would be a very small difference in tides, so QED. Of course, that still does not answer if we replace California with Colorado, which being bordered by straight lines and not having a coastline, should have no variability in its area. But, as I said, different methodologies will result in different measurements of land area. However, with Colorado, there is likely to be a really correct answer due to the nature of its boundaries, which is different from California. --Jayron32 04:38, 7 December 2009 (UTC)
- These calculated areas assume that all hills have been squashed flat to sea level (on some projection). In reality, the land area will be much greater (if you had to spray it, for example) when slopes are taken into account. Are there any calculations of the actual 3-D surface area? It shouldn't be too difficult to get an approximation with modern GPS technology. Obviously the answer would depend on the resolution, but a reasonable compromise might be to take a height measurement every few yards to avoid counting areas of small rocks and molehills. Perhaps the calculation would make a bigger difference where I live (northern UK) where most of the cultivated land is on a slope. Dbfirs 08:59, 7 December 2009 (UTC)
- The USCB says of the accuracy of its data;
- The accuracy of any area measurement data is limited by the accuracy inherent in (1) the location and shape of the various boundary information in the TIGER® database, (2) the location and shapes of the shorelines of water bodies in that database, and (3) rounding affecting the last digit in all operations that compute and/or sum the area measurements.
- Ultimately, this database takes its geographic information from US Geological Survey data. The USGS defines the coastline as the mean high water line, so no, California is not changing its area on a daily basis with the tides. It does, of course, change over time due to such things as erosion, deposition and changing sea levels. SpinningSpark 13:43, 7 December 2009 (UTC)
- Argh! Where do I start?
- The reason I invoked "fractals" was precisely because I assumed that the USGS would use the mean high water mark or some other well-defined standard for tidal extent. The area of a fractal may be just as unknowable as the length. The length is not just unknowable - but also unbounded (in a true mathematical fractal, the length is typically infinite). The error in calculating the area can be bounded (eg by measuring the convex hull of the water and of the land and calculating the exact area between them) - but the actual area is only known for some mathematical fractals. For example, we know the exact area of a Koch snowflake - but we don't know the true area of the Mandelbrot set [45] - and we certainly don't have an answer for "natural" fractals. If you have to measure the coastline accurate to within (according to TenOfAllTrades) three feet - then the fractal nature of having to carefully measure around the edge of every large rock of every crinkly little tidal inlet (to a precision of three feet!?!) ensures that your error will easily be a few hundred acres.
- This is why SpinningSpark and Jayron32 are both incorrect and TenOfAllTrades calculation (while interesting) isn't what matters here.
- To Dbfirs - GPS technology doesn't help you at all unless you are prepared to clamber over the entire state logging GPS numbers every few feet... and that's not gonna happen. GPS is irrelevant here. (If you wanted reasonably accurate height data for California, you'd probably use the NASA radar study they did with the Space Shuttle a few years ago.)
- As for measuring the 3D area of the land instead of a projection - please note that mountains are also fractal. The area of a true 3D fractal is typically infinite just as the length of a 2D fractal is infinite. (See: How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension). A mountain is a 3D fractal - a coastline is only a 2D fractal - but the problem is exactly the same. Do you include the surface area of every boulder? Every little rock and pebble? The area of every grain of sand on the beach? Do you measure area down to the atomic scale? Where do you draw the line?
- So - you have to use the projected area - that's your only rational choice - and the fractal nature of the coastline prevents you from getting an accurate measure (certainly not down to a precision of a couple of hundred acres). Since we're told the USGS uses the TIGER database - I'm not at all surprised that there are errors, because that data is a pretty crude vector outline of coastlines. That dataset was primarily drawn up for census purposes - it was never intended to be an accurate description of the shape of coastlines. SteveBaker (talk) 14:30, 7 December 2009 (UTC)
- The bigger question is: does it really matter? The purpose of measuring area and boundaries to extreme accuracy is basically property rights: i.e. this land is mine, delineated from your land by this line. The state charges property taxes based on so-and-so dollars per acre, etc. So the real calculation of the real area of an entire state, while intellectually interesting, is of little practical importance. Even for defining things like borders within bodies of water, and water rights, these are established by treaty and law, and these treaties and laws often include "fudge factors" for erosion and the like. The rules which define U.S. state borders around the Mississippi River, for example, include stipulations for slow erosion of river banks, in which case the border drifts with the river, so states along the Mississippi have differing areas every day. If the river "abandons" an old channel and forges a new channel (i.e. oxbow lake formation), the border does NOT change with the river, which explains why there are some locations in states along the Mississippi which are on the "wrong" side of the river, cf. Kaskaskia, Illinois. Yes, any calculations are going to be approximations, but civil authorities cannot live with the "mathematician's" solution which says "it's all fractals, and unboundable, so there is no definite answer". There always exists a set of error bars which is both workable in civil society and which eliminates the fractalness of the problem. --Jayron32 16:19, 7 December 2009 (UTC)
- You're right of course - this 0.0002% error is entirely unimportant. If anything both the Wikipedia number and the US Census Bureau number should be rounded off to the nearest thousand acres or so - and both groups should be chastised for using an unreasonable amount of precision! The only interesting point here is what the size and cause of the error truly is - and hence how much rounding should be applied. SteveBaker (talk) 16:43, 7 December 2009 (UTC)
- Wikipedia cannot be faulted for quoting data to the same precision used by our sources. To round off numbers because of some supposed fractal "smearing" would be original research on our part unless the sources themselves also say this is happening. SpinningSpark 18:14, 7 December 2009 (UTC)
- The discrepancy is suspiciously close to the difference between the international foot and the U.S. survey foot, or international acres and U.S. survey acres.—eric 18:59, 7 December 2009 (UTC)
- ... and since the area measured the old way with a surveyor's chain would be significantly greater (because of slopes), the slight inaccuracy in a theoretical projected area doesn't really matter. Thanks SteveBaker, I should have said satellite survey, since GPS satellites only transmit (except for correction data). I was thinking of Google Earth, where heights are given (or estimated?) every few yards. In the UK, we could use Ordnance Survey data which is surprisingly detailed on height. I agree that there is no one correct answer for 3-D area, which is why I said average every few yards, but there is no "correct" way to do this, so it will probably never be done. Projected area depends on the projection applied to the calculation. Steve suggests rounding to the nearest thousand acres, and that sounds reasonable in view of the inherent inaccuracies. Do we have any experts on projections who could estimate the differences in the area estimated using different projections? Dbfirs 20:04, 7 December 2009 (UTC)
Beef slaughterhouse
A friend touring a beef slaughterhouse said he saw cows walking into an open horizontal barrel one at a time, and when the gate was closed the cow inside the barrel started shaking violently until a man put a hammer-handle-like device between the cow's ears to kill it. He said the violent shaking was because the cow knew it was going to be killed. Is this true, or is there some other explanation? 71.100.160.161 (talk) 23:42, 6 December 2009 (UTC)
- There are procedures used in some slaughterhouses where cattle, once they enter the stunning box, are electrically stunned using tongs or other devices before the use of a captive bolt pistol, and are then ejected for exsanguination. Nanonic (talk) 00:06, 7 December 2009 (UTC)
- The cattle have been under severe stress since they were loaded onto the truck back at the farm; not much attention is paid to comforting cows on the way to slaughter. The natural response of a cow to any sort of confinement or other stress is to run away from it (but not too far; they're herd animals). When the cow goes into the killing box, it's being confined to very close to zero movement; this is essential to get a clean kill. For the cow, though, it's now gone from severe stress to ultra-maximum stress and its body will react accordingly. However, to say the cow "knows" it is going to be killed, to me, imparts rather more self-awareness than a cow actually possesses. They are sensing a dangerous situation and responding instinctively. Franamax (talk) 02:00, 7 December 2009 (UTC)
- I wonder whether the cow was shaking because of the electrical stunning? Electric shock can cause muscle tremors. If so, it wouldn't have been conscious at the time. SteveBaker (talk) 02:50, 7 December 2009 (UTC)
- No, he says it walked into the barrel and did not start shaking until the gate was closed. The front part of the barrel obscured everything but the head, and it was not until 5 or 10 seconds later that the operator walked up on a platform and administered the (I presume) shock. 71.100.160.161 (talk) 03:37, 7 December 2009 (UTC)
- Nope, the cow doesn't know that closing a gate behind it means it is now going to be killed. What it does know is that it can no longer back up ergo it can no longer move at all - this is my maximum stress scenario. If you're going to confine a cow, you have to be sure the pen has higher sidebars than the cow, else it might try to jump over. Think of this from the POV of the cow, everything that has happened to it today has been bad and now it's getting worse. You can throw down some nice alfalfa hay in front of a cow in a pasture field and very calmly shoot it in the head (believe me). The cow is just panicking from the general situation. Franamax (talk) 04:16, 7 December 2009 (UTC)
- The problem with that scenario, which is why the first is used, is transporting hundreds of pounds of meat from the field to the slaughterhouse, which is expensive. If you killed every cow in an open field, then paid somebody to drag it onto a truck, then transported the dead carcass to the abattoir, it would be a) very expensive and b) less sanitary due to the amount of time the dead meat isn't being processed appropriately. If you actually kill the cow at the slaughterhouse, you get to prep the meat almost instantly after death (which is more sanitary) and the cow basically walks himself to his own death, saving LOTS of money and labor. If you find this method of death to be immoral in some way, then you may not want to eat cow. If this doesn't really bother you, then feel free to eat the cow. --Jayron32 04:56, 7 December 2009 (UTC)
- You may be interested in the work of Temple Grandin to reduce the anxiety of cattle in the slaughterhouse. -- Coneslayer (talk) 13:48, 7 December 2009 (UTC)
Since we can't ask the cow's opinion, the question is moot. Cuddlyable3 (talk) 11:21, 7 December 2009 (UTC)
- Sure we can. The answer is moo
t. --Stephan Schulz (talk) 12:02, 7 December 2009 (UTC)
- It's a moo point. -- Coneslayer (talk) 13:48, 7 December 2009 (UTC)
- Stephan just made that joke. --Tango (talk) 17:20, 7 December 2009 (UTC)
December 7
Change in kinetic energy
It's pretty easy to show, classically, that the observed change in kinetic energy doesn't depend on the frame of reference of the observer: it is a direct result from the equation relating the kinetic energy in a certain reference frame with that of the center of mass reference frame. How can it be shown that the change in kinetic energy (or energy), relativistically, doesn't depend on the reference frame? Do you have to go into the mathematical details to derive this result, or is there an a priori way of coming to the same conclusion?
A second, related question: Does an object's potential energy change between reference frames? I would think it would, because an object's potential energy depends on the relative distance between two objects, which, by Lorentz contraction, changes with reference frame. —Preceding unsigned comment added by 173.179.59.66 (talk) 00:18, 7 December 2009 (UTC)
- That what you said is not even true classically, let alone relativistically. Dauto (talk) 03:12, 7 December 2009 (UTC)
- It was (is) difficult to understand your question. That's probably why you got no answers. Also I don't think this field is well studied. Dauto: what is not true classically? Ariel. (talk) 09:16, 7 December 2009 (UTC)
- The OP said "It's pretty easy to show, classically, that the observed change in kinetic energy doesn't depend on the frame of reference". Well, that's not true. the change in kinetic energy DOES depend on the frame of reference. Dauto (talk) 14:46, 7 December 2009 (UTC)
- Why not, assuming that the system is closed? If the total kinetic energy in a certain reference frame is K, and the total kinetic energy in the center of mass reference frame is K_0, and the velocity of the center of mass relative to the reference frame in question is V, then K = K_0 + MV^2/2 (where M is the total mass). So ΔK = ΔK_0 (V won't change if it's a closed system). So the change in total kinetic energy will always be the same as the change in total kinetic energy of the center of mass, and thus the change in kinetic energy will always be the same. —Preceding unsigned comment added by 173.179.59.66 (talk) 17:45, 7 December 2009 (UTC)
- Why should we assume the system is closed? Dauto (talk) 18:37, 7 December 2009 (UTC)
- You aren't making sense. The kinetic energy in the centre of mass frame is zero. Or are you talking about the centre of mass of an n-body (n>1) system? It is easier to consider one object: if in my frame a 2 kg object is moving at 1 m/s and speeds up to 2 m/s, its KE increases from 1 J to 4 J. If your frame is moving at 1 m/s relative to mine in the same direction as the object, then in your frame it starts off at rest (0 J) and speeds up to 1 m/s (1 J). In my frame the increase in energy was 3 J, in yours it was 1 J. As you can see, the change in kinetic energy is dependent on the frame of reference. (That's the classical view; the relativistic view is similar and reaches the same overall conclusion.) --Tango (talk) 19:11, 7 December 2009 (UTC)
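Tango's numbers, as a minimal sketch in Python:

```python
# Classical kinetic energy of the same 2 kg object, seen from two frames.
def kinetic_energy(m, v):
    return 0.5 * m * v**2

m = 2.0  # kg
# In the first frame: 1 m/s -> 2 m/s
print(kinetic_energy(m, 2.0) - kinetic_energy(m, 1.0))  # 3.0 J
# In a frame moving at 1 m/s alongside the object: 0 m/s -> 1 m/s
print(kinetic_energy(m, 1.0) - kinetic_energy(m, 0.0))  # 1.0 J (frame-dependent)
```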
- Potential energy certainly changes. Consider two identical springs, one held highly compressed by a (massless) band. Now zip past them at relativistic speed; the total energy of each must scale by γ, and you must see some of that increase in the compressed spring as additional potential energy, because the other one has the same rest mass, thermal energy, and (rest-mass-derived) kinetic energy, and the difference in energy is larger than the compression energy in the springs' rest frame. --Tardis (talk) 15:38, 7 December 2009 (UTC)
Redox rxn. How can I tell if a reaction is redox or not?
How can I tell if a reaction is a redox reaction just by looking at the chemical equation? Can someone show me an example? Thank you.161.165.196.84 (talk) 04:31, 7 December 2009 (UTC)
- A redox reaction is one where the oxidation numbers of some of the elements change in the reaction. All you do is assign oxidation numbers to every element in the chemical reaction. If the oxidation number of any element differs between the left and right sides, then it is a redox reaction. If all of the oxidation numbers stay the same on both sides, then it is not a redox reaction. But you need to actually know how to assign oxidation numbers before you can do anything else here. Do you need help with that as well? --Jayron32 04:34, 7 December 2009 (UTC)
Yes, that would be great. My understanding is this: Hydrogen is usually +1, Oxygen is usually -2. In binary ionic compounds the charges are based on the cation's (metal) and the anion's (non-metal) group in the Periodic Table. Polyatomic ions keep their charge (Ex// Phosphate is -3, Nitrate is -1).
Now, my textbook says the following and I am not sure what this means: "In binary molecular compounds (non-metal to non-metal), the more "metallic" element tends to lose, and the less "metallic" tends to gain electrons. The sum of the oxidation numbers of all atoms in a compound is zero." I'm not quite sure what the first part affects, but the second part is simply saying that once all oxidation numbers have been assigned, the sum of those numbers should be zero. Is this correct? — Preceding unsigned comment added by 161.165.196.84 (talk • contribs)
- Yeah, that's it. You should assign oxidation numbers per element not just for the polyatomics as a whole. Let me give you a few examples of how this works.
- Consider CO2. Oxygen is usually -2, and there are two of them, so that lone carbon must be +4, to sum up to 0, the overall charge on the molecule. C=+4, O=-2
- Consider P2O5. Oxygen is usually -2, and there are 5 of them, so the TWO phosphorus have to equal +10, so EACH phosphorus has an oxidation number of +5. P=+5, O=-2
- Consider H2SO4. Oxygen is usually -2, and hydrogen is almost always +1. That means that we have -8 for the oxygen and +2 for the hydrogen. That gives -6 total, meaning that the sulfur must be +6 to make the whole thing neutral. H=+1, S=+6, O=-2
- Consider the Cr2O7-2 ion. In this case, our target number is the charge on the ion, which is -2, not 0. So, in this case we get Oxygen usually -2, and there are 7 of them, so that's a total of -14. Since the whole thing must equal -2, that means the two Chromiums TOGETHER must equal +12, so EACH chromium has to equal +6. Cr=+6, O=-2.
- There are a few places where you may slip up. H is almost always +1, except in the case of metallic hydrides; in those cases (always of the formula MHx, where M is a metal) H=-1. Also, there are a few exceptions to the O=-2 rule. If oxygen is bonded to fluorine, such as in OF2, fluorine being more electronegative will make the oxygen positive, so O=+2 in that case. Also, there are a few types of compounds like peroxides (O=-1) and superoxides (O=-1/2) where oxygen does not have a -2 oxidation number. These will be fairly rare, and you should only consider them where using O=-2 doesn't make sense, for example in H2O2: if O=-2 then H=+2, which makes no sense since H has only 1 proton. So in that case, O=-1 is the only way it works. However, these are rare exceptions, and I would expect almost ALL of the problems you will face in a first-year chemistry class to be the more "standard" types I describe above. --Jayron32 06:09, 7 December 2009 (UTC)
- Note that O being in oxidation state (-1) is the reason why peroxides are such strong oxidants. Oxygen is more stable in oxidation state (-2), and so peroxides are susceptible to nucleophilic attack where one oxygen atom accepts electrons and pushes out hydroxide or alkoxide as the leaving group (because of the weakness of the oxygen-oxygen bond). John Riemann Soong (talk) 06:29, 7 December 2009 (UTC)
- An important note is that "oxidation states sum to zero" ONLY in neutral compounds. If your compound is an ion (for example, perchlorate, phosphate, or NAD+), then the oxidation states will sum to the charge of that ion. E.g. the oxidation states in hydronium sum to +1. (It makes sense, right?) John Riemann Soong (talk) 06:33, 7 December 2009 (UTC)
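The bookkeeping above can be summarised in a short sketch; this Python fragment assumes only the "usual" H = +1 and O = -2 rules and exactly one unknown element per formula:

```python
USUAL = {"H": +1, "O": -2}  # the "usual" oxidation numbers

def solve_unknown(composition, charge=0):
    """composition: list of (element, atom count) pairs; solve for the
    one element without a 'usual' value so the total matches the charge."""
    known = sum(USUAL[el] * n for el, n in composition if el in USUAL)
    (el, n), = [(el, n) for el, n in composition if el not in USUAL]
    return el, (charge - known) / n

print(solve_unknown([("C", 1), ("O", 2)]))              # ('C', 4.0)  CO2
print(solve_unknown([("H", 2), ("S", 1), ("O", 4)]))    # ('S', 6.0)  H2SO4
print(solve_unknown([("Cr", 2), ("O", 7)], charge=-2))  # ('Cr', 6.0) dichromate
```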
This is helpful, thank you very much to all. Chrisbystereo (talk) 08:09, 7 December 2009 (UTC)
Value of a microchip
If you were to take all the metals and so on out of a chip (your choice: the newest Pentium, a digicam's image sensor, etc.) and price them according to whatever tantalum/aluminum/titanium/cobalt/etc. is going for, what would a chip's value be? I'm just curious what the difference is between the cost of the components and the cost of the labor and such put into making it all work together. Has anyone ever even figured this out before? Dismas|(talk) 04:46, 7 December 2009 (UTC)
- The chip is basically a few grams of silicon, plastic, and maybe copper and iron. I can't imagine that the materials would be more than a few U.S. cents, if that much. The lion's share (99.99%) of the cost of the chip itself is labor. --Jayron32 04:51, 7 December 2009 (UTC)
- I'm very aware that the value would be small but I was just wondering how small. My job is to make them and I've been spending the last few weeks looking at them under a scope and this question popped into my head. Dismas|(talk) 05:10, 7 December 2009 (UTC)
- Well, you probably have more accurate measures on the amounts of metal deposited in your process; and if you discount everything that gets wasted when it's etched away, you probably end up with a chip that contains a few nanograms of aluminum, a few picograms of boron, and a couple milligrams of silicon. Other trace metals depend on your process. Perhaps a better way to price everything is to count the number of bottles of each chemical solution or metal ingots that you consume in a given day/week/whatever, and divide by the number of chips produced. Again, this doesn't account for waste material, so you have to do some estimation. Nimur (talk) 08:06, 7 December 2009 (UTC)
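As a purely hypothetical sketch of that estimate (every figure below is an invented placeholder, not real process data):

```python
# Per-chip raw-material cost from weekly consumption (all figures invented).
consumables = {                  # material: (kg used per week, $ per kg)
    "silicon (wafer grade)": (120.0, 60.0),
    "aluminum":              (0.5, 2.5),
    "copper":                (0.8, 9.0),
}
chips_per_week = 500_000

weekly_cost = sum(kg * usd for kg, usd in consumables.values())
print(f"material cost per chip: ${weekly_cost / chips_per_week:.5f}")
# With these placeholder numbers, a bit over a cent per chip --
# and most of that is the wafer, not the deposited metals.
```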
- Pure silicon costs a lot more than impure. Does that difference count as labor to you? Metal ore in the ground is free for the taking. Making the base metal is all labor. Pretty much the cost of everything is just labor and energy. There is no "component price" for things, the only question is where do you draw the line and say "this is labor cost", and this is "component cost". I suppose - to you - it depends on if you buy it or make it. But globally there is no such line. To answer the question you are probably actually asking: I would suggest adding up the estimated total salary for everyone in your company, and subtract that from the gross income, and subtract profit. (If it's a public company you should be able to get those numbers.) Then you'll have things like overhead, and energy to include or not, as you choose. Ariel. (talk) 09:12, 7 December 2009 (UTC)
- (either I was still sleepy, or I had an unnoticed ec - Ariel says essentially the same above) Also, the question is not very well-defined. For a microprocessor, you need very pure materials. A shovel of beach sand probably has most of the ingredients needed, but single-crystal silicon wafers are a lot more dear than that. If you pay bulk commodity price for standard-quality ingredients, the price of the material for a single chip is essentially zero. But in that case you will also need a lot of time and effort to purify them to the necessary level. --Stephan Schulz (talk) 09:13, 7 December 2009 (UTC)
- Wouldn't a vast portion of the cost be R&D? I remember someone quoting The West Wing on this desk about pharmaceuticals in a way that would be relevant: "The second pill costs 5 cents; it's that first pill that costs 100 million dollars." Livewireo (talk) 18:18, 7 December 2009 (UTC)
- Indeed. The cost is almost entirely R&D, I would think. That is a labour cost, though. --Tango (talk) 20:56, 7 December 2009 (UTC)
- Nevermind. I said I make them, I didn't say I owned the company and had access to all the costs associated with making them. I just wanted to know how much it would be if I melted it down and sold the constituent metals and such. I didn't think I was being that unclear. I'll just assume it's vanishingly small. Dismas|(talk) 20:19, 7 December 2009 (UTC)
- It would cost far more to separate the components than the components would be worth. Your question is easy to understand, it just doesn't have an answer - not all questions do. --Tango (talk) 20:55, 7 December 2009 (UTC)
- The fun of the question is how close to zero it is. Bus stop (talk) 21:10, 7 December 2009 (UTC)
- Thank you, Bus stop. I think you get my question most of all. I didn't mention labor at all. Or R&D. Nor did I ever say anything about the cost of separating the components. Again, nevermind. Dismas|(talk) 22:01, 7 December 2009 (UTC)
- Ok, but you can't get round the purity issue mentioned above. There isn't a single value for silicon, say, it depends on the purity. How pure the silicon would be depends on how much labour you put into separating the components. --Tango (talk) 22:12, 7 December 2009 (UTC)
- And the quantity of metals depends wildly on the actual die, photo masks, etc. As I mentioned above, you can estimate the masses of these constituent ingredients better than we can. Different mask patterns can leave as much as 100% or as little as 0% of a particular deposited layer - so there is no "in general" answer. You just have to estimate layer thickness and layer area for each stage of the process. Some typical numbers for areas and thicknesses might come out of articles like Self-aligned gate#Manufacturing process. Nimur (talk) 22:17, 7 December 2009 (UTC)
Mesomeric versus inductive effects for the pKa of catechol (ortho-diphenol)
I actually thought that o- and p-benzenediols should have higher pKas than phenol because of the destabilising mesomeric effect, but it seems that catechol (the ortho-diol) has a pKa of 9.5 (according to Wikipedia). Google seems to say resorcinol (the meta-diol) has a pKa of 9.32, while para-diphenol is 9.8. This source seems to give a different set of values.
My hypothesis is that the inductive effect is also at play, where having a carbanion resonance structure next to a (protonated) oxygen atom will stabilise it somewhat. And of course, the further away the two groups are from each other, the weaker the inductive effect, which is why the para-diphenol would have the highest pKa of all the diphenols, while the meta-diol would barely see any mesomeric effect and mostly see the inductive effect. Is this reasonable? Is it supported by literature? John Riemann Soong (talk) 05:44, 7 December 2009 (UTC)
phenols as enols
I'm looking at this synthesis where a phenol is converted into a phenoxide and then used to perform a nucleophilic attack (in enol form) on an alkyl halide. My question is: why use lithium(0)? It seems a lot of trouble when you could just simply deprotonate phenol with a non-nucleophilic base like tert-butoxide. Is it because a phenolate enolate is more nucleophilic at the oxygen? If so, why not use something like lithium tert-butoxide to bind the phenolate more tightly? John Riemann Soong (talk) 06:24, 7 December 2009 (UTC)
- I disagree with "a lot of trouble". Weigh a piece (or measure a wire length) of metal, drop it in, and you're done. Seems no worse than measuring your strong base (often harder to handle and/or harder to measure accurately). And where does that base come from? Do you think it more likely to be a benefit or a problem to have an equivalent of t-butanol byproduct (the conjugate acid of your strong base) in the reaction mixture (note that the chosen solvent is non-Lewis-basic) and during product separation/purification? The answer to every one of your "why do they do it that way" questions is "because it was found empirically to work well enough and provide a good trade-off of results vs cost." Really. Again, nothing "in reality" works as cleanly as on paper, so you really have to try lots of "seems like it should work" conditions, and you find that every reaction is different and it's very hard to predict or explain why a certain set of conditions or reactants is "best" (for whatever "best" means). It's interesting to discuss these, but I think you're going to get increasingly frustrated if you expect clear "why this way?" answers for specific reactions. On paper, any non-nucleophilic base will always work exactly as a non-nucleophilic base, and that's the fact. In the lab, one always tries small-scale reactions with several routes before scaling up whatever looks most promising. DMacks (talk) 07:05, 7 December 2009 (UTC)
- Along those lines, the best source for a certain reaction is the literature about that reaction. The ref you saw states "The use of lithium in toluene for the preparation of alkali metal phenoxides appears to be the most convenient and least expensive procedure. The procedure also has the merit of giving the salt as a finely divided powder." DMacks (talk) 07:12, 7 December 2009 (UTC)
- Sorry I guess my experience with oxidation-state 0 group I and II metals so far has been with Grignard and organolithium reagents. From an undergrad POV, they are such an awful pain to work with (compared to titrating a base and acid-base extraction)! Also -- deprotonated phenols can act like enols? Why aren't aldol side reactions a problem to worry about during the synthesis of aspirin from salicylic acid? And why aren't enol ether side reactions a worry here? John Riemann Soong (talk) 07:25, 7 December 2009 (UTC)
- The cited ref notes that the enol-ether side product is a huge problem (3:1 of that anisole product vs the "enolate α-alkylation" product they are primarily writing about). If the goal is "a difficult target", it doesn't matter if the reaction that gives it actually only gives it as a minor product compared to some other more likely reaction. The standard result of "phenoxide + SN2 alkylating agent" is O-alkylation, with other isomers being the byproduct. However, in general for enolates, the preference for O-alkylation vs C-alkylation is affected by solvent (especially its coordinating ability), electrophile, and metal counterion. It's unexpected to me that they get so much of it, but if there's any there and you want it badly enough, you go fishing through all the other stuff to get it. That's what makes this reaction worthy of publication...it does give significant amounts of this product and allows it to be purified easily from the rest. DMacks (talk) 09:16, 7 December 2009 (UTC)
- Sorry I guess my experience with oxidation-state 0 group I and II metals so far has been with Grignard and organolithium reagents. From an undergrad POV, they are such an awful pain to work with (compared to titrating a base and acid-base extraction)! Also -- deprotonated phenols can act like enols? Why aren't aldol side reactions a problem to worry about during the synthesis of aspirin from salicylic acid? And why aren't enol ether side reactions a worry here? John Riemann Soong (talk) 07:25, 7 December 2009 (UTC)
phenol-type quinoline
What do you call a phenol-type quinoline with a hydroxyl group substituted at the 8-position? I'm trying to find out its pKa (in neutral, nonprotonated form), but that's hard without knowing its name.
(Also, it is an amphoteric molecule, right?) These two pKas appear to interact via resonance, making for some weird effects on a problem set... (I'm considering comparative pH-dependent hydrolysis rates (intramolecular versus intermolecular) for an ester derivative of this molecule...) John Riemann Soong (talk) 07:30, 7 December 2009 (UTC)
- Standard IUPAC nomenclature works pretty well for any known core structure: just add prefixes describing the location and identity of substituents. So quinoline with hydroxy on position 8 is 8-hydroxyquinoline (a term that gives about 132,000 google hits). Adding "pka" to the google search would help find that info. The protonation of these types of compounds is really interesting (both as structural interest and in the methods to study it)! All sorts of Lewis-base/chelation effects. DMacks (talk) 09:03, 7 December 2009 (UTC)
acidic proton question (Prilosec & Tagamet!)
Okay, sorry for posting the 4th chem question in a row! I'm trying to figure out the acidic proton in two molecules, Tagamet and Prilosec. I'm given a pKa of 7.1 for the former and 4.0 and 8.8 for the latter. I don't know which sites in Prilosec the two pKas correspond to. (I suspect there are more basic and acidic sites, but perhaps they are not detailed, or fall outside the range of discussion?)
With Tagamet, imidazole is the most obvious candidate for being a base with a pKb near 7, but I'm wondering: why not the guanidine-type residue? It has a nitrile group on it -- but by how many pKa units would that shift it? The pKa of guanidine is 1.5, so plausibly a CN group could raise it to 7?
Oh yeah, and Prilosec. I'm ruling out the imidazole proton, but I feel that the alpha-carbon next to the sulfoxide group is fairly acidic, because it has EWGs on both sides PLUS the carbanion could be sp2-hybridised if the lone pair helps "join" two conjugated systems. But the imidazole and pyridine lone pairs also look good for accounting for some of those pKas. Why aren't there 3 pKas? I think the imidazole-type motif in Prilosec is responsible for the pKa (pKa of the conjugate acid) of 8.8 -- but why the elevated pKa compared to normal imidazole? And why would the pKa of pyridine fall that low? (It has an electron-donating oxygen substituted in the para-position!) But assigning the pKas the other way round doesn't make sense either. Slightly disconcerted, as I know these lone pairs are basic. John Riemann Soong (talk) 09:50, 7 December 2009 (UTC)
science
At first there were just two societies, i.e. hunting and gathering societies, but there was still life; the people were still living. No inequality was present, everyone was equal, and there was peace all over. But nowadays, because of science, there is no peace, no equality, no respect; everyone is indulged in earning money. So what would happen if science and its inventions were removed from our society? Should we go back to a hunting and gathering society just for the sake of peace and equality? —Preceding unsigned comment added by Umair.buitms (talk • contribs) 13:31, 7 December 2009 (UTC)
- What makes you think hunter-gatherer societies were peaceful? They generally had greater equality since there wouldn't be enough food to go around if there was an elite that didn't hunt or gather, but they certainly fought neighbouring tribes. Do you really want equality, though? Surely everyone having a low standard of living is worse than some people having a high standard of living and others having a higher standard, which is the case in the modern developed world. --Tango (talk) 13:38, 7 December 2009 (UTC)
- I agree with Tango - there was unlikely to have been "equality" in the early days of humanity - and certainly no "peace". In modern times, there are still a few hunter-gatherer societies out there in places like the Amazon rainforest that science has not touched. For them, there is still warfare between tribes - women are still given one set of jobs and the men others - and there are still tribal leaders who rule the lower classes. The one place where equality is present is in "racial equality" - but that's only because they don't routinely meet other races because of the geography.
- As for removing science and invention - our society literally could not exist that way. The idea that (say) 600 million Americans could just put on loincloths and start hunting and gathering is nuts! There would be nowhere near enough food out there for that to happen - without modern agriculture, we're completely incapable of feeding ourselves. We would need for perhaps 599 million people to die before the one million survivors could possibly have enough to eat.
- I think your idyllic view of hunting & gathering is severely misplaced. It's a cruel, brutal existence compared to the relative peace and tranquility that is modern life.
- SteveBaker (talk) 13:51, 7 December 2009 (UTC)
- It's a very common if very confused view, that all human problems are a product of modernity and so forth. It's true we have some new problems... but the problems of civilization are all there in part because living outside of civilization is so brutal. It is similar to the point of view that animals want to be "free"—most appear to want a stable food source more than anything else, because being "free" means starving half of the time. That's no real "freedom". --Mr.98 (talk) 14:44, 7 December 2009 (UTC)
- At least Sabre Toothed Tigers are extinct this time around. APL (talk) 15:16, 7 December 2009 (UTC)
- Do you have any references to support your utopian view of the hunter-gatherer societies? All evidence I've seen points to a society in which tribal warfare is common. Women are possessions. Children are expendable. And attempts to advance society are only accepted if they allow the tribe to attack the neighbors and steal more women and children. I feel that modern society is a bit more peaceful than that. -- kainaw™ 13:57, 7 December 2009 (UTC)
- To be fair, women as possessions is more what you get after the arrival of basic agriculture (herding), when the link between sex and children is more clearly understood and the concept of 'owning' and inheriting is established. Societies everywhere have cared about their own children: they were not viewed as expendable, except in as much as 'people who are not my family/tribe' are viewed so. If children were really viewed as expendable, there wouldn't be any concern about continuation of the family and providing inheritance, and hence there would be no possessiveness of women: the whole 'women as possessions' thing is about ensuring the children they bear are verifiably the children of the man who thinks they're his: without that, there's no reason for the man to care if the woman has sex with other men. The OP may have a hopelessly utopian view, but I'm not convinced yours is any more accurate. If nothing else, the Old Testament gives us accessible sources written up to three thousand years ago: the overwhelming feeling I get from it is how little the way people think has changed in the most basic ways. It is full of people caring very much about their children, way back. 86.166.148.95 (talk) 18:50, 7 December 2009 (UTC)
I'm going to be all cheesey and link you to Billy Joels We didn't start the fire. 194.221.133.226 (talk) 14:06, 7 December 2009 (UTC)
- I think even One Million Years B.C. was closer to the truth than the OP's utopian vision. Though their makeup probably wasn't as good :) Dmcq (talk) 14:27, 7 December 2009 (UTC)
OP, the viewpoint you expressed is known as anarcho-primitivism. You can read our wikipedia article, which includes views of both proponents and critics. See also Luddite, Neo-Luddism etc for less extreme versions of anti-modernism movements. Abecedare (talk) 15:39, 7 December 2009 (UTC)
- Also, for what it's worth, the development of science and so-called modernity was really just the logical outcome of a successful hunter-gatherer society. It's a lot easier, more efficient, and more survivable to build a house and a farm instead of wandering around hoping you find food, water, and shelter. Agriculture leads to spare time, spare time leads to advancements, which lead to greater agriculture, which eventually leads to Twitter. You can't go back; nobody would choose death over relaxation and creativity. ~ Amory (u • t • c) 16:26, 7 December 2009 (UTC)
- It's not that inevitable - plenty of societies didn't develop agriculture until they were introduced to it by other societies, some in modern times (eg. Australian Aborigines). Things wouldn't have needed to be too different for agriculture to have never been developed anywhere (or, at least, not developed until millennia later than it was). --Tango (talk) 16:34, 7 December 2009 (UTC)
- These are basically value judgements we are all making. These are subjective answers we are giving. Not surprisingly we favor what we have. Bus stop (talk) 16:41, 7 December 2009 (UTC)
- Re: not inevitable: I seem to recall [citation needed] that one of the ways archaeologists identify remains as early-domesticated goats rather than wild goats, is to look for signs of malnutrition. Captive goats were less well fed than wild goats. 86.166.148.95 (talk) 18:54, 7 December 2009 (UTC)
- I've never heard that, but it makes some sense. Goats were often raised for milk, rather than meat, and they don't need to be particularly well nourished to produce milk (actual malnourishment would stop lactation - animals usually don't use scarce resources on their children if they are at risk themselves). --Tango (talk) 18:57, 7 December 2009 (UTC)
- The actual experience of prehistoric hunter-gatherers is a serious bone of contention among anthropologists, made all the more difficult by various wild claims made by armchair researchers of the 18th and 19th centuries. (See state of nature and Nasty, brutish, and short). Among the complications are these: HGs lived in wildly diverse ecologies, meaning they had wildly diverse lifestyles, with wildly diverse advantages and disadvantages - how can you evaluate the lifestyle of an averaged out Inuit/San person meaningfully? Also, the few remaining HGs live at the very edges of the habitable earth, which makes it difficult to extrapolate what life was like in more normalized areas. In very, very, generic terms you can say this: people who lived the HG lifestyle worked a lot less per day than the farmers their descendants eventually became, they had few diseases compared to farmers, and they probably had more well-rounded diets than farmers. While there was surely enough sexual discrimination to go around, it was probably not nearly as bad as in the farming communities and the whole "slavery to acquisition" we in the modern world play to was pretty much non-existent; you can't build up wealth if you've got to slug everything on your back. On the other hand, they had relatively slow population expansion so when disasters did hit, it might spell the end to the band or tribe. Inter-band warfare was a real hit and miss kind of thing too - there were neighbours to be trusted and others that weren't, but with no central authority, there was really nobody "watching your back" if relations got out of hand. On the whole, it probably was quite a nice existence if you happened to be living in a reasonable area and didn't mind living your life within an animistic/mystical framework where you have enormous understanding of the surface of the world around you, but virtually no grasp of the real reason why anything happens. No books, no school, no apprenticeship, very little craft specialization beyond perhaps "women gather, men hunt" kind of thing. Matt Deres (talk) 21:43, 7 December 2009 (UTC)
- Of course you can build wealth if you need to haul it around. Your form of wealth would most likely be draft animals so that you can carry more stuff around. Googlemeister (talk) 22:21, 7 December 2009 (UTC)
- Domestication of animals is part of the road to civilisation. If we're talking about early hunter-gatherer societies (which I think we are - if we're talking about later h-g societies then you may be right), then they wouldn't have domestic animals. They wouldn't have had much to carry around. Simple clothes, stone tools, ceremonial items. Their economy was 99.9% food and I don't think they had the means to preserve it for long. --Tango (talk) 22:47, 7 December 2009 (UTC)
1. This subject would have been more appropriately placed on the Humanities Desk.
2. The OP is delusional.
Life, in the state of nature, is "solitary, poor, nasty, brutish, and short" leading to "the war of all against all."
— Thomas Hobbes, Leviathan (1651)
B00P (talk) 23:24, 7 December 2009 (UTC)
- Getting back to the OP: you may have been misled by the name into thinking there were two societies originally, one that hunted and one that gathered. In fact, hunter-gatherer is a generic name for pre-agricultural groups, almost all of which ate both types of foods: those (mostly animals) which some members (mostly male) had hunted, and those (mostly plants) which others (mostly female) had gathered. (Another name for these societies, should you wish to research further, is foragers.) It is true that most of these groups had to be mobile, to follow the food, and as such could carry little with them, so they did not accumulate wealth in the sense in which we understand it. However, there are always exceptions. One well-studied example is the Indigenous peoples of the Pacific Northwest Coast, who lived in a rich and fertile ecosystem, and particularly the Haida, who developed an impressive material culture -- so much so that they had to invent the potlatch in order to get rid of (or share around) the surplus. And that relates to another sort of wealth, a social and cultural wealth as opposed to a material one. Much harder to demonstrate than grave goods! BrainyBabe (talk) 23:29, 7 December 2009 (UTC)
- I think that in good times they had peace of mind beyond our wildest imagination. Bus stop (talk) 23:32, 7 December 2009 (UTC)
(edit conflict) I highly doubt the world would be able to support 6+ billion hunter-gatherers. I would think that alone would answer the OP's question - if we, as a species, reverted back to hunting and gathering, it would require the deaths of billions. I would say that suggests that, "for the sake of peace and equality", we definitely should not do this. TastyCakes (talk) 23:34, 7 December 2009 (UTC)
Photon
Does the energy of a photon depend on the reference frame? I would think so, because observers in different reference frames measuring the frequency of a photon will measure different values (because their clocks run at different rates), and E=hf. But then a paradox seems to arise: if observer A measures the energy of a photon to be E, then an observer B moving relative to A should measure a lower energy, E/γ. But in B's reference frame, by symmetry, it is A who should measure the lower energy. So who measures what energy? —Preceding unsigned comment added by 173.179.59.66 (talk) 17:54, 7 December 2009 (UTC)
- See redshift. Redshift is precisely photon energies being different in different frames. Redshift is determined by the relative velocity between the source and observer. The difference in velocity between two observers would mean each sees a different redshift - the one receding from the source faster (or approaching the source slower) will see the energy as lower. --Tango (talk) 18:04, 7 December 2009 (UTC)
- There are other factors that influence the observed frequency besides the γ-factors. If everything is taken into account, there is no paradox. See doppler effect. Dauto (talk) 18:08, 7 December 2009 (UTC)
- Also, see Relativistic Doppler effect, which extends the mathematics to apply to a wider range of relative velocities of reference frames, accounting for additional effects of relativity. Nimur (talk) 19:08, 7 December 2009 (UTC)
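For reference, the longitudinal relativistic Doppler formula for a source receding at speed v = βc; the photon's energy E = hf scales by the same factor as the frequency:

```latex
f_{\mathrm{obs}} = f_{\mathrm{src}} \sqrt{\frac{1 - \beta}{1 + \beta}},
\qquad \beta = \frac{v}{c}
```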
Area of Hong Kong
I was reading the question above about the size of California and I was wondering - has anyone ever gone and added up the total floor space in a dense city like Hong Kong, including all the floors in all those skyscrapers as well as the area on the ground, and compared that to its geographical area (1,104 square km, according to the article)? How much larger would Hong Kong, for instance, be? When viewed in that light, would the List of cities proper by population density change dramatically (i.e. would cities with people living in big skyscrapers come out looking better, i.e. less dense, than cities with lots of one-story slums)? TastyCakes (talk) 19:23, 7 December 2009 (UTC)
- I vaguely recall that such statistics (total habitable area) are commonly collected by governments, tax administration authorities, electric and water utilities, fire departments, etc. I can't recall whether "total habitable area" is the correct name. I'm pretty sure that the statistic of habitable or developed area (including multi-story buildings) as a ratio to total land area is commonly used in urban planning. Nimur (talk) 19:54, 7 December 2009 (UTC)
- Floor Area Ratio. Sorry, the technical term had eluded me earlier. This article should point you toward more explanations of the usage of this statistic. Nimur (talk) 21:05, 7 December 2009 (UTC)
- Ah ok, thanks for that. Have you ever heard of it being calculated for an entire city? TastyCakes (talk) 23:41, 7 December 2009 (UTC)
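- As a rough illustration of how the statistic works, here is a Python sketch; the built floor space figure is hypothetical, while the 1,104 km² land area is the figure quoted above.

    def floor_area_ratio(total_floor_area, land_area):
        # Total built floor space (summed over every storey)
        # divided by the land area it stands on.
        return total_floor_area / land_area

    land_km2 = 1104.0   # Hong Kong's land area, per the article cited above
    built_km2 = 300.0   # hypothetical total floor space across all storeys
    print(floor_area_ratio(built_km2, land_km2))  # ~0.27 for these numbers

Applied city-wide, a high ratio would mean the "effective" inhabited area is much larger than the geographical footprint, which is exactly the adjustment the original question is asking about.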
Converting from degrees K to F
Could someone please answer this question? Thanks, Kingturtle (talk) 19:58, 7 December 2009 (UTC)
- The formula for converting K to F is F = 1.8K - 459.7 Googlemeister (talk) 20:02, 7 December 2009 (UTC)
- That's rounded; F = 1.8K - 459.67 is the exact formula. "Degrees Kelvin" is obsolete terminology, by the way; they've been just called "kelvins" (symbol K, not °K) since 1968. For example, 273.15 K (kelvins) = 32°F (degrees Fahrenheit). --Anonymous, 21:04 UTC, December 7, 2009.
- Google can actually answer these types of questions. What is 1.416785 × 10^32 kelvin in Fahrenheit? -Atmoz (talk) 21:54, 7 December 2009 (UTC)
- WolframAlpha does this too, and gives other (scientific) information about the conversion for comparison. TastyCakes (talk) 23:37, 7 December 2009 (UTC)
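- For anyone who prefers to script the conversion rather than use a search engine, here is a one-function Python version of the exact formula given above:

    def kelvin_to_fahrenheit(k):
        # Exact conversion: F = 1.8*K - 459.67
        return 1.8 * k - 459.67

    print(kelvin_to_fahrenheit(273.15))  # 32.0, the freezing point of water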
Compact fluorescent bulbs
What is the acceptable temperature range at which you can use these lights? I ask because I want to know if I can use it outside when it is -50 deg, or if it will not work at that temperature. Googlemeister (talk) 19:59, 7 December 2009 (UTC)
- From our Compact fluorescent lamp article: CFLs not designed for outdoor use will not start in cold weather. CFLs are available with cold-weather ballasts, which may be rated to as low as -23°C (-10°F). (...) Cold cathode CFLs will start and perform in a wide range of temperatures due to their different design. Comet Tuttle (talk) 20:16, 7 December 2009 (UTC)
- The packaging will indicate the acceptable range for the bulb. They are universally dimmer when cold, so this may be a persistent issue, considering -50 (C or F) is "pretty darn cold" in the realm of consumer products. —Preceding unsigned comment added by 66.195.232.121 (talk) 21:27, 7 December 2009 (UTC)
Inductive electricity through glass
With Christmas season here, I had an idea... Many wireless chargers use inductors to "transmit" electricity from a base unit to a device. Does anyone make that sort of thing that transmits electricity from inside the house to outside? I'm not considering a high-powered device. I'm considering the transmit/receive devices to be within an inch of each other on opposite sides of a window. -- kainaw™ 21:46, 7 December 2009 (UTC)
- I think normal wireless rechargers should be able to transmit through glass. 74.105.223.182 (talk) 23:55, 7 December 2009 (UTC)