Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia

Welcome to the science section of the Wikipedia reference desk.

Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question.
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


March 24

Dystonia and body movement

I can't find any information on dystonia affecting the lower back. Does it not exist? Is it extremely rare?

Also, are there any muscles in the lower back which, upon contraction or spasm, cause the body to hunch forward? From what I've been able to find out, bending forward is caused by contraction of the abdominal muscles only and the back muscles maintain an upright posture or arch the back.

Thanks in advance. bcatt (talk) 01:36, 24 March 2009 (UTC)[reply]

I have no answer to the first part of your post. Our page for dystonia is not very clear and is repetitive in parts (it covers "general dystonia" twice, for example). For the second part: it is impossible for muscles in the lower back to cause a forward movement of the body. Muscles are long when relaxed and "shorten" when they tense, so back muscles work to shorten the back while abdominal muscles contract the abdomen. The two muscle groups work together (as antagonistic muscles) to determine body posture and to perform controlled movements. TheMaster17 (talk) 10:41, 24 March 2009 (UTC)[reply]
Thanks for confirming the info about the back muscles, that's what I thought. I would guess then, that IF dystonia of the lower back does exist, the sufferer would NEVER hunch forward at the time of a spasm? (edit: what is causing me some confusion on this is that I recall, several years ago, pulling a muscle in my neck which caused my head to stay stuck tilted and twisted to one side. I was told that it was the muscle on the opposite side (the elongated one) that was injured, not the one on the side to which my head was sticking. So, while it definitely makes more sense to me that dystonia of the back could not cause a forward bending posture, my experience with my neck reminds me that the body doesn't always seem to work in the most immediately logical way. Or maybe the doc who told me that about my neck was wrong and it was the muscle on the shortened side of my neck that was strained?) bcatt (talk) 16:03, 24 March 2009 (UTC)[reply]

Fingerprints

Why is it widely believed that everyone has different fingerprints? It's not beyond the realms of possibility that two people who are distant and unknown to each other could have the same prints. JCI (talk) 02:12, 24 March 2009 (UTC)[reply]

Francis Galton showed back in the 19th century that, statistically, the odds of two people having identical prints are vanishingly small. This is part of what made fingerprints acceptable as evidence in court. I'm fairly sure you are more likely to have 17 matching snippets of DNA (or whatever the standard matching SNP number is) than a full set of identical prints. --98.217.14.211 (talk) 02:47, 24 March 2009 (UTC)[reply]
But, of course, the problem comes in when comparing fingerprints (especially partials) to a database containing millions. With that many to choose from, you're sure to find several fairly close matches. But, when used properly, say by comparing bloody fingerprints at the scene with the prime suspect's, then the chances of getting a false match are extremely low. StuRat (talk) 05:21, 24 March 2009 (UTC)[reply]
What Galton calculated (the calculation was not a proof in the mathematical sense, more an informed estimate like the Drake equation) was the probability of two different prints matching in all their minutiae (Finger Prints, 1892, Ch. VII). There are tens of thousands of minutiae in one print. Fingerprint analysts, however, do not compare all of them. In fact, the number of minutiae legally required for a match is typically in the low tens (Criminalistics, James Girard, p. 149). So Galton's huge probabilities, even if they are correct, tell us absolutely nothing about the reliability of modern practice. --Heron (talk) 23:35, 24 March 2009 (UTC)[reply]
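
For a sense of the kind of independence argument Galton made, here is a minimal sketch. The factors below (a 1-in-2 guess for each of 24 ridge regions, plus smaller factors for pattern type and ridge counts) are the ones commonly attributed to Finger Prints; treat them as illustrative assumptions rather than a transcription of the book:

```python
# A Galton-style independence estimate (illustrative figures commonly
# attributed to Finger Prints, 1892 -- treat them as assumptions).
p_regions = (1 / 2) ** 24   # 24 ridge regions, each "guessed" with p = 1/2
p_type    = 1 / 16          # matching the overall pattern type
p_ridges  = 1 / 256         # matching the ridge counts entering each region
p_match = p_regions * p_type * p_ridges   # = 1 / 2**36
print(f"1 in {1 / p_match:,.0f}")
# -> 1 in 68,719,476,736, i.e. Galton's famous "1 in 64 billion"
#    (64 x 2**30, with 2**30 standing in for a billion)
```
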
Mistakes can still be made, and some instances are listed in our fingerprint article.--Shantavira|feed me 09:52, 24 March 2009 (UTC)[reply]
It depends on what you mean by "identical" - if you take two objects that are intended to be identical (two pennies for example) - they aren't REALLY identical, each has little nicks and scratches that distinguish it from the other. You can't imagine making any pair of physical objects that were utterly identical in every way. In that regard, I would maintain there are NO pairs of objects of any kind (at the 'macro-scale' at least) that are identical. Sure, you can talk about two Hydrogen atoms being identical - but not anything of any size.
So this oft-repeated claim that no two fingerprints are identical is OBVIOUSLY true at some scale of examination. There are something like 10^20 atoms in your finger - it's clearly impossible that all of the atoms in someone else's finger are arranged precisely the same way. But it's not a particularly interesting or useful claim since no two of ANYTHING are utterly the same.
However, there aren't that many patterns of loops and whorls out there - there simply can't be. So at some other scale of description (perhaps a layman who knows nothing of fingerprint analysis using nothing but naked-eye examination) there are clearly pairs of fingerprints that are so similar that we'd need an expert to tell them apart. So I regard this whole thing with a deal of skepticism. Francis Galton's "calculations" cannot possibly be correct - there is simply no solid point at which you can say "sufficiently identical to count" without very carefully defining your means of measurement and the errors inherent in making those measurements. Remember that your fingertips get cut and damaged - they grow callused if you work with certain tools - they are continually renewed and regrown. They change over time as our fingers grow from little baby fingers into adulthood. They get wrinkled up when you get them wet...so my fingerprint today is not the same as it was yesterday - at some level of examination.
The measure that I think is worth examining is whether your fingerprint at (say) age 5 years is more similar to your fingerprint at (say) age 75 years than to any one else's fingerprint in the world. I think that's a much harder standard to meet and I'd be very surprised if that were true.
SteveBaker (talk) 11:18, 24 March 2009 (UTC)[reply]
It's on, Steve Baker!! You say "However, there aren't that many patterns of loops and whorls out there - there simply can't be". How much information entropy does a fingerprint OBSERVATION (for comparison with other fingerprints) contain? Answer THAT question. 79.122.44.240 User ID added by Sifaka
What exactly do you mean by "information entropy"? Steve is right in saying that there are relatively common patterns in fingerprints; see Fingerprint#Classifying fingerprints to see what I mean. According to this paper, a minutiae-based recognition approach produces more errors when comparing two fingerprints of the same finger taken decades apart than two taken recently. A quote from the aforementioned article:
"From a theoretical point of view it is common sense that aging may not impact the characteristics of fingerprints [l]. However for practical purposes, scaling effects of minutiae based matching algorithms may render older templates useless... In most cases, the ageing process does not change the structure of the fingerprint image. The ridges in the epidermis (dead dry skin) always show the same pattern since the information thereof is stored in the lower layer of the finger (dermis - live skin). If an injury of only the upper skin is sustained, after a certain time the same ridges are formed as before. Even the ageing process cannot change the paths of the ridges. The fingerprint may be a little larger, the ridges may be lower (if they were worn due to working), and the finger may show some wounds. However, the pattern always remains the same. Therefore, it should not be difficult, for the different verification algorithms, to identify fingerprints of the same finger, which only differ in the date of their acquisition, as being identical."Sifaka talk 18:28, 24 March 2009 (UTC)[reply]
The reason I addressed Steve Baker is that he is the only one here who knows what information entropy (Shannon entropy) is and can apply it to fingerprint observations!! But he's too squeamish to do so, apparently...
You can't apply that fundamental information-theoretical approach when the terms involved are so vague. Let's read what our article says:
In the Henry system of classification, there are three basic fingerprint patterns: Arch, Loop and Whorl.[8] There are also more complex classification systems that further break down patterns to plain arches or tented arches.[7] Loops may be radial or ulnar, depending on the side of the hand the tail points towards. Whorls also have sub-group classifications including plain whorls, accidental whorls, double loop whorls, peacock's eye, accidental, composite, and central pocket loop whorls
So if I take that at first sight: There are three kinds (Arch, Loop, Whorl) - but there are really two kinds of arch, two kinds of loop, seven kinds of whorl...so there are eleven kinds of fingerprint. The probability of two fingerprints being identical is one in eleven and certainly there are billions of people with "identical" prints. Well - no. The print can be bent and twisted or closer to the fingertip or off to one side. Suppose we use the distance from the skin fold at the joint to the center of the arch/loop/whorl - well, if we measure that distance accurate to the nearest millimeter - then perhaps there are between 1 and 20 millimeters between 'fold' and 'feature' - so now there are 20 different kinds of eleven different features - so we have 220 different fingerprints in the world...but suppose we measure accurately to half a millimeter - now there are 440 different prints - but if we only measure accurately to 2mm - then there are only 110 different prints. If you alter the precision and complexity of what you measure - you can make the answer come out to anything you want. If I want it to come out that all 7 billion people in the world have "different" prints - I can just measure enough subtle parameters to enough precision and claim that statistically, it must be so. But it's meaningless. If I measure to less precision - there are fewer "unique" prints...if I just look at the gross 'shape' then there are only 11 unique prints. You just can't attach a number to that. All we can say is that fingerprints are more unique than (say) pennies - but less unique than (maybe) snowflakes. But that's just a gut-feel thing - it's not science. In scientific terms - the print you have now - is different from the one you had when you started reading this sentence because a few skin cells have fallen off in the meantime. This is truly a bullshit thing...it's politics on behalf of crime-fighters and generally the stuff of urban legends. They say nobody ever got mis-identified because of their prints - but how would we know? If there were, then they were misidentified, for chrissakes! SteveBaker (talk) 00:49, 26 March 2009 (UTC)[reply]
tsk tsk Steve Baker, you're not thinking hard enough. You can give a range of entropies, depending on how good an expert you pick, but the fact is, there is going to be some number n of fingerprints for which a given expert will say they are those of a different person, and the base-2 logarithm of n will give you the number of bits of entropy in the coding of that observer. You follow? If you want you can say the number of bits of entropy "ranges from" 6 to whatever, because the worst "expert" actually makes observations according to criteria by which fingerprints fall evenly into one of 64 possible groups (where he would say that two fingerprints in that group are "of the same person", according to his observation [=his observational criteria]!). The best expert might make observations by criteria according to which fingerprints fall evenly into about a million possible groups. In this case the observation would have 20 bits of entropy. So you could say "fingerprint observations for comparison purposes have 6-20 bits of entropy depending on the expert making the observations" -- you would make this statement if in your estimation the worst experts WOULD, for a given print, answer "yes it is the same person" for the prints of every 64th discrete person whose prints you could ask them to compare to the one in front of them, and the BEST experts would answer "yes it is the same person" for every millionth discrete person whose fingerprints you could ask them to compare it with. For the first, the expert compares the prints by 6 bits of observational entropy, for the second, by 20 bits. However I am just making these bit numbers up!!! I am calling you out, Steve Baker, to propose an information-theoretical number of bits of entropy in actual, real, honest-to-goodness, expert fingerprint observational criteria for comparison purposes!! And the reason I'm calling you out is because you're the only one here who can possibly comprehend what I'm even talking about. Ball's in your court, Steve Baker. 79.122.75.197 (talk) 17:41, 26 March 2009 (UTC)[reply]
Your conduct is bordering on the disruptive. You are being argumentative. We don't call out individual editors here. Cut it out. - EronTalk 17:49, 26 March 2009 (UTC)[reply]
For people other than Steve Baker: although it may look like I am being argumentative with Steve Baker, in fact he (unlike anyone else here) knows exactly what I'm talking about and will answer with a better attempt soon enough. I'm only "calling him out" in that 1) he is one to respond to a good-natured challenge, and 2) he's the only one here who can possibly answer the question addressed to him. However I will give others a chance to see the record, which will be Steve Baker's correct result, given shortly. That's why I'm doing it here, so everyone can see the answer. 79.122.75.197 (talk) 18:03, 26 March 2009 (UTC)[reply]
This isn't your private chatroom with SteveBaker, or any other editor for that matter. And I'd suggest that most editors here don't much like being told that they can't possibly comprehend you. - EronTalk 18:31, 26 March 2009 (UTC)[reply]
While obviously, not being Steve Baker, my poor little mind cannot comprehend what you say, I will note that your capitalised 'would' seems rather odd, dismissing the whole probabilistic/expected value thing. After all, if people are giving the wrong answer at regular intervals, that's easy to correct for :P Or is the weird detached thing below written by you and supposed to fit into the above paragraph? 79.66.127.79 (talk) 20:23, 26 March 2009 (UTC)[reply]
attn steve baker: don't read the rest of this paragraph! -- No, the "weird detached thing" is for people like you, not to bug me about it. Steve Baker can understand information (Shannon) entropy, which is about uncertainty and hence automatically includes...uncertainty. But why am I wasting my breath, you are not Steve Baker -- so look, there's the weird detached thing written below, just for you! 79.122.75.197 (talk) 20:35, 26 March 2009 (UTC) [reply]
Was merely pointing out that you seem to think you are discussing this in a very deep, arcane sense, but appear to be addressing the problem in a very simple manner with the addition of a few well-known mathematical concepts. And while doing so, you appear to be skipping some basic and important ideas. But I'll leave this for Steve, if he feels like addressing it in your terms. 79.66.127.79 (talk) 20:46, 26 March 2009 (UTC)[reply]
Sure, I certainly know about information theory - and so do many others here (and we have this big, impressive encyclopedia for the smart people here to go look it up in). But that doesn't matter because what I do know is that it just isn't applicable here because we have no data to work with. There are bold assertions that no two humans have the same print so there are at least 7 billion different fingerprints - and yeah - we can take log2 of that and come up with a figure of about 32 or 33 bits of information content. But that's MEANINGLESS. If I observe the position of every molecule making up the print (let's say there are 10^20 of them maybe) - we could measure the position of each one within a 1cm cube accurate to (say) a nanometer in three dimensions - measure the rotation of each molecule in nanoradians in all three axes and come up with some number which is something ungodly like a trillion bits of information. Or I, personally, might be only capable of recognising the two kinds of arch, two kinds of loop and seven kinds of whorl and come up with between 3 and 4 bits of information. Using information theory doesn't help in the slightest here - the answer is still "between maybe 3 and 1,000,000,000,000 bits of information depending on the quality of the observer" - that's such an astronomically vague answer as to be absolutely freaking useless. Hence you can keep chucking out stupid challenges and upsetting everyone here until hell freezes over - but applying information theory really doesn't help very much without carefully defining the limits of your observation. Hence the clearest answer for the OP is "Yes, all fingerprints are unique at a sufficient degree of observational precision - but it is far from clear what the observational precision of forensic labs is - so in that sense, it's still perfectly possible for there to be "non-unique" fingerprints at that level of observation." Shannon's theorem has nothing whatever to do with that. SteveBaker (talk) 02:36, 27 March 2009 (UTC)[reply]
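
The base-2 log arithmetic the two posters are trading can be made explicit. A minimal sketch - the group sizes are the hypothetical ones from the exchange above, not real forensic figures:

```python
import math

# Bits of entropy for an observer who can only sort prints into n
# distinguishable groups: log2(n). The n values are the hypothetical
# ones proposed in the thread above.
for label, n in [("worst 'expert' (64 groups)", 64),
                 ("best 'expert' (a million groups)", 10**6),
                 ("one group per living human", 7 * 10**9)]:
    print(f"{label}: {math.log2(n):.1f} bits")
# -> 6.0 bits, 19.9 bits, and 32.7 bits (the "about 32 or 33" above)
```
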
Suppose we use the distance between the skin fold at the joint to the center of the arch/loop/whorl — huh? Why not suppose we use the branching topology of the grooves/ridges, which (I gather) is what's used in fact? —Tamfang (talk) 22:25, 28 March 2009 (UTC)[reply]
Fingerprint evidence has been used in countless court cases. But there never seems to have been two persons found to have identical prints - not even for one finger, let alone all 10 fingers. If matching prints had ever been found, the occurrence would have received a lot of publicity. Also, from then on every defense lawyer would have fingerprint evidence thrown out. Even identical twins do not have matching prints, although their DNA is identical. Of course, matching prints of different people would have to be stumbled on by chance - fingerprint classification and searching is far from perfect. But with all the criminal cases that have used prints, two persons with the same fingerprint would surely have been found by chance. The problem with prints is that there is no known way to digitize the pattern. If there were, a computer could easily find identical prints. (Threshold scoring with a computer is useful when comparing two known fingerprints, but that is not a random search.) Illustrating this, a friend of mine had his place burgled; the burglar cut himself on the glass of the window he broke to get in, and he left a bloody fingerprint. The police said they could not try to match the print unless my friend could name 10 persons whose prints (if on file) could be checked for a match. That shows there is no way to make a full search of all fingerprints. (The FBI has a database of over 51 million prints.) Even when a match for the same person is found by other means, it is occasionally wrong due to police sloppiness (wrong name on the fingerprint card, etc.). See "Criticism" and "Errors in identification or processing" in the Wikipedia article Fingerprint. – GlowWorm.
See our articles on automated fingerprint identification, Integrated Automated Fingerprint Identification System and Brandon Mayfield. Gandalf61 (talk) 13:21, 24 March 2009 (UTC)[reply]
The FBI recently made a false positive identification of an innocent person, Brandon Mayfield, as a terrorist bomber. Duplicate fingerprints or malfeasance? The proof of validity of fingerprint identification is generally lacking and is based mainly on hand-waving and unverified statistical assertions. Any forensic technique's validity should hold regardless of whether the government is out to get the individual. Edison (talk) 04:45, 25 March 2009 (UTC)[reply]
Yeah - exactly. It's observer bias: "Nobody ever gets misconvicted because of fingerprint evidence"...except perhaps for the people who were indeed misconvicted! If you could say "Everybody who was ever convicted of a crime on the basis of fingerprint evidence subsequently made a full confession" - then maybe. But I'm 100% sure that plenty of people who were convicted on this basis have screamed and kicked and protested their innocence all the way to jail. How do you KNOW you didn't misconvict one or two of them because their prints happened to be identical to the real criminal's? SteveBaker (talk) 00:49, 26 March 2009 (UTC)[reply]
The chances of two people having the same fingerprints are slim enough, but iris patterns are about 6 times more unique, and each eye has a different pattern (they are actually using this technique for identification in some countries).

Cancer or Sickle Cell????

What is, in lay terms, Renal Medulla Carcinoma? Danne dee (talk) 03:36, 24 March 2009 (UTC)[reply]

Renal Medullary Carcinoma (we don't have an article on this yet -- Done!) is a rare type of cancer that affects the kidney. It tends to be aggressive, difficult to treat, and is often metastatic at the time of diagnosis. It is not the same thing as sickle cell disease but the references in PubMed suggest that most individuals with this type of cancer have sickle cell trait or sometimes sickle cell disease -- meaning that the sickle cell trait may be a risk factor for this type of cancer (although it is still incredibly rare in people who carry the trait). See this for one of the first reports. There are also reviews available (here and here). I hope this helps. Clearly, anyone who has potential concerns about this disorder should see their physician as soon as possible. --- Medical geneticist (talk) 04:48, 24 March 2009 (UTC)[reply]
Nice work on that article, I've moved it to Renal medullary carcinoma though per WP:CAPS. —Cyclonenim (talk · contribs · email) 18:36, 24 March 2009 (UTC)[reply]

Thank you, that was the most comprehensible description I have gotten for it...and now wiki has an article about it. YAY! Thank you again. —Preceding unsigned comment added by Danne dee (talkcontribs) 19:30, 24 March 2009 (UTC)[reply]

Humanity as a negative example?

If humans manage, through war, climate change or any other mechanism, to cause our own extinction, then is it likely that some other sentient creature (not necessarily of a currently existing species, not necessarily originating on Earth, and potentially including the Creator if one exists who isn't already omniscient) will study our mistakes and learn from them? If so, does our capacity to serve as an example in this way increase with our peak population? NeonMerlin 10:17, 24 March 2009 (UTC)[reply]

Given the openness of the scenario - yes, clearly if some creature capable of study came about after the end of humanity then, provided they could uncover a history of our demise, they could use it as an aid to preventing their own. Of course, knowing what causes something and avoiding its occurring are two different things. I don't see how a larger population makes it more obvious - apart from perhaps an increased chance of there being 'evidence' of our existence...but then I'd say a small population of technologically advanced citizens is more likely to be 'findable' than a mass population without technological advancement. 194.221.133.226 (talk) 10:25, 24 March 2009 (UTC)[reply]

Well, given the difficulty of interplanetary travel - it's unlikely that aliens would come here. Doubly so if there were no intelligent lifeforms left here. So I think that's really highly unlikely. Another possibility might be that without humans, some other species that could survive whatever we screw up might evolve to eventually reach our levels of intelligence, curiosity and creativity. They might well be able to use archeological approaches to discover who we were and how we screwed up. Sadly, the most enduring things we'll have left behind are things like non-biodegradable plastic waste in landfills and nuclear waste bunkers - which won't speak well of our good sides. What's sad is that it's unlikely that our crowning achievements will survive - art, music, Wikipedia, architecture - all of that will have crumbled to nothing within half a million years...and it would probably take at least that long for a race of intelligent cockroach descendants to take over. I would hope that whatever fate befalls us - we'd have time to consider our legacy and build something so enduring that future species would be able to understand us. SteveBaker (talk) 11:00, 24 March 2009 (UTC)[reply]

NB: My reason for asking these questions is that if the answer is yes, then they add a positive component (which I've heretofore neglected) to, respectively, the total utilitarian value of humanity and the marginal utilitarian value of each new human born. Possibly even enough to shift the sign of these quantities from negative to positive. NeonMerlin 11:06, 24 March 2009 (UTC)[reply]

That entirely depends on what you are measuring. This "utilitarian value" thing is nebulous at best and downright nonsense at worst! In order to define a "value" you have to know what you are measuring and why it's important. The universe doesn't give a damn what happens to us or what we do. This is merely a matter of philosophy - and this is the Science desk - not the standing-around-making-specious-arguments-while-being-a-waste-of-quarks desk. SteveBaker (talk) 11:32, 24 March 2009 (UTC)[reply]

In numerous science fiction works, humans have found ancient alien civilizations somewhere in the universe which had flaws leading to their own destruction. Our learning from their mistakes was a plot element, but was never a very convincing one. See also the poem Ozymandias. It would seem to be within our present technology to leave behind a better archive than the crumbling statue of Ozymandias, or some nonbiodegradable plastic bottles in a landfill and some nuclear waste casks. Take some durable substrate and archive a copy of Wikipedia, various great books, copies of world-class art and music, and park it on a space probe at the L3 point or on the moon, where it will be out of harm's way for a few million years. Something like that was done on the Voyager space probe, by means of the Voyager Golden Record. Edison (talk) 13:49, 24 March 2009 (UTC)[reply]

We might ourselves be able to go pick up Voyager in a few years when faster space travel is possible. However, a new species which would evolve millions (if from chimps) or billions of years later would have a lot farther to go, since it's headed out into space, and wouldn't know where to look. If there are plastics that last forever, why can't we use them to record Wikipedia, etc.? If no inks last that long, we could burn letters into plastic pages. Perhaps we could also make something like a DVD out of it, although that would be inherently more difficult for a future species to read, requiring that they create a DVD player. StuRat (talk) 14:22, 24 March 2009 (UTC)[reply]
If we kill ourselves through climate change and leave proof of it, then any new species that comes along will have proof that anthropogenic climate change can kill you, so they won't cause it. If we kill ourselves through nuclear war, it will a) give proof that nuclear war will kill you - which we already know, and which evidently did not prevent us from causing it - and b) tell them how to make nuclear bombs. By the way, this reminds me of a demotivator. — DanielLC 15:07, 24 March 2009 (UTC)[reply]
(EC) Your question is very broad and has 2 very different main components. Aliens visiting our planet by definition have made it off their own dust ball. If our demise was due to the fact that we failed to establish a sustainable population elsewhere, they'd not be likely to make that mistake. (Said aliens presumably visiting our now empty world for that very purpose.) Local species would need time to evolve. Animals that are closest to us in the "use of tools" and "problem solving" department (e.g. chimpanzees, parrots) have only small populations left and it is doubtful that they will be able to multiply fast enough to become a dominant species. Animals with large populations (e.g. cockroaches, rats) will need time to evolve. By the time they get to a point of evaluating past events, there is likely to be very little left. They might dig up our "religious temple to energy" and start a traveling exhibit with our nuclear waste. Warning signs tend to be ignored, because risk taking is part of the process of developing something new. (For example: warnings on the walls of the pyramids were not paid much attention.) Plastic will get crumbly after about 50 years, give or take. [1] It may make it for a couple of centuries under ideal conditions. Archiving Wikipedia in a durable manner would either require a successor population to produce backups on emerging technology or some advancement in data storage. Just consider what you'd do if you found a modern computer and a stack of punch cards, some magnetic tapes or a stack of floppy disks - and that would have been from within just a couple of decades. The magnetic storage will probably be unreadable anyway. CDs will merely last for about 5 years. Inscribing data in a crystal is still only a neat trick in the lab. History tells us that a) even things designed for the ages aren't necessarily intelligible to others and b) very few blunders get avoided the second time around (a.k.a. history repeating itself). Taking what is left behind for a following civilization as the only measure of an individual's worth is a pretty measly standard. His/her contribution to the species while it's still around, and to its future existence and well-being, might be a better gauge. ("If you are going downhill, at least enjoy the ride." :-) 76.97.245.5 (talk) 16:20, 24 March 2009 (UTC)[reply]
Increasing population takes almost no time at all, when compared to the millions or billions of years it takes for new species to evolve. If a population can double every generation, which isn't that difficult in ideal conditions, that would mean it would increase 1000-fold in 10 generations, a million-fold in 20, and a billion-fold in 30. Given a 20 year time frame for each generation, that would only take 600 years. StuRat (talk) 17:33, 24 March 2009 (UTC)[reply]
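
Making the doubling arithmetic explicit - a toy calculation under StuRat's own idealized assumptions of one doubling per 20-year generation:

```python
# Population growth under the idealized assumption of one doubling
# per generation, with a 20-year generation time.
years_per_generation = 20
for generations in (10, 20, 30):
    growth = 2 ** generations
    print(f"{generations} generations "
          f"({generations * years_per_generation} years): {growth:,}x growth")
# -> 1,024x after 200 years, ~1 million-fold after 400 years,
#    ~1 billion-fold after 600 years
```
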
600 years would indeed be under ideal conditions. It took humans considerably longer to make it from caveman to computers. Tool use and problem solving abilities help to reduce some population pressures, but by no means all. Natural selection is a lot more complex and messy than the simplified "survival of the fittest." Sometimes the fittest get their heads bashed in by the second fittest or they just can't get a prom date :-). One lowly pathogen can wipe out populations in entire areas, so simple mathematical progression doesn't quite apply. 76.97.245.5 (talk) 22:31, 24 March 2009 (UTC)[reply]
Modern humans (Homo sapiens sapiens) are only 200,000 years old, so our population increased from almost nothing to several billion in that time. While much longer than 600 years, that's still nothing compared to how long it would take another species to evolve on Earth to the same level of intelligence as we currently have. Even starting from chimps, which are 98% of the way there, it would still take millions of years. StuRat (talk) 23:57, 24 March 2009 (UTC)[reply]
So given the timescale for another species to evolve to replace us - and assuming it's not chimps or dolphins because they'll probably die right along with us when whatever befalls us comes - we'd probably have to find a way to preserve our Wikipedia backup for a billion years. Sadly, it's not just a matter of finding a good material to write it on. We couldn't count on avoiding vulcanism, earthquakes, continental subduction, rising sea levels, rock deposition, being ground to dust by a kilometer of ice in an ice age...any one of those could bury our best efforts beyond the ability of any advanced race to find them. Putting it somewhere more quiet like the moon or one of the Lagrange points would make sense - but remember: that time-capsule that the super-intelligent race of dinosaurs left for us is still sitting in a crater somewhere and the backup copies at the two Lagrange points are still there. So even at our level of development - there is no certainty that we'd find these information sources. If you are a pessimist - you might successfully argue that the time it takes a new species to find a carefully preserved archive could easily exceed the typical lifetime of a civilisation. SteveBaker (talk) 01:27, 25 March 2009 (UTC)[reply]
There's another good reason to leave that stuff at the Lagrange points, because when we get there we may find them cluttered with time capsules from all the previous civilizations. :-) StuRat (talk) 05:19, 25 March 2009 (UTC)[reply]
In my attic are boxes of punchcards, reels of magnetic tape, and floppy discs. There is also punched paper tape with a Fortran II program for a PDP-8. I have no doubt that all would be readable. Edison (talk) 04:42, 25 March 2009 (UTC)[reply]

How long do CDs last?

Removed from previous question and given separate title. Matt Deres (talk) 20:41, 24 March 2009 (UTC) [reply]

How long do CDs last? I have some that are nearly 15 years old that work well, so I know it's more than 5 years, but would they last centuries in a place like a bank vault? 65.121.141.34 (talk) 20:34, 24 March 2009 (UTC)[reply]

Plastics degrade rapidly when exposed to UV light, but I imagine they'd last far longer if buried. You'd probably also want to keep ground water away from them and bury them below the frost/freeze line. Hermetically sealed, under ideal conditions, I'd guess centuries, at least. StuRat (talk) 20:41, 24 March 2009 (UTC)[reply]
It depends very much on the specifics. In general, commercially stamped CDs last longer than writable media. But some early batches used an unsuitable glue, with CDs deteriorating after a few years only. --Stephan Schulz (talk) 20:47, 24 March 2009 (UTC)[reply]
I bought three CDs from the Philips Research Labs staff shop - about 2 months before the first CD players went on sale to the general public. All three still play just fine (in case you care, they are: Dire Straits: Brothers in Arms, some Bach fugues, and a recording of Glenn Miller taken from the original pre-magnetic-tape wire recordings and remastered especially for CD). Those must be close to being the oldest mass-produced CDs in existence. There have been a few snafus with disk manufacturers over the years - so some disks have behaved badly - but on the whole, they do pretty well. There is certainly no obvious fixed lifespan that you could point to. They don't all die after X number of years. (Mind you - I bet they are all three on about their tenth replacement jewel case!) SteveBaker (talk) 01:08, 25 March 2009 (UTC)[reply]
There was a court case somewhere (EU?). The industry representatives were unwilling to guarantee their CDs for more than 5 years. So that's how long they think they'll last. That is for ones subjected to ordinary use, though. Under some carefully managed storage conditions they might last as long as the plastic stays intact. 76.97.245.5 (talk) 02:42, 25 March 2009 (UTC)[reply]
I have CDs from 1982, reel-to-reel tapes from the 1960s, LPs from 1950, 78s from 1909, and cylinders from the 1890s which still play fine. It is a matter of preservation. Edison (talk) 04:37, 25 March 2009 (UTC)[reply]
There was one period where the formulation of the aluminium that's evaporated onto the disk to make the mirror surface wasn't quite right - and CDs from many manufacturers over a period of several years had a tendency to develop tiny pin-holes in the aluminium layer - and (weirdly) an effect similar to surface tension in liquids was making the holes grow slowly over time - eventually ruining the disks. But once the problem was known, it was fairly quickly rectified - a lot of the bad press that CDs have had over the years can be attributed to that incident. SteveBaker (talk) 05:26, 25 March 2009 (UTC)[reply]
Fungus can also be a problem for CDs in some tropical countries; I've experienced it personally and there are widespread reports on the internet. It seems to eat the aluminium layer. See [2] for an example image. Nil Einne (talk) 11:17, 25 March 2009 (UTC)[reply]
As noted above, commercially stamped media are generally far more durable than user-writable CDs. The same chemistry that makes most writable CDs able to be encoded by laser heating also makes them degrade over time. I remember seeing a report that most writable CDs developed significant errors in under 5 years. This was even true of CDs that had never been burned; they also became unusable after only a few years of shelf life. That said, there are also some companies now that will sell you "archival quality" writable CDs. I have no idea if they really do the job, but I have heard of one company that even offers a 100-year guarantee, which at least says they are serious. Of course, their product is also like $5/disk as opposed to $0.25 for the cheap stuff that dies after a few years. Dragons flight (talk) 06:13, 25 March 2009 (UTC)[reply]
Certainly writable CDs have a short life - and indeed, their shelf life is problematic even before they've been written to. I worked on the project that produced the first CD-ROM ever - we had to press the disks in a CD factory because the projected life of the early experimental writable CD technology was so short that it would be tough to get them out of the factory and into our hands before they stopped working! The 100-year guarantee is only for the value of the blank media - it's really no comfort at all when 20 years from now you find you've lost all of your data and all they give you is a couple of bucks to buy a new blank disk. SteveBaker (talk) 22:18, 25 March 2009 (UTC)[reply]
Sort of like a money-back guarantee on an artificial heart ? :-) StuRat (talk) 04:54, 26 March 2009 (UTC)[reply]

Fingernail

How far up one's finger does a fingernail start growing? --98.217.14.211 (talk) 14:59, 24 March 2009 (UTC)[reply]

just a millimeter or two, it grows at the tip (obviously, just look at it grow) 79.122.44.240 (talk) 15:56, 24 March 2009 (UTC)[reply]
The only place that fingernail growth occurs is at the base. The entire hard part of the 'nail' is non-living; the nail extends as new material is deposited by the nail matrix: living soft tissue that sits under the nail. The visible portion of the nail matrix is the lunula, that whitish semicircular bit at the base of the nail. (The lunula may not be visible on all your fingers and toes, and it is often most conspicuous at the base of the thumbnail.) TenOfAllTrades(talk) 15:58, 24 March 2009 (UTC)[reply]

What's the hottest a human can stand for more than a few seconds?

The question about actually experiencing boiling water a few weeks back brought to mind a story - I don't recall where from, but my money's on it being James Bond - where the bad guy locks the protagonist in a sauna or some such and cranks it up to about 165 degrees or so.

My question is, how much can the human body stand for more than a few seconds? I imagine the writer did some research on how high to have that turned up (why would one have the option to set it dangerously high, anyway?), but ISTR someone saying a really hot sauna can get up to 212 F, which is boiling. Is that really doable?

It makes me wonder about people near a blast furnace, too, where I imagine as you get real close it can be what, several hundred degrees? Or just what firefighters face, ignoring the problem of smoke. 172.130.27.46 (talk) 15:44, 24 March 2009 (UTC)[reply]

This depends heavily on what the hot thing in question is. Hot metal, hot water and hot air of the same temperature will have different effects, due to differing thermal conductivity and heat capacity. A human can easily stand hot air at well above boiling temperature (try sticking your hand in an oven sometime), but boiling water is another matter. Algebraist 15:50, 24 March 2009 (UTC)[reply]
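
A rough way to see why the medium matters so much: compare steady heat conduction through a thin layer of each substance at the same temperature difference. The conductivities below are approximate handbook values, and the sketch deliberately ignores convection and the body's own cooling:

```python
# Steady-state conduction q = k * dT / d through a 1 mm layer at an
# 80 K temperature difference. Conductivities are approximate handbook
# values in W/(m*K); convection and sweating are deliberately ignored.
conductivities = {"air": 0.026, "water": 0.6, "iron": 80.0}
dT = 80.0   # temperature difference, kelvin
d = 1e-3    # layer thickness, metres
for medium, k in conductivities.items():
    q = k * dT / d
    print(f"{medium}: {q / 1000:.0f} kW/m^2")
# -> air ~2 kW/m^2, water ~48 kW/m^2, iron ~6400 kW/m^2:
#    hot air delivers heat thousands of times more slowly than hot metal.
```
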
Real Finnish-style saunas routinely go up to 100℃, and sometimes reach 110℃, significantly above the boiling point of water. People can stand this at low humidity for several to many minutes - the body manages to regulate its temperature fairly well for short periods of time. On the other hand, long-term exposure to 45℃ is very unhealthy, as is, of course, exposure to hot water quite a bit below boiling point. --Stephan Schulz (talk) 15:54, 24 March 2009 (UTC)[reply]
Steam burns can be far more harmful than flame burns, because the latent heat of condensation of the gaseous water vapor as it liquifies on the skin surface releases additional heat energy into the skin, worsening the burn. Nimur (talk) 16:33, 24 March 2009 (UTC)[reply]
The key factor is humidity. In low humidity (such as the saunas Stephan mentions), the human body can cool itself very effectively through sweat even at temperatures significantly above the boiling point of water. In high humidity, sweat doesn't work, and you get into trouble at temperatures well below the boiling point of water - 50℃ will have you suffering severe heat stroke pretty quickly in tropical humidities if you aren't careful (and probably even if you are careful - without access to some form of refrigeration, at least a cool box full of ice, I can't see what you could do to survive more than a couple of hours, if that). --Tango (talk) 18:49, 24 March 2009 (UTC)[reply]
I remember reading in the Guinness Book of Records in the 1980s that humans had experienced temperatures of over 500 deg C in US Army trials and survived! Unfortunately the GBWR website is not at all search-friendly so I can't instantly confirm this. --TammyMoet (talk) 19:04, 24 March 2009 (UTC)[reply]
I think that's degrees F, not degrees C. --Trovatore (talk) 19:27, 24 March 2009 (UTC)[reply]

Concur; 500°C would probably melt aluminum. 65.121.141.34 (talk) 20:30, 24 March 2009 (UTC)[reply]

The melting point of Aluminium is 660.32°C, so not far off. I agree that 500°F is far more likely (that's 260°C - still rather toasty!). --Tango (talk) 20:45, 24 March 2009 (UTC)[reply]
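
The conversion used here, for reference:

```python
def f_to_c(f):
    """Fahrenheit to Celsius: subtract 32, then scale by 5/9."""
    return (f - 32) * 5 / 9

print(f_to_c(500))   # -> 260.0 degrees C, matching the figure above
```
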
Thanks; yeah, I thought of looking at Guinness, but their site isn't too easy to move around in. Wow, 500 degrees F is still amazing. The stuff on how much of a difference humidity makes is really helpful, too. I'd have thought something that high would do something really bad to the blood or skin or something even after a couple of seconds; just like I've read putting one's hand in liquid nitrogen (or is that oxygen?) freezes it instantly. (Then again, that's a liquid, but at that low a temperature, I think you'd get frostbite anyway.) 209.244.187.155 (talk) 21:42, 24 March 2009 (UTC)[reply]
So my recollection is a little different. The way I remember it, they were NASA experiments rather than Army. The numbers I recall were that you could tolerate 400 F unclothed or 500 F if bundled up (at those temperatures a heavy jacket keeps you cool, relatively speaking). I don't know what the time frame was; I can't believe it was a really long time but presumably it was long enough to accomplish some task, or maybe make it through re-entry and have the ship pick up a living person rather than some barbecue. --Trovatore (talk) 21:59, 24 March 2009 (UTC)[reply]
There is a big difference between liquid nitrogen and air - liquid nitrogen conducts heat far better than air. If your skin ever got to 100 degrees, you would be in serious trouble within a fraction of a second, but heat doesn't go from air to skin very quickly so the body's cooling methods can prevent the skin ever getting hot enough to burn even when in direct contact with 200 degree air. --Tango (talk) 22:17, 24 March 2009 (UTC)[reply]
Putting your hand in liquid nitrogen doesn't freeze it instantly, anyway. It's perfectly possible to dip your hand in for a brief period without taking any harm (be very careful if doing this at home!). Algebraist 00:38, 25 March 2009 (UTC)[reply]
Having had my face briefly immersed in a large propane flame (probably around 1000°F), walked through a turbocharged diesel exhaust plume from a military vehicle (about 400°F), and inadvertently swallowed drops of splashing liquid nitrogen, I can attest personally that you neither burn nor freeze instantly. My eyebrows and eyelashes got singed, but otherwise I was unharmed. With the nitrogen, I was also unharmed. Frostbite can happen after a few seconds of liquid nitrogen exposure but for the most part it evaporates so fast it never directly contacts the skin (swallowing a drop results in belching out a white cloud of vapor a few seconds later). I highly recommend not trying any of this yourself. In my case, they were accidents. ~Amatulić (talk) 01:07, 25 March 2009 (UTC)[reply]
I've heard stories from friends of "liquid nitrogen fights" (similar to water fights). They splash it on each other, apparently with no ill-effects. While the thermal conductivity is pretty high, the thermal capacity of a small drop can't be much - there probably just isn't enough "cold" there to harm you. --Tango (talk) 20:25, 25 March 2009 (UTC)[reply]

By the way, the OP's memory was probably of Thunderball, in which (the film as well as the novel) it's Bond himself who traps Count Lippe in a steam bath. Deor (talk) 22:42, 24 March 2009 (UTC)[reply]

Babbage and the oven

"... Chantrey was engaged at that period in casting a large bronze statue. An oven of considerable size had been built for the purpose of drying the moulds. I made several inquiries about it, and Chantrey kindly offered to let me pay it a visit, and thus ascertain by my own feelings the effects of high temperature upon the human body. ...

"The iron folding-doors of the small room or oven were opened. Captain Kater and myself entered, and they were then closed upon us. The further corner of the room, which was paved with squared stones, was visibly of a dull-red heat. The thermometer marked, if I recollect rightly, 265° [130°C]. The pulse was quickened, and I ought to have counted but did not count the number of inspirations per minute. Perspiration commenced immediately and was very copious. We remained, I believe, about five or six minutes without very great discomfort, and I experienced no subsequent inconvenience from the result of the experiment."

Charles Babbage: Passages from the Life of a Philosopher, chapter XVI. —Tamfang (talk) 01:25, 29 March 2009 (UTC)[reply]

Expansion of space cont.

Kind of continued from this thread. Do we know whether space is expanding all over the universe (i.e. all space around us is stretching) or whether it's just the outermost sections of our universe expanding? I mean is the space around us now, Earth, the Moon, expanding with the rest of space around the universe? I'm not even sure if this is testable or not, because if all space around us was expanding at the same rate as everything else, then it wouldn't be detectable. —Cyclonenim (talk · contribs · email) 18:41, 24 March 2009 (UTC)[reply]

The expansion of space on a local scale would be detectable (at least theoretically). We measure the distance to the moon, for example, by bouncing laser beams off mirrors left by Apollo astronauts and timing how long it takes for them to get back. The speed of light isn't changed by metric expansion, so we would notice the time taken increasing (actually, the moon is moving away from the Earth due to tidal forces, but that's irrelevant!). Gravitationally bound systems (anything on the scale of galaxy clusters or smaller) aren't expanding. That doesn't mean it is just "outermost" sections of the universe expanding (whatever that means - there is no centre to be far away from) - expansion happens on large scales, but not small scales. Any two objects a large enough distance apart will be moving away from each other, regardless of where they are in the universe. --Tango (talk) 18:57, 24 March 2009 (UTC)[reply]
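
To put a number on "detectable (at least theoretically)": a hypothetical sketch of how fast the Earth-Moon distance would grow if it did stretch at the Hubble rate (it doesn't, for the reason given above), assuming H0 ≈ 70 km/s/Mpc:

```python
# If the Earth-Moon distance grew at the Hubble rate (it doesn't --
# the system is gravitationally bound), how fast would the Moon recede?
# Assumes H0 ~ 70 km/s/Mpc.
H0 = 70e3 / 3.086e22      # Hubble constant converted to 1/s
d_moon = 3.844e8          # mean Earth-Moon distance, metres
seconds_per_year = 3.156e7
recession = H0 * d_moon * seconds_per_year   # metres per year
print(f"{recession * 100:.1f} cm/year")
# -> ~2.8 cm/year: the same order as the ~3.8 cm/year tidal recession,
#    and well within the reach of lunar laser ranging.
```
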
It sure seems like there's an influx of "expansion of space" questions lately. Anyway, this is answered in the lede of the metric expansion of space article linked in the first answer of the above question: no, small-scale gravitationally-bound systems (including the Earth-Moon system and the solar system, but also including the Milky Way as a whole and yet larger systems) do not expand within themselves. As for detectability, the "measuring distance in a metric space" subsection addresses this.
However, along a related line, it's possible that all space (and other related constants) expand at a particular rate, shared equally, and that it's therefore completely undetectable. Of course, it's also completely irrelevant -- if everything continues to function in such fashion as if there were no inexplicable immeasurable force, then we might as well assume that there is not. — Lomn 18:59, 24 March 2009 (UTC)[reply]
How would you define such an expansion? Our definition of "distance" depends on various constants, and the units we express those constants in depends on our definition of distance. I can't see how you could define things in a way that makes the kind of expansion you describe make any sense. --Tango (talk) 20:35, 24 March 2009 (UTC)[reply]
I don't know how it would work, either -- I'm just noting that while such a philosophical concept could be bantered about, it's not a meaningful discussion. I've seen a few thought experiments before along the lines of "what if everything is expanding, all means of reference included?" and thought it worth addressing why they're not really interesting. — Lomn 20:54, 24 March 2009 (UTC)[reply]
People have already kind of said this, but anyway: your questions would make sense if space were more like a substance (a liquid or a gas or a loaf of bread), but it isn't. Space does have some properties of its own (like curvature), but it doesn't satisfy a continuity equation—it isn't "conserved". If you have a liquid-filled region that's getting larger, it makes sense to ask whether it's because the existing liquid is expanding or because more liquid is being added at the edges. You can distinguish those two cases because you can trace the motion of the liquid over time—you can draw worldlines in spacetime showing what happens to individual bits of liquid. If the liquid region is getting larger then either the lines must be diverging from each other or someone must be adding new lines at the edge. When you're talking about expanding space you still have the spacetime but not the lines in it, so that distinction disappears. Galactic superclusters have worldlines and they're generally diverging, so the gas of superclusters is expanding (and it really is very much like a gas, strange as that may sound). Smaller objects within the superclusters have their own worldlines and those are not generally diverging (within a single supercluster), so individual superclusters are not expanding. -- BenRG (talk) 19:11, 25 March 2009 (UTC)[reply]

What is the rate of expansion of the universe at various points in time, according to recent experiments with supernovas, etc. Thanks, *Max* (talk) 18:55, 24 March 2009 (UTC).[reply]

Hmm... as I understand it, a simple answer to your question doesn't exist: there is no universal "rate of expansion". We suspect that much of the universe is, due to expansion, now beyond our light horizon. As such, it is unknowable to us (save that it has receded at a rate greater than c). Within the observable universe, the rate of recession varies object to object; however, scientists currently believe the general trend (the deceleration parameter) is that those rates are increasing. — Lomn 19:21, 24 March 2009 (UTC)[reply]
How can the universe recede at greater than the speed of light? I thought nothing could travel faster than that. 78.146.178.204 (talk) 23:33, 24 March 2009 (UTC)[reply]
Locally, things can't travel faster than the speed of light, but on cosmological scales it doesn't quite work like that - you can think of it as the galaxies staying still and space being created in between them (BenRG will tell us this isn't the case, but I'm still not convinced!). --Tango (talk) 00:30, 25 March 2009 (UTC)[reply]
Well, it isn't the case but it isn't not the case either. It's just meaningless to ask whether space is being created as far as the theory is concerned. What worries me is saying that space is being created in some cases (the supercluster motion) but not other cases (other relative motion), since that distinction doesn't exist in the theory. -- BenRG (talk) 19:22, 25 March 2009 (UTC)[reply]
I realize, but space is, at least locally, homogeneous, so shouldn't the observed values be the same everywhere in our observable universe? I know that most scientists believe that the rates are increasing; I am looking for the experimentally measured redshifts that back this up. *Max* (talk) 20:12, 24 March 2009 (UTC)[reply]
I don't know much about this, but this page cites some of the major experimental evidence for ΛCDM. -- BenRG (talk) 19:22, 25 March 2009 (UTC)[reply]
There is a universal rate of expansion (on large enough scales); it's just proportional to separation. See Hubble's law. It is theorised that Hubble's constant varies over time (increasing due to dark energy, decreasing due to gravity - observations suggest more of the former than the latter, so a net increase). I can't find any estimates of its value at times other than now, though, sorry. --Tango (talk) 20:30, 24 March 2009 (UTC)[reply]
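
For concreteness, Hubble's law is just v = H0 × d. A quick sketch with an assumed H0 of 70 km/s/Mpc, which also shows the distance at which the recession speed formally reaches c (relevant to the faster-than-light question above):

```python
# Hubble's law: recession speed v = H0 * d, proportional to separation.
# Assumes H0 ~ 70 km/s/Mpc for illustration.
c = 299792.458   # speed of light, km/s
H0 = 70.0        # km/s per megaparsec
print(f"at 100 Mpc: v = {H0 * 100:.0f} km/s")
print(f"Hubble distance c/H0 = {c / H0:.0f} Mpc "
      f"(~{c / H0 * 3.262 / 1000:.0f} billion light years)")
# -> 7,000 km/s at 100 Mpc; recession formally exceeds c beyond
#    ~4,283 Mpc (~14 billion light years).
```
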
I found one paper ( http://arxiv.org/abs/astro-ph/0701519 ) which contains some measured values for the Hubble parameter at different red-shifts z. Dauto (talk) 22:33, 24 March 2009 (UTC)[reply]
According to the ΛCDM model, the value of the Hubble parameter at different times is given roughly by H(t) = (2/3) k coth(kt), where t is the "time since the big bang" (see Age of the universe#Explanation for what that means) and k ≈ 1 / (11 billion years). The image on the right is a graph of the coth function which I found on Commons. The x < 0 part isn't physically meaningful. The present day (14 billion years after the big bang) is in the vicinity of x = 1.2. The horizontal asymptote (y = 1) corresponds to a Hubble parameter of around 60 km/sec/megaparsec. Three caveats: (1) I didn't find this in a textbook, I derived it from the Friedmann equations and I may have made a mistake; (2) it's only valid for t > a few thousand years, before that other physics comes into play; (3) it's only valid in the future if the ΛCDM model is correct, and there's not enough data yet to be sure of that. -- BenRG (talk) 13:43, 25 March 2009 (UTC)[reply]
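
Plugging numbers into that formula reproduces the figures in the post. A check sketch; the unit-conversion constants are the usual ones, and the formula is the one given above:

```python
import math

# Evaluate H(t) = (2/3) * k * coth(k*t) with k = 1/(11 billion years),
# converting from 1/Gyr to the conventional km/s/Mpc.
k = 1 / 11.0                          # per billion years
to_km_s_Mpc = 3.086e19 / 3.156e16     # (km per Mpc) / (s per Gyr)
for t in (14.0, 1e6):                 # today, and the far future
    H = (2 / 3) * k / math.tanh(k * t)    # coth(x) = 1/tanh(x)
    print(f"t = {t:g} Gyr: H ~ {H * to_km_s_Mpc:.0f} km/s/Mpc")
# -> ~69 km/s/Mpc today (t ~ 14 Gyr, i.e. x = kt ~ 1.27), falling to
#    ~59 km/s/Mpc asymptotically (the "around 60" quoted above)
```
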
Would you mind sharing the details of your solution with us? For some reason I'm not getting the 2/3 factor. Dauto (talk) 13:38, 26 March 2009 (UTC)[reply]
I used the first Friedmann equation in the form given at the bottom of Friedmann equations#The density parameter. I took Ω_m + Ω_Λ = 1 (consistent with the evidence) and neglected the radiation term (close enough for t > a few thousand years) and plugged the rest into a CAS, which found the solution a(t) ∝ sinh^(2/3)((3/2) √Ω_Λ H₀ t). (I've been basing Ref Desk answers on that formula for ages, so I hope it's right.) Then H = a'/a. The factor of 2/3 comes from the exponent. It's canceled by the 3/2 from inside the sinh, but I absorbed that factor into my k. -- BenRG (talk) 19:23, 26 March 2009 (UTC)[reply]
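A quick numerical sketch of that formula (a Python back-of-the-envelope check, not anything authoritative - the unit conversions are rounded, and k ≈ 1/(11 billion years) is taken from the post above):

 import math
 
 GYR_S = 3.156e16       # seconds per gigayear
 MPC_KM = 3.086e19      # kilometres per megaparsec
 k = 1.0 / 11.0         # BenRG's k, in units of 1/Gyr
 
 def hubble(t_gyr):
     """H(t) = (2/3) k coth(k t), converted to km/s/Mpc."""
     h_per_gyr = (2.0 / 3.0) * k / math.tanh(k * t_gyr)   # coth x = 1/tanh x
     return h_per_gyr * MPC_KM / GYR_S
 
 for t in (1.0, 7.0, 14.0, 100.0):
     print(f"t = {t:5.1f} Gyr: H = {hubble(t):7.1f} km/s/Mpc")

At t = 14 Gyr this gives roughly 70 km/s/Mpc, and at large t it settles towards the (2/3)k ≈ 1/(16.5 Gyr) ≈ 60 km/s/Mpc asymptote mentioned above.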
Thanks. Dauto (talk) 20:52, 26 March 2009 (UTC)[reply]
Thanks for your help everyone. *Max* (talk) 01:49, 26 March 2009 (UTC)[reply]

Syngerism

What is 'syngerism'? I encountered it in antibiotic syngerism against enterococci. Nadando (talk) 22:41, 24 March 2009 (UTC)[reply]

Probably a typo for synergism. 76.97.245.5 (talk) 22:47, 24 March 2009 (UTC)[reply]

How bright is Earth in the radio astronomy sky?

Looking at earth from near another star, how bright would it seem? Brighter than the sun? One of the brightest things in the sky? I would like to ignore the time delay caused by the speed of light and assume that the radio brightness in 2009 is what is observed. 78.146.178.204 (talk) 23:01, 24 March 2009 (UTC)[reply]

The only terrestrial signals likely to be detectable from another star are those intentionally sent into space - either specific attempts at signalling to aliens, or from studies of planets and asteroids using radar. Those could only be detected if they were pointed directly at the star in question. So, from virtually all stars, the Earth would not be visible in the radio spectrum. If one of these radar signals happened to go to the star, I'm not sure how bright it would be - it would, obviously, depend on how far away it was, but I'm not even sure how it would compare to the Sun... --Tango (talk) 23:32, 24 March 2009 (UTC)[reply]
Using our current technology, detecting an Earth-sized planet with a level of technology similar to ours (i.e. looking at "earth from another star") would be impossible on any level, and at any frequency, visible or radio or any other. Earth is just too small. Nearly every extrasolar planet we have found has been basically a Jupiter-sized planet orbiting at ridiculously close distances; such planets cause their parent stars to "wobble" and also "dim" as they pass in front of them. From a distance of a few light-years or more (remember the closest star to us is about 4 light years away) Earth would be entirely undetectable. Looking for the Earth would be like trying to resolve a specific grain of sand on a beach if viewing the beach from the moon. We are REALLY small, and we just don't give off enough general radiation to be detectable, and we don't reflect enough light, block enough of the sun's light, or gravitationally affect the sun enough to be seen. MAYBE an outside-of-the-solar-system viewer could detect Jupiter, but not earth. Possibly, if we were beaming a tight, high energy radio signal directly at another star, it could be detected, but otherwise we would be invisible. --Jayron32.talk.contribs 01:41, 25 March 2009 (UTC)[reply]
I seem to recall that there have been at least two signals intentionally sent into space from radio telescopes which would have made the Earth far brighter than the sun. I strongly question the wisdom of such efforts to call attention to us. Edison (talk) 04:28, 25 March 2009 (UTC)[reply]
I'm picturing a tentacled, drooling alien creature noticing the radio emissions from Earth, then heading our way as he straps on a bib... :-)
Well - if he's that hungry, he may be in a lot of trouble because getting here is (in all likelihood) going to take a couple of centuries. SteveBaker (talk) 05:20, 25 March 2009 (UTC)[reply]
Yes - that's true - the actual amount of power in those transmissions isn't really that great - but being sent from a radio telescope means that the beam is highly directional...I forget which stars they were aimed at - but only beings orbiting around those stars would have any chance of seeing the signal. For the rest of the universe, the earth would still have been pretty much invisible. There are other people claiming to do this though - there used to be (and maybe still is) a company on the web someplace that'll beam any short ASCII message you care to give them out in some direction or other for just a few bucks. Aleksandr Leonidovich Zaitsev [3] has been sending all sorts of things out on a really powerful narrow-beam transmitter. So this does happen...but the odds of anyone out there picking it up are really slim. SteveBaker (talk) 05:20, 25 March 2009 (UTC)[reply]
My memory may be paying tricks with me here, but I seem to remember that the chosen star was Vega. Dauto (talk) 17:43, 25 March 2009 (UTC)[reply]
If you're talking about the Arecibo message, the destination was the globular cluster Messier 13. You may be remembering Vega because of its role in Contact (novel) or Contact (film). --Bowlhover (talk) 18:36, 25 March 2009 (UTC)[reply]

If you add up the power of all the radio, tv, and other radio-wave type emissions (including those from power lines or electrical wiring, even thunderstorms) it must be a lot. Either it gets absorbed, or it leaks into space. I'm wondering how this would compare with the power of similar wavelengths from the sun - whether these emissions from modern technology would be a beacon in some wavelengths. 89.243.177.130 (talk) 11:39, 25 March 2009 (UTC)[reply]

You have to remember that numbers that may be enormous by human standards can be minuscule by astronomical standards. Some order-of-magnitude comparisons:
Total energy consumption on Earth (including coal, oil, nuclear, solar etc) ~ 10¹³ Watts
Solar radiation reflected or radiated off Earth ~ 10¹⁷ Watts
Sun's energy production ~ 10²⁶ Watts
So even if you assume that all energy generated/consumed by man is radiated out into space, it would be much smaller than the reflected solar radiation, and about 10⁻¹³ of the total solar radiation. This should give you some idea of the difference in magnitudes we are talking about!
Of course these calculations will change if the radiation from Earth is very narrowly directed in terms of its frequency or direction, but barring that, detecting Geo-radiation from another star is a long shot, if not completely hopeless. Abecedare (talk) 19:31, 25 March 2009 (UTC)[reply]
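To make the comparison concrete, here is that arithmetic as a trivial Python sketch (the inputs are the same rough order-of-magnitude figures as above, nothing more precise):

 human_power = 1e13     # W - total human energy consumption (rough)
 earth_glow = 1e17      # W - solar radiation reflected/re-radiated by Earth
 sun_output = 1e26      # W - the Sun's total output (order of magnitude)
 
 print(human_power / earth_glow)    # ~1e-4: our output vs Earth's natural glow
 print(human_power / sun_output)    # ~1e-13: our output vs the Sun itself

As the following reply notes, a narrow-band, narrow-beam transmission changes the picture entirely, because all the power is concentrated in one direction and one frequency.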
Frequency is the key consideration - the Sun emits enormous amounts in the visible part of the spectrum, but orders of magnitude less in the radio frequencies. A unidirectional radio transmission from the best technology we have would be detectable by the best technology we have from other stars (I'm not sure how distant those stars could be, though - the other side of the galaxy might be a challenge, but the time required for a signal to get there makes it irrelevant). If we sent a visible light laser beam to another star, I doubt they would notice. --Tango (talk) 20:30, 25 March 2009 (UTC)[reply]


March 25

Radioactive sickness?

I am writing a research report on radioactive sickness. But when I search on Google I also get radiation sickness in my results. Are radioactive sickness and radiation sickness the same thing? —Preceding unsigned comment added by 174.6.144.211 (talk) 00:59, 25 March 2009 (UTC)[reply]

I've never heard of "radioactive sickness". The illness caused by exposure to ionising radiation is called "radiation sickness". What do you mean by "radioactive sickness"? --Tango (talk) 01:07, 25 March 2009 (UTC)[reply]

I got a topic on Aleksandr Litvinenko's poisoning and was asked to write a report on radioactive poisoning, as this is what happened to him. But, as I mentioned before, I am not sure whether or not radioactive poisoning and radiation poisoning are the same thing, since they both pop up when I search for radioactive poisoning. —Preceding unsigned comment added by 174.6.144.211 (talk) 01:12, 25 March 2009 (UTC)[reply]

Litvinenko died of radiation poisoning. I think whoever it was that said "radioactive poisoning" just made a mistake. --Tango (talk) 01:27, 25 March 2009 (UTC)[reply]
All these phrases are talking about the same thing, but "radiation sickness" is the proper name for it. It's not really a kind of poisoning, although people talk of it that way. (Poisons injure the body in different ways than radiation does.) --Anonymous, 01:32 UTC, March 25, 2009.
Or acute radiation syndrome if you want to sound really clever! --Tango (talk) 01:33, 25 March 2009 (UTC)[reply]

(after multiple edit conflicts) We have an article on this incident, Alexander Litvinenko poisoning. The name for what he died from is radiation poisoning. It is surmised that this was caused by the ingestion of a radioactive substance, rather than from exposure to an external source of radiation. One could call it "radioactive poisoning" in that he was poisoned by a radioactive substance as opposed to a toxic chemical such as cyanide or a neurotoxin of some sort. However, there is no real difference between what killed him and what killed the victims of the Chernobyl disaster; the only distinction is the means by which he was exposed to the lethal dose of ionizing radiation. - EronTalk 01:34, 25 March 2009 (UTC)[reply]

Wait, so the effect of external radiation does not count as radioactive poisoning? —Preceding unsigned comment added by 174.6.144.211 (talk) 01:51, 25 March 2009 (UTC)[reply]

Yes, it does. It doesn't matter where the radiation comes from or how it gets to your tissues, the effect is still called radiation poisoning/sickness. --Tango (talk) 01:55, 25 March 2009 (UTC)[reply]
Yes - my earlier response may have added some confusion here. I was speculating as to why someone might call what happened to Litvinenko "radioactive poisoning" - I didn't mean to imply any difference between internal and external sources of radiation. - EronTalk 02:26, 25 March 2009 (UTC)[reply]

Can radioactive poisoning be genetic? —Preceding unsigned comment added by 174.6.144.211 (talk) 02:10, 25 March 2009 (UTC)[reply]

No. Radiation poisoning (not "radioactive poisoning" - that is not the proper name for it) is caused by exposure to high levels of ionizing radiation. There is no genetic component to it. - EronTalk 02:28, 25 March 2009 (UTC)[reply]

Do nuclear warfare, nuclear reactors, radioactive materials and gamma rays cause radiation poisoning because they release ionizing radiation? —Preceding unsigned comment added by 174.6.144.211 (talk) 02:36, 25 March 2009 (UTC)[reply]

Yes. - EronTalk 02:43, 25 March 2009 (UTC)[reply]
I wrote a good chunk of the Alexander Litvinenko poisoning article - it's an amazing story, the closest thing to a classic Hollywood spy story you'll ever see in real life. But what was significant here is that Polonium-210 (which is the radioactive material that was mixed into Litvinenko's tea - and which eventually killed him) is not normally particularly dangerous stuff. In fact, you can buy significant quantities of it in anti-static lens cleaning equipment from any decent camera store. It's radioactive - but only emits alpha radiation - which is stopped by a single sheet of paper. Even if you get Polonium-210 directly on your skin - the layer of dead skin cells that covers your body is quite enough to stop the radiation from harming you. What's dangerous is only if you eat (or in this case, drink) the stuff (or perhaps breathe it in as a fine dust in large enough quantities). That spreads it throughout your body and gives the alpha radiation a way to irradiate living tissue. I suspect that is the reason why everyone calls it a 'poisoning' - it's not at all like standing a foot away from a chunk of plutonium and getting irradiated. It only took a couple of days for Litvinenko to show symptoms bad enough to put him in hospital - but it was close to three weeks until he eventually died. For the majority of that time, it was thought that he had been poisoned with non-radioactive Thallium - because although his symptoms were classic symptoms of radiation sickness (nausea, hair falling out, that kind of thing) - there was no measurable radiation coming from his body. So they looked for more conventional poisons that produce that set of symptoms, and thallium popped up. But again - this is probably why the term "poisoning" was initially kicked around and kinda stuck - even after it was realised that this wasn't strictly a poison. The thing that's particularly chilling about this is that the murderer could have used any of a dozen - much easier to obtain - poisons in Litvinenko's tea. This one is so unique and trivially easy to trace (it has to be made in a nuclear reactor - and there are only two sources of it - one Russian and another American - but the isotope ratios make it abundantly clear that this batch was Russian - and that it was made very soon before the poisoning took place, suggesting that the stuff hadn't been smuggled through long, complicated chains of black-market dealers). The Russians aren't stupid - they'd have known that this particular technique for killing him would be instantly traceable back to them. So it's clear that the point of the exercise was to send a message: "You aren't safe from us anywhere - and we don't care that anyone out there knows that we do this kind of thing."...that's real cold-war stuff! What we don't know is who, specifically, this rather gruesome message was intended for...whoever that is - is evidently keeping a much lower profile! Anyway - enjoy reading the article - and be sure to follow the links to the 150 or so references at the bottom - there is much more to be learned here! SteveBaker (talk) 05:03, 25 March 2009 (UTC)[reply]
(Nitpick: Plutonium is not that radioactive—a lot less so than Polonium-210. You won't get irradiated from standing a foot away from a chunk of it assuming the chunk is not a critical mass. As with Polonium, it is a radiological danger primarily as an alpha emitter that you might inhale. Eating it isn't even really a problem—it passes out of the system rather quickly—it is only a serious problem if it gets into your lungs or bones. Also I would dispute that Polonium-210 is not "normally dangerous stuff"—the amount in anti-static brushes is quite small. You don't need much of it, in terms of mass, to have a problem. It is far more toxic than uranium, for example. It is not as bad as a fission product. But it is still stuff to handle with care in any significant amount.) --140.247.240.69 (talk) 18:43, 25 March 2009 (UTC)[reply]
The claim that Polonium-210 is present in 'significant quantities' in antistatic brushes is somewhat misleading. The largest brush sold by these guys has about 500 microCuries of Polonium, which according to our article is about a tenth of a microgram. That's more than enough to kill you if you decide to eat it, but is nonetheless a pretty small amount. If you actually had a decent-sized chunk of it (a gram, say) on your skin, then your initial problem would be the way the intense heat was burning through your skin. Algebraist 11:17, 26 March 2009 (UTC)[reply]
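For anyone who wants to check that figure, here is the standard activity-to-mass conversion (m = A·M/(λ·N_A)) as a Python sketch - the half-life and the 500-microcurie brush are as quoted above:

 import math
 
 HALF_LIFE_S = 138.4 * 86400     # Po-210 half-life, about 138 days
 BQ_PER_CI = 3.7e10              # becquerels per curie
 AVOGADRO = 6.022e23
 MOLAR_MASS_G = 210.0            # g/mol for Po-210
 
 activity_bq = 500e-6 * BQ_PER_CI          # the 500 microcurie brush
 lam = math.log(2) / HALF_LIFE_S           # decay constant, 1/s
 atoms = activity_bq / lam                 # N = A / lambda
 mass_g = atoms * MOLAR_MASS_G / AVOGADRO
 print(f"{mass_g * 1e6:.2f} micrograms")   # ~0.11 - about a tenth of a microgram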

Did Litvinenko suffer from chronic radiation syndrome or acute radiation syndrome? and why? —Preceding unsigned comment added by 174.6.144.211 (talk) 00:33, 27 March 2009 (UTC)[reply]

Acute. He didn't live long enough to develop any chronic problems. Algebraist 00:40, 27 March 2009 (UTC)[reply]

What is the difference between chronic and acute problems? —Preceding unsigned comment added by 174.6.144.211 (talk) 01:22, 27 March 2009 (UTC)[reply]

See Acute (medicine) and Chronic (medicine). Algebraist 01:25, 27 March 2009 (UTC)[reply]
(ec) wikt:chronic, wikt:acute. Chronic are long-term problems, acute are short-term. The common cold is generally an acute problem, asthma is generally a chronic one. --Tango (talk) 01:28, 27 March 2009 (UTC)[reply]

So, in other words, the consequences of acute radiation syndrome are short-term and the consequences of chronic radiation syndrome are long-term? —Preceding unsigned comment added by 174.6.144.211 (talk) 01:34, 27 March 2009 (UTC)[reply]

Yes. So after the Chernobyl disaster - the amazingly heroic guys who worked in the core of the reactor to put out the fire died within days of acute radiation sickness. The people in the towns and cities further away from the reactor didn't die from the immediate consequences - but instead have lifetime increases in cancer risk, birth deformities and so forth. That's a chronic consequence. Litvinenko lived for just short of three weeks - but his fate was sealed from the moment he drank the polonium-laced tea - so that's acute radiation sickness. SteveBaker (talk) 02:05, 27 March 2009 (UTC)[reply]
And it's a very important distinction! Often people get the two things very much mixed up—different things cause each of them, generally. The thing about radiation is that something is either VERY radioactive but has a SHORT half-life, or it is WEAKLY radioactive but has a LONG half-life. So polonium has a half-life of only a few months—that's pretty radioactive, as things go, but not as bad as, say, fission products (raw radioactive waste), which will kill you within minutes of close exposure. On the other hand, something like, say, plutonium, has a very long half-life, meaning it won't kill you from close exposure, but sticks around a long time. If during part of that long time it happens to be inside, say, your lungs or bones, it'll sit there radiating and radiating, doing lots of long-term, chronic damage. BOTH of these kinds of radiation risks are bad but they are bad for different reasons and caused by different things. Thus when the Civil Defense guys say you can come out of your fallout shelter after a few days, what they mean is, the stuff that is going to ACUTELY kill you has already radiated itself out of existence by that point. But the long-term, CHRONIC stuff is going to be around for thousands of years, posing a real long-term hazard as part of the food chain, building materials, etc. Just because something is weakly radioactive does not make it safe at all — it just means it won't kill you immediately, but can still kill you in, say, 10-20 years! This is something that even very savvy scientific types often lose sight of (and we've had lots and lots of incidents with this—Chernobyl, Castle Bravo, uranium miners, etc.—where the physicists jump in and say, "oh, it's pretty safe, it's weakly radioactive only, nobody seems to have died!" and then only decades later the health problems show up and everybody gets cancer). --98.217.14.211 (talk) 19:12, 29 March 2009 (UTC)[reply]

In Flight Pitch Control / Dive or Stall

Can someone with knowledge of the principles of flight help me understand something? A UK company named Parajet is building a flying car of sorts called the SkyCar. At the link above, they make the following statement about the SkyCar:

"It has no pitch control and therefore (is) impossible to stall or dive."

So what is it about having pitch control in flight that makes stalling or diving possible? Why would its absence prevent these situations? Appreciate any input. Wolfgangus (talk) 01:23, 25 March 2009 (UTC)[reply]

"Pitch control" means being able to control what angle the nose is at in a vertical direction. Obviously, that is required to dive - diving is pointing to nose steeply downwards. Stalling is caused by trying to climb too steeply (or not descending fast enough if you're going quite slowly). Presumably this flying car sets its own pitch, somehow, at a level somewhere below where it would stall. (I guess you then control altitude by varying speed - slow down to go down, speed up to go up. However, that would mean it must be possible to either stall, or dive - if you cut the engines, one of those two things has to happen.)--Tango (talk) 01:32, 25 March 2009 (UTC)[reply]
I imagine they just mean it is possible for you to cause a dive or stall by applying too much upward or downward pitch. --Anonymous, 21:35 UTC, March 25, 2009.
Actually, I take my last statement back - cutting the engines wouldn't be sufficient, you would probably need some way to actually brake to get slow enough for the only non-stalling configuration to be a dive. --Tango (talk) 01:36, 25 March 2009 (UTC)[reply]
What stalls a plane is when the pitch of the wing to the airflow is too steep. If you pitch the plane up and have plenty of engine power - the plane climbs and the airflow remains pretty much parallel to the wing....but if you pitch the plane up without enough power applied - you get into a vicious circle where the wing loses lift - you start to fall downwards - which means that the airflow is now somewhat upwards - which further increases the angle of pitch to the airflow - which increases the drag - which slows you down - and the pitch angle to the airflow increases still further - until you're simply falling like a rock. What's suspicious about this claim is that even without pitch control - if you slow the motor down enough then the roughly 4 degrees of upward pitch (which you pretty much have to have built into the wing to make the plane fly) - is enough to cause a low-speed stall. The way you recover from such a stall is to push the nose down and (if you can) you apply power. Without pitch control - if your engine fails - you've got no means to recover from the stall and you're going to crash.
There is a means to fix this problem - and that's to have a 'canard' design - where the pitch control happens on surfaces in front of the main wing and with a slightly steeper pitch than the main wing. What happens then is that if your speed drops, the little 'canard' wings stall before the main wing can - when the front wings stall - they lose lift - the nose falls - and that automatically reduces the angle of attack - so they immediately un-stall without any input from the pilot. Of course if you lose the motor - the plane is going to nose down into the ground...but it won't stall - so you stand at least a chance of pulling out of the dive when you've built up enough speed.
The other odd thing about not having pitch control is that you can't fly faster or slower...if you add power without altering your pitch - you'll climb - and if you reduce power without changing pitch - you'll lose altitude. At a particular pitch there is only exactly one speed that'll keep you flying level! I find it hard to believe that this contraption really doesn't have some means to control pitch - I imagine that what they REALLY mean is that the pilot has no direct control over pitch - it's hard to believe that the flight computer can't control pitch.
SteveBaker (talk) 02:38, 25 March 2009 (UTC)[reply]
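To illustrate the "one pitch, one speed" point numerically: in level flight, lift must equal weight, L = ½ρv²SC_L, and with pitch fixed the lift coefficient C_L is (roughly) fixed too, so only one airspeed balances any given weight. A Python sketch with made-up light-aircraft numbers (the wing area and C_L are illustrative assumptions, not from any real aircraft):

 import math
 
 RHO = 1.225    # kg/m^3 - sea-level air density
 S = 12.0       # m^2 - wing area (made-up)
 CL = 0.6       # lift coefficient fixed by the built-in pitch (made-up)
 
 def level_speed(mass_kg):
     """Airspeed (m/s) at which lift exactly balances weight for fixed C_L."""
     weight_n = mass_kg * 9.81
     return math.sqrt(2.0 * weight_n / (RHO * S * CL))
 
 print(f"solo, 450 kg: {level_speed(450):.1f} m/s")            # ~31.6 m/s
 print(f"with passenger, 550 kg: {level_speed(550):.1f} m/s")  # ~35.0 m/s

Add a passenger and the only possible level-flight speed changes - which is exactly why a craft with truly fixed pitch couldn't hold altitude as its loading or fuel state changed.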
I definitely agree with Steve here, the only reasonable explanation is that the pilot has no direct control over pitch. There must be a mechanism for pitch control for the airplane to be flyable. If there were no pitch control, a steady altitude could only be maintained at one speed, which would be a complex function of weight, center of gravity, and air density. It would even change during flight as fuel is consumed, or even if a passenger leans forward in their seat! In turbulent air natural stability would be the only way to maintain attitude (and therefore altitude). It's even possible that in turbulence near the ground, the aircraft could be put into an attitude where recovery with no pitch control is impossible. Finally, pitch control would be necessary to flare into the proper attitude for landing. So pitch control is basically required for any airplane, even if the pilot has no direct control. anonymous6494 03:43, 25 March 2009 (UTC)[reply]
Some version of "fly by wire?" Making the car/plane "idiot-proof?" Edison (talk) 04:23, 25 March 2009 (UTC)[reply]
After some more reading it seems the link you provided is for a powered parachute, which indeed has no pitch control. anonymous6494 06:40, 25 March 2009 (UTC)[reply]

Thanks so much for the solid feedback, and that's right- a powered parachute, although the term - the entire field actually - is new to me. There are clearly some pretty substantial differences between this vehicle and the Terrafugia Transition but I was assigned to write a feature about the SkyCar and was told it was a flying car plain and simple. Evidently that's not the case at all. Wolfgangus (talk) 06:57, 25 March 2009 (UTC)[reply]

The Parajet Skycar doesn't look like it has no pitch control. It looks like a powered parachute, which I would think can be stalled given enough effort. I wonder if the statement was about the Moller Skycar M400, about which it is said "the pilot's only inputs are speed and direction". It would certainly lack pitch control. DJ Clayworth (talk) 21:23, 25 March 2009 (UTC)[reply]

The Moller contraption certainly has pitch control inside the computer system - but the pilot has very little to do with flying the machine at all. Mostly he enters a destination and lets the computer do all of the flying. But the Moller gets most of its lift from the half dozen thrusters - it's not particularly aerodynamic...and it doesn't really work. Beyond a few hover tests, it hasn't done much flying. They sold the prototype under the condition that whoever bought it would not allow it to be flown again. SteveBaker (talk) 22:54, 25 March 2009 (UTC)[reply]

Mount St Helens

hi,

I think I heard somewhere that when Mt St Helens erupted in 1980 it released more carbon dioxide into the atmosphere than humans have done to date. Is this at all true or not?

thanks, --84.66.48.29 (talk) 10:37, 25 March 2009 (UTC)[reply]

[4] while not discussing the 1980 eruption in particular helps put things in perspective Nil Einne (talk) 11:07, 25 March 2009 (UTC)[reply]
Direct measurements of atmospheric carbon dioxide taken at the Mauna Loa Observatory say otherwise. Things to note at the linked page's image:
  • The yearly natural oscillation is regular enough that it can be easily subtracted, leading to the red curve.
  • The steady growth is non-negligible
  • The Pinatubo and St Helens eruptions don't show up at all.
Dauto (talk) 14:43, 25 March 2009 (UTC)[reply]

I don't think volcano eruptions put out much CO2, but they do put out other gases like SO2, which are far less nice. —Preceding unsigned comment added by 65.121.141.34 (talk) 14:46, 25 March 2009 (UTC)[reply]

And the SO2 levels can be higher than those created by humans in the immediate surrounding area, but not when compared to all the sulfur dioxide created by humans worldwide. Also, volcanoes are an occasional thing, while human industry is relentless in adding pollution. So, you might want to evacuate the area around a volcano, during an eruption, because of all the pollution it puts out (among other reasons), but volcanoes are bit players on the world stage. There is, however, something quite rare called a supervolcano which is entirely different. StuRat (talk) 15:51, 25 March 2009 (UTC)[reply]
A site I looked at said that less than 1% of CO2 came from volcanoes - compared with SO2, about 1/3 of which (17 million tonnes) comes from volcanoes (that is a rough estimate on several counts). - Jarry1250 (t, c) 21:08, 26 March 2009 (UTC)[reply]
It's also going to vary dramatically from year to year. StuRat (talk) 08:06, 27 March 2009 (UTC)[reply]

Sunny Delight

Does anyone know what ingredient/chemical in Sunny Delight causes sterility? Thanks, Paper CB. Papercutbiology♫ (talk) (Sign here!) 11:28, 25 March 2009 (UTC)[reply]

None of them. Our SunnyD article lists the ingredients. --Heron (talk) 12:12, 25 March 2009 (UTC)[reply]
I heard in a lecture on food that it was a dye...but I can't remember. Thanks though. My fears are allayed. :) Papercutbiology♫ (talk) (Sign here!) 13:30, 25 March 2009 (UTC)[reply]
Just keep the bottle cold or: "Benzene can form in soft drinks containing vitamin C, also called ascorbic acid, and either sodium benzoate or potassium benzoate." That is more likely to cause cancer than sterility (even for high doses our article says "not known"). Soft drink companies are scrambling to reformulate their beverages, so check the label. This is one of those things where the hype is bigger than the known study result. Should have known we have an article: Benzene in soft drinks. - 76.97.245.5 (talk) 14:20, 25 March 2009 (UTC)[reply]
Yellow 5 is often incorrectly cited as lowering sperm count, with Mountain Dew getting the brunt of the criticism. However, yellow 5 doesn't seem to be in SunnyD; maybe that's another common myth. -- 72.248.158.162 (talk) 14:38, 25 March 2009 (UTC)[reply]
Thanks for all the answers. I did know we have an article on SunnyD, I just didn't find it helpful. Papercutbiology♫ (talk) (Sign here!) 15:25, 25 March 2009 (UTC)[reply]

Endocrine stress system

A chapter in an article I'm reading is called: Endocrine stress system. It says:
The basic components of the stress system include:
- the locus ceruleus/noradrenergic sympathetic system;
- the hypothalamic-pituitary-adrenal axis.
However, I wonder if the LC/NE sympathetic system is neural and not endocrine? Lova Falk (talk) 11:55, 25 March 2009 (UTC)[reply]

Both. Our epinephrine article leads with "Epinephrine (also referred to as adrenaline; see Terminology) is a hormone and neurotransmitter." Further investigation will reveal to you that strong sympathetic neural stimulation will result in endocrine release of epinephrine and norepinephrine from the adrenal medulla. Cool, huh? --Scray (talk) 02:11, 26 March 2009 (UTC)[reply]
Thank you, but I'm just getting more and more confused. In the article on norepinephrine it says: "As a stress hormone, norepinephrine affects parts of the brain where attention and responding actions are controlled." As far as I understand, a hormone is released into the blood stream. So norepinephrine is released into the bloodstream, the blood travels to the brain and the brain gets more attentive. Is that really correct? I thought that norepinephrine affected the brain as a neurotransmitter.
The article also says: "It (norepinephrine) is released from the adrenal medulla into the blood as a hormone, and is also a neurotransmitter in the central nervous system and sympathetic nervous system where it is released from noradrenergic neurons." Isn't it more correct to say: "It is released from the adrenal medulla into the blood as a hormone, and it is released from the locus ceruleus as a neurotransmitter in the central nervous system and sympathetic nervous system."  ??? Lova Falk (talk) 10:11, 26 March 2009 (UTC)[reply]
Not sure if you're still watching this thread, but I suggest that you question your assumptions when things don't make sense. The first paragraph of our article on hormones states that hormones "are chemicals released by cells that affect cells in other parts of the body." and goes on to say that "Hormones in animals are often transported in the blood.". Likewise, the first paragraph of our article on neurotransmitters states that "Neurotransmitters are packaged into vesicles that cluster beneath the membrane on the presynaptic side of a synapse, and are released into the synaptic cleft, where they bind to receptors in the membrane on the postsynaptic side of the synapse." Thus, hormones act at a distance, and neurotransmitters act across a synaptic cleft (directly from one cell to the one on the other side of the cleft). If you go back and read what you'd quoted and what I'd said earlier in light of these facts, it's all consistent. --Scray (talk) 01:29, 30 March 2009 (UTC)[reply]

vegetable oil for fuel

Please, I would like to know which properties of vegetable oils are compatible with those of fuel. I mean: what properties of vegetable oils make it possible for them to be used as fuel? —Preceding unsigned comment added by Peaceobioma (talkcontribs) 12:15, 25 March 2009 (UTC)[reply]

Hydrocarbon and oil are good places to start. 76.97.245.5 (talk) 13:53, 25 March 2009 (UTC)[reply]
You may also want to look at the articles biofuel and biodiesel (and the links therein). TenOfAllTrades(talk) 14:28, 25 March 2009 (UTC)[reply]
Note that it's not economical to use new vegetable oils for fuel, as they cost far more than other fuels. However, waste vegetable oils, such as those collected from the fryers in fast food restaurants, can be economical, after filtering, but only for a small segment of the fuel industry, as waste vegetable oils aren't produced in the quantities needed to supply all our fuel needs. One side benefit, the exhaust smells yummy (although that could be a negative if it makes you always hungry). :-) StuRat (talk) 15:37, 25 March 2009 (UTC)[reply]
The current price of Malaysian palm oil is about 500 USD per metric ton, which is comparable to a petroleum cost of about 65 USD per barrel. The current price of crude oil is a bit more than 50 USD per barrel ([5]). At those spot prices, a switch to biodiesel could be economical right now if supported by relatively minor government incentives.
That said, palm oil experienced a temporary price spike last year (up to just over 800 USD per ton) due to a combination of drought and intense interest in biodiesel; its price over the last few years has also been lower than 300 USD per ton. Meanwhile, the price of crude oil has also rollercoastered over the last few years, running as low as 20 USD at the start of this decade, and spiking above 145 USD per barrel last summer. Neither type of oil has been a poster child for price stability of late.
So it's a bit of an overstatement to assert that non-waste vegetable oils are inherently uneconomic. While capacity to produce such oils certainly doesn't exist to replace all fuel uses of petroleum overnight, it is by no means a foregone conclusion that a gradual transition is impossible — nor is such a transition even unlikely. TenOfAllTrades(talk) 16:27, 25 March 2009 (UTC)[reply]
Using foodstuffs as fuel is an inherently bad idea, as the recent US experiment with using corn to produce ethanol shows. The price of corn skyrocketed, making ethanol cost more than gasoline, even at its peak prices. Meanwhile, food prices went up dramatically as a result, since growing corn became more profitable than other crops, due to government subsidies. Also, there's the argument that it's immoral to burn food to run cars when people are starving. Finally, there's the infrastructure problem. Refineries and gas stations could be modified to provide biodiesel from palm oil, but that would be an enormous expense which would only be justified if this approach would be economical in the long run, and there's no sign that it would be. That leaves people to buy palm oil on their own and produce biodiesel, which ends up being far more expensive, unless you start with waste oil. StuRat (talk) 17:21, 25 March 2009 (UTC)[reply]
Using the wrong foodstuffs as fuel is an inherently bad idea, as is attempting to make the changeover too quickly. See ethanol fuel in Brazil for one case where it was done properly and successfully. The United States' example is simply a perfect demonstration of exactly the wrong way to manage the transition. Extracting ethanol from corn has a much lower energy output per cultivated acre of land – and indeed may be negative net output once the energy costs of harvesting, fermentation, and refining are factored in – compared to Brazil's sugar cane ethanol or tropical palm oil. The U.S. system of farm subsidies (for corn and other products) badly distorts the market, and probably encouraged even more farmers to make an ill-advised switch to corn. About the only thing the U.S. approach has going for it is that it has encouraged the development of infrastructure (from refineries and gas stations to individual flex-fuel motor vehicles) which can cope with ethanol and ethanol-blended fuel. That will pay off in spades if cellulosic ethanol (from switchgrass, most likely) or another technology matures sufficiently to provide large amounts of sustainable ethanol.
Replacement of diesel with biodiesel would require no changes for end users or distributors; they're equivalent products for virtually all uses. The same supertankers that carry crude oil from Alaska and the Middle East can carry biodiesel or unrefined palm oil from the tropics. The same gas pumps which deliver diesel can pump biodiesel. The same city bus that belches diesel soot will run happily on biodiesel.
Yes, different refining equipment would be required for biodiesel compared with conventional diesel, but that's not a problem. Refinery capacity can be built to keep pace with the supply of suitable oils — and minimal refining of edible oils is required compared to the refining required for most petroleum products. Biodiesel can be blended into the petroleum diesel supply chain at any point after the products are refined. If anything, you've missed the most significant infrastructure hurdle, which is that most private motor vehicles don't currently burn diesel. Still, that's a problem that can resolve itself over a period of many years, as the cost of biodiesel declines. TenOfAllTrades(talk) 18:37, 25 March 2009 (UTC)[reply]
In the US, at least, most fuel is gasoline, so the goal would be to replace that. Isn't biodiesel thicker than gasoline? That would make me think new pumps would be needed. They also need to add biodiesel to the selection switch or add separate dedicated pumps for it. Also, unless they intend to no longer carry regular gasoline (which sounds like a foolish idea in the short term), gas stations will need to install new tanks for biodiesel storage, unless they just happen to have a spare tank already installed. For those stations which already have regular diesel, they could switch those pumps over to biodiesel rather easily, but that's only a small portion of stations (most of which cater to truckers) in the US. And building new refineries is highly problematic in the US, as nobody wants one near them. StuRat (talk) 22:56, 25 March 2009 (UTC)[reply]
Picky, aren't you? Any filling station pump that can handle regular diesel can pump biodiesel. Any underground tank that is compatible with gasoline and diesel can quite comfortably accommodate biodiesel. If a facility already pumps diesel, there's no barrier to biodiesel at all — just start filling the tanks with biodiesel. Contrary to your claim that only a small portion of filling stations offer diesel, a 2005 study pegged the fraction at 42% in the United States: [6]. (That represents a sharp increase over the 30% which offered diesel in 2000; by now the fraction is probably over one half.)
Most filling stations in the United States dispense multiple grades of gasoline (typically one 'regular' unleaded and one or two 'premium' higher-octane options). If demand existed, a gas station owner could choose to drop one of the premium grades of gasoline and dispense (bio)diesel instead. All of the in-ground plumbing would remain the same. (The U.S. had a similar experience twenty or thirty years ago with the phase-out of leaded gasoline, and much of that multiple-fuel infrastructure is still in place.) Pumping equipment is repaired and replaced on a regular schedule; a switchover from gasoline to diesel is relatively straightforward.
Yes, most personal motor vehicles in the United States use gasoline — but that would gradually change if biodiesel offered a consistently lower-priced, carbon-neutral alternative. (Slow, steady growth of the biodiesel market will also ease the economic dislocations that could result from a more rapid shift in demand. If cellulosic ethanol turns out to work in the meantime, that's great — we don't have to have all the eggs in one basket.) The distribution infrastructure works equally well for diesel or gasoline, so it's a matter of raw ingredient supply and refining capacity. Build the refineries in the tropics if you can't get the NIMBYists to let you build them in the States—diesel is much safer to transport than refined gasoline. TenOfAllTrades(talk) 03:18, 26 March 2009 (UTC)[reply]
The food vs fuel issue is a perpetual one and is not IMHO a simple matter. Firstly, the causes of the 2007–2008 world food price crisis are in great dispute, as the article testifies to. IMHO fuel demand was a factor, but I personally doubt it was the primary factor; rather it was a large combination of factors, and blaming biofuels was a convenient excuse for a large number of people with different purposes. Regardless though, as TOAT testifies, you can't lump all food crops together. For starters, while palm oil is in widespread use, it's a controversial oil because of its high saturated fat content. While there are obviously some cases when you would want that, in many other cases its use is controversial. Of course there is some demand from those who believe the health problems caused by saturated fats, particularly saturated fats from vegetable oils, may be overrated (perhaps due to the influence of money from other oil producers, who tend to be in the developed world, on scientists) as well as from those who believe that when polyunsaturated oils are used for cooking they produce trans fats or free radicals in sufficient amounts that it is better to use saturated oils. Regardless though, whether palm oil will, or should, have a role as a major food crop in the future is an area of much debate. More importantly, as the world food price crisis testifies to (for example, the prices of rice and other foods not used to make biofuels to any great extent were also greatly affected), just not using food crops doesn't actually help in itself. If the non-food crops used to make biofuels take over the land of food crops, as will happen if they fetch a better price, then that doesn't help, it makes things worse. At least if you are using food crops you still have the food, you just have to pay more for it. There is some suggestion that you can use things like jatropha, which it is hoped will not compete for land with food crops, but this remains an unproven suggestion. There's also the hope we will be able to use all the waste material from plants that currently goes to waste, but this will obviously include food crops such as palm oil. There's also the question of whether it's fair to refuse to use biofuels, if you aren't actually subsidising them, solely because of your desire to not raise food prices, when effectively what you're preventing is farmers (many in the developing world) from getting a fair market price for their goods, as well as effectively discouraging people from farming. This gets into a whole raft of complicated issues like agricultural subsidies, globalisation, protectionism, food security and competing interests in the developing world. Let's not forget that one of the reasons why corn was so cheap is because of US government subsidies, and many people have argued the US government effectively outcompeted many developing countries, leading to a great fall in their agricultural production, which has had numerous ill effects. (The EU also of course has large subsidies and has been blamed for many of the same problems in different areas, e.g. sugar beet.)
Of course one of the great concerns with palm oil is whether its current development is sustainable, but again this gets into a whole raft of complicated issues, like whether it's fair to demand that countries in the developing world keep their rainforests when they could be getting a better return by replacing them (particularly when many developed countries have cut down a significant portion of their natural forests), what length of time we're referring to, as well as how any future Kyoto protocol will develop (I think there's great concern parts of it will be based on the level of forests at some set point in time, which is not going to seem fair to those who have preserved their forests). Of course if we bring biofuel subsidies into the mix, it gets a lot more complicated. N.B. I recall reading that Malaysia was planning to develop a processing plant for making biodiesel from palm oil.

How many barrels in a tonne though? A quick search online suggests around 7 but not sure how reliable that is. 194.221.133.226 (talk) 16:46, 25 March 2009 (UTC)[reply]

'Around 7' is about right. The density of crude oil ranges from about 0.8 to 0.9 grams per cubic centimeter [7]. A barrel of oil is 42 US gallons or about 160 liters; the weight of a barrel is therefore 130 to 145 kilograms (roughly). At an even 1000 kilograms per metric ton, that's about seven barrels to the ton. TenOfAllTrades(talk) 18:39, 25 March 2009 (UTC)[reply]
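The same arithmetic as a short Python sketch, using the rough density range quoted above:

 BARREL_L = 159.0   # litres in a 42-US-gallon barrel (approx)
 
 for density in (0.80, 0.85, 0.90):            # g/cm^3, i.e. kg/litre
     kg_per_barrel = density * BARREL_L
     print(f"{density:.2f} g/cm^3: {kg_per_barrel:3.0f} kg/barrel, "
           f"{1000 / kg_per_barrel:.1f} barrels per tonne")

That prints roughly 7.0 to 7.9 barrels per tonne, consistent with the "around 7" figure.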
Note that comparing tonne to tonne or volume to volume is pointless in itself. We need to consider the energy density of the fuels. Our Energy density article gives a slightly higher energy density for crude oil compared to biodiesel, but I'm not sure how palm oil which hasn't yet been processed into biodiesel compares. On the other hand, palm oil can be used as a fuel with relatively little processing, I believe. Nil Einne (talk) 01:55, 26 March 2009 (UTC)[reply]

Red decoder screen reveals blue text?

Which exact shades of red and blue should somebody optimally use to make one of those decoder things? If I am not mistaken, it is often text or an image printed in blue ink, then hidden by red "noise". When you put a certain kind of red transparency over it, the red is filtered out and the text or image comes out clearly. Is there a certain shade of blue or red that works best for this? Pantone or otherwise? etc. Are the red screens usually made from a certain type of plastic or other material? --Sonjaaa (talk) 19:04, 25 March 2009 (UTC)[reply]

You want to ideally have a color to match the red noise. When something is red transparent, it appears red because it is reflecting the red portion of the white light in the room back at you. Being transparent, apart from this it is letting all other light through. So when you hold it up to the decoder you want it to reflect the red noise and let the blue noise through, so a color most similar to the one of the noise you want to filter out is best.
If you are trying to print your own, I'd make sure to use just ink from the red color cartridge and just ink from the blue color cartridge. What you want to avoid is having multiple frequencies of light mixed together. Generally a clear red plastic is easy and cheap to produce. Anythingapplied (talk) 20:56, 25 March 2009 (UTC)[reply]
That's a very complicated - and entirely wrong - explanation! It's nothing to do with the film reflecting light. Red film allows red light to pass through it - and blocks all of the other colors. So when you look at (say) blue writing on a white background - the blue light is blocked - so the writing looks black - the white paper looks red because all of the other colors are filtered out. Hence you see black writing on a red background. The choice of color for the 'blue' should be something as far from 'red' as possible - so a sky blue would probably be the best choice - it is on the opposite side of the 'color wheel' - the complement to red. Inkjet printers have magenta, cyan, yellow and black inks - none of which is a particularly good match for the red filter. But printing a red/cyan picture should work pretty well. SteveBaker (talk) 23:33, 25 March 2009 (UTC)[reply]
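If you want to experiment with this, here is a minimal Python simulation of an ideal red filter using NumPy and Pillow - decoder_card.png is a hypothetical input image of blue writing hidden under red noise:

 import numpy as np
 from PIL import Image   # pip install numpy pillow
 
 rgb = np.asarray(Image.open("decoder_card.png").convert("RGB"))
 
 # An ideal red filter passes only the red component of the light from the
 # page: red noise stays bright, while blue ink reflects almost no red and
 # so turns dark - the hidden writing appears as black-on-red.
 through_filter = rgb[..., 0].copy()
 Image.fromarray(through_filter, mode="L").save("through_filter.png")

A real gel filter has a broad pass-band rather than this perfect single-channel cutoff, so in practice the contrast is lower than the simulation suggests.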
This reminds me of the old home version of the television game show Jeopardy!, which used a red plastic sheet on the back of the game board to make the blue answers hidden behind red "noise" visible. It was used to keep the clues hidden when selecting a game sheet, but allow them to be seen once the sheet was put in place behind the game board. --Thomprod (talk) 16:38, 27 March 2009 (UTC)[reply]

Optical inversion, but not reversal?

I just bought a projecting alarm clock. It has a projector which can be aimed to project the time onto the bedroom wall or ceiling. There is a switch to select the colour of the projected time: red, blue, or green. The projector, from my (external) investigation, consists of red, blue, and green LED emitters behind an LCD shadow mask, and a simple, manually-focusable lens at the front. The shadow mask is inverted from ordinary pocket calculator or wristwatch LCD operation; the background is black, and the digits — made up of standard 7-segment-bar display grids (the type that display "88:88" when all segments are active) — are transparent. Thus, the LED light shining through the transparent digits creates the projected time display. Now here's the mystery: if I peer into the operating projector, the time display appears inverted, but not reversed. That is, if the time is 1:08, peering into the projector reveals 1:08, not 80:1. If the time is 12:03, peering into the projector reveals 15:03, not E0:51 (please use your imagination to make these digits out of straight line segments; a "1" has no base and no hook, but is just a straight line, an upside-down "2" looks like "5" and vice-versa, a reversed "3" looks like "E", and so on). I am probably overthinking this, but how is it that the time is projected correctly on the opposite wall when it is apparently flopped only in the vertical, not in the horizontal? —Scheinwerfermann T·C22:13, 25 March 2009 (UTC)[reply]

Oops, I was underthinking: when I look at the projection on the opposite wall, I'm horizontally flopping myself. If I were to hold up a translucent screen in front of the projector and look in the direction of the projector, then the digits would appear horizontally flopped. I'm pretty sure that solves the mystery, yes? —Scheinwerfermann T·C23:50, 25 March 2009 (UTC)[reply]

Fault zone or short metamorphism

Is there another name for this that I am unaware of? Which form of metamorphism does this refer to? Thanks, Grsz11 22:49, 25 March 2009 (UTC)[reply]

Changes in a fault zone during deformation are sometimes referred to as dynamic metamorphism. Is that what you meant? I've never heard of 'short metamorphism', what context was this in? Mikenorton (talk) 22:57, 25 March 2009 (UTC)[reply]
Sorry, by short, I meant shock. Grsz11 23:51, 25 March 2009 (UTC)[reply]
'Shock' metamorphism is described as impact metamorphism in our article, a section that is a lot less than comprehensive (there's also impactite but that's only a stub). Basically it refers to the effects of very high strain-rate events such as you would get during a meteorite impact or a large volcanic explosion. This causes characteristic effects in some minerals, such as deformation lamellae (microfractures along which local melting has sometimes occurred) in quartz and locally wholesale melting of the rock, forming suevite [8], or local high slip-rate faulting causing melting of the fault walls forming pseudotachylite. Mikenorton (talk) 08:57, 26 March 2009 (UTC)[reply]
This link [9] is to a page on shock metamorphism that looks pretty comprehensive. Mikenorton (talk) 10:14, 26 March 2009 (UTC)[reply]
We now have an article on Shock metamorphism. Mikenorton (talk) 18:13, 27 March 2009 (UTC)[reply]

Earth Warming... not!

I've heard data saying that the Earth was cooling, not warming, despite global warming. If the Earth is showing symptoms of global warming, why do they say the planet is cooling?--24.4.54.96 (talk) 23:40, 25 March 2009 (UTC)[reply]

In fact, the Earth is warming according to the Temperature record. --TeaDrinker (talk) 01:07, 26 March 2009 (UTC)[reply]
It depends on your time scale, of course. We are considerably cooler than say the Jurassic Period, so if you used those two data points, you could say we are cooling off... Remember, it's all in how you organize your data... But, if you want to look in the recent past, say the last few hundred years, we are warming some... --Jayron32.talk.contribs 01:38, 26 March 2009 (UTC)[reply]
Perhaps you should read the Global cooling article? I think Global Cooling hypotheses were popular in the 1970s and 1980s; since then, there have been improvements in the understanding of the climate, as well as better computer modeling simulations. -- JSBillings 02:04, 26 March 2009 (UTC)[reply]
Also note that some areas may cool whilst others warm. - Akamad (talk) 02:23, 26 March 2009 (UTC)[reply]
If the earth is cooling - it's doing it on geological time-scales. Our behavior is warming it up on human time-scales. So, it's possible that what will happen is that we heat the planet up by 10 degrees over the next 150 years - and half a million years later, global cooling will erase that gain. But it's too far off to really affect the answer. The world was warmer still at some times in the dim and distant past - and it might be cooler again in the distant future - but our problem is with NOW - the next generation of humans. We can clearly see a trend - it's upwards - and the consequences are serious indeed. It's not rocket science. Follow along with these calculations:
  • According to Earth#Hydrosphere, our oceans contain 1.4×10⁹ km³ of water.
  • According to Coefficient of thermal expansion, water expands in volume at a little over 200 parts per million for every degree centigrade that the temperature rises.
  • So - when the temperature goes up an average of 1 degree centigrade across the globe, we gain 200×10⁻⁶ × 1.4×10⁹ cubic kilometers of water. That's 280,000 cubic kilometers of water.... 280,000,000,000,000 cubic meters.
  • According to Earth, our oceans cover about 360,000,000,000,000 square meters of the earth's surface.
  • Which means that all of those zeroes cancel out and our oceans get 280/360 = 0.78 meters deeper every time we warm up the planet by just one degree.
  • According to Effects of global warming - we've already seen a 20cm rise in global ocean levels since the 1920's from a third of a degree global temperature rise. This fits perfectly with the numbers I've just calculated.
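For anyone who wants to re-run that arithmetic, here it is as a short Python sketch (same rough inputs as the list above; as a reply below points out, it assumes the entire ocean column warms uniformly, which overstates the thermal-expansion term):

 OCEAN_VOLUME_KM3 = 1.4e9   # from Earth#Hydrosphere
 OCEAN_AREA_M2 = 3.6e14     # ocean surface area, m^2
 BETA = 2.0e-4              # thermal expansion of water, per deg C (rough)
 
 for dT in (0.33, 1.0, 2.0):                           # warming in deg C
     extra_m3 = BETA * dT * OCEAN_VOLUME_KM3 * 1e9     # km^3 -> m^3
     print(f"{dT:4.2f} deg C: ~{extra_m3 / OCEAN_AREA_M2:.2f} m of sea-level rise")

A third of a degree gives roughly a quarter of a metre here - the same ballpark as the observed 20 cm rise mentioned above.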
Now - take a look at the graph to the right here. It shows the best estimates of temperature rise from the eight leading climatological institutes. They certainly disagree - but you've gotta admit we're going to see a degree or two of rise in our lifetimes - two to five degrees in our kid's lifetimes. Note that the graphs aren't levelling out - in most of them, the curve is getting steeper and steeper.
So we should expect to see several meters of ocean level rise. Because most decent farmland is in low-lying river basins - we'll find that at least 150 million people will lose their livelihood as a result of a ONE meter sea level rise. We're certainly going to get a lot more than that.
But this is forgetting melting ice and all of that stuff. The melting of ice in Greenland alone is responsible for 260 cubic kilometers of water being dumped into the oceans every year...factoring that in - we're getting oceans that are going to be several meters deeper in the immediate future - and between 7 and 20 meters deeper in 100 years. Imagine yourself at your favorite seaside resort - now imagine the water at high tide being SEVENTY FEET deeper than it is right now. Do the same thing at any place with a big river flowing through it. London, New York, LA - completely gone.
Denying what's happening right under our noses has gone beyond mere healthy skepticism. We're getting into the realms of obstructing the survival of modern civilisation.
SteveBaker (talk) 03:26, 26 March 2009 (UTC)[reply]
Did you just make that up about rivers being similarly affected? I would imagine that inland rivers would get lower as their snow & ice sources get smaller. --Sean 12:33, 26 March 2009 (UTC)[reply]
Well, eventually...but I'm not talking about mountain streams here - I'm talking about the close-to-sea-level river deltas. These were ideal places to start major cities - and they are also big-time sources of agricultural land...and when the ocean levels rise - they vanish. SteveBaker (talk) 19:55, 26 March 2009 (UTC)[reply]
"London, NewYork, LA" don't really have "inland" rivers. I'm going to go out on a limb and say that the rivers in those cities are probably very much affected by sea level. APL (talk) 12:46, 26 March 2009 (UTC)[reply]
At the risk of sounding like a denialist, do those temperature scales from 1900 to present account for the urban heat island effect? In other words, areas with thermometers that were rural in 1900 may now be urban today, and would gain heat as a result. 65.121.141.34 (talk) 13:25, 26 March 2009 (UTC)[reply]
I think those are average temperatures over the entire world. The urban heat island effect doesn't do much to average temperatures, just the ones in urban areas. --Tango (talk) 15:59, 26 March 2009 (UTC)[reply]
I honestly think the climatologists would have thought of that! If just one of those institutes was claiming this kind of rise and the others were not - then we could wonder whether they had made such a major boo-boo. But this goes well beyond that. They ALL pretty much agree - and they aren't measuring this with thermometers hung out of their office windows...we're talking ice-cores in the antarctic, tree rings, satellite thermal imagery...this is a pretty seriously researched topic. SteveBaker (talk) 19:55, 26 March 2009 (UTC)[reply]
One other note on the expansion of water part. The math is wrong in that it assumes that the oceans will warm evenly. In fact, only the top layer will warm significantly. At 10,000 ft deep the water will still be about freezing. 65.121.141.34 (talk) 13:27, 26 March 2009 (UTC)[reply]
True. Most of the rise in sea levels is from melting polar ice. I think the accuracy of Steve's figures is a coincidence! --Tango (talk) 15:59, 26 March 2009 (UTC)[reply]
But just the ice from Antarctica, correct? Most (though not all) of the Arctic ice is floating, and Archimedes says that the water displaced by that ice must be equal to the mass of the ice itself. Matt Deres (talk) 14:08, 29 March 2009 (UTC)[reply]
I believe you're partially wrong [10] [11] [12] [13] and Ice shelf#Ice shelf disruption Nil Einne (talk) 14:20, 29 March 2009 (UTC)[reply]
Damn facts getting in the way of things... Matt Deres (talk) 21:48, 29 March 2009 (UTC)[reply]


I believe those predictions are made assuming we continue pumping out CO2 at ever-increasing rates, as we've been doing previously. If we do cut back on emissions, we can significantly reduce (but certainly not eliminate) the risks. So far, that isn't happening, though - until China starts taking steps to rein in its increasing emissions, there isn't a great deal the rest of the world can do (every little helps, of course). --Tango (talk) 15:59, 26 March 2009 (UTC)[reply]
That's an odd claim. China produces about a quarter of the world's CO2 emissions. That leaves plenty for the rest of the world to do without China onside. Algebraist 19:23, 26 March 2009 (UTC)[reply]
It's not absolute amounts, it's increases/reductions. China is increasing its emissions at a rate that more than cancels out everyone else's reductions (I think - it's been a while since I examined the numbers). Also, China hasn't picked all the low-hanging fruit that everyone else has, so they could make significant reductions if they tried (or, at least, significantly slow down their increases - they couldn't actually reduce without dramatically slowing their economic growth, which they won't do, and it's a bit much for other countries to ask them to). --Tango (talk) 22:23, 26 March 2009 (UTC)[reply]
No, but they are also accelerating their CO2 production far more quickly than Europe or even the US. 65.121.141.34 (talk) 20:13, 26 March 2009 (UTC)[reply]
The fact is, if the US and Europe don't take the lead, nobody will. In any case, per person the US's CO2 production is an order of magnitude more than China's. If the US isn't willing to cut back, why would China? In other areas of strategic importance (say, nuclear weapons stockpiles), when the US (which has a lot more) refuses to make reasonable and appropriate reductions, nobody else (like China, Russia, etc.) has ever felt the need to do it instead. When the US does make such reductions, it gains moral and political leverage (and leverage with other nations, which can apply additional weight) when it makes requests of others. "China isn't doing it, so why should we?" is a pretty silly argument - one that guarantees that no one will reduce if everyone follows it. --140.247.249.53 (talk) 22:03, 26 March 2009 (UTC)[reply]
I never said others shouldn't do their bit anyway, I just said without China doing something it's not going to help much. --Tango (talk) 22:23, 26 March 2009 (UTC)[reply]
Well, I think that point is somewhat in dispute, since 1) if every single other country stopped emitting CO2 entirely, it would have a significant effect in itself for a long time; and 2) given 1, China's emissions would go down too anyway, since nobody would be buying all those goods coming out of China - the rest of the world would have massive population crashes. Of course, 1 is not a sensible suggestion, but it doesn't change the fact Nil Einne (talk) 14:25, 29 March 2009 (UTC)[reply]

The calculation about the ocean water expanding if the global temperature rose makes the unsupported assumption that the water would have a constant quantity (or mass). Higher surface temperature would mean more evaporation, putting more water in the atmosphere. On the other hand, melting ice (at least the ice on land) would increase the mass of water in the ocean, but would slightly lower its density by diluting the salt. How would global precipitation be affected, and how would underground aquifers be affected, as people pump the water table down lower and lower? We have also seen the water level in lakes drop dramatically due to lower rain and higher usage in some regions. How about biomass: desert versus jungle/forest/cropland, where some water is bound up in plants and soil. There are several factors which would increase and several which would decrease the mass of water in the ocean. 19:09, 26 March 2009 (UTC)

You're kidding, right? You think that a 1 degree temperature rise will evaporate a quarter of a million cubic kilometers of water - and that won't have a drastic effect? Let me point out a little something...water vapor is a MUCH nastier greenhouse gas than CO2. If anything remotely close to that much water made it into the atmosphere - it would be approximately like living on the surface of Venus! But it's silly - even at 100% humidity - there wouldn't be "room" in the atmosphere for a quarter of a million cubic kilometers of liquid-water-turned-into-vapor. SteveBaker (talk) 19:50, 26 March 2009 (UTC)[reply]
The total water vapor content of the atmosphere amounts to about 3 cm of sea level equivalent. Dragons flight (talk) 22:14, 26 March 2009 (UTC)[reply]
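As a rough check of that figure (both inputs below are assumed round values, not measured ones):
```python
# Rough check of the ~3 cm "sea level equivalent" figure.
atmospheric_water_kg = 1.3e16   # assumed total water vapour in the atmosphere
ocean_area_m2 = 3.6e14          # assumed ocean surface area
sle_cm = atmospheric_water_kg / 1000.0 / ocean_area_m2 * 100   # kg -> m^3 -> depth
print(sle_cm, "cm of sea level equivalent")                    # ~3.6 cm
```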

The Venus-to-Earth atmosphere analogy is next to worthless, because we are not going to get anywhere near 96.5% CO2. I believe right now we are at 0.035% CO2, give or take. And at levels above 8% you have 10 minutes to live anyway, so temperature would not matter much. Plus there is a little thing called condensation and cloud formation, more prevalent with higher amounts of water vapor, which would reflect more sunlight (a cooling effect, I believe), right? 65.121.141.34 (talk) 20:10, 26 March 2009 (UTC)[reply]

I've heard this claim used countless times by AGW skeptics - that the Earth has been cooling since 1998. It's because 1998 and 2005 were the top two warmest years on record, 2005 being the warmest with 1998 a close second. Some say that 1998 was the warmest year, therefore global warming has stopped. That's not true. 1998 was in the midst of a strong El Niño, followed quickly by a strong La Niña. Most of the years in the past decade have been among the top ten warmest years on record, but there is always variability in the level of warming across the globe. Also, most climate models do not factor in the effects of positive feedbacks, especially recent ones such as the release of methane clathrates. The IPCC keeps the maximum expected sea level rise this century under one metre, but that is based on information that is several years old. Also, there are areas, such as Pine Island Bay - one of the fastest-warming areas in Antarctica outside the Antarctic Peninsula - from which ice sheets such as West Antarctica's could be destabilised. When sea level rise occurs around rivers, the ocean can flood farther inland, upstream along the river. Another possible scenario, if the sea level rise exceeds about 25 m, is that water could flow past Lake Manych-Gudilo. However, our having a climate like Venus is not very likely. ~AH1(TCU) 01:09, 28 March 2009 (UTC)[reply]


March 26

Gravitomagnetism

Why can't the gravitomagnetic equations be quantized as an approximation to GR if they are so similar to Maxwell's equations? —Preceding unsigned comment added by 76.67.79.89 (talk) 01:52, 26 March 2009 (UTC)[reply]

You seem to be misunderstanding the gravitomagnetic effect; the gravitomagnetic effect is a (small) effect of general relativity which is governed by equations very similar to Maxwell's for electromagnetism. The gravitomagnetic effect can't be quantised as an approximation to general relativity, because it is a direct consequence of relativity. The similarity of gravitomagnetism to electromagnetism is like the similarity of Newton's law of universal gravitation to Coulomb's law: the two forces obey similar equations, but you can't infer more about one by using the other... - Zephyris Talk 09:09, 26 March 2009 (UTC)[reply]
You're talking about the gravitational analogue of magnetism (which is what I think of too when I hear the word "gravitomagnetism"), but our article gravitomagnetism appears to be about a weak-field approximation to GR that looks like Maxwell's equations. I don't know whether that specifically can be quantized, but weak-field gravity can be—see gr-qc/9512024. -- BenRG (talk) 12:56, 26 March 2009 (UTC)[reply]

Is human food killing the seagulls?

Is it true that the seagulls living in urban areas that feed on discarded human junk food are starting to drop dead from heart disease or develop diabetes? I was told this today by a taxi driver and I don't know whether he was winding me up or not. It sounds slightly plausible to me, considering that some of these gulls seem to exist on chips, pizza, burgers, fried chicken and kebabs. --84.66.64.241 (talk) 01:57, 26 March 2009 (UTC)[reply]

I rather doubt they live long enough for those diseases to be a problem. But let's see what our resident expert has to say. (Are they even subject to diabetes?) Clarityfiend (talk) 05:42, 26 March 2009 (UTC)[reply]
Birds, like mammals, have a pancreas that produces insulin and glucagon. I suppose that anything with a pancreas may develop diabetes under certain environmental and/or genetic conditions; I can't see why not. OTOH, I've never seen avian diabetes studied or even mentioned. Feeding sugar or HFCS to a seagull is an exceedingly bad idea, at any rate. They don't normally put sugar on their fish or crab :) . Fried foods, as you can imagine, are also not a part of their natural diet. Heart problems stemming from overeating and lack of exercise are to be expected, too. Finally, plastic and foil wrappers are potentially a serious problem. AFAIR, seagulls can dispose of inadvertently swallowed pieces of mollusc or crab shells; but a swallowed piece of a nylon bag may well prove fatal. --Dr Dima (talk) 06:53, 26 March 2009 (UTC)[reply]
As with a lot of human diseases, animals tend not to suffer from them because they are so short-lived. It takes years of a terrible diet to develop these conditions - and seagulls simply don't live that long. SteveBaker (talk) 19:44, 26 March 2009 (UTC)[reply]

Not to be contrary, but at least 1 gull has lived to the age of 49... http://web1.audubon.org/waterbirds/species.php?speciesCode=hergul&tab=natHistory (talk) 20:01, 26 March 2009 (UTC)[reply]

FWIW, the larger gull species tend to be up there amongst the most long-lived of birds. 25-plus-y.o. Herring/Lesser BB Gulls are not uncommon, as I understand it. They don't even start breeding until they're at least four. Here's a couple of slightly-related links I just found (see here and here) - they don't specifically answer the original question, but seem to suggest that a diet high in fat and sugar is indeed having an effect of some kind on the gulls. --Kurt Shaped Box (talk) 22:17, 26 March 2009 (UTC)[reply]

Adverse drug reaction: Rabeprazole

Is there any evidence that long-term use of a proton pump inhibitor like rabeprazole is associated with increased risk of gastric carcinoma or gynecomastia??? —Preceding unsigned comment added by Samir doc (talkcontribs) 08:35, 26 March 2009 (UTC)[reply]

I found no evidence of these on a literature search. This study noted a number of side-effects, but not gynaecomastia or gastric cancer. Axl ¤ [Talk] 11:29, 26 March 2009 (UTC)[reply]

Identify this fish!

I took this photo last summer of small (~10cm long) fish trapped in a rockpool on Holy Island in North Wales. Does anyone have any idea which species they are? - Zephyris Talk 08:58, 26 March 2009 (UTC)[reply]

I think they are lesser sand eels. Axl ¤ [Talk] 11:14, 26 March 2009 (UTC)[reply]

Electric arcs: Possible terahertz sources?

It seems that lightning and other arcs are shown, sometimes unexpectedly, to produce electromagnetic radiation in virtually every part of the spectrum where detection attempts have been made: radio and microwave [14], infrared [15], visible (hence the visibility of lightning and sparks), ultraviolet [16], X-rays [17], and even gamma rays [18]. So why not terahertz? Since commonly discussed THz sources, even incoherent ones, are extremely expensive and high-tech, it seems like something as obscenely low-tech and low-cost as a source of high-voltage electric arcs deserves some attention. Wouldn't it be easy to try shooting high-voltage arcs through random gases at random pressures and observing in the THz region of the spectrum, just to see what happens? Wouldn't a THz arc-lamp/discharge-lamp be far cheaper than other sources?

69.140.12.180 (talk) 15:29, 26 March 2009 (UTC)Nightvid[reply]

I'm no expert - but isn't the problem getting enough power into those THz ranges to be useful? What you do by producing (essentially) radio-spectrum white noise is to put power into the spectrum in roughly the inverse of the frequency (or maybe the inverse of the square of the frequency...I forget). At any rate, that means you've got to put an insane amount of energy into your arc to get enough THz stuff to be useful. Lightning can do it because it discharges an ungodly amount of energy in a very short space of time...you can't sustain that kind of power for very long. SteveBaker (talk) 19:42, 26 March 2009 (UTC)[reply]
If that were so then lightning and other arcs wouldn't be effective in radiating visible light. I emphasize that as far as I know there is no part of the electromagnetic spectrum that arcs and lightning are terribly bad or inefficient at radiating in, and it would be very strange if unlike all other parts of the spectrum one got so little in the THz region for a reasonable input power.

69.140.12.180 (talk) 19:57, 26 March 2009 (UTC)Nightvid[reply]

I think you're missing what I think is at least part of Steve's point: if they are indeed radiating in all parts of the spectrum, they can't also be highly efficient at radiating in any arbitrarily chosen narrow range. With a fixed amount of energy, you can either radiate all of it at one frequency or spread it out thinly. So if you have a huge amount of energy over a broad spectrum, you get a decent amount in your band of interest, but that's not efficient, because so much of the energy is in other bands. DMacks (talk) 20:18, 26 March 2009 (UTC)[reply]
This has gone slightly in the wrong direction. The distribution of energy with respect to frequency is important, as nothing will emit radiation equally at all frequencies - the result would be infinite radiated power. That's why the notion of lightning radiating in "all other parts of the spectrum" is inherently flawed. This isn't even like blackbody radiation, with a smooth curve on the power-vs-frequency graph - the distribution of lightning's radiation is going to have peaks and valleys, corresponding to the different mechanisms that produce that radiation during the strike. For example, the visible light just under the 1,000 THz range is due to photons with an energy on the order of a few electron-volts, generated by molecular-level reactions as the oxygen and nitrogen in the atmosphere are strongly ionized by the strike. This does not necessarily imply that THz radiation will also be emitted strongly. There just happen to be various peaks associated with their respective generation mechanisms:
  • Just under 1,000 THz (1 PHz), i.e. visible light, corresponds to electrons hopping between orbits (flames, sparks, neon signs, etc.) as well as blackbody radiation at around a couple of thousand kelvins (incandescent lightbulbs).
  • Ultraviolet, at a few PHz, typically comes from higher-energy electron hopping (black light phosphors) and blackbody radiation at around 9,000 kelvins (electric arcs).
  • X-rays, up in the hundreds to thousands of PHz (10^5 to 10^6 THz, i.e. around 10^18 Hz), typically come from high-energy electrons knocking into heavy atoms, either knocking out inner-shell electrons (causing outer electrons to undergo a huge drop to fill the hole) or producing Bremsstrahlung by nearly hitting the nucleus.
  • Gamma rays, up in the 10^20 Hz range, typically come from state changes of millions of electron-volts, typically found in nuclear reactions.
  • Infrared, in the ten-to-hundred THz range, is abundant from simple blackbody radiation at various sane temperatures (like human body heat, a low-power emitter of 30 THz radiation), with the power decreasing as the temperature drops.
  • Lower frequencies run from microwaves, generated by ballistic electron motion within a small but macroscopic cavity, all the way down to radio waves that are easily generated with discrete electronics equipment.
At around 10-20 kelvins you can generate blackbody radiation with a peak in the THz range (interstellar dust does exactly this), but the power of this radiation is too low to be useful for anything. It's just a fact of life that there aren't any common mechanisms in nature that generate photons with the right wavelength (around a millimeter). That's why THz radiation is hard to generate. DeFaultRyan 23:09, 26 March 2009 (UTC)[reply]
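A quick way to check the blackbody side of this is Wien's displacement law in its frequency form (peak frequency ≈ 5.88×10^10 Hz/K × T); the temperatures below are illustrative examples:
```python
# Wien's displacement law, frequency form: nu_peak ~ 5.88e10 Hz/K * T.
B_PRIME = 5.88e10   # Hz per kelvin

for T in (17, 300, 2500, 9000):   # dust cloud, room temperature, filament, arc
    print(f"T = {T:5d} K -> blackbody peak near {B_PRIME * T / 1e12:7.1f} THz")
# Only bodies around 10-20 K peak near 1 THz; everything at everyday
# temperatures peaks tens to hundreds of THz higher, as described above.
```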
What you describe as "different mechanisms" of generation of radiation are really just phenomena which occur on different timescales - for instance, bremsstrahlung processes generally emit radiation at frequencies comparable to the inverse of the time it takes the electron to stop or be deflected - if it passes close to the nucleus and is going really fast, this time may only be around 10^-18 seconds, corresponding to an X-ray period. If it is farther from the nucleus and not so fast, it will be a longer time, such as 2×10^-15 s, and emit lower frequencies such as visible light. (And because by definition a plasma has free electrons, this emission must also include free-free radiation, not just transitions between quantized bound states or "hopping".) And the free electron motion over yet longer timescales emits radio waves and microwaves, as the chaotic nature of the process means current in the discharge flows erratically; electron motion changing on the scale of 1 ns would produce 1 GHz radiation. So although there are in a sense different mechanisms involved, the issue really amounts to the timescales of the motion changes and irregularities of the electrons in the discharge. To say that little THz radiation is emitted is to say that there are no significant features of the motion of electrons on the timescale of 10^-12 seconds. This seems questionable to me, because the discharge is chaotic and analogous to turbulence. Turbulence in a fluid produces sound waves (that is why jet airplanes are so loud - this happens in the turbulence of the exhaust jet), and analogously, the electrons constituting a discharge current are in "electromagnetic turbulence" / "electromagnetic turbulent flow" and radiate electromagnetically. But this would imply that the electron motion is highly irregular and spans many orders of magnitude in timescale, from those corresponding to interaction of electrons with atoms, with molecules, with groups of molecules, with micron-scale thermal fluctuations, with small filaments, with large filaments, and with macroscopic irregularities in the structure of the arc. But lightning and other sparks in some sense appear "fractal", meaning they have spatial structure at different scales, so it would be natural to expect the same of electron motion at different timescales, including 10^-12 seconds. If in air at 1 atmosphere it just so happens that there isn't much in the way of electron motion features at that timescale, then surely that could be changed by using a different gas, pressure, arc current density, electric field, and/or arc length. When you said "It's just a fact of life that there aren't any common mechanisms in nature that generate photons with the right wavelength (around a millimeter)", did you mean THz emission from arcs has already been sought but not found? We don't know until we try, because many radiations which have been discovered were described by the scientists as "unexpected", including X-rays from arcs in air at 1 atmosphere in the laboratory. In light of all this (no pun intended), how could one not justify an experiment wherein electric arcs are made in different gases at different pressures (say, from 10^-3 to 1, and up to 10^2 atmospheres), with THz detectors watching? (Or has this been done already?) 69.140.12.180 (talk) 15:30, 27 March 2009 (UTC)Nightvid[reply]

A quantum mechanical "proof" that any positive real number is zero

About a week ago, while browsing Wikipedia, I stumbled onto a (fallacious) proof that any positive real number is zero. It was quantum mechanical, and basically "proved" that the Planck constant equals zero. The resolution had something to do with bra-ket notation not working on a sphere, or with hiding a functional analysis fact from plain sight. It ended in words like: "Thus , an arbitrary positive real number, must be zero." The exact wording must have been different, as googling doesn't help. I really cannot remember anything else. Could anyone please point me to the Wiki article?  Pt (T) 22:59, 26 March 2009 (UTC)[reply]


March 27

Simple Motor -- Won't Work -- Please Help

I'm trying to make a simple motor to help my younger sister better understand circuits. It is composed of copper/metal wire, a "D" battery, and several strong magnets, and is supposed to produce an electric current strong enough to flip a copper wire hoop suspended between 2 paper clips. It's supposed to look something like this Video

The problem is my demo isn't working. Any help here? Is my battery too small? Not enough magnetic power? Please respond by 5:00am EST/-5 GMT. Thanks! Zidel333 (talk) 02:20, 27 March 2009 (UTC)[reply]

Is your wire insulated? Did you half-strip the wire as instructed? When you spin the copper loop manually, can you feel the tug of the magnets? Not sure people here can help you without more details of what you've done and what you've observed (especially with your tight schedule, which makes this sound so much like homework). --Scray (talk) 02:31, 27 March 2009 (UTC)[reply]
Not homework, I'm 20; it's more like we had a deadline by Friday because we have spent the past 6 days trying to figure out the *cursing* problem. I'll try your suggestions. Zidel333 (talk) 02:34, 27 March 2009 (UTC)[reply]
Hmmm - that's a kinda crappy way to make an electric motor - but I agree that the biggest source of error is that bit about 'half-stripping' one end of the wire. This makes an extremely crude 'commutator'. The deal is that if you just apply current through the wire, it'll spin through maybe a half turn and then stop because the magnetic force that pulled you around (say) clockwise - now wants to push you back anticlockwise. What you have to do is to cut the current (or better still, reverse it) for the second half of the rotation. This movie covers things a little better: http://www.youtube.com/watch?v=PSNgsluUfhc - more turns of the wire will be a big help, I don't think you need more battery power. SteveBaker (talk) 02:50, 27 March 2009 (UTC)[reply]
Steve, that video you linked was very helpful. I'm going to try to construct his version instead. I'll keep you updated if it worked or not. :) Zidel333 (talk) 02:56, 27 March 2009 (UTC)[reply]
How the insulation is stripped is critical. One end may be stripped in its entirety. The other end should be stripped such that when the center axis of the coil (rotor) is perpendicular to the magnet (stator), as shown in the first video diagram, the circuit is closed, and when it is rotated 180 degrees it is open. For a bottom-magnet design, you should hold the coil off the edge of the table so that it is vertical (not flat) when you remove the insulation. If you lay it flat and sand it, the system will be 90 degrees out of phase and will not work. -- Tcncv (talk) 03:10, 27 March 2009 (UTC)[reply]
It should be balanced as best you can, and the magnet should be strong and as close as possible; if you have magnets above and below, that would help. If you use 2 magnets, they should have the N pole of one facing the S pole of the other, above and below the rotating coil. Be especially careful to follow Tcncv's instruction about which half of the insulation is stripped relative to the coil orientation. I have made these and seen them spin. Edison (talk) 03:52, 27 March 2009 (UTC)[reply]
You do have to start it spinning by hand - it needs a certain amount of momentum to keep it spinning while the insulated half of the half-stripped wire is in contact with the frame. But I can't over-emphasise the importance of following that "half-stripping" advice in the video to the letter! Your description of it just 'wobbling' screams "You messed up the half-stripped-wire part"!! The electricity has to be disconnected from the coil for half of its rotation...and the correct half with respect to the orientation of the coil - or what you'll get is...well...exactly what you are getting! This approach to making a commutator is really kinda flakey...I've seen better designs for home-built motors. SteveBaker (talk) 04:19, 27 March 2009 (UTC)[reply]
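The effect of the half-stripped "commutator" can be shown with a toy simulation. The torque constant, friction and unit moment of inertia below are arbitrary assumed values, so this is a sketch of the principle, not a model of a real coil:
```python
import math

def spin(commutated, steps=400_000, dt=1e-5):
    """Coil in a uniform field: torque = k*sin(theta) while current flows
    (unit moment of inertia assumed). The 'commutator' cuts the current for
    the half-turn where the torque would oppose the rotation."""
    theta, omega = 0.1, 0.0        # small initial nudge (rad, rad/s)
    k, drag = 50.0, 0.5            # assumed torque constant and friction
    for _ in range(steps):
        current_on = math.sin(theta) > 0.0 if commutated else True
        torque = k * math.sin(theta) if current_on else 0.0
        omega += (torque - drag * omega) * dt
        theta += omega * dt        # theta left unwrapped to count turns
    return theta / (2.0 * math.pi) # net revolutions

print("half-stripped (commutated):", spin(True))    # many net turns: a motor
print("fully stripped (always on):", spin(False))   # rocks around half a turn
```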

Radio source SHGb02+14a

I'm a little confused by this New Scientist article, which states that the 8-37 Hz/s drift of this radio source corresponds to a speed much greater than that of Earth's rotation. 8-37 Hz/s of drift in a 1420 MHz signal should correspond to a speed of (8/(1420×10^6)) × 300,000,000 = 1.69 m/s, correct? Perhaps Hz/s should really be kHz/s? There's frustratingly little information on the Internet about this signal, considering it's the best candidate for an extraterrestrial transmission SETI@Home has produced.

Two more questions: (1) Is it possible for a terrestrial planet that rotates 40 times faster than Earth to form? (2) For how long did the signal last each time it was observed? I can't believe New Scientist doesn't give this information. --Bowlhover (talk) 03:40, 27 March 2009 (UTC)[reply]

I'm somewhat doubtful that the signal is still of great interest given that the article was published in 2004 and it's now 2009. Also see Radio source SHGb02+14a. In terms of the New Scientist, while I read it regularly and find it of interest, they are sometimes guilty of being a bit sloppy in the articles and also of being sensationalistic or publishing minority viewpoints or ideas without making it clear they are such. Nil Einne (talk) 05:55, 27 March 2009 (UTC)[reply]
Check the units: (8 Hz/s)/(1420×10^6 Hz) × (300,000,000 m/s) = 1.7 m/s^2. That's an acceleration, not a speed, and it's two orders of magnitude higher than Earth's centripetal acceleration at the equator. Dauto (talk) 06:46, 27 March 2009 (UTC)[reply]
Oops, that was a pretty silly mistake. This raises a new question, though: would radio transmissions from Earth really drift at 1.5 Hz/s as the article claims? That would imply an acceleration of 0.32 m/s^2, still much greater than the actual equatorial centripetal acceleration of 0.034 m/s^2.
Nil Einne: You're right that the signal may not be of great interest among the general public, but I've been an amateur astronomer and SETI@Home participant for years, so I'd like to find more information on this signal candidate. Also, thanks for the tip about New Scientist. I'll remember to check other sources if it makes far-fetched claims. --Bowlhover (talk) 07:04, 27 March 2009 (UTC)[reply]
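In code, the unit check Dauto describes looks like this (3×10^8 m/s is used as a rounded speed of light):
```python
import math

C = 3.0e8       # m/s, rounded speed of light
F0 = 1420e6     # Hz, the hydrogen-line carrier

for drift in (8.0, 37.0, 1.5):                  # Hz/s figures from this thread
    print(f"{drift:5.1f} Hz/s -> {drift / F0 * C:.2f} m/s^2")

# Earth's equatorial centripetal acceleration, for comparison:
omega = 2.0 * math.pi / 86164.0                 # rad/s, one sidereal day
print("Earth's equator:", round(omega**2 * 6.378e6, 3), "m/s^2")   # ~0.034
```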

ecg explanation

This question appears to be a request for medical advice, and this issue is being discussed here. It is best to discuss medical conditions with a trained and licensed professional. --Scray (talk) 11:39, 27 March 2009 (UTC)[reply]

You may find that these articles provide helpful background information for discussing the matter with your doctor: pre-excitation syndrome, electrical conduction system of the heart. – 74  16:21, 27 March 2009 (UTC)[reply]

Historical composition of shaving creams?

The rather short Wiki article on shaving cream only mentions one composition, from ancient Sumer. I was watching the smoldering Seth Bullock lather up on Deadwood last night, set in and around the 1880s, and I got to wondering what *exactly* he was lathering up with... figuring that it was unlikely he was even using whatever the period-standard material was, given the remoteness of Deadwood and the corresponding high cost of city luxuries...

So, shaving cream through the ages... what say you? —Preceding unsigned comment added by 61.189.63.137 (talk) 09:42, 27 March 2009 (UTC)[reply]

Soap? --TammyMoet (talk) 09:50, 27 March 2009 (UTC)[reply]
Specifically, shaving soap, as mentioned in shaving scuttle and shaving brush, and available at places like this. jeffjon (talk) 13:07, 27 March 2009 (UTC)[reply]
Modern shaving cream is essentially aerated liquid soap. You can create roughly the same substance by rubbing your favorite bar soap in your hands with some water. In fact, before canned shaving cream, this was exactly how it was done for thousands of years... --Jayron32.talk.contribs 13:28, 27 March 2009 (UTC)[reply]

Speed of light and time

As I understand it, if I got in a spaceship and travelled at the speed of light for a year, then turned around and came back, while two years would have passed for me, a different amount of time would have passed for everyone else on Earth. How much time would have passed?

Similarly, while I know that light takes 8.33 minutes to get from the Sun to the Earth, how long does it take from the light's POV? I mean, if I was to slap on my welding goggles and stare at the sun, at exactly the same time a sun-gnome popped out of the Sun's surface and sped towards me, I would wait 8.3 minutes, but what would the time be on the sun-gnome's stopwatch when he careened out of the sky and smacked me in the face?

Thanks

FreeMorpheme (talk) 12:58, 27 March 2009 (UTC)[reply]

Well, the answer is a bit weird, but... At the speed of light, there is no time. Time stops. The sun-gnome's watch wouldn't have moved a nanosecond. I'm not sure about the first question ("Captain, relativity gives me a headache!"). Not that it's possible to travel at the speed of light unless you're a massless particle. —Preceding unsigned comment added by 83.253.252.234 (talk) 13:17, 27 March 2009 (UTC)[reply]
Indeed, travel AT the speed of light is impossible for any particle with mass. When you work out the mathematics, all sorts of weird stuff happens:
  • You arrive anywhere instantaneously (on your clock), while everyone else sees you moving at the speed of light (on theirs)
  • You become infinitely heavy. Not just really heavy, but your mass becomes infinite.
  • You become infinitely thin in the direction of travel, and infinitely wide in the orthogonal directions. Not just kinda thin in one direction, and kinda fat in the others. In other words, you become as wide as the universe, but you become two-dimensional.
Based mostly on 2 and 3, it becomes readily apparent that some silly stuff is going on at the speed of light. It's not just "we can't travel that fast because we don't yet have the technology"; it's that "we can't travel that fast because of the fundamental way the universe is put together". --Jayron32.talk.contribs 13:26, 27 March 2009 (UTC)[reply]
Shorter answer: you can't travel at the speed of light, nor can a sun-gnome with a stopwatch. The principle of relativity ensures that motion at one speed is equivalent to motion at another if the speeds are less than c, but motion at c follows different rules. The physical laws that make your existence possible don't operate at light speed.
"You become infinitely heavy" is a reference to relativistic mass, which is not usually taught any more because it's not a very useful concept. It would be more useful if fast-moving objects behaved somewhat like stationary objects with an increased mass, but they don't. You can't plug the relativistic mass into F=ma, for example. Nor does a fast-moving object collapse into a black hole, even though its increased "mass" ought to put it inside its own Schwarzschild radius. It's pretty hard to think of situations where the relativistic mass does make sense. The only one I can think of is as the m in E=mc².
"You become infinitely thin" is a reference to Lorentz contraction. "Infinitely wide" is incorrect, though. Lorentz transformations preserve the volume of your world-tube in spacetime, but not the volume of spatial slices through it. Think of slicing a dowel. If you cut it at right angles you get a circular cross section. If you cut it diagonally you get an elliptical cross section that's wider in one direction by but not narrower in the other direction. That's pretty much what Lorentz contraction is, except that the plus inside the square root becomes a minus. -- BenRG (talk) 14:15, 27 March 2009 (UTC)[reply]
Your first question has it backwards. Two years pass on Earth. The time that passes for you is two years times √(1 − v²/c²), where v is your speed. In the second question, beware of saying that things happen "at exactly the same time" in different places. That's a reference-frame-dependent concept; it's no different from saying that two things happen at exactly the same x-coordinate. -- BenRG (talk) 14:15, 27 March 2009 (UTC)[reply]

Blimey. OK, so what if I were to travel at 99% of the speed of light, is that possible? Would there be time-jiggering effects then? And are you saying that it is impossible to say that two things happen at the same time in two different places in the universe? FreeMorpheme (talk) 16:32, 27 March 2009 (UTC)[reply]

Yes, you can (theoretically) achieve any speed below c, whether it is 99% or 99.999% (although at those speeds other real-world effects may become annoying). Unless my math is off, if you fly around at 99% of c, the γ-factor is close to 7, i.e. your 2 years are seen by an outside observer as a little bit more than 14 years. And yes, it is impossible to say that two things in different places happen at the same absolute time - the order of events depends on the movement of the observer. --Stephan Schulz (talk) 17:00, 27 March 2009 (UTC)[reply]

Thanks. So am I correct in thinking that if my sun-gnome were travelling at 99% of c, then he would arrive at approx (8.33/7) = 1.19 minutes relative to himself, but I would wait 8.33 minutes for him? And if he were a massless object travelling at the speed of light then all bets are off, as he would arrive instantly from his POV (and possibly at the same width as the universe)?

Does this mean that the difference in speed between 99.99999% of c and c itself is infinite? FreeMorpheme (talk) 17:24, 27 March 2009 (UTC)[reply]

Yes, 1.19 minutes sounds about right (he sees the distance from the Sun to the Earth as having been contracted, so if he measures his speed it is still less than c). The difference in speed between 99.99999% of c and c is what you would expect it to be, 0.000001% of c. The difference in energy, however, is infinite. --Tango (talk) 18:45, 27 March 2009 (UTC)[reply]
Indeed, this is the problem - the difference in our answer for (say) 99.<twelve nines>% and 99.<thirteen nines>% is enormous - so just knowing it's "a bit less than c" doesn't help one bit in getting an answer. Since you can't travel at literally the speed of light - there is no good answer to the OP's question. SteveBaker (talk) 21:01, 27 March 2009 (UTC)[reply]
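Here's a little Python to make the point about the nines concrete - the Lorentz factor γ = 1/√(1 − (v/c)²) diverges as v approaches c:
```python
import math

def gamma(beta):                   # beta = v/c
    return 1.0 / math.sqrt(1.0 - beta * beta)

print(gamma(0.99))                 # ~7.09: 2 traveller-years ~ 14 Earth-years
print(8.33 / gamma(0.99))          # roughly the 1.19 min quoted above for
                                   # the 0.99c sun-gnome's stopwatch

for nines in (2, 6, 9, 12, 13):
    beta = 1.0 - 10.0 ** (-nines)  # 99%, 99.9999%, ... of c
    print(f"v/c = {beta:.15f} -> gamma ~ {gamma(beta):.3g}")
```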

Question: Is it sensible to talk about two things in different places happening at the same time if those two places are at rest relative to each other (like the two ends of a space craft) and not accelerating? Zain Ebrahim (talk) 19:05, 27 March 2009 (UTC)[reply]

You can always define a standard of simultaneity if you want to. UTC defines a reference frame, for example, and you can take two events to be simultaneous if they happen at the same time UTC, regardless of the motion of any people or objects that are involved. On the other hand, distant simultaneity doesn't figure into the laws of physics. All influences propagate at a finite speed, so it just doesn't matter what's happening "right now" over there, it only matters what will be happening over there a bit later. It's kind of like the US before the transcontinental railroad, when every town had its own time standard and it didn't much matter because getting from one town to another took so long. -- BenRG (talk) 21:31, 27 March 2009 (UTC)[reply]
For a given frame of reference, such as for a rider on the spacecraft, the concept of simultaneous events at each end of that spacecraft is well defined. You can measure the distance between yourself and each end of the spacecraft, adjust for any light-speed propagation delays in the observed events, and be confident in your results. That said, someone in a different frame of reference, such as on the planet you are approaching, may just as confidently observe that the two events occurred at different times. This is a hard concept to comprehend. It is not just an illusion that observers in different reference frames reach different conclusions; the Lorentz transformation causes the actual geometry of space and time to be different in the two reference frames, so both observers are quite correct, even though they reached different conclusions. I went years without really understanding what was going on, until one day I watched episode 42 of The Mechanical Universe... And Beyond and the illustrations used finally allowed me to put the puzzle together. -- Tcncv (talk) 02:44, 28 March 2009 (UTC)[reply]
I try to avoid this conflation of observers (i.e. people making measurements) with reference frames (i.e. systems of coordinates). Einstein did not conflate the two in his original paper; that was done by later writers. There's a little more about this at observer (special relativity), particularly the "history" section. It is not true that people in different states of motion disagree about their measurements. People who choose different systems of coordinates disagree about their measurements, but this is no more profound than a disagreement between Celsius and Fahrenheit. Associating every person with a reference frame is... well, I can't call it wrong, since you can solve problems that way, but it's unnecessarily constraining and it isn't motivated by any property of the real world. Consider the analogous proposal that every person has their own Cartesian coordinate system with the positive z axis pointing forward, the positive x axis pointing to the right and the positive y axis pointing up. Then different "observers", i.e. people, disagree on fundamental properties such as "width", which I'll define as the difference between the minimal and maximal x coordinate of an object, and "elevation", which is the value of the y coordinate. Someone who's looking upward, relative to you, will say that certain things have different elevations which, to you, have the same elevation. That's exactly as profound as the relativity of simultaneity. If you think the relativity of simultaneity is more profound than that, you don't understand it! That's not to say that Einstein's 1905 paper wasn't profound. But what was new about special relativity was that it got rid of the idea of time as a universal parameter. It didn't add a bunch of incompatible observer-dependent universal time parameters. That makes as much sense as having a plane-of-constant-y associated with every person. You can do it, but the universe is indifferent to planes of constant y and it's equally indifferent to planes of constant t. -- BenRG (talk) 18:33, 28 March 2009 (UTC)[reply]
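The coordinate-change point can be made concrete with a two-line Lorentz transformation (units with c = 1; the events and boost speed below are arbitrary examples):
```python
import math

def boost(t, x, v):
    """Lorentz transformation to a frame moving at speed v (c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

# Two events with the same t, one light-second apart:
for event in [(0.0, 0.0), (0.0, 1.0)]:
    print(boost(*event, 0.5))
# Output: (0.0, 0.0) and (-0.577..., 1.154...) - simultaneous in one frame,
# about 0.58 s apart in the other, just like two equal elevations
# disagreeing after a change of axes.
```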

As others have pointed out, simply saying that an object is moving close to the speed of light isn't very informative. See rapidity for a more insightful way to describe how fast something is moving. With respect to the question about whether it's sensible to describe two events as being simultaneous, the answer is yes, but different observers will have different understandings (both correct) about what is simultaneous and what isn't. Dauto (talk) 05:36, 28 March 2009 (UTC)[reply]

Muscle Tearing from 1st Time Strength Training

Take someone who is out of shape and hasn't exercised in a very long while. Then that person decides to join a fitness club. The personal trainer makes this person do numerous strength training exercises the first day there, which results in muscle pain the next day. Isn't muscle tearing adverse to the health of the muscles? Would it not be healthier to gradually build up these muscles instead of tearing them the first day? When I say tearing, I don't mean the kind that ends in the hospital. I mean the kind of minuscule muscle tearing that happens when you do strength training exercises. --Emyn ned (talk) 13:12, 27 March 2009 (UTC)[reply]

See Microtrauma. 76.97.245.5 (talk) 13:23, 27 March 2009 (UTC)[reply]

Ice cores

Can an ice core tell you the average temperature in 1900? If so, how does it work and how accurate is it? 65.121.141.34 (talk) 13:35, 27 March 2009 (UTC)[reply]

According to de:Eisbohrkern and Oxygen Isotopes (de), yes it can. Since 18O and 16O have slightly different evaporation rates, one can measure the average temperature of a given time by measuring their respective abundances in a given sample. Accuracy therefore depends on the amount of ice one has at one's disposal for testing. The oldest ice core found is, according to the articles, approximately 900,000 years old. The respective English articles are Oxygen and Ice core --91.6.18.23 (talk) 13:56, 27 March 2009 (UTC)I changed your link to the German page, it didn't work the way you had it. Hope you won't mind.[reply]
There's also Ice core although that only hints at it. Follow the links from there for more information.76.97.245.5 (talk) 14:20, 27 March 2009 (UTC)[reply]
No, ice cores cannot be used to determine the temperature in 1900 with any degree of reliability. Ice cores are used in paleoclimatology to determine temperatures using the 18O/16O ratio in trapped air bubbles. But the air in ice is not sealed in instantly. Instead, as the ice is compressed more and more by new layers above, it becomes denser and denser, and air exchange with the environment becomes less and less. IIRC, the seal is nearly complete after about 30-50 years, depending on exact circumstances. So any air bubble will have a mixture of air from about 30-50 years (more from the early years, less from the later years). This allows a temporal resolution on about the same time scale - you can make statements about climate, because 30 years is about the shortest time frame we consider significant for climate (as opposed to weather). But you cannot usefully determine the temperature during a single year. --Stephan Schulz (talk) 17:11, 27 March 2009 (UTC)[reply]
You're mistaken. Though one can measure the δ18O of air, most ice-core temperature work refers to the δ18O of ice, where the O is from H2O. Dragons flight (talk) 20:43, 27 March 2009 (UTC)[reply]
Aha. I did not know that, thanks! So you can get the temperature with a reasonably high resolution, but the trace gases with a much lower one? --Stephan Schulz (talk) 22:32, 27 March 2009 (UTC)[reply]
Yes, though in practice the time and labor required for gas work tend to be a bigger factor in influencing the results. It is not uncommon to see only one gas sample for every few meters of core, which is even worse for resolution than the gas mixing effect you refer to. By contrast, the modern temperature systems can get ~20 measurements per meter. Dragons flight (talk) 03:22, 30 March 2009 (UTC)[reply]
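For the curious, the conversion from a measured δ18O value to a temperature looks roughly like the sketch below; the linear calibration (slope and reference point) is an assumed, site-dependent example, not a universal constant.
```python
# Sketch: ice-core delta-18O to temperature via an assumed linear calibration.
SLOPE = 0.67          # permil per deg C - assumed, site-dependent value
REF_DELTA = -35.0     # permil - assumed reference delta-18O
REF_TEMP = -31.0      # deg C  - assumed reference temperature

def temp_from_delta18o(delta):
    return REF_TEMP + (delta - REF_DELTA) / SLOPE

for d in (-36.0, -35.0, -34.0):
    print(f"{d} permil -> {temp_from_delta18o(d):.1f} C")
```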

Does all life share some genetic code?

Humans share genetic code with many life-forms. That is very obvious for creatures with skeletons, but it is not obvious to me that we share genetic code with a tree, an insect or a tube worm living on an oceanic thermal vent. Do humans share genetic code with a fungus? A bacterium? A virus? A prion (which may not be considered a "life-form")? If humans share genetic code with all life on earth, could it be assumed that there was a single "original life" from which all life grew and from which all life-forms evolved? Ddcarroll (talk) 16:10, 27 March 2009 (UTC)[reply]

Yes to everything except prions, which don't have any DNA or RNA. StuRat (talk) 16:36, 27 March 2009 (UTC)[reply]
I agree that all currently existing cells have a shared set of base genetic machinery. However, I'm not sure all viruses necessarily have genes in common with humans. Viruses only need a limited genome because they co-opt the cellular machinery of other organisms. So you might find viruses on a very distant branch of the tree of life (bacteriophages, perhaps?) where their genome had no overlap with humans in particular. Dragons flight (talk) 21:03, 27 March 2009 (UTC)[reply]
There are things, like prions and some computer programs, that make giving a definition of the word "life" difficult. For the things you describe, see common descent, which discusses the currently accepted dogma. --Sean 17:35, 27 March 2009 (UTC)[reply]
It's unclear whether we share common descent with prions - and it's also unclear whether we should classify them as "life" or "chemicals" (or indeed whether we should even make such distinctions!). However, excluding things with no DNA or RNA at all - there is a very clear set of common genes - a much bigger percentage than you might perhaps expect. This does indeed point towards a single "original" lifeform from which we're all descended. While it may not look like we have much in common with some bacterium or other - or with plantlife - there are a vast number of chemical pathways that we all share - and which represent a much bigger fraction of the genome than the genes for more obvious things like having a head, two arms and two legs. Things are a little more complicated though because it's clear that our genes have large 'insertions' from lesser life-forms that have somehow come to be wedged into our DNA. So we should perhaps strictly be considering all of life as an interlinked 'web' rather than the traditional tree-of-life. But this is a very small matter. The "big picture" is that we're all descended from a common ancestor and our DNA & RNA proves that. SteveBaker (talk) 20:53, 27 March 2009 (UTC)[reply]
Well, since all prions are just variant foldings of a host protein, and that protein is encoded in the host's genome, they certainly are descended in EXACTLY the same way as that protein in the host. Viruses, on the other hand, do not clearly share descent with their host, and their evolutionary path is not so clear (though as agents of horizontal gene transfer they often carry some host genetic material). Going back to the OP, all currently existing cells do not share exactly the same genetic code, though the similarities are greater than the differences and do suggest common descent. By necessity, viruses share the genetic code of the host cell. --Scray (talk) 02:40, 28 March 2009 (UTC)[reply]
Sure, obviously the genetics of all living things are not "identical" because if they were, we'd all look identical! But we have to emphasise that the biochemical mechanisms - and a large percentage of the genes - are so spectacularly similar that common descent is the only reasonable hypothesis. It's comforting to hear that Prions are really no exception to that. SteveBaker (talk) 12:19, 28 March 2009 (UTC)[reply]
The term "Genetic code" has a precise meaning, and I get the impression that some of the participants in this thread are using the term when discussing something different - the sharing of DNA sequences between organisms. From our article: The genetic code is the set of rules by which information encoded in genetic material (DNA or RNA sequences) is translated into proteins (amino acid sequences) by living cells. A computer analogy: the genetic code is the instruction set by which a cell's ribosomes, t-RNAs, and Aminoacyl tRNA synthetases (together, the CPU) translate genes (the program) into protein (the output). Scray's point above, is that the CPUs of different organisms may have slightly different instruction sets, not the obvious point that there are large differences in the programs (genes) that are used to build the organisms. --NorwegianBlue talk 20:37, 28 March 2009 (UTC)[reply]
Yes, that was the definition of "genetic code" I was using, and is why I excluded prions, as they contain neither DNA nor RNA. StuRat (talk) 23:03, 28 March 2009 (UTC)[reply]
I understand the distinction - but even in organisms with different genetic codes (different "CPU instruction sets" for the computer geeks amongst us) - the difference is only in the interpretation of a few codons - most of the code is read the same way - which (if you think about it) is a necessary consequence of the other part - which is that we do indeed share considerable chunks of genetic information (the software that's being run by the genetic-computer is identical over long stretches). This is not unlike the transition from 8080 computers to Z80 computers. A few instructions were added in the Z80's processor - this didn't hurt anything because software written for the 8080 didn't use those instruction codes. But one or two very obscure instructions changed meaning in very subtle ways. 99% of programs written for the 8080 ran just fine on the Z80 - but some would behave a little differently - or (mostly) just fail entirely and crash the computer. Similarly, organisms with different genetic code interpretations can still have genes in common - but either those genes don't happen to include the codons that have alternative 'meanings' - or perhaps the alteration doesn't affect the consequences of the gene being expressed - or perhaps that change in expression is actually beneficial. Of course in cases where changing the meaning of a particular codon results in "the program crashing" - evolutionary considerations will ensure that such lifeforms die off - so there are unlikely to be cases like that in nature - just as a typical Z80's disk drive would be unlikely to contain any 8080 programs that caused it to crash. SteveBaker (talk) 14:24, 29 March 2009 (UTC)[reply]
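To make the "same program, slightly different instruction set" analogy concrete, here are a few codons that the standard nuclear code and the vertebrate mitochondrial code read differently:
```python
# A few codons whose meaning differs between the standard genetic code
# and the vertebrate mitochondrial code.
standard = {"UGA": "Stop", "AGA": "Arg", "AGG": "Arg", "AUA": "Ile"}
vert_mito = {"UGA": "Trp", "AGA": "Stop", "AGG": "Stop", "AUA": "Met"}

for codon in standard:
    print(f"{codon}: standard = {standard[codon]:4s} mitochondrial = {vert_mito[codon]}")
# A gene that avoids these codons reads the same under both codes - the
# 8080/Z80 situation described above.
```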

Make time history from PSD?

Hi, I'm trying to write MATLAB code to generate a representative time history from a spectral density plot. I've found a few things that look useful on the internet, for example this one [19], but I don't really understand the steps involved. For example, in the link above, I don't understand what is meant by a "PSD that you have empirically as samples" - what is a sample referring to in this sense?

If anyone could explain in simpler terms that would be great.

Cheers for any help, LHMike (talk) 17:07, 27 March 2009 (UTC)[reply]

It sounds like you have been given the power spectral density, discretely sampled in frequency (see discrete fourier transform). It sounds like an inverse fourier transform should convert that PSD into a representative time-series. You may need to normalize amplitudes based on your detailed problem specifications. Nimur (talk) 09:00, 28 March 2009 (UTC)[reply]
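To expand on that: "a PSD that you have empirically as samples" just means the PSD is given as discrete values S(f_k) at a set of frequencies, rather than as a formula. One common recipe - a sketch of the general random-phase approach, not necessarily the exact method on the linked page, and it translates directly to MATLAB - is to give each frequency bin an amplitude set by the PSD and a random phase, then sum the cosines:
```python
import numpy as np

def time_history_from_psd(freqs, psd, t, seed=0):
    """freqs, psd: one-sided PSD samples (Hz, units^2/Hz) on a uniform grid;
    t: time vector. Each bin contributes amplitude sqrt(2*S*df), so the
    variance of x matches the integral of the PSD."""
    rng = np.random.default_rng(seed)
    df = freqs[1] - freqs[0]
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    x = np.zeros_like(t)
    for f, s, p in zip(freqs, psd, phases):
        x += np.sqrt(2.0 * s * df) * np.cos(2.0 * np.pi * f * t + p)
    return x

freqs = np.linspace(1.0, 100.0, 100)     # the "samples": PSD known at these Hz
psd = np.ones_like(freqs)                # e.g. a flat PSD of 1 unit^2/Hz
t = np.arange(0.0, 2.0, 1e-3)            # 2 s at 1 kHz
x = time_history_from_psd(freqs, psd, t)
print(x.var(), "~", np.trapz(psd, freqs))  # variance ~ area under the PSD
```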

list of self HOMO-LUMO gaps (especially that of chlorophyll a)

Why is it so hard to find self HOMO-LUMO gap values on the internet? If they are relatively easy to calculate using Hartree-Fock and a bit of time, why doesn't Wikipedia have a self HOMO-LUMO gap listed on all major chemical infoboxes?

Decidedly, I need the HOMO-LUMO gap of chlorophyll-a (with the magnesium atom attached) compared to beta-carotene compared to silicon, in eV, if possible, to win an internet argument. John Riemann Soong (talk) 17:33, 27 March 2009 (UTC)[reply]

Readers unfamiliar with the expression can begin by referring to HOMO/LUMO. -- Wavelength (talk) 20:17, 27 March 2009 (UTC)[reply]
this google search may provide a place to start. I haven't started reading all of these articles for you, but there are some promising hits in this google search. You could also try google-scholar, which will confine the search to research journals and the like. --Jayron32.talk.contribs 01:39, 28 March 2009 (UTC)[reply]
Why hasn't anyone set up a database of found HOMO/LUMO gaps (theoretical or otherwise), seeing as how useful the information often is? John Riemann Soong (talk) 19:31, 29 March 2009 (UTC)[reply]

Seasonal affective disorder and sunshine advertising

How common was seasonal affective disorder before the tourism industry started to remind people about the sunshine they were missing? To what extent is the condition actually caused by a lack of sunshine experienced by sedentary people in some places during some seasons? To what extent is it caused by envy cultivated by the tourism industry? -- Wavelength (talk) 19:25, 27 March 2009 (UTC)[reply]

We don't know - the condition was only described scientifically in the 1980s, when we were already neck-deep in tourism adverts. However, the underlying implication that you think this is all nonsense is not backed up by the facts. Firstly, we know that reports of the condition go back to the 6th century - LONG before tourism was commonplace - and secondly, it's been studied carefully, and there are even some pretty solid genetic and dietary risk factors that have been nailed down. So yes - it's very real. Fortunately, it's fairly manageable once diagnosed. SteveBaker (talk) 20:44, 27 March 2009 (UTC)[reply]
I sense a conspiracy theory. SAD was invented by unscrupulous travel agents and we are all their unwitting dupes. Cuddlyable3 (talk) 21:58, 27 March 2009 (UTC)[reply]
I'm not sure about SAD, but the idea of Blue Monday, that the most depressing day in the year is the 3rd Monday in January, was invented by a travel company to sell holidays.[20]. --Maltelauridsbrigge (talk) 11:25, 30 March 2009 (UTC)[reply]

March 28

Antibiotic deodorant

Would making deodorant with a liberal dose of some antibiotic work? I've read that underarm odor is not really you exactly; it's waste from bacteria living under your arms. So an antibiotic deodorant makes great sense to me. However, I also see a very real problem and possible *danger*. You would quickly evolve antibiotic-resistant strains, right? Is that possibly the main reason it's not done? It would be effective (maybe really, really effective!), but only for a short time period, until you bred and released yet more multi-antibiotic-resistant bacteria on the world. I have some erythromycin sitting around; I may just grind it up and smear it on. Good idea? (not serious).♥70.19.64.161 (talk) 03:27, 28 March 2009 (UTC)[reply]

You've listed one reason it's a bad idea. Another is that an antiseptic can kill off helpful bacteria which control nastier things which would otherwise grow in that warm, moist, dark environment. Adult diapers treated with hexachlorophine allow fungus to grow, for example. StuRat (talk) 04:21, 28 March 2009 (UTC)[reply]
The link above should be to Hexachlorophene. --NorwegianBlue talk 14:13, 28 March 2009 (UTC)[reply]
Thanks, that explains the red link. I'll add a redirect, in case others misspell it the same as I did. StuRat (talk) 15:44, 28 March 2009 (UTC) [reply]
For a decade or more I've been spritzing my pits with alcohol each morning. It does a fine job of keeping odor down, much cheaper than antibiotics(!), with no ill effects other than occasionally making the skin uncomfortably dry. —Tamfang (talk) 01:05, 29 March 2009 (UTC)[reply]
Yes, the drying effect would be an issue for most people. If you have oily skin to begin with, perhaps you can get away with it. StuRat (talk) 17:31, 29 March 2009 (UTC)[reply]
In contrast to our current article layout, where deodorant and antiperspirant redirect to the same article, they are slightly different things (e.g. Speed Stick sells "deodorants", "antiperspirants", and "deodorant/antiperspirants"). An antiperspirant stops sweating, most commonly via aluminum salts. A deodorant, in contrast, functions by stopping the growth of the bacteria which produce most of the noxious body odor. Usually this is done by making the underarm inhospitable to the bacteria: by increasing the salt concentration, or by increasing/decreasing the pH. Sometimes alcohol is used to kill bacteria directly, or broad-spectrum antibacterial compounds like triclosan or hop extract are added. While antibiotics like erythromycin or penicillin might kill the bacteria (though usually effective on a range of bacteria, most antibiotics are still somewhat specific - c.f. gram-positive and gram-negative), you're right in that resistance would likely develop rapidly in such an uncontrolled setting. -- 76.204.102.79 (talk) 18:01, 29 March 2009 (UTC)[reply]

How do pet parrots perceive humans?

It's an oft-quoted 'fact' on bird-keeping discussion forums/newsgroups that when a pet parrot sees a human, it supposedly only identifies the face and hands as an individual 'fellow creature' with which it can interact and sees the rest of the body as some sort of strange, wobbly, walking tree. I've got no idea where this theory comes from - but it seems to be 'common knowledge'. Can anyone tell me if there's any actual, real scientific evidence behind it?

I appreciate that the answer to this may simply be 'you'd have to be able to read the parrot's thoughts to know' - but from my own interaction with parrots, I don't personally believe it to be the case (why would a parrot nip at a person's feet when desiring attention if it believed that the feet were just 'roots' and not a part of the person they were trying to attract the attention of, for instance?). --Kurt Shaped Box (talk) 03:37, 28 March 2009 (UTC)[reply]

I agree that that description seems rather silly for a parrot. It might apply to an insect, which can't see all of you, but only the part near it. StuRat (talk) 05:05, 28 March 2009 (UTC)[reply]
I read that dogs see humans as peculiar three-headed creatures - with our hands being seen as 'mouths' - and actually - when you see how the dog interacts with you - that makes a lot of sense. But obviously, we can't really know what they think...particularly with birds, which are so far from us, genetically. SteveBaker (talk) 12:14, 28 March 2009 (UTC)[reply]
They identify us how they identify us. Not how we identify us. Not how we identify trees. Not how they identify parrots. Not how they identify trees. It's sort of like asking if light is a particle or a wave. It is what it is, independent of how we think. If they do think of our bodies in a similar manner as trees, that doesn't mean that they wouldn't be able to understand that they're part of us. A lot of people think their monitor is their computer, but it doesn't stop them from turning on their actual computer. — DanielLC 15:34, 28 March 2009 (UTC)[reply]
Perhaps they could do one of those experiments they do to test cognition of very young humans, where they make a ball disappear somehow, and at a certain age the infant gets noticeably befuddled. They could use some kind of trickery to hide the "strange, wobbly, walking tree" and see if the bird gets weirded out. I've noticed with my dogs that when I lift another human onto my shoulders they seem to think the people have left the room and some bizarre creature has appeared in their place. --Sean 21:37, 28 March 2009 (UTC)[reply]
That test is used to tell if the baby understands that objects don't disappear when it can't see them anymore. It shows nothing of how the baby thinks of the object. I don't see how that test could be used here. — DanielLC 20:19, 29 March 2009 (UTC)[reply]

Mechanism of Copper (II) Sulphate on the iodine clock reaction

I am writing an investigation on the iodine clock and am confused as to how the copper ions increase the rate of the reaction. I understand the mechanism of iron ions as:

2Fe²⁺(aq) + S₂O₈²⁻(aq) → 2Fe³⁺(aq) + 2SO₄²⁻(aq)

Fe²⁺ ions act as reducing agents to reduce the peroxodisulphate ions to sulphate ions, in turn being oxidised themselves to form Fe³⁺.

2Fe³⁺(aq) + 2I⁻(aq) → 2Fe²⁺(aq) + I₂(aq)

The Fe³⁺ ions produced then act as oxidising agents to oxidise the iodide ions to iodine whilst themselves being reduced back to Fe²⁺.
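Summing those two steps, the iron cancels, giving the net reaction:

2I⁻(aq) + S₂O₈²⁻(aq) → I₂(aq) + 2SO₄²⁻(aq)

so the iron is regenerated each cycle and acts as a true catalyst there.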

But I understand the mechanism with copper(II) sulphate is different; I would like to know how. I also understand that the copper is not regenerated, so it's not exactly a catalyst in that respect. —Preceding unsigned comment added by 82.27.37.30 (talk) 06:47, 28 March 2009 (UTC)[reply]

iodine clock reaction does not mention copper. Is this a part of the reaction? Graeme Bartlett (talk) 21:04, 28 March 2009 (UTC)[reply]

The original reaction doesn't, but I'm investigating the effect of adding copper to the reaction, and my results show a much faster rate with the addition of copper. 82.27.37.30 (talk) 00:01, 29 March 2009 (UTC)[reply]

Copper makes numerous complex ions with even moderately good Lewis bases. My guess is that there is some sort of copper(II)-iodide complex ion formation going on here which is affecting the reaction in some way. Since we don't know all the details of your particular iodine clock reaction (such as all reagents, pH, amounts, etc.), it is hard to tell exactly what is going on. --Jayron32.talk.contribs 03:49, 29 March 2009 (UTC)[reply]

Neuro Behcet's disease

I just looked at the article on Behcet's disease (Silk Road Disease) but it appears to cover solely a dermatological condition. My nephew has "Neuro Behcet's disease", which seems an outgrowth of cerebral meningitis. My sister (she's 58, he's in his early 30s) is forced to care for him because of government cutbacks to long-term care facilities, and his behaviour has become erratic/violent. I'm trying to find researchers involved in that field in the hope of finding alternative therapy/programs that could help him, and her. Because of the stress, she had a collapse from a pelvic infection last year; meanwhile his condition worsens. I only today found out the name of the syndrome, as she'd posted something on Facebook to Oprah's blog after Dr. Mehmet Oz had discussed hyperbaric chamber therapy, but the doctors in the local medical system (in British Columbia) say there's no evidence it works. Any help/advice as to where to look/who to ask greatly appreciated. Skookum1 (talk) 13:13, 28 March 2009 (UTC)[reply]

This question was a request for medical advice, which we cannot provide here. This removal can be discussed here, on the RefDesk Talk page. I am sorry we cannot provide assistance. --Scray (talk) 19:25, 28 March 2009 (UTC)[reply]

Because this question is not clearly a request for medical advice, I have restored it. It is being discussed here, on the RefDesk Talk page. --Scray (talk) 20:30, 28 March 2009 (UTC)[reply]

HER2/neu

Hi, I am doing some research into breast cancer and after reading some articles I have a couple of things I am a bit unsure about. After reading the article on HER2/neu I was a little confused as to whether or not there is a difference between HER2/neu and HER2. Or is HER2 simply a shortened way of writing HER2/neu? Also, the article on estrogen receptors says that one theory of why over-expression of oestrogen receptors can cause cancer is that the metabolism of oestrogen produces 'genotoxic waste.' Is 'metabolism of oestrogen' referring to breaking down oestrogen once it has bound to the receptors, or to producing oestrogen, or something else? And finally, what do over-expressed oestrogen receptors do? Cause cell division? Thanks. —Preceding unsigned comment added by 139.222.241.27 (talk) 13:35, 28 March 2009 (UTC)[reply]

"HER2" and "neu" are synonyms, as explained in the second paragraph of the section on HER2/neu and cancer. Estrogen is a steroid hormone, and there are many breakdown products with a variety of effects. The metabolism (breakdown) of a hormone can occur before or after it has bound its receptor. Some of your other questions are answered in the section on estrogen receptors and cancer, but in short you are correct that over-expression of estrogen receptors is thought to facilitate cell division. I hope this helps. --Scray (talk) 16:44, 28 March 2009 (UTC)[reply]

Frequency in physics

With reference to a previous question, March 9, titled "The meaning of Frequency in physics", this is a new attempt to clarify the concept(s) of "frequency" in physics.

To me frequency is the number of occasions of an event per unit of time (area, volume etc.).

I’m informed in the answers to my previous question, that this is “emission frequency”.

So I ask: what kind of frequency is represented by the rules c = ν·λ and E = h·ν = h·c/λ?

Did Planck have reason to believe that all emission sequences from the same emitter have a maximal emission frequency?

Does this latter convention arise from frequency spectrography?

And why not give it another name? Suggestions are welcome!

Rolf —Preceding unsigned comment added by 83.226.97.246 (talk) 16:53, 28 March 2009 (UTC)[reply]

Monochromatic light is an electromagnetic wave. As such, if you were to plot an electric field component perpendicular to the direction of propagation of this light vs. time, you would get a sine wave. It is the frequency of this sine wave that is used in the formulas you speak of. —Preceding unsigned comment added by 81.11.162.104 (talk) 19:36, 28 March 2009 (UTC)[reply]

To help you conceptualize, the electric and magnetic field vectors have spatial locations and, as vectors, a direction. However, the quantity they measure is electric or magnetic field strength, so there is no "spatial extent" for the arrows - they exist as a magnitude and a direction at each point in space. As such, if you plot the magnitude vs. time for a specific location in space, you will see a sinusoidal signal as described above. Alternatively, if you take a snapshot at a single time, for a large range of spatial values, you will probably also see a sinusoidal wave in space as well. These "plots" will depend on what exactly your light signal is doing - if it is similar to a plane wave of long duration, these statements are valid; if it is more pulse-like, you might see wave packets or other frequencies as well when you plot the magnitude of the fields vs. either time or space. Nimur (talk) 22:30, 28 March 2009 (UTC)[reply]

Rolf: – I perceive 'a wave' as a 'shower' of wave packets ≈ photons/phonons, the intensity of which varies as 'a wave'. So, what is the frequency of a single photon/phonon? / Rolf

Rolf, your understanding of the electromagnetic wave as a particle shower of varying intensity is not correct. In a plane wave, for instance, the wave intensity is the same everywhere at all times. What's varying over time with frequency ν is the electromagnetic field. Even with a single photon you still have the same frequency ν. That single photon still propagates according to the wave equation and can be subject to interference, diffraction, refraction, etc. The energy of that single photon is related to its frequency through Planck's equation E=hν. Planck didn't know that at the time, but that relation is a consequence of the fact that position and momentum are not compatible and cannot both be established with zero uncertainty, according to Heisenberg's uncertainty principle. Dauto (talk) 13:18, 29 March 2009 (UTC)[reply]
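To put rough numbers on those relations, here is a minimal Python sketch (the 500 nm wavelength is just an illustrative choice, roughly green light):

# Frequency and photon energy for light of a given wavelength,
# using c = nu * lambda and E = h * nu = h * c / lambda.
c = 2.998e8        # speed of light, m/s
h = 6.626e-34      # Planck's constant, J*s

wavelength = 500e-9        # 500 nm (illustrative choice)
nu = c / wavelength        # frequency of the wave, Hz
E = h * nu                 # energy of a single photon, J

print(nu)   # about 6.0e14 Hz
print(E)    # about 4.0e-19 J

So a single green photon carries a few times 10⁻¹⁹ joules, whatever the intensity of the beam it arrived in.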

Certainly our OP's mental model of how light 'works' is wrong - but actually, so are other models too. The problem is that light is neither a wave nor a particle because there are times when it behaves entirely unlike a wave and other times when it behaves entirely unlike a particle. Rolf's model fails because it doesn't explain any of these behaviors. The mental models of wave and particle merely serve to help us to apply math to the problem of light - they don't get you far as mental 'pictures'. FWIW, I offer the "wave packet" image at right as a better way to envisage things...but it's still not a true picture of what's going on. SteveBaker (talk) 22:06, 29 March 2009 (UTC)[reply]
Yes, SteveBaker is right. The wavepacket model isn't the whole story. The main problem with the wavepacket picture by itself is that it does not give a theoretical reasoning (beyond fitting experiment) behind Planck's equation E=hν, which is simply added to the model 'by hand'. This is sometimes described as a semi-classical (or semi-quantical) picture. It took a quarter of a century after Planck's work for a more consistent picture of quantum mechanics to arise. The Canonical commutation relation was the essential element missing from the old quantum mechanics. Dauto (talk) 05:19, 30 March 2009 (UTC)[reply]

What would actually happen if you used a defibrillator on yourself?

There is a scene in the latest episode of Terminator: The Sarah Connor Chronicles where Sarah places defibrillator paddles to her own chest and gives herself a jolt (in order to destroy a radio tracking device that has been implanted into one of her breasts, FWIW). Before doing so, she asks a doctor if doing this will kill her and is told "no - but it'll hurt a lot". So she shocks herself, falls over and is momentarily stunned - then gets up, kills a nameless henchman (by defibrillating his temples with the same machine!) and seems perfectly fine and healthy afterwards.

So, in the boring world of Real Life, and mammary tracking devices aside, autodefibrillating (heh!) on a normal heart rhythm would pretty much guarantee a cardiac arrest or another extremely serious heart-related 'event', right? --Kurt Shaped Box (talk) 17:24, 28 March 2009 (UTC)[reply]

We have an article on defibrillators that has interesting information related to your question. Any response beyond that, particularly with regard to the prognosis of someone who uses a defibrillator outside of established guidelines, would violate our prohibitions regarding medical advice. --Scray (talk) 17:33, 28 March 2009 (UTC)[reply]
Seriously? Do I need to specify that I'm not planning to defibrillate myself, nor advise others to defibrillate themselves in order to get an answer? ;) If so, I do specify that (honestly). I was merely curious about something I saw on a TV show that seemed rather unlikely to me... --Kurt Shaped Box (talk) 17:55, 28 March 2009 (UTC)[reply]
Defibrillators are available in many public places, and they are designed for exclusively medical purposes. I didn't delete your question, I just gave a limited answer. Further discussion should probably happen on the RefDesk Talk page. --Scray (talk) 19:08, 28 March 2009 (UTC)[reply]
The defibrillators available in public places are, or should be, automated ones that read the heart rhythm before delivering a shock, so in the TV show situations described by the original poster, they probably wouldn't do anything at all. Defibrillators designed for use by doctors, who (unlike us here) are supposed to have actual medical knowledge, are different. --Anonymous, 19:36 UTC, March 28, 2009.
It would depend on exactly where she placed the paddles and what she set the defibrillator to. If the current is low enough, and/or it doesn't go through the heart, then it shouldn't kill you. A high enough current through the heart runs a high risk of stopping it (although I doubt it is guaranteed). --Tango (talk) 22:42, 28 March 2009 (UTC)[reply]
(post e.c.) "Burns are often complications of using a defibrillator, especially on repeated defibrillation. They are mostly mild, but may be uncomfortable for the patient." (ref is http://www.healthhype.com/sudden-cardiac-death-scd.html) Mythbusters, though definitely not a reputable source, decided that it was "plausible" that defibrillation of patients with underwire bras and nipple piercings could cause burns. Taking that into account, the presumably conductive radio tracker could possibly cause internal burns. I also found the following quote from the NYT: "because shocking a healthy person or someone with another type of heart problem could be dangerous or even fatal." I couldn't find anything regarding nonfatal outcomes of defibrillating someone with a normal heart rhythm, but that may be due to the tendency for extreme outcomes (i.e. accidental death) to be the most widely disseminated. As Anonymous mentioned above, a lot of public-access external defibrillators automatically sense the different abnormal heart rhythms and will not shock people with a healthy rhythm. Accidental defibrillation of people who don't need it may be pretty rare because of automated defibrillators. Sifaka talk 22:50, 28 March 2009 (UTC)[reply]
AEDs can shock a healthy rhythm. Movement interferes with the rhythm analysis, and shivering or seizure activity can be interpreted as ventricular fibrillation. Also, pulseless ventricular tachycardia is a shockable rhythm, but the device is unable to distinguish between that and sinus tachycardia; I think they are set to shock any rate above 180 beats per minute or so. You are only supposed to attach AED leads to unresponsive and pulseless patients.—eric 00:16, 29 March 2009 (UTC)[reply]
Resuscitation, the Official Journal of the European Resuscitation Council, printed the results of research into what happens to healthy people who receive secondary shocks. Not good, but not lethal. There are several cases, easily found using Google, where direct defibrillation of a healthy person caused that person's death. One example: the U.S. State of Virginia found a person guilty of manslaughter for the death of a coworker after he (ahem) 'playfully' shocked her with a defibrillator. 152.16.16.75 (talk) 09:45, 30 March 2009 (UTC)[reply]


Dweller's thread of the week. It's an 'out of the box' idea.

This shocking thread wins the newly resuscitated, and possibly no longer weekly, and certainly unlucky-for-some, 13th Ref Desk thread of the week award. Mammaries, like the corners of my mind... --Dweller 14:18, 6 November 2007 (UTC)[reply]

When does a square become a rectangle (Psychology)?

I'm looking for the height-width ratio that determines when a human perceives an object as being a square vs. a rectangle. I know that technically anything other than 1:1 is not a square, but I think that human perception is different from the mathematical definition. I've tried some Google Scholar searches but have not found anything. Thanks AmitDeshwar (talk) 23:00, 28 March 2009 (UTC)[reply]

If the visual field is black-and-white with no other details or depth cues, then one sees small deviations from squareness, though I can't quantify that. I think we are more sensitive to distortions of circles than of squares. Added details or textures can, however, distort perceived dimensions so that a true square seems rectangular, as in various optical illusions. A related issue is the distortion that seems to be well tolerated by viewers of widescreen television when a movie made in 4:3 picture format is "stretched" to fit the screen. Cuddlyable3 (talk) 00:53, 29 March 2009 (UTC)[reply]
Though... keep in mind that most widescreen TVs these days allow you to adjust how the movie is displayed, so it doesn't have to distort. (For example, on mine you have "Widescreen", "Normal", and various "Zoom" options. By fiddling with them you can come up with a setting that has no black space but doesn't cut off too much.) --98.217.14.211 (talk) 02:05, 29 March 2009 (UTC)[reply]
It is relative. If you put a square next to a rectangle, the square may lose its squareness. There are many of those optical illusions in which humans see straight lines as being bent, elongated, or parallel when they aren't. -- kainaw 02:37, 29 March 2009 (UTC)[reply]
Gestalt psychology seems to be the relevant bit here... Others have hinted at it, but it bears repeating that it will depend a LOT on context. The entire viewing environment of the rectangle and the square will alter our perceptions of them. Depending on the viewing environment, a true square may be made to look rectangular, a rectangle may be made to look square, etc. There is a lot more to perception than quantifiable data. Even personal psychology will play a part here. --Jayron32.talk.contribs 03:22, 29 March 2009 (UTC)[reply]
I think the point is this: If you have a smallish rectangle (sufficiently small that perspective isn't playing much of a part) - but it's held at an angle to the eye such that the long sides are foreshortened - then it can appear to be perfectly square. However, our brains are very good at compensating for effects like this - so if there is (for example) a subtle texture to the paper that it's printed on - or if the edges of the paper are within your visual field - then there are enough cues to let you know that this supposed square is really a rectangle that's not parallel to your plane of vision - and your brain says "Nope! It's not a square!". But if the lighting conditions are set up just perfectly and all cues as to the nature of the surface are removed - then it's impossible to tell the difference. So while we're very attuned to the precise ratios of sizes - we can be fooled. However (as others have suggested) when you view a 4:3 aspect-ratio broadcast TV show stretched onto a 16:9 aspect-ratio widescreen TV - then (for example) the circular wheels on cars are distorted into ellipses - but we really don't notice it. However, if presented with a similarly distorted white ellipse on a black background - you can see easily that it's not a circle. What this means is that a precise answer to the question is really not possible. There is no single number like 1.3% that says how far off 'perfect' a square can be without our brains yelling "Rectangle!!!" - it all depends on context.

In this image (which I just drew to illustrate the point) - the red shape on the left looks rectangular - the shape on the right looks square. Neither is quite parallel to the eye - and perspective comes into play here. However (of course - because this is always the answer with optical illusions), the two red shapes are identical. What changed is the background grid. On the right side of the image, the grid is also made of perfect squares - with identical perspective to the red square. On the left side, the underlying grid is made of 7:8 aspect-ratio rectangles - also with identical perspective - and that red shape is still a square. Our eyes/brains prefer to see the grid on the left as being at a steeper slope than the one on the right (even though the angles are identical) and are therefore forced to deduce that the square on the left is really a rectangle. Of course, if you measure either red shape, neither has parallel sides or sides of equal length. SteveBaker (talk) 14:03, 29 March 2009 (UTC)[reply]
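For anyone who wants to play with the effect, here is a minimal matplotlib sketch of the same idea; it keeps only the grid-ratio manipulation described above (7:8 cells on the left, square cells on the right) and omits the perspective, so the effect is weaker than in the image described:

# Two identical red quadrilaterals drawn over different background grids:
# 7:8 rectangles on the left panel, perfect squares on the right.
import matplotlib.pyplot as plt
import matplotlib.patches as patches

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
red = [(1.0, 1.0), (3.0, 1.2), (3.2, 3.2), (1.2, 3.0)]   # same skewed shape in both panels

for ax, cell_h in zip(axes, (7 / 8, 1.0)):   # left cells 7:8, right cells square
    for i in range(6):                        # vertical grid lines
        ax.plot([i, i], [0, 5 * cell_h], color="gray", lw=0.5)
    for j in range(6):                        # horizontal grid lines
        ax.plot([0, 5], [j * cell_h, j * cell_h], color="gray", lw=0.5)
    ax.add_patch(patches.Polygon(red, closed=True, color="red"))
    ax.set_aspect("equal")
    ax.axis("off")

plt.show()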

I think you may have distorted the picture a little too much to illustrate your point; I saw them both as trapezoids first and only later noticed that they looked different from each other (in comparison to the background). It didn't help that WP shifted the picture to the right, ruining the symmetry of their positions. Matt Deres (talk) 14:34, 29 March 2009 (UTC)[reply]
It is interesting that a Go board is traditionally printed as a rectangle in order for it to look like a square when seen at an angle from the perspective of the players. Dauto (talk) 15:38, 29 March 2009 (UTC)[reply]
Go boards are weird - some of the very best traditional ones are made with a grid that's just slightly SMALLER than the go-stones you play onto it. This is because traditionalist players prefer the slightly irregular 'jostled' look when the stones don't line up in neat, straight lines. SteveBaker (talk) 18:02, 29 March 2009 (UTC)[reply]

March 29

Marsupials

When the fetal marsupial is in the mother's pouch, attached to a nipple, does it excrete waste? How do they keep from messing the pouch? If we have an article that answers this question, I haven't been able to find it. -GTBacchus(talk) 01:34, 29 March 2009 (UTC)[reply]

They lick the pouch and the joey clean and consume the waste products. It seems that this is useful for recycling water lost in producing milk.[21] [22] --Lenticel (talk) 02:54, 29 March 2009 (UTC)[reply]
Thank you! -GTBacchus(talk) 01:50, 30 March 2009 (UTC)[reply]

Indoor Vs Outdoor Air Quality

I guess this is a sort of two-part question.

I live near a rather busy main road and a thought randomly occurred: would the air quality be better inside the house or outside? The roads are laid out in this fashion:

^^^^^^^^ [Houses]
{------} [Service road for access to houses and the driveways to houses; not very much traffic]
........ [Median strip, aka a small strip of grass with a couple of trees]
===== [Main Road going in one direction]
........ [Separator/strip of grass & trees]

The whole thing is mirrored on the other side.

—Preceding unsigned comment added by 121.220.48.94 (talk) 01:41, 29 March 2009 (UTC)[reply] 
Your question is not answerable. Nobody here knows such things as:
  • Do you have mold in your house?
  • Do you have a sewer system running under the street?
  • Are you downwind from a paper plant?
  • Are you allergic to the trees along the road?
Four questions just to get you started. If anyone wanted to take the time, they could come up with another four thousand pertinent questions. If you are truly concerned about the air quality around your house, get an air quality tester (they are sold in both Lowe's and Home Depot here). If you are truly concerned about your health with the air quality around you, this is absolutely not the place to ask. You must seek professional medical help for health questions, not random opinions from strangers on the Internet. -- kainaw 02:36, 29 March 2009 (UTC)[reply]

It was just more of a general interest question really..nothing too serious :-) 121.220.48.94 (talk) 03:32, 29 March 2009 (UTC)[reply]

Just about anything that is in the air outside the home will eventually make its way inside the home due to air infiltration. That article discusses the measurement of "air changes per hour" (ACH), and mentions ACH values in the range from 0.25 through 1.5. So, all other things being equal, the inside and outside air quality would on average be the same. A passing poorly maintained vehicle may give off noticeably offensive fumes, but those fumes would quickly dissipate. However, a small portion of those fumes may infiltrate the home and will linger for a while.
But of course all things aren't equal. Larger particulates might not infiltrate the home as easily as other pollutants, and those that do may be reduced by air filters in the air conditioning system. On the other hand, the home is full of indoor air pollution sources. Naturally occurring radon gas, construction materials, household cleansers, cooking fumes (even if they smell good), perfumes and deodorizers, and even bodily methane expulsions all reduce the air quality. Given a choice, I'd prefer the outside. I could be wrong though. Big cities routinely recommend staying indoors during smog alerts, and now that I think about it, I'm not sure why. -- Tcncv (talk) 04:34, 29 March 2009 (UTC)[reply]
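To put rough numbers on those infiltration rates: in a well-mixed room with no ongoing indoor source, a pollutant is diluted approximately exponentially, C(t) = C0 * exp(-ACH * t). A minimal Python sketch using the 0.25 and 1.5 ACH endpoints quoted above:

# Exponential dilution of an indoor pollutant by air infiltration,
# assuming a well-mixed room and no continuing indoor source.
import math

c0 = 100.0                           # initial concentration, arbitrary units
for ach in (0.25, 1.5):              # air changes per hour
    half_life = math.log(2) / ach    # hours until the concentration halves
    c_after_4h = c0 * math.exp(-ach * 4)
    print(ach, round(half_life, 1), round(c_after_4h, 1))
# ACH 0.25: half-life ~2.8 h, ~36.8 units left after 4 h
# ACH 1.5:  half-life ~0.5 h, ~0.2 units left after 4 h

So a leaky house (1.5 ACH) clears a one-off pollutant roughly six times faster than a tight one (0.25 ACH).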
Air infiltration is absolutely necessary. Every now and again you get someone who insulated their home so thoroughly that they asphyxiated inside, e.g. [23]. Opening your windows during rush hour or during smog alerts / high ozone levels may not be advisable, but there are bound to be times of low traffic on the road. There's a lot more air volume outdoors than indoors. That makes air exchange more efficient outdoors and allows fewer pollutants to accumulate. 76.97.245.5 (talk) 07:09, 29 March 2009 (UTC)[reply]
Well, we're certainly not going to be able to give you a clear "Yes" or "No" answer here. This is definitely an "it depends..." kind of question.
Smog - in its truest sense of "Smoke Combined With Fog" - may well be a different matter indoors and out. Fog can only remain in the air in specific conditions of temperature, humidity and pressure. It's unlikely those pertain inside your house - so one of two things has to happen:
  • The fog condenses on some kind of surface - leaving a film of smoke particles wherever it condensed.
  • The water droplets in the fog evaporate - making the air more humid and leaving the smoke particles to settle out.
Either way, the nasty smoke doesn't enter your lungs. So I think that staying indoors during smog conditions is indeed a good idea. However - for other kinds of air pollution - NOx, SOx, ozone, carbon monoxide, etc. - I can't imagine any mechanism that would make the air coming into your home any cleaner afterwards than before. But in the particular scenario the OP lays out - there is likely to be a pollution 'gradient' where the concentrations are highest in the main part of the road where the traffic flow is (presumably) highest - gradually tailing off to lower values on the service road - and then to even lower values close to the rear of your home. So while there might be very little (if any) difference between standing just outside your front door versus just inside - I'd expect there to be a measurable difference if you were standing on the grassy median or out in the traffic flow itself. I think a lot depends on what's behind your house. If it were parkland or idyllic unspoiled countryside - the answer would be different than if there were merely another row of houses and an identical road system. In the former case, the pollution 'gradient' would tail off as the pollutants were diluted by the clearer air out in the countryside - in the latter case, there is nowhere for the nasty stuff to go - so there's going to be a gradual levelling out of pollutants between the two sources. In the former case, a lot would also depend on the current and prevailing wind directions. Tcncv's comments about sources of pollutants INSIDE the house are well taken. Certainly things like radon gas (if it's prevalent in your area) can concentrate inside the building from natural emissions of the ground beneath you. Radon is a radioactive decay product of natural radioactivity in certain rock formations (I believe granite is a particularly strong source) - and there have been serious health risks associated with it. If you live in an area where there is radon - you need fans built into the house to ensure a decent air flow and avoid it building up. Outgassing from plastics and (especially in new houses) construction materials is also a source of indoor pollution that you're unlikely to encounter to the same degree outdoors. SteveBaker (talk) 13:26, 29 March 2009 (UTC)[reply]
Now for some concrete suggestions on how to improve air quality inside your home:
1) When you bring in a new item that noticeably smells, such as vinyl, let it sit in the garage or in an unused room (possibly with the window open) until it stops smelling.
2) Time opening of windows for when it is more polluted inside than outside. So, open them after you burn some food, and close them during rush hour.
3) Try to open windows away from outside pollution sources. In your case, the back of your house sounds better, unless there's another pollution source back there. Also open windows in more polluted parts of the house. If the bathroom or kitchen smells, open the windows there, but not in the rest of the house. If you use window fans, point them to exhaust in areas with polluted indoor air and intake in areas with fresh outdoor air.
4) Avoid the use of candles, whenever possible. If you do use candles, extinguish them by leaving a candle snuffer on them, so the smoke particles will settle out of the trapped air, rather than pollute the indoor air.
5) If you have a gas oven/stove which lacks an exhaust fan, limit use to when you can open windows. Beware that many gas stoves have fans which merely filter the exhaust and blow it back into the room. This doesn't remove many of the combustion products, or replace oxygen lost during combustion. An electric oven/stove pollutes less, but you can still get smoke from the food. This is worst for frying and least problematic when boiling water, say when making hard-boiled eggs. Microwaving foods is generally better than conventional stoves or ovens, although burnt foods can obviously pollute the air regardless of the method of cooking.
6) Be careful when using air filters, as many can make the air quality worse. Electrostatic filters, for example, can produce ozone, while HEPA filters in a humid house can grow mold.
7) Avoid the use of spray-cans indoors. In many cases, a non-spray-can alternative may be available. Instead of hair spray, perhaps a pump can be used, or better yet a mousse or other product applied by hand. Instead of spray-on deodorant, perhaps a roll-on, solid, or gel can be used. Spray paint or lubricants can be applied outside or in a garage. Epoxy, glues, wood stain, and paint should all be applied outside, whenever possible.
8) Buy scent-free detergents and other cleaning products, whenever possible.
9) Limit use of electric space heaters, and run them outside for a few minutes to burn off any accumulated dust when using them for the first time in a while, or whenever you notice a burning smell. StuRat (talk) 17:00, 29 March 2009 (UTC)[reply]

Nutritional Question

So. I went to a nutritionist the other day who suggested that I take a few dietary supplements: calcium, B complex, and omega-3, in addition to a multivitamin, every day. These are all to be taken with food, and separately, so as not to undermine each one's effect. This would mean four meals a day, which doesn't really fit in my schedule. But I've also heard that whenever you eat, your metabolism kicks up--thus the many-meals-a-day dietary suggestion often put out there. My question is what suggestions you guys might have for good foods to snack on--portable, cheap, somewhat filling--that I could turn into impromptu meals throughout the day during which to take my vitamins and which would boost my metabolism. I was thinking maybe of celery, which, as is widely said, has "negative calories" because it supposedly takes more energy to break down the plant's fibers than is gained from metabolizing them.

My question, though, is a) whether this will effectively kick up my metabolism, since celery contains so few calories, and b) whether a substance my body has difficulty digesting will provide the necessary buffer (or whatever it is) that the dietary supplement needs to be fully utilized. Would celery work? Bananas? Any other non-fattening suggestions? I'm also trying to lose a little weight, if that hasn't already been made clear.

Thanks a lot, 70.108.188.101 (talk) 03:05, 29 March 2009 (UTC)[reply]

PS, while I'm at it: protein and working out. If I go lift weights at the gym, and I want to maximize the effect, when should I ingest protein? Before? Immediately after? Does it matter? And will eating directly after be a wasted metabolic boost, as exercise already boosts one's metabolism temporarily? Thanks again, 70.108.188.101 (talk) 03:05, 29 March 2009 (UTC)[reply]

Please use the search button with regard to the protein question; it has been asked and answered twice before in the last 2-5 months. --Mark PEA (talk) 12:49, 29 March 2009 (UTC)[reply]
To clarify, Mark is referring to the search box/button at the top of the page labeled "Search reference desk archives", not the search box/button in the side bar, which searches the main Wikipedia pages, and doesn't include the Reference Desk archives by default. -- 76.204.102.79 (talk) 17:21, 29 March 2009 (UTC)[reply]
I wouldn't be too worried about taking vitamins and other nutritional supplements with the same meal. You could take some at the beginning of a meal and others at the end, for example, so they interact with each other less. I also don't quite understand why you would take a multi-vitamin and also take things like calcium, which are no doubt already included in the multi-vitamin. Finally, I should mention that vitamin and mineral supplements haven't been shown to be helpful for healthy people with a healthy diet (who aren't suffering from a deficiency). And, if you have an unhealthy diet, improving it would be better than just adding supplements. StuRat (talk) 16:46, 29 March 2009 (UTC)[reply]

Galaxy cluster data

I'm investigating some natural datasets with fractal statistics, and I'd like to include the large-scale structure of the universe. So what I'm looking to create, ultimately, is a normalized dataset of points with 3D Cartesian coordinates, such that each point represents a supercluster, and the whole set is scaled to fit inside the unit cube (or unit ball or something similar). In essence I'm looking for the kind of dataset that would produce this image, or the next one in the series. I don't know precisely what scale I'm looking for (galaxy clusters, or superclusters, or what), but somewhere between 1000 and 10000 points would be ideal. A set of densities in a grid, rather than points, would also be good.

I'm perfectly willing to put in the work and learn the required algorithms to do the coordinate conversions, but at the moment my lack of knowledge about astronomy is making it difficult to find a starting point. Which survey would be the right starting point? What steps would I need to take to get the manageable cube of Cartesian points I'm looking for? Can I just make the step from curved space to Euclidean geometry like that, or is it impossible to represent the universe like that without massive errors? Any help will be appreciated, thanks. risk (talk) 16:29, 29 March 2009 (UTC)[reply]

You might be interested in this article: Large-scale structure in the Universe: Plots from the Updated Catalogue of Radial Velocities of Galaxies and the Southern Redshift Catalogue. This page gives the principal galaxy catalogues. The Wikipedia articles on these are unfortunately not very informative, or non-existent. After that, if you want to work in more detail still, you are looking at working directly from the starfield images. On the question of conversion to Cartesian co-ordinates, the distance to the galaxies is calculated from the Hubble redshift, which is only valid if our model of the universe's gravitational field equations is valid. SpinningSpark 19:48, 29 March 2009 (UTC)[reply]
The state-of-the-art catalogue would be the Sloan Digital Sky Survey Data Release 7, accessible here. It contains spectroscopic redshifts (essentially distance, as mentioned above) for nearly a million galaxies. It is not whole-sky, hence you'll run into some (probably solvable) problems of placing your cube properly. There are a number of cluster catalogues based on (earlier releases of) the SDSS, although I can't name any names off the top of my head; just use Google or search in ADS. The coordinate transformation is relatively straightforward (there is a complication due to proper motions contributing to some extent to the redshifts - less of a problem if you start from a cluster catalogue). The biggest problem is probably to properly account for the selection function, or incompleteness, of the survey. All in all, not a simple undertaking! --Wrongfilter (talk) 09:34, 30 March 2009 (UTC)[reply]
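As an illustration of just the coordinate step (not the selection-function correction), here is a minimal Python sketch; it assumes a plain Hubble-law distance d = cz/H0, which is only reasonable at low redshift, and the catalogue rows are made-up values:

# Convert (RA, Dec, redshift) rows to Cartesian coordinates, then rescale
# everything into the unit cube. Uses d = c*z/H0 (low-redshift approximation).
import math

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # Hubble constant, km/s/Mpc (assumed value)

def to_cartesian(ra_deg, dec_deg, z):
    d = C_KM_S * z / H0                       # approximate distance, Mpc
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (d * math.cos(dec) * math.cos(ra),
            d * math.cos(dec) * math.sin(ra),
            d * math.sin(dec))

def to_unit_cube(points):
    # Shift to the origin, then divide by the largest axis span.
    lo = [min(p[i] for p in points) for i in range(3)]
    span = max(max(p[i] for p in points) - lo[i] for i in range(3))
    return [tuple((p[i] - lo[i]) / span for i in range(3)) for p in points]

# Hypothetical catalogue rows: (RA in degrees, Dec in degrees, redshift)
galaxies = [(150.1, 2.2, 0.02), (185.0, -1.4, 0.05), (210.7, 10.9, 0.03)]
print(to_unit_cube([to_cartesian(*g) for g in galaxies]))

A real run would read the RA/Dec/z columns from whichever catalogue is chosen; the sketch only shows the geometry.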

Attributes or Parameters of Sound

What are all of the attributes or parameters of sound? Because I'm pretty sure there are more than just volume and frequency. And what is the article about the attributes or parameters of sound? --Melab±1 17:31, 29 March 2009 (UTC)[reply]

I'm not sure we could list ALL of them; it depends on how detailed you get. Traditionally, musical sounds have three characteristics: Pitch, Loudness, and Timbre. If converted to a waveform, "Pitch" is analogous to frequency/wavelength, "Loudness" is analogous to amplitude, and "Timbre" is all the little squiggly bits that make it different from a simple sine wave. So basically, to answer your question, EVERYTHING that is not pitch or loudness is considered part of "timbre". You may also want to read the acoustics and sound articles. --Jayron32.talk.contribs 17:57, 29 March 2009 (UTC)[reply]
This may be a different way of looking at it. For a simple sine wave, frequency and volume are all you need to reproduce the sound. For a slightly more complicated sound you could combine two sine waves, with different frequencies and volumes. For this sound, you'd need four parameters (two frequencies and two volumes) to reproduce it. You can get these from the original sound using a Fourier transform.
If you apply this Fourier transform to natural sound, you find that it isn't made up of two or three frequencies, but that all frequencies in some range are represented with different volumes. So in this sense, the number of parameters depends on the complexity of the sound wave, and things like white noise have an infinite number of parameters. risk (talk) 18:23, 29 March 2009 (UTC)[reply]
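To make that concrete, here is a minimal numpy sketch: build a signal from two sine waves, then read the two frequencies and two volumes back out of its Fourier transform (the 440 Hz and 660 Hz tones are arbitrary choices):

# Recover the frequencies and amplitudes of a signal built from two sine waves.
import numpy as np

rate = 8000                                   # samples per second
t = np.arange(rate) / rate                    # one second of time stamps
signal = 1.0 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

spectrum = np.abs(np.fft.rfft(signal)) / (rate / 2)   # normalised amplitudes
freqs = np.fft.rfftfreq(rate, d=1 / rate)             # frequency of each bin, Hz

for k in np.nonzero(spectrum > 0.1)[0]:
    print(freqs[k], "Hz at amplitude", round(spectrum[k], 2))
# prints 440.0 Hz at amplitude 1.0 and 660.0 Hz at amplitude 0.5

Four numbers in, four numbers out; a natural sound just spreads those amplitudes over a continuum of frequencies instead.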
Within some specified frequency range and fidelity (e.g. that of human hearing), the number of parameters isn't infinite - it's really quite a small number. SteveBaker (talk) 18:31, 29 March 2009 (UTC)[reply]
(ec)
Well, in a sense - amplitude (volume), phase (delay) and frequency (pitch) really are the only parameters. Every sound imaginable can be made by adding together some number of sine waves with specified volume, relative-phase and frequency - and NOTHING else. We can even use mathematical techniques such as Fourier analysis and Wavelets - to figure out what set of frequency/phase/amplitudes are needed to reconstruct any given sound to whatever fidelity is required.
In another sense - there are a literally infinite number of possible wave-shapes and the description of those using English words and without math or science to help requires this huge vocabulary of vague terms that musicians and others have built up over the years.
In a third sense - any waveform of any complexity can be represented to better accuracy than our ears can detect using 40,000 numbers in the range -32,000 to +32,000 for every second that the sound lasts. Each of those numbers is the instantaneous "volume" of the sound - so in that sense, it's just volume - and nothing else.
Things get a bit more complex with spatial audio - stereo, quadrophonic, 3D,etc.
SteveBaker (talk) 18:28, 29 March 2009 (UTC)[reply]
Steve Baker beat me to mentioning the direction. The human nervous system has special pathways to analyze the direction of sound, primarily horizontally, but you also can sense wideband sounds' height or distance to some extent. Another aspect is the "presence" where you can hear the sound effect of a room, the reverberation causes a bathroom, hall, outdoors, carpeted bedroom to all sound different. Graeme Bartlett (talk) 21:11, 29 March 2009 (UTC)[reply]
I thought there must be a term for the sound of a room; thanks for that. The article on reverberation talks about the sound dying out, but I can hear it better when it is quiet. I guess there must be low-level sound all the time, like the seashell effect with a hand over the ear; that's why one can hear what a room is like without any explicit noise. Dmcq (talk) 21:41, 29 March 2009 (UTC)[reply]

Alien Fast and Slow thinkers.

I'm re-reading the scifi book "The Algebraist" by Iain Banks... one of the themes is that some species of alien are 'fast' living/fast-moving/fast-thinking and others are slow. It's thought-provoking to imagine the benefits of being a 'slow' species - one of which is that the speed-of-light limit for interstellar travel is much less of a problem for slow species. Imagine a species that operates at speeds perhaps a million times slower than us. A 400-year trip to the nearest star at 1% of lightspeed would take just a few hours in perceived time.

We have a few organisms on Earth that can live for hundreds to maybe a thousand years - but I'm thinking of creatures that can move and think. Anyway - I was wondering what biological (or other) limits there might be for super-slow life forms?

SteveBaker (talk) 18:14, 29 March 2009 (UTC)[reply]

I think a lack of evolutionary adaptation would be one of the main drawbacks on a planet like ours. If death/birth occurs only once in a very long time, then the creatures with shorter cycles will adapt faster to any change in the environment. Any creature that's going to live 10000 years must either be very independent from its surroundings, have a very stable environment, or be capable of adapting by other means than evolution (like modern man), and do so in a way that beats out the fast evolving animals with the short life cycles. Of course at some point a short life cycle is going to have disadvantages as well (it's difficult to grow to 100kg in five days). Despite the evolutionary disadvantage, I can't think of a reason why lifeforms couldn't have (theoretically) infinite life. risk (talk) 18:50, 29 March 2009 (UTC)[reply]
A 400 year spaceflight might be trivial to such a race but, equally, it would take them an inordinately long time to build the spaceship. The ten-year Apollo program, for instance, would take them 10 million years in your example. To get to another star, well, the target might very possibly not be there any more by the time they had built anything. SpinningSpark 20:01, 29 March 2009 (UTC)[reply]
True - but a 'fast' species like ours has problems too. We can build such a device in 1% of a lifetime - but it takes tens of lifetimes to actually get anywhere inside it. For super-slow lifeforms, it still takes 1% of a lifetime to build - but once built, you can go zipping around the galaxy like it was a vacation trip to Vegas. SteveBaker (talk) 21:43, 29 March 2009 (UTC)[reply]
(edit conflict) Another problem is that if a species reproduces, say, once every 10000 years on average, the ones that reproduce faster would, well, reproduce faster and would become a larger part of the gene pool, which means that the average time between generations would steadily decrease until there's a significant advantage to taking longer. As for simply moving slower, take an herbivore with no predators, such as a turtle. If it moves half as fast, it gets food half as fast, but, beyond a certain point, it takes more than half as much energy to live, so it wouldn't move slower than that. Most animals either eat meat, which they have to be able to chase down, or have predators, which they have to be able to run away from, so they move much, much faster. It's possible that an animal in a completely different circumstance might have a significantly lower optimal speed than a turtle, but if it has any predators or prey, it's going to have to move fast. — DanielLC 20:04, 29 March 2009 (UTC)[reply]
Well yes, if you are talking about lions and gazelles. But even on Earth there is an extraordinary variety of environments and solutions. In the Arctic, equivalent species tend to live longer and move slower; I understand that crabs there have a surprisingly long lifespan. Whelks hunt mussels at (forgive the pun) a snail's pace. It must be at least possible that, in alien environments we don't understand, populated by lifeforms whose solutions we have no way of knowing, such a slow-moving system prevails. SpinningSpark 20:46, 29 March 2009 (UTC)[reply]
I was thinking that these guys would have to have evolved in a low-energy environment - perhaps in very dim sunlight - where plant growth would have to be really slow; anything that evolved to feed on those plants while living significantly faster would wipe out the vegetation in short order and then starve to death - and anything that preyed on super-slow herbivores would have no need to evolve to be super-fast in order to catch them. Besides, we do have slower-living things (like the Sequoia - which lives for over 2000 years - but doesn't happen to move or think to any noticeable degree). I was wondering more about biological limitations - maybe its DNA would decompose or something - erosion of body structures due to wind & rain could be a major problem... that kind of thing. SteveBaker (talk) 21:43, 29 March 2009 (UTC)[reply]
Contrary to SB's assertion, Sequoyah DID move and think. --Cookatoo.ergo.ZooM (talk) 22:23, 29 March 2009 (UTC)[reply]
There are a whole lot of options in the question of 'fast' living/fast-moving/fast-thinking - each of the three could be slow or fast independently. For instance, one could be fast-moving, with very rapid and extensive reflexes and pre-thought actions, and yet be a very slow thinker; or one could think very fast and be stuck in a slow body. Dmcq (talk) 21:16, 29 March 2009 (UTC)[reply]
Freeman Dyson wrote an essay (the title of which I can't remember) about how he thought life could continue as an open-ended universe dwindled away toward heat death. He postulated creatures that lived progressively closer and closer to their (very chilly) ambient temperature, and which ran their body processes very slowly indeed. Perhaps someone can remember more about it (I think it might have been published as part of Infinite in all directions). 87.114.147.43 (talk) 22:15, 29 March 2009 (UTC)[reply]
Speed of thinking is probably based on chemical processes in the body and the speed of travel of electrochemical nerve impulses. Previous experience with similar thought processes may also play a part. An alien life form may use different chemical processes and that may affect the speed of thought, and perhaps the awareness of time and length of life as well. Life on earth is sometimes said to be carbon based. I once read a science fiction story in which silicon was the basic life chemical. Like carbon, silicon has the ability to form many chemical compounds, and thus may be useful in forming the many chemical compounds life requires. – GlowWorm. —Preceding unsigned comment added by 98.21.108.66 (talk) 22:55, 29 March 2009 (UTC)[reply]
I could picture some kind of plant, which moves extremely slowly, perhaps by growth alone, and yet lives for millions of years, allowing it to migrate great distances in a lifetime. One case where this might work is in an extremely high-gravity environment where something like a moss may be the only thing which can survive. I also had the idea of a single organism on a planet, which never reproduces, but only spreads, over millions of years. StuRat (talk) 04:07, 30 March 2009 (UTC)[reply]
Many intelligence tests and ability tests have a time limit. The test originators apparently believe that, beyond the test score itself, a person who completes more of a test in the allowed time has more ability than a slower person. (A slower person will answer fewer questions, and thus can have fewer correct answers.) I wonder if innate speed of thought, or speed of answering questions in a test, is a valid indication of the ability being tested. Has this ever been investigated? Should a person be allowed as long as he wants to do a test? Is a time limit simply for the convenience of the person administering the test? – GlowWorm. —Preceding unsigned comment added by 98.21.104.119 (talk) 09:04, 30 March 2009 (UTC)[reply]
No. For two humans - with identical biochemistry and near-identical morphology - you'd expect them to think at the same rates. If one of them takes longer to do the test then we can conclude that they are not as intelligent. If we ever came across an alien civilisation with thought rates significantly slower than ours - yet that had made more intellectual achievements than us (albeit over longer amounts of time) - then we might have to come up with a new definition of 'intelligence'. But then we don't have a decent definition of that word anyway... so take what you like from that! SteveBaker (talk) 13:49, 30 March 2009 (UTC)[reply]
I don't agree with equating speed of test-taking with intelligence. If one student double-checks his work, for example, that's a good thing, even if neither had a mistake. Or perhaps one takes the time to write his work out very carefully and clearly, while the other only makes illegible scribbles. Thus the first student would make a good scientist or engineer, while the second can only qualify to be a doctor. :-) StuRat (talk) 14:08, 30 March 2009 (UTC)[reply]
As for why they limit time on tests, yes, that is for the convenience of the test givers (and takers). I'd say that time would need to be limited in some way, in any case, but they certainly don't need to limit it to the point where many students are unable to finish. This teaches students that giving quick answers is more important than right answers, which is a very bad thing. The way the standard school day is broken up is one problem. Each class is typically less than an hour long. In such a short time period, even one complex, multi-part question may be too much for a few students to complete. Some schools use a different system, where they have one class before lunch, and another after. This allows more time for a test (as long as they don't triple the test length), but can introduce a new problem. With such a long period students are likely to need bathroom breaks, which could allow them a chance to cheat by talking to each other or looking up things. StuRat (talk) 14:20, 30 March 2009 (UTC)[reply]
Building anything would be a challenge on an earth-like planet, but I suppose life might be easier on a planet with a very calm atmosphere and no oxygen in the air. You'd still have to worry about structural components wearing out while you were still building. If your house collapsed, it would happen so fast you wouldn't be able to perceive it.
That makes me wonder if the laws of physics would be easier or harder to work out. On the one hand, the planets would slide around in the sky on a timescale that lends itself to direct observation (Earth would orbit every 31 perceived seconds), but on the other hand your planet would be spinning at a dizzying rate. Ball-drop experiments and the like would also be difficult. How would early slow-creatures work out such basic principles? APL (talk) 14:50, 30 March 2009 (UTC)[reply]
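Checking APL's arithmetic with a minimal sketch (the million-fold slowdown is the factor assumed at the top of the thread):

# Perceived orbital period for a creature living a million times slower.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 3.16e7 s
slowdown = 1e6                          # factor assumed in the question
print(SECONDS_PER_YEAR / slowdown)      # about 31.6 perceived seconds

which matches the 31-second figure above.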

Possibility of recovering methane hydrate from seabed

Hello. I have come up with an idea for a business involving methane recovery and refining. As you may be aware, there are numerous methane reserves under the seabed. There are estimates that the energy content surpasses that of the oil reserves by as much as a factor of two! Methane exists down there as a hydrate. By any means, is it possible that the methane hydrate could be recovered without the risk of releasing all that gas and setting off a climate catastrophe? And is it possible that the methane hydrate recovered could be refined into methane or some other usable fuel? If it does work, please don't run away with my idea and make money for yourself, as it was my idea to start that business. Thanks to all who answer.--Under22Entreprenuer discuss 21:21, 29 March 2009 (UTC)[reply]

Unsurprisingly, you are far from the first, or the only, person to have such an idea. [24] [25] [26] Nil Einne (talk) 21:40, 29 March 2009 (UTC)[reply]
The problem is two-fold. Firstly, these things exist only in the deepest parts of the ocean - where mining would be exceedingly difficult. Secondly, extracting energy from the stuff produces CO2 - which produces the climate catastrophe you were trying to avoid. Granted, methane is a nastier greenhouse gas than CO2 - but it decomposes in the upper atmosphere after a fairly short amount of time (10 years maybe) - whereas CO2 stays there pretty much indefinitely. Sadly, this idea is very far from being new... and you're not going to make a fortune from it. SteveBaker (talk) 21:55, 29 March 2009 (UTC)[reply]
Unfortunately... But I have other ideas for businesses.--Under22Entreprenuer (talk) 00:38, 30 March 2009 (UTC)[reply]
See methane hydrate. Hydrates form in ocean sediments at depths (relative to the ocean surface) of approximately 300-700 m, and hence exist on continental shelves. They do not form in the deep ocean at all as methane hydrates are not stable there. As we have learned more about these hydrates it appears that most deposits are probably not economically viable due to concentrations of only a few percent methane by mass relative to sediment. Dragons flight (talk) 22:17, 29 March 2009 (UTC)[reply]
Methane hydrates have already been observed to be melting, so if you want to avoid a climate catastrophe, you should act quickly. Unfortunately, CH4 hydrates are effectively another fossil fuel, and will release CO2 into the atmosphere. However, if there's a way of burning it without oxygen, or sequestering the emissions, then it may be a more eco-friendly method. ~AH1(TCU) 23:26, 29 March 2009 (UTC)[reply]
If it's not economically viable to extract them, then extracting them to save the planet is potentially a bad idea, as you may make the situation worse from all the energy you expend extracting them. Nil Einne (talk) 05:16, 30 March 2009 (UTC)[reply]

March 30

why don't sugars taste like alcohols?

Forgive me, but the alcohol article says nothing about why OH groups (that are not part of COOH groups) on sugars or glycerine don't make their host molecule behave like alcohols. I mean, shouldn't sugars behave like extra-strong alcohols, what with their abundance of OH groups? If not, isn't the definition of an "alcohol" as given in the articles inaccurate? I mean, the presence of an OH group on a carbon chain cannot be the only thing that makes alcohols alcohols. John Riemann Soong (talk) 02:06, 30 March 2009 (UTC)[reply]

Alcohol is a generic class name for all compounds having a hydroxy (-OH) functional group. Do not confuse this with ethanol, a specific chemical compound with unique (pharmacological) properties that are caused by the whole molecule, not just by the isolated -OH part. Cacycle (talk) 03:14, 30 March 2009 (UTC)[reply]
But, methanols and propanols and cyclohexanols all behave similarly, have similar pKa's ... heck, even my octanol product produced by hydroboration-oxidation smells like all other alcohols I have known. And if I dared, I bet they would taste roughly the same -- pungent, overpowering, and quite bitter. Yet sugars taste sweet. John Riemann Soong (talk) 05:39, 30 March 2009 (UTC)[reply]
Sorry can't actually answer your question, but ethylene glycol tastes sweet. --TammyMoet (talk) 09:28, 30 March 2009 (UTC)[reply]
Water has an OH group as well, but that isn't sweet either. The point is that certain molecules will bind to the orthosteric site on the sweet receptors on your tongue; these include ethylene glycol, sucrose, sorbitol and many others (mostly sugars though). I personally haven't read into the pharmacology of sweet receptors, but I'm certain there will be papers out there describing the structure of the orthosteric site of those receptors and descriptions of exactly what molecular shape + functional groups are necessary in order to agonize those receptors. --Mark PEA (talk) 13:24, 30 March 2009 (UTC)[reply]
Alcohols have one hydroxyl group attached to a carbon chain. Sugars have many hydroxyl groups plus either an aldehyde or a ketone. They are quite different. Dauto (talk) 15:03, 30 March 2009 (UTC)[reply]
That doesn't tell me much. Why don't sugars then act like super-alcohols? John Riemann Soong (talk) 15:41, 30 March 2009 (UTC)[reply]

Developing immunity or resistance to a pathogen

This is a general question rather than a request for medical advice. People often get Upper respiratory tract infections manifesting as "colds", sneezing, sore throats, bronchitis, sinus infections, stuffy nose, coughing, fever, or the flu, and after a few days or a couple of weeks make a full recovery, without antibiotics or other medical treatment beyond rest and plenty of liquids. The impression is that the body has "fought off the infection." What mediates the acquired resistance of those who have gotten over the upper respiratory infection? What sorts of pathogens would be capable of reinfecting the person if he/she were re-exposed to the exact same germ/virus after recovery, and what sorts would be very unlikely to cause a recurrence after a new exposure? The section "Clearance and immunity" in Infectious disease does not tell me whether the victims of Typhoid Mary who survived typhoid, for instance, would likely be reinfected if they foolishly hired her again as their cook. If she were "Common Cold Mary," "Streptococcus Mary," "Pneumonia Mary," or "Influenza Mary" and were a persistent carrier, how likely would those who recovered from the exact pathogen she passed to them be to suffer a recurrence from her renewed presence - or, say, from eating frozen leftovers she had prepared which were full of the pathogen she carried? Edison (talk) 03:01, 30 March 2009 (UTC)[reply]

I'd say the following in somewhat indirect response to your question.
Most infections evoke both a humoral (antibody) and cell-mediated immune response.
The response is not always absolute: evoking either or both immune responses is not a guarantee that one is now protected absolutely from reinfection, but rather that one is less likely to develop severe disease on re-exposure to the organism in question.
The response fades over time; protective antibodies and cells capable of cell-mediated response gradually disappear from the circulation. They may be quicker to appear on re-exposure, but they don't just hang out in the circulation forever waiting for it.
With regard to typhoid, see [27]. It seems that within years such protective immunity as develops fades, so re-infection upon re-exposure is certainly possible - and is possible even before, as protective immunity is not absolute. - Nunh-huh 03:36, 30 March 2009 (UTC)[reply]
With respect to the flu and especially colds, the microorganisms constantly mutate, so immunity ceases to apply: the next time you're infected, it's likely to be by a different strain. StuRat (talk) 05:47, 30 March 2009 (UTC)[reply]

macrophage apoptosis and fever

Hi,

I am interested in the connection between macrophages and fever, especially if/how macrophage apoptosis is connected to the onset of fever. If I understood correctly, macrophages produce cytokines that have a role in the development of fever symptoms, but is this more the case with active macrophages, or do dying macrophages release cytokines in significant amounts? —Preceding unsigned comment added by Vuori12 (talkcontribs) 07:39, 30 March 2009 (UTC)[reply]

Humans on Titan's surface

Saturn's moon Titan has an atmospheric pressure that's actually slightly higher than Earth's, but it's extremely cold. Would it be possible to survive outside on Titan with little more than an oxygen tank and heavy insulation? If not, what exactly would you need to survive outside on Titan? I'm very curious. 63.245.144.68 (talk) 09:31, 30 March 2009 (UTC)[reply]

According to our article on Titan, the surface temperature is −179.45 °C, and that's without wind chill. By comparison, the coldest temperature ever recorded in Antarctica is −89.2 °C, and it takes far less than that to really do serious damage very quickly. In simple terms, any exposed skin is going to freeze solid pretty much on the spot. Proper insulation would help, sure, but we're probably talking full-body coverage here -- in other words, essentially a space suit. I'm not sure if it would have to be completely airtight, but you couldn't just dress really, really warmly by conventional standards and wear a respirator and expect to survive. -- Captain Disdain (talk) 09:51, 30 March 2009 (UTC)[reply]
Atmospheric composition aside, I don't think there's any place in the Solar System (apart from Earth) where humans can survive the temperature. But please correct me if I am wrong. A Quest For Knowledge (talk) 12:14, 30 March 2009 (UTC)[reply]
Go down deep enough into any of the outer planets and the temperature will rise enough for humans to survive. They mightn't like the pressure though :) More seriously, Mercury rotates slowly enough that there are places where a person could survive the temperature for a while. Dmcq (talk) 12:40, 30 March 2009 (UTC)[reply]
Oh, depending on where and when you are, Mars sports some fairly nice temperatures -- it can get up to 20 °C in the summer, which means that you could walk around in shorts and not get chilly. Of course, the very low atmospheric pressure would cause other problems, but in terms of the temperature, it wouldn't be that bad. Still, in nasty winter conditions, it can get as low as −140 °C, so you'd definitely better be sure about the weather before you step outside... -- Captain Disdain (talk) 12:53, 30 March 2009 (UTC)[reply]
I suppose that temperature is rather important for manned planetary exploration, but not because of the possibility of walking around without a space suit, as pressure and atmospheric composition will likely make one necessary in any case. Where I think the importance lies is in the energy needed to heat the human habitat - or, in the case of a location that gets too hot, the energy required to cool the habitat plus the added weight of an air conditioning unit (or do they automatically include a temperature control unit that both heats and cools?). StuRat (talk) 14:00, 30 March 2009 (UTC)[reply]
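As a rough illustration of that heating load, here is a back-of-envelope steady-state conduction estimate in Python. Every figure in it (insulation conductivity, wall area, thickness) is an assumption chosen only for the sketch, and it ignores convection, which Titan's dense atmosphere would add on top:

    # Conductive heat loss through habitat insulation: Q = k * A * dT / d.
    # All numbers below are illustrative assumptions, not mission figures.
    k = 0.02           # W/(m*K), a good insulating foam (assumed)
    area = 100.0       # m^2 of habitat wall (assumed)
    d = 0.3            # m of insulation thickness (assumed)
    t_inside = 293.15  # K, a 20 C interior
    t_titan = 93.7     # K, Titan's -179.45 C surface

    q_titan = k * area * (t_inside - t_titan) / d
    print(f"Heater power needed on Titan: {q_titan:.0f} W")  # ~1330 W

    # The same habitat on a -89.2 C Antarctic night, for comparison:
    q_antarctic = k * area * (t_inside - 183.95) / d
    print(f"Same habitat in Antarctica: {q_antarctic:.0f} W")  # ~730 W

On this cartoon model, Titan roughly doubles the Antarctic heating load rather than multiplying it by orders of magnitude, since conduction scales linearly with the temperature difference; the real penalty would come from convective losses into that cold, dense atmosphere.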

monochrome TV transmitter and receiver

please help me with the block diagram and description of a monochrome TV transmitter and receiver —Preceding unsigned comment added by 117.206.32.67 (talk) 12:35, 30 March 2009 (UTC)[reply]

This appears to be a homework question, which Ref Desk policies prevent us from answering directly: we believe that the value of homework lies in doing it yourself. However, we also have an article on how television works, linked helpfully from our television article. If you have specific points you'd like to clarify, you're welcome to return here and ask. — Lomn 13:06, 30 March 2009 (UTC)[reply]
The how-it-works article is very good, but boy is it going out of date fast. Soon you won't be able to get anything except digital television, and there'll be no need for those dangerous high voltages because the display is an LCD screen or something similar. And who'd bother with a monochrome LCD screen? Dmcq (talk) 15:34, 30 March 2009 (UTC)[reply]

Send analog values over isolated grounds

I am trying to transfer an analog value (0-5 V) from one microcontroller to another. The microcontrollers are on different power sources (their grounds are isolated) and need to be kept that way. I can easily transfer digital data with optocouplers, but I can't find an easy solution for analog signals. Another constraint is that I have a limited number of I/Os that I can use.

Here are some crazy ideas that I've had to solve this:

  • Convert the analog signal to an AC signal with amplitude modulation. Transfer the AC signal using a 1:1 coil and then convert the AC signal back to DC. Seems too complicated.
  • Convert the analog signal with an 8-bit ADC and use an optocoupler for each line. Too many I/Os (though the bits could be clocked out serially over one line; see the sketch below).
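For illustration, here is a minimal Python sketch of that serialized variant of the second idea: the 8 ADC bits framed UART-style (start bit, data bits, stop bit) so a single optocoupler line can carry them. The framing is an assumption chosen for the sketch; on the real microcontrollers the two functions would become timed GPIO writes and reads:

    # Hypothetical framing: start bit (0) + 8 data bits (LSB first) + stop
    # bit (1); the line idles high. Pure simulation - no hardware access.
    def encode_frame(adc_value):
        """Turn an 8-bit ADC count into the list of line levels to send."""
        assert 0 <= adc_value <= 255
        bits = [0]                                        # start bit
        bits += [(adc_value >> i) & 1 for i in range(8)]  # data, LSB first
        bits.append(1)                                    # stop bit
        return bits

    def decode_frame(bits):
        """Recover the ADC count on the receiving side."""
        assert bits[0] == 0 and bits[-1] == 1, "framing error"
        return sum(bit << i for i, bit in enumerate(bits[1:9]))

    # Round trip: 3.3 V on a 0-5 V, 8-bit scale is round(3.3/5 * 255) = 168.
    counts = round(3.3 / 5.0 * 255)
    received = decode_frame(encode_frame(counts))
    print(received, received * 5.0 / 255)  # 168, ~3.29 V

The trade-off is speed for pins: each transfer takes ten bit-times instead of one parallel strobe, which is usually acceptable for a slowly varying analog value.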

Any suggestions? --jcmaco (talk) 15:42, 30 March 2009 (UTC)[reply]