
Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 82.31.133.165 (talk) at 00:41, 17 December 2011. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the science section
of the Wikipedia reference desk.
Select a section:
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


December 12

Started with the big bang

I am not an expert of any kind. This question is supposed to be: what findings could discredit the whole big bang theory? (you can answer if you like) And then I wondered: is it more productive to try to disprove scientific theories and findings rather than to find more proof for them? (please correct any grammatical errors or spelling) MahAdik usap 00:01, 12 December 2011 (UTC)[reply]

The fundamental evidence of the big bang is that everything we can see in space is moving away from everything else. In other words, space is expanding. Also, looking into the past (if that doesn't make sense, ask about viewing things many light years away), space has always been expanding. So, if it has always been expanding, it must have been much smaller in the past. At some point, it was really small and it got bigger - that is the big bang. To disprove it, you have to show that at some point in the past, space wasn't expanding. Then, it could have been larger at some point in the past, not smaller. You could get into the specifics of the big bang and try to disprove some small detail. The details are always being examined and theories about the details change over time. -- kainaw 00:06, 12 December 2011 (UTC)[reply]
Scientists are always trying to disprove theories. You can't prove them - there is no amount of evidence that could make you 100% certain it's right. Scientists do experiments to see if the theory correctly predicts the results. If it gets it wrong, you've disproven the theory. If it gets it right, then nothing really changes. If, after lots of attempts to disprove a theory, no-one has succeeded, then you conclude that it is probably right. --Tango (talk) 00:28, 12 December 2011 (UTC)[reply]
Does that mean you can disprove gravity? MahAdik usap 00:48, 12 December 2011 (UTC)[reply]
Been there, done that, with general relativity.
As for the OP's question, if a new theory was invented that explains the cosmic microwave background, the redshift of distant galaxies, and the current composition of the universe to a greater precision than the Big Bang theory, the Big Bang theory would be disproved. What might that theory look like? Well, if scientists knew, they wouldn't be supporting the Big Bang theory. --140.180.15.97 (talk) 00:58, 12 December 2011 (UTC)[reply]
Science is never really static; it's a balance between induction and deduction - coming up with new ideas and finding ways in which they might be disproved or replaced with better ideas. Some fields of science are predominantly about disproving things. By pinpointing what is impossible you can reliably construct a suitable enough explanation for what is possible. The concept of the scientific method and subsequent peer review is one example of that. If a conclusion cannot be observed or replicated in other experiments, its reliability goes down.
The Theory of Gravity itself is one example: it is now known to be incorrect, but it was reliable enough to be used until the advent of the Theory of General Relativity (which is also imperfect and is being challenged by Quantum Mechanics). Even the modern Theory of Evolution is very different from the original theory by Darwin. That's why the highest level of reliability in Science is and always will be theoretical, because it never presupposes something to be infallibly true. Do not confuse the usage of "Theory" in scientific terminology with colloquial usage though: the former has a great deal of reliability, the latter almost none.-- Obsidin Soul 01:22, 12 December 2011 (UTC)[reply]

Regression to the mean

It seems to me that Darwin's theory of evolution, as he formulated it, doesn't work because any variation would quickly be destroyed due to regression to the mean. How did Darwin explain away this problem, considering that he had no idea about basic Mendelian inheritance, let alone the Hardy-Weinberg equilibrium or modern genetics? --140.180.15.97 (talk) 02:14, 12 December 2011 (UTC)[reply]

It's the "mean" itself that drifts. I don't know if he formally addressed the issue, but he first carefully outlined how his theory already works in terms of artificial selection (pigeon and dog breeding, for example). He then proposed that natural selection could replace the hand of man to impart a similar, albeit slower or more subtle, influence. There were several such "problems" even after Darwin's time, but overall his theory fit the available evidence well enough that the "problems" didn't present absolutely unambiguous refutations, just "loose ends" that needed working out. We're still discovering more loose ends and working them out to this day. In fact it's more correct to call our current understanding the Modern evolutionary synthesis than Darwinian evolution, just because we have built so much since Darwin's time, but there is no question that Darwin's work lies at the foundation. Vespine (talk) 03:08, 12 December 2011 (UTC)[reply]
Regression to the mean doesn't really apply here. For it to apply, a single animal must be born with exceptional mutations - very exceptional. Then, that animal's offspring will regress to the mean because it simply wouldn't be possible for such exceptional mutations to be maintained. But that was never Darwin's theory. He didn't claim that a fish was swimming around and suddenly gave birth to a monkey. He suggested that very small, almost impossible to detect, variations could occur with each offspring. If one of those mutations were to give the offspring a better shot at reproducing, then that mutation would be passed to its offspring, who may pass it to their offspring, and so on. Then, as Vespine noted, the mean will shift to include the mutation. -- kainaw 05:01, 12 December 2011 (UTC)[reply]
But the mean shifts by a smaller amount for minor mutations than for exceptional mutations. If a fish had a mutation that made it 10% stronger, for example, it doesn't matter how fit the fish is reproductively. Even if it had a 100% chance of reproducing, its mate will almost certainly not have the mutation, so their children will only be 5% stronger, and their grandchildren 2.5% stronger, etc. Within a few generations, the mutation will be forgotten, unless the natural selection is so strong that it can combat the exponential decay of the mutation. Of course, we know now that DNA can remember mutations from the beginning of life to the end of time, but I thought scientists in Darwin's time believed the child's traits were simply the average of its parents' traits. --140.180.15.97 (talk) 06:33, 12 December 2011 (UTC)[reply]
Is this a question? In any case, there is no 'mean' here, except in as much as there is a 'genetic mean' - an (imaginary) 'average' phenotype - but that is exactly where evolution is operating, if you want to study it from a population perspective. It is true enough that Darwin didn't understand the finer points of the mechanisms of inheritance - and thus found some of his conclusions difficult to reconcile with what was understood at the time regarding the subject - but that turned out to be a problem with the contemporary understanding of such mechanisms, rather than with Darwin's theory. 'Darwinism' survived (and evolved) not because it revealed all the answers, but because it presented a 'fitter' explanation of what was evident from studies of nature than was previously available. That is how science works, and why dogmatism doesn't... AndyTheGrump (talk) 06:46, 12 December 2011 (UTC)[reply]
My question is exactly whether or not Darwin found his conclusions "difficult to reconcile with what was understood at the time regarding the subject". Did he regard regression to the mean as a problem that's difficult to reconcile with his theory, or did he explain it in some way? Obviously the modern evolutionary synthesis deals with regression very well, but that's not what I'm interested in. --07:13, 12 December 2011 (UTC) — Preceding unsigned comment added by 140.180.15.97 (talk)
Sorry, but your issue is based on an incorrect premise. To wit: "If a fish had a mutation that made it 10% stronger ... even if it had a 100% chance of reproducing, its mate will almost certainly not have the mutation, so their children will only be 5% stronger, and their grandchildren 2.5% stronger, etc." This is incorrect, as genetics doesn't work like this - to keep it simple, for a basic single gene trait, if the parent with the 10% stronger gene (however you want to interpret that in real life) bred, any offspring that got the gene would also be 10% stronger, and therefore more likely to survive as well, and also pass on the 10% stronger mutation. Meanwhile, those that didn't get the gene would be weaker and more likely not to survive and reproduce. There are no 5% stronger fish. Genes don't get diluted down in the way you propose. (Of course I'm simplifying things greatly, ignoring dominance and recessiveness, etc, but for brevity's sake this is entirely valid.) Thus the trait doesn't regress to the mean, but rather the 10% stronger fish survive and reproduce until that gene and the phenotype it codes for becomes more prevalent in the population. This regression to the mean/dilution of traits argument has long been a creationist tool for those that want to go beyond their more basic nonsense and confuse those with slightly higher levels of reasoning. In terms of Darwin, IIRC he indeed understood this argument and knew it was wrong, as he'd done extensive experimentation to prove that traits were not diluted/averaged, but of course he didn't have the knowledge of genetics to explain why it was wrong. I'll have a look when I get home to see if I can find exactly where he addressed it. --jjron (talk) 07:47, 12 December 2011 (UTC)[reply]
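The arithmetic behind the two models in this exchange can be sketched in a toy simulation (this is an illustration, not anything from the thread; the population size, fitness advantage, and generation counts are arbitrary choices). Under pure blending, a carrier's advantage halves each generation when mating with average individuals; under particulate inheritance with selection, the allele is passed on undiluted and its frequency grows:

```python
def blending_advantage(generations=10, initial_boost=0.10):
    """Pure blending inheritance: each generation the carrier line mates
    with average individuals, so the offspring's advantage is the mean of
    its parents' - i.e. it halves every generation."""
    history = [initial_boost]
    for _ in range(generations):
        history.append(history[-1] / 2)
    return history

def allele_frequency(generations=20, pop_size=1000, advantage=0.10):
    """Particulate inheritance under selection (simple haploid model):
    carriers leave 10% more offspring on average, and the allele itself
    is passed on intact, so its frequency rises instead of decaying."""
    freq = 1.0 / pop_size  # a single initial mutant
    history = [freq]
    for _ in range(generations):
        mean_fitness = freq * (1 + advantage) + (1 - freq)
        freq = freq * (1 + advantage) / mean_fitness  # standard selection update
        history.append(freq)
    return history

blend = blending_advantage()
allele = allele_frequency()
print(f"blending: advantage after 10 generations = {blend[-1]:.6f}")
print(f"particulate: allele frequency after 20 generations = {allele[-1]:.6f}")
```

The first function is exactly the "5% stronger, then 2.5% stronger" objection: the advantage decays exponentially toward zero. The second shows why a discrete, undiluted gene escapes that decay, which is the point being made above.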
OK, couldn't find exactly what I hoped, but here's a little further information. FWIW at the time this concept was referred to as blending inheritance, which I probably should have remembered. In Bully for Brontosaurus (Chap 23, "Fleeming Jenkin Revisited"), the late Darwin expert Stephen Jay Gould writes "Darwin had pondered long and hard about problems provoked by blending ... As for recurrent, small scale variation, blending posed no insurmountable problem, and Darwin had resolved the issue in his own mind long before reading Jenkin. A blending variation can still establish itself in a population under two conditions: first if the favourable variation continues to establish itself anew so that any dilution by blending can be balanced by reappearances, thus keeping the trait visible to natural selection; second, if individuals bearing the favoured trait can recognize each other and mate preferentially - a process known as assortative mating in evolutionary jargon. Assortative mating can arise for several reasons, including aesthetic preference for mates of one's own appearance and simple isolation of the favoured variants from normal individuals. Darwin recognised both recurrent appearance and isolation as the primary reasons for natural selection's continued power in the face of blending." It's probably worth reading the whole essay though if you can get your hands on it.
Incidentally there's also a bit more discussion of this in the blending inheritance article, including statements such as "Darwin himself also had strong doubts of the blending inheritance hypothesis, despite incorporating a limited form of it into his own explanation of inheritance published in 1868" and "Moreover, prior to Jenkin, Darwin expressed his own distrust of blending inheritance to both T.H. Huxley and Alfred Wallace."
In a quick look in Darwin's epochal On the Origin of Species, in the "Variation Under Domestication" chapter he writes: "If it could be shown that our domestic varieties manifested a strong tendency to reversion ... so that free intercrossing might check, by blending together, any slight deviations of structure, in such case, I grant that we could deduce nothing from domestic varieties in regard to species. But there is not a shadow of evidence in favour of this view ...". He then goes on to a fairly lengthy discussion of almost straight Mendelian inheritance based on his extensive personal research into pigeon breeding. A few pages later he writes: "There can be no doubt that a race may be modified by occasional crosses, ... but that a race could be obtained nearly intermediate between two extremely different races or species, I can hardly believe.", which seems to express considerable doubt on the concept of blending. Anyway, that's a bit more to your question directly based on Darwin. Yes, he had thought about it, and no, it didn't lead to his theory not working. --jjron (talk) 11:14, 12 December 2011 (UTC)[reply]
From a strictly historical point of view, it's worth noting that on the fine points of how heredity worked, Darwin was very confused, very wrong. He didn't realize quite how wrong he was when he published Origin, but by the time Descent of Man came out, he was aware that this was something to address if his theory was to have any standing. Darwin's pangenesis theory was his attempt at this. It was very confused and this was really not Darwin's strong spot (it was partially a "particulate" theory of heredity, like later Mendelism, but also had aspects of blending, as well as odd quasi-Lamarckian feedback loops — just a big mess of ideas). Darwin was not worried so much about regression as he was about limited variation, which pangenesis tried to get around.
The problem of regression to the mean was heavily studied by later evolutionists, led primarily by Francis Galton, Darwin's younger cousin. Galton was the one who coined the term "regression to the mean" and formulated it statistically, though originally he called it "regression to the mediocre," which says something about Galton's interests. Galton had his own not-very-good theory of heredity that was derived from Darwin's, and those of the Galtonian school — Karl Pearson and the other "biometricians" — saw themselves as distinctly non-Darwinian in the sense that they didn't believe that gradualism actually worked. This is part of what set the stage for decades of clashing between the biometricians and the Mendelians later, neither of whom saw themselves as strictly Darwinian. In fact, it was entirely common for both camps to give Darwin credit for showing that evolution had occurred, while holding that natural selection was foolishly wrong. Julian Huxley called this period (from Darwin's death through the 1930s) the "eclipse of Darwinism". Note that this period was not anti-evolution at all — just anti-Darwinian, in the sense that even the gradualists and the "saltationists" (e.g. the Mendelians and the mutation theorists) thought that natural selection by itself wasn't enough. There's a very amusing note in Karl Pearson's biography of Galton (written in the 1920s, I believe) where he says that it's unfortunate that today everybody knows how wrong Darwin was. It comes as quite a shocking statement from such an eminent scientist if you're not aware of what the debates were (and what Pearson's role was in them).
The resolution for this is well known today — it's the modern evolutionary synthesis, which combines all of the good insights from the biometricians and the Mendelians, and shows how they actually work out for a pretty great way of understanding why natural selection was right all along.
It's an interesting history, I think, and it also throws a little light on how bad the "Darwin came up with this and everybody saw he was right, except the Church" narrative is: it is very wrong. Recommended reading (a very short, very well-written, very intelligent little volume): Peter J. Bowler, The Mendelian Revolution: The Emergence of Hereditarian Concepts in Modern Science and Society (Baltimore: Johns Hopkins University Press, 1989), esp. chapters 2-4 if you're curious about this period. Big figures in this debate included Francis Galton, Karl Pearson, August Weismann, Walter Frank Raphael Weldon, and esp. William Bateson. --Mr.98 (talk) 16:47, 12 December 2011 (UTC)[reply]
True to an extent, but a lot of Darwin's 'confusion' and 'errors', such as with pangenesis, came about largely in his attempt to respond to critics, and make allowances and corrections for 'problems' that had been identified with his original work. Of course, as we know today, many of the identified problems were not actually problems at all, but with the limited knowledge available at the time they seemed far more significant, especially when a number of very prominent biologists around the world remained ardent anti-evolutionists and there was quite a lot of tension about. Part of the reason Darwin took so long to publish (there's pretty strong evidence he sat on his theory for something like 22 years before being brought to publish) was that he was essentially trying to have all bases covered, and have pre-prepared responses to any likely criticisms. This is also why, if anyone's going to read The Origin of Species, they should make sure they read the first edition, as later editions had many (in hindsight ill-considered) changes he made in an attempt to counter the critics. --jjron (talk) 10:18, 13 December 2011 (UTC)[reply]
I tried to limit my discussion of Darwin's confusion and errors to his discussion of heredity. He really had no clue how heredity worked and struggled to come up with a model that would work with his theory of evolution. I don't blame Darwin for this; his critics didn't know how it worked either, and it took a long time to fully flesh out. Heredity is not an insignificant part of evolution, obviously. You can black box that sort of thing if you want to, but I don't blame the biologists for saying, "hold on, how is this supposed to work on a cellular level?" immediately afterwards. Again, it's to Darwin's credit that he convinced them that evolution had occurred in the first place. It is to his lasting legacy, and the reason he is so celebrated today, that his mechanism turned out to be quite on the nose, despite his black boxing, and then fudging, of the hereditary mechanism. --Mr.98 (talk) 20:54, 13 December 2011 (UTC)[reply]
Darwin did explicitly address this issue. His argument was that it was the mean itself (say the mean height of giraffes for example) that drifted over many generations. This position is independent of the mechanism of inheritance (of which Darwin had no understanding). --catslash (talk) 13:44, 17 December 2011 (UTC)[reply]

Jehovah's witnesses denial of blood and Hippocratic Oath

If a doctor is bound by the Hippocratic Oath to preserve life and must not play god, but would have to give a blood transfusion to an unconscious Jehovah's Witness, what should he do? Which rule takes preference? — Preceding unsigned comment added by 88.9.111.78 (talk) 03:11, 12 December 2011 (UTC)[reply]

This is ultimately a legal question; such decisions are generally not left for individuals to make on the spot. Jehovah's_Witnesses_and_blood_transfusions and Criticism_of_Jehovah's_Witnesses#Legal_considerations go into some detail. Vespine (talk) 03:38, 12 December 2011 (UTC)[reply]
I remember reading a newspaper article about a pregnant Jehovah's Witness who rejected a transfusion urged by a doctor and died. Here's a more recent incident, where a minor/teenager in a car accident did the same thing. On the other hand, Canada's Supreme Court ruled that a minor's rights weren't violated when she was given a transfusion against her will. Clarityfiend (talk) 05:18, 12 December 2011 (UTC)[reply]
This should actually go to the humanities desk. Clarityfiend (talk) 06:17, 12 December 2011 (UTC)[reply]
It would probably depend on national/state law and on the doctors' knowledge of the patient's wishes. Many medical facilities require the consent of the patient (or possibly next-of-kin) for medical procedures, and would if possible seek consent in any circumstances. In the UK if it's not possible to ask for consent in an emergency the patient can be treated if the doctor judges it's necessary, but if the patient has explicitly refused the treatment it should not be given (subject to numerous conditions and requirements). In the UK you can refuse treatment even if that means you will die, but other nations may have different laws. The UK practice is described in detail here: [1][2] --Colapeninsula (talk) 10:57, 12 December 2011 (UTC)[reply]
I've been told that British doctors no longer take the Hippocratic oath, although clearly they are under similar professional obligations. The latter could, of course, have a specific policy for this issue in a way the oath would not. Grandiose (me, talk, contribs) 11:05, 12 December 2011 (UTC)[reply]
The Hippocratic oath is mostly about how an apprentice should respect his master. Nobody has actually taken it in centuries. --Tango (talk) 12:17, 12 December 2011 (UTC)[reply]
That depends heavily on how you interpret "taken" -- my sister's graduating class, just a few years ago, recited the oath, complete with Apollo, Asclepius, and the rest. A caveat was provided that the oath was more symbolic than literal -- for example, "belief in Apollo the healer" was specifically noted as neither implied nor required -- but the impression was given that this remains not uncommon at US medical schools. I believe the abortion clause also got talked around some in the explanatory text, but I can't recall the specifics there (as that's one of the more highly-charged and relevant bits). — Lomn 15:00, 12 December 2011 (UTC)[reply]

Soybean and male infertility

Wikipedia says soybean has no effect on male fertility which is sourced to a 2010 journal article. But a 2008 BBC news piece, citing another journal article, claims soybean is responsible for male infertility. Which is true and what is the latest consensus among scientific community? --Foyrutu (talk) 09:05, 12 December 2011 (UTC)[reply]

The BBC article you link to discusses a single study. Our article specifically mentions:
Because of the phytoestrogen content, some studies have suggested that soybean ingestion may influence testosterone levels in men. However, a 2010 meta-analysis of 15 placebo controlled studies showed that neither soy foods nor isoflavone supplements alter measures of bioavailable testosterone or estrogen concentrations in men
In other words, it doesn't dispute that a few studies may suggest there is a possible connection. But studies do that all the time with a large variety of things; it doesn't mean there is anything to worry about, although it may suggest to scientists there's something to look into. In fact, from the BBC article itself, it seems all the study found was a correlation between soya consumption and low sperm count. The BBC article doesn't even mention if those conducting the study attempted to correct for possible confounding factors (since it only involved 99 participants this would seem difficult to do) and I'm too lazy to look it up. But heck, the BBC article itself mentions there are reasons to take the study with care.
What our article makes clear is that a 2010 meta-analysis of 15 studies suggests there is no effect on testosterone or estrogen concentration. And from the description in our article, it sounds like these were high quality studies where, rather than just looking for a correlation, soy consumption was varied and some possible effects (I haven't checked the article so I don't know if they checked anything other than testosterone or estrogen concentration) which may affect fertility were studied. This in itself is generally significantly more reliable than simply looking for a correlation among the general population based on their existing soybean consumption. When you have a meta-analysis of these studies, even better (and incidentally that is something akin to what we require in medical cases, see Wikipedia:Identifying reliable sources (medicine)).
Nil Einne (talk) 12:39, 12 December 2011 (UTC)[reply]

Non-placebo effect?

Hi, is there any known effect in which a patient is given a real medication, but is told (or believes) that she is given a placebo, and as a result does not react to the medication? Gil_mo (talk) 10:26, 12 December 2011 (UTC)[reply]

It's not quite what you are asking about, but see nocebo. --Tango (talk) 12:22, 12 December 2011 (UTC)[reply]
I read about nocebo, thanks, I am asking about something else. Gil_mo (talk) 12:32, 12 December 2011 (UTC)[reply]
Intentional deception of clinical trial subjects is frowned upon and done only when necessary due to ethics rules, so this isn't something that would be seen in medical research on a regular basis. I don't think it has a formal name, it's just an aspect of the placebo effect. This might be interesting reading: it more or less states that placebo effects generally only affect how patients feel, which only affects things like pain and nausea, so "not react" is difficult to interpret. SDY (talk) 14:45, 12 December 2011 (UTC)[reply]
The standard in medication studies is the double-blind study. The people who prescribe the medications do not know which patient is which. Some are given real medication. Some are given a placebo. Then, the doctor who meets the patients doesn't know if the prescription is a medication or a placebo. So, it is very possible for a doctor to tell a patient "this is probably a placebo" when it isn't. The doctor doesn't really know. -- kainaw 15:54, 12 December 2011 (UTC)[reply]
I don't have any sources for you, though I would guess in most cases the medicine would do as expected (if you poison someone's drink, they still get sick, even though they don't expect it). To be honest, I doubt anyone has done any real studies where they informed patients they were getting a placebo and then gave them something else; there would be no real point to this and it would be of questionable ethics (depending on how it was done). Though, I would imagine if you told people that something wasn't going to be effective and it had a subjective element, then it might have a lessened effect (for example, if you tell someone who is stopping smoking that nicotine patches don't help, I bet they would report a harder time; same thing if you told someone that Advil doesn't help with headaches). But this isn't really the same thing; here the patient is having their subjective state more strongly influenced by what they are being told/expecting than by what they observe; but nobody is telling them that the nicotine patch is a bandaid, then watching to see if they have an easier time quitting. Was there some specific context you were curious about? Phoenixia1177 (talk) 05:46, 13 December 2011 (UTC)[reply]
No specific context, sheer curiosity. Your answer makes perfect sense. Gil_mo (talk) 06:26, 13 December 2011 (UTC)[reply]

un-work hardening

A friend told me that when plumbers buy flexible copper tubing it can be bent easily but if they leave it alone for a couple of years it will become too stiff to use. This seems to be the opposite of work hardening. Is there a term for this property? RJFJR (talk) 15:04, 12 December 2011 (UTC)[reply]

I'd just say ageing... see (maybe) precipitation hardening. --Ouro (blah blah) 15:32, 12 December 2011 (UTC)[reply]
Strike that, not relevant. --Ouro (blah blah) 15:35, 12 December 2011 (UTC)[reply]
Isn't it just called 'natural aging' when it happens at room temperature ? Sean.hoyland - talk 15:50, 12 December 2011 (UTC)[reply]
Is this even true? The only place I have used this in my house is running gas line to appliances (never for water). And there it is intended for devices which are only ever moved at several year intervals. If it became brittle it would create serious issues. Rmhermen (talk) 16:37, 12 December 2011 (UTC)[reply]
Natural aging via processes like precipitation hardening at ambient temperatures is certainly true for some alloys. Don't know about copper piping though or whether it would ever make it brittle over decades. Copper pipes approved for use as gas lines are commonplace aren't they ? I assume natural aging isn't an issue or at least someone who knows what they are talking about when it comes to copper pipes (not me) has already thought of it. Sean.hoyland - talk 18:26, 12 December 2011 (UTC)[reply]
I'd expect a patina to form on the surface if the tubing isn't coated with something to prevent it. The patina is likely harder than the base copper, so might crack or flake off if it's a thin layer, when you try to bend the copper tubing, or might be stiff enough to prevent bending, if it's a thicker layer. Presumably most copper tubing only needs to be flexible when initially installed. After that, the need to bend it isn't likely to come up until it needs to be replaced, and then it can just be cut into sections and removed. StuRat (talk) 01:04, 13 December 2011 (UTC)[reply]
No, copper tubing is used on appliance gas lines where they are occasionally moved for cleaning/repair (the appliances, not the tubing). You keep a couple of large loops of tubing to allow for the distance travel and are careful not to kink it. They never put the gas connection on the front of a stove, for instance. Rmhermen (talk) 19:28, 13 December 2011 (UTC)[reply]

Most dense ceramic

What is the most dense ceramic and how dense is it? ScienceApe (talk) 15:53, 12 December 2011 (UTC)[reply]

I see a claim made that cerium oxide-stabilised zirconium oxide is the densest.[3] But it may be only a promotional claim. Rmhermen (talk) 16:41, 12 December 2011 (UTC)[reply]
Tungsten carbide can be used as a ceramic material, and it's more than twice as dense. ~Amatulić (talk) 01:25, 13 December 2011 (UTC)[reply]

Elvis monkey

What is this new "Elvis monkey" Reuters speaks of, and can somebody please redirect the redlink in the subject line to the proper species article. Rgrds. --64.85.221.193 (talk) 17:25, 12 December 2011 (UTC)[reply]

It's Myanmar snub-nosed monkey. One Reuters article isn't evidence that anyone other than one lazy journalist uses this term, so a redlink isn't appropriate. -- Finlay McWalterTalk 17:29, 12 December 2011 (UTC)[reply]
A photo(shop) of it is here. It doesn't remotely look like Elvis. -- Finlay McWalterTalk 17:32, 12 December 2011 (UTC)[reply]
Thank you kindly, Mr. McWalter. Turns out I already knew about that discovery, just didn't make the connection. Oh, well. Rgrds. --64.85.221.193 (talk) 17:45, 12 December 2011 (UTC)[reply]

Method of image charges

If a problem in electrostatics is solved by using the method of image charges, the induced charge on the conductor is always equal to the image charge. Is there a simple reason for this? 65.92.7.9 (talk) 21:25, 12 December 2011 (UTC)[reply]

It's a simple syllogism. If the induced charge were different from the one in the solution, then the solution wouldn't be a solution. Dauto (talk) 00:41, 13 December 2011 (UTC)[reply]
See Gauss's theorem in case my brief explanation above isn't clear. Dauto (talk) 00:43, 13 December 2011 (UTC)[reply]
Sorry, I don't understand, nor do I see the connection with Gauss' theorem. 65.92.7.9 (talk) 01:33, 13 December 2011 (UTC)[reply]
Which do you find simplest? Each of these is a restatement of the derivation provided in our article:
  • The solution to the boundary value problem and the (unphysical) assumption of a non-conductive volume on the other side of the ground plane is uniquely a point-charge of value -q.
  • The construction of a ground-plane sets up a problem with spatial symmetry; the equations that define electrostatics consequently result in charge symmetry.
  • The integral of the surface charge is determined by the electrostatic field due to the test charge.
These "simple" answers are a little more vague than the complete solution of the defining equations, which are presented in detail in our article. Nimur (talk) 03:32, 13 December 2011 (UTC)[reply]
How do you conclude that the conductor's charge = image charge? I know that the image charge and induced charge produce the same electric field in the upper half plane, but couldn't it be that a certain charge distribution with total charge ≠ q produces the same E-field as a point charge q? 65.92.7.9 (talk) 13:54, 13 December 2011 (UTC)[reply]
That's where Gauss's theorem comes in. If the two solutions produce the same field outside of the conductor, then they must have the same total charge inside of the conductor, because the total charge can be obtained from the field by a surface integration, according to Gauss's theorem. Dauto (talk) 15:02, 13 December 2011 (UTC)[reply]
Oh, okay! I can see that for the spherical conductor example in the article, but I'm having trouble seeing what the Gaussian surface would be for the infinite plane conductor (first example). 65.92.7.9 (talk) 15:58, 13 December 2011 (UTC)[reply]
Closed-path integrals around regions with infinite size can be computed as Cauchy integrals. It's not a very intuitive concept. To be mathematically rigorous, you must start using terminology like "holomorphic" and "simply connected" and "complete subset." The physicist in me does not like having to fall victim to such mathematical lingo. In any case, if you're willing to accept equations at face value, you can compute an integral for the Gaussian surface around the infinitely-sized region underneath the grounding surface. If you want to understand why that integral is valid, you should start by reading Methods of contour integration. Nimur (talk) 20:20, 13 December 2011 (UTC)[reply]
Another way to see that is to solve the problem for a sphere of radius R and then take the limit R → ∞. Dauto (talk) 08:36, 14 December 2011 (UTC)[reply]
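Dauto's Gauss's-theorem argument can be checked numerically for the grounded-plane example. A minimal sketch (my own illustration, with arbitrary values for the charge q and its height d; the surface-density formula is the standard grounded-plane result):

```python
import numpy as np

q, d = 1.0, 0.5  # test charge and its height above the grounded plane (arbitrary units)

# induced surface charge density on the plane: sigma(r) = -q d / (2 pi (d^2 + r^2)^(3/2))
r = np.linspace(0.0, 1000.0 * d, 400_001)
sigma = -q * d / (2.0 * np.pi * (d**2 + r**2) ** 1.5)

# integrate over the plane in polar coordinates (dA = 2 pi r dr), trapezoid rule
f = sigma * 2.0 * np.pi * r
total = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))

print(total)  # ≈ -q, matching the image charge
```

The finite disc of radius 1000 d misses only about 0.1% of the induced charge, so the integral comes out at about −0.999 q, in agreement with the surface-integration argument above.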


December 13

drywall mud

re-post, got archived

I was looking at the MSDS for a powdered drywall mud that you mix with water and that hardens in 20 minutes. What was strange to me is that it has formaldehyde in it, presumably as a biocide. I don't see any functional purpose for this, as it's a powder and it dries in 20 minutes. Can anyone shed some light on this? The link to it is below. --Jrbsays (talk) 11:26, 1 December 2011 (UTC)

http://www.usg.com/rc/msds/joint-compounds/sheetrock/durabond/sheetrock-durabond-20-joint-compond-msds-en-61205014.pdf

I'm certainly not an expert on building materials, but the text above the carcinogenicity table in that data sheet says, "All substances listed are associated with the nature of the raw materials used in the manufacture of this product and are not independent components of the product formulation," which clearly implies that they're not intentionally using formaldehyde as an ingredient in the product. My guess is that both acetaldehyde and formaldehyde may be given off by the "vinyl alcohol polymer" that is an ingredient in the product. Deor (talk) 13:13, 1 December 2011 (UTC)
It's not the EVA, I don't think, otherwise the MSDS would mention the formaldehyde as a decomposition product, no? --jpgordon::==( o ) 15:11, 1 December 2011 (UTC)
It might get damp from time to time from condensation, especially on an exterior wall, such as when it's cold outside and you've just taken a shower nearby or boiled water on the stove. A roof leak or window left open during rain could also dampen it occasionally. StuRat (talk) 16:26, 1 December 2011 (UTC)
"BGC Multipurpose Joint Compound Data Sheet" says "trace amounts of residual vinyl acetate monomers, acetaldehyde and formaldehyde may be associated with the production of the emulsion polymer". So it's a by-product of the manufacture. --Heron (talk) 19:20, 1 December 2011 (UTC)

I don't think it's a leftover product, because other MSDS sheets from the same company on a different type of drywall mud do not list formaldehyde as an ingredient. That MSDS sheet can be found below

http://www.usg.com/rc/msds/joint-compounds/sheetrock/sheetrock-all-purpose-joint-compound-msds-en-61320001.pdf

In addition other brands of drywall mud such as "dap" brand do not list formaldehyde on their MSDS sheet.--Jrbsays (talk) 05:25, 2 December 2011 (UTC) — Preceding unsigned comment added by Jrbsays (talkcontribs)

Demixing of gases of different density

Suppose I put inside a gas cylinder 0.25 atomic percent of xenon and the remainder helium. Everything is thoroughly mixed and compressed to 40 bar. Now I let the bottle sit at 300K. How long must I wait before the very top of the cylinder is depleted of xenon by a factor of 2? --HappyCamper 08:10, 13 December 2011 (UTC)[reply]

That's likely to involve complex math because the gases will work like liquids at some point and you'll get condensation at the walls of the container and fluid dynamic effects. (Think lava lamp. Experts: I know it's not an exact analogy, but just to convey the idea.) OTOH there are probably standard formulas that will get you a close enough approximation. 196.202.26.177 (talk) 09:59, 13 December 2011 (UTC)[reply]
Forever. At 300 K, the kinetic energy per particle is much larger than the gravitational potential energy difference associated with each species traveling the length of a gas cylinder. As a result, the amount of differentiation you can expect due to gravitational separation is very low, and will never reach anything close to a factor of 2 enrichment. Diffusive mixing due to the thermal energy in the gases will keep them well mixed. Dragons flight (talk) 10:24, 13 December 2011 (UTC)[reply]
I don't know about that. At 40 bar the mean free path would be quite tiny, so you're really talking about a kind of fractional distillation process rather than them zipping from one end to the other. On the other hand, the difference in pressure between the top and bottom would be quite small; if either gas was on its own it would fill the cylinder nearly uniformly. I feel there is probably some elegant and simple way of estimating the answer rather than going through all the maths, but it eludes me at the moment. Also I would guess you'd have to wait a very long time to avoid convection effects. Dmcq (talk) 12:58, 13 December 2011 (UTC)[reply]

An important question would be: How tall must the cylinder be? Assuming ideal gases, the answer to this question is independent of pressure. It depends on the Boltzmann factors e^(−mgh/(kT)), where m is the atomic (or molecular) mass, g is the gravitational acceleration, h is the height, k is Boltzmann's constant, and T is the temperature. The density of each gas is proportional to this Boltzmann factor (with the specific atomic mass), so you can calculate the height at which the ratio of the Boltzmann factors becomes 2 (about 1385 m). Icek (talk) 15:13, 13 December 2011 (UTC)[reply]
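Icek's 1385 m figure follows directly from the ratio of the two Boltzmann factors. A quick sketch (my own illustration; CODATA constants, g = 9.81 m/s², atomic masses for He-4 and natural Xe):

```python
import math

u = 1.66053906660e-27  # atomic mass unit, kg
k = 1.380649e-23       # Boltzmann constant, J/K
g, T = 9.81, 300.0

m_he, m_xe = 4.0026 * u, 131.293 * u

# each ideal gas follows its own barometric factor exp(-m g h / (k T)), so the
# Xe/He mixing ratio at height h relative to h = 0 is exp(-(m_xe - m_he) g h / (k T));
# solve for the height at which that ratio drops to 1/2
h = k * T * math.log(2) / ((m_xe - m_he) * g)

print(round(h))  # ≈ 1385 m
```

So a 2 m cylinder sees a mixing-ratio change of only about 0.1%, which is why the factor-of-2 depletion asked about never happens at 300 K.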

That sounds a bit high to me. I typed 'heavy gas' into Google and got this video: cool trick with heavy gas, and it certainly didn't look like it was set to become fairly evenly dispersed with the air. Not helium and xenon, of course. Dmcq (talk) 16:57, 13 December 2011 (UTC)[reply]
It will disperse. Just give it some time. Dauto (talk) 18:22, 13 December 2011 (UTC)[reply]
Let's say the cylinder is 2m in height. Is a century long enough? Somehow I'm sure that there is a simple way to estimate the time scale, but I just can't think of it right now. It's not clear what's the most appropriate model for this system. --HappyCamper 04:07, 14 December 2011 (UTC)[reply]
Read Dragons flight's and Icek's comments above. The two gases will never separate at room temperature. Dauto (talk) 05:13, 14 December 2011 (UTC)[reply]
The systems you can see in those videos aren't at equilibrium; if you wait long enough, the xenon (or sulfur hexafluoride) will mix with the room air. TenOfAllTrades(talk) 05:39, 14 December 2011 (UTC)[reply]

First-line, second-line, third-line defense against cancer... What's the point of this?

I've been reading about cancer treatment recently, and have noticed a pattern. Oncologists tend to use a certain treatment (first-line treatment), and when the cancer develops resistance to it, they bring in a second treatment (second-line treatment), and when the cancer develops resistance to it, they bring in a third treatment (third-line treatment), and so on.

In many cases, there's no obvious reason why these different treatments can't be used at the same time. Sometimes they're not even in the same treatment class (chemotherapy, radiotherapy, immunotherapy, tumor-treating fields, vaccines, etc.), so there isn't the problem of drug interaction or overtoxicity!

So, to me, using staged treatment instead of simultaneous treatment seems just as stupid as using one antibiotic at a time on tuberculosis, and allowing it to gain resistance to all antibiotics one step at a time.

Can someone please explain why we have different lines of treatment, instead of using them all at the same time to minimize the chances of the cancer becoming resistant? Thanks!--109.14.86.149 (talk) 08:53, 13 December 2011 (UTC)[reply]

If the other treatment options have the same kind of side effects that chemotherapy does, then using them all at once is pretty bad news for the patient... 80.122.178.68 (talk) 09:18, 13 December 2011 (UTC)[reply]
I'm far from a medical expert but have had some close relatives with "personal experience." First of all, with most cancer treatments we're still at the stage of "Kill the cancer cells without killing the patient." Far from humans being all the same, every body is different. We have different tissue, immune systems, health history, endocrine reactions, etc., etc. Yet treatment is mostly based on the best available statistical data, what worked for a large number of patients in the past. By its very nature, that data is not for the most recent advances in treatment. Five-year survival data is for people who lived that long after the treatment they received 5 years ago. So, we have some idea how well that worked for a statistical sample. That still means it may not work for you. There's also a "standard of care" which is based on those studies. But not just that. Increasingly "quality of life" is being considered. That is, keeping the patient alive but in constant pain, misery and barely functional may be considered not in the patient's best interest. A first-line therapy may also be applied to make tumors shrink, to be able to evaluate a patient's chances for surgically removing those later on. Some patients get so sick from that "first line" that they are no longer suitable candidates for surgery. People's stamina, pain threshold, general mental state and attitude towards risk are also taken into consideration. You punch some person in the face and they go "Is that all you got?" and the next person doubles over and huddles in a corner for a good cry. BTW: Diet (the stuff you eat, not the weight loss kind) is still not being studied sufficiently, although it's becoming increasingly apparent that it's very important to the development and treatment of cancer. Our standard medical establishment is just not set up to cover that area. Hope this helps. 196.202.26.177 (talk) 09:52, 13 December 2011 (UTC)[reply]
Some forms of therapy are contradictory: immunotherapy can either strengthen the immune system or disable it. Chemotherapy typically depresses the immune system by killing fast-dividing cells, so it won't work well with therapies that strengthen the immune system. Something like boron neutron capture therapy which involves filling the tumor with boron may interfere with conventional radiotherapy. --Colapeninsula (talk) 11:18, 13 December 2011 (UTC)[reply]

US scientific association membership for teenagers

Where I live, in England, we have the Royal Institution, which has a category of membership for teenagers, to get them interested in science. I have a friend in the US, with a scientifically-minded daughter in her early teens, and I am looking for something roughly equivalent over there. Does anyone know of such a thing? It doesn't necessarily have to be general science; it can be more physicsy, for example. Thanks, everyone. The Wednesday Island (talk) 13:27, 13 December 2011 (UTC)[reply]

I don't know of any organizations like that; hopefully someone else does. I would suggest some of the science-related competitions like FLL (middle school) or FRC (high school) (robotics competitions) if it doesn't have to be any particular subject, and if your friend wants something that's likely to be useful in the future. Heck froze over (talk) 14:56, 13 December 2011 (UTC)[reply]
Yes, see our article on FIRST, or their web site http://usfirst.org. As a former judge at a FIRST Robotics competition 3 years ago, I can heartily recommend this. I never saw such a large group of kids enthusiastic about science. A couple of the teams even had their own cheerleaders. For younger kids, FIRST also has a "Lego league" which is a competition using Lego robotics.
The Society of Physics Students also has some outreach programs for teenagers. ~Amatulić (talk) 18:04, 13 December 2011 (UTC)[reply]
You can become a member of the AAAS, the publishers of the prestigious journal Science. Their membership includes a subscription to the magazine and they have specific kids and student categories. Here's their website. Vespine (talk) 22:10, 13 December 2011 (UTC)[reply]

radiation

Is there any formula for the radiation of materials other than black bodies? If not, can I at least look at a diagram or something? Also, a black body emits every possible frequency at any temperature; what about others? Are there "gaps" in their intensity-frequency diagrams?--Irrational number (talk) 14:27, 13 December 2011 (UTC)[reply]

The simplest modification to the black body formulae is just to introduce a constant emissivity and absorbance. That gives you a grey body (which just redirects to black body, but it's mentioned near the end of the "explanation" section). You can get more complicated things once you introduce chemistry, though. Different atoms and molecules have different absorption spectra. If you look at the spectrum of the Sun, there are lots of black lines corresponding to wavelengths that are absorbed by different atoms in it (that is, in fact, how helium was first discovered, hence the name - helios is Greek for sun). See Fraunhofer lines for more detail. --Tango (talk) 15:08, 13 December 2011 (UTC)[reply]
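Tango's grey-body modification is literally a one-line scaling of the Planck formula. A minimal sketch (my own illustration; the constant emissivity 0.9 is an arbitrary example value):

```python
import math

h = 6.62607015e-34  # Planck constant, J s
c = 2.99792458e8    # speed of light, m/s
k = 1.380649e-23    # Boltzmann constant, J/K

def planck(wavelength, T):
    """Black-body spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    x = h * c / (wavelength * k * T)
    return 2.0 * h * c**2 / wavelength**5 / math.expm1(x)

def grey_body(wavelength, T, emissivity=0.9):
    # grey body: scale the whole Planck curve by a constant emissivity < 1;
    # real materials need a wavelength-dependent emissivity (absorption lines etc.)
    return emissivity * planck(wavelength, T)

# example: green light from a roughly solar-temperature surface
print(planck(500e-9, 5800), grey_body(500e-9, 5800))
```

A wavelength-dependent emissivity, instead of a constant, is exactly what puts the "gaps" (absorption lines) into a real material's spectrum.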

Born's Proof of uncertainty principle

Can anyone explain the mathematical proof of the uncertainty principle in layman's terms to someone who only understands basic matrix algebra? Thanks. 109.148.92.178 (talk) 16:59, 13 December 2011 (UTC)[reply]

If the extent of a wave is small compared to its wavelength (small uncertainty in the position), then the wavelength itself becomes uncertain (large uncertainty in the momentum). Dauto (talk) 19:15, 13 December 2011 (UTC)[reply]
If you don't understand the math (and want to learn it), Fourier Transform is the place to go. Dauto (talk) 19:17, 13 December 2011 (UTC)[reply]

As I said, in layman's terms; wading through that article is a long way to find the solution. Pretend you were writing a book for the people. Thanks — Preceding unsigned comment added by 109.148.122.30 (talk) 20:03, 13 December 2011 (UTC)[reply]

Alright, in lay terms: if you have a wild animal in a corner, the tighter you constrain its wiggle space (small uncertainty in position), the wilder it becomes, jumping up and down and back and forth (large uncertainty in momentum). Dauto (talk) 02:01, 14 December 2011 (UTC)[reply]
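Dauto's Fourier-transform point can be seen numerically without any matrix algebra. A minimal sketch (my own illustration, in units with ħ = 1 so that Δx·Δk is the same as Δx·Δp): squeeze a Gaussian wave packet in position and watch its momentum spread grow, with the product pinned at the minimum of 1/2.

```python
import numpy as np

def spread(coord, amp):
    """Standard deviation of |amp|^2 treated as a distribution over coord."""
    p = np.abs(amp) ** 2
    p = p / p.sum()
    mean = (coord * p).sum()
    return np.sqrt(((coord - mean) ** 2 * p).sum())

def uncertainty_product(sigma, N=4096, L=200.0):
    """Delta-x * Delta-k for a Gaussian packet with position spread sigma."""
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers in FFT order
    psi = np.exp(-x**2 / (4 * sigma**2))         # |psi|^2 is Gaussian with std sigma
    phi = np.fft.fft(psi)                        # momentum-space amplitude
    return spread(x, psi) * spread(k, phi)

for sigma in (0.5, 1.0, 2.0):
    print(sigma, uncertainty_product(sigma))     # product stays ≈ 0.5 for every sigma
```

Narrowing the packet (smaller sigma) visibly widens the momentum distribution, which is the uncertainty principle in one picture.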

Radio around the world

During the Second World War, WLW in Cincinnati, Ohio, USA broadcast at 500,000 watts, and there were reports of American military personnel in Alaska and England being able to hear it in optimal conditions. Given ideal conditions (including no interference), how many megawatts would a radio broadcast need to be able to be heard at the antipode? 140.182.232.225 (talk) 17:25, 13 December 2011 (UTC)[reply]

It's not just the power, it's the bounce. See DXing. Rmhermen (talk) 17:54, 13 December 2011 (UTC)[reply]
It is also stated as if this were a homework problem. That isn't what this page is for. ~Amatulić (talk) 17:56, 13 December 2011 (UTC)[reply]
And how can it be that the radio waves bounce around the planet? What is this effect called? Could we transmit a laser around the planet, bouncing it around? — Preceding unsigned comment added by 80.58.205.105 (talk) 18:07, 13 December 2011 (UTC)[reply]
See Radio propagation and Ionosphere Nanonic (talk) 18:47, 13 December 2011 (UTC)[reply]
.. and Tropospheric propagation Nanonic (talk) 18:48, 13 December 2011 (UTC)[reply]
My father explained it to me by telling me that the Van Allen belts reflected the radio waves back to earth. They are opaque to radio waves but don't absorb them, so the waves bounce. He was an ex-RAF radar operator, so I thought he knew what he was talking about. As for the query about a laser, I'm not sure, but since a laser is a form of light, and as we can all see on a starry night the Van Allen belt is transparent to light, it wouldn't happen. --TammyMoet (talk) 18:57, 13 December 2011 (UTC)[reply]
The explanation I had heard growing up in Cincy was that WLW got an exemption because they broadcast farm reports across the midwest, not anything to do with WWII, but I had heard the same thing about them being occasionally picked up absurdly far away from Ohio. Their tower is quite impressive, a standard high mast radio tower turned upside down with another one right side up on top of it. Beeblebrox (talk) 19:43, 13 December 2011 (UTC)[reply]
The first paragraph of Radio#Audio says that stations that powerful had their transmitters "commandeered for military use by the US Government during World War II", and the WLW article says that the authorization for the station to broadcast at 500 kw was withdrawn in 1939, though "because of the impending war and the possible need for national broadcasting in an emergency, the W8XO experimental license for 500 kilowatts remained in effect until December 29, 1942." There seems to be a bit of a contradiction there. Deor (talk) 20:29, 13 December 2011 (UTC)[reply]
This isn't a homework question; my homework right now is related to matters of abstruse historical theory. It's simply that I know almost nothing about radio propagation. I'm guessing that you mean that it wouldn't really be possible, regardless of the wattage; am I right? 140.182.232.225 (talk) 21:06, 13 December 2011 (UTC)[reply]
At night the radiation from the sun does not ionise the D layer of the ionosphere, which causes absorption due to its high density. Signals instead reflect off the lower side of the F layer. The main problem with long-distance AM band is interference, so the far station cannot be received due to interference from closer stations. If there were no other stations, reception might be possible at the antipodes at sunset/sunrise over the night path, and higher power would not be required. The signal becomes more concentrated at the antipodes due to focusing by the curved inner ionosphere. Graeme Bartlett (talk) 21:27, 13 December 2011 (UTC)[reply]

The receiving antenna is important. A Beverage antenna is suitable for this purpose. Count Iblis (talk) 00:09, 14 December 2011 (UTC)[reply]

The Kon-Tiki expedition carried 3 transmitters with 10 W power, and they managed to reach Oslo from their raft in the middle of the pacific, about 10000 miles away, sending only slightly delayed birthday greetings to King Haakon. See Kon-Tiki#Communications. --Stephan Schulz (talk) 13:32, 15 December 2011 (UTC)[reply]
Yes, but they used the 20 meters band and then it's a lot easier. Count Iblis (talk) 14:44, 15 December 2011 (UTC)[reply]

Sending a radio signal to the antipode is trivially easy, provided the correct frequency for the conditions is selected. In fact going completely around the world is routinely achieved by amateur radio operators. In good conditions a few milliwatts is enough power to do it. Roger (talk) 15:24, 15 December 2011 (UTC)[reply]

pleeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeease help me

Two thin rods form some angle with each other and rotate about an axis perpendicular to the plane of the two rods; how do I calculate the moment of inertia? I know that I mustn't simply add them up, because if that were the case, the MOI of a thin rod that rotates around an axis perpendicular to it through its center would be the sum of the MOIs of two thin rods with half the length that rotate around their ends, which it is not. So, how do I do it?--Irrational number (talk) 17:46, 13 December 2011 (UTC)[reply]

Is this a homework problem? ~Amatulić (talk) 17:54, 13 December 2011 (UTC)[reply]
Well, technically, I am solving the problems of my university's last year exams (my own is on this Thursday). but I have thought about it(the problem), and have no clue what to do!--Irrational number (talk) 18:21, 13 December 2011 (UTC)[reply]
can anyone please give me a hint?--Irrational number (talk) 18:56, 13 December 2011 (UTC)[reply]
I am not an expert in this field, but I'll try to help a bit. According to Moment_of_inertia#Properties,

"The moment of inertia of the body is additive. That is, if a body can be decomposed (either physically or conceptually) into several constituent parts, then the moment of inertia of the whole body about a given axis is equal to the sum of moments of inertia of each part around the same axis."

--This seems to me to contradict your claim about additivity, so you may wish to research further along these lines. You could also try to solve the problem as a system of point masses, and then take a limit as the point masses approach a continuous uniform mass distribution. If done correctly, you should get an integral similar to the one at the end of Moment_of_inertia#Scalar_moment_of_inertia_for_many_bodies. You'll have to be careful about how to set up the integral, but this might help you get started. SemanticMantis (talk) 19:15, 13 December 2011 (UTC)[reply]
Your first instinct was correct. Simply add the two MoIs. The MOI of a thin rod that rotates around an axis perpendicular to its center IS the sum of the MOIs of two thin rods with half the length (and half the mass) that rotate around their ends. Dauto (talk) 19:10, 13 December 2011 (UTC)[reply]
That's right. What trips up a lot of people when comparing the two rod equations at list of moments of inertia is forgetting to adjust the mass when adjusting the length.
You didn't say if the axis of rotation was at the vertex of the angle, or through any arbitrary point perpendicular to the plane of the rods. The solution is trivial if it's the former, more complex if it's the latter. ~Amatulić (talk) 19:29, 13 December 2011 (UTC)[reply]
Ha, I get it! I found the MOI of the two rods, just forgot to add them up! Sorry for disturbing you...--Irrational number (talk) 19:30, 13 December 2011 (UTC)[reply]
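The additivity that Dauto applies is easy to verify for the half-rod comparison (illustrative numbers; standard rod formulas from the list of moments of inertia):

```python
# check: MOI of a uniform rod about its center equals the sum of the MOIs of
# its two halves, each a rod of mass M/2 and length L/2 rotating about one end
M, L = 3.0, 2.0  # total mass (kg) and length (m), arbitrary example values

full_rod_center = M * L**2 / 12.0        # (1/12) M L^2, axis through the center
half_rod_end = (M / 2) * (L / 2)**2 / 3.0  # (1/3) m l^2 with m = M/2, l = L/2

print(full_rod_center, 2 * half_rod_end)  # the two agree
```

The same additivity settles the original exam question: for two rods joined at the vertex, with the axis through the vertex, I = (m1 L1^2 + m2 L2^2)/3, independent of the angle between them, because each mass element's contribution depends only on its distance from the axis.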

Kennedy's Disease

The current diagram shows only carrier mother. What is the diagram if the father is an affected father? — Preceding unsigned comment added by 99.243.178.111 (talk) 18:48, 13 December 2011 (UTC)[reply]

Here is a link to the article, for anyone who doesn't know what you're talking about: Kennedy's disease. StuRat (talk) 19:16, 13 December 2011 (UTC)[reply]
That's actually the most interesting case. In the case of daughters, they will be obligate carriers, since the father only has the one, the affected, X chromosome to contribute to the daughter's XX genotype. Sons of affected fathers can never be carriers of the disease, since the father will necessarily have contributed the Y chromosome, so the X comes from the non-affected mother. Fgf10 (talk) 19:53, 13 December 2011 (UTC)[reply]
Forgot to add, this is not just the case in Kennedy's, but in all recessive X-linked diseases. (And for completeness, the inheritance is the same in a father affected by a dominant X-linked disease, with the difference that the daughters will be affected by the disease, rather than be just carriers) Fgf10 (talk) 19:56, 13 December 2011 (UTC)[reply]

focal length and depth of field and wavelength-dependence

I have a very narrow depth of field in a particular imaging experiment. I notice that if I switch from visible (red) light to near-infrared light, the image becomes out of focus. Does depth of field increase or decrease with increasing wavelength, and how much does the focal length for a given aperture change with wavelength? elle vécut heureuse à jamais (be free) 19:44, 13 December 2011 (UTC)[reply]

First of all, what determines your focal length? (In other words, describe your optical system). If you have a lens made of optical glass - even high-grade scientific-quality optical glass, its index of refraction is wavelength-dependent (and therefore, its focal length is also wavelength-dependent - to say nothing about derived properties like its depth of field). This behavior is called chromatic aberration, and because you are working with infrared, it's very probable that even your high-quality glass optics, calibrated for minimal chromatic aberration in the visible spectrum, suffer from severe aberrations at infrared. The focal length of a single lens element can have a huge variance with wavelength - this is sometimes called the Vd or dispersion value of the index of refraction. Multi-element lens? All bets are off! You should try to get your hands on the excellent (but pricy) book, Applied Photographic Optics; you can read much of it online at Google Books.
In any event, single-element optical glass or multi-element optical system, the vd is the value that parameterizes wavelength-dependent behavior of your focusing system.
In the purely theoretical sense, all other items being equal, an aperture is measured relative to the wavelength you are imaging. So, shrinking the wavelength is, physically, similar to growing the aperture. Therefore, I would expect a shorter wavelength to have a tighter depth of field ("fewer things are in focus"). In other words, infrared light should have a larger region in focus than red light; and, the center of the focused region will occur at a different wavelength-dependent focal length. However, practical complications can severely limit the validity of this general rule. Nimur (talk) 20:34, 13 December 2011 (UTC)[reply]
Chapter 22, Depth of Field and Depth of Focus. Equations, diagrams, charts, tables of common values; pure physics, and on-the-ground realities for the common lenses that you can actually buy in stores. There's even a discussion of lensing for electron microscopy. This book is truly excellent. Nimur (talk) 19:06, 14 December 2011 (UTC)[reply]
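Nimur's dispersion point can be sketched numerically. A minimal illustration (not the questioner's actual optics: the Cauchy coefficients below are assumed BK7-like values, and the thin-lens scaling f ∝ 1/(n − 1) is the simplest possible model):

```python
# chromatic focal shift of a single thin lens via Cauchy's approximation
# n(lambda) = A + B / lambda^2, with BK7-like coefficients (assumed values)
A, B = 1.5046, 4.20e-3  # B in micrometres^2

def n(lam_um):
    """Refractive index at wavelength lam_um (micrometres)."""
    return A + B / lam_um**2

f_design = 100.0  # mm, focal length at the design wavelength (arbitrary example)
lam_d = 0.5876    # helium d line, micrometres

def focal_length(lam_um):
    # thin lens: focal length scales as 1/(n - 1) (lensmaker's equation)
    return f_design * (n(lam_d) - 1.0) / (n(lam_um) - 1.0)

for lam in (0.45, 0.55, 0.65, 0.85):  # blue through near-IR, micrometres
    print(lam, round(focal_length(lam), 2))
```

The near-IR focal length comes out roughly a percent longer than the visible one, which for a narrow depth of field is more than enough to explain the image going out of focus when switching from red to near-infrared light.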

December 14

Earth with Rings?

Say that one day, possibly 3 billion years ago, a massive rocky object, maybe a rogue planet or giant meteor, hit the Moon (without hitting the Earth), with enough force to destroy it and break it into thousands if not millions of pieces. Would these pieces eventually circle around the Earth and create a ring system? If not, what is likely to happen to the shattered moon? 64.229.180.189 (talk) 01:22, 14 December 2011 (UTC)[reply]

I would say it would form a ring, but the ring wouldn't last, with parts of it falling to Earth, other parts escaping the Earth's orbit, and possibly the rest reforming a new moon. I'm not certain what forces cause this, but it is noted that no terrestrial planets in our solar system have rings while all the Jovian planets do have rings, so the planet's size seems critical to ring stability. StuRat (talk) 03:38, 14 December 2011 (UTC)[reply]
What is needed is gravitational pulling and pushing to keep the ring from reforming into a moon. Mars and Venus don't provide much gravitational disturbance on Earth. Jupiter doesn't either. So, it would have to come from a moon around Earth. Around Saturn (and the other ringed planets, but mainly Saturn), the moons push and pull on one another with great force. They nearly rip each other apart. As for the rings, the moons keep them from reforming. -- kainaw 03:49, 14 December 2011 (UTC)[reply]
It is theorised that the Moon was formed by a large body (roughly Mars sized) hitting the Earth and causing a large amount of debris to get thrown up into orbit. That debris formed a ring, which then coalesced into the Moon. (See Theia (planet).) If you destroyed the Moon, some of the debris would fall to Earth, some would escape and some would form a ring again. Given enough time, that ring would re-coalesce into a new (smaller) moon. As Kainaw says, the rings around gas giants can only stay around for a long time because the moons of those planets stop them coalescing. --Tango (talk) 04:02, 14 December 2011 (UTC)[reply]
If the presence of a moon or moons is all that is needed for ring stability, then shouldn't the Earth and Pluto, with large relative moons (and a couple of extra small moons, for Pluto), and Mars, with two small moons, have rings? (Perhaps Pluto does have rings, which we can't see.) StuRat (talk) 04:25, 14 December 2011 (UTC)[reply]
All the Jovian planets have rings: Uranus, Jupiter and Neptune. They are just less obvious than Saturn's. So my guess is that it's something to do with the fact that they are gas giants, rather than anything to do with moons. I don't doubt the moons play a role; they may be a necessary condition, but not a sufficient one. Vespine (talk) 04:36, 14 December 2011 (UTC)[reply]
Yes, all the planets with rings (Jupiter, Saturn, Uranus and Neptune) are gas giants, but they also have 64, 62, 27 and 13 moons respectively. The moons are important. You can get a ring whenever you have matter orbiting within the Roche limit (which is quite small for small planets, so the mass of the planet is a contributing factor) however it wouldn't form the kind of stable, distinct, clearly defined rings we see around the gas giants (and particularly Saturn) without moons to "shepherd" them. You can read more about it at planetary ring. --Tango (talk) 05:01, 14 December 2011 (UTC)[reply]
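The Roche limit mentioned above can be estimated in a couple of lines. A sketch for the Earth-Moon pair (rigid-body formula with mean densities; the fluid-body limit comes out larger):

```python
# rigid-body Roche limit: d = R_planet * (2 * rho_planet / rho_moon)**(1/3)
R_earth = 6371e3    # m, mean radius of Earth
rho_earth = 5514.0  # kg/m^3, mean density of Earth
rho_moon = 3344.0   # kg/m^3, mean density of the Moon

d_roche = R_earth * (2.0 * rho_earth / rho_moon) ** (1.0 / 3.0)
print(d_roche / 1e3)  # ≈ 9.5e3 km
```

Lunar-density debris orbiting inside roughly 9,500 km of Earth's centre could not re-coalesce under its own gravity, so any long-lived ring would have to sit inside that distance; outside it, the debris tends to clump back into a moon, as Tango describes.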
You might enjoy looking at that. Dauto (talk) 05:09, 14 December 2011 (UTC)[reply]
It would be handy to have such a clearly visible indicator for compass directions and latitude. Still no easy way to determine longitude though. APL (talk) 03:04, 15 December 2011 (UTC)[reply]
One thing that seemed to be missing from that video was the Earth's shadow on the rings, as viewed from Earth. The rings also looked a bit too bright, during the day. StuRat (talk) 03:29, 16 December 2011 (UTC)[reply]

Appearance of Moon - geographic variations

Following is a quote from the article Day of Ashura.

"While Ashura is always on the same day of the Islamic calendar, the date on the Gregorian calendar varies from year to year due to differences between the two calendars, since the Islamic calendar is a lunar calendar and the Gregorian calendar is a solar calendar. Furthermore, the crescent appearance to determine when each Islamic month begins varies from country to country due to obvious geographical reasons."

I question the last sentence. Are there "country to country" differences in the appearance of the moon within a 24 hour period? Thanks, Wanderer57 (talk) 06:32, 14 December 2011 (UTC)[reply]

It depends on how well the observers can see. So at some point in time the crescent moon will be visible. And different people with different visual acuity will see it at different times, as it gets bigger and bigger. Graeme Bartlett (talk) 08:29, 14 December 2011 (UTC)[reply]
There is a difference depending on the local time. The new crescent might show too late to count in a given location, while it will be early enough elsewhere. Dauto (talk) 08:42, 14 December 2011 (UTC)[reply]
Thanks. To see if I have this clear, is the key difference in the hour at which the moon becomes visible to people in different places or in the "shape" of the crescent as seen on the same night by people in different places? Wanderer57 (talk) 16:48, 14 December 2011 (UTC)[reply]

The former. The latter also has an effect but it will be much smaller. Dauto (talk) 17:12, 14 December 2011 (UTC)[reply]

Ground water tannins

Hello-

I have tannins in my groundwater. We just built our house 1 year ago. We had the water tested, no iron, no magnesium, no bacteria .....many minerals/organisms that cause problems. We cut our own wood up on our 13 acres for our wood stove for winter. I sometimes notice a horrible oil smell when I burn some of our wood. I have had psoriasis since age 3 and the smell reminds me of coal tar. Is it the tannins? Thank you, Hallie Cornell — Preceding unsigned comment added by Halliekoss (talkcontribs) 06:59, 14 December 2011 (UTC)[reply]

Surface water often contains tannins due to plant matter. It makes the water brown, like tea. The ground water could easily come from this brown surface water. The smell of burning wood has nothing to do with the water. Graeme Bartlett (talk) 08:18, 14 December 2011 (UTC)[reply]
(edit conflict)No, the tannins in the water, I would guess, come from the leaf cover of the ground. I'm guessing your well is shallow and is basically leaf tea. It might be good for you, if there aren't any sprays or poisons in the area. The smell of the wood in a stove will vary with the temperature of the fire. I'm guessing you are burning pine wood or another wood with lots of pitch. If it's oak or other hardwood, I don't know what's happening. A very pitchy piece of wood which is burning hot will smell like oil, and it's not nice. If you burn it cooler, it might smell nicer, but you will build up creosote in your chimney and all that nice-smelling smoke could have been heating your house. Go for a balance between having a cool smokey fire and letting the heat go up the chimney by regulating the damper. This is all from personal experience, you might ask an expert. BeCritical 08:23, 14 December 2011 (UTC)[reply]
What psoriasis question is that Graeme? Hallie was explaining why she knows the coal tar smell. I agree with Critical. Caesar's Daddy (talk) 08:33, 14 December 2011 (UTC)[reply]
Yeah OK, I have removed that comment as you are right there is no question about it there. Graeme Bartlett (talk) 12:02, 14 December 2011 (UTC)[reply]
Sounds like you're not leaving it enough time to dry properly before burning. Here are some tips. [4]. The tannin in wood should not affect the burning quality... it just burns and thus decomposes.--Aspro (talk) 16:30, 14 December 2011 (UTC)[reply]
Incidentally, you can save even more money and save the Earth at the same time by using the wood ash as a source of lye. Also, you can recycle all that applejack you hillbillies drink by saving your urine. Follow these instructions and there will be no more need to keep rushing down to Walmart for another bottle of laundry room essentials. --Aspro (talk) 16:47, 14 December 2011 (UTC)[reply]
I'm not sure if your question was misinterpreted. It sounds like you think the ground water tannins could affect how wood cut from the area smells when it burns? But tannins are present at quite high levels in wood by its nature - the tree doesn't concentrate them out of the groundwater. I think that even cut wood can produce more tannins when it discolors during storage or drying, though apparently more often it is breakdown or alteration of the tannins it has previously produced.[5] Wnt (talk) 20:02, 14 December 2011 (UTC)[reply]

bonferroni correction

Is it the case that Bonferroni corrections are done only for p values that are statistically significant? Research articles with tables show many p values and some of them are marked with an asterisk, noting them to be statistically significant and corrected by Bonferroni. What does this mean? Were they already significant and then corrected by Bonferroni, or is every p value (significant or not) corrected by Bonferroni? — Preceding unsigned comment added by 199.224.149.10 (talk) 08:35, 14 December 2011 (UTC)[reply]

Bonferroni correction is one of a number of methods for addressing the problem of multiple comparisons in statistics. When you see jargon like, "statistically significant and corrected by bonferroni", that means that rejection of the null hypothesis was supported by the statistical test used even after correction for multiple comparisons using the Bonferroni method. In a situation like that, it would be unusual to report other un-corrected p values when such correction would be appropriate. I presume that in the situation you describe, the alternative would be that the null hypothesis was not rejected. -- Scray (talk) 12:19, 14 December 2011 (UTC)[reply]
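To make Scray's point concrete: in the simplest form of the correction, every raw p-value is adjusted (multiplied by the number of tests, capped at 1) and only then compared with the significance threshold; nothing is corrected "only if already significant". A minimal illustration in Python, with made-up p-values:

```python
# Sketch of the Bonferroni procedure: every raw p-value is corrected,
# not just the nominally significant ones; significance is then judged
# against alpha using the adjusted values.

def bonferroni(p_values, alpha=0.05):
    """Return (adjusted p-values, significance flags) for m tests."""
    m = len(p_values)
    adjusted = [min(p * m, 1.0) for p in p_values]
    significant = [p_adj < alpha for p_adj in adjusted]
    return adjusted, significant

raw = [0.001, 0.01, 0.03, 0.20]   # made-up example values
adj, sig = bonferroni(raw)
for p, p_adj, s in zip(raw, adj, sig):
    print(f"raw p={p:<5}  adjusted p={p_adj:<5}  significant after correction: {s}")
```

Note that with four tests a raw p of 0.03 (nominally significant at 0.05) is no longer significant after correction, which is exactly the distinction the asterisks in such tables are marking.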

Thanks friend. — Preceding unsigned comment added by 199.224.149.10 (talk) 04:42, 15 December 2011 (UTC)[reply]

Boulder clay?

Hello, is it fair to say that this term, Boulder clay, is obsolete? See articles such as clay, till (espec.), and soil. I was going to rewrite the current article until I noticed it was an orphan based solely on an early 20th c. open content encyclopaedia. Now, when I search boulder clay on the internet there are *some relevant* hits, so I will probably just leave it for an expert but will just ask in case we've any experts floating about here... ~ R.T.G 12:47, 14 December 2011 (UTC)[reply]

I very much doubt that the term boulder clay is obsolete. It's a sort of thing of much interest, I'd have thought, to civil engineers who have to be concerned about the strata on which they build buildings. I think its orphan status probably says more about our poor coverage of specialised civil engineering and geology concepts than it says about the term. --Tagishsimon (talk) 13:00, 14 December 2011 (UTC)[reply]
I understand the subject being interesting, but see also till and other, vastly better developed, articles which seem to cover most or all of the same ground. Is it fair to say boulder clay is a specialised topic, or is it just an obsolete term? It seems pretty much the same subject to me, only that boulder clay has a different name.. ~ R.T.G 13:13, 14 December 2011 (UTC)[reply]
Till is a much wider term for "stuff which used to be under a glacier". Boulder clay is a specific stuff which, in part, used to be beneath a glacier. IMO boulder clay is a notable specialised term and, having played on google a bit, certainly not obsolete. Neither is the overlap between boulder clay and till, soil, etc, enough that we should ever contemplate merging it into another article. Don't let the EB1911 source convince you otherwise. --Tagishsimon (talk) 13:19, 14 December 2011 (UTC)[reply]
I agree with Tagishsimon. My soil mechanics lecturer regaled us with stories about it. It's the curse of pile drivers and tunnellers, because initial soil sampling can easily miss the fact that you might have a 2-tonne boulder in your path. It would be better to add some links from related articles, such as clay. That might encourage more editors to do some work on the article.--Shantavira|feed me 15:53, 14 December 2011 (UTC)[reply]
The British Geological Survey define Boulder Clay as "Clay and silty clay, commonly pebbly and sandy, reddish brown, stiff, possibly interbedded with sand and gravel-rich lenses and rare peat" here, so certainly not obsolete, but I note that the term is not used on any of their 1:50,000 maps, suggesting that it is not the preferred name for such deposits. To me it's just an informal name for till, but it may have a more defined usage than I'm aware of. I'll see what else I can dig out. Mikenorton (talk) 22:13, 15 December 2011 (UTC)[reply]
This report from 2004 uses the term in parentheses "thick till (boulder clay) deposits". Mikenorton (talk) 22:30, 15 December 2011 (UTC)[reply]
The terms have apparently been regarded as synonyms since 1863 "These terms are used interchangeably in this paper" according to Archibald Geikie[6]. Mikenorton (talk) 22:43, 15 December 2011 (UTC)[reply]
And finally from 2009, "The term till is used in preference to the more commonly used term 'boulder clay' to which it is synonymous. Boulder clay is a generic term rather than a lithological description, but it is generally misunderstood by engineers who may complain with justification that the material described as boulder clay on the map has neither boulders nor clay as constituents."[7]. Mikenorton (talk) 22:53, 15 December 2011 (UTC)[reply]

Geodesic for thrown body

How can one draw the geodesic for a body thrown straight up from the surface of the earth? The rubber-sheet interpretation of general relativity won't give anything that makes sense (the geodesic on a rubber sheet stretches out to infinity, so a body thrown up will never come back). My understanding of general relativity is that a massive body curves space into the time direction (and vice versa). Basically, when the 'rock' (massive body) is placed on the sheet, the rock bends the rubber sheet into the time direction. So, when a body is thrown up, it goes partially against the 'flow of time', and slows down; and if its velocity is not too large, it will stop and reverse its velocity. This seems to explain the body being thrown up (though I am not sure if this explanation is correct, please let me know if it isn't), but I have no way of figuring out the geodesic for a thrown body that is consistent with reality. A straight line going up and then suddenly down doesn't look right, as geodesics are usually smooth curves, not pointy things. I suspect it also has a component in the time direction that makes sense, but I'm not sure. So how does the geodesic look? And why is the space-component of the geodesic pointy? Thanks, ManishEarthTalkStalk 13:01, 14 December 2011 (UTC)[reply]

You're quite right that it has a component in time; it has to, because I can throw two objects along the same initial direction at different speeds and they trace out very different geodesics despite being initially tangent in the spatial dimensions. The body's world line initially points away from the earth (to the right, say), then turns smoothly through the vertical back to the left. Why it should curve is the interesting bit: it might help to draw a different kind of rubber sheet, one with one space and one time dimension, with the usual grid drawn on it to represent the coordinates of a distant observer. The planet's mass distorts the sheet so that the horizontal lines (those that correspond to a particular constant time) converge a bit as you move to the left (gravitational time dilation). Then it only makes sense that the geodesics on such a surface should also act like there is some sort of center of rotation to the left, and they curve towards it. --Tardis (talk) 14:07, 14 December 2011 (UTC)[reply]
Thanks, I think I got it... Just one thing: On a distorted 2D space time diagram, like the one you described, how does one draw the geodesic/path for a particle at a certain point, when the particle has a certain direction? For example, do we say that the particle is 'accelerated' along the angle bisector of the space and time lines in the diagram? Or, another way of looking at it is that the paper you are drawing the lines on is the xy plane, the space lines are a vector field S, and the time lines are T. At a point x,y, given the directions of S and T, and the direction of the particle's motion, what will the direction of the geodesic be? Thanks, ManishEarthTalkStalk 15:12, 14 December 2011 (UTC)[reply]
If you include the time dimension then the path of the rock in spacetime is a parabola (think of a graph plotting the rock's height on a vertical axis against time on a horizontal axis), so it is indeed a smooth curve. If you collapse the time axis to a single point, the parabola becomes a line segment - this is an example of what mathematicians call "projection". This is fine - there is no reason why the projection of a smooth curve has to remain smooth.
I think you need to be careful about taking the "rubber sheet" analogy too literally. Although the rock does bend spacetime a little, the reason why it travels in a parabola rather than in a straight line is because the Earth (which is much more massive than the rock) bends spacetime in its vicinity - this bending of spacetime is what we perceive as gravity. Gandalf61 (talk) 15:15, 14 December 2011 (UTC)[reply]
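Gandalf61's projection argument can be checked numerically. A small sketch (Python, with arbitrary illustrative numbers: g = 9.8 m/s², launch speed 20 m/s) builds the smooth parabolic world line in the (time, height) plane and then projects out the time axis, leaving only the line segment of heights visited:

```python
# World line of a thrown body: height is a smooth parabola in time;
# projecting out the time axis leaves just the set of heights visited,
# which is a line segment from 0 up to the peak. Numbers are illustrative.

g = 9.8       # m/s^2
v0 = 20.0     # initial upward speed, m/s

def height(t):
    return v0 * t - 0.5 * g * t ** 2

t_flight = 2 * v0 / g                            # time back at launch height
times = [i * t_flight / 100 for i in range(101)]
world_line = [(t, height(t)) for t in times]     # smooth parabola in (t, h)

spatial_path = {round(h, 6) for _, h in world_line}  # projection onto space
h_max = v0 ** 2 / (2 * g)                            # analytic peak height
print(f"peak height {h_max:.2f} m; projection spans 0 to {max(spatial_path):.2f} m")
```

The world line is smooth everywhere, but its spatial projection is the "pointy" up-and-back segment the questioner describes.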
Whoops, in my second comment for some reason I used the word geodesic in place of World line everywhere.. ManishEarthTalkStalk

@Gandalf:When I used the term 'rock' in the rubbersheet analogy, it corresponded to the Earth (or whatever massive body is creating the bending). I just used the term as it's fairly common. Sorry if it caused any confusion.. ManishEarthTalkStalk 15:21, 14 December 2011 (UTC)[reply]

How do photons know the type of charge they are interacting with?

When two particles interact via the electro(magnetic/weak) force, they exchange a photon. Repulsive force is when one particle emits a photon towards the other, which absorbs it. The recoil/push from this exchange results in both particles moving away. Fine. Attraction is when a particle emits a photon in the opposite direction, and the photon hits the other particle anyway, due to the fact that the photon is a spread out wavefunction and can have any position immediately after emission. (Another interpretation that I've seen is that the photon is emitted "backwards in time", though I think that these two interpretations are the same thing.) Also fine. Now, my issue is, how does the photon 'know' which mechanism to use? It probably has to do with the momentum/space wavefunction of the photon, which (I think) is the only thing that can depend on the type of charge (as spin, speed, etc. are not dependent), and thus 'encode' the charge that sent it. But the wavefunction should be radially symmetric, and for the attractive/repulsive forces, the wavefunctions should differ along the common axis. Basically, if I have a positive charge, it will emit the same photon regardless of the type of charge sitting next to it. So, both positive and negative charges will experience equal attractive and repulsive forces, as they have equal probability of being hit by an 'attractive' or 'repulsive' photon. There must be some quality of the interaction of the photon with the second charge that collapses the photon's wavefunction into a predominantly attractive or repulsive position/direction. What is it? How does this mechanism exactly work? Note: I understand what wavefunctions are (and how to deal with complex valued wavefunctions), but not stuff like Hamiltonian operators, and the mathematics of wavefunctions (why one needs to multiply this wavefunction with that one, etc.). 
So layman's terms are not required, but if there is any mathematics involved, please explain why a certain thing needs to be done (if it's not too complicated). Thanks, ManishEarthTalkStalk 13:34, 14 December 2011 (UTC)[reply]

This URL contains a pretty good explanation: http://math.ucr.edu/home/baez/physics/Quantum/virtual_particles.html. Excerpt: 'The important point is that the photon doesn't "know" that it's going to hit a particle of the same charge as the one that emitted it, or of the opposite charge. The distinction between attraction and repulsion actually arises when the effect of the virtual photon interferes with the unperturbed wave function! In general, the distinction comes from interference between the contributions from odd and even numbers of virtual photons traveling from one particle to the other.' Truthforitsownsake (talk) 13:54, 14 December 2011 (UTC)[reply]
Thanks, I understood everything except:
  • Why is the momentum wavefunction of the emitted photon in the imaginary plane?
  • Why do we multiply the wavefunction of the photon by i and the receiver particle's charge? The i multiplication probably has the same reason as my above point, but what does the particle's charge have to do with it? I'm referring to this section:

 The effect of a virtual photon hit on the charged particle's momentum-space wave function is, then, quite simple.  The photon has a certain probability amplitude of knocking the charged particle to the left and a certain amplitude of knocking it to the right.  The probability amplitude for each possibility is just proportional to i times the charge of the particle times the photon wave function times the time!

Basically, I want to know the aspect of the interaction that makes the charge of the particle part of the wavefunction.

Thanks, ManishEarthTalkStalk 14:53, 14 December 2011 (UTC)[reply]
I believe he is illustrating the momentum-space representation of the photon wavefunction (which is a complex function) in the imaginary plane for clarity to show the feature he wants. This feature is that "the wave function is a function proportional to the electric charge of the emitting particle (in a sense this defines what electric charge is)". In other words, the charge determines whether the photon wavefunction is "up" or "down" in the imaginary plane. When the photon interacts with the particle, we are really multiplying two complex functions, which is why the two magnitudes end up being multiplied by i to get the true value. Truthforitsownsake (talk) 15:32, 14 December 2011 (UTC)[reply]
My question was more of "why is the momentum function complex" or "why is the momentum function the shape that it is"? I understood why they were multiplied by i, but not why they had to be complex in the first place.. But after thinking a bit, it seems that it comes directly from certain equations, the likes of which I'd rather not go into.. Thanks, ManishEarthTalkStalk 15:47, 14 December 2011 (UTC)[reply]
All wavefunctions are complex by definition. "Why" this is I don't think anyone knows, but the world would certainly be a much different place if they weren't. Truthforitsownsake (talk) 15:51, 14 December 2011 (UTC)[reply]

Standard gravity and huge objects

In the formula for calculating Standard gravity, there is no mention of the object that is 'free falling' to Earth. What happens if an object with a huge mass, say the mass of Jupiter - but with a small radius - 'falls' onto Earth? Will it still fall with an acceleration of g? Gil_mo (talk) 14:13, 14 December 2011 (UTC)[reply]

It won't: the Earth will fall to it. Well, technically, they will both fall towards their common barycentre, but to a first approximation the relative movements of two mutually gravitating objects depend only on their masses, not their densities and hence sizes: hence, for example, a planet will orbit in exactly the same way around a physically tiny black hole as it will around a much larger star of the same mass. "Free-falling" in a straight line is merely a particular case of an orbit, one with an eccentricity of 1 (a degenerate, purely radial orbit).
In such a case as you posit, the Standard gravity formula is inapplicable, because the mass of the other object is not negligible. You will need instead a more general formula that takes both masses into account. {The poster formerly known as 87.81.230.195} 90.193.78.30 (talk) 14:41, 14 December 2011 (UTC)[reply]
To a distant third observer, the Jupiter-mass black hole will still be attracted towards Earth with an acceleration of g -- but Earth will be attracted towards the JMBH at an additional substantial acceleration (much higher than g). From the perspective of an Earth-bound observer, the JMBH will be falling much faster than g. Newton's law of universal gravitation holds for all two-mass scenarios (handwaving over relativistic effects and such). — Lomn 14:43, 14 December 2011 (UTC)[reply]
If so, the formula mentioned in Standard gravity is an approximation for small objects? What would be the corrected formula taking into account the second mass? Gil_mo (talk) 14:47, 14 December 2011 (UTC)[reply]
The acceleration is the same (neglecting deformation of the Earth's shape due to strong tidal forces which would definitely happen). The apparent acceleration (as seen by someone standing on the Earth) is going to be proportional to the total mass of the system (as opposed to being proportional to the Earth's mass alone). Dauto (talk) 15:02, 14 December 2011 (UTC)[reply]
The formula in standard gravity is exactly correct for all objects at the surface of the Earth (note that it's not correct for objects falling from height). However, the Earth also falls towards all objects. For conventional small-mass problems, that attraction is so small as to be functionally zero and so we ignore it. If you don't want to ignore it, though, you use the same equations that you would for Earth, except with parameters from the other body. Newton's law is a good way to do the calculation. Once force is found, force over mass yields the acceleration. — Lomn 15:05, 14 December 2011 (UTC)[reply]
See Gravitational acceleration which says:
The relative acceleration of two objects in the reference frame of either object or the center of mass is: a = G×(M + m)/r².
The standard gravity is g0 = G×M/r² ≈ 9.8 m/s², where M and r are the mass and radius of Earth.
If the other object has n times the mass of Earth, then the total mass is 1+n times that of Earth, and they accelerate towards each other with (1+n)×g0. PrimeHunter (talk) 15:57, 14 December 2011 (UTC)[reply]
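PrimeHunter's factor of (1+n) can be checked directly: the mutual acceleration is G(M+m)/r². A rough sketch with approximate constants (Python), for a Jupiter-mass body at one Earth radius from Earth's centre:

```python
# Mutual (relative) acceleration of two gravitating bodies:
#   a_rel = G * (M + m) / r^2
# For a Jupiter-mass body at the Earth's surface distance, this is
# hundreds of times g0. Constants are approximate.

G = 6.674e-11        # m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
M_jup = 1.898e27     # kg
r = 6.371e6          # m, Earth's mean radius

g0 = G * M_earth / r ** 2
a_rel = G * (M_earth + M_jup) / r ** 2
print(f"g0 = {g0:.2f} m/s^2; relative acceleration = {a_rel:.0f} m/s^2 "
      f"({a_rel / g0:.0f}x g0)")
```

The Jupiter-mass body itself still accelerates at only g0 toward the Earth; nearly all of the extra closing acceleration is the Earth falling toward it.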

Rage-induced blackout

Hi everyone, I was wondering what the technical term for a rage-induced blackout is? What I'm describing is like an alcohol-induced blackout, but the subject is not under the influence of drugs at the time: the trigger is an event that makes them very angry. Someone told me the term was Redout, but our article on that term refers to G-forces (and video games). Our article on Anterograde amnesia doesn't mention rage and our article on Intermittent explosive disorder doesn't mention memory, is this something that has been studied by scientists/doctors? (I hope this doesn't fall afoul of the proscription of medical advice, I have not experienced this and am not seeking advice on treatment.) Mark Arsten (talk) 17:25, 14 December 2011 (UTC)[reply]

Scanning some psychiatric journals, I quickly found a few that dealt with extreme anger and referred to the amnesia as "redout". Our article is a stub, so omissions of anger shouldn't lead to the assumption that anger does not cause redout. -- kainaw 17:34, 14 December 2011 (UTC)[reply]
Comment - in aviation physiology, redout is the term for a very specific, different condition. It's the antithesis to a blackout, is purely physiological in nature, and has nothing to do with emotional state. Nimur (talk) 19:10, 14 December 2011 (UTC)[reply]
According to a few psychology articles I read, redout is used because those who experience it say they feel like everything turns red ("I was drowning in a red tide" was one quote). -- kainaw 19:14, 14 December 2011 (UTC)[reply]
The basic problem here is that it is essentially impossible to generate a rage-induced blackout experimentally, and even if it was possible it would probably be unethical. So everything that is written about this is anecdotal, and for many of the anecdotes one has to wonder whether there was really a blackout or whether it was invented as an excuse to avoid having to explain a violent act. Looie496 (talk) 18:12, 14 December 2011 (UTC)[reply]
The obvious solution to this problem is to find other animals which suffer from the problem, too. Perhaps animals subject to uncontrollable rages ? Bulls come to mind, but also some primates, like chimps and baboons. As for the mechanism, the higher blood pressure which accompanies rage might cause a stroke, embolism, or aneurysm, but I'd expect more serious and long-lasting problems if any of these had occurred. Perhaps the increase in adrenaline causes a reaction in some people ? StuRat (talk) 21:36, 14 December 2011 (UTC)[reply]
Interesting observations, I looked again and was able to find a couple of articles to peruse (such as [8]). Thanks, Mark Arsten (talk) 18:54, 14 December 2011 (UTC)[reply]
A common phrase for this in the UK is "red mist", as in "The red mist came down and I lost my temper". I wonder if searching using that phrase would elicit any other results? --TammyMoet (talk) 22:19, 14 December 2011 (UTC) I wonder if there is anything in Anger#Physiology about this? --TammyMoet (talk) 22:22, 14 December 2011 (UTC)[reply]

Common Method Variance

I'm a psychology student and I've come across "common method variance" in several articles that I've been using in a related project. Could someone please explain as simply (but thoroughly) as possible what CMV is? I can't seem to get the gist of it. Lord Arador (talk) 18:21, 14 December 2011 (UTC)[reply]

NOTE: After typing the comment below, it was brought to my attention that the definition of CMV is different in different fields of science. My comment is based on how I've used it in health informatics. -- kainaw 19:37, 14 December 2011 (UTC)[reply]
CMV is a correlation between two things being measured that is inflated (or, sometimes, deflated) due to the same method being used to measure each one. I saw a lot of examples that are hard to grasp, but there's an easy one I read about a long time ago. When measuring the depth of the ocean floor, a trawler would use GPS to travel in as straight a line as possible for a few miles. It would turn and go back along a parallel line. Then, it would turn around and go back along another parallel line. It went back and forth, mapping out the depth to the ocean floor in about a 3 mile by 3 mile square. When the results were put in a computer, they were amazed to find that the ocean floor had been carved into very straight stair steps. These were long steps that spanned the length of the survey. Further, they were very straight - something that had to be man-made. So, they found, deep under the ocean, evidence of what must be the stairs leading into a huge city (Atlantis?). Then, someone ruined it all by bringing up CMV. They used the same method to measure the distance to the ocean floor on each trip back and forth and didn't take anything else into account - like the tide. So, on each trip, as the tide went out, they were closer to the rather flat ocean floor. Putting that anecdote into a more complicated scenario... Suppose you stop 100 people walking out of McDonalds and ask them if they like fast food. Then, you stop 100 people walking out of McDonalds and ask them if they have hypertension. You can easily have a CMV-inflated correlation between fast food and hypertension here because you used the exact same method of measure for both of the variables. You didn't account for anything else, such as the possibility that people walking out of McDonalds may prefer fast food or that they may be prone to hypertension after dealing with the fast-food staff. -- kainaw 19:13, 14 December 2011 (UTC)[reply]
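The same effect can be shown with a toy simulation (Python, standard library only; all numbers are made up for illustration): two traits that are truly uncorrelated appear substantially correlated once both measurements share the same "method" term, just as both depth passes shared the same tide:

```python
# Toy illustration of common method variance: two independent traits
# look correlated once each measurement includes the same shared bias
# (the "method" term). All data are simulated.
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

true_a = [random.gauss(0, 1) for _ in range(2000)]
true_b = [random.gauss(0, 1) for _ in range(2000)]   # independent of true_a
method = [random.gauss(0, 1) for _ in range(2000)]   # shared measurement bias

meas_a = [a + m for a, m in zip(true_a, method)]
meas_b = [b + m for b, m in zip(true_b, method)]

print(f"true correlation:     {corr(true_a, true_b):+.2f}")
print(f"measured correlation: {corr(meas_a, meas_b):+.2f}")
```

With these variances the shared term contributes half of each measured score's variance, so the measured correlation comes out near 0.5 even though the true correlation is near zero.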

feynman

Do/did Japanese people ever dislike Feynman for his role in the Manhattan Project?--Irrational number (talk) 19:51, 14 December 2011 (UTC)[reply]

Feynman's memoir Surely You're Joking, Mr. Feynman! contains a section where he talks about visiting Japan in the 50s, and the topic never comes up. Some Japanese people may well have disliked him, but it certainly wasn't widespread enough that Feynman - and other American physicists - weren't invited to visit the universities. Smurrayinchester 21:25, 14 December 2011 (UTC)[reply]
Even Oppenheimer was invited to visit Japan, and he had a much more crucial role (both in the making and the decision to use the bomb) than Feynman did (Feynman had zero input into the latter). He was not treated poorly by the Japanese to my knowledge. It would be interesting to contrast the treatment of Japanese physicists/biologists by Americans and American physicists by Japanese, with the treatment of German physicists by the British and Americans, in the postwar period. It would be a nice little undergraduate research paper, anyway. The German physicists got treated the worst out of the bunch, I believe. --Mr.98 (talk) 21:38, 14 December 2011 (UTC)[reply]
They got treated the worst? Both the US and the USSR scrambled to get German rocket scientists working for them. Von Braun, the leader of the Saturn V project, is just one example. Space exploration wouldn't have been possible for a long time without enlisting their help. --140.180.15.97 (talk) 23:09, 14 December 2011 (UTC)[reply]
Maybe mistreated just by satirists like Tom Lehrer: "...the widows and cripples of old London town / Who owe their large pensions to Wernher Von Braun." ←Baseball Bugs What's up, Doc? carrots00:15, 15 December 2011 (UTC)[reply]
I meant the nuclear physicists, not the rocket engineers (totally different types of scientists there, totally different outcomes — don't confuse the physicists with the engineers!!). And what I meant specifically was things like not being invited to conferences, spurned by their colleagues, and so on, for a number of years. People took the whole "worked on an atomic bomb for Hitler" thing fairly hard. --Mr.98 (talk) 12:56, 15 December 2011 (UTC)[reply]
Generally speaking, once a war has been over for awhile, the once-opposing warriors often get along well. They understand that each was just doing his duty. ←Baseball Bugs What's up, Doc? carrots00:16, 15 December 2011 (UTC)[reply]
Additionally, many physicists who were involved in the Manhattan Project became outspoken critics of nuclear weapons. 88.8.78.13 (talk) 16:06, 15 December 2011 (UTC)[reply]
I wonder how many Japanese even knew he worked on the project. Clarityfiend (talk) 21:48, 15 December 2011 (UTC)[reply]

Strange animal behaviour

Hi. In recent years, especially around November and December of 2009 and this December of 2011, I've seen flocks of geese flying north (or other atypical directions such as west) when they normally would have migrated south weeks ago. I'm in Southern Ontario, north of Lake Ontario. What does this unusual behaviour signify? Thanks. ~AH1 (discuss!) 21:51, 14 December 2011 (UTC)[reply]

Perhaps it's based on temperature. That is, when it gets cold, they head south, and when it warms back up, they head back north. This sounds inefficient due to excess travel, but if they can get more food out of an area that has warmed up again, it might be worth it. StuRat (talk) 00:11, 15 December 2011 (UTC)[reply]
So how sure are you that this is not simply an observational error? I don't mean that you don't know which direction is which, but simply that you've just started noticing something that has always been happening. That is, you expect the birds to be migrating south but you now notice them heading other directions. But what we don't know is how long have you been observing birds in this location and how familiar you are with their historical behaviour. While their general migratory direction might be south at this time of year, in any given location it would not necessarily be the case, for example they may divert short distances to access feeding sites say in other directions. In your particular location this may appear to be a north or west migration, but their overall direction remains south. If you are quite near the lake I would suspect this would increase the chances of apparently odd migration directions. In terms of later migration than you expect this may be a response to climate change; I have heard reputable scientific sources in recent times reporting changing migration patterns such as this. Many species are quite sensitive to even minor temperature variations. Just a possibility; there's not that much info to go on. --jjron (talk) 14:28, 15 December 2011 (UTC)[reply]
Generally speaking, the reason birds migrate is to find food. Some birds will stick around in a wintry climate if there's a food source. ←Baseball Bugs What's up, Doc? carrots01:09, 16 December 2011 (UTC)[reply]

USS Constitution commander glass rooms

Resolved
 – – Kerαunoςcopiagalaxies 02:46, 15 December 2011 (UTC)[reply]

I'm extremely curious about the glass rooms connected to two of the officers' berths on the USS Constitution. They stick out on each side of the ship and would allow someone to sit and watch the ship from "outside" of the ship while it's traveling. What are these rooms called? Are they typical of ships of the time, or is this a one time situation? My understanding is that one of the gun deck crew members (or boys) would stick their heads out of a porthole during battle and watch for damage in the side of the ship; I'm assuming this glass room could've been used for the same purpose, but because it's connected to an officer's room, it probably wasn't. And finally, were these glass rooms added during one of the later restorations, or was it always part of the original design? It just seems a bit on the luxurious side to me for a ship of war. – Kerαunoςcopiagalaxies 22:00, 14 December 2011 (UTC)[reply]

That would be excessive luxury today, yes, but back then admirals and generals felt entitled to certain luxuries. And perhaps it could be justified as an "observation room". StuRat (talk) 00:09, 15 December 2011 (UTC)[reply]
They're called quarter galleries. By the 19th century they were relatively vestigial, and served principally to house the captain's and sometimes the senior officer's head and washing facilities. They were designed into the ship and were standard equipment in naval architecture, which was heavily ruled by very conservative tradition in design practices. I agree with StuRat that they were, in part, symbols of privilege. They began to disappear a couple of decades after the Constitution was built. The quarter galleries on the USS Constellation (1854) are quite attenuated. Acroterion (talk) 02:23, 15 December 2011 (UTC)[reply]
Thank you both so much, that's wonderful! – Kerαunoςcopiagalaxies 02:46, 15 December 2011 (UTC)[reply]

Determining the polarity of the anode

What determines the polarity of an anode? For example, why is it negative in an electrochemical cell and positive in an electrolytic cell? Widener (talk) 22:21, 14 December 2011 (UTC)[reply]

It looks like the first thing discussed in the Anode article is precisely what you're asking about. Vespine (talk) 23:47, 14 December 2011 (UTC)[reply]
Oops! Yes, that would have been an obvious article to check, wouldn't it... Widener (talk) 23:56, 14 December 2011 (UTC)[reply]

December 15

Blood Pressure differential

If a person registered a blood pressure with a wide differential between systolic and diastolic pressure (say 150/60), what physiological conditions (if any) would that indicate?

Full disclosure: past test question.

Thanks in advance. — Preceding unsigned comment added by 174.113.7.240 (talk) 01:32, 15 December 2011 (UTC)[reply]

See Pulse pressure#High .28Wide.29 Pulse Pressure. --Tango (talk) 01:41, 15 December 2011 (UTC)[reply]
And if you have concerns about your health, see a doctor. No one here is qualified to diagnose medical conditions over the internet. ←Baseball Bugs What's up, Doc? carrots12:26, 15 December 2011 (UTC)[reply]
Aren't you making quite a leap here, Bugs, assuming that the OP is suffering from hypochondriasis of medical students? Reliving past tests may well affect one's pulse, but there is no indication that is happening here. -- 19:51, 15 December 2011 (UTC) — Preceding unsigned comment added by 203.82.91.133 (talk)
Medical students' disease is the applicable link (search on Google for 'isolated systolic hypertension')--Aspro (talk) 20:03, 15 December 2011 (UTC)[reply]
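For what it's worth, the "differential" asked about is what the linked article calls the pulse pressure - just the arithmetic difference between the two readings. A trivial sketch in Python (the function name is illustrative, and the roughly 40 mmHg typical resting value is a commonly quoted figure, not something stated in this thread):

```python
def pulse_pressure(systolic, diastolic):
    """Pulse pressure is the systolic minus the diastolic reading, in mmHg."""
    return systolic - diastolic

# The questioner's example reading of 150/60:
print(pulse_pressure(150, 60))   # 90 mmHg - much wider than typical
# A textbook-style resting reading of 120/80:
print(pulse_pressure(120, 80))   # 40 mmHg
```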

Exploding electrode

What would cause a pure graphite electrode to explode during use? Explode may be the wrong term - it suddenly cracked open lengthwise and briefly thereafter disintegrated, sending shrapnel flying everywhere. The electrode was cylindrical, about 3 mm in diameter, and was used in a modified Castner reaction. Plasmic Physics (talk) 11:13, 15 December 2011 (UTC)[reply]

Was it a (very) porous electrode? Could metal ions have had enough time to diffuse into it? Is there a chance that the polarity got reversed? Had it been immersed in an acid solution previously? Did this cell involve any carbonates? The lengthwise split suggests the outer surface expanded (like putting wedges in a wooden log). What exactly were the chemicals involved? In other words - need more info. Who said chemistry wasn’t exiting.--Aspro (talk) 19:39, 15 December 2011 (UTC)[reply]
It is the chemists who "exit" suddenly when things start exploding.Edison (talk) 01:24, 16 December 2011 (UTC)[reply]
Please blame my spell checker for losing the 'gh' there in the last word. And I agree that they do, from time to time, disappear in a puff of smoke with a loud report (or should that be a loud retort) :-)--Aspro (talk) 14:59, 16 December 2011 (UTC)[reply]

how can i get :- a)ammonia gas ---- b)potassium hydride

How can I get: (a) ammonia gas, (b) potassium hydride, from: water, oxygen, carbon dioxide, potassium, nitrogen ("Note: you can use some or all of the previous substances")


My try at solving this: (a) 6K + N2 -----> 2K3N, then K3N + 3H2O -----> 3KOH + NH3, but I am not sure about the reaction between potassium and nitrogen - is it a real reaction?

(b) I don't know how to get this :( — Preceding unsigned comment added by Mido22 (talkcontribs) 12:47, 15 December 2011 (UTC)[reply]

I have no idea how "practical" (vs "write a balanced reaction") an answer this is, but our article on potassium notes that nitrogen is used to extinguish potassium fires--therefore they do not likely react with each other. Looking up what K3N would be, we have an interesting article about nitride chemicals, which addresses whether this one is likely (with a side-note about Li3N being easy to make by the type of reaction you propose). Looking in the lithium#Chemistry and compounds section, you can find the answer to whether direct reaction between potassium and nitrogen is expected to occur. DMacks (talk) 15:19, 15 December 2011 (UTC)[reply]
I doubt K and N would react. How about adding K to water, which IIRC will produce KOH and H2. You could then react the H2 with N2 to make ammonia (assuming you have the apparatus for the Haber process). SmartSE (talk) 16:48, 15 December 2011 (UTC)[reply]
If we can get a source of carbon monoxide, we can generate hydrogen gas from the Water gas shift reaction. That's often a precursor for Ammonia production. --Jayron32 23:52, 15 December 2011 (UTC)[reply]
Potassium hydride or Potassium Hydroxide? Dropping water on potassium gives potassium hydroxide and hydrogen gas. If you want potassium hydride, you need to collect the hydrogen, then heat the potassium up and blow the hydrogen over it (as per Humphry Davy). Ammonia's harder - the nitrogen needs to be fixed. SmartSE is right about direct combination of nitrogen and hydrogen in the Haber Process, but you need high pressure, temperature, and typically a metal catalyst. Buddy431 (talk) 04:05, 16 December 2011 (UTC)[reply]
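Putting SmartSE's and Buddy431's suggestions together, each proposed step can at least be checked for atom balance. A minimal sketch in Python (the `balanced` helper is hypothetical, written just for this illustration):

```python
# Hypothetical helper: a reaction is atom-balanced if each element appears
# the same number of times on both sides.
def balanced(reactants, products):
    """reactants/products: dicts mapping element symbol -> total atom count."""
    return reactants == products

# 2K + 2H2O -> 2KOH + H2  (potassium in water, per SmartSE)
assert balanced({"K": 2, "H": 4, "O": 2}, {"K": 2, "H": 4, "O": 2})
# N2 + 3H2 -> 2NH3  (Haber process)
assert balanced({"N": 2, "H": 6}, {"N": 2, "H": 6})
# 2K + H2 -> 2KH  (heated potassium in hydrogen, per Buddy431)
assert balanced({"K": 2, "H": 2}, {"K": 2, "H": 2})
print("all balanced")
```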

Gasoline fuel cell

Do these exist? How do they compare with an ICE? I know they are more expensive, but how about performance-wise? ScienceApe (talk) 12:58, 15 December 2011 (UTC)[reply]

I'd like to better answer the question but that means that I'd have to better understand it as well. And I don't.
I've often heard the term "fuel cell" used to refer to a small (a couple gallons) fuel tank in a street legal car that is used for racing on weekends. But looking at ICE, I have no idea what you're comparing a gasoline fuel cell to (or my definition of it).
And finally, in the first question, you asked if they exist but in the final sentence you say that they are more expensive. So which is it? Do you wonder if they exist or know they do and therefore wonder if they are worth the cost? Dismas|(talk) 19:49, 15 December 2011 (UTC)[reply]
ICE is almost certainly, from the context, an internal combustion engine, driving an alternator. A fuel cell converts a fluid fuel directly into electricity. They have three chambers, separated by two metallised semi-permeable membranes. The fuel is pumped into one of the outer chambers, air/oxygen into the other. They 'burn' at the membranes, and the exhaust is removed from the middle chamber. Methanol/ethanol fuel cells were going to be the next laptop power supply, a few years ago, but seem to have dropped off the radar. I'm also not sure how well they scale to the 1 hp (750 W) plus of gasoline engines. CS Miller (talk) 20:37, 15 December 2011 (UTC)[reply]
While proton exchange membrane fuel cells are what most people think of when someone says "fuel cell", they aren't the only type of fuel cell. Due to their low operating temperature (which makes them attractive for automotive applications), PEM fuel cells primarily use hydrogen fuel. There are some PEM variants which directly use methanol or ethanol as fuel (Direct methanol fuel cells and direct-ethanol fuel cells), but those are less developed than the hydrogen fueled ones. In contrast to PEM fuel cells, there are also solid oxide fuel cells and molten carbonate fuel cells. These operate at much higher temperatures, and as a consequence can easily use gasoline directly as a fuel source (there are some prototype PEM systems which use gasoline fuel, but they almost all use a separate step to first reform the gasoline into hydrogen). The drawbacks to using SOFC & MCFC for automotive applications are that their high operating temperatures mean that they take a long time to start up, so aren't good for the intermittent usage inherent in cars. Additionally, their construction tends to be more fragile than PEM fuel cells, so are better suited to stationary applications, rather than being bumped around in a car. Researchers are working on both the robustness and the operating temperature of SOFCs & MCFCs, so it's possible that at some time in the future there will be a direct gasoline fuel cell that is suited toward automotive applications. -- 140.142.20.101 (talk) 00:33, 16 December 2011 (UTC)[reply]

In what way is the Higgs mechanism not a fundamental force?

Seems that all the bosons of the standard model are mediators of forces, so why not the Higgs boson? What makes it different from the others? Goodbye Galaxy (talk) 15:36, 15 December 2011 (UTC)[reply]

The Higgs boson is a spin-0 boson (AKA scalar boson). All other Standard Model bosons are Spin-1 Gauge bosons (AKA gauge vector boson). The key word here is gauge, not boson. The Higgs boson is not a gauge vector boson so it is not associated with a gauge interaction (AKA fundamental force). Dauto (talk) 15:52, 15 December 2011 (UTC)[reply]

Odd math

Let's take an event which lasted 4 years, say from 1941 to 1945. Now if I want to briefly describe this event year by year, I arrive at the odd number of 5 years (including 1941). Is there some reference in math to such an odd division?--46.204.24.211 (talk) 18:41, 15 December 2011 (UTC)[reply]

It depends on if/how you count the end points, also known as inclusive vs. exclusive counting. If the event started on Jan. 1 1941, and ended on Dec. 31, 1945, then it will have lasted five years. On the other hand, 1945-1941=4, but this is "not counting" the year 1941. So we get a duration of four years if the event starts on Dec 31 1941 and ends on Dec. 31 1945. There's some relevant info at Counting#Inclusive_counting, and also at Closed_interval#Excluding_the_endpoints. SemanticMantis (talk) 19:02, 15 December 2011 (UTC)[reply]
See also: fencepost error. --Carnildo (talk) 02:10, 16 December 2011 (UTC)[reply]
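The inclusive/exclusive distinction above is the same off-by-one (fencepost) trap familiar from programming; a quick sketch in Python (variable names are just illustrative):

```python
start, end = 1941, 1945

duration = end - start             # exclusive counting: subtraction skips 1941
years_described = end - start + 1  # inclusive counting: both endpoints counted

print(duration)          # 4 - the event "lasted four years"
print(years_described)   # 5 - five calendar years to describe year by year
print(list(range(start, end + 1)))  # [1941, 1942, 1943, 1944, 1945]
```

Note that Python's `range` is itself half-open (it excludes the stop value), which is why `end + 1` is needed to list all five years.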

December 16

principal stress

"Hydraulic fracturing in rocks takes place when the fluid pressure within the rock exceeds the smallest principal stress plus the tensile strength of the rock".

What does this mean? Does principal stress increase with depth? When fracture occurs -- is there negative feedback inhibiting further fracture? Or is there positive feedback (other than that there are now more cracks for fluid). elle vécut heureuse à jamais (be free) 00:37, 16 December 2011 (UTC)[reply]

Does Stress_(mechanics)#Principal_stresses_and_stress_invariants help? --Jayron32 04:14, 16 December 2011 (UTC)[reply]
It's literally all Greek to me. I don't understand tensors. I'm a biochemistry student. Basically, what governs crack propagation in hydraulic fracturing deep underground? At what point does the rock stop fracturing? elle vécut heureuse à jamais (be free) 04:44, 16 December 2011 (UTC)[reply]
Hey, if it were greek I may have understood more of it. I'm a chemistry teacher. I also never got to much of the higher algebras either. It was what I could find on the topic when I searched. Knowing what you are looking for, does the Wikipedia article Hydraulic fracturing or any references therein help? --Jayron32 04:49, 16 December 2011 (UTC)[reply]
It is basically just a fancy way of saying that a rock will break when the force applied to it is greater than the minimum force required to break the rock. Fractures propagate easily within a single crystal / rock, but geology is sufficiently heterogeneous that such advantageous fracture growth tends to be limited in extent. In general, when you apply a hydraulic overpressure to a geologic formation the effective overpressure will decrease with distance from the well (often as something around 1/r), which limits the distance over which hydraulic fracturing is generally possible from a single well. Dragons flight (talk) 08:53, 16 December 2011 (UTC)[reply]
Because of the way that rocks are laid down, there is often a direction of least stress. For example a rock might have been formed from sand and shale particles laid down in a prehistoric river. Over centuries the sand which would later form a rock would tend to become oriented in the direction of the river. After the sand in the now dry river bed is buried and compacted to form rock, secondary effects start to happen. The rock may be tilted, or bent by faulting and mountain uplift. The tilting and bending will usually be in different directions than the river, so the rock ends up with a variety of stresses. Additionally, due to the weight of the rock above it, other stresses will be created. So, what that statement means is the rock will fracture in its weakest direction. If the rock has been bent enough by geologic forces, there will be millions of microfractures in the rock, which will also help start fractures.
When someone attempts to hydraulically fracture rock, they will first run sensors to try and determine the least stress direction and mechanical properties of the rock. They then create a model of the pressure required and the direction that the fractures should go. They use casing and perforations to control where the fractures begin and then start pumping fluid downhole. When the pressure exceeds the strength of the rock, the weakest spot cracks open. The surface pressure will suddenly drop and the frac engineer will know that the fracture has started. They then play with pressure, pump rates and what types of sand they pump downhole to try and extend the cracks. As Dragons flight says, the pressure decreases with distance, due to friction between the fluid and the rock and also due to an increasing surface area. Since pressure is force per unit area, if the area increases it takes more force to keep the same pressure. Eventually it becomes too expensive to open the fractures up any more, and the process stops. Tobyc75 (talk) 18:37, 16 December 2011 (UTC)[reply]
That quote originally comes from here, but not all of the relevant parts are visible (at least to me), so it lacks a full explanation. To produce a tensile fracture (that is, a fracture in which the direction of opening is perpendicular to its length - also known as an 'opening mode' or 'Mode I' fracture in fracture mechanics) at depth in a wellbore (borehole), it is necessary to overcome the effects of the weight of the pile of rock sitting above, which produces what is referred to as a confining pressure. Raising the fluid pressure offsets the confining pressure until the rock eventually reaches the necessary condition for tensile fracture to happen. This happens naturally in the formation of mineral veins, where the high fluid pressure comes from metamorphic reactions that produce fluids, particularly water. The fracture orientation will be in the plane of the maximum and intermediate principal stresses and perpendicular to the minimum stress direction. Accidental or deliberate hydrofracturing is sometimes used in boreholes as a way of determining the orientation of the stress field at depth. Mikenorton (talk) 20:38, 16 December 2011 (UTC)[reply]
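The criterion quoted at the top of this thread reduces to a one-line inequality: fracturing starts when fluid pressure exceeds the minimum principal stress plus the tensile strength. A minimal sketch in Python (the numbers and the rough 1/r pressure fall-off are illustrative assumptions based on the replies above, not field data):

```python
def will_fracture(p_fluid, sigma_min, tensile_strength):
    """Tensile (Mode I) hydraulic fracture criterion from the quoted text.

    All quantities in the same units (here, MPa)."""
    return p_fluid > sigma_min + tensile_strength

# Illustrative numbers only: overpressure decaying roughly as 1/r away
# from the wellbore, as Dragons flight describes, so fracturing is
# self-limiting in distance.
p_well, sigma_min, t0 = 60.0, 45.0, 5.0
for r in (1, 2, 5, 10):
    p = p_well / r
    print(r, will_fracture(p, sigma_min, t0))
```

At the wellbore (r = 1) the criterion is met; a short distance away the decayed pressure falls below the threshold and propagation stops, which is the negative feedback the questioner asked about.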

the meaning of isogene and examples are welcomed

Hi, everyone,

What I am puzzled by is the exact meaning of the term "isogene".

Does anyone have an answer? A detailed explanation will be appreciated. Liujem (talk) 04:33, 16 December 2011 (UTC)[reply]

There are several unrelated concepts which use similar terms:
  • Isogeny is a mathematical term from Algebraic geometry, that refers to a method of mapping one object to another.
  • Isogenicity is a synonym for Zygosity, which is a genetic term referring to the relationship between genes on the two homologous chromosomes in a genome.
  • Isogenic human disease models are simpler genomes used to study various diseases, especially cancer.
Does any of those help? --Jayron32 04:44, 16 December 2011 (UTC)[reply]
Isogene has two meanings: (1) a line on a map showing the distribution of a gene, by analogy to an isobar, isotherm, etc[9] (2) a copy of a gene that occurs multiple times in an organism's genome - this sense is common in biology[10][11] but I can't find a reference for a definition. --Colapeninsula (talk) 14:31, 16 December 2011 (UTC)[reply]

Gas cloud entering black hole in milky way

Would this be visible to any extent without massive space telescopes? Something like what this image shows: http://4.bp.blogspot.com/_YuR6V_Yr7Bk/S_0PvCTAelI/AAAAAAAAFF4/wNBkqw_INTM/s1600/black+hole.jpg — Preceding unsigned comment added by 109.224.25.14 (talk) 06:41, 16 December 2011 (UTC)[reply]

There's no fundamental reason why you couldn't see it if conditions were favorable. Such observation depends, particularly, on how far away the black hole is. Assuming that we're talking about something with the approximate visible-spectrum luminosity of the sun, it would have to be within 50 light years or so to be naked-eye visible. Mostly, though, such phenomena radiate in the x-ray spectrum, which you're not going to see except via specialized equipment. — Lomn 13:48, 16 December 2011 (UTC)[reply]
The centre of the Milky Way is heavily obscured by dust along the line of sight. At visible wavelengths (i.e. those that the human eye can register), essentially everything is absorbed. At infrared wavelengths, the absorption is much less severe which is why scientific observations of the centre of the Milky Way, such as the one that identified the gas cloud, are done in the infrared, which is essentially a matter for professional telescopes (not necessarily space-based - the interesting observations are made from the ground, notably with ESO's Very Large Telescope, and adaptive optics). In addition to X-ray emission, there may also be radio emission, presumably on a longer time scale, though. --Wrongfilter (talk) 14:53, 16 December 2011 (UTC)[reply]
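Lomn's "within 50 light years or so" figure can be reproduced from the standard distance-modulus relation m = M + 5 log10(d / 10 pc). A quick sketch in Python (the constants are commonly quoted values; dust extinction, which Wrongfilter notes dominates toward the Galactic centre, is ignored):

```python
# Distance at which a sun-like source reaches the naked-eye limit.
M_SUN = 4.83           # absolute visual magnitude of the Sun (commonly quoted)
NAKED_EYE_LIMIT = 6.0  # typical dark-sky limiting apparent magnitude
PC_TO_LY = 3.2616      # parsecs to light years

# Invert m = M + 5*log10(d / 10 pc) for d:
d_pc = 10 ** ((NAKED_EYE_LIMIT - M_SUN) / 5 + 1)
print(round(d_pc * PC_TO_LY))  # roughly 56 light years
```

The result of a few tens of light years is consistent with the "50 light years or so" estimate above; anything at the Galactic centre, some 26,000 light years away, is far too distant (even before accounting for dust).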

how are promoter sequences discovered?

A rather urgent question-- thanks!

You mean, as (concisely) described here? --Ouro (blah blah) 12:45, 16 December 2011 (UTC)[reply]

Celsius vs. Kelvin

What is the relation between the Celsius and Kelvin degree? — Preceding unsigned comment added by 77.28.22.143 (talk) 14:49, 16 December 2011 (UTC)[reply]

The kelvin is exactly the same size as a degree Celsius. See Kelvin#Use_in_conjunction_with_Celsius. The difference is where the two scales start: 0 kelvin means absolute zero, but 0 degrees Celsius is the freezing point of water (at least historically; there are some minor modern modifications to the definition of 0 degrees C). Note that "degrees Kelvin" is not proper usage; the kelvin is the unit. For example, we say "100 kelvin", but "100 degrees Celsius". SemanticMantis (talk) 15:09, 16 December 2011 (UTC)[reply]
... so, just in case this is not clear from the linked articles, to convert degrees Celsius to Kelvin, just add 273.15 Dbfirs 17:13, 16 December 2011 (UTC)[reply]
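Since the two scales differ only by that fixed offset, the conversion is a one-liner each way; a minimal sketch in Python (function names are just illustrative):

```python
def celsius_to_kelvin(t_c):
    """Shift by the fixed offset; the size of the degree is identical."""
    return t_c + 273.15

def kelvin_to_celsius(t_k):
    return t_k - 273.15

print(celsius_to_kelvin(0))       # 273.15 (freezing point of water)
print(kelvin_to_celsius(0))       # -273.15 (absolute zero in Celsius)
print(kelvin_to_celsius(273.15))  # 0.0
```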

Looking for details behind a Mr. Wizard science trick

One time when I was a kid, I was watching Mr. Wizard on TV and he did this thing where he took one clear liquid and poured it into a pitcher holding another clear liquid, mixed them around a little (the mixture stayed clear), and then had a kid assistant hold the pitcher high while pouring it into another pitcher, and he counted down "three, two, one..." and clapped his hands (for dramatic effect, not that that has anything to do with it), and all at once, the liquid in the bottom pitcher, the liquid in the pouring arc, and the liquid still in the pouring pitcher, all turned a dark purple in an instant. What were those two liquids? 20.137.18.53 (talk) 15:07, 16 December 2011 (UTC)[reply]

That's an iodine clock. Classic, performed it a few times myself :) Grandiose (me, talk, contribs) 15:11, 16 December 2011 (UTC)[reply]
Thanks!20.137.18.53 (talk) 15:14, 16 December 2011 (UTC)[reply]

Damastes--redirects to Procrustes, but recommends "Huntsmen Spider"

Hello!

I am just wondering why "Damastes," one of the names for Procrustes, is suggested in addition to "Procrustes."

What is the connection between the word "Damastes" and the Huntsman Spider?

Or between Procrustes and the Huntsman Spider?

THANK YOU SO MUCH!

Jennifer — Preceding unsigned comment added by 98.97.183.38 (talk) 17:41, 16 December 2011 (UTC)[reply]

If you see the Huntsman spider#List of genera section, you can see that Damastes is also a genus of Huntsman spider. We don't yet have an article about the spider genus. -- Finlay McWalterTalk 19:01, 16 December 2011 (UTC)[reply]
If you're curious as to why there's a spider genus called Damastes (which seems to contain at least the species Damastes nossibeensis), we don't appear to have that information (and what little non-wikipedia info I can find on the spider online doesn't help either). -- Finlay McWalterTalk 19:10, 16 December 2011 (UTC)[reply]

Extracting work from a positive current

So in a fuel cell, hydrogen is ionized into a proton and an electron, and the electron then passes through a circuit and does work. OK, fine, but the proton passes through an electrolyte and doesn't do any work. Why not? Why can't we extract work from the proton? Is it possible? ScienceApe (talk) 18:02, 16 December 2011 (UTC)[reply]

You can extract work from such an electric current. It just so happens that in the practical setups you're typically thinking of, the efficiency would be very low. This is because in most configurations, the maximum current density that can be carried by positive ions is very low.
If you could construct a large current of positive ions flowing through an electrolyte, you could generate heat (for the same reasons that electron flow produces waste heat), and you could, e.g., drive an incandescent light. You could use positive ions as carriers of an electromagnetic wave in an antenna, and propagate a wave (converting energy from the current into electromagnetic energy carried away as a propagating wave). You could, with some effort, build devices that operate with positive ions flowing through wet electrolytes, but otherwise behave analogously to the ways that we extract energy from electron currents in copper wires. You could even create a terrifically inefficient wet-chemistry semiconductor, limited only in practicality by its poor noise, power requirements, and frequency response - but not fundamentally very different from a solid-state semiconductor.
In most cases, including most plasmas and most wet electrolyte solutions, positive charge carriers are less mobile than negative charge carriers; so for optimum efficiency, we use electrons to carry current. This invariably just comes down to the fact that protons are more massive than electrons. Nimur (talk) 18:25, 16 December 2011 (UTC)[reply]
It's a little disingenuous to say that you extract work from the electron current and not from the proton current. You can't have the electron current unless you also have the proton current. In reality you're extracting work from the entire system (the complete circuit). There's only a limited amount of energy to extract from the conversion of a set amount of hydrogen, so what you're likely to find is that if you try to extract energy from the proton current directly, the current and/or voltage of the electron current will go down, reducing the amount of work you're able to extract from it. Given the difficulty of extracting from the proton current directly, you'll likely find the best way to extract the greatest amount of energy from the system is to place the load in the electron current, and minimize the resistance in the proton current path. -- 140.142.20.101 (talk) 19:49, 16 December 2011 (UTC)[reply]

December 17

By what means (what type of ship) - and for how long - would a Briton (specifically, a Scot or an Englishman) of high social standing have travelled from Great Britain to America in a.) 1767 and b.) 1909-1913? My query pertains to A.) John Witherspoon, signer of the American Declaration of Independence and B.) P.G. Wodehouse, author. How long did the trip take, and what was it like? 82.31.133.165 (talk) 00:41, 17 December 2011 (UTC)[reply]