Wikipedia:Reference desk/Archives/Science/2009 January 13

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 13

Paternity Test

How early can paternity tests be done? I know they can use amniotic fluid during pregnancy, but how early in the pregnancy? The cliche "I'm asking for a friend" is honestly true in this case. 76.14.164.181 (talk) 06:33, 13 January 2009 (UTC)[reply]

This sounds like a request for medical and/or legal advice. If you need help finding the father of a child, please seek medical help from a qualified, in-the-flesh, medical professional. --Jayron32.talk.contribs 06:39, 13 January 2009 (UTC)[reply]
Requesting medical advice would sound like this: "I have X, Y, and Z symptoms; what should I do to cure it?" I wasn't really asking for advice, but rather for information about a procedure, in case anybody knows the answer. 76.14.164.181 (talk) 06:46, 13 January 2009 (UTC)[reply]
There are two major invasive ways that genetic testing is carried out during gestation. One is amniocentesis, where some of the amniotic fluid is taken, the other is chorionic villus sampling (CVS), where a bit of the placenta is taken. Amniocentesis can be done around 15 to 18 weeks of pregnancy, and CVS can be done earlier, around 10 weeks. Both techniques also have not insignificant risks of miscarriage, and are usually only carried out for medical reasons, not for regular paternity testing. They also aren't cheap. Be aware, also, that one needs to sample DNA from the potential father and mother to do an accurate test, otherwise there isn't much point. There is also a less-invasive technique being touted these days, whereby you sample the mother's blood and extract DNA that is supposedly derived from the foetus. There is some debate about how accurate and reliable this is, and it isn't approved by many professional bodies (meaning it may not be admissible in court). Some companies claim they can get results as early as 13 weeks, but this certainly isn't well accepted by the scientific/medical community and it is certainly less reliable.
If this is something your friend is interested in for practical, rather than academic, reasons, the best advice you can give them is to see a doctor. Rockpocket 07:24, 13 January 2009 (UTC)[reply]
Rockpocket, thank you for your very detailed answer. I really appreciate it. 76.14.164.181 (talk) 07:47, 13 January 2009 (UTC)[reply]

Penetration of ultraviolet A rays in flesh

When a 5 mm diameter spot of UV-A rays is placed on a human body part (by a UV laser), what amount of energy (mJ/cm^2) in the UV spot is required for 5 cm penetration of the UV rays into the flesh? 123.201.1.238 (talk) 12:13, 2 January 2009 (UTC)crony

5 cm ? That's 2 inches. You'd need enough energy to vaporize most of the flesh above it. Are you sure you don't mean 5 mm penetration ? StuRat (talk) 16:40, 2 January 2009 (UTC)
5 mm penetration? You must be great with the ladies. ok... y'all can go back to answering the question seriously now. I will stop. --Jayron32.talk.contribs 21:40, 2 January 2009 (UTC)

Thank you for attempting my question, buddies, but I sincerely need the answer, and yes, it is quite possible that UV rays can penetrate 5 cm; the query is just with what energy... veterans, please try to answer the question, I really need it... Crony —Preceding unsigned comment added by 123.201.165.120 (talk) 06:34, 13 January 2009 (UTC)[reply]

The absorption coefficient of UV-A in flesh is about 50/cm, meaning that after 5 cm the intensity would have fallen off by a factor of e^(−50×5) ≈ 3×10^−109. There is no meaningful signal at 5 cm depth. The only way to get there would be to have a beam so intense you literally burn through and ablate the overlying layers. Dragons flight (talk) 09:06, 13 January 2009 (UTC)[reply]
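The attenuation arithmetic above is just the Beer–Lambert law. A minimal sketch in Python, assuming the quoted coefficient of roughly 50/cm for UV-A in tissue:

```python
import math

def transmitted_fraction(mu_per_cm, depth_cm):
    """Beer-Lambert law: fraction of incident photons surviving to a given depth."""
    return math.exp(-mu_per_cm * depth_cm)

MU_UVA = 50.0  # assumed absorption coefficient for UV-A in flesh, per cm

print(transmitted_fraction(MU_UVA, 5.0))   # ~3e-109 at 5 cm: effectively nothing
print(transmitted_fraction(MU_UVA, 0.5))   # ~1.4e-11 at 5 mm
print(transmitted_fraction(MU_UVA, 0.05))  # ~0.082 at 0.5 mm, i.e. about 8.2%
```

Note that the exponential never reaches exactly zero (the asymptotic point made further down the thread), but it becomes physically meaningless long before 5 cm.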
This is what I'd said, too, Crony, so where exactly did you hear that UV can penetrate flesh to that depth ? Using Dragon's method, I only get 1.4×10^−11 at 5 mm, but do get 8.2% penetration at 0.5 mm, could that be what you meant ? Or are you actually talking about burning away the flesh, as in a weapon ? StuRat (talk) 13:40, 13 January 2009 (UTC)[reply]

I don't know what this whole thread is really about, but when I put my hand over a (normal) 40 W incandescent light bulb that is narrowly directed, the rays clearly make it through the depth of my finger (~1 cm), illuminating it red, but not my palm (either at the knuckle, at ~2 cm, or near the wrist, more like ~3.5 cm) -- these parts of my hand stay unilluminated; you can't tell there's a light behind them. I imagine UV light must have an easier or harder time passing through tissue than visible light, but I don't see why the question isn't being answered as asked -- it is clearly asking what amount of UV radiation is required for ANY penetration at 5 cm -- ie even faint, just a few photons, much less than 8% penetration. —Preceding unsigned comment added by 82.124.85.178 (talk) 14:52, 13 January 2009 (UTC)[reply]

I agree, visible light can clearly penetrate a significant amount of flesh, so UV should be able to do so as well (I would think UV penetrates better than visible, since it is higher energy). If the question really is for any penetration, then the answer is that there is always going to be some penetration, as long as there are enough photons to start with, since it's an asymptotic process (you get fewer and fewer photons making it through, but it never reduces to none). --Tango (talk) 15:06, 13 January 2009 (UTC)[reply]
No, UV has an absorption cross-section 1-2 orders of magnitude higher in typical tissues than red light does and will be stopped much more easily. Dragons flight (talk) 19:59, 13 January 2009 (UTC)[reply]
It was answered. I believe that UV doesn't penetrate very well at all (this is why we get sunburns, after all; if it passed through the skin it wouldn't burn it, and people would get "muscle-burn" and muscle cancer from sunbathing). If you want one photon of light to penetrate to that depth, you would need to use 3×10^109 photons, which is probably more than there are in the universe, and would also vaporize the flesh. So, it simply can't happen without vaporizing the flesh. StuRat (talk) 15:06, 13 January 2009 (UTC)[reply]
Assuming low-energy ultraviolet and that Dragon's numbers are correct, you need about 1.4×10^91 joules of energy, or about 3.5×10^31 times the amount of mass-energy in the observable universe. --Carnildo (talk) 23:28, 13 January 2009 (UTC)[reply]
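As a rough order-of-magnitude check, one can multiply the photon count needed by the energy of a single photon, E = hc/λ. A sketch, assuming 400 nm as the "low-energy" edge of the UV band and using the exact attenuation factor e^250 rather than the rounded figure quoted above (which is why it lands nearer 2×10^90 J than 1.4×10^91 J, the same staggering scale either way):

```python
import math

H = 6.626e-34          # Planck constant, J*s
C = 2.998e8            # speed of light, m/s
WAVELENGTH = 400e-9    # assumed wavelength at the low-energy edge of UV, m

photons_needed = math.exp(50.0 * 5.0)    # inverse of the e^(-250) attenuation
energy_per_photon = H * C / WAVELENGTH   # roughly 5e-19 J per photon
total_energy = photons_needed * energy_per_photon
print(f"{total_energy:.1e} J")           # on the order of 10^90 J
```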
This paper (in particular, figure 4) illustrates the dramatic attenuation of UV (across a range of wavelengths) by corneal epithelium, which is essentially transparent to a wide range of wavelengths of visible light. Really fascinating how well organic molecules can attenuate UV light. --Scray (talk) 04:07, 14 January 2009 (UTC)[reply]

How does alternating current flow in an electrical wire?

In alternating current flow between phase and neutral, is there any rise in the neutral voltage against ground? If so, why can we not measure it? Ranga333eie (talk) 07:44, 13 January 2009 (UTC)[reply]

What you're calling "phase" I call "hot". I assume you're talking about two-conductor plus ground 110 volts like in the United States. Neutral is ground, unless there's something broken somewhere. Both neutral and the grounding wire are connected to earth at the breaker box. See "Mains electricity" and "AC power plugs and sockets". --Milkbreath (talk) 11:26, 13 January 2009 (UTC)[reply]
While there normally isn't a voltage difference between the neutral and the ground, there are all kinds of sloppy installation practices which may lead to significant voltage on the neutral — in other words, don't rely on the white wire to be cold when you're poking around inside an electrical panel.
There are also some systems – typically for more specialized purposes – that use two 'hot' conductors and no 'neutral'; see split-phase electric power. TenOfAllTrades(talk) 14:25, 13 January 2009 (UTC)[reply]
If a phase and neutral are carrying current, there is a voltage drop along each conductor equal to the resistance of the segment of conductor in ohms times the current in amperes. A long extension cord carrying 12 amps might have 120 volts at the outlet and 116 at the appliance: there would be a 2 volt drop in the phase or "hot" lead and a 2 volt drop in the neutral lead. At the appliance, there would be a 2 volt difference between the neutral wire and the ground wire, which carries no current outside of fault conditions. At the main power panel (in normal U.S. installations) the neutral and earth ground are bonded together and have no voltage between them. Neutrals are not magic wires immune to Ohm's Law. The voltage drop is not limited to the phase wire in a 2 conductor plus ground circuit. I have certainly measured a voltage difference of a volt or 2 between neutral and ground at various points in a residential or commercial structure. Edison (talk) 15:54, 13 January 2009 (UTC)[reply]
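The extension-cord example above is just Ohm's law applied to each conductor. A minimal sketch, with the per-conductor resistance back-computed from the figures in the example (a hypothetical cord, not measured data):

```python
def voltage_drop(current_a, resistance_ohm):
    """Ohm's law: V = I * R along one conductor."""
    return current_a * resistance_ohm

CURRENT = 12.0              # amps drawn by the appliance
R_CONDUCTOR = 2.0 / 12.0    # ohms per conductor, implied by a 2 V drop at 12 A

hot_drop = voltage_drop(CURRENT, R_CONDUCTOR)      # volts lost in the hot lead
neutral_drop = voltage_drop(CURRENT, R_CONDUCTOR)  # volts lost in the neutral lead
v_at_appliance = 120.0 - hot_drop - neutral_drop   # 116 V reaches the appliance

# The ground wire carries no current in normal operation, so it stays near 0 V;
# the neutral-to-ground difference at the appliance equals the neutral's drop.
neutral_to_ground = neutral_drop                   # about 2 V
```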

Ground Glassware Joint sizes

Hi. In USA glassware you have the joint size 24/40, while in the UK we have sizes 24/29 and 29/32, the numbers being the mm sizes of the smallest (bottom) and largest (top) measurements of the joint. I wanted to ask if anyone knows whether the USA & UK standard sizes are on the same angles as each other (eg 24/29 or 29/32 would fit into a 24/40 but just not go all the way down/up the joint) or whether you wouldn't even be able to fit UK & USA joints together at all. Thanks AllanHainey (talk) 13:17, 13 January 2009 (UTC)[reply]

From Ground glass joint#Conically tapered joints:
The conically tapered ground glass joints typically have a 1:10 taper and are often labeled with a symbol consisting of a capital T overlaid on a capital S which stands for "Standard Taper"....
The US and ISO joints differ only in the length not in the slope, and can be used in combination.
Hope that helps! TenOfAllTrades(talk) 14:16, 13 January 2009 (UTC)[reply]
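For the arithmetic behind that: with a 1:10 taper, the diameter shrinks by 1 mm for every 10 mm of ground length, so the narrow-end diameter follows from the designation (conventionally the widest diameter over the ground length, in mm). A quick sketch:

```python
def narrow_end_diameter(wide_mm, length_mm, taper=0.1):
    """For a 1:10 conical taper, the diameter shrinks 1 mm per 10 mm of length."""
    return wide_mm - taper * length_mm

print(narrow_end_diameter(24, 40))  # US 24/40: tapers from 24 mm down to 20 mm
print(narrow_end_diameter(24, 29))  # UK 24/29: same 24 mm top, stops at 21.1 mm
```

Because the slope is the same, the shorter UK joint seats in the longer US one; only the mated length differs, as the quoted article says.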

The genetic aspects of personality ?

Noting that different breeds of cats and dogs have quite different personalities (or is that "petalities" ?), there seems to be a genetic basis for these differences. So, then, is there a genetic basis for differences in human personalities ? Has this been studied ? I'm looking for things like "people with gene A tend to be constantly nervous". StuRat (talk) 14:57, 13 January 2009 (UTC)[reply]

One problem with humans (and probably even with pets) is that categorization of personalities is not exactly an exact science. It's a very complicated phenotypical expression, and no doubt has a complicated genotypical expression as well. No doubt most of us would object if we heard what another person decided our basic "personality" was. --98.217.8.46 (talk) 15:22, 13 January 2009 (UTC)[reply]
It's true that categorization isn't exact - but there must have been studies with identical twins separated soon after birth. If those have identical personalities then it's genetic - if they don't then it's (at least in part) nurture. An exact categorization isn't required - a very approximate test (Type A and Type B personality theory for example) would do fine for such a black-and-white thing. However, I bet that what we call "personality" isn't a single trait - so "tendency to fly into a rage over the slightest thing" could easily be a hormonal response that's 100% genetic - but "workaholic" could be that one twin has a job they like while the other got stuck in a career that they are ambivalent about. Hence I strongly suspect that any such test would be unable to report that the twins have identical "personality" - although some details of that personality are genetic. SteveBaker (talk) 15:51, 13 January 2009 (UTC)[reply]
Most of the research right now is being done in areas of psychopathology: depression, schizophrenia, anxiety, obsessive-compulsive disorders, etc, since these phenotypes can be reproducibly categorized according to certain established standards (DSM-IV). There is also interest in the genetics of personality disorders (think of this like exaggerated archetypal personalities) where there is evidence for heritability. The bottom line is that there is almost certainly a genetic influence on personality but an equally important environmental ("nurture") component. If you view psychiatric phenotypes as a spectrum along which most of us fall somewhere in the middle, there will be a genetic influence on one's temperament that is acted upon by the environmental context -- the way a person was raised, extreme events in childhood, substance abuse, etc. to create the overall "personality". This is no different than any other complex trait. We just have to recognize the importance of both genetics and life experience and not become overly deterministic to think that a person with a particular allele of gene "A" will necessarily have trait "B" or exhibit behavior "C". We still hold sway over our own actions and people can and do learn to overcome personality traits that they view as being detrimental. --- Medical geneticist (talk) 19:12, 13 January 2009 (UTC)[reply]
Thanks. Have any genes been found so far that have been verified to affect personality traits ? StuRat (talk) 20:32, 13 January 2009 (UTC)[reply]
While twins-separated-at-birth studies are very useful, they are somewhat limited in that there aren't that many of them and they rely on a relatively small sample size. Of course, if each study shows the exact same personality then it does seem likely the trait is mostly inherited, presuming you really mean exactly the same and not just similar. Nil Einne (talk) 21:44, 13 January 2009 (UTC)[reply]
Hi Stu, serendipitously the article by Steven Pinker which appeared in the most recent edition of the New York Times Magazine, which I just finished reading, addresses that very question, among others, and is also a good read in and of itself.[1] Enjoy! - Azi Like a Fox (talk) 21:17, 13 January 2009 (UTC)[reply]
The problem with the situation where "people with gene A tend to be constantly nervous" is that gene A would not define a personality trait, but a behavioural disorder. If one was constantly nervous it would be very difficult to function within societal norms. Where traits or tendencies end, and pathologies and disorders begin, is sometimes difficult to define, but single genes that have been identified tend to inform on the latter, because the effects of dysfunction are much more apparent (and we typically characterize genes by their dysfunction, rather than their function). Therefore single genes that "encode" subtle personality traits — if they exist — are very difficult to identify because of their subtle phenotype. In all likelihood, personality traits such as shyness or confidence are the product of many interacting genes; and their environment, of course. Identifying a gene that has the sum effect of making you 1.2% more confident on average than someone with a different allele is a near impossible task! Especially given the size limitation of twin studies.
If we can't identify genes by their dysfunction in humans, we usually turn to animal models. A fundamental problem in behavioural genetics is that most animals make rather poor models for human behaviour. There are a few reasons for this that I have personal experience of. Firstly, I've been involved in two studies that have identified genes in animals that have dramatic, and sometimes quite bizarre, behavioural effects. In one study the deletion of a single gene resulted in male animals mating with each other. We followed that up by identifying a single gene that was sufficient to provoke male/male aggression. The reason you probably haven't heard about this before (and I have not exploited it to become the ruler of all mankind) is because both genes were lost in the human lineage and therefore have zero impact on human behaviour. This is a recurring theme when we study dramatic behavioural traits.
I have recently been working with a colleague on studying the molecular genetic basis of "fear", "nervousness" or "anxiety" using, as an experimental paradigm, the effect of cats and snakes on the behaviour of mice. I'm not going to tell you our results (they will be published soon enough), but during our studies it became very clear that "fear", "nervousness" and "anxiety" are all human conditions that we assign to animals based on what we think they are feeling given the context. We really have no good evidence that a mouse is "fearful" of a cat; all we can do is measure biomarkers, such as stress hormones, and record behaviours that best fit in with our (human) idea of fear. Therefore finding any gene that definitively encodes a human personality trait using an animal model is on inherently shaky ground.
That all said, there are some genes that were identified using animals models and/or naturally occurring examples of dysfunction, that are involved in human personality traits. One example is monoamine oxidase A, which appears to play some role in mediating aggressive behaviours in both mice (PMID 10591056) and humans (PMID 8211186, PMID 18463263). Rockpocket 02:15, 14 January 2009 (UTC)[reply]
Rock, I'd expect you'd have better results studying chimps than mice, as they are far more similar to humans in both behavior and genetics. Were all your studies done using mice ? StuRat (talk) 15:55, 14 January 2009 (UTC)[reply]
Using mice, yes. But primates have their own limitations: unlike mice they are not genetically tractable, which is almost a requirement to definitively prove the function of any gene. Monkeys also have extremely complex social behaviours, most of which are not innate, making it much harder to isolate the genetic component from the environmental component. If all you experienced was isolation in a small cage in a sterile lab, then it's likely that the environmental effects on your personality would rather overshadow any genetic modifiers. The same goes for chimps.
It would be logistically and financially prohibitive to obtain the number of chimps required to make the results statistically significant. For example, in the studies I mention, the number of mice used is probably close to the total number of chimps in US labs at any given time! Finally, obtaining legal and/or ethical approval to use them for these types of experiments would be near impossible (not to mention morally questionable). Rockpocket 00:54, 15 January 2009 (UTC)[reply]
I would think it would be ethical and reasonably cheap to study a group of chimps, say in a zoo, note which ones are aggressive and which are submissive, for example, then check out their DNA to see if the aggressive ones have a gene in common or the passive ones have another gene in common. Next you could look for this gene in people. It wouldn't be a rigorous proof of the function of the gene, at that point, but would be a good indicator that further study is warranted. StuRat (talk) 04:40, 15 January 2009 (UTC)[reply]
You would think wrong, because it's not that simple, I'm afraid. We would have to have a defined and controlled behavioural assay. For aggression, that means purposely making animals fight with each other. No zoo would give you permission to stage monkey fights with their prized primates. And even if they did, I can't imagine the IACUC would look upon it favorably. But let's just say they did.
We would need to study enough chimps to find a single version of a particular gene that was common to all the aggressive animals, and not found in any of the submissive animals. Chimps probably have around 20,000 coding genes. Each chimp has two copies of each gene (called "alleles"). But there are many different possible alleles for each gene, so a population of 4 chimps could have as many as 16 different alleles of a single gene between them, or they could all share the same allele. Different alleles could have exactly the same phenotype, or they could have a different phenotype. A number of factors influence the number of alleles in a population, including relatedness of the individuals and how inbred the population is. So, to "check their DNA" we would need to sequence 40,000 alleles per chimp. That costs around 2 cents a base, and the average gene is around 2,500 bases. That means we have a base cost of $2 million per chimp. This is only looking at coding sequences, remember; if we wanted to look at regulatory elements (which are just as likely to be involved, if not more so) then you could double this cost.
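The cost arithmetic above is easy to check, using the thread's own assumed 2009-era figures of 2 cents a base and 2,500 bases per gene:

```python
GENES = 20_000                    # approximate chimp coding genes
ALLELES_PER_CHIMP = 2 * GENES     # two copies of every gene
BASES_PER_GENE = 2_500            # rough average length, per the estimate above
COST_PER_BASE = 0.02              # dollars per base sequenced (assumed)

cost_per_chimp = ALLELES_PER_CHIMP * BASES_PER_GENE * COST_PER_BASE
print(f"${cost_per_chimp:,.0f} per chimp")            # $2,000,000
print(f"${20 * cost_per_chimp:,.0f} for 20 chimps")   # $40,000,000
```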
So how big a sample would we need to find a single gene that has the same alleles among all the aggressive animals, and a different one among all the submissive animals? How many other alleles would the aggressive animals share, and the submissive animals not share, just by chance? I don't know the answer to this, but consider a comparison: if you choose 6 humans with two different characteristics (aggressiveness vs shyness), it wouldn't be a big surprise to find the 3 aggressive people have blue eyes, while the 3 shy people had brown eyes, would it? Multiply that element of chance over 40,000 genes and you would find that a large number of alleles would segregate with aggressiveness by nothing more than coincidence (not to mention genetic linkage). The bigger the sample size, the less likely that would happen by chance, right? An additional problem is that different alleles can have the same phenotype, so it's possible that the aggressive animals could have different alleles from each other, yet still all end up making the same functional protein. It's likely that we would need a sample size of many tens (at least) to get rid of enough false positives to make a candidate approach valid. Realistically, because of a lot of environmental and other confounding factors, hundreds or thousands would really be needed.
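That blue-eyes coincidence can be put on a rough numerical footing. In the toy model below (my illustration, not a calculation from the thread), each animal independently carries one of two equally common variants at every gene, and a "false positive" is a gene where all aggressive animals happen to draw one variant and all submissive animals the other:

```python
def expected_false_positives(n_per_group, n_genes=20_000, p=0.5):
    """Expected genes that perfectly separate two groups purely by chance.

    Each of the 2*n_per_group animals independently draws one of two variants
    with probability p; a perfect split can happen two ways (group A gets
    variant 1, or group A gets variant 2)."""
    p_perfect_split = 2 * p ** (2 * n_per_group)
    return n_genes * p_perfect_split

print(expected_false_positives(3))   # 625.0: a 3-vs-3 study drowns in spurious hits
print(expected_false_positives(10))  # ~0.04: 10-vs-10 makes chance splits rare
```

Even this crude model shows why the sample size, not the sequencing, is the real hurdle.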
But let's keep this simple and assume we could get a zoo that has at least 20 chimps that were significantly unrelated and controlled for age, sex and environment. That they were willing to let us stage fights. That we could devise a controlled assay protocol AND that the IACUC gave us permission. Then we would need around $40+ million to get us to the stage where we could say - at best - "these dozens of genes show a strong correlation with aggressive behaviour in chimps, further research is warranted." Considering most top labs run on a budget of between $0.25M to $1M a year, that is a huge commitment of funds for very little pay-off. So would you like to start writing that NIH grant proposal or shall I? ;) Rockpocket 23:13, 15 January 2009 (UTC)[reply]
Stage fights ? Certainly we can come up with a safer way to measure aggressiveness than that. Put each chimp in a room with a club and a banana in a box that won't open. If the chimp gets mad and bashes at the box with the club, then call it aggressive. StuRat (talk) 02:38, 16 January 2009 (UTC)[reply]
The experiment could be much less expensive (~$2000 per sample) if you used a more efficient SNP detection system such as a microarray rather than sequencing each allele. But this doesn't change the statistical hurdle, as stated by Rockpocket, that you need a large sample size to detect the alleles that are enriched in the more "aggressive" animals. I don't know how many chimps are in captivity right now but I doubt there would be enough to reach statistical significance. And all of this is ASSUMING that we have the right assay for "aggressiveness". In a primate society there is a dominance hierarchy which governs behaviors -- a young male, even if very "aggressive" towards others in the group, would still act "submissive" to the alpha male -- you would certainly have to take this into account. @StuRat: how do you know that your proposed assay outcome (bashing a box to get a banana) isn't a measure of some other characteristic than aggression? There are other possible explanations for this behavior (proneness to frustration, lack of problem solving skills). That being said, I'm not sure that staging fights is the right assay, either, but I'm not an expert in the behavior of non-human primates. The point Rockpocket is making is that the experiment is extremely difficult to do correctly and probably MORE difficult to convince a grant agency to fund. That's why it hasn't been done, and one reason why behavioral genetics is so tricky. --- Medical geneticist (talk) 15:18, 16 January 2009 (UTC)[reply]
The chimp being alone in the room is designed to eliminate any effect of the dominance hierarchy. I just came up with the banana in a box test off the top of my head; I'm sure a better experiment could be designed with a bit more thought. StuRat (talk) 15:29, 16 January 2009 (UTC)[reply]

Medication Expiration

(I am not asking for medical advice.) Most medications have expiration dates. What do these dates mean? Would expired drugs be theoretically bad for you, or just less potent? What causes the expiration? Is it similar to food, in that it can become contaminated after so long? Or is it the chemicals in the medication subtly changing, like when hydrogen peroxide becomes water? Anythingapplied (talk) 17:41, 13 January 2009 (UTC)[reply]

I believe in most cases the drugs just become less potent. There are all kinds of different ingredients in drugs, though, in addition to the active ingredients. It's possible the coating of some drugs may go mouldy or something similar, or various other things. Unless you are a pharmacist or doctor and know what you are doing, you should always obey the expiration dates - if you have out of date drugs try taking them to your local pharmacy, they may well dispose of them for you (it's possible they may end up harming people or animals if thrown in the bin or flushed down the toilet). --Tango (talk) 18:09, 13 January 2009 (UTC)[reply]
It often means that they've tested the drugs up to a certain time period, and can certify them safe up until then, but have no idea what, if anything, might happen beyond that date, or how long it might take. For example, one possibly dangerous problem could be if time-delayed-release capsules instead release the med all at once. StuRat (talk) 20:20, 13 January 2009 (UTC)[reply]
See Wikipedia:Reference_desk/Archives/Miscellaneous/2008_September_11#Expiration date of common medications for my last answer. --—— Gadget850 (Ed) talk - 20:32, 13 January 2009 (UTC)[reply]
A reference that might be of interest is here (possibly same study as Gadget850 referred to in previous answer). --NorwegianBlue talk 21:04, 13 January 2009 (UTC)[reply]
I believe the US military did some study like this. They have huge stockpiles of all sorts of drugs stashed away in case of war - and they had the problem that they were throwing away (and repurchasing) vast numbers of drugs because they'd reached their expiration date. As I recall (and I don't have a handy reference) they found that most drugs were 100% effective for VASTLY longer than the expiration date indicated. It's certainly the case that the suppliers aren't doing adequate testing to determine the true safe storage time...but then that's not really a practical problem for most users. Drug stores and hospitals are going to get through their supplies plenty fast enough - and for individuals, it's probably a good idea not to take drugs that you were prescribed years ago without checking with your doctor that they are still appropriate to your condition - so making you toss them out rather than "self-medicating" is probably a good thing. It's really only people like the Army who stockpile and don't use large quantities of drugs who actually care about this. SteveBaker (talk) 15:10, 14 January 2009 (UTC)[reply]
It's not only them, no. I need aspirin maybe once or twice a year. At this rate, they expire before I can use them up. Many other meds I have fall into the same category. There is also the general "preparedness" thing, of having supplies of antibiotics and other meds available in case of emergency (since, during a plague, they would be hard to find). Unfortunately, there isn't much incentive for pharmaceutical companies to enable you to keep them longer, they would prefer if you toss them and buy more. Thus, we need others, like the military, to do studies that actually determine the age at which meds start to decline, and the way in which they decline. In the case of the plague, for example, where there aren't enough new meds to go around, is it better to give victims the expired meds or to let them take their chances with the disease ? StuRat (talk) 15:41, 14 January 2009 (UTC)[reply]
Just to clarify, who are you suggesting should have supplies of antibiotics for emergencies? You should never self-medicate with antibiotics even if you know it won't do you any harm directly (which you wouldn't anyway, but nevertheless), doctors have to be very careful about when they prescribe them and which ones they prescribe in order to avoid the build up of resistance. If people start self-medicating it messes all that up. In the case of a major epidemic, you run the risk of the plague, or whatever, becoming antibiotic resistant resulting in the deaths of millions. --Tango (talk) 16:07, 14 January 2009 (UTC)[reply]
The advice that "you shouldn't self-medicate" works fine for normal conditions, but during an out-of-control pandemic that has completely overloaded the health care system, so they have no meds or beds for anyone else, self-medication and death may be the only options left. StuRat (talk) 17:55, 14 January 2009 (UTC)[reply]
I can't imagine such a situation being likely in the developed world, though. Most serious pandemics nowadays tend to be viral, likely mostly because the fact that we have antibiotics and efficient health systems means a bacterial pandemic is far less likely, and perhaps also because good hygiene and sanitation are quite effective at controlling most bacterial diseases while some viral ones are more difficult to control. If a serious bacterial pandemic does arise in the developed world, it seems most likely to me it will be a resistant strain anyway. Nil Einne (talk) 19:07, 14 January 2009 (UTC)[reply]
Yes - I agree. If the practice of hoarding antibiotics were widespread, the consequences of uncontrolled use could be quite serious and actually trigger the very pandemic these people were trying to avoid. Evolution is a powerful enemy! When the doctor - and the instructions on the label - tell you "You must complete the course", they very definitely mean it. If you feel better after taking half of the pills provided and hoard the rest, then the bacteria that you failed to kill are the ones that were able to hold out longest against the antibiotic onslaught. When you stop treating them - you may have the antibodies to keep them away - but enough will survive to spread to someone else, who will then discover that the very same antibiotic that cured you doesn't work for them. When people even OWN half-full bottles of antibiotics - the problem has already begun. Going on to take half a bottle of them again later simply repeats the problem on another strain of bacteria. If enough people do that - then a potentially curable pandemic becomes one that is resistant to every kind of commonly prescribed antibiotic in existence. When it says "Complete the course - take ALL of the pills" - it means it! SteveBaker (talk) 22:20, 14 January 2009 (UTC)[reply]

Stars at night

If I look up at the stars in England, UK, are they the same stars my brother can see in Australia? —Preceding unsigned comment added by Gordon6767 (talkcontribs) 19:09, 13 January 2009 (UTC)[reply]

No, people in the Southern Hemisphere are looking at a different region of outer space than people in the Northern Hemisphere. Dragons flight (talk) 19:50, 13 January 2009 (UTC)[reply]
Let me modify that a bit. There are some stars only visible from England, like Polaris, and some stars only visible from Australia, like the Southern Cross, but there are other stars you can both see, which are "directly above" the equator. StuRat (talk) 20:15, 13 January 2009 (UTC)[reply]
Just imagine yourself standing on a sphere (which you basically are) - your local "horizon" is a small circle drawn around your position on the sphere, and your "sky" is a hemisphere with its flat face parallel to your horizon. The earth spins on its axis - so different stars enter and exit your hemisphere depending on the time of day and time of year. If you and your brother were on precisely opposite sides of the planet (which you more or less are) then your hemispherical "skies" wouldn't overlap - so there would be no stars that you could both see at the same time. However, the earth spins and orbits the sun - which will bring SOME of the stars that your brother saw twelve hours ago (and will see again in another 12 hours) into your field of view. The degree to which that happens depends on your latitudes - if one of you were at the North pole and the other at the South pole, then the spinning of the earth through the day would never bring any stars that one of you saw into the view of the other - and if you were both just on opposite sides of the equator, then the sky that one of you saw would be in more or less exactly the same place 12 hours later. But you are neither of those things - so some stars that you can both see will circle into view every night, while others will be forever invisible to one or the other of you. Because the earth's axis is tilted, the stars that you can both see will change through the year. However, all of that is predicated on you being on EXACTLY opposite sides of the planet - and you aren't. Hence, there ought to be a very few stars that you'd both see right against the horizon just before dawn and just after dusk. Sadly, seeing stars that are close to the horizon is tough (lots more atmospheric distortion - and too much clutter like hills and trees getting in the way).
SteveBaker (talk) 23:02, 13 January 2009 (UTC)[reply]

Here's another way to put it. Because the stars look as though they're all at the same distance, you can imagine the sky as a dome over your head. This imaginary dome is part of an imaginary sphere surrounding the whole world, and you can draw lines of "latitude" and "longitude" on it just as you can on the Earth. (Astronomers call them declination and right ascension.)

Now, if your latitude on the Earth is within 90° of a star's declination ("latitude") in the sky, then you can see it from where you are at least sometimes. So in London, at about latitude 50°N, you can see stars in the part of the sky from 40°S to 90°N. In Sydney, at about 35°S latitude, they can see stars from 90°S to 55°N. So all the stars from 40°S to 55°N can be seen at some time from both cities, but the other stars can only be seen from one or the other.

A further point is that if the star's declination ("latitude") plus your latitude (both north or both south) is greater than 90°, then the star is circumpolar, which means it never sets where you are -- it can be seen all night on every clear night. Thus in London, stars from 40°N to 90°N are circumpolar. The farther away from circumpolar a star is, the less often it can be seen. People at the equator can see the whole sky at some time or another, but they have no circumpolar stars. People at the poles can see only half the sky, but all the stars they can see are circumpolar. For others it's in between. Polaris, the North Star, is at almost 90°N and is circumpolar for the whole northern hemisphere. --Anonymous, 04:24, edited 04:29 UTC, January 14, 2009.
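The two rules above (a star is visible at least sometimes if its declination is within 90° of your latitude, and circumpolar if declination plus latitude, both in the same hemisphere, exceeds 90°) can be sketched as a small function. This is an illustrative simplification only: latitudes and declinations are signed degrees (north positive), and it ignores atmospheric refraction and anything blocking the horizon.

```python
def visibility(lat, dec):
    """Classify a star's visibility from a given latitude.

    lat: observer latitude in degrees (north positive, south negative)
    dec: star declination in degrees (same sign convention)
    """
    if abs(lat - dec) >= 90:
        # Declination more than 90 degrees from the latitude:
        # the star never rises above the horizon.
        return "never visible"
    if abs(lat + dec) >= 90:
        # Latitude and declination in the same hemisphere summing
        # past 90 degrees: the star never sets (circumpolar).
        return "circumpolar"
    return "rises and sets"

# Polaris (dec ~89.3N) from London (~51.5N) is circumpolar;
# a star at dec 60S is never visible from London at all.
print(visibility(51.5, 89.3))
print(visibility(51.5, -60.0))
```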

Stellarium_(computer_program) is a great program which will actually show you what the sky looks like from any position in the world. After you install it, you can go into the options and just click where you are on the earth, and it shows you the sky from that location. It has Mac, PC and Linux versions, and it's free. 203.110.235.129 (talk) 05:45, 15 January 2009 (UTC)[reply]

Medical question[edit]

I have not been able to find any answers to a question concerning the Autonomic nervous system.

Question---- can a compressed nerve in the thoracic area of one's back cause reflex bradycardia? There are several listings that suggest this but nothing in writing. Is it possible? —Preceding unsigned comment added by Stehawk (talkcontribs) 19:54, 13 January 2009 (UTC)[reply]

Reflex bradycardia is mediated by the vagus nerve. There are several possible causes for increased vagal activity, including pain. Axl ¤ [Talk] 20:54, 13 January 2009 (UTC)[reply]

what is the relationship between heat and information?[edit]

what is the relationship, if any, between heat and information?

(this is not a homework question! though I suspect it might be thermodynamics territory...) —Preceding unsigned comment added by 82.124.85.178 (talk) 20:01, 13 January 2009 (UTC)[reply]

I imagine there are many relationships. Here are some I can think of:
1) Heat may convey information. For example, a warm tailpipe on a car conveys the info that it was likely used recently.
2) Heat may obscure information. For example, rising heat waves in air can cause mirages or distort stars.
3) A device for recording information may generate heat, like a hard drive on a computer. StuRat (talk) 20:24, 13 January 2009 (UTC)[reply]

Maxwell's demon might be what you're looking for. --- Medical geneticist (talk) 20:27, 13 January 2009 (UTC)[reply]

I suspect Entropy (Thermodynamics) and Entropy (Information theory) may also be relevant. -- Coneslayer (talk) 20:33, 13 January 2009 (UTC)[reply]
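One concrete numerical bridge between the two kinds of entropy linked above is Landauer's principle: erasing one bit of information must dissipate at least k·T·ln 2 of heat. A minimal sketch of that bound (the constant and formula are standard physics; the function name is just illustrative):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def landauer_limit_joules(temperature_kelvin, bits=1):
    """Minimum heat dissipated when erasing `bits` bits of information
    at the given temperature, per Landauer's principle: E = bits * k * T * ln 2."""
    return bits * K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K), erasing one bit releases at least
# roughly 2.9e-21 joules of heat.
print(landauer_limit_joules(300))
```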

can an Electric shock actually throw you across the room?[edit]

Hello

I have heard many stories about lightning striking people and them being rooted to the spot, while I have heard others about people being shocked and thrown backwards (e.g. Benjamin Franklin was apparently thrown back after a shock from a Leyden jar). So my question is: can electric shocks throw someone across the room, and if so, how - a body spasm? a shockwave? something else? I would appreciate an answer, and also maybe an explanation of why extremely powerful lightning bolts only sometimes throw things around and other times not. —Preceding unsigned comment added by 79.67.141.236 (talk) 22:01, 13 January 2009 (UTC)[reply]

A powerful shock can cause abrupt muscle contraction - I suppose you could imagine situations where that would throw you across the room. SteveBaker (talk) 22:47, 13 January 2009 (UTC)[reply]
I only see two possibilities here, one of which Steve has mentioned. The other would be the electrical shock causing an explosion externally, whose blast sends you flying; however, I don't think that was your question. If you're suggesting that a shock directly to the body could send you across the room, I think the only possibility is muscle contractions, as mentioned by Steve. —Cyclonenim (talk · contribs · email) 23:53, 13 January 2009 (UTC)[reply]
I've heard that the shockwave from a lightning strike landing very close to someone can physically throw them sideways. Don't know if it is true though. Dragons flight (talk) 01:18, 14 January 2009 (UTC)[reply]
There's also the case that if you see a giant spark in front of you, you'll probably jump back pretty quickly without consciously thinking about it. I once accidentally set off a spark on a circuit and I jumped the hell back, even though I wasn't shocked myself (I also had a nice giant spark burned into my vision for about an hour). To an outsider it might have looked like I was thrown back. --98.217.8.46 (talk) 01:03, 14 January 2009 (UTC)[reply]

When I have been seriously shocked, either by high voltage DC or by AC, it has basically knocked me down where I stood, gobsmacked, with the cheek bitten through, and glad to be still alive. Being "knocked across the room" sounds like dramatic license. Edison (talk) 05:24, 14 January 2009 (UTC)[reply]

I've seen TV footage where lightning struck a football field and all the players fell over in dramatic fashion, forwards or backwards. Nothing near to being thrown any distance, though. Sandman30s (talk) 14:29, 14 January 2009 (UTC)[reply]

It's also possible for an electric shock to do quite the opposite. The muscle-contraction thing can cause someone who accidentally grasps a live wire to be unable to let go of it, because the muscles in the fingers contract and won't release again. So far from being thrown across the room - the person simply cannot move. On the couple of occasions I've had 240-volt shocks (back in the UK, where electricity is PROPER man-sized electricity - not the wussy 110v stuff we have here in the USA!) the spasm was in my arm muscles - the arm contracted (HARD!) and broke the contact before anything nastier could happen. The resulting pain was mostly due (I think) to the super-fast muscle contraction - which made the muscle ache for days afterwards. But I was lucky - I didn't even get a burn on the finger that touched the live wire... other people are not so lucky. A friend of my sister lost her husband that way when he was drilling into a wall to hang a shelf and hit a live 240-volt wire - he died within seconds. SteveBaker (talk) 15:03, 14 January 2009 (UTC)[reply]

Was he standing in water connected to ground ? StuRat (talk) 15:23, 14 January 2009 (UTC)[reply]
If he was drilling, I would suspect both hands were on the drill. The current is a lot more likely to go through the heart that way than if one hand was at the side of the body or wherever (a common suggestion when dealing with anything connected to a live supply). Nil Einne (talk) 18:57, 14 January 2009 (UTC)[reply]
I don't know the exact circumstances - and I confess it surprised me - I mean, the bodies of most drills are well-insulated plastic. You might have one of your hands contacting one of the screw heads holding the body of the drill together - but the odds of getting an across-the-chest jolt that way seem really remote. Also, you'd need to imagine that one hand was touching a properly grounded bit of metal and the other a bit of metal that's electrically connected to the drill bit and NOT to ground. It seems to me that grounding the drill bit is an elementary precaution that drill makers would aim for... but then the drill could have been faulty or something. However, even remote chances do sometimes happen. But no, he was putting up a shelf in his home - it's hard to imagine he was standing in water. SteveBaker (talk) 19:21, 14 January 2009 (UTC)[reply]
One way someone could be thrown across the room would be if he literally exploded, due to the electricity turning the water inside his body into steam. Whichever side of him split open first, the steam would escape in that direction and propel the rest of him in the other direction. I would expect such an event to kill the person instantly, although I suppose a leg could explode and the rest of him could possibly survive, if only a small portion of the electricity made it there. Such an off-center explosion would be more likely to cause him to spin than fly across the room, though. StuRat (talk) 15:23, 14 January 2009 (UTC)[reply]
No way! Think of the amount of energy required to flash-boil that much liquid - then look at how long it takes for an electric kettle to boil. Pulling that much energy all at once would blow the fuse LONG before any noticeable amount of water would boil. SteveBaker (talk) 19:21, 14 January 2009 (UTC)[reply]
I'm talking about lightning strikes here. They vary dramatically in magnitude, with the most powerful capable of exploding a person. StuRat (talk) 04:33, 15 January 2009 (UTC)[reply]
I believe DC is a greater risk for the 'unable to let go' problem. At least that's what our Electric shock article says, although looking elsewhere there appears to be some dispute [2]. I myself have been shocked by 240V before: once while doing something with an old AT computer case where the insulation on the power switch wasn't complete, and possibly once more from a soldering iron whose wire got burnt. With the AT case I too had pain, I believe resulting from the muscle contraction, although I can't remember it lasting for days. I think I might have also hit something in the case. Nil Einne (talk) 18:57, 14 January 2009 (UTC)[reply]
For me, the worst time was a set of Xmas tree lights which had one of those old-style in-cord switches. The switch was cracked and the plastic fell apart as I picked it up leaving me holding the bare (live) metal. The other time was when I was a kid wiring up my model train layout...I don't really remember how I screwed up that time - but I'm pretty sure it was my own fault. SteveBaker (talk) 19:21, 14 January 2009 (UTC)[reply]

In distinction to the unlikeliness of an electric shock's muscle contraction making you involuntarily jump across the room, there is the very real possibility of the explosive force of a lightning strike or a high-energy electrical fault throwing you some distance. I have seen lightning strike a 90-foot-tall large oak tree and throw a strip of bark 3 inches across and 2 inches deep 60 feet from the tree. I have seen a solid metal door blown off a 12 kV circuit-breaker cubicle when the breaker successfully interrupted a close-in fault. The heavy steel door was not only blown off the cubicle, it hit metal shelving and was badly bent, as if dynamite had gone off in the cubicle. This was apparently from the rapid expansion of superheated air from the arc generated. So the physical/mechanical consequences of a powerful fault might move a person across a room. Edison (talk) 21:15, 14 January 2009 (UTC)[reply]