Wikipedia:Reference desk/Science: Difference between revisions

From Wikipedia, the free encyclopedia


= June 6 =

== To charge a phone on a bicycle ==

Hi, is there any gadget I could get anywhere online (or preferably off) that will charge my phone while I pedal my bike? I remember [[bike odometer]]s that hooked a little wheel to one of the tires (therefore, called "flywheels?") in order to spin the numbers. Could that same small wheel utilize the spinning of the tires to recharge my phone?

If so, where is a device that'll do exactly that? I would hope to find one before a long bike-ride. Thanks. --[[Special:Contributions/70.179.165.67|70.179.165.67]] ([[User talk:70.179.165.67|talk]]) 04:29, 6 June 2011 (UTC)

Revision as of 04:29, 6 June 2011

June 2

Help with snake identification, peculiar feature

Hello. So today I was walking out to my boat, and I encountered a snake. I didn't see it at first...I HEARD it. I turned to find a rather large snake (perhaps 1.5-2 meters), brownish/green in color, with a kind of "blurred" diamond pattern (I don't know if "diamond" is a good description, but it had a discernible pattern). Anyway, its head was raised into the striking position, and its tail was erect and shaking vigorously, producing the "rattling" sound. My first instinct (aside from jumping ten feet) was that it was a rattlesnake. But when I moved away, the snake slithered off the rocks into the water. It didn't seem very graceful in the water, rather just sort of coiled up and floated, all the while hissing and shaking its tail. It was then, when I could get a good close look at him, that I realized the "rattling" sound had stopped. Despite continuing to shake its tail like a rattlesnake, it did not have any visible "rattles." I am very certain of this point. The sound I had heard, I gathered, was from the tail shaking amongst the nearby grass and weeds it was hiding in before going into the water. So, my question is, are there snakes that will mimic the characteristics of a rattlesnake as a defense mechanism? Is this common? Any articles you can point me to? Also, do snakes have "nests" that they defend? (This one seemed particularly protective of the area, which is why I think it moved into the water...to get away from us, but still stay near.) And can a snake strike or bite while on top of, and, separately, while swimming underneath the water? (This last part is a question my son had. We didn't see this snake actually swim "underneath" the water, but we have seen some snakes, such as water moccasins, exhibit this sort of aquatic behavior.) I am in the Southeastern U.S. near a large freshwater lake, if it matters. Sorry for the long post. Thanks, Quinn BEAUTIFUL DAY 04:20, 2 June 2011 (UTC)[reply]

Looking at Venomous snake led me to Crotalus adamanteus, the eastern diamondback, which sounds like it could be your snake, as it hangs around in marshes and can swim. It's possible it lost its rattle somewhere along the way but still "rattles" instinctively. Do the pictures in that article look anything like your snake? ←Baseball Bugs What's up, Doc? carrots 06:33, 2 June 2011 (UTC)[reply]
As for the rattling stopping when it went in the water, that sounds like what I would expect. The rattle is a series of loose scales that strike one another when shaken. If they have water between them, that would dampen the sound considerably. StuRat (talk) 07:07, 2 June 2011 (UTC)[reply]
And, in case it isn't obvious, BACK AWAY FROM THE SNAKE. Our article lists a "mortality rate of 30%" for those bitten, so don't play Russian roulette by hanging around it. StuRat (talk) 07:11, 2 June 2011 (UTC)[reply]
There are several harmless snakes that mimic the rattle of a true rattlesnake, among them the corn snake and kingsnakes. As the corn snake article mentions, they're hugely variable in colour and patterning, so it could well have been one of them. While the old saw about telling whether a snake is venomous by the shape of the head is largely bunk, rattlers do tend to have a rather broad flat head in comparison to corns and kings and also tend to have a stockier body, though that will vary more with how successful the hunting has been. Matt Deres (talk) 14:07, 2 June 2011 (UTC)[reply]
Wow, yes! I do believe it was a corn snake. Thanks! And upon reflection last night, I wonder if we had "cornered" him against the water, which would explain his aggressive behavior. Quinn BEAUTIFUL DAY 16:25, 2 June 2011 (UTC)[reply]
Many venomous snakes can indeed inject venom on and under the water. Some snakes other than rattlesnakes do shake their tails as a warning. Edison (talk) 04:13, 3 June 2011 (UTC)[reply]

List of Tropical cyclone names that were retired after only one use?

What are examples of tropical cyclone names that were retired after just a single use? This includes North Atlantic hurricanes, Western Pacific typhoons, Eastern and Central North Pacific hurricanes, South Pacific and Australian region tropical cyclones, but does not include North Indian Ocean or Southwest Indian Ocean tropical cyclones or cyclones that form between the equator and 10°S and between 141°E and 160°E in the Australian region as their names are retired after a single use anyway (although in the North Indian Ocean once all the names have been used they will create a new list while in the Southwest Indian Ocean a new name list is used every year so they are technically not retirements). Narutolovehinata5 tccsdnew 09:19, 2 June 2011 (UTC)[reply]

I can't help you with the "one use" part, but you can find an extensive list of retired cyclone names here. Looie496 (talk) 17:38, 2 June 2011 (UTC)[reply]
A lot more than I initially thought: Starting in 1954 in the North Atlantic, a new list of names was developed every year, replaced in 1960 by a 4-year cycle. Thus any retired name from 1963 and prior would have been used only once: this list is Hurricane Audrey (1957), Hurricane Connie (1955), Hurricane Carla (1961), Hurricane Diane (1955), Hurricane Donna (1960), Hurricane Gracie (1959), Hurricane Hazel (1954), Hurricane Hattie (1961), Hurricane Ione (1955), and Hurricane Janet (1955). I don't have time to look at others, but that should get you started. -RunningOnBrains(talk) 23:32, 2 June 2011 (UTC)[reply]

E. coli on your salad

How serious is it to find E. coli on your salad? (I don't mean the deadly strain, but simply ordinary E. coli.) — Preceding unsigned comment added by 80.26.37.77 (talk) 17:22, 2 June 2011 (UTC)[reply]

As for the EC alone, there would be nothing to worry about, because you already have a lot in your guts. On the other hand, when you start to wonder how it might have got there, you cannot exclude that someone or something has dumped on your salad, probably leaving more than just EC there. That is to say, it indicates some hygienic deficits. 95.112.146.231 (talk) 17:51, 2 June 2011 (UTC)[reply]
I'm sorry but that's a ridiculous thing to say. E. coli may be fine in your gut, but that in no way indicates that they're fine to chomp down with your salad. E. coli ingestion is a huge concern worldwide (our article mentions 200 million+ cases of diarrhea and 380,000 deaths a year). I don't think it's a stretch at all to say that the strong avoidance normal people have for ingesting shit comes entirely from the fact that people who eat E. coli in quantity have a tendency to get sick and die. Please don't post nonsense of that kind on a reference desk. Matt Deres (talk) 20:05, 2 June 2011 (UTC)[reply]
I don't know, maybe we don't eat crap because it tastes like crap. Googlemeister (talk) 20:27, 2 June 2011 (UTC)[reply]
No OR please. Matt Deres (talk) 13:24, 3 June 2011 (UTC)[reply]
I am sure we have an article on those who are into that sort of thing. Googlemeister (talk) 15:23, 3 June 2011 (UTC)[reply]
You wouldn't "find it on your salad" because it's a bacterium and you can't see them without a microscope. There are hundreds if not thousands of varieties of E.coli, most of which could be fatal in the right circumstances. The first you'd know about ingesting it is you'd be sick. I have to disagree with 95.112, though. Yes you do have E.coli in your guts and that's where it should stay. If it gets into the upper digestive tract you will have problems, of which vomiting is but the first sign. --TammyMoet (talk) 18:08, 2 June 2011 (UTC)[reply]

This is more about quantity and type than about presence per se. With the exception of a sterilized surgical theater, pretty much every surface in the world contains some quantity of E. coli. Problems arise when you have (a) a lot of it, or (b) a particularly pathogenic strain. Looie496 (talk) 18:23, 2 June 2011 (UTC)[reply]

It would be very ironic if the OP wrote this before the current outbreak of disease stemming from E. coli in Germany, which has led to numerous deaths and many people on dialysis machines. Germany blamed Spain, which burned vast quantities of vegetables, only to hear from Germany that the disease came from German hothouses growing bean sprouts. They are understandably very angry. But it does show that bean sprouts can be more dangerous than knuckle of pork, which must have been news to the green vegetarians who are now lying in hospitals. It is also in line with (my notion) that food production will be more and more disposed to infection by increasingly deadly diseases as it moves to hothouse and water-farm production. This type of concentrated farming is far more amenable to hosting and propagating such diseases, as the vector can hide and spread more easily, just as plagues are more prevalent in cities than they are in villages. Have a look here Myles325a (talk) 08:24, 7 June 2011 (UTC)[reply]
Waiter, waiter! What are these things jumping in my salad? These are vitamins. And why are they jumping? This is because they are so healthy! 95.112.146.231 (talk) 18:32, 2 June 2011 (UTC)[reply]
The above reminds me of the Australian (and I am one) who was in Paris for the first time and was dining in a swish restaurant there. He called the waiter over: "Garkon, garkon, come over 'ere a sec. There's a fly in my soup!" The waiter, dripping Gallic hauteur, came over and, looking down his nose, said "But, Monsieur, thet iz vot "Soup de Jour" means. It minz "Fla Soup"." The Aussie looked at it closely and then said "Well, how come there's only one of them?"

How does the poison get to the kidneys? On my understanding, EC is a normal inhabitant of the intestine and the poison is a protein. Proteins are digested, that's what the bowels are for, and this poses a major problem for the Route of administration of drugs that consist of proteins but should not be digested. If the poison was just excreted into the liquids of the gut, a drastic laxative would do the trick. Some doctors are tight, but not all, so if that would work it would be known by now. 95.112.146.231 (talk) 18:20, 2 June 2011 (UTC)[reply]

See Shiga-like toxin. The protein binds to cells in the gut lining specifically (humans, but not cows) and actually mechanically creates little invaginations, which break off inside the cell. Wnt (talk) 19:36, 2 June 2011 (UTC)[reply]
With these invaginations (never heard that word before) the toxin (or the whole of the bacteria???) arrives within the cells lining the gut. So how does the story go on? They are not yet inside the blood stream. 95.112.146.231 (talk) 20:01, 2 June 2011 (UTC)[reply]
The frequent presence of bloody diarrhea during EHEC infection is a pretty good hint that it is breaking through the bowel barrier in a way that is not typical. Dragons flight (talk) 19:47, 2 June 2011 (UTC)[reply]
That thought is exactly mine, this is why I asked that question. 95.112.146.231 (talk) 20:01, 2 June 2011 (UTC)[reply]
Well, the toxin doesn't always get to the kidneys - only in about 10% of cases of bloody diarrhea from EHEC (as I've recently added to the article). According to [2] (a site about an interesting drug to stop the damage) the kidneys and intestine both have the same Gb3 glycolipid receptor for the toxin; the kidneys thus are affected if it gets to the bloodstream. I'm at the moment still a bit hazy on how it gets from the affected intestinal cells to the bloodstream, but as we're speaking of dying cells and widespread bleeding it doesn't seem entirely implausible. Ah, sounds like it is transcellular transcytosis: [3] [4] There's even a drug latrunculin B that can increase the amount transcytosed, though I think one to decrease that amount would be more useful! Wnt (talk) 20:52, 2 June 2011 (UTC)[reply]
If the toxin is transported inside the cell by Receptor-mediated endocytosis (and for any odd reason not transported on into a lysosome), why shouldn't it be possible to saturate the receptors in question with whatever non-toxic proteins they transport normally? 95.112.137.166 (talk) 11:52, 3 June 2011 (UTC)[reply]
Well, the toxin gets in by binding the glycolipid in the cell membrane and physically creating tubules. The link about the drug I linked above talked about saturating the Shiga-like toxin to compete with the glycolipid, rather than the other way around. It's easier to make a harmless high affinity drug that targets the toxin, and there should be less toxin to saturate. Wnt (talk) 03:25, 4 June 2011 (UTC)[reply]
Ahem, except actually the other way around does work: [5] ref 4. This paper also suggests that people with Fabry disease may be resistant to the harmful effects of EHEC, perhaps because the Gb3 in other tissues soaks up the toxin, so it doesn't all hammer down on just a few tissues. Wnt (talk) 23:32, 4 June 2011 (UTC)[reply]

Immune system response to vaccine

When a vaccine is administered and successfully confers resistance, how is the pathogenic material "remembered" by the immune system? I've read Vaccine#Developing_immunity, which doesn't offer any explanation. My understanding is that the vaccine induces production of specific antibodies, but my question is more about the mechanism. Specifically, are different genes being expressed after vaccination? If so, is it fair to say that (some) vaccines trigger epigenetic effects? (I am not seeking any form of medical advice.) SemanticMantis (talk) 18:38, 2 June 2011 (UTC)[reply]

As far as I remember, some kind of leukocytes are actively mutating the genes expressing their antibodies. Those with "matching" antibodies are selected to survive. I would greatly appreciate it if someone closer to the subject could check whether what I remember is correct and give more details. 95.112.146.231 (talk) 19:02, 2 June 2011 (UTC)[reply]
Active_immunity#Immunological_memory is the most detailed article I can find. It doesn't say much about the mechanism of the memory (how the immune system knows which patterns should be remembered). DMacks (talk) 19:09, 2 June 2011 (UTC)[reply]
The immune system contains a vast number of B cells which randomly produce different antibodies. Those that recognize "self" are deleted, but those that recognize specific pathogens are expanded exponentially. Vaccines use antigenic parts of viruses in combination with an adjuvant to make something the immune system will respond to by amplifying up the appropriate B cells, but it is still a difficult thing to do, and doesn't always work in every vaccinated person. (I was just reading that flu vaccine only produces immunity 70% of the time...) The B cells do the immune system's "R&D", and when activated they produce a large clone of cells which includes "large-scale manufacturing" called plasma cells; some are also set aside as memory B cells, a sort of "information archive". Wnt (talk) 19:46, 2 June 2011 (UTC)[reply]
Thanks, this is how I see it now: after e.g. a successful chicken pox vaccination, the immune system has developed a type of B cell that can make antibodies that will bind to active chicken pox viruses. These antibodies and B cells are novel to that individual, because they were not produced prior to vaccination. Some memory B cells persist, and so are ready to rapidly deploy that antibody should the pathogen be seen again. As for the crux of my question: because the original novel B cells were produced via mutation, it is not appropriate to say this is an epigenetic effect, because the new B cells have novel DNA, not just different expression of the genes that make the antibodies --Is that basically correct? SemanticMantis (talk) 20:07, 2 June 2011 (UTC)[reply]
Well, so far as I know is known, the body doesn't really have the ability to respond to a new antigen by mutating the DNA (via V(D)J recombination) in just the right way to make the perfect antibody to match an antigen. Rather, a vast number of B cells already exist with all sorts of different forms of the gene for the variable region of the immunoglobulin. One simply hopes that among all these cells, one can be found which produces an antibody that sticks; this one then gives rise to a vast horde. As a result, while the triumph of this clone can be seen as a special case of natural selection, what is changed is not actually the DNA sequences present but only the number of cells expressing one of them. Wnt (talk) 20:32, 2 June 2011 (UTC)[reply]
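The selection-then-expansion process described above can be caricatured in a few lines of Python (a toy illustration of my own, not from any of the posts; all names and numbers are invented): a pre-existing random repertoire of "receptors" is scored against an antigen, and the best existing match is expanded. No new sequence is designed on demand; only the copy number of an already-present clone changes.

```python
import random

random.seed(0)
ALPHABET = "ACGT"
antigen = "ACGTACGT"  # made-up stand-in for an antigenic pattern

def affinity(receptor, target):
    """Crude match score: number of agreeing positions."""
    return sum(a == b for a, b in zip(receptor, target))

# Stands in for the pre-existing V(D)J-generated diversity of naive B cells.
repertoire = ["".join(random.choice(ALPHABET) for _ in range(8))
              for _ in range(1000)]

# Clonal selection: pick the best pre-existing binder...
best = max(repertoire, key=lambda r: affinity(r, antigen))
# ...and expand it. The DNA sequence is unchanged; only copy number grows.
clone = [best] * 10_000
```

This deliberately omits affinity maturation (the later somatic hypermutation step discussed below in the thread); it only models the initial "hope one of the random cells sticks" stage.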
Immunological long-term memory is believed to be maintained by memory T cells, not by B cells. I'll state that without any specific references; just google "memory cells", and you'll get the point. B-cell memory is short-lived in the absence of antigen. It is true that new B cell clones arise all the time through the random process of V(D)J recombination. Some of these are autoreactive and potentially harmful; some react to pathogens that happen to be around, and are potentially beneficial. However, to get any further, the B cells need confirmation from helper T cells that it is OK to multiply and to launch an attack against whatever their antibody is directed at. It is within the T cell population that self-tolerance resides, and it is also there that memory is preserved. After B cells receive the go-ahead signal from helper T cells, they go through a process of affinity maturation, i.e. somatic mutations of their antigen receptor (=antibody). This is a further genetic alteration of the receptor - after the VDJ rearrangement. Affinity maturation is a feature of B cells alone; nothing similar happens in T cells. --NorwegianBlue talk 20:54, 2 June 2011 (UTC)[reply]
I'm not sure how to respond to the idea that memory B cells would die off without antigen - clearly they are maintained in humans for far longer. I did omit talking about the role of T cells in the process; however, my impression is that memory T cells are involved in maintaining memory of cell-mediated immunity rather than antibody immunity.
I should also note that what I said before about the lack of directed mutation is starting to show a few cracks: according to [6] a B cell undergoes hypermutation when antigen and CD40 and CD38 ligands are present in the germinal center. They looked at the pattern of mutations, and while there wasn't much to write home about, I think the fact that three different signals converge to trigger mutations would seem to allow for the possibility that something about the antigen or cytokine environment of the infection could hint to the hypermutation machinery about what type or location of mutation would be best. (The authors do not say that - even after all these years Lamarckism suffers a bad reputation. ;) Likewise they didn't actually show stepwise mutation of the B cells to account for the improvement of antibody affinity that occurs during the immune response, but they said they thought they might see it with technical improvements. So the underlying question there I think remains open, though proving directed mutation or evolution in the course of developing the perfect antibody is probably nearly as hard as for nature to implement it. Wnt (talk) 22:38, 2 June 2011 (UTC)[reply]

gravity

If the gravitational "constant" changes would this reveal that the force of gravity is growing stronger in proportion to the product of 2 masses divided by the distance between them squared by indicating that the distance between masses is growing smaller or that mass is increasing in value or both? I am asking because the implication of this circumstance is that space is not expanding but that matter is simply increasing in density. Please correct me (after some thought) if you feel my hypothesis is wrong. --DeeperQA (talk) 18:47, 2 June 2011 (UTC)[reply]

I'm having difficulty trying to parse your first sentence. I generally overlook simple grammar errors in questions, but in this case I can't figure out what it is that you're intending your first sentence to mean. Could you perhaps rephrase it? Red Act (talk) 19:13, 2 June 2011 (UTC)[reply]
Geez... I am having difficulty myself... Sorry for not double checking the grammar after doing some inline editing while posting.. --DeeperQA (talk) 19:28, 2 June 2011 (UTC)[reply]
Newton's law of universal gravitation is

F = G m1 m2 / r²,

where:

  • F is the force between the masses,
  • G is the gravitational constant,
  • m1 is the first mass,
  • m2 is the second mass, and
  • r is the distance between the masses.

You observe that the attractive force F can be changed by changing either of the masses or their separation r. The only way to show which of these has changed is by measuring it separately. The value of G can be measured and is as far as we know a constant. Over 200 years ago the first measurement of G found a value within 1% of today's value. Have you read the article Gravitation which notes present theories about gravity? Cuddlyable3 (talk) 19:20, 2 June 2011 (UTC)[reply]
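For concreteness, the formula above can be evaluated directly (a minimal sketch of my own, using an approximate modern value of G; the masses and distance are arbitrary example numbers):

```python
# Evaluating F = G * m1 * m2 / r**2.

G = 6.674e-11  # gravitational constant, N m^2 / kg^2 (approximate modern value)

def gravitational_force(m1, m2, r):
    """Attractive force in newtons between point masses m1, m2 (kg) separated by r (m)."""
    return G * m1 * m2 / r ** 2

# Two 1000 kg masses one metre apart attract with only ~6.7e-5 N,
# which illustrates why G is so hard to measure precisely.
f = gravitational_force(1000, 1000, 1.0)
```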

Not recently but I will read it again... The time scale here I have in mind would be in billions of years rather than only 200 but I guess we are stuck with only that... --DeeperQA (talk) 19:33, 2 June 2011 (UTC)[reply]
I think the OP is curious about the difference between:
  • changing the value of G, the gravitational constant - which would indicate that the "rules of physics" are changing (since G is a fundamental physical constant); as opposed to
  • changing the units by scaling the way we measure mass, relative to force - see Natural units to understand why this is not an issue. It doesn't matter if we change units; the relevant, empirically observed laws of physics will still be valid.
This stuff can be a little bit mind-bending, so read the natural units article a few times if you lose your way through there. Nimur (talk) 21:02, 2 June 2011 (UTC)[reply]
Thanks for your comment. My actual interest is in whether space is expanding and, if not, what the alternative explanation for the increasing distance between centers of mass (black holes) might really be. While the decreasing size of a galaxy is perhaps explainable by increased density due to a change in the gravitational constant (making it a variable rather than a constant), this does not explain the increasing distance between the centers of mass themselves. On the other hand, if the gravitational "constant" is changing in the opposite direction, then this would explain both the increasing distance between centers of mass and perhaps an increased distance between centers of mass and the objects surrounding them.
Bottom line is that, in the absence of supercomputer simulation, the earliest possible time we could have enough data to determine whether the gravitational constant is variable is around the last strike of the clock known as the Long Now. --DeeperQA (talk) 17:21, 3 June 2011 (UTC)[reply]

Diamonds

How do they tell between a diamond that is naturally mined (and cleaned and cut) and one that is made in a laboratory? (the latter with the intent of making jewellery/resembling a natural diamond, of course) They are chemically identical, aren't they? 72.128.95.0 (talk) 19:47, 2 June 2011 (UTC)[reply]

In the past I've read that the synthetic manufacturers were pressured into adding ultraviolet dyes or other markings to avoid angering the cartel. I don't know if that's still the case. Also Synthetic diamond mentions that laser-inscribed serial numbers have now become widespread on the mined stones. Wnt (talk) 19:49, 2 June 2011 (UTC)[reply]
An indication (though not conclusive) is that synthetic diamonds are more likely to be free of impurities and flaws, but once you get cleaned and cut, then this distinction is less obvious. Googlemeister (talk) 20:15, 2 June 2011 (UTC)[reply]
Of course, they could also add impurities and flaws to the synthetic diamonds, to better mimic mined diamonds. I have to think it's just a matter of time until diamond prices collapse, when lab diamonds flood the market. StuRat (talk) 20:43, 2 June 2011 (UTC)[reply]
Sigh. I've been thinking the same thing myself, for at least 20 years. But the nature of wealth is not tangible or sensible; it is a mass hallucination. The difference between a 100 million dollar painting and a worthless beginner's fumble, the difference between a Hollywood diva and a dangerous homeless man with a criminal record on the street, a $100,000 dress and a bargain rack purchase, a million dollar webpage and a worthless scrap of spam - all imaginary. Humanity is mad, and it savages its "poor" based on pure delusion. Wnt (talk) 22:44, 2 June 2011 (UTC)[reply]
[7] [8]. Nowadays natural diamonds (as mentioned in Synthetic diamond) and, I think, most synthetic diamonds sold for the jewelry market tend to have serial numbers. And the Kimberley Process Certification Scheme, even if it's perhaps not too successful at stopping conflict diamonds, probably helps stop large quantities of synthetic diamonds winding up as natural ones. Nil Einne (talk) 04:02, 3 June 2011 (UTC)[reply]
Asking myself: "Why would that be of any interest?", I guess (but might be wrong) that you are concerned about US debts and/or the Euro crisis but don't want to invest in rubles or renminbi yuan because of the political dependence of these currencies, nor into already overrun "small market" currencies such as the Swiss franc or the Norwegian krone, nor into natural-resource-based currencies like the Canadian or Australian dollar (prone to collapse in the economic slide to follow the financial crash), and you clearly see that gold is already dangerously high priced. If so, please ask yourself: who would re-buy diamonds, artificial or not, if the global currency system broke down? If I had the money, I would invest in an electricity generator with a Stirling motor, some packs of dried rice (which I already have) and some extra tins of food (accumulating). 95.112.146.231 (talk) 20:37, 2 June 2011 (UTC)[reply]
And the guy next door could invest in a shotgun and some ammunition, and thus get his shotgun as well as a generator, a Stirling motor, and some food. Googlemeister (talk) 20:48, 2 June 2011 (UTC)[reply]
Why did you have to give me away? And I would have got on the liver, too. 95.112.146.231 (talk) 21:05, 2 June 2011 (UTC)[reply]
To 95.112.146.231: your generator would be useless if the market collapses because you won't be able to buy gas for it, whereas my precious diamonds and gold will still appeal to the woman, and that's more important than the food. – b_jonas 15:20, 5 June 2011 (UTC) [reply]
See Stirling motor: it can run on sunlight. IdreamofJeanie (talk) 13:39, 6 June 2011 (UTC)[reply]

Cutting a moving object in half

Hello.

Let's imagine we have a 100 kg rock hurtling through a vacuum with a kinetic energy X. Suddenly, a high-power laser appears and dissects the rock into two different parts. Both parts are exactly the same size (volume) but, due to density differences, part 1 carries 60% of the mass of the original rock and part 2 carries only 40%.

How will the original kinetic energy X of the primordial rock be distributed? Will the two child rocks receive 50% each, or will they receive 60% and 40% respectively? I.e., will the kinetic energy be redistributed according to mass or to volume, or otherwise?

Thanks in advance. Leptictidium (mt) 20:02, 2 June 2011 (UTC)[reply]

Let me ask you this: where in the equation that calculates kinetic energy does volume fit? Googlemeister (talk) 20:13, 2 June 2011 (UTC)[reply]
Kinetic energy is proportional to mass. Unless the cutting process causes different parts of the mass to exit with different velocities (i.e., an "inelastic collision with a laser beam"), then the kinetic energy does not change; and each sub-element's kinetic energy can be calculated as always, KE_i = ½ m_i v_i², for the i-th particle. Nimur (talk) 20:21, 2 June 2011 (UTC)[reply]
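A minimal sketch of this point (my own illustration, using the question's 100 kg rock and an arbitrary example speed): if the laser imparts no impulse, both fragments keep the original velocity, so the kinetic energy divides 60/40 in proportion to mass, not 50/50 by volume.

```python
def kinetic_energy(m, v):
    """Kinetic energy in joules for mass m (kg) at speed v (m/s)."""
    return 0.5 * m * v ** 2

v = 10.0          # m/s, shared by the rock and both fragments (example value)
m_total = 100.0   # kg, from the question
m1, m2 = 60.0, 40.0

ke_total = kinetic_energy(m_total, v)
ke1 = kinetic_energy(m1, v)
ke2 = kinetic_energy(m2, v)

# ke1 is 60% of ke_total and ke2 is 40%; together they equal the original X.
```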

This question is reminiscent of a statement I heard from (believe it or not) an executive in a major satellite communications company. He maintained that the orbiting height of a satellite depends on its weight. I asked him to consider a satellite in stable orbit that suddenly breaks into two unequal parts. Do the two parts move apart and take up orbits at different heights? Cuddlyable3 (talk) 21:33, 2 June 2011 (UTC)[reply]

Very likely, yes: the two halves of the satellite are almost certain to have centers of gravity that differ from the CG of the satellite as a whole, which will put them in (slightly) different orbits than the original satellite: one further out than the original, and one further in. --Carnildo (talk) 01:40, 4 June 2011 (UTC)[reply]
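The executive's claim can be checked with the circular-orbit balance G M m / r² = m v² / r: the satellite's own mass m cancels, so the speed at a given radius is v = sqrt(G M / r) regardless of the satellite's weight. A sketch of my own (constants and the example radius are approximate illustrative values):

```python
import math

G = 6.674e-11       # gravitational constant, N m^2 / kg^2 (approximate)
M_EARTH = 5.972e24  # mass of the Earth, kg (approximate)

def orbital_speed(r):
    """Circular-orbit speed (m/s) at radius r (m) from Earth's centre.

    Note the satellite's own mass does not appear: it cancels out of
    G*M*m/r**2 = m*v**2/r, so a 1000 kg satellite and a 100 kg fragment
    orbit at the same speed at the same radius.
    """
    return math.sqrt(G * M_EARTH / r)

v = orbital_speed(7.0e6)  # ~7.5 km/s at roughly 630 km altitude
```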

About dividing any moving object into two unequal pieces: the mass divides between the two pieces. If the initial momentum were divided equally, the velocities would then differ; the lighter piece would have the greater velocity, so its difference from the initial velocity, (V2-V1)×(V2-V1), would be greater, and the little one would have more kinetic energy.

A. Mohammadzade — Preceding unsigned comment added by 81.12.40.120 (talk) 07:27, 3 June 2011 (UTC)[reply]

Generally, it will depend on the mass difference and the initial velocity. It may be that (M-m) matters more than (V2-V1) squared. --81.12.40.120 (talk) 07:35, 3 June 2011 (UTC)[reply]

Excuse me, I am having difficulty editing this page properly. --81.12.40.120 (talk) 07:39, 3 June 2011 (UTC)[reply]

No, momentum will be divided in proportion to the masses (not volumes), and velocities will remain equal unless there is some explosive force between the fragments, so kinetic energy will be proportional to mass, as explained above. Dbfirs 22:29, 3 June 2011 (UTC)[reply]
Please forgive me my second reference on the same day to Jules Verne novels from this desk. Off on a Comet, book 2 chapter 17 talks about such a situation. (Well, the rock is a bit heavier and is split by seismic activity instead of a laser.) – b_jonas 15:11, 5 June 2011 (UTC)[reply]

We can suppose the two masses (M and m) initially move together at the same velocity in the system's frame. Then the law of conservation of momentum says MV1=mV2, so V2/V1=M/m and (V2-V1)/V1=(M-m)/m. --78.38.28.3 (talk) 03:13, 7 June 2011 (UTC)
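Dbfirs's point above — that with no explosive impulse the fragments keep their common velocity, so both momentum and kinetic energy simply divide in proportion to mass — can be checked with a few lines of arithmetic. This is an illustrative sketch with made-up fragment masses and an assumed orbital speed, not figures from the thread:

```python
# A satellite of total mass M + m splits into two pieces with no
# explosive impulse, so both pieces keep the original velocity v.
M, m = 800.0, 200.0   # kg, hypothetical fragment masses
v = 7500.0            # m/s, hypothetical shared orbital speed

p_total = (M + m) * v
p_M, p_m = M * v, m * v                    # momentum divides by mass
ke_M, ke_m = 0.5 * M * v**2, 0.5 * m * v**2  # so does kinetic energy

assert abs(p_total - (p_M + p_m)) < 1e-6   # momentum is conserved
print(p_M / p_m)    # 4.0 — the heavier piece carries 4x the momentum
print(ke_M / ke_m)  # 4.0 — and 4x the kinetic energy
```

Since the velocities are equal, both ratios are just M/m; neither piece ends up "faster" unless something pushes them apart.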

How much butter

How much butter a day could a farmer in the 1870s US have made from 1 cow? I assume that there is a pretty decent range based on cow breed, and there would be a day to day variance, but I am just looking for a ballpark estimate. Googlemeister (talk) 20:30, 2 June 2011 (UTC)[reply]

According to Dairy cattle, "production below 12 to 15 liters of milk per day is not economically viable", and according to this site (linked from Churning (butter)), "it takes 21 pounds of fresh, wholesome cow’s milk to make each pound of butter." So, 12 litres is about 25 pounds of milk, and thus will produce about 19 oz of butter. Tevildo (talk) 23:55, 2 June 2011 (UTC)[reply]
But only if the milk is "wholesome." I wonder in what scientific units "wholesomeness" is measured :-) . {The poster formerly known as 87.81.230.195} 90.201.110.199 (talk) 08:04, 3 June 2011 (UTC)[reply]
Cells / ml, as it happens. Tevildo (talk) 09:55, 3 June 2011 (UTC)[reply]
Yield will be much different historically. Texas lists the average going from 2,940 lbs per cow per year in 1928 to 20,900 in 2009.[9] About a seven-fold increase, and we still need data from 60 years earlier. 75.41.110.200 (talk) 14:22, 3 June 2011 (UTC)[reply]
The definition of economically viable would be very different too I presume, especially for a farm where dairy output is for internal consumption rather than sale. Googlemeister (talk) 15:21, 3 June 2011 (UTC)[reply]
The statement concerning economic viability in our Dairy cattle article refers to current conditions and has no implications for conditions in the 1870s. For example, modern dairies mostly use Holstein cattle, but a farm in the 1870s would have been more likely to use a different breed, such as Jerseys, which produce a high quality of milk, or Guernseys, which tend to be gentle cows easy to work with. The Texas data linked above is probably a better guide. Note that it shows an average of about 3000 pounds per year until about the middle of the 20th century, when per-cow productivity began a rise that continues today. Using Tevildo's link, that implies something like 143 pounds of butter per year. Note that this does not mean 143/365 pounds per day. Cows produce milk only some of the time, and when they do produce milk the amount varies. So 19 ounces of butter is probably not out of the question in the 1870s, but that would have been a very good day, or a cow that (at that time) produced an unusually large amount of milk. John M Baker (talk) 18:41, 3 June 2011 (UTC)[reply]
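The unit conversions in this thread are easy to mis-handle, so here is a short Python sketch that reproduces the figures quoted above (21 lb of milk per lb of butter, 12 litres ≈ 25 lb of milk per day, and the ~3000 lb-per-year historical Texas average):

```python
# Butter-yield arithmetic from the figures quoted in the thread.
LB_MILK_PER_LB_BUTTER = 21.0   # "21 pounds of fresh, wholesome cow's milk"

daily_milk_lb = 25.0           # 12 litres of milk is about 25 lb
daily_butter_oz = daily_milk_lb / LB_MILK_PER_LB_BUTTER * 16
print(round(daily_butter_oz))  # 19 — oz of butter on a good modern day

annual_milk_lb = 3000.0        # approximate pre-1950 Texas average per cow
annual_butter_lb = annual_milk_lb / LB_MILK_PER_LB_BUTTER
print(round(annual_butter_lb)) # 143 — lb of butter per cow per year
```

As John M Baker notes, the annual figure cannot simply be divided by 365, since cows are not in milk year-round and daily output varies.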

You can not make butter from cows. μηδείς (talk) 01:20, 4 June 2011 (UTC)[reply]

ROFL! AndyTheGrump (talk) 01:52, 4 June 2011 (UTC)[reply]
Butter was sometimes diluted with tallow by unscrupulous farmers and merchants, so you could make some "butter" from a cow, loosely speaking.Edison (talk) 20:50, 4 June 2011 (UTC)[reply]

Fractal antenna for TV ?

The log-periodic antenna has existed for decades, but are any other fractal designs in use for TV reception ? If so, how do they compare, performance-wise, with other types ? In particular, I'd like an omni-directional TV antenna (or perhaps a bi-directional antenna with two wide lobes, so that two such antennae would give me full coverage). StuRat (talk) 21:09, 2 June 2011 (UTC)[reply]

An omni-directional antenna cannot discriminate between signal in line-of-sight from the transmitter and reflected signals that show as Ghosting (television). A directional antenna has gain in the preferred direction, which is needed to receive weak signals and reject interference arriving from other directions. Antenna gain (see article) is the main performance measure of an antenna and for the VHF and UHF frequencies used for TV the basic Yagi-Uda antenna wins on gain per weight of metal and on simplicity. Cuddlyable3 (talk) 21:23, 2 June 2011 (UTC)[reply]

The easiest, simplest, most "populistic" way to prove to a person the earth is older than 6000 years old?

thanks. 109.67.42.106 (talk) 22:39, 2 June 2011 (UTC)[reply]

Varves. Even an idiot can understand them, and the varves in the Green River Formation are 6 million layers, i.e. 6 million years, deep. Red Act (talk) 22:50, 2 June 2011 (UTC)[reply]
Varves are good, as are ice layers. Dendrochronology is as well. I tend to stay clear of fossils because most people don't really understand the topics well enough to be convinced -- biology (bone physiology), chemistry (radioisotopes) or physics (etc.). DRosenbach (Talk | Contribs) 13:09, 3 June 2011 (UTC)[reply]
Dinosaurs. - DSachan (talk) 23:08, 2 June 2011 (UTC)[reply]
  • Hmmm, I thought that God created the Earth with varves, Dinosaur fossils etc. 6000 years ago :) . That's why I accept that I can't prove that the Earth isn't 6000 years old, but I then also say that they can't prove that the Earth wasn't created 5 minutes ago. Count Iblis (talk) 23:09, 2 June 2011 (UTC)[reply]
  • I'd opt for Dendrochronology. We have about 11000 years of fully anchored series of tree rings. And they match the radiocarbon date. That said, science does not provide absolute proof, and it's very hard to convince people that have non-scientific reasons for believing 6000 years. After all, god can grow two tree rings per day, or a varve every minute, if she exists. --Stephan Schulz (talk) 23:11, 2 June 2011 (UTC)[reply]
(edit conflict)Fossil fuels need millions (or at the very least hundreds of thousands) of years to form. However, I feel any explanation simple enough to be understood by someone who believes in Young Earth Creationism is going to fall victim to the "God put it there" argument. If someone doesn't believe in science you can't use scientific arguments on them. Really, the best you can hope to do is just stick to patience and explaining how everything we use every day, from phones to cars to computers to the foods that we buy, would not be possible if science was wrong; and science and Young Earth Creationism can't both be right. -RunningOnBrains(talk) 23:12, 2 June 2011 (UTC)[reply]
Looking at all the chaos in my room, I can't believe (now that I'm sober) that I should have done that. So the only explanation to me is that the world as such was created no longer than 20 hours ago and what I see are the remnants of primordial chaos. To be more serious: if someone believes in some almighty force that can make us believe anything, any reasoning is void. 95.112.146.231 (talk) 23:23, 2 June 2011 (UTC)[reply]
Following on from what others have said, you might be on more productive ground trying to convince them that the Bible isn't literally true (which will be the main basis for their belief). The difference between Genesis 1 (where the sequence is plants (Gen 1:11), land animals (Gen 1:25) and humans (male and female - Gen 1:27)), and Genesis 2 (where the sequence is Adam (Gen 2:7), plants (Gen 2:9), land animals (Gen 2:19) and Eve (Gen 2:22)) is a useful start. Tevildo (talk) 23:41, 2 June 2011 (UTC)[reply]
Once upon a time I met someone who asserted that the Bible was "literally true" and then went on to explain how it was all "entirely consistent" with science, e.g. he accepted old earth, radiocarbon dating, and everything. Sadly, I don't remember the interpretation he was using. However, by starting from the position that the Bible is "true" he had rather more success getting skeptics to listen to him and think critically about his arguments than someone might have had if they approached it from a purely scientific angle. Dragons flight (talk) 23:58, 2 June 2011 (UTC)[reply]
If someone reconciled the Bible to science, then they can't take it as being literally true (i.e., creation in 6 x 24 hour days). AFAIK reconciling requires taking the Bible metaphorically in order to line up Creation to what science tells us of the Earth and the universe. PЄTЄRS J VTALK 00:30, 3 June 2011 (UTC)[reply]
I admit that it seems contradictory to believe in both a "literally true" Bible and in science, but that is how he described his beliefs to other people, and it seemed to be rather effective to discuss it that way. Dragons flight (talk) 01:13, 3 June 2011 (UTC)[reply]
Old Earth creationism. For example nowhere does it say that a 'day' was 24 hrs/86400 seconds while god was doing the creating, so make that little terminological twist and you've got as much time to play with as you like. If you believe hard enough, you can make any fact fit your beliefs. --jjron (talk) 14:19, 3 June 2011 (UTC)[reply]
Africa and South America look like they fit together (even children notice this), and they are moving apart at a rate of 2.5 cm / yr. They are approximately 5000 km apart today so it would take about 200 million years to get that way. (The actual separation occurred 120 million years ago, so it isn't a terrible estimate.) Of course, like most arguments for an old earth, one can invoke any number of supernatural explanations to counter this, but if someone is actually open-minded (rather than simply dogmatic) then they might realize that all the special pleadings sound silly after a while. Dragons flight (talk) 23:47, 2 June 2011 (UTC)[reply]
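Dragons flight's back-of-envelope estimate checks out; the only work is keeping the units straight:

```python
# Age estimate for the Atlantic from its width and the present-day
# spreading rate quoted above.
separation_km = 5000.0     # approximate present separation
rate_cm_per_yr = 2.5       # approximate spreading rate

separation_cm = separation_km * 1e5      # 1 km = 100,000 cm
age_myr = separation_cm / rate_cm_per_yr / 1e6
print(age_myr)   # 200.0 — about 200 million years
```

Compared with the actual ~120-million-year-old opening of the South Atlantic, the estimate is within a factor of two, which is exactly the point: no plausible rate gets you that ocean in 6000 years.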
Kuhn taught us that the paradigm of science is also dogma. Don't forget it. — Preceding unsigned comment added by 129.67.39.207 (talk) 23:52, 2 June 2011 (UTC)[reply]
Absolutely anything you claim is easily countered with "God made it look like that when he created Earth." The only way to counter that argument is the assertion that God is not capable of creating a planet that is older than 6,000 years. Then, the other person is left either agreeing that God is not all-mighty or that it is possible for the Earth to be older than 6,000 years. Similarly, you can claim that it is impossible for God to create a Bible that is not literally true. That forces the other person to argue that God is capable of creating a Bible that is not literally true. As with just about anything, the strength of the creationist/young Earth argument is that God is all-mighty. That is also the weakness of that argument. -- kainaw 01:02, 3 June 2011 (UTC)[reply]

I like to conduct the thought experiment of imagining a child who was exposed to no religion while growing up, but only given the best scientific explanations for everything, along with the fact that science is always seeking better explanations of everything. Would that child invent religion for themselves? HiLo48 (talk) 01:06, 3 June 2011 (UTC)[reply]

The Emperor Commodus will always be the definitive counter-example to any thesis that education (alone) can produce morally admirable behaviour. Whether atheism is morally admirable is another point altogether. :) Tevildo (talk) 01:24, 3 June 2011 (UTC)[reply]
The age of the Earth is not necessarily linked to the age of the rest of the universe, but some grasp of the age of the universe might challenge the thinking of a person contemplating signing-up to the 6000 year-old model. Astronomers have decided the nearest large galaxy cluster, the Virgo Cluster, is about 59 million light-years away. See Light-year#Distances in light-years. That means the light from the Virgo Cluster that can currently be observed on Earth started its journey 59 million years ago. However, be aware that the creation scientists are likely to respond to this by postulating that the speed of light is much slower than it used to be in the good old days before there was so much sin about. Dolphin (t) 01:32, 3 June 2011 (UTC)[reply]
I would just note that the "God put it there to fool us" argument assumes a God who would willfully deceive us in a terribly contrived way. There is really no evidence for such a God in the Bible — the God of the OT and NT certainly does a lot of strange things, but elaborate practical jokes on all of humanity aren't among them. The closest analog I can think of is Job, but that was sort of a one-off case of God being a huge jerk just to see what would happen. --Mr.98 (talk) 01:34, 3 June 2011 (UTC)[reply]
A similar discussion occurred at Wikipedia:Reference_desk/Archives/Science/2010 August 21#Noah's Ark. I find it appealing to consider (as a model) that there might be two orthogonal dimensions of time, namely time as we experience it, in which the events of the universe play out over time with apparent causality, and time as God experiences it, in which whole universes can be seen as sculptures in spacetime, and one is replaced by another and another. Thus the first day, in the divine dimension of time, sees the creation of a whole universe that lives and dies as shades of light, and the fourth one where plants but not animals are permitted by the laws of physics. These universes might either be considered as standalone sculptures, or as a sort of continuum in the second temporal dimension. Ideas about the waters of Lethe, for example, can be rephrased in the sense that one universe in which bad things happened will be replaced by another where they cannot.
Now of course none of this is proof, but the point is, there is room in the scientific philosophy to allow for a God who is truly omnipotent. There is no need to reconcile the biblical notes on the book jacket about the writing of the story with the actual plot of the story of this universe itself. We can all get along.
Last but not least we should note that some models, which pixelize space or otherwise suggest that there is a finite amount of information in the universe, mean that the entire universe at a moment can be expressed as a single (long!) rational number, which expresses everything that is to be known about it. The laws of physics as we experience them are just one particular mathematical series of these numbers, which progresses according to some set of rules to create - somehow - a sense of a temporal series of events that we can experience. But everything that could possibly exist, could possibly be seen or felt or heard within our laws of physics, exists as other numbers. And there could be other physical laws, other mathematical series, which link those together into other possible narratives. The question of what is real, what is miraculous - it goes deeper into philosophy than I can fathom. There is room for anything of any size there, even God. Wnt (talk) 01:53, 3 June 2011 (UTC)[reply]

There are plenty of scientists who are creationists. Not the most eminent scientifically, but an interesting one because he writes about creation as a scientist, is Nathan Aviezer. --Dweller (talk) 13:19, 3 June 2011 (UTC)[reply]

I'll cite again here what a friend, who is both a scientist and a Christian, has to say about this: (1) God does not deceive; and (2) Evolution is how God works. ←Baseball Bugs What's up, Doc? carrots15:04, 3 June 2011 (UTC)[reply]
But if that's the Christian God, he has allowed the creation of the Bible, which describes something very different from evolution as the model for creation. Isn't that being deceptive? HiLo48 (talk) 19:02, 3 June 2011 (UTC)[reply]
God gave us free will. Don't blame God for what humans choose to do. ←Baseball Bugs What's up, Doc? carrots19:04, 3 June 2011 (UTC)[reply]
Evolution gave us both inquisitiveness and imagination. The need to provide explanations led man to imagine God. Delusions of God lead some people to discount evolution. Therefore evolution is deceptive.  ;-) Dragons flight (talk) 19:16, 3 June 2011 (UTC)[reply]
Which is still consistent with "God putting it there". If I were God, I could have used the laws of physics to create the universe 13.7 billion years after the big bang, instead of right at the big bang. The two universes are related by a time evolution operator, so there is no a priori scientific reason to believe that if one scenario is possible, the other isn't. So, if you believe in creation in the Biblical sense, you can just as well believe that God created the universe 5 minutes ago. From a theological POV, the latter possibility is better, because terrible things like WWII wouldn't really have happened, so it would address the criticism that God let terrible things happen. Count Iblis (talk) 16:48, 3 June 2011 (UTC)[reply]
Here we get into fine details of omnipotence and scientific knowledge. Consider for example the classic Torah story, in which God can create anything there is, and quickly, but not instantly, and not without effort. There were days of labor involved. Now if you assume this is true for a moment, and that God created the laws of physics and the phenomenon of evolution, then it implies there was a time before the cosmological constant or the idea of natural selection existed. We do not see such a time by natural science, not even if we go all the way back to the Big Bang - and that's a sort of temporal singularity that rules out our asking what happened before it. We can't, of course, because natural science must assume that the same set of natural laws always applied; our past is a deduction according to them. But they do not rule out some other sense of time that works differently. Wnt (talk) 17:28, 3 June 2011 (UTC)[reply]
The Deistic philosophy would be that God started the process rolling and then sat back and watched. If there's anything God has plenty of, it's time. ←Baseball Bugs What's up, Doc? carrots17:29, 3 June 2011 (UTC)[reply]
(ec) Alternatively (or equivalently?), if one takes the view that God is "outside of time", as a number of theologians have done, including C.S. Lewis, Thomas Aquinas, Augustine of Hippo..., so "time exists only within the created universe, so that God exists outside time; for God there is no past or future, but only an eternal present" as our article puts it, it is not clear that this distinction is meaningful; because on this view God created the entire universe, its entire past, entire present, and entire future, all in one go, in a single act, in a single instant. For such a universe there isn't really a distinction between a universe created 14 billion years ago; or a universe created 6000 years ago with a 14 billion year history; or a universe created in the 6 billion years in the future with a 20 billion year history that our present experiences are part of. If past, present and future are all created at once, then all these descriptions are equivalent.
More to the point, if God created a universe 6000 years ago, and thought it was worth giving it a fully-formed, consistent 14 billion year history, what kind of lazy good-for-nothing followers do they think they are, to decide they can't be bothered to be interested in what has been created for them? Jheald (talk) 17:53, 3 June 2011 (UTC)[reply]
As for another way of getting him to emotionally bridge the idea that there might be more than 6000 years of reliable history, I wonder whether talking about Y-chromosome mutations and how they can be used to work out family trees could be helpful? -- ie how, if you start, say, from an original emigrant to America in the 1600s, the branches of the family tree separate, mutations occur on particular branches, and get passed down, so that now if you start with a big group you can look at who has which mutations and who doesn't, and use that to put the entire branching family tree back together. My feeling is that this might work as a way in because it relates to something very everyday and immediate and practical and useful -- working out one's family tree; and one can concretely look at such branching trees discovered for real family studies that have been done. But of course once that is accepted, it's not such a step to say: but of course we can do that for the whole of mankind, and fit them all into such a tree - which one can concretely show, and the mutations all hierarchically stack in beautifully ... only it must have taken more like 6000 generations to happen, than 6000 years. But then of course you're only a hop away from saying you can apply very very similar genetic techniques to species, with some examples, and they too fall into a hierarchical family tree of mutations. Though I guess that might take you beyond the "I don't want to know that" threshold. Jheald (talk) 18:54, 3 June 2011 (UTC)[reply]

There is no point in trying to prove that the world is older than 6000 years. The person you are addressing will either have bothered to learn the science or won't. The real question is, what kind of sick, malevolent, all-powerful being would create false evidence to convince those who use reason and their senses that the universe is 13,000,000,000 years old if it weren't? μηδείς (talk) 01:25, 4 June 2011 (UTC)[reply]

To Medeis: sure, no point.
Why would religious traditions, stories, visions, and inspirations disagree with our knowledge of the physical world? Well, for one thing, if they didn't, they'd merely be scientific treatises. But more fundamentally, religion serves the useful purpose of fouling up the plans of those who have too much simple order in mind for the human race. It's like Frank Herbert's "Bureau of Sabotage" in Whipping Star, which prevents bureaucracies from working too efficiently. There's always someone who has to bring his little ceremonial dagger on the airplane, who demands to be photographed only while wearing a chador, or perhaps not at all, who won't swear on the Bible because the Bible says not to, who won't take the Pledge of Allegiance because his allegiance is to another, who will march into the clubs of the police singing that they won't be moved (or stopped, for that matter). The beauty and truth of religion comes not from what it proves, how it is deduced, but from what it is, its simple essence, as people choosing to be people. Wnt (talk) 05:50, 6 June 2011 (UTC)[reply]
Oh, to get back to the OP's question, I think that "continental drift" (just the observed separation of the New and Old World, without getting into the details of plate tectonics) is a good thing to use. We can measure the rate of the continents' separation in centimeters yearly; we can show data that there's a midocean ridge in the middle of the Atlantic, with symmetrical patterns of magnetic reversal. And it is all too apparent that you can't get that gap in 6000 years. Wnt (talk) 06:00, 6 June 2011 (UTC)[reply]

In answer to the "deception" question, traditional beliefs have it that the first man was created as an adult. Similarly, the world was also created 'adult'. As this wonderfully anthropocentric view of the universe goes that everything was created in order for man to be put in it, creating it already old (and ready for mankind) makes perfect sense. To read into this "deception" is the input of the viewer, not the creator. --Dweller (talk) 13:23, 6 June 2011 (UTC)[reply]

There is a theological term for the concept that God created the Universe "in medias res" as it were, with light waves already on their way from distant stars, and trees with internal rings denoting seasons that never existed, and rivers with beds they had never gouged. But I cannot remember what it is, although I will look it up and get back to y'all.
And it was said (in the past and only by a very few) that God made it thus to challenge our faiths in the Word. However, in contrast to what many of the preceding posters have written, VERY few fundamentalists take this road. It is a most unsatisfying explanation, and even for them has an absurdly ad hoc quality to it, especially when we realize that it follows that everything could have been made 5 minutes ago. I suspect that many of the posters who have written to characterize fundamentalists as taking this line have never looked into what they do say. The road they take is probably even harder to traverse. The vast majority of the young Earthers insist that the evidence, properly considered, DOES show that the world, indeed the whole Universe, is only 6000 years old, and that scientists are variously deceived by the Devil, too proud to accept the Bible as the real authority, conniving in hiding and discarding evidence which does not support their evolutionistic theories, denying funds and airspace to alternative views and so on. Myles325a (talk) 08:41, 7 June 2011 (UTC)[reply]


June 3

E. Coli Outbreak

How fast does it affect its victims? roscoe_x (talk) 06:07, 3 June 2011 (UTC)[reply]

Bacterial food poisoning takes 12-72 hours for primary symptoms to appear. -- kainaw 13:46, 3 June 2011 (UTC)[reply]
The media say something about 8 to 10 days. I would not believe any of them.--Stone (talk) 13:49, 3 June 2011 (UTC)[reply]
Eight to ten days is consistent with the data in this case. People are still falling ill after having stopped eating raw vegetables. Also, it is known that you only need to ingest a few bacteria, which will multiply over a period of days before any symptoms appear. You can wash a cucumber, but that won't remove all the bacteria on it, so the initial recommendation of washing vegetables is not good enough in this case. That's why the new recommendation is to not eat raw vegetables at all. Count Iblis (talk) 16:29, 3 June 2011 (UTC)[reply]

Countermeasures against RFID

It's become tradition for me to take any new pair of shoes and microwave them on HIGH for 30 seconds, due to paranoia about RFID spying. It doesn't seem to hurt anything, or even warm them much. While other clothing items can be found with RFID tags at the supermarket,[1] what I've seen of earlier such tags was bulky and not readily concealable. I was always most suspicious of shoes because they are difficult to take apart completely. But it isn't 2000 any more, and this isn't really my field...

  • Do microwaves really kill RFID chips for sure - even the new ones which are much smaller, or designed to be incorporated in fabric?
  • Is it now feasible to hide RFID chips in clothing fabric in such a way that no one would notice them?
  • Is there any cheap handheld device available by now that can be used to sweep a house for RFID chips the way people in a spy movie sweep for bugs? Which would detect any possible RFID chip, not just one model and frequency? (Ideally it should also detect if other electronic devices are operating as base units to detect RFID chips in the area)

Wnt (talk) 10:41, 3 June 2011 (UTC)[reply]

References

  1. ^ [1]
The RFID in the new German passports is there to track you (even if the all-wise government denies it). So what you are doing would probably be made illegal if many people did it. 95.112.137.166 (talk) 12:20, 3 June 2011 (UTC)[reply]
You're probably looking for a nonlinear junction detector. A handheld rfid reader/writer isn't cheap. Generally, rfid chips are supposed to be degaussed at the store, though it depends on the local law. Microwaving does disable most (non-military) rfid chips. Alternatively, rather than destroying the chips, you could clone them, or recode them to exploit buffer overflows, though this may be illegal in certain areas. Only the Paranoid Survive =P. Smallman12q (talk) 12:32, 3 June 2011 (UTC)[reply]
One thing (at least) that I don't understand here. Even granted the hypothesis that someone or some agency wants to track your movements in the first place - why would they do that using RFID tags in your clothing or shoes ? Surely the data trail that you create every time you use a debit/credit card, use your mobile phone, use a season ticket on public transport or drive your car past a traffic camera is much more extensive and reliable than depending on something that only possibly works if you are wearing your new shoes or shirt. Microwaving your new shoes would seem to give you very little privacy protection unless you already walk or cycle everywhere and only ever use cash and landlines. Gandalf61 (talk) 13:33, 3 June 2011 (UTC)[reply]
If you value your privacy, there are three simple things to do. First, stop using cards (credit cards, frequent shopper cards, discount cards, etc...). Those are specifically designed to track you. Second, get rid of your cell phone. By design, the towers track your phone. Third, stop using the Internet - especially social websites. The absolute worst offender is Facebook - if you have a Facebook account, they keep track of every single website you visit that has one of those "like" buttons on it. So, simply using the Internet tracks what you are looking at. Even the queries you type into search engines can be used to identify and track you (consider the stink about AOL releasing "deidentified" queries for research purposes). After you do all of that, then worry about RFID. -- kainaw 13:40, 3 June 2011 (UTC)[reply]
Having done all these things, go home to your cave, safe and secure that nobody is paying attention to you... --Mr.98 (talk) 14:12, 3 June 2011 (UTC)[reply]
Has someone hijacked Wnt's account? Assuming it's actually a serious question he's posing (the irony of which has already been hinted at by the responses), probably the most useful thing that nuking your new shoes could do would be to kill off any residual foot-fungus left by folks who had previously tried on those shoes before you bought them. Or would it? ←Baseball Bugs What's up, Doc? carrots15:02, 3 June 2011 (UTC)[reply]
I suspect that foot fungus is resilient enough to survive 30 sec in a microwave, but I would be surprised if someone has done an actual study on that method. Googlemeister (talk) 15:14, 3 June 2011 (UTC)[reply]
There's kind of an urban legend about sterilizing underwear via microwave, and random selections in google strongly advise against it. (Of course, they might be just trying to keep folks from zapping the tracking devices.) ←Baseball Bugs What's up, Doc? carrots15:36, 3 June 2011 (UTC)[reply]
You can get very simple RFID readers that plug into a laptop for less than $200 these days. Most will beep (or otherwise respond) when they get a signal in the right frequency range, even if they can't interpret what it means (for example due to secret encoding). In addition, with the advent of near field communication one can expect that many next generation mobile phones (and some already available phones) will be able to read RFID chips. In general these devices require you be pretty close to the chip in order to pick it up, so it would take quite a while to scan over your possessions if you really intended to be thorough. Dragons flight (talk) 15:30, 3 June 2011 (UTC)[reply]
If you ever find yourself hopelessly lost in the middle of a desert, or break your leg while climbing an obscure peak in the Alps, or whatever, the ability to be tracked might come in handy. ←Baseball Bugs What's up, Doc? carrots15:33, 3 June 2011 (UTC)[reply]
An RFID chip isn't that kind of tracking. It's more of a "you walked past this detector at this time on this date" sort of tracking, not the "I know where you are right now" sort of tracking. (There would have to be a detector in the middle of the desert for them to know you were there.) --Mr.98 (talk) 16:01, 3 June 2011 (UTC)[reply]
To find an RFID chip in the middle of the desert, they would have to blanket the area with waves, and the reply from the chip could only be detected from a few meters away. Basically, they would have to comb the whole area until they happened to pass within a few meters of it.
Could be very useful, though, for finding people buried in snow avalanches, since the buried people would be in a relatively small area. A run-of-the-mill detector would limit the search to an area a few meters wide. A custom detector could detect the direction and distance to the chip (beeping more strongly as the reply from the chip gets stronger, and displaying the direction the reply signal is coming from) and pinpoint the chip's location almost exactly. --Enric Naval (talk) 16:38, 3 June 2011 (UTC)[reply]
A simpler version of this system exists - see RECCO. It doesn't use RFID, but a directional radio reflector instead (on the principle that you don't need a coded signal as you don't really care who the buried person is - hopefully you are going to dig them out whoever they are!). Equisetum (talk | email | contributions) 11:52, 5 June 2011 (UTC)[reply]
The consensus here seems to be:
  • "military" RFID chips are immune to microwaves. What makes them special? How do you know some commercial chips don't work the same way?
  • cheap handheld devices to detect RFID are available. Which makes me wonder --- are people finding chips when they scan through their possessions?
I am disappointed to see some people still wondering why such tracking should bother them, or assuming all their privacy is forever lost already. The existence of companies like Facebook or supermarkets with shopper cards is evidence that someone finds all this personal information to be profitable, and if they're making money, probably the subject of their efforts is losing money, somehow or another. Even if it's only the legendary money you can make filling out shopper surveys on the Internet, the loss of privacy can be evaluated in tangible financial terms.
The suggestion that "next generation mobile phones" would detect these chips is particularly disturbing, though it clashes with the $200 price tag mentioned above. While the chips can only be detected within "a few feet" (around 100 in actual tests) the prevalence of phones would mean that they would be more or less continuously tracked in populated areas. One expects a repeat of what I've read about certain GPS phones, where you have people in a "free country" carrying around devices that are tracking their movements to be used against them by prosecutors, but which they are not themselves permitted to access because they "don't have the software". There is such a strong feeling of contempt that pervades when people are expected to buy and care for the things that spy on them, and aren't even able to use the devices for their own purposes. Wnt (talk) 17:13, 3 June 2011 (UTC)[reply]
Those who have nothing to hide have nothing to fear. :) ←Baseball Bugs What's up, Doc? carrots17:27, 3 June 2011 (UTC)[reply]
Er, I'm sorry, but your assertion that when somebody else is making money, the subject must be losing money, is pretty fallacious. Wealth is not a zero sum game (if it were, we'd still be using stone tools), and information certainly is not (it can be replicated, shared, exploited "without cost" to anyone in many cases), and just because someone can make money off of knowledge about me, doesn't mean that I'm losing anything by that fact, at all. That's an awful argument in favor of privacy, anyway. The real argument probably ought to be couched in terms of possible abuse of information or potential negative effects. --Mr.98 (talk) 18:35, 3 June 2011 (UTC)[reply]
While paranoia about privacy is good, I've never been impressed by RFID paranoia. To me it seems like mostly fear of a buzz-word. Some people will physically recoil if you consider using RFIDs in place of a barcode! Most of the "tracking" with RFIDs that people freak out about has been going on since long before RFIDs were invented. They've ALWAYS kept track of who crosses national borders, they don't need the chip in your passport to do that. Supermarkets have always tried their best to track their inventory, a more technological bar-code doesn't suddenly make it sinister. Even the chips implanted in pets and livestock are just replacing older forms of tagging.
If you honestly think anyone is going to implant RFIDs in your shoes and then implant RFID readers on the sidewalks, you're behind the curve, paranoia-wise. Doing all that would represent a significant expense, and a serious risk of negative media exposure. All that's needed to track people as they walk down the street is cameras and fancy computer-vision software. (So long as at least one camera gets a good look at your face or car license plate.) That's far more cost effective and the public has already accepted cameras.
As discussed above, the readers wouldn't be in the sidewalk but in the cell phones in the pockets of the people passing you (together with GPS data). Also, whether or not people have accepted cameras, my impression is that facial recognition software is not very reliable, and most of them are not even networked. Wnt (talk) 23:16, 3 June 2011 (UTC)[reply]
All that said, RFID readers are fun toys and not very expensive. I recommend buying one if you're interested in the topic. I haven't tried it, so I can't personally recommend it, but this looks like a fun beginner's kit. APL (talk) 18:04, 3 June 2011 (UTC)[reply]
I must reiterate what others have said here: The potential risks of privacy invasion or identity theft that come just from using the internet appear to be far greater than the potential risks connected with a "tracking chip". ←Baseball Bugs What's up, Doc? carrots18:47, 3 June 2011 (UTC)[reply]
I think these are different kinds of risks. For example, tracking someone's physical position on a daily basis is more useful for serious physical attack, robbery, kidnapping, etc., whereas the content of the e-mail would suggest more when and why someone might want to do that to you. Wnt (talk) 23:16, 3 June 2011 (UTC)[reply]
Your typical criminal just wants your money. A high-tech criminal is most likely to want to steal your identity, and the greater your electronic presence, the greater likelihood of such theft. I have trouble imagining why someone would go to all the trouble of putting chips in your shoes, when they can do what they always do: look for vulnerabilities. They don't need to steal everyone's stuff - just the stuff from the path of least resistance. For example, they might try to open the back door of every house in a given neighborhood. The one they're going to break into is the one who left it unlocked. ←Baseball Bugs What's up, Doc? carrots23:27, 3 June 2011 (UTC)[reply]
If someone is planting RFID readers all over town, paying off local businesses to put RFID transmitters into my shoes, and building up massive database of my movements around town just to corner me and take the $20 I have in my wallet, then I would gladly turn it over. You have to admire such a coordinated high-tech effort. I'd be proud to be targeted by such professionals. APL (talk) 22:28, 4 June 2011 (UTC)[reply]
In any case, if it is as easy as boiling one's shoes, it would be relatively easy to defeat any future nefarious uses once they were uncovered. This makes this sort of paranoia a lot less threatening to me than medical records, spending records, e-mail interception, net use logging, and so forth, which are much harder to disentangle oneself from even if one knows probable abuse is taking place. The RFID approach seems rather clunky to me, by contrast to the type of data that we know the NSA and probably other agencies can and do collect. --Mr.98 (talk) 19:45, 3 June 2011 (UTC)[reply]
Has anyone bothered to tell the workers in YY factories that they are meant to be inserting RFID chips into the sneakers they make ? That's a lot of chips. Sean.hoyland - talk 20:11, 3 June 2011 (UTC)[reply]
The employees in the factories making credit cards didn't raise any special ruckus when RFID chips were added to some of them; that was left to consumers. But really, my question here wasn't about all privacy, which is too broad a topic to cover well - I was asking specifically about RFID because it's something I don't know as much about the capabilities of. Wnt (talk) 23:24, 3 June 2011 (UTC)[reply]
If the RFID chip isn't disabled when you purchase the item, a more important reason why you may want to kill it is because you don't know when it's going to set off a store anti-theft security alarm, which while not dangerous (unless you are actually a thief) tends to be annoying. I know someone who had a jacket that used to set off such an alarm, possibly in one store or chain only. I believe it didn't happen where the item was purchased (which was Kathmandu (company)).
On a personal note, I once had one of those cloth shopping bags which I eventually worked out was the reason I kept setting off such alarms. From memory it wasn't consistent, i.e. I didn't always set off the alarm in some stores. Strangely, there was also one store where I would sometimes (usually?) set off the devices when entering the store (and usually no one paid attention), but not the ones at the checkouts (i.e. the exit). Funnily enough, the store most likely to search me was the one that sold the bag. Eventually I worked out it was probably one of my shopping bags (I had purchased two recently from different stores) and found an RFID chip embedded in the bag, which I bent a few times, and that resolved the problem. I guess a conspiracy theorist would suggest it was intentional on the part of the bag manufacturers.
I've read other stories of similar problems. One person even claimed they set off the alarms at the Israeli border (or some other high-security area), which seems a bit strange (why would the security devices detect RFIDs, and if they did, why didn't they at least tell the operators what it was?).
Perhaps things are improving with growing usage; this was a few years back.
BTW I don't know if I'd entirely agree with some of the above. I don't know about the sidewalk thing, but it seems to me that if unique RFIDs were widely present in shoes, they would be a good way for stores to track the way people move about their stores, where they stop, how long they stop, etc. This could be primarily for general purposes: to see what displays work, whether the layout needs redesigning, or stuff like that. But this info could also be tied to transaction cards or loyalty cards for individualised profiles.
I don't think mobile phone network tracking would have sufficient resolution to do this, unless perhaps you install a lot of picocells. And you'd need the cooperation of the networks in any case. You could do it with Bluetooth; something similar is done on motorways in NZ [10] [11], but that relies on people leaving a Bluetooth device on and unhidden. You could perhaps do it with cameras, which would also enable you to do things like see precisely what people look at, for how long, etc. But I'm not convinced this would be cheaper or that the software is there yet, particularly for stores to be able to do it for most people in the store. However, I'm not saying this is actually happening; I'm sure it isn't, since most people don't have unique RFIDs in their shoes, so it's not going to be worth it. And it would also potentially violate the law in a number of countries.
P.S. [12] [13]
Nil Einne (talk) 08:53, 4 June 2011 (UTC)[reply]

How many calories does it burn?

I am a fat adult man who weighs 180 pounds. If I stand for one hour, how many calories does it burn? What if I sit for one hour? What if I lie down, awake, for one hour? What if I sleep for one hour? — Preceding unsigned comment added by 166.137.138.50 (talk) 12:45, 3 June 2011 (UTC)[reply]

Basal metabolic rate is a good place to look. There are some formulae there that you can use to do some calculations. Just bear in mind that if, as you say, you are 'fat', then you will burn fewer calories than a 180 lb man who is muscular, as muscle burns more energy than fat even when you're resting. --jjron (talk) 14:23, 3 June 2011 (UTC)[reply]
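To make the formulae concrete, here is a minimal sketch using the Mifflin-St Jeor estimate of basal metabolic rate (one of the equations covered in that article). Only the 180 lb figure comes from the question; the height and age below are invented for illustration.

```python
# Mifflin-St Jeor estimate of basal metabolic rate (kcal/day):
#   men:   10*kg + 6.25*cm - 5*age + 5
#   women: 10*kg + 6.25*cm - 5*age - 161
def bmr_mifflin_st_jeor(mass_kg, height_cm, age_years, male=True):
    base = 10 * mass_kg + 6.25 * height_cm - 5 * age_years
    return base + (5 if male else -161)

mass_kg = 180 * 0.4536                         # 180 lb in kilograms
daily = bmr_mifflin_st_jeor(mass_kg, 175, 40)  # assumed height and age
print(f"~{daily:.0f} kcal/day, ~{daily / 24:.0f} kcal per resting hour")  # ~1715, ~71
```

Standing, sitting and lying awake differ from this resting baseline only by modest activity multipliers, so the hourly figures for the activities asked about all sit within a few tens of kcal of each other.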

Creating phylogenetic trees

What are the best tools for taking a collection of related DNA from many organisms (for example, that for 16S ribosomal RNA) and turning it into a phylogenetic tree? I am aware of Clustal, but I am wondering if there are other widely used tools for generating the branch structures, and whether there may be superior tools in terms of creating an attractive and/or easily manipulated graphical representation of the result. Dragons flight (talk) 15:12, 3 June 2011 (UTC)[reply]

You could try the R_(programming_language), which has many packages for phylogenetics here:[14]. I can recommend R as a free, powerful general computation tool, but I have not used any of the phylo packages. SemanticMantis (talk) 19:28, 3 June 2011 (UTC)[reply]
Phylip is a very commonly used program to generate trees for phylogenetic analysis. It takes input from various alignment formats, including Clustal, and is quite easy to use. --- Medical geneticist (talk) 00:52, 4 June 2011 (UTC)[reply]
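For a feel for what such tools do internally, here is a toy sketch of UPGMA, one of the simplest distance-based tree-building methods (among those Phylip implements); the taxon names and distances below are made up.

```python
# Toy UPGMA: repeatedly merge the two closest clusters, averaging
# distances weighted by cluster size, and emit a Newick-style string.
def upgma(names, dist):
    """names: taxon labels; dist[a][b]: pairwise distance (symmetric)."""
    clusters = {n: 1 for n in names}       # label -> number of leaves
    d = {a: dict(dist[a]) for a in names}  # working copy of distances
    while len(clusters) > 1:
        a, b = min(((x, y) for x in clusters for y in clusters if x < y),
                   key=lambda p: d[p[0]][p[1]])
        sa, sb = clusters[a], clusters[b]
        new = f"({a},{b})"
        d[new] = {}
        for c in clusters:
            if c not in (a, b):
                # size-weighted average distance to the merged cluster
                d[new][c] = d[c][new] = (sa * d[a][c] + sb * d[b][c]) / (sa + sb)
        del clusters[a], clusters[b]
        clusters[new] = sa + sb
    return next(iter(clusters))

names = ["A", "B", "C"]
dist = {"A": {"B": 2.0, "C": 4.0},
        "B": {"A": 2.0, "C": 4.0},
        "C": {"A": 4.0, "B": 4.0}}
print(upgma(names, dist))  # ((A,B),C)
```

Real tools add branch lengths and better models (neighbor joining, maximum likelihood), but the clustering loop has the same shape.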

Finding the source of the German E. Coli outbreak

If the symptoms start to show up within 12 - 72 hours, as pointed out above, shouldn't it be a piece of cake to discover where it comes from? You question the people about everything they did in the last 12 - 72 hours, which for most wouldn't be difficult to remember. On the basis of that, you'll find some common elements, and given the high number of infected patients, that should be a very limited group of common elements. Quest09 (talk) 15:55, 3 June 2011 (UTC)[reply]

First, you have to determine an outbreak - which will be days after the symptoms appear, which is around 2 days after consumption of the tainted food. So, you are asking people to list everything they ate last week. Then, you try to find something common - which is hard. Then, you have to find some of that food and verify it is tainted. Then, you recall the food and you try to track it to a distributor. From there, you are on a paperwork trail - which may or may not be valid. Food goes to a lot of places before it reaches a store. Even if you track the whole line the food went through, it isn't necessarily the farm that tainted it. One case in the United States was traced to a group of farms in Mexico, but the farms were clean. The source of the bacteria turned out to be a vegetable packing station that was completely separate from the farms. -- kainaw 16:13, 3 June 2011 (UTC)[reply]
And it is not 2 days, it's more like 8 days with very low amounts of bacteria ingested. I've read that ingesting just a dozen of this strain of E. coli bacteria will make you ill 8 days later. Count Iblis (talk) 16:33, 3 June 2011 (UTC)[reply]
Agree with Kainaw. I've been involved in a couple of investigations and it can be confoundingly difficult to pin things down. On the one hand, we have an unprecedented ability to track and monitor shipments, but on the other hand, the food web we've built is so far-reaching and complicated that there's a crazy amount of data to go through. Matt Deres (talk) 17:22, 3 June 2011 (UTC)[reply]

This is not that easy: if you have 100 patients and they each ate 20 different things a day for 8 days, you have a lot of tests. If you have a quick test for the EHEC strain you are searching for, it gets easier, but in the beginning you have to look at every E. coli you find. --Stone (talk) 17:30, 3 June 2011 (UTC)[reply]
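The "common elements" search proposed in the question can be sketched as a simple tally. Real investigations are case-control studies (patients compared against uninfected controls), so this toy ranking, with invented recall data, only illustrates the very first step:

```python
from collections import Counter

# Made-up food-recall sets for three patients; each set is what one
# patient remembers eating in the exposure window.
recalls = [
    {"sprouts", "cucumber", "beef"},
    {"sprouts", "lettuce", "chicken"},
    {"sprouts", "cucumber", "tomato"},
]

# Tally how many patients reported each item, most common first.
counts = Counter(item for patient in recalls for item in patient)
ranked = counts.most_common()
print(ranked[0])  # ('sprouts', 3)
```

As the replies point out, the hard part in practice is the 8-day recall window, imperfect memory, and that the most commonly reported item (bread, water) is often just as common among healthy controls.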

Perhaps the cause is the drought in Western Europe? Farmers are spraying water on their fields a lot more than usual and they want to reduce the cost for that. So, they may use water from ditches, the water in there can be contaminated by cows. Count Iblis (talk) 03:10, 4 June 2011 (UTC)[reply]

Assisted suicide vs. euthanasia

What is the difference between assisted suicide and euthanasia?--188.146.40.115 (talk) 18:16, 3 June 2011 (UTC)[reply]

Assisted suicide implies that the death is desired by the person dying. Euthanasia is not as clear on that point — it says more about the intentions of the person assisting than it does about the intentions of the one dying. Voluntary euthanasia is the same thing as assisted suicide, but there is also involuntary euthanasia. There is no involuntary assisted suicide. --Mr.98 (talk) 18:29, 3 June 2011 (UTC)[reply]
One is a method of helping someone kill themselves and the other is a phonetic spelling for a person who rarely logs in to edit Wikipedia (sorry, I couldn't help but answer this one). Really, euthanasia doesn't require the person to kill themself. Youth in Asia (talk) 18:31, 3 June 2011 (UTC)[reply]
Euthanasia as a word originally meant "happy death",[15] and has come to mean "mercy killing". As noted above, assisted suicide is voluntary on the part of the human receiving it. Euthanasia toward a human might or might not be. When an animal is put down, obviously it's not voluntary on the animal's part. ←Baseball Bugs What's up, Doc? carrots18:43, 3 June 2011 (UTC)[reply]
Well, there are animals that seem to commit suicide, like whales that beach themselves. I suppose you could argue that this behavior is the result of instincts, rather than a decision to die. There are also old and sick animals with the instinct to go off and die alone (probably so as to not infect others related to themselves). StuRat (talk) 23:30, 3 June 2011 (UTC)[reply]
That doesn't qualify as assisted suicide, nor is necessarily even a conscious effort to die. Animals that go off to themselves to die might do so just to feel safer somehow. Your typical animal is ignorant of micro-biology. I assume this has come up because Kevorkian died today? ←Baseball Bugs What's up, Doc? carrots23:45, 3 June 2011 (UTC)[reply]
It would be assisted if you then finished off a whale that beached itself, etc. StuRat (talk) 02:56, 4 June 2011 (UTC)[reply]
Is it not assisted if the pack leaves an old wolf to die alone? – b_jonas 12:00, 5 June 2011 (UTC)[reply]
How about the mother spider which sacrifices herself so her spiderlings can eat her alive ? This make the spiderlings assistants in her suicide, right ? StuRat (talk) 06:18, 6 June 2011 (UTC)[reply]
Wikipedia has articles about Assisted suicide and Euthanasia. For the do-it-yourself enthusiast (euthanasiaphile?) there are Euthanasia devices such as the Thanatron, Mercitron, Deliverance machine and Exit International device. Cuddlyable3 (talk) 18:58, 3 June 2011 (UTC)[reply]
"Librarians have moved to ban the suicide 'do it yourself' book, Final Exit, from their shelves, not because they object to the content, but rather since nobody ever seems to return the book." -:) StuRat (talk) 01:08, 4 June 2011 (UTC) [reply]

relationship between forces

What is the relationship between infrared light and heat that allows you to see heat using it? And what is the relationship between the electrons in electricity and light, which is made up of photons and yet is electromagnetic radiation, given that you can use magnets to make electricity? Bugboy52.4 ¦ =-= 18:23, 3 June 2011 (UTC)[reply]

Infrared light is heat...which is thermal radiation, which is a form of electromagnetic radiation. Infrared light is seen with an infrared thermometer, which measures the frequency of photons in a certain wavelength range. Electrons are in a sense made up of photons...additionally see wave–particle duality.Smallman12q (talk) 23:15, 3 June 2011 (UTC)[reply]
"Electrons are in a sense made up of photons" - no, not in any sense I'm aware of. Both electrons and photons display wave–particle duality, but they are otherwise quite different. In particular, an electron has rest mass and electrical charge, while a photon has neither. --Stephan Schulz (talk) 07:42, 4 June 2011 (UTC)[reply]
I think it's more correct to say that warm objects give off infra-red light (and hot objects can give off visible light, starting with "red hot" then going up to "white hot", or even "blue hot")). StuRat (talk) 23:20, 3 June 2011 (UTC)[reply]
Infrared EMR carries heat away from the source. When the infrared EMR irradiates a target, the target absorbs the infrared EMR along with its heat. The target converts some of the heat into different forms of energy; the rest is re-emitted in the form of EMR. The EMR may include visible, infrared and others. Infrared is a medium for heat or thermal energy, as energy can only be passed from one medium to another.
Electrons interact with EMR by absorbing and emitting it, leading to the transfer of energy. Plasmic Physics (talk) 00:30, 4 June 2011 (UTC)[reply]
Note also that a black body at high enough temperatures will radiate electrons and positrons. Count Iblis (talk) 02:58, 4 June 2011 (UTC)[reply]
You don't even need a black body for that, you just need high frequency gamma radiation and a cloud of gas. See pair production. Note, moving magnets don't produce electrons. They simply energise them from their ground state, creating a current of flowing energised electrons. Plasmic Physics (talk) 11:33, 4 June 2011 (UTC)[reply]
You can have heat with nothing but photons. It's a misconception that heat is just about kinetic energy of massive particles. Heat is random energy, period; doesn't have to be kinetic. If you have an evacuated chamber at 3000K, with ideally black walls, the contents of the chamber will be a mix of photons at 3000K, and it's perfectly correct to say that the chamber has a temperature of 3000K even if there is no matter in it at all. --Trovatore (talk) 11:57, 4 June 2011 (UTC)[reply]
Photons are fundamentally connected to electricity: they're the auditors in charge to make sure electromagnetic force works. Electrons are not related in any such fundamental way: they're just one of the particles with an electric charge that happen to be the most convenient carriers to make electric current in the materials practically available to us. However, even with our technology, we can have electric current pass through a solution where the carriers are ions, it's just not very practical to work with. (Positronic brains are just fiction.) – b_jonas 11:56, 5 June 2011 (UTC)[reply]
All light, incident on any object, will heat it up to some degree. Infrared light, however, is particularly good at it; better in fact than light of shorter wavelength (like visible, or UV) or longer wavelength (like radio) of an equivalent intensity. The reason for this is that most molecules have vibrational modes with energy levels in the IR range. Vibrational energy is one form of heat energy, so many substances on which IR light is incident will absorb that light and convert it into increased vibrational energy very efficiently, which is why infrared energy heats objects up so well. The reverse is also true; objects will emit IR photons in buckets as they cool off, which is why "night vision" or "infrared vision" cameras are able to "see heat". There is some pretty good discussion on the relationship between IR and molecular vibrations at Infrared_spectroscopy#Theory. --Jayron32 19:11, 5 June 2011 (UTC)[reply]
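The temperature-to-colour progression mentioned above ("red hot" up to "white hot") follows Wien's displacement law, λ_peak = b/T; a small sketch with the standard constant:

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(temp_kelvin):
    """Wavelength (nm) at which a black body at this temperature emits most."""
    return WIEN_B / temp_kelvin * 1e9

for label, t in [("human body", 310), ("red-hot iron", 1000), ("sun", 5800)]:
    print(f"{label}: ~{peak_wavelength_nm(t):.0f} nm")
# human body ≈ 9348 nm (deep infrared), sun ≈ 500 nm (visible)
```

This is why thermal cameras look in the far infrared for warm objects, while very hot ones shift their peak emission into the visible.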

Sideways gravity

An experiment in 1774 was to calculate the horizontal attraction of a mountain by measuring the tiny deflection from the vertical of a pendulum nearby. The normal vertical had to be found by painstaking observations of the stars and the pendulum deflection was measured in arc seconds. If one repeated the experiment today using a CCD digital camera looking upwards as the pendulum, how small deflection angle could one measure? What precautions would be necessary? Cuddlyable3 (talk) 18:42, 3 June 2011 (UTC)[reply]

Commercially available star trackers publish angular resolution limitations. I think we've entered the era of "as many decimal places as you can afford," out to 10 or 20 or maybe even 30 bits (so, roughly 6 to 10 decimal places).
I guess one issue, since you are measuring from the ground, is that atmospheric thermal aberrations will cause noise in high-resolution star position measurements. Equipment vibrations can be easily compensated for by adaptive optics, but atmospheric distortion must be estimated in software and compensated; so your algorithm (or your software developer) will be your limiting factor. Nimur (talk) 00:18, 4 June 2011 (UTC)[reply]
The lead of the article (adaptive optics) says it is used for atmospheric distortion though. Rmhermen (talk) 17:54, 4 June 2011 (UTC)[reply]
Yes, but of course that correction is not accurate to infinite decimal places; the correction can only be as good as the estimate of the distortion. Nimur (talk) 02:48, 6 June 2011 (UTC)[reply]
Suppose we use star position measurements to find latitude and longitude. Does anyone have accuracy numbers? Cuddlyable3 (talk) 08:49, 6 June 2011 (UTC)[reply]
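For a sense of scale, here is a back-of-the-envelope estimate of the kind of deflection the 1774 experiment had to resolve, treating the mountain as a point mass; the mass and distance below are invented round numbers, not the historical figures.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
g = 9.81        # Earth's surface gravity, m/s^2
M = 5e12        # assumed mountain mass, kg
d = 2000.0      # assumed distance from plumb bob to mass centre, m

# Horizontal pull of the mountain versus the Earth's vertical pull.
a_horizontal = G * M / d**2
theta_arcsec = math.degrees(math.atan(a_horizontal / g)) * 3600
print(f"deflection ≈ {theta_arcsec:.2f} arcseconds")
```

With arcsecond-scale signals like this, the systematics discussed above (atmospheric distortion, defining the local vertical from star positions) dominate the error budget.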

Could the E. Coli outbreak be the result of a bio-terror attack?

Could someone grow a deadly strain of E. Coli, inject that using a syringe in vegetables stored at some farm and cause a deadly outbreak like the current one in Germany? I guess that injecting E. Coli inside vegetables could explain why nothing has been found, because investigators are collecting samples from the skins of vegetables. Count Iblis (talk) 21:21, 3 June 2011 (UTC)[reply]

I don't really know, but I heavily doubt that it could work that way. Take a clean, unused syringe needle and poke it into some vegetables. Leave them in the fridge for 3 days. I bet you will see brown degradation at those spots big enough that those specimens would be sorted out. 95.112.137.166 (talk) 22:20, 3 June 2011 (UTC)[reply]
Since pathogens find their way into vegetables in accidental outbreaks, I think this can be done; I'm not sure I should really bother to think up cleverer ways to do it though. I think that as a rule, the less technology involved in a biological attack, the greater its chances of success. The 1984 Rajneeshee bioterror attack is an example of a quite nearly successful attack - but some technology they didn't need led to their eventual prosecution. Wnt (talk) 23:40, 3 June 2011 (UTC)[reply]
I think maybe that article is misnamed. It was certainly a reprehensible action, but I'm not sure it was terrorism. To me terrorism is an attempt to frighten a population into submission. Here it seems that any fear was a byproduct; they just wanted people to be too sick to go to the polls. (Note that I'm not saying that's better than terrorism, just different.) --Trovatore (talk) 07:16, 4 June 2011 (UTC)[reply]
Well, the article says people were afraid to go out, that restaurants were affected economically. It's true that by not seeking immediate publicity and recognition the Rajneeshis differed from many contemporary terrorists; though the damage and risks were different, somehow the attack seems more comparable to anonymous goons who burn down synagogues by night. But if we were writing the article, I'd say we'd have to stick with reliable sources; here I'll say that terrorism is a word, not a black-bordered category coded in the laws of physics, and like all words it is prone to elastic definition by a wide range of cultural forces. Wnt (talk) 15:05, 4 June 2011 (UTC)[reply]
Yes, and I think that Bin Laden's last message referred to the KISS principle. :) Count Iblis (talk) 03:02, 4 June 2011 (UTC)[reply]
Crop dusting plane? Seriously though, getting the right pathogen is often much harder than deploying it (though anthrax might be an exception). Dragons flight (talk) 06:57, 4 June 2011 (UTC)[reply]
I think the answer is, yes, it COULD be a bio-terror attack, but far more likely is incompetent handling and/or negligence, possibly of the criminal kind. But not a deliberate attack. HiLo48 (talk) 07:24, 4 June 2011 (UTC)[reply]

Organs to donate

How many anatomical organs one can donate and retain relatively normal life capability afterwards (and what are they)?--178.180.38.43 (talk) 21:42, 3 June 2011 (UTC)[reply]

One kidney, part of your liver, and one lung. Count Iblis (talk) 21:52, 3 June 2011 (UTC)[reply]
Skin, bone marrow, blood and blood vessels could also be recycled leaving the donor alive. But as in liver, this is not a whole organ, and could be termed a tissue donation. Graeme Bartlett (talk) 23:07, 3 June 2011 (UTC)[reply]
As above, if we allow partial organs, we can throw in partial pancreas. There's also no technical barrier to a living donor giving up a single cornea for transplant, though there are obvious and serious ethical issues. A single ovary can be transplanted from a living (female) donor; as far as I know this procedure has only been performed between identical twin sisters. See also our article on transplantable organs and tissues. TenOfAllTrades(talk) 04:05, 4 June 2011 (UTC)[reply]
What serious ethical issues could arise from a single cornea transplant as long as the donor is willing? 173.2.165.251 (talk) 13:58, 4 June 2011 (UTC)[reply]
There is a common view in medical ethics that you shouldn't harm one person to help another, even if the person being harmed consents ("first, do no harm"). Most people accept that the harm from donating a kidney, say, is small enough that they are willing to make an exception. Being blinded in one eye (or at least having your vision in that eye significantly impaired) is quite a bit more harmful. --Tango (talk) 22:58, 4 June 2011 (UTC)[reply]
Uterus transplantation has been tried, but not successfully according to the article.Sjö (talk) 06:35, 5 June 2011 (UTC)[reply]
Could one donate a finger to a close relative? A tooth? I guess there's no need for the latter because it's more practical to use prosthetics. – b_jonas 11:46, 5 June 2011 (UTC)[reply]
Even replanting a tooth from an individual back into him- or herself (say, for instance, if it avulsed completely) is less predictable than current dental implant technology, and requires multiple procedures to allow it to remain in the mouth properly (root canal therapy, a possible post and core, and a crown). DRosenbach (Talk | Contribs) 03:45, 6 June 2011 (UTC)[reply]
You probably could donate a finger. Modern medicine can re-attach severed fingers with a very high success rate so I can't see why they couldn't attach another finger instead if the original one was too badly damaged. I'm not sure it would be worth having to take anti-rejection drugs for the rest of your life, though. Fingers are very useful, but hardly essential for life. One thing I have heard of is grafting a toe onto a hand - since it comes from the same person, there isn't the same risk of rejection. Also, the loss of a toe isn't as harmful to them as the loss of a finger (particularly a thumb, which is the most important finger so the one usually replaced) would be to you. --Tango (talk) 16:59, 5 June 2011 (UTC)[reply]

Pretty sure you could donate more than one lung, so long as it was still rather less than two. --Dweller (talk) 19:02, 5 June 2011 (UTC)[reply]

John and Lorena Bobbitt are to be awarded a Geld Medal for their work on reversible organ reduction. Cuddlyable3 (talk) 08:45, 6 June 2011 (UTC)[reply]

I have doubts that you could live a relatively normal life after donating a lung. I would think that the capability to do aerobic activity like jogging would be seriously limited. Jogging with 2 lungs is hard enough. Googlemeister (talk) 13:28, 6 June 2011 (UTC)[reply]

A beautiful question about parrots and their voice-producing system

We all know that parrots can imitate human language (are they the only ones on the planet?).

Well, I ask: is this ability neural, or is it because of their unique vocal-producing throat system (which I believe to be human-like)?

sorry for the ignorance.

blessings. 109.67.42.106 (talk) 22:52, 3 June 2011 (UTC)[reply]

Both. They need the physical ability to make those sounds, and they need the brain power to hear a sound and convert that into the appropriate nerve signals required to duplicate it. A dog might be an example of an animal with sufficient brain-power to imitate human speech, but which lacks the physical equipment. Thus, when you try to get a dog to say "I love you", it comes out "rawr raw rawr". Also, there are other species of birds which imitate sounds, including human speech, such as the myna bird. StuRat (talk) 23:09, 3 June 2011 (UTC)[reply]
Common_Raven#Vocalization 95.112.137.166 (talk) 23:32, 3 June 2011 (UTC)[reply]
"Nevermore!" Looie496 (talk) 23:44, 3 June 2011 (UTC)[reply]
You may be interested in our articles on bird vocalization and more particularly lateralization of bird song, which discusses the neurophysiological work that goes into the vocalizations of songbirds. Put briefly, it's not terribly human-like. For one thing, they essentially have a two-part vocal apparatus called a syrinx (compare with the human larynx). Matt Deres (talk) 02:25, 4 June 2011 (UTC)[reply]
Some parrots are talented enough to work for large corporations.[16]Baseball Bugs What's up, Doc? carrots14:19, 4 June 2011 (UTC)[reply]
Incidentally, Jules Verne: Dick Sand, A Captain at Fifteen, chapter 6 mentions in passing “a dog that could actually pronounce quite distinctly nearly twenty different words”. (Find links to full text of both the original and the English translation from the article.) – b_jonas 11:44, 5 June 2011 (UTC)[reply]
Were two of them rough and ruff? Googlemeister (talk) 19:28, 7 June 2011 (UTC)[reply]

Canines

Is it true that dogs can't recognise most phonetics of a phrase, except for ones like vowels? So that the phrase, "fetch the newspaper" sounds like, "etch e ews-a-e". This means that you can essentially substitute most consonants and they won't even notice the difference. I'm talking about recognition not vocalisation. Plasmic Physics (talk) 00:07, 4 June 2011 (UTC)[reply]

That's also true of humans, though (to some extent). Our article on Speech Recognition algorithms outlines the physiology and psychology of human language and phoneme recognition, and methods to model it using machinery (particularly, software / computers). Nimur (talk) 00:14, 4 June 2011 (UTC)[reply]
[17] [18]. Deor (talk) 01:17, 4 June 2011 (UTC)[reply]
This is pure WP:OR, but in an experiment I just now conducted, dogs can definitely hear the difference between the words "small" and "ball", which only involves a difference in consonants. Red Act (talk) 01:28, 4 June 2011 (UTC)[reply]

June 4

Resolved

Confirmation please. I would think that walking from the center of a face to the corner, even if the surface is perfectly flat, would result in me going from level ground to a 45 degree incline by the time I made it to the edge. Is this accurate? 70.177.189.205 (talk) 00:32, 4 June 2011 (UTC)[reply]

A 45 degree incline from what? ←Baseball Bugs What's up, Doc? carrots→ 00:46, 4 June 2011 (UTC)[reply]
45 degrees relative to the hypothetical tangent of the sphere - yes. It won't be gravitationally flat; gravity will be strongest at the center of each face. This would give you the perception of always walking uphill or downhill. Plasmic Physics (talk) 00:54, 4 June 2011 (UTC)[reply]
No, that's wrong. At the center of an edge it would be 45 degrees. At a corner it would be 60 degrees. Looie496 (talk) 00:56, 4 June 2011 (UTC)[reply]
But gravitationally, most of the attraction is to the material closer to you. So, this means that someone standing on a corner wouldn't feel "down" as towards the center of the cube, but rather somewhere between the center and their current location. This means the angle wouldn't seem quite as steep. I'd be interested to see the actual results, if somebody wants to "run the numbers". StuRat (talk) 01:02, 4 June 2011 (UTC)[reply]
No need; the symmetry alone dictates that the attraction be to the center, Taruts. Clarityfiend (talk) 01:11, 4 June 2011 (UTC)[reply]
Both 45° and 60° are incorrect.
Use a coordinate system such that the origin is at the center of the cube, and the surfaces are at x=±1, y=±1 and z=±1. At the point (1,1,1), gravitational "up" must be in the direction (1,1,1) from symmetry; consider rotations of the cube by 360/3° around a line that passes through the origin and the point (1,1,1). A vector of unit magnitude that points in the gravitationally "up" direction at (1,1,1) is thus A = (1/√3)(1,1,1). The unit normals to the adjacent surfaces of the cube, i.e., the unit normals to the planes x=1, y=1 and z=1, are (1,0,0), (0,1,0), and (0,0,1), respectively. We thus have A·B = 1/√3, where B is any of the three unit normals to the adjacent surfaces of the cube. But then the identity A·B = |A||B|cos θ, along with |A| = 1 and |B| = 1, leads to θ = arccos(1/√3) ≈ 54.7356°. Red Act (talk) 03:51, 4 June 2011 (UTC)[reply]
What about the edge? As far as I know every triangle has a total internal angle of 180 degrees; taking away the 90 degrees made by two adjoining faces leaves another 90 degrees. Due to the symmetry of the resulting isosceles triangle, the remaining 90 degrees is divided into two equal sections of 45 degrees. Plasmic Physics (talk) 09:03, 4 June 2011 (UTC)[reply]
A formula for the incline at the mid-edge would be 45° cos (θ). Theta represents the surface vector, where zero degrees means pointing away from the mid-face, and 90 degrees means pointing towards a corner. Plasmic Physics (talk) 09:21, 4 June 2011 (UTC)[reply]
Whatever the incline is at the corners, it is less than 45 degrees, not 54. Red Act is basing his calculations on the tetrahedron. Plasmic Physics (talk) 10:00, 4 June 2011 (UTC)[reply]
No, I most certainly am not basing my calculations on a tetrahedron. You're correct about the 45° for mid-edge, but you're confused and mistaken about the corners. Red Act (talk) 13:22, 4 June 2011 (UTC)[reply]

Confirmation from my "common sense": If I made it to the corner, using the above concept of symmetry, I should be at the peak of a 3-sided mountain, and in each direction the slope would be 45 degrees down (I assume I would be standing feet on corner, head directly away from the center of mass/cube), no? If I continue down this slope on a path equidistant from the 2 ridges (edges), I should, halfway to the far corner, find myself on level ground, no? And if I continue on this perfectly flat face of the world in any direction, continuing to any edge/corner, find myself again taking my last step on a 45 degree slope? In essence, walking on perfectly flat terrain, I find my journey from a mountain taking me into the center of a valley and back up to a very steep peak? 70.177.189.205 (talk) 12:37, 4 June 2011 (UTC)[reply]

Red Act: why is it then that your calculated value equals the angle between a face and an edge for a tetrahedron as given in the table on that page?
70.177.189.205: No, it wouldn't be 45 degrees down, to have your head directly away from the centre of mass, you'd have to look straight up. Yes, your final premise is correct. Plasmic Physics (talk) 13:58, 4 June 2011 (UTC)[reply]
It's the same number because sometimes two different problems happen to have the same answer. Red Act (talk) 14:50, 4 June 2011 (UTC)[reply]
What do you mean by "each direction"? If you're at the peak of a mountain that descends at a slope of 45° in every direction, then the peak must be conical. But if three ridges lead to the peak, then you can descend the mountain faster by heading down in between the ridges, rather than heading down a ridge. In this case you're descending at an angle of about 54.7356° if you head down between a pair of ridges toward the corner at the opposite end of a face, and descending at an angle of arccos(2/√6) ≈ 35.2644° if you head down one of the ridges. I calculated that latter angle as being the angle between the vector (1,1,1) and any of the vectors (1,1,0), (0,1,1) or (1,0,1).
I think your common sense here amounts to you being able to tell that the angle you're looking for is in the ballpark of 45°, so it's intuitively feeling to you like it must be 45°. It really works better in this case to use a little analytic geometry on the problem, so that you won't be led astray by your intuitive guess. Red Act (talk) 14:48, 4 June 2011 (UTC)[reply]
Just to be extra clear: the slope calculated as 54.73 is the amount the surface drops down from horizontal toward the centers of the cube faces. The corresponding slope at the edges is calculated the same way, but the edges are not 1 unit but the square root of 2 units away from the center for the same calculation. Thus the slope is gentler, only 35.26 degrees down, going down these ridges. Now it so happens that 54.73 + 35.26 = 89.99 and change - indicating that there is a right angle between the edge and the face when you look at a diagonal slice through the corner of a cube. Wnt (talk) 14:51, 4 June 2011 (UTC)[reply]
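For anyone who wants to "run the numbers" as StuRat suggested, here is a quick numerical check of the two slopes quoted in this thread (a sketch of my own, not from any poster above), using the same coordinate convention: cube surfaces at x, y, z = ±1.

```python
import math

def angle_deg(u, v):
    """Angle between two 3-vectors, in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(c * c for c in w))
    return math.degrees(math.acos(dot / (norm(u) * norm(v))))

up = (1, 1, 1)            # local "up" at the corner (1,1,1)
face_normal = (0, 0, 1)   # "up" at the center of the z=1 face
ridge = (1, 1, 0)         # direction associated with an edge meeting the corner

steep = angle_deg(up, face_normal)   # slope of a face, seen from the corner
gentle = angle_deg(up, ridge)        # slope heading down a ridge

print(round(steep, 4), round(gentle, 4))   # 54.7356 35.2644
print(round(steep + gentle, 4))            # 90.0
```

The two angles sum to exactly 90°, which is the right-angle observation made just above.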
This question and its answers don't make sense to me. Let's suppose I'm standing on a cube that's maybe 10 feet on each edge, and is positioned horizontally. So I walk the few feet from the center to a corner. How am I suddenly on a "45 degree incline"? ←Baseball Bugs What's up, Doc? carrots→ 15:08, 4 June 2011 (UTC)[reply]
Well, people are debating the direction of the gravity vector at the corner of a gravitating cube relative to its direction at the center of one of the faces. Obviously you needn't be at that angle yourself (you're entitled to lean or lie down) but that's the way you'd stand naturally. 129.234.53.36 (talk) 16:14, 4 June 2011 (UTC)[reply]
Do you mean the gravity (such as it is) from the cube itself? ←Baseball Bugs What's up, Doc? carrots→ 16:36, 4 June 2011 (UTC)[reply]
Yes. The question is about Htrae, a cubical planet. Red Act (talk) 16:44, 4 June 2011 (UTC)[reply]
"I don't think you understand the gravity of the situation" might be the correct response for Mr. Bugs ;) The header for the question was to describe a planet-sized cube, and the slopes described were for a traveler, with respect to gravity's effect on his travels on the six-faced world. Thanks to all for the help. You confirmed my theory, and thank you for helping to resolve my oversimplification of the 45 degree vs the actual angles involved. 70.177.189.205 (talk) 16:54, 4 June 2011 (UTC)[reply]
Actually, we never did figure out the interesting part of the question: the direction of gravity as you start walking down the mountainside. We know gravity points to the center at the vertex, at the middle of the edge, and at the center of a face. But we don't know exactly where it points when you're at some other arbitrary position. Like the lower gut grumbling, I feel an integral coming on...
Now the problem should be basic enough to state. Defining the cube as ±1 unit in three dimensions, the gravitational force should be ∫₋₁⁺¹ ∫₋₁⁺¹ ∫₋₁⁺¹ (K/r²) û dx dy dz, where r² for the inverse-square law = (x−x₀)² + (y−y₀)² + (z−z₀)², K is proportional to gravity, and û is the unit vector toward the point (x,y,z) being examined, i.e. ((x−x₀),(y−y₀),(z−z₀))/√((x−x₀)² + (y−y₀)² + (z−z₀)²). To convert this into gravitational components in three directions I think we just consider x−x₀ or y−y₀ or z−z₀ depending on which; thus for example the x component is ∫₋₁⁺¹ ∫₋₁⁺¹ ∫₋₁⁺¹ K (x−x₀)/((x−x₀)² + (y−y₀)² + (z−z₀)²)^1.5 dx dy dz. Ach, but now I have to remember what is done to try to solve a beast like that. Ah, yeah, I remember. Google. Which gets me page 14 of this: http://www.congrex.nl/09m01/papers/11_TU_Delft.pdf You know you're sunk when you don't really understand the answer when you find it. "Adapted from Nagy, 1966, Geophysics, 31, 362-371": gx = ϱG [[[ x tan⁻¹(yz/xr) − y ln(z + r) − z ln(y + r) ]ₓ₁ˣ²]ᵧ₁ʸ²]ᵤ₁ᶻ², evaluated between the limits x₁..x₂, y₁..y₂, z₁..z₂. I suppose if you work through the derivatives that has to come out to the integral above, but I wouldn't have guessed it. Well, anyway, being out of my expertise I'll see if this gets further comment. Wnt (talk) 21:34, 4 June 2011 (UTC)[reply]
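Following up on the integral above: rather than Nagy's closed-form antiderivative, the direction of the pull at an arbitrary point can be sanity-checked with a brute-force midpoint Riemann sum. This is a rough sketch of my own (with G·ρ set to 1, and a hypothetical `gravity_at` helper), not anyone's published code:

```python
def gravity_at(p, n=40):
    """Gravity of a uniform cube [-1,1]^3 at point p, via a midpoint Riemann sum.

    Crude, but adequate for checking the direction of the pull.
    The midpoints never coincide with a surface point, so the 1/r^2
    singularity is not hit exactly.
    """
    h = 2.0 / n                      # cell width
    gx = gy = gz = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * h
        for j in range(n):
            y = -1.0 + (j + 0.5) * h
            for k in range(n):
                z = -1.0 + (k + 0.5) * h
                dx, dy, dz = x - p[0], y - p[1], z - p[2]
                r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
                w = h ** 3 / r3      # (mass element) / r^3
                gx += dx * w
                gy += dy * w
                gz += dz * w
    return gx, gy, gz

# At the corner (1,1,1), symmetry demands the three components be equal,
# i.e. the pull points back along the body diagonal toward the center.
g = gravity_at((1.0, 1.0, 1.0))
```

Evaluating `gravity_at` at points between the corner and the mid-face would answer the open question of where "down" points partway down the mountainside.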
I calculated the inclines for the corners: ascending the corner along an edge gives arctan (1/√3) = 30°, ascending along a face gives arctan (2/3) = 33.690°. These values were calculated by taking the diagonal cross section of a cube with a side length 2. A cross section should give a hexagon with side length 1, then three sides are extended to form a triangle with side length 3. The height of the resultant pyramid is half the distance between two opposite corners of the cube, calculated using Pythagoras' theorem twice, equating to √3. Using trigonometric theory twice, the inclines were calculated for a triangular pyramid with base length 3 and height √3.
That proves that all approach vectors at the peak have inclines less than 45°. No complicated calculus necessary. Plasmic Physics (talk) 23:28, 5 June 2011 (UTC)[reply]
Your error starts in your third sentence here. For example, with the coordinate system as above, you can form a regular hexagon from the points on the cube (-1,1,0), (-1,0,1), (0,-1,1), (1,-1,0), (1,0,-1) and (0,1,-1), but the length of the sides of that hexagon is √2, not 1. Red Act (talk) 02:55, 6 June 2011 (UTC)[reply]
Red Act: The angle between a face and an edge cannot be the same for a tetrahedron and this pyramid, as you are suggesting. If it were, then it would imply that there is more than one solution for the opposite side in a triangle with a known angle between the hypotenuse and the adjacent side. It just makes sense that a shorter pyramid has a lesser incline than a taller one. Plasmic Physics (talk) 23:51, 5 June 2011 (UTC)[reply]
For the cube to be symmetrically truncated, the six edges involved are bisected; a bisected edge is 1 long, not √2.
I never claimed that the angle at the corner of a cube between an edge and the face it intersects at that corner is arccos(1/√3). That angle is 90°. That's one quick easy way to tell that your answers of 30° and 33.690° above must be wrong, because 30 + 90 + 33.690 ≠ 180. Red Act (talk) 02:58, 6 June 2011 (UTC)[reply]
I don't understand your first statement. Why are you adding 30 to 33.690? They aren't on the same plane, i.e. they belong to different triangles. Plasmic Physics (talk) 03:06, 6 June 2011 (UTC)[reply]

Water

I am writing an article about the world water content.

"how much water does a zoo use per day."

How much water would a wild elephant drink during one day?

regards lesley freeman — Preceding unsigned comment added by Waterlessdams (talkcontribs) 01:38, 4 June 2011 (UTC)[reply]

Mammals' daily requirements for water for hydration vary significantly according to physical activities undertaken, air temperature during that activity, and relative humidity. It is likely that the daily consumption by elephants in captivity is well known, particularly those that are kept in zoos. But for elephants in the wild I would expect little to be known, because of the difficulty of measuring how much an elephant in the wild actually drinks during its daily wanderings. I suggest you go looking for some information about how much an elephant in captivity drinks in a day. Dolphin (t) 04:34, 4 June 2011 (UTC)[reply]

What species is this fungus?

I found it in a jungle.--Inspector (talk) 03:44, 4 June 2011 (UTC)[reply]

A jungle in China? Looie496 (talk) 04:28, 4 June 2011 (UTC)[reply]
In any case it looks like some type of cup fungus, but there are a lot of them. Looie496 (talk) 04:35, 4 June 2011 (UTC)[reply]

Anatomy

Which is the most constricted part of the gastrointestinal tract? — Preceding unsigned comment added by Akash541 (talkcontribs) 06:49, 4 June 2011 (UTC)[reply]

There are three 'pinch points' (sphincter muscles) in the GI tract, all evolved for their particular purpose. The highest is the <s>pyloric sphincter</s> cardiac sphincter where the oesophagus enters the stomach. The next is the pyloric sphincter where the stomach empties into the duodenum, and the last is the endpoint, the anus. My opinion is that the anus is the strongest of those three and (thank goodness) the most constricted when not defaecating. Richard Avery (talk) 07:03, 4 June 2011 (UTC) I have removed your duplicated question[reply]
Is not defecation a "normal circumstance", Richard? I wouldn't be too happy if my anus was constricted at that time. (I can't believe I'm referring to "my anus" in a place where the entire world online community can read it, but there you go.) -- Jack of Oz [your turn] 08:08, 4 June 2011 (UTC)[reply]
Perhaps the significant thing is that, unlike the earlier pinch points, you (hopefully) have conscious control over releasing that final constriction. (I'm happy to keep talking about it, since you started it.) HiLo48 (talk) 08:22, 4 June 2011 (UTC)[reply]
Right I take your point Jack, I've reworded my reply. (Hmm, we're talking about 'down under' in down under!)
@ HiLo, of course there are some people who can release their oesophageal sphincter at will to belch, vomit or ?swallow swords. Right I'm off to have breakfast. Richard Avery (talk) 08:28, 4 June 2011 (UTC)[reply]
I think that releasing the pyloric sphincter is not that difficult either - not particularly more difficult than gaining control of the anus was earlier in life, I would say. It rather appalls me that so many people have become dependent on taking special pills for gastric reflux, when a simple motion can alleviate the pressure. (Though IMHO the pylorus has certain tastes of its own which are hard to override - likes sour, hates scratchy stuff - thus milk and cereal are not good to eat in the evening...) Wnt (talk) 14:58, 4 June 2011 (UTC)[reply]
Err, that's a hard-working pyloric sphincter that has to work both sides of the stomach :). The connection between the esophagus and the stomach is the lower esophageal sphincter (cardia). Also note that there's a higher pinch-point at the upper esophageal sphincter which is in your throat. That's technically not part of the GI tract, but I mention it as some people use that term to encompass the entire digestive tract from mouth to anus. Matt Deres (talk) 11:55, 4 June 2011 (UTC)[reply]

DC vs AC

I am currently reading a book written by an expert electrician, and he says this:

"Low-frequency (50- to 60-Hz) AC is used in US (60 Hz) and European (50 Hz) households; it can be more dangerous than high-frequency AC and is 3 to 5 times more dangerous than DC of the same voltage and amperage. Low-frequency AC produces extended muscle contraction (tetany), which may freeze the hand to the current’s source, prolonging exposure. DC is most likely to cause a single convulsive contraction, which often forces the victim away from the current’s source."

Also he said DC just stops the heart, while AC makes it fibrillate, which makes the heart harder to get back to work. Yet I know exactly the opposite :). DC is deadlier at the same voltage/current because it is constant, not alternating like AC. --Leonardo Da Vinci (talk) 07:08, 4 June 2011 (UTC)[reply]

You know how? And what do you mean by 'DC is deadlier in terms of same voltage/current because it is constant and not like AC alternating'? Anyway, there is some discussion at Electric shock. It doesn't go into detail on AC vs DC, but I would trust what it does say, where sourced, more than what 'you know'. Also, did I miss something? What exactly is the question? Nil Einne (talk) 09:39, 4 June 2011 (UTC)[reply]
There is much difference in opinion over this, and the argument has been going on for a very long time. The comparison between the dangers of AC and DC at similar voltages is complicated by the fact that the effect depends mainly on exactly how the shock occurs. Two people can receive apparently the same shock, but one can walk away unharmed whilst the other doesn't walk again. Does anyone know of any published scientific research on this? I tried using myself as a guinea-pig many years ago, and I can confirm (OR warning) that AC feels more dangerous, but I stopped short of passing dangerous currents in the region of my heart. Dbfirs 11:50, 4 June 2011 (UTC)[reply]
See War of the currents for an account of late 19th century demonstrations (paid for by some inventor/industrialist whose name does not come to mind), which showed AC to be more lethal than DC at a range of voltages. Edison (talk) 20:39, 4 June 2011 (UTC)[reply]
The same inventor/industrialist who made a Snuff film of poor Topsy? Cuddlyable3 (talk) 23:29, 4 June 2011 (UTC)[reply]

How to check for signs of life in a non-pulsatile person

Before this gets unfairly written off as a request for medical assistance, I'd like to clarify that if I was indeed looking for help for myself or someone else, I wouldn't come here, type a question, then wait 20 minutes for an answer while mine or someone else's life quickly dwindles. Now my question is how do you check for signs of life in a person who is non-pulsatile should they become unconscious? How would an average person with no medical knowledge who is not aware of the non-pulsatile person's condition be able to know? 173.2.165.251 (talk) 13:46, 4 June 2011 (UTC)[reply]

First of all, I find it hard to imagine a person who has no pulse being conscious. From the first-aid viewpoint, if you find someone who you believe may have recently lost consciousness and has no pulse and is not breathing, or you are with someone who collapses under the same conditions, then do not waste time trying to determine whether the person is 'alive' or not. Call for emergency help, ensure your own safety and commence first-aid cardiopulmonary resuscitation, CPR. If you are not familiar or practised in this procedure then you should get in touch with a local first-aid organisation, enrol in a class and become proficient. You might someday save a life (correctly, postpone a death), but in any case you will not have niggling doubts about what to do if you are ever in the position you originally imagined. Caesar's Daddy (talk) 14:06, 4 June 2011 (UTC)[reply]
I should've clarified that the particular type of people I'm talking about are those who are on a ventricular assist device, which, if I read that article correctly, can leave some of them non-pulsatile. And I believe Dick Cheney is one of those people who use such a device, because the last I heard he really has no pulse. 173.2.165.251 (talk) 14:35, 4 June 2011 (UTC)[reply]
Ah ha, well, that changes things slightly. Firstly address the person with a shake of the arm or shoulder to elicit some response, if that is negative then lightly pinch the ear lobe or similar place to elicit a pain response, if there is no reaction then check whether they are breathing by putting your cheek close to their mouth or nose and looking down their body for any respiration movements for about 10 seconds. If both these are negative then call for emergency aid and commence CPR etc. The need for further training still applies. Caesar's Daddy (talk) 15:16, 4 June 2011 (UTC)[reply]
Current (2010) AHA standards scrapped the whole "look, listen, and feel" stage after "check for responsiveness"...now one starts compressions much earlier--even before establishing airway or giving first breaths. NB, this is not medical advice, just a statement about their published standards and overhauled training materials. DMacks (talk) 20:06, 4 June 2011 (UTC)[reply]
I am sure there is an article about it, but the question reminded me of stage magicians who persuade the audience their pulse has stopped. I don't know how it is done but I doubt it is magic. Kittybrewster 15:24, 4 June 2011 (UTC)[reply]
Advice used to be to hold a mirror over the person's nose and mouth for 10 - 15 seconds. If it frosts up you know they are breathing. I agree with the pinching of the ear lobe - it wakens people who are in a deep sleep. --TammyMoet (talk) 16:35, 4 June 2011 (UTC)[reply]
In Charade, George Kennedy stuck a needle in his victim. This is not medically advised. Kittybrewster 17:03, 4 June 2011 (UTC)[reply]
In US CPR courses a few years ago they emphasized making noise, and not being stealthy, when kneeling by a downed person. One was to say loudly "Are you OK?" and shake the person, so that a passer-by would not assume you were mugging the person. Edison (talk) 20:36, 4 June 2011 (UTC)[reply]
In the current curriculum for first aid in Germany (well, I was re-certified in February, IIRC), people are taught not to look for a pulse anymore - just to check breathing (by bringing your face close to the nose and mouth, and observing the chest). Apparently, people had a hard time reliably finding a pulse under stress situations. If a patient does not breathe, he won't have a pulse for long, anyways (and, rare anomalies excluded, vice versa). So if there is no breath, you start CPR. BTW, does anybody know if AEDs are programmed to recognise ventricular assist devices? I suspect a shock would not be advised... --Stephan Schulz (talk) 21:17, 4 June 2011 (UTC)[reply]
Patients with ventricular assist devices tend to be prone to heart problems, and compatibility of VADs with external defibrillation is an important design consideration. (Here's an interesting case report. The patient was in ventricular fibrillation for seven hours, and his LVAD kept him alive until he could be electroverted. They shocked him three times with external paddles before normal rhythm was restored.) In general, most of the electronics are located a reasonable distance below the heart; the only bits directly in the path of the current are going to be plastic plumbing. (Similarly, implantable pacemakers generally sit well above the level of the heart.)
On the other hand, conventional CPR can be very dangerous for these patients, as the chest compressions can dislodge the tubing that connects the ventricular assist device and cause massive internal bleeding: [19]. TenOfAllTrades(talk) 22:01, 4 June 2011 (UTC)[reply]
So are these devices obvious or detectable by exterior examination? If they are, then that's one more thing to check; if they are not, you would have to play the percentages and get on with CPR. Caesar's Daddy (talk) 22:18, 4 June 2011 (UTC)[reply]
If they aren't obvious, then the patient should wear a Medical identification tag. That article does list "Pacemaker or other implantable medical devices" as one of the reasons for wearing one. --Tango (talk) 01:30, 5 June 2011 (UTC)[reply]
How about some references, here on the Reference Desk? A quick test is the sternum rub, but some unconscious patients do not respond to it. More formally, here is a handy PDF file discussing how to elicit the gag and cough reflexes, corneal reflexes, and so forth. Comet Tuttle (talk) 03:49, 5 June 2011 (UTC)[reply]

Calibrating length markers for micrographs?

In scientific articles, micrographs often include a line which is equal to a stated distance (e.g. one micrometre). How do the authors determine the correct length for this line? How is it calibrated? --129.215.4.89 (talk) 16:10, 4 June 2011 (UTC)[reply]

The details depend on the type of microscope and imaging system used, but it boils down to geometry. The magnification factor associated with each set of optical elements in a microscope will be known (specified and calibrated by the manufacturer), which means that one can figure out the size of the image formed by those optics compared to the size of the object that you're looking at. (If the field of view with a low-ish power objective is one millimeter across, and the camera sensor is 1000 pixels across, then you know that each pixel in the resulting image is 1 micrometer – 1 micron – wide.) In practice many modern microscopy systems hide all the math behind the scenes, and automagically spit out a calculated size for each pixel in the image. Some software allows the user to draw scale bars on directly.
If you want to check your math, you can also calibrate the scale using a test object with features of a known size. For high-precision work, one can purchase a stage micrometer; essentially a test slide marked with a very tiny ruler: [20]. If you're in a biology lab, someone probably has a hemocytometer, which incorporates a grid of regularly spaced lines. TenOfAllTrades(talk) 19:28, 4 June 2011 (UTC)[reply]
There are also grids available, but I suppose that's too simple, and you were asking about instances in which no grids were used. DRosenbach (Talk | Contribs) 20:31, 5 June 2011 (UTC)[reply]
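A sketch of the arithmetic TenOfAllTrades describes above: pixel size from the camera geometry, plus a cross-check against a stage micrometer. The function names and numbers are illustrative, not from any particular microscope or software.

```python
def pixel_size_um(field_of_view_um, sensor_pixels):
    """Width of one image pixel in micrometres."""
    return field_of_view_um / sensor_pixels

def scale_bar_pixels(bar_length_um, pixel_um):
    """How many pixels long to draw a scale bar of a given physical length."""
    return bar_length_um / pixel_um

# 1 mm field of view across a 1000-pixel sensor -> 1 um per pixel,
# matching the example in the reply above.
px = pixel_size_um(1000.0, 1000)      # 1.0 um/pixel
bar = scale_bar_pixels(10.0, px)      # a 10 um bar spans 10 pixels

# Calibration check: a 100 um stage-micrometer interval that measures
# 100 pixels in the image confirms the same scale.
measured = pixel_size_um(100.0, 100)  # 1.0 um/pixel
```

The same division is what the microscope software does "automagically" when it stamps a scale bar on the image.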

EHEC again, receptor

Shiga-like toxin says the receptor the toxin binds to is called Gb3. (I can't find an article on it.) What is that receptor good for if no toxin is around? I guess it does something useful and is not only loitering around waiting for The Wrath Of The Cucumbers to come along. 77.3.146.181 (talk) 20:29, 4 June 2011 (UTC)[reply]

Googling around, Gb3 appears to be globotriaosylceramide, a member of the glycosphingolipids. DMacks (talk) 20:39, 4 June 2011 (UTC)[reply]
Hmmm, it seems to interfere with HIV infection.[21] The α-galactosyltransferase that synthesizes it is one of the markers for HIV resistance. Wnt (talk) 23:48, 4 June 2011 (UTC)[reply]

Sleep in older people

I've read that older people need as much sleep as younger adults, but they just wake up earlier so they end up sleeping less. Is it true that the amount of sleep needed by humans is not diminished even though they generally sleep less as they age? Is it known, biologically, why older people tend to sleep less? --173.49.79.135 (talk) 20:54, 4 June 2011 (UTC)[reply]

Here is an article in PubMed that speaks directly to the issue. The abstract says that the need for sleep is the same, regardless of age, but that the ability to sleep is damaged. Bielle (talk) 21:04, 4 June 2011 (UTC)[reply]
Well, as for my own original research on getting old, I can't say that I need less sleep, nor do I actually sleep less. Maybe other people have accumulated too much of a bad conscience that won't let them have a sound sleep. 77.3.146.181 (talk) 21:06, 4 June 2011 (UTC)[reply]
If you look at the article, you will find no mention of "conscience", but only of physical disorders that lead to less than the ideal amount of sleep. Bielle (talk) 21:13, 4 June 2011 (UTC)[reply]
Now I see clearly why we should mark comments with an "(ec)". I didn't have a look at the article or your comment before I wrote mine. 77.3.146.181 (talk) 21:28, 4 June 2011 (UTC)[reply]
It is very clear that older people don't sleep as long, on average, as younger people, but whether they need less sleep is much less clear. There are research studies pointing in both directions. My take on the overall data is that in the elderly, a sleep duration of around 7 hours leads to the best performance on fatigue-sensitive tasks, and solid sleep is better than broken sleep. In biological terms, some of the causes of altered sleep have been identified (altered circadian rhythms, for example), but I don't think anybody has identified a functional reason why the elderly would sleep less. Looie496 (talk) 21:28, 4 June 2011 (UTC)[reply]
It is very clear? The old people I know seem to sleep about the same amount as other adults... --Tango (talk) 23:18, 4 June 2011 (UTC)[reply]
I have read some time ago that young adults of up to 25 years of age need far more sleep than they usually get. They need 9 to 10 hours of sleep, while they typically get 7 hours sleep. Also, I've read that before the 20th century, people did sleep a lot longer than we do today. Count Iblis (talk) 23:38, 4 June 2011 (UTC)[reply]
As an "older" person (in my 60s) I simply must add to this discussion that a reason I sleep for shorter periods is the basic need to urinate more frequently. I'm pretty confident that this is not a rare condition. (NOTE: I am NOT seeking medical advice) OR from other "older" folks would be welcomed. HiLo48 (talk) 23:42, 4 June 2011 (UTC)[reply]
I'm not that old, but I also have to urinate quite frequently. But I still manage to sleep 8.5 hours per day, I just have to go to bed about 9.5 hours before I have to wake up. So, I think this is the fundamental problem: Most people go to bed way too late so that any disruption of sleep is going to lead to less sleep than they ideally need. I think another factor is that people don't get enough exercise. If you don't do a few hours per week hard exercise like fast running, your sleep cycles may not kick in with full force. Count Iblis (talk) 00:06, 5 June 2011 (UTC)[reply]

3 questions about Endocrinology

1. Why do we distinguish between genitals and gonads? 2. What comes first in the fetus: the genitals or the gonads? 3. What word shall we use to describe both genitals and gonads as "1 reproductive package", whether it be male, female, or intersex?

sorry for the ignorance.

1000 thanks. 109.67.42.106 (talk) 21:01, 4 June 2011 (UTC)[reply]

According to our article Gonads: The gonad is the organ that makes gametes. The gonads in males are the testes and the gonads in females are the ovaries. Gonads in females are internal and gonads in men are external. They are a part of the genitals; the word "genitals" or "genitalia" covers all the parts. Bielle (talk) 21:10, 4 June 2011 (UTC)[reply]
I don't think very many people use the word genitals in such a way as to include the ovaries. The testicles, yes. I think the usage notes at [[22]] (WARNING: if you're at work and you think your IT department might snoop around in browser caches, don't click) are a bit dubious, frankly. --Trovatore (talk) 21:18, 4 June 2011 (UTC)[reply]

Isn't there a Greek/Latin word for BOTH organs? (both vagina/ovaries and penis/testicles)

Thanks.—Preceding unsigned comment added by 109.67.42.106 (talk) 03:00, 5 June 2011

Sex organs or genitalia most certainly include gonads, birth canal parts, and penis. Sex organs are, naturally, any organs that an individual does or does not have depending on the individual's sex (gender). --PeeKoo (talk) 11:20, 5 June 2011 (UTC)[reply]
Note that carpel and stamen are also sex organs. --PeeKoo (talk) 11:23, 5 June 2011 (UTC)[reply]

June 5

Foods scientifically proven to include somatotropin?

Can somebody list some?

Thanks. — Preceding unsigned comment added by 109.67.42.106 (talk) 02:57, 5 June 2011 (UTC)[reply]

This is addressed in Growth hormone, which says that most of the supplements actually advertise that somehow they "release" HGH. The only food I can think of that is sure to contain somatotropin is pituitary... but note that even primate GH is inactive in humans. So if you want GH that is active in humans you have to cannibalize human pituitaries Dawn of the Dead style. No, wait, that doesn't work, because the stuff has to be injected to avoid being broken down by the digestive system. In general - human proteins do many fascinating things, but they are not readily accessible as herbal medicine. Wnt (talk) 06:21, 5 June 2011 (UTC)[reply]

Please help me identify unknown bacteria

I initially thought I had Serratia fonticola, but my instructor basically hinted that this is wrong since we didn't see this bacterium during lab. I'm thinking I should have a very common bacterium (perhaps an enterobacter). Also, because our reagents are getting old, my indole test may be negative or positive; I don't know.

Here are my results: Gram-negative rod. Methyl red: POS. Voges-Proskauer: NEG. Hydrogen sulfide production: NEG. Motility: POS. Citrate: POS. O/F glucose: positive (turned yellow). O/F glucose + 1/2" oil on surface: positive (turned yellow). Facultative respiration. Oxidase: NEG. Catalase: POS. Nitrate reduction: POS. Nitrite to ammonia: POS. Urea hydrolysis: NEG. Casein protein hydrolysis: NEG. Starch hydrolysis (amylase production): NEG. Tryptophan degradation (using tryptone broth): NEG. Phenylalanine deamination: NEG.

All of the following tested positive for acid and gas production: glucose, glycerol, lactose, sucrose, mannitol, maltose. Thank you for the help161.165.196.84 (talk) 04:20, 5 June 2011 (UTC)[reply]

There's a policy here about doing homework problems - just out and out giving you the answer is frowned upon. More to the point, I don't know what to do with this information anyway. So to start with let's see if we can figure out the logic to this problem.

Now even at this point I should point out, there are already unwarranted assumptions. By chance I was just reading about Richard Lenski's famous long-term E. coli selection experiment, in which he found that in one flask, the bacteria suddenly started to use citrate as a food source. Before long they became quite competent at it, even "speciating" into a large Cit+ population and a small specialist population that could only use glucose. But according to the tests above, Cit+ means your bacterium is not E. coli! Since surprises like this also exist in nature, it's hard to be confident about such tests ... and yet, generations of microbiologists have somehow managed to do extraordinarily good work with them. It boggles the mind. Wnt (talk) 06:10, 5 June 2011 (UTC)[reply]

Thank you for the response. To clarify, I do understand Wikipedia's hw policy; I just need some direction. My lab manual says I should look at Citrobacter, but this guy produces hydrogen sulfide (so that's not it). I've gone through Bergey's manual a few times now, to no avail. I'm thinking I have an enterobacter, but I need help figuring out which one. I KNOW I don't have Bacillus subtilis, Providencia, Morganella, Proteus, Serratia, and probably not Klebsiella or Shigella. — Preceding unsigned comment added by 161.165.196.84 (talk) 06:23, 5 June 2011 (UTC)[reply]
I don't think it really boggles the mind that much. The fact that such things can and do exist in nature doesn't mean they are common. It's obvious that Cit+ E. coli are rare enough in nature (which was, after all, one of the reasons the discovery was significant) that the test works the vast majority of the time. The reason the trait is so rare would likely be that in most environments E. coli live in (particularly those likely to lead to E. coli ending up in food or water) the trait isn't beneficial enough compared to the cost. That's hardly surprising or difficult to understand, except perhaps to the founder of Conservapedia. Nil Einne (talk) 06:52, 5 June 2011 (UTC)[reply]

Half-life of Proton-AntiNeutron bound state

The class of exotic atoms consisting of Protons, Electrons, and anti-neutrons would seem potentially interesting, given they might provide a way to store anti-neutrons in a normal matter world. But only if the half-life is long enough. I presume that the anti-neutrons wouldn't get close enough to neutrons in surrounding matter to annihilate. Is that correct? How long would bound proton-antineutron nuclei last?

BTW, what are these sorts of atoms called? I can see no reference to them in Wikipedia. 88.104.247.65 (talk) 10:08, 5 June 2011 (UTC)[reply]

Do you have a reason to think such a combination would be bound? (I'm no QCD expert myself, so if it's blindingly obvious, no big.) --Trovatore (talk) 10:28, 5 June 2011 (UTC)[reply]
I don't know about any kind of bound state either, but I would expect that the proton's down quark and one of the anti-neutron's down anti-quarks, as well as one of the proton's up quarks and the anti-neutron's up anti-quark, annihilate quickly, leaving a meson made up of an up quark and a down anti-quark (e.g. a pion or a rho meson). Icek (talk) 13:37, 5 June 2011 (UTC)[reply]
That's correct. The proton and the anti-neutron would annihilate each other into a set of mesons almost instantaneously. Dauto (talk) 14:42, 5 June 2011 (UTC)[reply]
A neutral pion isn't dissimilar, in that it contains a quark and the corresponding anti-quark, and is stable enough to exist as a known particle. The mean lifetime is only about 10^-16 s, which you may consider "almost instantaneous", but it's still far from zero. I know very little about the subject, but it seems plausible to me that a proton-antineutron particle could have a comparable lifetime (which is, of course, far too short for the purpose proposed by the OP). --Tango (talk) 16:45, 5 June 2011 (UTC)[reply]
The neutral pion decay is governed by the electromagnetic interaction, while the proton–antineutron annihilation is governed by the strong interaction. That makes the latter much faster than the former. My back-of-envelope guesstimate gives about 10^-23 s, which is pretty much instantaneous by any standard. Dauto (talk) 17:55, 5 June 2011 (UTC)[reply]
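Dauto's order-of-magnitude guesstimate above can be reproduced with a one-line calculation: the natural timescale of a strong-interaction process is roughly ħ divided by the rest energy of the hadrons involved. A minimal sketch in Python (using the ~135 MeV neutral-pion rest energy as the representative hadronic scale is my assumption, not something stated in the thread):

```python
# Natural timescale of a strong-interaction process: t ~ hbar / E,
# with E the rest energy of the hadrons involved.
HBAR_MEV_S = 6.582e-22        # reduced Planck constant in MeV*s
pion_rest_energy_mev = 135.0  # neutral pion rest energy, ~135 MeV

strong_timescale_s = HBAR_MEV_S / pion_rest_energy_mev
print(f"{strong_timescale_s:.1e} s")  # ~4.9e-24 s, i.e. of order 10^-23 s
```

This lands within a factor of a few of the 10^-23 s figure, and some seven or eight orders of magnitude below the neutral pion's electromagnetic-decay lifetime of ~10^-16 s.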

EHEC, and now it's biogas

Newest reports now blame biogas. Experts think it is possible that bacteria could mix in the tanks and thus generate new strains. The biogas lobby instantly denies this and states that the substrate would be heated for at least one hour to 70 °C. Now I wonder if that heating takes place at the beginning of the process, at the end, or both times. I wonder further if that heating would not consume a lot of energy and void the benefits of biogas. Does anyone know some facts and details about that? 77.3.180.185 (talk) 11:16, 5 June 2011 (UTC)[reply]

I take it you are talking about the 2011 E. coli O104:H4 outbreak? I cannot find any reports connecting this with biogas - could you provide a link? 81.98.38.48 (talk) 11:44, 5 June 2011 (UTC)[reply]
So, it's probably only available in German. AFP via Google and the press release from the biogas lobby 77.3.180.185 (talk) 13:10, 5 June 2011 (UTC)[reply]
The story you provide only appears to quote one person, going by a machine translation. It's also not clear how much expertise they have in the matter (they work for a medical laboratory, although in what role is unclear; even what the medical laboratory does is unclear). There's something on the Agricultural and Veterinary Academy which I'm not clear about, but I think it's either saying it's unlikely or that it's something to look into (which they probably say of all possible leads). Nil Einne (talk) 15:46, 5 June 2011 (UTC)[reply]
The medical laboratory is doing those kinds of test as they are proposing. So if they are right, they would have dug up a really big business for themselves and other laboratories like theirs. Also, they, as well as those from the veterinary academy are not saying the actual EHEC strain does provably come from a biogas facility, only that it was possible that it did. Anyway, my question is about the heating of the substrate for biogas generation and its costs in terms of energy. 77.3.180.185 (talk) 16:19, 5 June 2011 (UTC)[reply]
Would seem to be a very big COI then (it's questionable whether they even have to be right, or just spread enough FUD to get business). And this still doesn't show whether or not they are likely to be experts on the evolution and spread of new E. coli strains who have any idea what they are talking about (your first statement was 'Experts think it is possible'); I wouldn't normally expect those who are primarily involved in the testing side of things to be. (As I said about the Agricultural and Veterinary Academy, I can't really understand that part, but it sounds to me like they are mostly just saying 'it may be possible, we will look into it', which from a political POV is better than saying 'the idea is nonsense and the people suggesting it have no idea what they are talking about', even if that's what they really think.) Also, many things are possible; it doesn't mean it's likely. Nil Einne (talk) 03:16, 6 June 2011 (UTC)[reply]
Ha, so Germany may have more deaths due to an industrial accident involving green technology than with nuclear technology. And I think they are also against irradiation of food. :) . Count Iblis (talk) 14:52, 5 June 2011 (UTC)[reply]
Whatever you think of German "ecological" politics, believe me, it's worse. 77.3.180.185 (talk) 16:25, 5 June 2011 (UTC) [reply]
That hypothesis is barely a week old. It is more likely a fault with the water-recycling treatment plant used in bean sprouting. [23]--Aspro (talk) 16:53, 5 June 2011 (UTC)[reply]
Since no one else has answered. About the second question: this is not something I have much experience with, but while the heating process obviously won't help the equation in favour of biogas, I don't see any reason to think it's going to make it completely untenable. Any energy extraction and production system has lots of costs that it would be nice to do away with to improve the energy-efficiency equation, but they are ultimately part of the cost. This would include biogas, and I doubt the heating cycle is the biggest cost. Since from the sound of it this isn't a new requirement, if biogas is worth it then this would presumably be with the heating-cycle requirements included. Nil Einne (talk) 03:25, 6 June 2011 (UTC)[reply]
About the first question, a simple search for 'biogas heating 70' finds plenty of discussions. [24] for example suggests pre- and after-heating is done in some circumstances (and also looks at the effectiveness of the sanitisation). [25] shows the heating cycle and suggests it is primarily part of the production/fermentation cycle rather than simply intentional heating for sanitisation purposes. [26] appears to also have two 70 °C preheating cycles, although there's little discussion of the biogas production so it's not clear if 70 °C is reached there. [27] appears to use a post-heating pasteurisation but doesn't mention much of the initial process. [28] uses a CHP system to achieve 70-90 °C, apparently to increase production efficiency. [29] and [30] appear to only use pre-heating. From these, I think it's clear it depends a lot on the precise process. Generally pre-heating is the norm, which makes sense from a biological/sanitisation POV. (Remember other outputs would often be fertiliser or compost.) Post-heating may also be done. And in some cases the heating may be part of the fermentation process. Nil Einne (talk) 04:08, 6 June 2011 (UTC)[reply]
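As a rough sanity check of the OP's worry that the 70 °C sanitisation step might "void the benefits of biogas", here is a back-of-envelope calculation; every figure in it (substrate treated as water, a 60 K temperature rise, and an assumed yield of ~50 m³ of biogas at ~21 MJ/m³ per tonne of substrate) is an illustrative assumption, not a number from the thread or the linked sources:

```python
# Energy to heat one tonne of watery substrate from ~10 degC to 70 degC,
# compared with an assumed biogas yield from that same tonne.
SPECIFIC_HEAT_WATER = 4.18e3   # J/(kg*K); substrate treated as water
delta_t_k = 60.0               # K, ~10 degC -> 70 degC
substrate_kg = 1000.0          # one tonne

heating_energy_j = substrate_kg * SPECIFIC_HEAT_WATER * delta_t_k  # ~251 MJ

biogas_yield_m3 = 50.0         # assumed m^3 of biogas per tonne
energy_per_m3_j = 21e6         # assumed ~21 MJ per m^3
biogas_energy_j = biogas_yield_m3 * energy_per_m3_j                # ~1050 MJ

print(heating_energy_j / biogas_energy_j)  # ~0.24
```

On these assumed numbers the sanitisation heat is roughly a quarter of the gross energy yield: a real cost, but not one that obviously voids the benefit, particularly since plants can cover much of it with waste heat from their own CHP gas engines.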

What are the names of the alkanes that have 1000 and 10000 carbon atoms?

--Inspector (talk) 14:01, 5 June 2011 (UTC)[reply]

1000 is kiliane, but the table of length prefixes for generic long-chain molecules in the IUPAC Blue Book does not have 10000. DMacks (talk) 15:45, 5 June 2011 (UTC)[reply]
As DMacks mentions, there doesn't appear to be an official term for a 10,000-carbon-atom alkane, but note that the factors of ten are based around the SI prefixes. There isn't currently an SI prefix for 10000, but at one time there was myria-, so were there ever a need to name a 10,000-carbon-atom alkane, something like "myriane" might be proposed (possibly with "diriane", "tririane", ... for 20000, 30000, etc.). Note that you'll only need those terms if it's a linear alkane of 10,000 carbon atoms. A branched hydrocarbon would be something like "260,375,1503,2673,2654,4536,4675,5430,7650,7895-decahectylnonaliane". As a final note, at that size you're really not in the standard hydrocarbon regime, but rather in the land of polymers. Most chemists would likely view a saturated 10,000-carbon-atom molecule (linear or branched) not as an alkane, but as a polyethylene molecule. - 174.31.219.218 (talk) 17:50, 5 June 2011 (UTC)[reply]
It's also kinda moot; the ability to do reasonable organic (non-biological) synthesis on molecules that large is so limited that most organic chemists don't deal with discrete pure substances composed of identical 10,000-carbon-atom chains. As noted by 174.31, very long carbon chains generally fall in the realm of polymer chemistry, and polymers are composed of a range of molecular sizes: one sample of polyethylene may have an average chain length of, say, 10,000 monomer units, +/- 1000 units, with a certain level of branching, while a different sample may feature a chain length of 20,000 +/- 500 units with less branching, or something like that. You just don't deal with bulk substances composed of pure 10,000-carbon straight-chain hydrocarbons. The only other molecules that get that large are biomolecules like nucleic acids and proteins and stuff like that, and that sort of stuff is dealt with by biochemistry, and biochemical molecules do not follow the IUPAC naming standard for obvious reasons. What would be the IUPAC-standard name for hemoglobin? Does it matter? --Jayron32 18:57, 5 June 2011 (UTC)[reply]

Hair colour

I was wondering why natural human hair colour has such a narrow spectrum - no greens or blues, no proper red (whatever we may call it). And then I wondered if any of our close relative primates have hair/fur outside of these boundaries? Cheers --Dweller (talk) 18:58, 5 June 2011 (UTC)[reply]

There are only three pigments available for human hair color. The relative amounts of each determine your hair color. There is brown eumelanin, black eumelanin and pheomelanin, which is reddish in color. Whether you have blond, brown, brunette, mousy brown, black, strawberry blond, gray, red, auburn, carrot orange, etc. is determined by how these three pigments are combined in your hair. There is a discussion of this at Human_hair_color#Genetics_and_biochemistry_of_hair_color and in links from there. --Jayron32 19:19, 5 June 2011 (UTC)[reply]
Sorry I wasn't clear in my question... I wasn't asking what the mechanism was for producing those colours, I was asking why we evolved with such a narrow spread... --Dweller (talk) 19:30, 5 June 2011 (UTC)[reply]
All three pigments are different types of melanin. They are very similar and the genes for them vary by only very small changes. Evolving completely new pigments would require much bigger genetic changes, which makes it much less likely to happen. --Tango (talk) 19:36, 5 June 2011 (UTC)[reply]
The technicolor specimens of hominidae got picked off by even the legally blind hawks. DRosenbach (Talk | Contribs) 19:49, 5 June 2011 (UTC) [reply]

East and west reversed when looking at the Moon?

The article Near side of the Moon says that although while standing on the Moon, east and west are where you'd expect them to be, when looking at the Moon on the sky, they are reversed. I couldn't understand why, and looking at Talk:Near side of the Moon#Orientation of the Moon, I see I'm not the first one to wonder at this. I thought it could be because of a mirror effect - after all, when standing on the Moon, the lunar surface is below your head, but when looking at it on the sky, it's above your head - but then I looked at the "Blue Marble" photograph on the article Earth, which is a genuine photograph of Earth from space, and it showed east on the right and west on the left, just as you'd expect them to be. Could anyone explain why they are supposed to be the other way around on the Moon? JIP | Talk 19:49, 5 June 2011 (UTC)[reply]

The article is not very well written, but maybe what it's getting at is that if you think of the Earth and Moon at a time when the Greenwich meridian is pointing directly at the Moon, then the Western edge of the Earth is "opposite" the Eastern edge of the Moon, and vice versa. It's a similar idea to the (supposed) left-right reversal of the image in a mirror. AndrewWTaylor (talk) 20:07, 5 June 2011 (UTC)[reply]
(ec)The statement makes no sense. The moon's orientation in the sky depends on where you are standing on the earth, since the earth is round. Dauto (talk) 20:08, 5 June 2011 (UTC)[reply]
I think the question is about the idea of east and west on the Moon itself, which is not obviously a well-defined notion as it stands; some convention must be chosen. You could say that the Moon's "north pole" is the pole that's in the same direction from the Moon's center that the Earth's north pole is from its center. Or, you could say that the Moon's north pole is the pole where the Moon, turning under your feet, turns the same direction that the Earth turns under your feet at the Earth's north pole (that is, counterclockwise, if I've done that right in my head). I think the two definitions would give the opposite answer. --Trovatore (talk) 20:12, 5 June 2011 (UTC) No, I take it back, I think they give the same answer. Still, it's a point that needs further clarification. --Trovatore (talk) 20:14, 5 June 2011 (UTC)[reply]
I'm assuming the North Pole on the Moon is the pole that points the same way from the Moon's center as the North Pole on Earth points from the Earth's center. I have no experience on being on the Moon, so the only way I can visualise this is imagining looking at another Earth on the sky instead of the Moon. However, I cannot help but imagine that if I am standing on the Earth, facing north, then east is on my right and west is on my left, both on the Earth under my feet and on the Moon up in the sky. Would the Moon look different if I was facing south? "North is up, south is down" is, to the best of my knowledge, an arbitrary decision and not some irrevocable basic law of physics. Surely it's possible to view the Earth from space as upside down from the "Blue Marble" image? JIP | Talk 20:32, 5 June 2011 (UTC)[reply]

When you look at a mirror, why is left and right exchanged, but not up and down? (I know the answer, but I hope the question will lead you to think into the right direction.) 77.3.180.185 (talk) 20:19, 5 June 2011 (UTC)[reply]

Please don't forget to end-small next time to prevent everything afterwards from being registered as small. DRosenbach (Talk | Contribs) 20:29, 5 June 2011 (UTC)[reply]
When I look at myself in the mirror, what I am seeing is light that was travelling in the same direction as I am facing, but bounced off the mirror and reversed direction. So it's only natural that left and right are reversed. But when I look at the Moon, what I am seeing is light coming directly from the Moon, just as if I were looking at some other object above my head. There is no mirror up in the sky where the Moon would be reflected. JIP | Talk 21:20, 5 June 2011 (UTC)[reply]

The North pole of the moon is defined as that pole pointing in the same direction as the Earth's North Pole. If you are in the Northern hemisphere and the word Moon were written across the face of the moon at its equator as you looked at it, and there were a man walking from the emm toward the enn, he would be walking to the east on the moon but appear to be moving toward the west horizon of the Earth. Simple. μηδείς (talk) 20:52, 5 June 2011 (UTC)[reply]

I understand what you mean, but I still cannot visualise it when imagining another Earth orbiting the Earth instead of the Moon. (The Earth is the only planet where I know intrinsically where "east" and "west" are supposed to be when on the planet's surface, because I have not been on the surface of any other planet.) JIP | Talk 21:04, 5 June 2011 (UTC)[reply]
A standard map of the Earth is oriented in the way you would see it from the outside: East is right, west is left (always taking north to be up). With the night sky things are reversed because we're looking at the celestial sphere from the inside. That's why maps of the night sky are drawn such that east is left when north is up; that's also the standard orientation of astronomical images. Now, with the moon you have two options: Either you consider the moon as hanging in the night sky and you define east and west to be the same directions you assign to the sky, i.e. east is left - I think this is referred to as the "astronomical orientation". Alternatively, you consider the moon to be a physical body, like Earth, and you use the same orientation you would for a globe: east is right - that's called the "astronautical" orientation. The latter is the one that is officially in use today (defined by the IAU in 1961 [31]). Observers of the moon actually prefer the terms "preceding" and "following", referring to the daily motion of the moon (the way the moon would move through the field of view of a fixed telescope). --Wrongfilter (talk) 21:40, 5 June 2011 (UTC)[reply]

Force = mass x ____

Perhaps I'm just understanding this incorrectly, but suppose I have a post submerged into the ground and I run into it with a horizontal beam fitted to a car that is traveling at a constant speed -- won't the speed of the car have an effect on the force applied to the submerged post. My confusion lies in the idea that force is equivalent to mass x acceleration, and in my example, the car is traveling at a constant speed (and is experiencing no acceleration) -- so why isn't it mass x velocity or something like that? DRosenbach (Talk | Contribs) 20:27, 5 June 2011 (UTC)[reply]

The car is slowed (changes velocity) by the impact of the post. That change in velocity times the mass involved gives the force. μηδείς (talk) 20:55, 5 June 2011 (UTC)[reply]
But is that the absolute value? How can it be that a car accelerating at that (positive) absolute value provides the same force as one decelerating at the (negative) absolute value? And that's sort of using the impact (i.e. the "transfer of force," so to speak) to retroactively provide the acceleration. DRosenbach (Talk | Contribs) 21:07, 5 June 2011 (UTC)[reply]
The force provides the acceleration, rather than the acceleration providing the force. So here, the post decelerates the beam, whilst the beam accelerates the post. - Jarry1250 [Weasel? Discuss.] 21:18, 5 June 2011 (UTC)[reply]
Mass x velocity is something called momentum. This is not force. Is that, perhaps, your confusion? The force you hit the post with depends on how quickly you change speed. So, if your initial speed was 50 miles per hour, and you drop to 0 miles per hour in 1 second, you would impart the exact same force on the pole as if you were traveling 100 miles per hour and stopped in 2 seconds. In reality, your stopping time would not double as your speed doubles; indeed, it will probably only go up a small amount, so let's say that at 50 miles per hour your stopping time is 1 second, but at 100 miles per hour your stopping time is 1.1 seconds. You clearly impart more force in the second example because your acceleration (deceleration, whatever; same math with a negative sign) is significantly higher: 100/1.1 is a bigger number than 50/1. --Jayron32 00:28, 6 June 2011 (UTC)[reply]
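Jayron32's two scenarios can be turned into actual force figures; a minimal sketch in Python (the 1000 kg car mass is an assumed number for illustration; the speeds and stopping times are the ones in the post above):

```python
# Average impact force = mass * (change in velocity / stopping time).
MPH_TO_MS = 0.44704   # miles per hour -> metres per second
mass_kg = 1000.0      # assumed car mass

def impact_force(speed_mph, stop_time_s):
    """Average force (in newtons) needed to bring the car to rest."""
    delta_v_ms = speed_mph * MPH_TO_MS
    deceleration = delta_v_ms / stop_time_s  # magnitude only
    return mass_kg * deceleration

print(impact_force(50, 1.0))   # ~2.2e4 N
print(impact_force(100, 1.1))  # ~4.1e4 N, about 1.8x the 50 mph case
```

Note also that doubling the speed while doubling the stopping time (100 mph in 2 s) gives exactly the same average force as 50 mph in 1 s, which is the point of the first comparison.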

WKD colour loss

Hey all. I kept some leftover bottles of WKD Original Vodka for a couple of months. They were originally the typical blue colour [32], but when I got round to drinking them yesterday, they were colourless. Tasted exactly the same, but colourless. What chemical could be responsible for this? What's the story? Regards, - Jarry1250 [Weasel? Discuss.] 21:24, 5 June 2011 (UTC)[reply]

They weren't by any chance exposed to direct sunlight ? This can cause many colors to fade. Another possibility is that the blue coloring came out of suspension or solution. In that case, I'd expect to see some blue residue, perhaps at the bottom of the bottle. StuRat (talk) 01:03, 6 June 2011 (UTC)[reply]

Sound-guided missiles

It seems most new military aircraft are "stealth" and harder to target by heat-seeking or radar guided missiles. Has anybody ever developed a missile that flies towards the source of a loud sound? Would this be possible and how could it work?--92.251.222.88 (talk) 21:41, 5 June 2011 (UTC)[reply]

This sounds like a good idea, but it has some problems. For one thing, most anti-aircraft missiles fly faster than the speed of sound. Even when they don't, the sound of the missile flying at hundreds of miles an hour would overwhelm the microphone. New stealth aircraft are also designed to minimize sound as well as radar. And an aircraft could fly at supersonic speeds, leaving the missile useless. I think the next anti-stealth weapon will be low-frequency radar. --T H F S W (T · C · E) 22:29, 5 June 2011 (UTC)[reply]
Stealth aircraft are only really stealthy when flying over countries that use pre-1970s air defense systems. Modern air defense systems, like the MIM-104 Patriot or the S-400, don't have problems shooting down the most advanced stealth aircraft from more than 100 km distance. Count Iblis (talk) 01:23, 6 June 2011 (UTC)[reply]

Safety of BPA-free plastic water bottles

The plastic water bottles sold these days all seem to claim to be BPA-free. However, they all seem to have some "plasticky" smell, suggesting chemical emissions. What material(s) are BPA-free plastic water bottles usually made of? Do we actually know that these materials are safer than the polycarbonate they replace, or are we just trading one known hazard for some other unknown ones? --173.49.79.135 (talk) 23:59, 5 June 2011 (UTC)[reply]

It's largely a ruse that water bottles need to be free of bisphenol-A because it's so dangerous, given that many other things still contain it, such as most of the esthetic white dental fillings (composite, glass ionomer, etc.). Are we looking at a threshold? How much is "too much"? Apparently, it doesn't make too much of a difference. DRosenbach (Talk | Contribs) 03:39, 6 June 2011 (UTC)[reply]
I'd go with your instincts here. If you can smell it and/or taste it, then a significant quantity of chemicals must be given off by the bottle. Whether those chemicals pose a health risk is probably unknown, but it seems reasonable to avoid them, wherever possible, say by using glass bottles. StuRat (talk) 04:02, 6 June 2011 (UTC)[reply]

June 6

To charge a phone on a bicycle

Hi, is there any gadget I could get anywhere online (or preferably off) that will charge my phone while I pedal my bike? I remember bike odometers that hooked a little wheel to one of the tires (therefore, called "flywheels?") in order to spin the numbers. Could that same small wheel utilize the spinning of the tires to recharge my phone?

If so, where is a device that'll do exactly that? I would hope to find one before a long bike-ride. Thanks. --70.179.165.67 (talk) 04:29, 6 June 2011 (UTC)[reply]