
Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 71.119.131.184 (talk) at 00:57, 12 February 2016 (→‎Why do doctors give saline to the patient instead of water?). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the science section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


February 7

Man with no brain

I've seen a photo of a man with no brain, yet he functions normally. Does this give evidence for the existence of the soul, or that consciousness is beyond the brain? — Preceding unsigned comment added by Money is tight (talkcontribs) 00:07, 7 February 2016 (UTC)[reply]

Can you link us to this photo? The complete absence of a brain, or anencephaly, is lethal at or just after birth. It is true that people can have remarkably large brain defects and function reasonably well in daily life, but this is not 'no brain'. As for your other questions, the soul does not exist, and there is no evidence for consciousness 'beyond the brain'. Fgf10 (talk) 00:13, 7 February 2016 (UTC)[reply]
Whether there are souls is a theological question outside the scope of this Reference Desk. Wikipedia does discuss the beliefs of various religions and other belief systems about the soul. Robert McClenon (talk) 00:22, 7 February 2016 (UTC)[reply]
No, it's a scientific question on the Science refdesk. There is no evidence for souls in science. Whatever various works of fiction claim is of no consequence. Fgf10 (talk) 01:08, 7 February 2016 (UTC)[reply]
The existence of souls, or not, is a matter of opinion, not of science. ←Baseball Bugs What's up, Doc? carrots→ 08:25, 7 February 2016 (UTC)[reply]
At the science desk? Don't be ridiculous. The soul, as described in various works of fiction, is incompatible with the laws of physics as we understand them, so unless we've got it very wrong, according to science it doesn't exist. End of. Fgf10 (talk) 12:49, 7 February 2016 (UTC)[reply]
The so-called "laws of physics" are a human interpretation. "...as we understand them..." is the key point. They are their own kind of religion. ←Baseball Bugs What's up, Doc? carrots→ 12:57, 7 February 2016 (UTC)[reply]
I've gone far afield with this recently, so I won't again, but I should note that science has not disproved the existence of the soul, nor has it provided a satisfactory explanation of qualia. That said, I have not heard of anyone able to use skeletal muscles in a controlled manner who does not have some apparent central nervous system to control them. This is biology, so there is no law of nature that would prevent the autonomic nervous system, enteric nervous system etc. from growing efferents and somehow learning to control muscles without a brain present; or even prevent cells of the skin, muscles etc. from expressing proteins that let them spread action potentials and think; but there's no evidence they have the capability, and by this point such things would seem extraordinarily, extraordinarily unlikely, as would most other brain-free processes of control. Wnt (talk) 00:29, 7 February 2016 (UTC)[reply]
Science also hasn't disproved the existence of the Flying Spaghetti Monster. So what? Fgf10 (talk) 01:08, 7 February 2016 (UTC)[reply]
Then it could exist. ←Baseball Bugs What's up, Doc? carrots→ 12:57, 7 February 2016 (UTC)[reply]

There are various photos of men who are missing most of their brains and still function normally. This is sometimes the result of extreme hydrocephaly or physical trauma to the brain. But rest assured, these people still have some brain left. If you find anyone claiming a human can function with literally no brain, you are being lied to, without a doubt. Someguy1221 (talk) 02:49, 7 February 2016 (UTC)[reply]

Just realized this is an election year. You'll likely be seeing lots of men with no brains walking around and even talking. Someguy1221 (talk) 10:39, 7 February 2016 (UTC)[reply]
Well, let's go with malfunctioning brains - for the sake of science! SteveBaker (talk) 15:22, 7 February 2016 (UTC)[reply]
Or perhaps they just never figured out how to use them... Double sharp (talk) 04:37, 8 February 2016 (UTC)[reply]
So you're advocating that they figure out how to use their brains using...um...their brains? What could possibly go wrong?! :-) SteveBaker (talk) 17:57, 8 February 2016 (UTC) [reply]

https://www.google.com.au/?gfe_rd=cr&ei=kK-2VufIBcbN8geWxYHYBg#q=half+head+man — Preceding unsigned comment added by Money is tight (talkcontribs) 02:45, 7 February 2016 (UTC)[reply]

Some of those photos are real - but a lot of them are very badly photoshopped images. We do know that people can and do survive with very little brain tissue remaining...there have been cases reported where a person survived with just a 1" thick layer of brain surrounding a fluid-filled void. But with no brain at all - that's utterly impossible. The brain handles a bunch of functions - such as the control of breathing - that you simply can't do without.
As for the soul - no, science has not disproved the concept - but it also hasn't disproved the concept of green aardvarks playing pianos on the far side of the moon...that doesn't mean that we have to assume that they exist. The default hypothesis in this case is that souls don't exist (and neither do those aardvarks) and since you're asking this question on the science reference desk - the scientific answer is that since we have no evidence for the existence of a soul, it is meaningless to ask whether a man without a brain (who couldn't be alive anyway) would or would not have one. SteveBaker (talk) 15:35, 7 February 2016 (UTC)[reply]
Are you telling us that The Clangers did not exist? I am gutted!;-) DrChrissy (talk) 15:47, 7 February 2016 (UTC) [reply]
The clangers were (a) not green and (b) evidently played slide whistles rather than pianos...but aside from that, of course they existed! SteveBaker (talk) 05:28, 8 February 2016 (UTC)[reply]
Like in arguments about Wikipedia consensus, arguing about default hypotheses involves a lot of gaming about what is "default". "Flying spaghetti monster" is usually applied as an argument for the non-existence of God, but it's one thing to assume the burden of proof is against a very specific made-up religion, and something else (say) to conclude confidently that the universe was not designed, has no plan or purpose, that the answer to why people really feel things and really see beauty in it is that actually they don't, and that everything about the universe, including the laws of mathematics, is purely random. ("But where did random come from? Isn't that just begging the question?") Perhaps the better approach here is to ask -- what, specifically, scientifically, do you mean when you say the soul doesn't exist? Because maybe that's not part of the definition... Wnt (talk) 18:14, 7 February 2016 (UTC)[reply]
The default is to presume that what we can measure is "real" and what we can't measure has to be demonstrated indirectly. No such demonstration of a "soul" has ever been made - and, to the contrary, when we stick someone's head into a brain scanner, we see it light up in an appropriate and consistent manner when the person thinks about different things and in different ways. There is sufficient complexity in the brain for "emergent behavior" to appear - so there is no reason to assume that there is "something else". That's not to say that there isn't a "soul" - but merely that this shouldn't be the default hypothesis.
There is no evidence that whatever religion you're considering was not "made up" too - in fact, because there are so many religions in the world - many of which are sharply contradictory - the evidence is that even if one of them turns out to be correct, at least 99% of religions must be nonsense. Weighing the odds of 99% of religions being incorrect rather than 100% of them provides additional reason to eliminate them from the default hypothesis.
As for "beauty" - you make the absolutely classic (and exceedingly naive) mistake of presuming that atheists see no beauty in the universe - and nothing could be further from the truth. The beauty is in all of the amazing mechanisms that emerge from the simplest of representations. That the key laws of physics can be written on the front of a T-shirt (I have one) - and that this is enough to understand very nearly all of it - that, to most scientific thinkers, is beauty. That the leaves of a tree are the result of random evolutionary processes that result in near perfect optimisation for capturing sunlight - is incredible. That flowers have beautiful markings on their petals that humans can't see because they are in the UV spectrum - and that the plant evolved to put them there to help bees figure out how to orient themselves as they land to pollinate. Please - don't tell me that you need religion to see beauty - that's complete and utter bullshit. If all I had to believe is that a magician waved his magic wand and it all popped into existence - the world would seem an arbitrary, ridiculous, foolish place - and much of the beauty would evaporate.
The laws of mathematics are not "random" - they may all be deduced from the most simple axioms imaginable - you're entirely mistaken if you believe that.
The randomness of the universe comes about from quantum randomness and the randomness that comes about in some systems that are susceptible to sensitive-dependence-on-initial-conditions...Chaos theory. So we're very well aware of what those sources are.
What is meant by "the soul does not exist" is not a question I really need to answer. I have not been provided with a definition for this term - it's a vague piece of description that's conveniently never pinned down. Without a definition, it's nothing more than a word. So we have not discovered any evidence for a thing that's vaguely described in the first place.
The argument that "a lot of people believe in something, so it must be true" has been disproven more times than I can count. An enormous number of people believe that vaccination causes autism - does that make it true? Actually - no. It's been tested beyond reasonable need - and it's not true. Despite that, only 52% of Americans believe that vaccines don't cause autism. 68% believe in God (in some form or another). Does that make them right?
SteveBaker (talk) 05:28, 8 February 2016 (UTC)[reply]
I'll leave this article, about a neighbour of mine when I lived in Barnsley, here. --TammyMoet (talk) 16:00, 7 February 2016 (UTC)[reply]
Yes, I suspect the OP may be thinking about something like the case Tammy mentions. As for "the soul doesn't exist", I don't believe we can say that either. The definition of a soul is "the spiritual or immaterial part of a human being or animal, regarded as immortal"[1] and all we can say is that there is no scientific evidence for that to be the case. However, we still don't have a complete theory of everything and so it is just possible that there are things or mechanisms that exist that we don't yet know about. And just to make my position clear, personally, I am 99% sure there is no God as visualised by religious people and no afterlife. I'm quite happy with that as, if I'm right, when I die my consciousness will come to an end and I won't have to worry about it. However, like everything else in life, I always entertain the possibility that I may just be wrong (a very remote possibility in this case), and if I am, it will be interesting to find out what comes next. There is the (also very unlikely) possibility that the universe was created by some intelligent entity but, if it was, then I am sure they don't really care whether we worship and pray to them or ignore them completely. Oh, and if it does all turn out to be an experiment run by the white mice then I'm in deep shit - but that's another story. Richerman (talk) 12:43, 8 February 2016 (UTC)[reply]
The desire to believe that your life won't just suddenly just "end" is quite powerful. You don't have to resort to religion and the concept of a "soul" to get past that though. There is always the (MUCH more worrying) Quantum suicide and immortality hypothesis - I'm kinda hoping that's one hypothesis which turns out not to be true because it might just imply eternal (albeit religion-free) damnation! Even without the many-worlds hypothesis, you can get pretty much the same result if the universe turns out to be infinite and the weak anthropic principle is acceptable to you. Another one that I like is the concept of reincarnation - in which at the moment of your death, you are reborn as another human being - although you'd have absolutely no memory of your earlier life. Many people find that to be a much more comfortable situation than just "fade to black...nothingness" - although to all measurable tests, the outcome would be identical. So if you're OK with "no-memory-transfer" reincarnation, you have an unfalsifiable hypothesis that's every bit as good as any religious view. Then we have the Simulation hypothesis (another theory that I'm quite fond of) - and so maybe the universe will get a blue-screen and wind up being rebooted? SteveBaker (talk) 18:12, 8 February 2016 (UTC)[reply]
The simulation hypothesis is something that seems quite likely to me (relatively speaking, as likely as a big bang or a multiverse) possibly because I read about the Evil demon philosophical idea when I was young. The article on quantum suicide is an interesting read, I've wondered about that idea with regards to all possible outcomes resulting in infinite universes - it's a very scary idea so I'm glad "I'm" still in this universe right now! Mike Dhu (talk) 09:24, 9 February 2016 (UTC)[reply]
@SteveBaker and Money is tight: I've been trying not to give in to the temptation of infinite digression, but it's actually relevant to the original question to note that the Ancient Egyptian concept of the soul comprised multiple components, some of which we have no controversy about the existence of, and others of which are perhaps more palatable when considered separately. In particular, I should note the possibility that the ka, the non-unique component of the soul, is the same in all people and is what actually feels qualia (thus the moral basis of religion to be drawn from this is that evil done to others is suffered by oneself, even if there is no memory of that). However, the ba is more typically the portion focused on in Judeo-Christian tradition. In religions from the Egyptian onward, the preservation of a worthy ba (or portions thereof, I would think) from one universe to the next offers a possibility for worthy personal actions to have a more enduring and fundamental significance. Wnt (talk) 08:31, 11 February 2016 (UTC)[reply]

What is the meaning of the word "lead" in context of ECG?

I understand that it is one of 12 electrodes, but I'm asking about the meaning of the word. I opened a dictionary and saw many meanings, but I'm not sure which one is right. 93.126.95.68 (talk) 00:08, 7 February 2016 (UTC)[reply]

Lead is essentially used interchangeably with electrode, not any specific one. Fgf10 (talk) 00:15, 7 February 2016 (UTC)[reply]
Thank you, but I'm not sure you're right, because I know that a typical ECG machine has 9 electrodes while its output gives 12 leads. (aVL+aVR+aVF are augmented leads without their own electrodes, so you cannot actually call them electrodes; therefore the word lead cannot be used interchangeably with electrode.) 93.126.95.68 (talk) 00:41, 7 February 2016 (UTC)[reply]
In some versions of English, 'lead' is a synonym for 'wire' or 'cable', as in 'extension lead'. Not sure whether that helps though. Akld guy (talk) 01:02, 7 February 2016 (UTC)[reply]
That is exactly the relevant meaning here, yes. As per a quick read, the augmented leads mentioned actually use the same physical electrodes as some of the main 9, but are referenced differently, so are essentially 'virtual' electrodes. REF Fgf10 (talk) 01:14, 7 February 2016 (UTC)[reply]
See our Electrocardiography article. The leads run from the machine and have detachable electrodes [2] [3] which are stuck on to the patient's skin with a conductive gel. Richerman (talk) 13:03, 8 February 2016 (UTC)[reply]
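Since the thread turns on why 12 leads can come from only 9 or so electrodes, here is a small sketch of the standard derivations (my own illustration; the function name and the sample voltages are made up). The six limb leads below are all derived from just three limb electrode potentials; the remaining six leads, V1–V6, each come from their own chest electrode:

```python
def limb_leads(RA, LA, LL):
    """Derive the 6 limb leads of a 12-lead ECG from 3 limb electrode potentials."""
    return {
        # Einthoven's bipolar limb leads: differences between electrode pairs
        "I":   LA - RA,
        "II":  LL - RA,
        "III": LL - LA,
        # Goldberger's augmented ("virtual") leads: each electrode referenced
        # against the average of the other two -- no extra electrode needed
        "aVR": RA - (LA + LL) / 2,
        "aVL": LA - (RA + LL) / 2,
        "aVF": LL - (RA + LA) / 2,
    }

# Hypothetical instantaneous electrode potentials (millivolts)
leads = limb_leads(RA=0.1, LA=0.3, LL=0.5)

# Einthoven's law: lead I + lead III == lead II
assert abs(leads["I"] + leads["III"] - leads["II"]) < 1e-12
# The three augmented leads always sum to zero
assert abs(leads["aVR"] + leads["aVL"] + leads["aVF"]) < 1e-12
```

The two identities checked at the end follow directly from the definitions, which is why the augmented leads carry no new electrode of their own: they are linear combinations of signals already being measured.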

Do all the muscles of the body have origin and insertion?

Do all the muscles of the body have an origin and insertion? And if they do, does the heart (considered as a muscle) also have an origin and insertion? 93.126.95.68 (talk) 00:31, 7 February 2016 (UTC)[reply]

Hmmm, also External sphincter muscle of male urethra, external anal sphincter, iris sphincter muscle (sort of, though you can argue that starts as smooth muscle which we know is different). In the case of the anal sphincter there actually *is* an insertion, for one layer - might be worth looking deeper into the embryology to see if the circular layer is a late specialization in development? Wnt (talk) 00:49, 7 February 2016 (UTC)[reply]
I believe the tongue does not. DrChrissy (talk) 00:51, 7 February 2016 (UTC)[reply]
From our article:
"The eight muscles of the human tongue are classified as either intrinsic or extrinsic. The four intrinsic muscles act to change the shape of the tongue, and are not attached to any bone. The four extrinsic muscles act to change the position of the tongue, and are anchored to bone."
It goes on to describe which bones the four extrinsic muscles are anchored to. So for the tongue as a whole, half yes and half no. {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 15:04, 8 February 2016 (UTC)[reply]

In genetics, can sex be dominant?

In genetics, can sex be dominant? I mean, for male or female, can one of them be dominant just because of sex? 93.126.95.68 (talk) 00:56, 7 February 2016 (UTC)[reply]

If you mean dominant in the Mendelian sense, then one could argue that the answer is yes when sex determination is chromosomal. In placental mammals, one could say that male is dominant in the Mendelian sense because a single Y chromosome determines male sex. In birds, one could likewise argue that female is dominant in the Mendelian sense. If you mean something other than the Mendelian sense, please clarify. — Preceding unsigned comment added by Robert McClenon (talkcontribs)
Yes, User:Robert McClenon's comment is basically correct for mammals, insofar as the presence of the Y causes male characteristics, whatever the number of the X chromosomes. See XXY. That doesn't apply for certain birds and insects, e.g., though. μηδείς (talk) 02:20, 7 February 2016 (UTC)[reply]
To be nitpicky, it's specifically the gene SRY that causes development of male phenotype in mammals when expressed. SRY is normally located on the Y chromosome, but it is possible for mammals to have a Y chromosome and still be phenotypically female, to varying degrees, like if the SRY gene is broken, or if there are other conditions like androgen insensitivity. --71.119.131.184 (talk) 07:14, 7 February 2016 (UTC)[reply]
How rare is this? Are they any more likely to have male traits than XX women? What happens if both the mother and the father give a Y chromosome each? Sagittarian Milky Way (talk) 07:59, 7 February 2016 (UTC)[reply]
You might like to read this if you haven't already done so. Dbfirs 10:04, 7 February 2016 (UTC)[reply]

Are there any XY females who are actually capable of getting pregnant in the first place? Certainly it wouldn't be the ones with complete androgen insensitivity syndrome, because they don't have ovaries or a uterus. But I don't know for sure that it isn't possible in some other way. --Trovatore (talk) 23:18, 7 February 2016 (UTC)[reply]
I don't think people with 100% XY have gotten pregnant, but there are rare cases like this that come pretty close. - Lindert (talk) 23:34, 7 February 2016 (UTC)[reply]
Very interesting. The abstract says that the daughter got a Y from the father, but doesn't say, as far as I saw, whether the mother had any viable Y-bearing ova. Is it known whether that's possible? --Trovatore (talk) 23:47, 7 February 2016 (UTC)[reply]
Any mammalian embryo with no X chromosome is nonviable, as the X chromosome contains many essential genes. Embryos with abnormal chromosomes inevitably get created as a result of errors in meiosis. Down syndrome is a well-known example, but most chromosomal abnormalities are lethal and cause the pregnancy to spontaneously abort. The Y chromosome is not essential, which is obvious as half of mammals don't have one. Because of this, evolutionary pressure inevitably reduces the Y chromosome over time (see the article for details). --71.119.131.184 (talk) 10:58, 8 February 2016 (UTC)[reply]
Hmmm, I suppose you could say that maleness is a recessive lethal with some phenotypic effects in the heterozygote. (That link should go to lethal allele, but that article was written by someone who defines that term altogether differently than what I'm familiar with!!!) Wnt (talk) 18:20, 7 February 2016 (UTC)[reply]
You might like reading about the evolution of sex and anisogamy. SemanticMantis (talk) 15:45, 8 February 2016 (UTC)[reply]
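The Mendelian reading given above (Y, or really SRY, acting like a dominant determinant of male phenotype) can be sketched as a toy Punnett-style cross. This is my own illustration: the function names are made up, and sex determination is simplified to "any Y present?", which assumes a working SRY gene as the thread notes.

```python
import itertools

def phenotypic_sex(karyotype):
    """Male if any Y chromosome is present -- Y behaves like a dominant trait."""
    return "male" if "Y" in karyotype else "female"

def cross(mother_gametes, father_gametes):
    """All combinations of one gamete from each parent (a Punnett square)."""
    return [m + f for m, f in itertools.product(mother_gametes, father_gametes)]

# XX mother x XY father: half the offspring inherit a Y and are male
offspring = cross(["X", "X"], ["X", "Y"])
print([(k, phenotypic_sex(k)) for k in offspring])

# Even an extra X doesn't override the Y (cf. XXY, Klinefelter syndrome)
print(phenotypic_sex("XXY"))
```

Note what the simplification leaves out: as discussed above, a broken SRY gene or androgen insensitivity can give a phenotypically female XY individual, which this one-line test cannot represent.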

Date of official information on the name of element 113

I could have asked the question any time I wanted to, but I chose now because we've reached the first time in a week when doing a Google News search on "ununtrium" doesn't reveal anything less than a week old. Can anyone predict the date I'll get official info?? Georgia guy (talk) 01:18, 7 February 2016 (UTC)[reply]

Only the people at RIKEN and IUPAC will be able to answer that one. Fgf10 (talk) 01:23, 7 February 2016 (UTC)[reply]

Odors emitted by the Feces and Urine of Mammals and Birds

Where can I find material on the intensity of odors emitted by the feces and by the urine of various mammals and birds? Thank you.Simonschaim (talk) 10:49, 7 February 2016 (UTC)[reply]

We don't have an article specifically on this topic (although Category:Feces might prove useful). It should be covered in any general work on woodcraft, and a web search on animal-specific terms ("bear scat", "fox scat", etc) will usually come up with the appropriate details. Tevildo (talk) 12:03, 7 February 2016 (UTC)[reply]
Bile contributes to the smell of feces. StuRat (talk) 18:13, 7 February 2016 (UTC)[reply]
This is a tough one. I mean... we all know from experience how much it can vary based on diet. Beyond that, intestinal microflora. If you take some lab animals and do a poo sniff-off, mostly you've learned what the lab techs are feeding the animals. I'd be wary of general statements. Wnt (talk) 18:23, 7 February 2016 (UTC)[reply]
It might be difficult to find information on the intensity per se, but articles the OP might want to look at include Pheromone, Vomeronasal organ and Flehmen.DrChrissy (talk) 18:31, 7 February 2016 (UTC)[reply]
As a long-time pet owner, I would point out that the intensity and nature of the smell of animal pee depends on its age and storage condition. Cat urine on cat litter which is less than a day old is different from cat urine deposited on a plastic bag or piece of fabric on the floor behind a couch which is not discovered for a week. The question seems like a readily quantifiable one. It would be surprising if no data had been collected and published. Subjects could give subjective ratings of odor strength for standardized samples under well-defined experimental conditions, and we could learn the relative intensity of either a constant volume of parakeet/lizard/hamster/cat/rattlesnake/dog/human/deer/lion/bear/dolphin/horse/hippopotamus/elephant/whale urine or feces, or the relative subjective odor strength of a normal deposit of said substances. Edison (talk) 21:19, 7 February 2016 (UTC)[reply]
Skatole is responsible for much fecal odor, and the term may help you find more quantitative assessments. SemanticMantis (talk) 15:44, 8 February 2016 (UTC)[reply]

Thank you to all those who supplied me with answers. Simonschaim (talk) 11:44, 10 February 2016 (UTC)[reply]

Oil drilling -- env. impact

What substances/materials (if any) which are involved in oil drilling (particularly in offshore oil drilling) are classified as highly toxic? In particular, which are toxic not only by ingestion, but also by skin contact and/or inhalation of vapors? 2601:646:8E01:9089:F88D:DE34:7772:8E5B (talk) 12:01, 7 February 2016 (UTC)[reply]

Here's a list of the types of chemicals likely to be used during the drilling of offshore wells - not much there on toxicity though. The oil itself may be the most toxic chemical that people may come into contact with. Mikenorton (talk) 12:11, 7 February 2016 (UTC)[reply]
And that last material is only moderately toxic. 2601:646:8E01:9089:F88D:DE34:7772:8E5B (talk) 12:16, 7 February 2016 (UTC)[reply]
Found some data for benzalkonium chloride (used in oil drilling as a corrosion inhibitor) -- it's pretty toxic, rather more so than crude oil. Benzalkonium_chloride#Toxicology 2601:646:8E01:9089:F88D:DE34:7772:8E5B (talk) 12:39, 7 February 2016 (UTC)[reply]

Oil drilling, part 2

If there's an oil spill at sea and it catches fire by itself, is it ever put out, or is it universal practice to let it burn? (I know, for example, that oil spills are sometimes deliberately set on fire as a last-ditch cleanup measure.) In what circumstances, if any, should it be put out? Is there a conceivable scenario in which a burning oil slick is first extinguished and later deliberately ignited again as part of the disaster response? 2601:646:8E01:9089:F88D:DE34:7772:8E5B (talk) 12:21, 7 February 2016 (UTC)[reply]

NOAA page on in-situ burning. Mikenorton (talk) 12:26, 7 February 2016 (UTC)[reply]
Crude oil does not ignite by itself. Setting it on fire deliberately is most likely motivated by "saving" nearby coastlines, which would otherwise have to be cleaned up later. In contrast to coasts, our oceans and atmosphere have always been treated as a dump for toxic wastes anyway. --Kharon (talk) 12:46, 7 February 2016 (UTC)[reply]
I did not mean spontaneous combustion, I meant accidental ignition from a stray source. 2601:646:8E01:9089:14B5:216D:30B1:F92 (talk) 07:34, 8 February 2016 (UTC)[reply]

Near earth approach of 2013 TX68 in March 2016

Asteroid 2013 TX68 is due to make a near-Earth approach next month. Per the article and news stories, it was only observed for 3 days in its previous approach in 2013, and is too dim to be seen when distant from the Earth. Stories say it could come as far as 9.2 million miles or as close as 11,000 miles (and equivalent 2-digit precision in metric units), but that it can't possibly hit the Earth. Christian Science Monitor says "There is no possibility that this object could impact Earth" in 2016, per a NASA press release. Its nearest approach time is uncertain ("sometime between March 3–8, 2016", per the Wikipedia article) and we can't see it until it is within a couple of days of closest approach. So if the largest number is "9.2 million miles," apparently to two digits of precision, how can NASA be so certain that 11,000 miles is the closest possible approach? Is this just false confidence to avoid public alarm? I've seen a lot of confidence intervals, and "11,000 to 9,000,000" as stated in some news articles is an odd one. It's like saying "4505500 miles plus or minus 4494500 miles" if we take the average as the midpoint. Then they give odds on its closest approach on future occasions, but an approach to tens of thousands of miles would cause a huge deflection in its direction, with the deflection dependent on the closeness of approach. How does the certainty that the closest approach is 11,000 miles rather than zero square with the large magnitude of the farthest approach? Given apparent uncertainty about the nearness of this approach, how can there be much certainty about the next approach? Edison (talk) 14:29, 7 February 2016 (UTC)[reply]

JPL has a graphic that shows possible points of closest approach given the orbital uncertainties. It seems that these points are restricted to a plane that appears to be well constrained and does not contain Earth. The closest point of that plane to our dear planet is 11,000 miles away and thus gives the minimum possible approach distance. --Wrongfilter (talk) 14:56, 7 February 2016 (UTC)[reply]
In other places (HERE for example) you can see the calculations are being done to higher precision than the Christian Science Monitor quoted...in a news item, journalists rarely want to write hugely precise numbers because they are hard to read and assimilate.
But I agree with User:Edison - we know the plane in which the rock orbits with great precision - and we know that this plane only comes within 11,000 miles of Earth - but we have much greater uncertainty about where 2013TX68 will be within that plane at the point of closest approach.
Perhaps an analogy would be useful: it's kinda like worrying about cars on a fast stretch of a flat, straight road going right past your house. You have no idea whether they'll be driving at 30mph or speeding way over the speed limit at 90mph - so your error margin in their speed is huge. But you do know - with great precision and high confidence - that they'll stay within the narrow corridor prescribed by the edges of the road. So suppose you're walking home along the sidewalk and you see a car that's 5 miles away on the horizon, coming towards you. You don't have any good idea at all of how close it'll be when you reach the safety of your home...but you're confident, with almost complete certainty, that it's not going to hit your house. If asked how close the car might get to you as you open your front door, the larger number would be "a couple of miles...maybe?" and the smaller number would be the distance from your house to the edge of the road (18 feet 7 inches).
Your error margin on the larger number is enormous - but you still know with near certainty that your house is safe.
SteveBaker (talk) 15:16, 7 February 2016 (UTC)[reply]
Thanks. Confidence of it being in a plane 11,000 miles from Earth is a fine explanation for the amazing figures cited. It makes perfect sense. But wouldn't a possible pass 11,000 miles from Earth deflect it, putting it in a different orbit/plane? It sounds like they are forecasting approaches years in the future based on a scant 3 days of observation 3 years ago. The graphic from JPL is odd, since it basically shows two rows of dots, and nothing between them. Is there an explanation for that? Edison (talk)
That depends on how exactly that figure was created. I assume it was some sort of Monte Carlo simulation - randomly pick a possible value of starting parameters out of the possible range in 2013, calculate the orbit, plot the position of closest approach in 2016. Now, if they picked the extremes for the starting parameters (values around, say, the 1σ contour) rather than the best-fit values, those would map to something like an ellipse in the output parameters, i.e. this graphic. So the lack of dots inside these two rows would be due to them not bothering to make the computation for those values. But this is just me guessing, I can't back this up with a publication or so. The deflection during this approach will certainly affect the prediction for the next one, and it will depend on how close this approach will actually be. --Wrongfilter (talk) 21:45, 7 February 2016 (UTC)[reply]
Hmmm - I wonder if one row of dots comes from the sunward-leg of the orbit and the other as it returns from the sun, heading out towards deep space? That would explain two neat sets of numbers like that. That's a guess though. SteveBaker (talk) 05:02, 8 February 2016 (UTC)[reply]
The orbit of the asteroid is known to an accuracy of a few thousand kilometres. The accuracy isn't the same at every point of the orbit or in every direction, but that's the order of magnitude. The uncertainty in the semi-major axis translates into an uncertainty in the orbital period, and in the 2.5 years since it was last observed this accumulated into a quite large uncertainty in the phase of the asteroid's orbit of about 14 million kilometres. In other words, we know quite well where the orbit is and that Earth will pass at 17000 km away from the orbit, but we don't know where the asteroid will be at that moment. I assume the figure published by JPL indeed results from a Monte Carlo simulation. One row of dots comes from the asteroid passing Earth's orbit ahead of Earth, the other row comes from the asteroid passing Earth's orbit behind Earth. PiusImpavidus (talk) 11:11, 8 February 2016 (UTC)[reply]
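The mechanism described above - a small uncertainty in the semi-major axis compounding, via Kepler's third law, into a large along-track (phase) uncertainty after a couple of years - can be sketched as a toy Monte Carlo in Python. All numbers here are made up for illustration; they are not the actual fit parameters JPL used:

```python
import math
import random

AU_KM = 1.496e8  # kilometres per astronomical unit

def period_years(a_au):
    # Kepler's third law for a heliocentric orbit: P^2 = a^3 (P in years, a in AU)
    return a_au ** 1.5

def median_phase_error_km(a_au, sigma_a_km, years, n=10000, seed=0):
    """Toy Monte Carlo: draw semi-major axes, accumulate along-track drift.

    a_au       - nominal semi-major axis (AU), illustrative value
    sigma_a_km - assumed 1-sigma uncertainty in the semi-major axis (km)
    years      - time elapsed since the last observation
    """
    rng = random.Random(seed)
    p0 = period_years(a_au)
    circumference_km = 2 * math.pi * a_au * AU_KM  # crude circular-orbit approximation
    drifts = []
    for _ in range(n):
        a = a_au + rng.gauss(0.0, sigma_a_km / AU_KM)
        p = period_years(a)
        # fraction of an orbit by which this sample leads or lags the nominal body
        phase_slip = years / p0 - years / p
        drifts.append(abs(phase_slip) * circumference_km)
    drifts.sort()
    return drifts[n // 2]  # median along-track position error
```

Because the drift here is linear in elapsed time, running this with years=1 and then years=5 shows the along-track error growing fivefold while the cross-track error (not modelled in this sketch) would stay fixed - the same qualitative behaviour as the long, thin uncertainty region described above.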

sources of oxidants in natural shale gas reservoirs

A few years ago I came across a lot of industrial presentations on natural gas reservoirs, especially shale gas, and something that was mentioned is that shale gas reservoirs can become "overmature" where the hydrocarbons become CO2. This was puzzling to me because I couldn't figure out what could be oxidizing the gas only after the organic material has been sitting there for around 200 million years, when it appears to be fine from the 50 million year period onwards.

Two questions: 1) What are the sources of the reducing agents that reduce longer-chain fatty acids and carboxylic acids to methane? Why can't we exploit these reducing agents directly? 2) What are the sources of the oxidizing agents that oxidise methane to CO2 deep in the ground, underneath the bedrock?

Yanping Nora Soong (talk) 17:18, 7 February 2016 (UTC)[reply]

I'm not familiar with this, but the first few sources I found [4] [5] [6] [7] give me the impression that there is an "oil window" and a "gas window" in which kerogen (of which there are apparently four types) is cracked under heat and pressure. I see different estimates for the window, doubtless because some specifics of how they are measured are different, but they say roughly 50-100 celsius at 2-4 km depth produces oil, maybe 100 to 150? 200? more? celsius at 3-6 km depth produces gas. Very hot gas undergoes "secondary cracking" that the first source says can first produce wet gas, and
"Metagenesis marks the final stage, in which additional heat and chemical changes convert much of the kerogen into methane and a carbon residue. As the source rock moves farther into the gas window, late methane, or dry gas, is evolved, along with nonhydrocarbon gases such as carbon dioxide [CO2], nitrogen [N2] and hydrogen sulfide [H2S]. These changes take place at temperatures ranging from about 150°C to 200°C [302°F to 392°F]. These stages have a direct bearing on source rock maturity." (This appears to be cited to Peters KE, Walters CC and Moldowan JM: The Biomarker Guide, 2nd edition. Cambridge, England: Cambridge University Press, 2005) Wnt (talk) 18:01, 7 February 2016 (UTC)[reply]
I'll pass the ball to someone else at this point. Wnt (talk) 18:01, 7 February 2016 (UTC)[reply]
To build a bit on Wnt's excellent information. Initially organic matter is trapped in fine clays and silts. As the organic rich clay is buried under millions of years of sediment accumulation, the clay turns into shale and the organic material transforms into kerogen. As it is buried deeper and deeper, the heat and pressure transform (or mature) the kerogen into oils and eventually gas (by combining hydrogen with carbon to form long chain {ex. octane} and eventually short chain hydrocarbons {ex. methane}). Once all the hydrogen in the organic matter has combined with carbon to form hydrocarbons, increasing heat and pressure will never create any additional hydrocarbons and the reservoir is overmature.
Obviously we can't apply 150-200 C or the pressure at 3-6km depth (about 5,000-10,000 psi) to convert organic matter into hydrocarbons on the surface, or at least not economically. Tobyc75 (talk) 21:59, 8 February 2016 (UTC)[reply]
I'm talking about redox balance. What reduces the hydrocarbons to form methane? There must be a reducing agent. Similarly, what is oxidising the methane (and other hydrocarbons) to CO2? There must be an oxidising agent. Reduction and oxidation always pair together. I'm aware there is some disproportionation involved, such that medium-oxidised organics (aldehydes, alcohols, alkenes, etc.) oxidise/reduce each other, forming longer and shorter chains respectively, but you can't just oxidise all the hydrocarbons to CO2 without the electrons going *somewhere*. Are metal oxides in the ground being reduced? Yanping Nora Soong (talk) 09:37, 9 February 2016 (UTC)[reply]
Methane is just the last product of thermal cracking of the original long-chain hydrocarbons. As to the CO2, it is formed early in the generation process as the original kerogen breaks down (it does contain some oxygen). It doesn't migrate in the same way as the hydrocarbons as it is quite soluble in water and may remain in the shale layer after the hydrocarbons have moved off. CO2 also moves upwards from deeper levels, derived from thermal breakdown of carbonates. I don't think that there is any oxidation going on to produce the CO2. Mikenorton (talk) 09:53, 9 February 2016 (UTC)[reply]

Are all stars main sequence stars at some point in their life-cycle?

Do all stars belong either to the main sequence stars, have once been main sequence or will inevitably become main sequence stars?

See stellar evolution. It appears from that article that all protostars that are large enough to fuse hydrogen (and thus become stars rather than brown dwarfs) will enter the main sequence for some period of time. Robert McClenon (talk) 20:28, 7 February 2016 (UTC)[reply]

are there compounds which are poorly soluble in aliphatic hydrocarbons (e.g. cyclohexane) but dissolve well in aromatic ones (like benzene or toluene)?

I note that neutral (zwitterionic) L-DOPA is weakly soluble in water but even less soluble in diethyl ether or chloroform. However, would it be more soluble in aromatic solvents? Yanping Nora Soong (talk) 21:25, 7 February 2016 (UTC)[reply]

Buckminsterfullerene is substantially more soluble in aromatics than in aliphatics. DMacks (talk) 21:33, 7 February 2016 (UTC)[reply]

Really how important is fruit?

I've gone months having a bowl of fruit nearly every morning, and months having no fruit at all, yet I feel no different during either period. 2.103.13.244 (talk) 22:29, 7 February 2016 (UTC)[reply]

We cannot give actual medical advice here, but you might be interested in the underlying reasons for medical and public-health organizations publishing various food pyramids and promoting balanced diet. See whether it's strictly about the types of foods or the types of nutrients or the trade-offs in a real economy or other cultural/political environment. DMacks (talk) 22:33, 7 February 2016 (UTC)[reply]
Nutritional advice is not medical advice (or everyone who publishes a diet book would be arrested for practicing medicine without a license), so we are free to reply. Fruit does have some good stuff, like vitamin C in citrus, antioxidants/phytochemicals in berries, lycopene in tomatoes (technically a fruit), and healthy fats in avocados, but you can also get those from other things. So, in that sense they aren't essential. On the other hand, if eating fruit for dessert stops you from eating something far worse, that's a real plus. StuRat (talk) 22:39, 7 February 2016 (UTC)[reply]
Fruit also has soluble and insoluble fiber. On the downside, it does have a lot of sugar (at least if you're talking about the fruits most people think of as fruit, meaning not tomatoes, not green beans, etc). A lot of people track "added" sugar, but I think this is one of the tradeoffs DMacks is talking about — your body can't (or I expect it can't) tell whether the sugar is "added". But the experts don't want to discourage people from eating fruit, so they don't emphasize tracking total sugar. --Trovatore (talk) 22:46, 7 February 2016 (UTC)[reply]
Sugar isn't too bad in whole fruits, where it gets to be a problem is with juices, where all the fiber has been removed and the sugar concentrated, or where you actually add sugar, like sugar on grapefruit, whipped cream on berries, or even more sugar added to "juice". StuRat (talk) 23:09, 7 February 2016 (UTC)[reply]
Sugar is sugar. Well, certainly there are different kinds of sugar, but the dominant one in fruit is fructose, which is the same thing people get upset about in high-fructose corn syrup.
I don't think your body can tell whether you ate the sugar as part of a whole fruit or not. But the fruit has other benefits, which is why the experts don't want to discourage you from eating it. --Trovatore (talk) 23:36, 7 February 2016 (UTC)[reply]
Reducing the amount of sugar (by eating one orange versus the juice of 10), and increasing the amount of fiber in order to slow digestion, both reduce the sugar spike, which is what leads to most of the health problems associated with sugar. Also, it takes more energy to digest whole fruit, and some of the sugar can be burned in that way. StuRat (talk) 17:49, 8 February 2016 (UTC)[reply]
Avoiding sugar in "too sweet" fruits is a recommendation in the low-carbohydrate diet community. Also, not all sugars are equal. Glucose is more likely than fructose to reach cells throughout the body rather than get metabolized in the liver. Yanping Nora Soong (talk) 01:04, 8 February 2016 (UTC)[reply]

Thanks for the replies but I'm wondering why I feel exactly the same whether or not I eat fruit. Is the effect of eating fruit everyday to extend your life by a few years or are there present-day benefits? 2.103.13.244 (talk) 02:44, 8 February 2016 (UTC)[reply]

That's hard to answer unless we know what you're eating instead of fruit (or conversely, what fruit is replacing when you eat it). If you're eating good, nutritious stuff instead of fruit you're doing fine. If you're eating cheeseburgers and Twinkies and such instead of fruit, it will probably catch up to you over the long term, though not necessarily in a few days or even weeks. Shock Brigade Harvester Boris (talk) 04:34, 8 February 2016 (UTC)[reply]
Below, our OP indicates an aversion to vegetables too...so I think that it's unlikely that there is good stuff being eaten in place of fruit. SteveBaker (talk) 17:43, 8 February 2016 (UTC)[reply]
Most of the damaging effects of unhealthy food have no immediate and obvious symptoms. For example, plaque forming in your arteries may not be apparent until a heart attack. StuRat (talk) 17:53, 8 February 2016 (UTC)[reply]
Not medical advice, again, but an anecdote: I'm a long-time diabetic who was placed on a low-carbohydrate, high-fiber diet recently (within the past two months). In this diet, green leafy vegetables and fresh fruits like apples and oranges may be relatively freely eaten, while "empty carbohydrates" like bread, refined sugar and grits (ground parched white corn boiled as a porridge) are off the diet. Meat and other protein foods are permissible in reasonable amounts. Regular daily exercise is part of the regimen, as well.
I've lost thirty pounds in less than two months, and my control over my blood sugar has increased to the point that it's at the upper normal levels with only dapagliflozin ("Farxiga") as glycemic control medication. Prior to this, my blood sugar wasn't well-controlled at all, despite daily therapy with sitagliptin ("Januvia"). I use less pain medication for my cancer pain, and the issues I'd begun to have with swelling of the feet and ankles have disappeared. Fresh fruit isn't entirely responsible for this improvement in my condition, but it replaces much less healthy food in my diet, and the fiber it contains is almost certainly good for my health. loupgarous (talk) 05:51, 11 February 2016 (UTC)[reply]

February 8

marshy gas from mines

Why is marsh gas evolved during mining? Please give the scientific reason. — Preceding unsigned comment added by Shahjad ansari (talkcontribs) 02:23, 8 February 2016 (UTC)[reply]

See Methane#Occurrence. AllBestFaith (talk) 10:55, 8 February 2016 (UTC)[reply]
See also firedamp. The methane is produced as coal is heated (due to progressive burial) and some of it is retained in the rock when the coal becomes uplifted sufficiently to mine, where it can be a problem. Mikenorton (talk) 21:42, 8 February 2016 (UTC)[reply]

Formula for lens

Give the formula for a lens in which one side is a medium of refractive index n1, the other side is a medium of refractive index n3, and the lens itself has refractive index n2. — Preceding unsigned comment added by Shahjad ansari (talkcontribs) 02:32, 8 February 2016 (UTC)[reply]

Sorry, we don't do your homework for you. Check the articles Refraction and Lens (optics) for the info you need. 2601:646:8E01:9089:14B5:216D:30B1:F92 (talk) 10:35, 8 February 2016 (UTC)[reply]

Possible to change taste buds in adulthood?

I'm 20 and hate the taste of vegetables unless it's been thoroughly cooked and/or mixed with other flavours. Could I change that and if so is there a known method? 2.103.13.244 (talk) 02:54, 8 February 2016 (UTC)[reply]

Apparently it's in your genes. Googling "why some people vegetables" throws up some interesting links, including this one which suggests you need "bitter blockers".--Shantavira|feed me 11:08, 8 February 2016 (UTC)[reply]
Technically that's a medical diagnosis, and we aren't supposed to do that. It's certainly possible that there would be some other mechanism in this case besides genetics, which is almost never 100%. Wnt (talk) 12:41, 8 February 2016 (UTC)[reply]
Technically, that isn't a medical diagnosis, it's a biology reference. See User:Kainaw/Kainaw's criterion. Unless we're telling someone that a) they have a disease or b) what the disease is likely to do to them personally or c) how to treat their diseases, there is no problem with providing answers about human biology. --Jayron32 15:09, 8 February 2016 (UTC)[reply]

"Apparently it's in your genes" is a diagnosis; "this one which suggests you need 'bitter blockers'" is a treatment. μηδείς (talk) 18:55, 8 February 2016 (UTC)[reply]

I think you're a bit too keen to be jumping on the 'medical advice' bandwagon. This isn't a question about a medical complaint; pointing out that it's genetic is not a diagnosis, and offering links for the OP to follow up is not prescribing treatment. Mike Dhu (talk) 10:12, 9 February 2016 (UTC)[reply]
Have a look at our long, detailed, and well-referenced article taste. It's complicated, and involves taste buds, but also psychology, nutritional needs, evolutionary past, culture, childhood development, exposure, etc. etc. Most people I know enjoy some foods at age 40 that they did not at age 20. Here's a selection of articles that discuss aspects of how taste perception can change with age [8] [9] [10]. Here's a freely accessible article that discusses a bit about how children's diet preferences are shaped by the adults around them, and you might find it interesting background reading [11]. We have some references for treatment of [[12]] and also Avoidant/restrictive_food_intake_disorder#For_adults, so I would look at the refs there if I wanted to learn more details about methods for expanding my taste preferences. SemanticMantis (talk) 15:40, 8 February 2016 (UTC)[reply]
My experience is that a lot depends on how the food is cooked. Generally (as our OP mentions), brief cooking retains flavor and long cooking destroys it. Usually, short cooking is what people want because they crave the maximum amount of flavor - but I suppose that if you don't like those flavors then the reverse might be the case. Unfortunately, cooking for too long destroys many of the nutritional benefits of eating vegetables - and also destroys any crunchy, textured sensations and reduces them to an unpleasant mush. Honestly, I'd recommend re-visiting the taste of lightly cooked (or even raw) veggies...and if that's still unpleasant, dump them into some kind of sauce that you like. A chili or curry-based sauce will annihilate the taste of almost anything! Also, it's a horrible generalization to say that you don't like "vegetables" - there are hundreds of different kinds out there - and they don't all taste the same. Gone are the days when you had a choice between carrots/broccoli/cabbage/peas/french-beans/corn. Now you can get 'baby' versions of lots of things - there are 50 kinds of beans out there - there are leafy greens of 20 different kinds to choose from - there are things like asparagus (which used to be ruinously expensive - and now isn't), avocado and artichokes to play around with. It would be really surprising if you hated all of them, and even more surprising if you hated all of them no matter how they were prepared. Modern cuisine encourages us to mix weird, contrasting things together - so go ahead and mix jalapeno peppers, a little melted chocolate and peas (yes, really!) - or cook your cabbage in orange juice instead of water (one of my personal favorites!) - or mix nuts and fruit into a green salad. There is no "wrong" answer here.
I grew up in an environment where veggies were low in variety, and invariably over-cooked. When I married my first wife (who is an excellent French cook) - my eyes were opened to the incredible array of better options out there. SteveBaker (talk) 17:24, 8 February 2016 (UTC)[reply]
My experience changing what I drink may be helpful. In my 20's I drank Mountain Dew (high sugar soft drink). Then I switched to herbal tea, but needed lots of sugar in it to make it palatable. I then gradually reduced the amount of sugar, and now I don't need any. So, I suggest you initially mix just a bit of veggies with something you like, then gradually change the ratio until it's mostly veggies. StuRat (talk) 17:30, 8 February 2016 (UTC)[reply]
Incidentally, I notice that our OP recently asked a question about eating fruit that suggests that (s)he doesn't eat that either. That's a more worrying thing. SteveBaker (talk) 17:41, 8 February 2016 (UTC)[reply]
I think Mouthfeel is something you may want to look at, along with food neophobia, and there's also ARFID, an escalated version of picky eating. It's interesting that SteveBaker mentions the texture of food. I wouldn't touch vegetables until my early 30s, even though I had a girlfriend who worked as a chef at The Savoy in London (I'm sure your wife is much better Steve!). I disliked the "flavor" of foods from my childhood until my early 20s and retrospectively I think it was more the texture I didn't like. Mike Dhu (talk) 17:09, 9 February 2016 (UTC)[reply]
The thing with texture is that you can play around with it to an amazing degree. Consider just the potato. You can have creamy mashed potato, mashed potato with deliberate chunks of potato and/or skin in it, you can have french fries, boiled potatoes (with and without skin) and also roasted and baked potato. You can do hash-browns or fry crispy potato skins - or you can make potato chips. That's a MASSIVE variation in texture and crunch with just one vegetable being involved. With creativity, you can do similar transformations with other veggies too. If you don't like (say) peas - rather than just having warm round things - you can cook them, mash them, form them into patties, then fry them ("Peaburgers"!) - or you can blend them into a smoothie or a soup - there are lots of options if you're prepared to be creative and are open to trying new techniques. SteveBaker (talk) 17:27, 9 February 2016 (UTC)[reply]
I totally agree with your points re the texture of food, but my point to the OP was that the texture and the flavor of food may be interlinked. I like the taste of creamy mashed potato (not a vegetable of course), but lumpy mashed potato is something I can't eat; I find the lumps unpalatable, not because of the taste per se, but because I don't like the texture. Mike Dhu (talk) 19:19, 9 February 2016 (UTC)[reply]
Yeah - you probably don't want to go there. What is a "vegetable" and what isn't is a topic of frequent and prolonged debate around here. Bottom line is that there is a strict scientific definition, a strict culinary definition and a whole messy heap of what-people-think-a-vegetable-is. From the lede of Vegetable:
"In everyday usage, a vegetable is any part of a plant that is consumed by humans as food as part of a savory meal. The term "vegetable" is somewhat arbitrary, and largely defined through culinary and cultural tradition. It normally excludes other food derived from plants such as fruits, nuts and cereal grains, but includes seeds such as pulses. The original meaning of the word vegetable, still used in biology, was to describe all types of plant, as in the terms "vegetable kingdom" and "vegetable matter"."
So...um...I claim victory. A potato is a vegetable. <ducks and runs> SteveBaker (talk) 20:57, 9 February 2016 (UTC)[reply]
I can see how that could lead to a very lengthy discussion, and in my mind I always thought of potatoes as a vegetable, in the same way that I think of poultry and fish as meat (although I've just looked at the meat article and see the same situation applies). Anyway, good job you ducked (bad pun, I know!) Mike Dhu (talk) 11:08, 10 February 2016 (UTC)[reply]

Falling from a building

If someone fell from the fifth floor of a building, would they die or just be badly hurt? 2607:FB90:1225:2047:A4E6:5421:24F2:7B82 (talk) 03:49, 8 February 2016 (UTC)[reply]

It depends how they land and what they land on. ←Baseball Bugs What's up, Doc? carrots→ 03:59, 8 February 2016 (UTC)[reply]
If they land on concrete? 2607:FB90:1225:2047:A4E6:5421:24F2:7B82 (talk) 04:12, 8 February 2016 (UTC)[reply]
Then it depends on how they land. But their odds are not good. Here is someone's idea for a strategy. ←Baseball Bugs What's up, Doc? carrots→ 04:16, 8 February 2016 (UTC)[reply]
It would be far better to land on a Life net. That's a little article I wrote a few years ago. Cullen328 Let's discuss it 04:20, 8 February 2016 (UTC)[reply]
Obviously. But the OP specified concrete. ←Baseball Bugs What's up, Doc? carrots→ 05:02, 8 February 2016 (UTC)[reply]
On page 17 of this OSHA document [13], figure 6 shows the distribution of workplace fatalities as a function of number of feet fallen. From that, you can see that a small number of people died after falls of less than six feet - and most people in the workplace who die after falling fell less than 40 feet...which is less than 5 floors. So for sure, lots of people die every year from falls from considerably less height than the 5th floor.
A few other sources I checked with suggest that the risk of death starts to go up sharply at falls of around 8 to 10 meters - with about a 50/50 chance of dying if you fall from 15 meters and a near certainty of dying at around 25 meters. A typical building floor height is about 3.5 meters - so 5 floors would be 17.5 meters - and that's about a 75% chance of death. But there really is no 'safe' fall height. People trip and fall and whack their heads against something as they reach ground level and die as a result - so even a fall from zero height can be fatal.
CONCLUSION: If you fall from the 5th floor - you have roughly a 3 in 4 chance of dying - there is no 'safe' distance.
SteveBaker (talk) 04:59, 8 February 2016 (UTC)[reply]
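The rough figures above (risk rising sharply past 8-10 m, about 50/50 at 15 m, near certainty by 25 m) behave like a logistic dose-response curve. A minimal Python sketch follows; the parameters h50 and k are eyeballed to match those quoted numbers, not fitted to the OSHA data:

```python
import math

def p_death(height_m, h50=15.0, k=0.44):
    """Illustrative logistic model of fall-fatality risk.

    h50 - height (metres) at which the chance of death is taken to be 50%
    k   - steepness of the curve; both values are eyeballed, not fitted
    """
    return 1.0 / (1.0 + math.exp(-k * (height_m - h50)))
```

With these made-up parameters, p_death(17.5) comes out near 0.75, matching the "3 in 4" figure for a fifth-floor (17.5 m) fall, and the curve never reaches exactly zero at any height - echoing the point that there is no truly 'safe' fall height.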
Would it be a quick death or a long and agonizing one? 2607:FB90:1225:2047:A4E6:5421:24F2:7B82 (talk) 15:13, 8 February 2016 (UTC)[reply]
I don't see any data on that. One would presume that a head-first impact would be quick - and feet-first much less so - but it's very hard to say, and as skydivers soon discover, bodies rotate during free-fall in ways that can be hard to control. I wouldn't want to make any bets on that one. SteveBaker (talk) 17:07, 8 February 2016 (UTC)[reply]
Quick, call the Mythbusters before they're cancelled! FrameDrag (talk) 20:48, 8 February 2016 (UTC)[reply]

Is it best for a man/woman to see a male/female psychiatrist respectively?

Just curious if it's generally best for a man to see a male or female psychiatrist and for a woman to see a male or female psychiatrist, or if there's no recommendation in the psychology community. 2.103.13.244 (talk) 05:22, 8 February 2016 (UTC)[reply]

Most psychiatrists base their treatment on pills. I hardly see how the gender of the person who prescribes your pills could matter. Psychiatrists are also not necessarily part of the psychology community; they could be psychotherapists too, but primarily they are physicians. I suppose you want to know whether the gender of psychologists, psychotherapists, counselors and the like matters.
In practice, psychiatrists are mostly male, and the psychology community is mostly female. That limits your chances of picking a provider of a specific gender. Anyway, the role of gender in the quality of psychotherapy seems to be negligible, in the same way that you don't need a therapist of the same age, religion or race as you. I can see that it could even be an advantage to have a certain distance from your therapist, since the two of you are not supposed to enter into a private relationship. --Llaanngg (talk) 11:35, 8 February 2016 (UTC)[reply]
[citation needed] for a lot of this, perhaps most importantly on the first sentences of each paragraph. SemanticMantis (talk) 15:30, 8 February 2016 (UTC)[reply]
SemanticMantis, here they are:
[14] "Like many of the nation’s 48,000 psychiatrists, Dr. Levin, in large part because of changes in how much insurance will pay, no longer provides talk therapy, the form of psychiatry popularized by Sigmund Freud that dominated the profession for decades. Instead, he prescribes medication, usually after a brief consultation with each patient"
[15] "Psychiatry, the one male-dominated area of the mental health profession, has increasingly turned to drug treatments."
[16]: The changing gender composition of psychology.
And [17] Need Therapy? A Good Man Is Hard to Find. "He decided to seek out a male therapist instead, and found that there were few of them."
I do admit though that the effect of gender matching with your therapist (or not) is debatable. The debate is still open. I suppose it comes down to the patient's world-view. If it's important for the patient, then probably it can influence outcome. The same probably applies to ethnicity. --Llaanngg (talk) 09:56, 9 February 2016 (UTC)[reply]
[18]"As Carey's timely article notes, there is nothing in the rather limited mainstream scientific literature on gender and treatment outcome suggesting unequivocally that either males or females make better, more effective psychotherapists."
[19] "a female therapist genuinely is able to help a male client as well as a female client, and a male therapist is truly able to help a male client as well as a female client, the fact is that if a client comes in with a pre-conceived notion about the therapist based on gender, it has the potential to affect treatment if not addressed."
--Llaanngg (talk) 09:56, 9 February 2016 (UTC)[reply]
User:Llaanngg, thank you. Your claims sounded reasonable, but this is, after all, a reference desk :) SemanticMantis (talk) 14:51, 9 February 2016 (UTC)[reply]
For some people, maybe. A psychiatrist is indeed different than a psychologist, but gender match in medical and therapeutic professions can indeed be a factor in outcomes. Here is a study that specifically looks at effects of gender matching in adolescents [20]. That one is freely accessible, these two studies [21] [22] are not, but they also discuss gender matching in therapeutic contexts. Note that all three also discuss matching of ethnicities as a potential important factor too. SemanticMantis (talk) 15:30, 8 February 2016 (UTC)[reply]

Having been treated by half a dozen psychiatrists and therapists, I will say that the race/culture, age and gender of your treatment providers definitely matters in some cases, even for "pill prescribers" because your story may sound different to different doctors. For example, I've been routinely noted to have "poor eye contact" and be diagnosed with borderline personality disorder and bipolar disorder by old white men, but younger psychiatrists are more up to date on neuroscience research and my female psychiatrists (including a South Asian) tend to agree with post-traumatic stress disorder or complex PTSD. Also Asian treatment providers definitely get cross-cultural struggles and Asian cultural values like conflict aversion, whereas white providers often don't, frequently chalking it up to some personality defect or saying that you're "non-assertive". Yanping Nora Soong (talk) 16:06, 8 February 2016 (UTC)[reply]

I'd say that if it's important for you as a patient, then, it is important for the outcome. However, I don't believe it is a general factor per se.Llaanngg (talk) 09:56, 9 February 2016 (UTC)[reply]

cramps or a "charley horse" after orgasm

My girlfriend often has serious cramps (or a charley horse) after she has an orgasm. The cramp is usually in her lower left calf. This is not a medical question. I am just curious how an orgasm and a cramp in the lower leg can be connected (given the very different muscles involved). 147.194.17.249 (talk) 05:41, 8 February 2016 (UTC)[reply]

For bemused readers.... Charley horse. Ghmyrtle (talk) 08:49, 8 February 2016 (UTC)[reply]
Orgasm often involves muscular contractions not just in the groin area, but throughout the body -- so in some cases, different muscles can cramp after orgasm. (I know first-hand, I've pulled a leg muscle once or twice during sex.) FWIW 2601:646:8E01:9089:14B5:216D:30B1:F92 (talk) 08:42, 8 February 2016 (UTC)[reply]
Distinguish love from porn! Porn can be violent. In some cultures sex is a secret and porn is the only "manual", and it is not good advice at all. We have Wikipedia, and it should give more reliable information. The next step is for you to take care in what you are doing. But some humans are very fragile. When the charley horse is always in the same place, you can find the reason. --Hans Haase (有问题吗) 11:37, 8 February 2016 (UTC)

Does Hans Haase 有问题吗's post above make sense to someone? In this case and in previous cases too I am unable to even guess what he's trying to say. --Llaanngg (talk) 11:45, 8 February 2016 (UTC)[reply]
Yes, I get the basic gist of it, and I usually can with Hans' posts. Then again, I have lots of experience reading and listening to ESL. Respectfully, this is not the best place for such comments and discussion. SemanticMantis (talk) 15:19, 8 February 2016 (UTC)[reply]
Our articles on this are really, really bad. Charley horse confounds multiple conditions and multiple colloquial terms until there's no telling what is what. Cramp does virtually the same - it is hard for me to accept that the usual sort of "charley horse" has anything to do with failure of ATP to loosen muscles, since generally it is a sudden onset of a muscle contraction. We'll have to look this one up from scratch... after which, we might want to rewrite those articles quite nearly from scratch. Wnt (talk) 12:06, 8 February 2016 (UTC)[reply]
I should share the first good reference I found at [23] (I just did a PubMed search for leg cramp and this was one of the first things). Apparently there is a treatment for leg cramps... it involves injecting 5 ml of 1% lidocaine into the "bifurcation of the branches that is located in the distal two-thirds of the interspace between the first and second metatarsals" - this is a nerve block of "the medial branch, which is the distal sensory nerve of the deep peroneal nerve". The site is on the inside of the base of the big toe. The effect was to reduce cramps by 75% over a two-week study period. As part of their discussion they say

The mechanism(s) of leg cramps are yet to be clarified, but disturbances in the central and peripheral nervous system and skeletal muscle could be involved (Jansen et al. 1990; Jansen et al. 1999; Miller and Layzer 2005). Electrophysiologically, cramps are characterized by repetitive firing of motor unit action potentials at rates of up to 150 per sec. This is more than four times the usual rate in maximum voluntary contraction (Bellemare et al. 1983; Jansen et al. 1990). In a human study, Ross and Thomas indicated a positive-feedback loop between peripheral afferents and alpha motor neurons, and that this loop is mediated by changes in presynaptic input. This loop is considered a possible mechanism underlying the generation of muscle cramps (Ross and Thomas 1995). The frequency of nocturnal leg cramps has also been suggested to result from changes in hydrostatic pressure and ionic shift across the cell membrane in the calf muscles in the recumbent position, inducing hyperexcitability of the motor neurons. Consequently, the pain of the cramps may be caused by an accumulation of metabolites and focal ischemia (Miller and Layzer 2005). The difference in these conditions in each patient may explain the diverse symptomatology of the cramps.

So the thing I'm thinking of is possibly, not certainly, related to some kind of feedback, possibly via the spine only, between sensation of what the body part is doing and a motor response. It seems easy to picture how infrequent activities might somehow jiggle such a sensitive mechanism. Honestly, because this is a regulated phenomenon with different characteristics than usual contraction, I'm not even entirely sure it is pathological - for all I know, the body might be administering it as some sort of health intervention on itself. Note that I definitely cannot and will not diagnose the woman involved here - there are a thousand things she could be experiencing that aren't what I have in mind. Wnt (talk) 12:25, 8 February 2016 (UTC)[reply]

Have the OP and his girlfriend tried different positions? Seriously: I myself often used to (and still occasionally do) get leg cramps when sitting on a hard chair for extended periods – this first arose during long services in a cramped (heh!) school chapel – but avoiding such a position makes them much rarer. It may be that different postures during the act might change the forces on the relevant muscles sufficiently to lessen the problem. {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 15:19, 8 February 2016 (UTC)[reply]

Jump cushion

Are jump cushions ever used in firefighting in lieu of life nets? If so, how effective are they? Do they even actually exist, given that they're not on Wikipedia? 2601:646:8E01:9089:14B5:216D:30B1:F92 (talk) 10:31, 8 February 2016 (UTC)[reply]

See [24]. Quoted maximum jump height is 40m. AllBestFaith (talk) 10:49, 8 February 2016 (UTC)[reply]
Thanks! 2601:646:8E01:9089:14B5:216D:30B1:F92 (talk) 05:57, 9 February 2016 (UTC)[reply]

How many defecators?

Is it possible to come up with a reasonable estimate of how many humans are defecating at any given moment? -- Jack of Oz [pleasantries] 11:56, 8 February 2016 (UTC)[reply]

If I were to pull a number out of my ass...50 million. Make a ballpark assumption the average human spends 10 minutes a day pooping, seven billion humans, and there you go. Should be within an order of magnitude of reality. Someguy1221 (talk) 11:59, 8 February 2016 (UTC)[reply]
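Someguy1221's ballpark can be spelled out; a quick sketch of the Fermi estimate (the 10-minutes-per-day figure is the load-bearing assumption):

```python
# Fermi estimate: how many of Earth's ~7 billion people are defecating
# at any given instant, assuming each spends an average of 10 minutes
# per day doing so (a guessed figure, good to an order of magnitude).
population = 7_000_000_000
minutes_per_day = 10
fraction_of_day = minutes_per_day / (24 * 60)  # fraction of each day spent

simultaneous = population * fraction_of_day
print(round(simultaneous / 1_000_000, 1), "million")  # → 48.6 million
```

So "50 million" is right on the nose, given the assumption.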
Given that there are certain times when defecation is more likely (when you get up in the morning, and perhaps also before bed in the evening), the number doing it at any given time may depend on the population density of the time zones matching those times of day. First thing in the morning in China is likely to see a lot more poopers than the similar time in the mid-Pacific. — Preceding unsigned comment added by 81.131.178.47 (talk) 14:37, 8 February 2016 (UTC)[reply]
Today's SMBC comic [25] is highly relevant to this question [26] . SemanticMantis (talk) 18:29, 8 February 2016 (UTC)[reply]
Which of those two links should I follow? —Tamfang (talk) 08:10, 10 February 2016 (UTC)[reply]

Perspective machines

What's a perspective machine, or in particular, a railroad perspective machine? The main source for Nester House (Troy, Indiana) says "The building's 1863 design is attributed to J. J. Bengle, the inventor of the railroad perspective machine." Google returns no relevant results for <perspective machine>, and the sole result for <"railroad perspective machine"> is this main source. Nyttend (talk) 15:46, 8 February 2016 (UTC)[reply]

I haven't the foggiest but my guess would be that he invented a machine that helped with making accurate perspective drawings. Architectural drawings showing a building from an angle are normally axonometric projections where parallel lines stay parallel rather than using perspective. A nice perspective drawing helps with selling a design to a client. Dmcq (talk) 16:20, 8 February 2016 (UTC)[reply]
Just had a look around and machine like what I was thinking of, the 'perspectograph plotter', was made in 1752 by Johann Heinrich Lambert, see [27], which is before that man's time. So it was either something else or a refinement on that. Dmcq (talk) 16:39, 8 February 2016 (UTC)[reply]
There are several kinds of quasi-realistic perspective - "single point" and "two point" being the most commonly mentioned. I wonder whether the term "railroad perspective" might refer to single-point perspective - implying that the way that two parallel railroad rails seem to meet at the horizon. This is just a guess though...take it with a pinch of salt! SteveBaker (talk) 17:04, 8 February 2016 (UTC)[reply]
Yes, long parallel straight lines are relatively rare in nature, and in that time frame railroad rails would have been an ideal application for a perspective drawing. StuRat (talk) 17:22, 8 February 2016 (UTC)[reply]
My thoughts exactly. Thinking about a railroad "perspective-machine" didn't get me very far - but thinking in terms of a "railroad-perspective" machine definitely makes me suspect that we're thinking in terms of a single-point projection. Our article on Perspective mentions the word "railroad" three times when discussing this - so I'm starting to believe that this must be what's meant here. SteveBaker (talk) 17:31, 8 February 2016 (UTC)[reply]
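The single-point "railroad" effect being discussed falls straight out of the perspective divide; a toy sketch (illustrative numbers only, nothing to do with Bengle's actual machine):

```python
# One-point perspective: points on two parallel rails (x = ±half_gauge)
# are projected onto an image plane via x' = f * x / z. The projected
# separation shrinks as 1/z, so the rails appear to meet at a single
# vanishing point on the horizon.
f = 1.0              # focal length of the (hypothetical) image plane
half_gauge = 0.7175  # half of standard gauge (1.435 m), in metres

for z in [2, 10, 50, 250]:           # distances along the track, metres
    screen_x = f * half_gauge / z    # perspective projection
    print(f"z = {z:>3} m  rail offset on screen: +/-{screen_x:.4f}")
# As z grows, the offset tends to 0: both rails converge on x' = 0,
# the vanishing point of all lines parallel to the viewing axis.
```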
Typeset content describing the building in the cited PDF says "railroad perspective machine" and "Bengle", but the hand-written inscription on the drawing of the building says "railway perspective machine" and spells the name "Begle" (no "n" in it). Googling for "railway perspective" finds tons of hits for the same one-point perspective that SteveBaker suspected. I'm not finding anything in Google's patent database for Begle though ("perspective" is a poor search term, since damn near every object patent includes a perspective drawing of it). DMacks (talk) 20:29, 8 February 2016 (UTC)[reply]
This newspaper article confirms that a "J. J. Bengle" lived in Denison, TX in 1907. I don't know how that ties in with any other known dates and places of residence of the architect. The newspaper article does not give any helpful details - "J. J. Bengle has returned from a trip to Galveston and other points." That's it in its entirety, I'm afraid. Tevildo (talk) 21:01, 8 February 2016 (UTC)[reply]
I often wonder how people would feel, knowing that their only mark on modern history is the fact that they once returned from Galveston. :-( SteveBaker (talk) 15:17, 11 February 2016 (UTC)[reply]
When my father was employed by the State Railways many years ago, as an Inspector of Permanent Way, he showed me a device he used which I recall was called a "perspective sight". It was essentially a modified pair of binoculars. It is critical that railway lines be accurately parallel and straight, but they get out of true over time for various reasons. Bad weather (washouts from exceptionally heavy rain) and extremely hot days can cause the lines to buckle. If you look with the naked eye, you cannot see buckling that will derail a speeding train. Binoculars foreshorten perspective, so if you stand between the two railway lines and look along the track with binoculars, you see the distance reduced, and because of the binoculars' magnification, any buckling becomes easily visible. The binoculars the Railway supplied (the "perspective sight") had an adjustable pair of lines that converge on a point (the vanishing point). You adjusted the lines so that they aligned with the railway lines - giving a minor advantage in seeing any buckling. There were horizontal calibration marks (which have non-linear spacing due to viewing height & perspective) so that the inspector could say to the maintenance crew things like "go forward 320 metres and straighten there." They had a special instrumented carriage for detecting rail misalignment, but the binoculars facilitated a quick response to any problem due to extreme weather, regardless of where the instrument carriage was. 1.122.229.42 (talk) 00:53, 9 February 2016 (UTC)[reply]
  • As a matter of curiosity, what country's "State Railways" did he work for? --76.69.45.64 (talk) 05:13, 9 February 2016 (UTC)[reply]
That might explain why there was little concern about curved tracks...L-O-N-G stretches of dead straight train tracks there. SteveBaker (talk) 20:52, 9 February 2016 (UTC)[reply]
The South Australian Railways actually. And I'm not within 1000 km of Perth. The poster previously at 1.122.229.42. 58.167.227.199 (talk) 03:11, 11 February 2016 (UTC)[reply]
Nothing quite as long and straight as the Trans-Australian Railway I'd guess though, the curvature of the earth probably matters more there! Dmcq (talk) 16:26, 11 February 2016 (UTC)[reply]
Excellent info ! StuRat (talk) 00:58, 9 February 2016 (UTC)[reply]
Wow! That's a typically ingenious invention for the era. Sadly, these days a couple of visible light laser beams would make a much simpler and more efficient solution. I wonder how they coped with warping around curves and across varying slope though. SteveBaker (talk) 03:38, 9 February 2016 (UTC)[reply]
"Sadly"? What an odd perspective to find a simpler and more efficient solution to be sad. (No insult intended, just an observation.) Deli nk (talk) 14:20, 9 February 2016 (UTC)[reply]
Sadly - because I love the ingenuity of the binocular approach...while recognizing that using a couple of lasers is probably a more efficient way to do it these days. SteveBaker (talk) 20:52, 9 February 2016 (UTC)[reply]
Evenings and mornings was just what I was going to suggest, when you still have enough light to see the tracks, but not so much as to drown out the laser. That would make the inspector crepuscular. StuRat (talk) 03:31, 11 February 2016 (UTC)[reply]

Technology for the disabled

What is the current status for:

  1. Body part less people.
  2. Blind sighted people. exclude surgery.

Are there any satisfactory mechanisms out there to grant capability?

Apostle (talk) 18:31, 8 February 2016 (UTC)[reply]

Fixed title to be proper English. StuRat (talk) 18:33, 8 February 2016 (UTC) [reply]
-- Apostle (talk) 22:36, 8 February 2016 (UTC)[reply]
1) I assume you mean people missing body parts. See prosthetics.
2) I don't think most causes of blindness can be addressed without surgery, assuming implanting electrodes into the brain is considered to be surgery. I think there was some research on attaching a grid of electrodes (with just tape) on the back, and using those to convey visual images, so that might qualify. StuRat (talk) 18:35, 8 February 2016 (UTC)[reply]
There is an enormous amount of technology for the blind - from talking clocks to software able to scan a printed document and turn it into artificial speech. — Preceding unsigned comment added by 81.131.178.47 (talk) 18:56, 8 February 2016 (UTC)[reply]
Some blind people use a device that helps them to "see" using their tongues [28] [29]. SemanticMantis (talk) 21:16, 8 February 2016 (UTC)[reply]
I'll go through the links... Thank you -- Apostle (talk) 22:36, 8 February 2016 (UTC)[reply]
About number 2): BBC was showing a programme where a blind woman was viewing through her eyes (black & white) fuzzily. The mechanisms they implanted inside her eyes apparently need to be repaired every 6 months. There was also an electrical box, probably connected to her brain... - can't recall properly.
The technology was very depressing, knowing that it's the 21st century (or something). -- Apostle (talk) 22:36, 8 February 2016 (UTC)
See visual prosthesis for this particular type of device. Tevildo (talk) 23:10, 8 February 2016 (UTC)[reply]
The technology to interface nerve fibers to electronics is extraordinarily difficult. It's not like there is a red wire labelled "Video In" in the interface between eyes and brain - instead there is a large bundle of unlabelled nerves - all different from one person to another. It's not like each nerve is a "pixel" or anything useful like that. Maybe one of them says "There is a high contrast, vertical line, about a quarter the height of the retina that's moving left to right" - figuring out what to say to each nerve from a camera is beyond what we can currently do...we can try to rely on brain plasticity to cope with whatever incorrect data we're sending - but that's how you end up with fuzzy, low-resolution monochrome - and experimental devices that don't survive long-term implantation. Also there are at least a dozen reasons why someone might be blind - and each one needs a separate, and equally difficult solution. This is an exceedingly difficult problem and it may be decades before we have something that truly works and is actually useful to people. SteveBaker (talk) 03:34, 9 February 2016 (UTC)[reply]
The neural plasticity is exactly what they rely on. The brain has an amazing ability to learn, and this includes learning which nerve corresponds to which pixel. And, for people who have been blind all their life, the mapping would never have been defined in the first place, since that happens as a baby, based on visual feedback. As for how to teach the brain quickly, I would suggest hooking up only the corner pixels in the image frame first, then, once they have been learnt, add more pixels, maybe one at a time, until the full grid has been learned. StuRat (talk) 18:44, 9 February 2016 (UTC)[reply]
My mistake. I recall now that it was a grey-black background instead of black, with white/light-coloured objects that she had to differentiate; that was the only colour she could see. The image via her eyes looked as if you were turning a TV on and off every 3-5 milliseconds or so. She did/might have had a box (unless I'm confusing this with another programme).
Thank you all once again. I'll definitely look into it... Regards. -- Apostle (talk) 22:09, 9 February 2016 (UTC)[reply]

StuRat, SemanticMantis, Tevildo, SteveBaker: Just for clarification - Will it ever be possible to create glasses (or any other thing) for the blind people so that they can see, without an operation; given all the above facts still? -- Apostle (talk) 21:09, 11 February 2016 (UTC)[reply]

This isn't a field in which I can confidently speculate, but it might be possible to stimulate the visual cortex with some form of RF or magnetic system fitted to the glasses - see Deep transcranial magnetic stimulation. Whether that will ever be safer than a surgical implant is another matter. Tevildo (talk) 21:40, 11 February 2016 (UTC)[reply]

Accelerating a particle with light

If I accelerate a tiny speck of dust using light, what maximum speed could it reach? Let's suppose that hypothetically we can know exactly where this speck of dust is, and that we know how to point a laser at it. --Scicurious (talk) 19:22, 8 February 2016 (UTC)[reply]

Theoretically you could accelerate it to almost the speed of light. StuRat (talk) 19:24, 8 February 2016 (UTC)[reply]
Assuming you find a void in space that (with much luck) presents no molecule of gas to hinder the speck's progress, there is still microwave background radiation defining an approximate cosmic rest frame, which would become blue-shifted ahead of the particle as it accelerates, even as the light source you use becomes red-shifted - also starlight of course, which is similarly in a fairly consistent rest frame all around. As a result, if you assume a constant light intensity in a perfectly focused beam, I think there would be a maximum level that you can use at the beginning to avoid vaporizing the particle, which eventually becomes weaker than the oncoming radiation. On the other hand, if you continue to turn up your light source (or increase its frequency) then I suppose the particle might accelerate without limit, coming arbitrarily close to light speed. Unless, of course, I forgot something else... Wnt (talk) 19:52, 8 February 2016 (UTC)[reply]
Isn't this how solar sails work? Nyttend (talk) 21:10, 8 February 2016 (UTC)[reply]
So, you can approach the speed of light as much as you want, but never actually reach it? --Scicurious (talk) 16:15, 10 February 2016 (UTC)[reply]
Yes, for two reasons.
1) Even with conventional Newtonian physics, you could never accelerate one object with mass to match the speed of another by having them hit each other. Momentum is conserved, so even if a star runs into a proton, the mass of the proton + star is now slightly more, and the combined speed must be slightly less for the total momentum to stay the same.
2) Relativity prevents objects with mass from being accelerated to the speed of light, although this is tricky as it depends on precisely how "mass" is defined. See rest mass. StuRat (talk) 21:17, 10 February 2016 (UTC)[reply]
The faster something moves, the heavier it becomes (relativistic mass). The kinetic energy of its motion, as viewed from your rest frame, is a kind of energy, and has mass per E=mc2. The more kinetic energy you add to the particle, the more massive it becomes, and the more energy it takes to speed it up. In the extreme case, all of the (relativistic) mass of a photon is energy - you might add more energy to it, but the mass increases in direct proportion, so the speed never changes. I should note that relativistic mass has become unpopular in recent years, but I feel like that's a fad - since ultimately, many kinds of "rest" mass are predicated on the kinetic and potential energy of the constituent particles. Wnt (talk) 16:02, 11 February 2016 (UTC)[reply]

Immunity vs resistance

Is there a difference, and if so, what is it? Are they the same but used for different species, or is there a clear but subtle difference? In other words, does "She is immune to the flu" mean the same as "She is resistant to the flu"? What about "This strain is resistant to drug X" and "This strain is immune to drug X"? 140.254.77.216 (talk) 19:51, 8 February 2016 (UTC)[reply]

"Immune" means 100%, unless some qualifier is added like "partially immune". "Resistance" is less than 100%. StuRat (talk) 19:54, 8 February 2016 (UTC)[reply]
The problem here is that you are using a literary definition of immune, StuRat, and that while I agree with you in that way, SemanticMantis and the heretical Wnt much more closely approach the received biological notion. In the school where I got my undergrad biology major (focusing in botany), you had to have four years of chemistry and four years of bio-major "bio" before you could even apply to take Immunology 396. So I would take their comments as read. μηδείς (talk) 02:47, 9 February 2016 (UTC)[reply]
You know, I can see how you'd think that. The problem is that your explanation is completely incorrect in terms of medical and physiological terminology. Immunity_(medical) discusses how the term is used. An easy example sentence "All vaccines confer immunity, but not all vaccines are 100% effective, and so some people who have acquired immunity from a vaccine may still get infected." My dictionary says "Immune: resistant to a particular infection or toxin..." Wiktionary says "Protected by inoculation", Miriam Webster says "having a high degree of resistance to a disease <immune to diphtheria>". The only time immune means 100% resistance is in fiction, games, or legal matters. SemanticMantis (talk) 21:28, 8 February 2016 (UTC)[reply]
Active immunity represents a process of natural selection within immune cells of the body (cell mediated immunity or antibody mediated immunity) by which molecules become common that (in some context) interact with a pathogen and allow it to be destroyed. In drug resistance, bacteria produce molecules that neutralize a drug, frequently by enzymatic means, often using plasmids to allow trading of useful resistances within a broader genetic background. So the selection for immunity takes place within an organism, but the selection for resistance occurs between organisms - most bacteria die, a few live and become resistant. So to be "resistant" to something is more of an inborn trait, generally speaking, while "immunity" usually implies past exposure to the agent or a vaccine etc. Exception, sort of: multidrug resistance in cancer occurs within an organism. But if you look at it another way, every cancer cell is out for itself, and (apart from the one that mutates) is either born resistant or not. Another exception, sort of: innate immunity may not require a selective response; the thing is, we rarely hear that someone is innately immune to a pathogen because they never know they might have gotten sick. This reminds me, say, of toxoplasmosis which preferentially affects those of blood type B. (There was actually a huge outbreak in postwar Japan, and Japanese became known for "blood type personality theory", to this day never having been aware of the role of the protozoan in affecting their minds...) Wnt (talk) 20:05, 8 February 2016 (UTC)[reply]
Wnt I work at a research institution where several groups study Toxoplasma gondii and I don't think I've ever heard of a connection between ABO blood type and susceptibility to infection. For the sake of satisfying my curiosity, could you link me to where you read that, (or maybe I misunderstood what you said up above). Thanks, PiousCorn (talk) 06:03, 9 February 2016 (UTC)[reply]
@PiousCorn: I don't remember which source I originally went by, but [30][31] mention it. On the other hand [32] reports a lack of association with B blood type ... but rather, with Rh negative status! Also [33] says that. I had found the B blood type association in an older source ( [34] ) in a question I asked back in 2010 about it. [35] I think even back then I had lost track of some earlier source specifically about the Japan postwar outbreak... Wnt (talk) 09:22, 9 February 2016 (UTC)[reply]

February 9

Synthetic turquoise

Is there such a thing as fully synthetic turquoise (as opposed to imitation turquoise)? If so, how is it synthesized? 2601:646:8E01:9089:14B5:216D:30B1:F92 (talk) 06:02, 9 February 2016 (UTC)[reply]

The second sentence of the lede in our article Turquoise says "In recent times, turquoise, like most other opaque gems, has been devalued by the introduction of treatments, imitations, and synthetics onto the market. - so evidently, there are synthetic stones out there. Geology.com says "A small amount of synthetic turquoise was produced by the Gilson Company in the 1980s...It was a ceramic product with a composition similar to natural turquoise." - so I guess it's arguable that this was not truly a synthesis of a material identical to the real thing. It goes on to say: "Synthetic turquoise, and turquoise simulants have been produced in Russia and China since the 1970s." - but no clue as to the manufacturing methods. SteveBaker (talk) 13:40, 9 February 2016 (UTC)[reply]
I found the Gilson name also - searching brings up a chemical analysis of a different synthetic [36] - seems like this one is not perfect somehow - not sure how to define a yes or no answer about it though. Wnt (talk) 15:59, 9 February 2016 (UTC)[reply]
Whew! So from what I gather, so far nobody made the real thing in the lab? That's good news for me, thanks! 2601:646:8E01:9089:A082:3561:E888:76F (talk) 01:02, 10 February 2016 (UTC)[reply]
Maybe. "The Real Thing" is a little tricky here. Just how close do you have to get before you say it's "real"? SteveBaker (talk) 15:33, 10 February 2016 (UTC)[reply]
A real gem comes from a little yellow idol, or the Cold Lairs, or is waiting for you behind the ranges... DuncanHill (talk) 15:44, 10 February 2016 (UTC)[reply]

Weight of paper

What will be the weight in kilograms of 0 r5eams of 60gsm paper having dimensions 10'x11x1' is this paper of A4 size.223.176.51.205 (talk) 12:09, 9 February 2016 (UTC)[reply]

This looks like your homework question. Wikipedia doesn't do students' homework for them because that would negate the benefits of practicing at home. If there is some part of the question that you don't understand, or you have got stuck part way through, ask a relevant question about the part you don't understand and we will try to point you in the right direction. Dolphin (t) 12:21, 9 February 2016 (UTC)[reply]
Also look up 'ream of paper' as it says how many sheets you have, the dimensions don't tell you that. Dmcq (talk) 12:50, 9 February 2016 (UTC)[reply]

No, this is not a homework problem - I am not a paper technologist. I know 1 ream has 500 sheets, but I don't understand the basis weight concept. Please tell me the weight of 1 ream of paper, or of 1 of the 500 sheets, or how to calculate the weight, because I cannot make it out from websites.223.176.51.205 (talk) 12:57, 9 February 2016 (UTC)[reply]

A4 sized paper is .297 metres times .210, so a single sheet of paper has an area of approximately 0.062 square metres. Each square metre weighs 60g (as in 60 gsm: grammes per square metre). Thus 500 sheets weigh 500 x 60 x .210 x .297 = approx 1871 g, i.e. about 1.87 kg.--Phil Holmes (talk) 13:02, 9 February 2016 (UTC)[reply]
A4 is exactly a sixteenth of a square metre (0.0625) (see ISO 216 for details), so the weight is 500 divided by 16 times 60 g which is exactly 1.875 kg. In practice, Phil Holmes might be more correct because of the slight loss in cutting. Dbfirs 22:10, 9 February 2016 (UTC)[reply]
Really? If you're a "paper technologist" then you sure as hell ought to know OK, so you need to know that 'gsm' stands for 'grams per square meter'. You can easily calculate the total area of 500 sheets of paper of whatever size (length x width x number of sheets), convert to square meters. Then multiply by the gsm number to get the weight in grams. Then divide by 1,000 to get kilograms. SteveBaker (talk) 13:31, 9 February 2016 (UTC)[reply]
Steve, the OP said they were not a paper technologist. I'm not a linguist, but I know what "not" means. DuncanHill (talk) 13:36, 9 February 2016 (UTC)[reply]
Ooops! Sorry! My bad! SteveBaker (talk) 13:42, 9 February 2016 (UTC)[reply]
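SteveBaker's recipe (total area in m² × gsm ÷ 1000) checks out numerically; a minimal sketch using both the nominal 210 × 297 mm sheet and the exact ISO area of 1/16 m²:

```python
# Weight of a ream: sheets x area per sheet (m^2) x grammage (g/m^2) / 1000
SHEETS_PER_REAM = 500
GSM = 60                    # paper grammage, grams per square metre

a4_nominal = 0.210 * 0.297  # cut A4 sheet area, m^2 (= 0.06237)
a4_exact = 1 / 16           # exact ISO 216 area, m^2 (= 0.0625)

for name, area in [("nominal 210x297 mm", a4_nominal), ("exact 1/16 m^2", a4_exact)]:
    kg = SHEETS_PER_REAM * area * GSM / 1000
    print(f"{name}: {kg:.4f} kg")
# nominal: 1.8711 kg, exact: 1.8750 kg -- matching the figures above.
```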

FWIW, a "ream" used to be 20 quires - or 480 sheets. Blame the British <g>. Collect (talk) 16:37, 9 February 2016 (UTC)[reply]

NB, by definition a sheet of A4 paper has a surface area of 1/16 m2, or one sixteenth of a square metre. LongHairedFop (talk) 22:18, 9 February 2016 (UTC)[reply]
Knowing that, don't you just wish they'd put 512 sheets into a ream? SteveBaker (talk) 15:32, 10 February 2016 (UTC)[reply]
I do, it would be 2^5 square meters exactly.--Lgriot (talk) 20:25, 10 February 2016 (UTC)[reply]

Widely distributed species

Phrynobatrachus ogoensis is a species of frog from western and central Africa. According to the article, which correctly reflects the IUCN Red List source, it's found in a small area of central Gabon and near Robertsport in Grand Cape Mount County, Liberia. How can a species be found in both spots, yet nowhere in between? I understand the concept of a species existing in disconnected locations that were once connected, e.g. the freshwater eel species (can't remember which one) found both in Europe and North America, and a species that's been human-transported from one spot to another, e.g. rats and house sparrows, but I don't imagine people transporting just another frog species in this manner, and what about the climate/topography would prevent the frog from spreading any farther from its current limited habitats in these highly rainforested regions? Nyttend (talk) 14:04, 9 February 2016 (UTC)[reply]

Without knowing the specifics of frog distribution in Africa off the top of my head (man, if I had a dime for every time I said that phrase) there are a variety of elements in play that restrict species' expansion. As you note, the two areas may once have been contiguous and the species just died off in the middle areas. That (and the lack of further outward expansion) could be the result of many things, including direct human action altering waterways, draining marshes, and so on, or by various forms of pollution. Frogs are an indicator species (not in our article yet, so ref), which means that they are particularly susceptible to pollutants. In other words, the area between their current habitats might seem pristine to us and many other animals, but not to the froggies. It would also be interesting to see if there are other frog species that compete directly against ogoensis within the same ecological niche. Matt Deres (talk) 15:18, 9 February 2016 (UTC)[reply]
The obvious answer is that the two locations probably represent two distinct species. The two populations were treated as the same species back in the 40s (before DNA was known) and that conclusion has persisted given the lack of any subsequent scientific effort to confirm or deny whether these two populations are from a single species. IUCN itself says they probably aren't a single species, but that more investigation is needed. Dragons flight (talk) 15:37, 9 February 2016 (UTC)[reply]
It's entirely possible that the range was much broader, but has shrunk. Relict_(biology) describes this case. Think of how we have only small isolated patches left in the USA of old growth forest [37] or Tallgrass_prairie [38]. There are several species that may not exist only in those remnants, but will have very low density anywhere else.
I don't know specifically what's up with this one particular frog, but the situation you describe is entirely consistent with how we think about species distributions in a conservation/management context, and it's all too common of a story. While the CA tiger salamander is not so extreme, check out the isolated pockets in the distribution here [39]. Many other redlisted amphibians will have similarly disconnected distributions, as their habitats are degraded and they become extirpated from all but the most remote and inaccessible environs. SemanticMantis (talk) 18:50, 9 February 2016 (UTC)[reply]

The extinction of sandboxes

It looks like kids these days do not have access to sandboxes anymore (unless it's a sandboxed browser). When and how did this shift take place? Who decided that they should go? I suppose they were deemed unsafe, but was this move absolutely necessary? --Scicurious (talk) 14:04, 9 February 2016 (UTC)[reply]

I'm sure it frustrated cats in the neighborhood. ←Baseball Bugs What's up, Doc? carrots→ 14:34, 9 February 2016 (UTC)[reply]
This site declares "If there’s one thing that kids love more than slides and swing sets, it’s the sandbox! These can be found in all parks and playgrounds and kids can safely play all kinds of games in there, or build sand castles and other cool thing with the sand." However maintaining the sandbox requires protecting it from rain and from all animals and pets, including insects. Observing a child's play with toy models in a small sandbox is a form of non-directive Play therapy attributed to child psychologist Margaret Lowenfeld. AllBestFaith (talk) 14:54, 9 February 2016 (UTC)[reply]
(EC) 1) Plenty of kids have access to sandboxes. I think you must mean the decline of public sandboxes at children's parks, or perhaps you haven't noticed that small (coverable) backyard sandboxes like this [40] are still fairly common in the USA. 2) Very little is absolutely necessary. 3) Here is a selection of articles that describe some of the safety concerns [41] [42] [43]. I'm not sure about necessity and sandboxes, but exposing kids to Toxoplasma gondii seems like a good thing to cut down on, and that's just one of the more famous pathogens that can linger in sand... SemanticMantis (talk) 15:03, 9 February 2016 (UTC)[reply]
Yes, I mean the public ones, it seems that they are more difficult to protect than a little one in your backyard. Scicurious (talk) 15:36, 9 February 2016 (UTC)[reply]
Well put. The question also implies that this was an organized decision; toys fall in and out of fashion just like anything else. Matt Deres (talk) 15:20, 9 February 2016 (UTC)[reply]
I think it could be a health-hazard regulation. They could have been prohibited, in the same way that driving without a seat belt was banned. Scicurious (talk) 15:43, 9 February 2016 (UTC)[reply]


  • The OP's premise is patently wrong: nearly every public park in my metro area, including those built or renovated in the past 10 years, has a large open sand play area or sandbox in it. You can still buy sandboxes at Walmart and Target, and they sell large bags of "play sand" at Home Depot and Lowes. So the answer to the OP's "why?" question is "we can't tell you why, because the question makes no sense, because your premise is wrong". Unless the answer is "you aren't looking hard enough". --Jayron32 16:16, 9 February 2016 (UTC)[reply]
an aside on challenging the premise and reference desk conduct, e.g. who is supposed to do what.
  • Having observed a few new sandboxes in your area proves nothing about the trend. Let's see some sources, please. StuRat (talk) 17:15, 9 February 2016 (UTC)[reply]
No, our job, should we choose to volunteer our time, is to provide references. OPs can ask whatever questions they like. In the places I've lived, I think (WP:OR) I've seen a decline in public sandboxes for children. Put more carefully, I think fewer new playgrounds constructed in the USA in 2000-2015 have sandboxes than, e.g., those constructed in 1965-1980. Of course my observation could be incorrect too, so I've included a reference that gives some numbers below. In any case, demanding refs from the OP is not something we really should do; while we may appreciate refs, we can't demand them. Providing references is kind of what we're supposed to do here, isn't it? I think you've also expressed a desire to see less policy commentary on the ref desks, so please feel free to start a thread on the talk page if you'd like to discuss it further. I am collapsing this thread because it isn't terribly relevant to the OP, but feel free to uncollapse it if anyone thinks it needs full visibility. SemanticMantis (talk) 18:38, 9 February 2016 (UTC)[reply]
In regard to the premise, here [44] is a NYT article from 1995 that gives some numbers, and says there were far more sandboxes included in city parks in the past. To wit "Since the 1970's, no new or renovated city playground designs have included sandboxes unless requested and lobbied for by the community, which also must maintain them." If anyone wants to find other stats for other areas, I'm sure they'd be appreciated. It seems as though the prevalence of sandboxes may change throughout time and place, which should really surprise nobody. It is clear that at least in NYC, there has been a precipitous decline in public sandboxes since the 1970s. SemanticMantis (talk) 18:38, 9 February 2016 (UTC)[reply]
The time between when that article was written and the period it refers to as the halcyon days of sandbox glory is as long as the time between now and when the article was written. An article from 20 years ago saying how awesome life was 40 years ago isn't all that relevant to our discussion today. --Jayron32 01:05, 10 February 2016 (UTC)[reply]
So what? Do you really think there has been some resurgence of sandboxes since 1995? For that matter, the OP never gave a timeline; he could be thinking in comparison to 10 years ago, or maybe 50. Here's another article about NYC that says "the number of sandboxes has dwindled from a peak of seven hundred to only fifty or so today" [45]. That article is from 2010, so I don't think it's fair to say the numbers are out of date. I only looked for NYC because it's a big famous city with a large parks dept. I don't disbelieve that your metro area still makes new sandboxes with new parks, but it seems like you're trying very hard to disbelieve the fact that public sandboxes do seem to have declined in many areas. This seems to be coincident with increasing awareness of some health concerns, and in 2008 the National Sanitation Foundation did an extensive study. That study and others are reported on here [46] in 2015, where parasitic roundworms are also mentioned to have been found in 2/10 daycare sandboxes. It is indeed hard to find good references on numbers of municipal sandboxes. But the references I do have show a decline. They also show an increasing concern from public health officials and doctors. Given these references that I found, along with my personal observations, those of the OP, and those implicit in many of the public safety articles, I conclude that there has been a change in public sandbox incidence in many places in the USA. This does not preclude any new sandboxes being built in your neighborhood this year. SemanticMantis (talk) 16:46, 10 February 2016 (UTC)[reply]
Wikipedia has an article about Playground surfacing and there are a dozen options besides sand. The article does not mention a tendency towards other materials, but sand has many drawbacks, excepting cost, which is low. The Americans with Disabilities Act was passed in 1990, and sand does not comply with its requirements. So, it's clear to me that some communities could choose other materials for their playgrounds. And that's without entering into the Toxoplasmosis issue. Llaanngg (talk) 19:22, 9 February 2016 (UTC)[reply]
Yes, this is the big issue. Sand gets very dirty. Modern playgrounds are more likely to use rubber surfacing or maybe bark chippings. Blythwood (talk) 06:09, 10 February 2016 (UTC)[reply]
Sand isn't just used as a surface in a sandbox, it's used as a building material to build sand castles, etc. StuRat (talk) 20:56, 9 February 2016 (UTC)[reply]
Indeed. We're almost surely talking about sandpits here, not the open areas under/around whole playgrounds of equipment. DMacks (talk) 21:59, 9 February 2016 (UTC)[reply]
I don't know about in the US but, since retiring from my original occupation, I have worked as a relief caretaker in a number of local authority schools in the UK. One of the requirements of nurseries and early years units is that they must have provision for the children to play with sand and with water (usually both together). They often have facilities for this, both inside and outside and it is one thing that drives you mad when you have to clean it all up every evening - would you let your kids play with sand and water in a room with carpets? I have even worked at one nursery school where they had a one ton bag of soil and they asked me to regularly bring in a couple of buckets so the kids could mix it with water and sand and make mud pies - you can imagine the mess that made on the nursery carpets when they came back inside. The outside sandpit was always covered at night to stop cats and birds crapping in it but, obviously, in a public park it would be difficult to keep it covered and of course the public could drop sharp objects in it. However, the premise that children don't get to play in sand anymore certainly doesn't apply in the UK. Richerman (talk) 22:11, 10 February 2016 (UTC)[reply]
Clearly the problem there is not with the requirement that kids get to play with dirt, sand and water - but that they should do it indoors. Why not let them do it outside - when weather permits - and not otherwise? SteveBaker (talk) 19:15, 11 February 2016 (UTC)[reply]
They have automated catboxes that can comb the "lumps" out now. I wonder if a larger version could clean and then seal a sandbox at night. StuRat (talk) 22:24, 10 February 2016 (UTC)[reply]
No doubt such a thing could be devised - it could filter, wash and dry the sand at intervals and return it to the sand box - but the cost of building and running such a contraption would likely be prohibitive. Personally, I doubt that would be a good idea, even if it was plausible. There is undoubtedly a trend in trying to keep children super-clean and far from all bacteria and other such nastiness - but sadly, it starts to look like doing that causes them to fail to gain immunity to a lot of the things they encounter. There are suspicions that this may explain the increase in some diseases such as asthma - which is especially prevalent in children that are kept "too clean". As humans, our children evolved to sit around in dirt, sand, etc - it's dangerous to assume that cutting them off from those situations is a net advantage. SteveBaker (talk) 19:13, 11 February 2016 (UTC)[reply]

Starkiller Base superweapon

In Star Wars: The Force Awakens, General Hux gives the order to fire Starkiller Base's superweapon, which emits an energy beam strong enough to destroy entire planets. When I first saw the film, my suspension of disbelief briefly broke when I thought "there's no way that energy beam can travel lightyears in minutes", but then I thought "Hey, I'm watching a film with interstellar spaceships and talking aliens", and kept on with the story.

Now, onto my question. Suppose such an energy beam is possible. Ignore its power; it doesn't have to destroy anything, just reach its destination without spreading out and becoming too diluted. It can be just a fancy light show. But it has to be visible to the naked eye.

How would the people on the destination planet see it coming? Would it appear as a slowly-moving bright spot in the sky, getting gradually brighter, until it illuminated the whole sky? Or would the people just suddenly find the sky all illuminated? JIP | Talk 20:11, 9 February 2016 (UTC)[reply]

If it travels at the speed of light - they wouldn't see it at all until it arrived. If it travels faster than light then all bets are off because the laws of physics as we know them say that it's impossible - so any "What if..." answers would be nothing better than wild speculation.
In the real world, even a visible-light laser is invisible as it crosses a vacuum - and unless it has enough power to ionize the air and make it glow, it'll be more or less invisible all the way until it hits its target (maybe it might vaporize a few dust motes or something). If it is powerful enough to make the air glow, it still wouldn't be visible until it hit the air - it would pop into view as a glowing shaft of light in such a tiny fraction of a second that it would appear to be instantaneous.
But if it's fictional...it can look like whatever the director and the special effects department can imagine!
SteveBaker (talk) 20:44, 9 February 2016 (UTC)[reply]
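To put a number on the light-speed case: a beam travelling at c arrives together with any light it emits along the way, so the warning time is zero, and the travel time in years simply equals the distance in light-years. A minimal sketch (the 50-light-year distance is an arbitrary example, not anything stated in the film):

```python
# A light-speed beam gives no warning: it arrives with its own light.
# Travel time in years equals the distance in light-years.

LIGHT_YEAR_M = 9.4607e15     # metres in one light-year
C = 2.998e8                  # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7

def travel_time_years(distance_ly: float) -> float:
    """Years for light (or a light-speed beam) to cross the given distance."""
    return distance_ly * LIGHT_YEAR_M / C / SECONDS_PER_YEAR

print(travel_time_years(50))   # ~50 years: far from the "minutes" shown on screen
```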
OK, so it would go as I imagined, not as it was actually depicted in the film. I always thought the effect of a beam moving at light speed would have instantaneous effects when it's finally seen. Not like in the film where people can harmlessly watch it slowly approach for a few minutes, until it finally destroys the entire planet in a few seconds. I think the director made it move so slowly for dramatic effect. JIP | Talk 20:50, 9 February 2016 (UTC)[reply]
I haven't seen the film, but the effect sounds totally unlike a laser, and more like a plasma ball, as in Ball lightning, but perhaps containing a Quark–gluon plasma to carry that sort of energy. It would have to cover most of the distance via a created Wormhole. I suspect that the film-makers were more worried about the impression on the viewer than they were about explaining the exact physics. Dbfirs 21:55, 9 February 2016 (UTC)[reply]
Haven't seen the film, but if the region of space the beam passes through glows with ordinary light, and if the beam follows a spacelike path, then the beam would appear to emanate on the planet struck and move up into the sky. One way to see this is that if the beam is "instantaneous", linking the two planets at a single moment in their shared rest frame (assuming they're not moving relative one another) then it really isn't moving from one planet to the other - its appearance is symmetrical as seen from either world.
However, it is conceivable that the design of the beam would call for it to build up in a large spacelike path while the energy accumulated, but then one end gradually moves at a sublight speed toward the planet until it discharges, etc. As a rule, you can write apologia for the worst sci fi plots if you think them through carefully. Wnt (talk) 22:41, 9 February 2016 (UTC)[reply]
I see what you're saying - if the beam arrives faster than the light it emits along the way, then it's tempting to say that it's first visible where you are - then starts to appear backwards towards the source as the light from its passage catches up with its ultimate effect. But because the laws of physics don't allow for things that go faster than light, all bets are off. We can't make any reasonable statement about the physical reality of the square root of a negative number - and that's what the Lorentz transformation requires:
γ = 1/√(1 − v²/c²)
When v² is greater than c², v²/c² is greater than one - and we have the square root of a negative quantity. So the mass, length, time-dilation and energy of this superluminal 'effect' are all impossible to calculate. We know that in the real world, we never see the square root of a negative number in an actual result. It always cancels out somewhere else. So there is really very little likelihood of anything physical that can transmit information travelling faster than light...and if it did, the consequences are a mathematical impossibility. Causality itself falls by the wayside. Making any statement whatever about what that might look like is entirely unreasonable in light of what we know.
Possibly the only reasonable speculation relates to the (not-real) concept of tachyons - which hypothetically might travel at beyond the speed of light. The kind of crazy math that results from this is that tachyons would require infinite energy to slow down to the speed of light (a kind of mirror image of regular particles that need infinite energy to reach the speed of light) - and that their lowest energy state would be when they were travelling at infinite speed. So even if we take a BIG stretch into the most hypothetical physics, we end up with a weapon whose effects would travel at infinite speed and not take the time that the beam weapon in Star Wars takes.
All bets are off. This is a fictional thing - and the appearance of it is whatever your imagination (or the plot) needs it to be. SteveBaker (talk) 15:30, 10 February 2016 (UTC)[reply]
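For anyone who wants to see the "square root of a negative number" point as arithmetic, here is a minimal sketch of the Lorentz factor γ = 1/√(1 − v²/c²) in normalized units (c = 1); the two speeds are arbitrary examples:

```python
# The Lorentz factor is real for v < c but becomes imaginary for v > c,
# which is why the usual relativistic formulas break down for
# superluminal speeds. Units normalized so that c = 1.

import cmath

def lorentz_factor(beta: float) -> complex:
    """Gamma for speed v = beta * c; complex (unphysical) when beta > 1."""
    return 1 / cmath.sqrt(1 - beta**2)

print(lorentz_factor(0.6))   # ~1.25: ordinary time dilation at 0.6c
print(lorentz_factor(2.0))   # purely imaginary: no physical meaning
```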
Well, the laws of physics allow phenomena to go faster than light, just not information. For example, the owners of this death-ish star might have launched a bunch of probes to line up along the trajectory of the intended attack years in advance, then ceremoniously press the button at the exact time they were all timed to go off ... in which case you would see the closest ones to the planet light up first. (Just ask a 9/11 truther ... they put explosives inside the planet, and the death star firing at it is just a misdirection...) Wnt (talk) 17:58, 10 February 2016 (UTC)[reply]
There is, in fact, a detailed canon explanation for why the beam from Starkiller Base appeared in the manner that it did, but as it is pure science fantasy, I shall not sully the reference desk with such drivel. Anyone curious can look up the weapon's entry on the Star Wars Wikia. Someguy1221 (talk) 22:41, 10 February 2016 (UTC)[reply]
Indeed, it is possible for a phenomenon of some kind to travel faster than light - but not in this case since the beam carries the information that someone on the death star pressed the "DESTROY THE PLANET!" button (and when and in which direction it was aimed and a whole lot more besides). Since information cannot travel faster than light, neither can a functional death ray. So, yes, it is indeed still hogwash. SteveBaker (talk) 15:12, 11 February 2016 (UTC)[reply]

February 10

io photographs

When everyone was all excited about the New Horizons space probe reaching Pluto, I remember seeing photos of one of Jupiter's moons, called Io. Two of these photos stood out to me. They were possibly infrared photos or something similar. They appear to be black and white like THIS picture. You could see little bright spots/mushroom clouds from the volcanoes erupting. Where did these pictures go?! I can't find them on Google Images nor can I find them on Wikipedia. Can anyone help me find them? 199.19.248.82 (talk) 00:18, 10 February 2016 (UTC)[reply]

Our article on Io (moon) links to many great resources, including:
Nimur (talk) 03:02, 10 February 2016 (UTC)[reply]

hops as a preservative

The beer article mentions a few times that hops act as a preservative. Which chemical in hops, exactly, provides the preservative effect?

The beer article also says "the acidity of hops is a preservative", so would other acids work as well? Johnson&Johnson&Son (talk) 08:06, 10 February 2016 (UTC)[reply]

I notice that of the 2 references ([61] & [62]) used for that statement, the first no longer leads to relevant material and the second leads to the abstract of a possibly relevant article but does not mention the preservative property explicitly in the abstract (the property of aiding head retention is not quite the same thing).
From my own informally acquired knowledge of brewing, the preservative effect was the reason for the introduction of hops in the mediaeval period, after which the taste effect became appreciated, but in the modern era – with better control of hygiene in the brewing process – the preservative effect is less relevant and the effects on taste and other factors (e.g. mouthfeel) predominate.
I have a range of books about brewing at home which might contain the answer re hops, but will not be able to consult them until Thursday at the earliest. As for using "other acids", I'd assume it possible that other non-hop adjuncts formerly used such as sweet gale (Stonehenge Brewery still uses this for one seasonal beer) may have had preservative as well as flavouring effects. If however one was to use non-plant sources, I personally would no longer regard the resulting beverage as "beer" :-) . {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 15:16, 10 February 2016 (UTC)[reply]
There may be two effects at work here, and hops' role may be more in one than the other. The first is that hops may actually act as a preservative, that is, they may chemically prevent spoilage. The second effect is that hops may mask spoilage with their strong flavor. That is, you taste the hops rather than the spoilage in the beer. This reference, for example, notes that herbal mixtures (such as hops, but also other herbs and spices known as Gruit) "mask unpleasant spoilage notes". One of the characteristics of India pale ales, or IPAs, is their extremely high hop content, which covered the "skunky" or "stale" taste of beer shipped from Britain to India on long overseas voyages. This beer blog notes "High hop levels can preserve a beer’s flavor in two ways: they have a limited ability to protect beer from spoilage by some microorganisms, and, more importantly, their bitterness can mask stale flavors." (bold mine). Several other sources about IPAs note the use of larger quantities of hops than normal to mask staleness, spoilage, or undesirable flavors. --Jayron32 15:47, 10 February 2016 (UTC)[reply]
This reference says that the primary alpha acids humulone, cohumulone, and adhumulone have an antiseptic effect, especially against Gram-positive bacteria. DuncanHill (talk) 15:56, 10 February 2016 (UTC)[reply]
EC: If you scroll down your original link to hops, there's a subheading about chemical composition, which on expansion isn't simply about taste. It's the release of Alpha_acid and Beta_acid in the fermentation process that acts as a preservative. I think other acids could act as a preservative in beer, but then would it still be beer per se, as hops are an integral part of the process? When fermenting wine, Sorbic acid can be added as a preservative, so you could put some of that in fermenting beer as a preservative, I guess. I'm not sure how that would affect the rest of the fermentation process or the taste, though. Mike Dhu (talk) 16:16, 10 February 2016 (UTC)[reply]
As far as other acids working, yes, anything that moves the pH outside the range a particular bacterium likes will act as a preservative against that bacterium. However, there are acidophilic bacteria that may thrive in those extremes, so keeping those out is also important. Of course, the acid may also kill the yeast, so it could only be added after the brewing process is complete, and people won't like extremely acidic beer either, so it would need to be later neutralized. Thus, there are easier ways to preserve it with modern technology. StuRat (talk) 17:29, 10 February 2016 (UTC)[reply]
  • I will point out that hops is the closest biological and linguistic relative of hemp, (i.e., Cannabis), and that the two words are either cognates or very closely related wanderworts. See also soma, which seems to be some sort of brewed drink, perhaps from poppies or hemp. μηδείς (talk) 02:56, 11 February 2016 (UTC)[reply]
The berries of the "chequer tree" Sorbus torminalis were widely used in England to flavour beer before the arrival of new-fangled hops (in the 15th century). Our article describes them as "usually too astringent to eat until they are over-ripe". I don't think I've ever seen one, but they're related to the rowan which certainly has acidic-tasting berries. Alansplodge (talk) 17:50, 11 February 2016 (UTC)[reply]

Unknown bird

Can anyone help me identify the bird shown? The photo was taken in the Ngorongoro crater, in Tanzania, in January. Thanks.

Unknown bird

. --Phil Holmes (talk) 13:53, 10 February 2016 (UTC)[reply]

Rufous-tailed weaver. Mikenorton (talk) 13:58, 10 February 2016 (UTC)[reply]

That's the chap. Thanks for your help. --Phil Holmes (talk) 15:20, 10 February 2016 (UTC)[reply]

Resolved

water temperature and baby bath

If the baby bath water feels at all warm to the touch (hand or elbow) does that necessarily mean that its temperature is above 37C, since the human temperature is (approx) 37C? Or can the water be 32C, still warm to the touch, because there is a difference between how we sense temperature on our skin and our core body temperature? If the water is really 32C (as indicated by the thermometer), will it necessarily feel 'cold' since my core temp is 37C? I'm trying to understand if there is a difference between core body temperature, and our sensation on the skin of warm/cool. Thanks if you can point me to a credible info source. — Preceding unsigned comment added by 94.210.130.103 (talk) 20:25, 10 February 2016 (UTC)[reply]

Our article on thermoregulation covers some of this. Your specific questions about whether or not something will "feel cold" are going to be highly variable from person to person and at different times (as the link above suggests). Broadly speaking, we're not very good at gauging temperatures. Our article at thermoreceptor is not very detailed, but my own experience is that we seem to feel temperature changes rather than actual temperatures. Matt Deres (talk) 21:12, 10 February 2016 (UTC)[reply]
Yes, skin temperature is what is being compared. You can verify this yourself by cooling one hand (just go outside with only one glove, for a bit, in winter), then put both hands in water that you have verified is body temp with a thermometer. The water will feel much hotter on the cold hand. StuRat (talk) 21:11, 10 February 2016 (UTC)[reply]
Notice too that babies are more sensitive to temperature. Bath water should be close to 100 °F (about 37.8 °C, just above the 37 °C you mention) to prevent chilling or burning the baby. If in doubt, simply use a bath thermometer. --Scicurious (talk) 00:33, 11 February 2016 (UTC)[reply]
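As a quick sanity check on those numbers, the standard Fahrenheit/Celsius conversion can be sketched as follows (plain arithmetic, nothing assumed beyond the textbook formula):

```python
# Standard temperature conversions: F = C * 9/5 + 32 and its inverse.
# Body temperature 37 C is 98.6 F; 100 F is roughly 37.8 C.

def c_to_f(c: float) -> float:
    return c * 9 / 5 + 32

def f_to_c(f: float) -> float:
    return (f - 32) * 5 / 9

print(c_to_f(37.0))   # ~98.6
print(f_to_c(100.0))  # ~37.78
```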
Incidentally, the reason we're so poor at determining temperature by touch is that what our senses really detect is the rate at which heat flows out of or into the skin. That's a function of the temperature of the skin, the temperature of the thing you're touching, and the thermal conductivity of that thing. That's why wood feels warmer than metal when both are at the same (below 37C) temperature...wood is a poor conductor of heat and metal is a good one, so we are fooled into thinking that metal is "colder" because the heat leaves our skin much faster than it does when touching wood.
StuRat's example of sensing temperature with a hand which is cold is also caused by this since the rate of heat flow into the cold hand is faster than into the warm hand.
Bottom line is that we simply don't have a sense that can judge temperature directly...even though we all seem to think that we do.
SteveBaker (talk) 15:05, 11 February 2016 (UTC)[reply]
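The rate-of-heat-flow point can be illustrated with a rough conduction estimate, q = k·ΔT/L. This is an idealized sketch: the conductivity figures are approximate textbook values and the 5 mm layer is an assumed geometry, not a physiological measurement.

```python
# Rough sketch of why metal "feels" colder than wood at the same temperature:
# steady-state conductive heat flux q = k * dT / L scales with conductivity k.
# All figures are approximate and the geometry is idealized.

def heat_flux(k_w_per_mk: float, skin_c: float, object_c: float, thickness_m: float) -> float:
    """Heat flux (W/m^2) from skin into an object across a thin layer."""
    return k_w_per_mk * (skin_c - object_c) / thickness_m

skin, room = 34.0, 20.0   # typical skin-surface and room temperatures, deg C
layer = 0.005             # assumed 5 mm conduction layer

wood = heat_flux(0.15, skin, room, layer)    # k for softwood ~0.15 W/(m*K)
steel = heat_flux(45.0, skin, room, layer)   # k for steel ~45 W/(m*K)
print(steel / wood)       # steel pulls heat out ~300x faster than wood
```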

What do the X and Y (of chromosomes) stand for?

93.126.95.68 (talk) 20:45, 10 February 2016 (UTC)[reply]

See X chromosome, Y chromosome and XY sex-determination system. --Jayron32 20:47, 10 February 2016 (UTC)[reply]
They don't stand for anything, that's what they actually look like: [47]. Other than the Y chromosome, most healthy human chromosomes look something like an X, with some looking more like a U or V: [48] (see image 5). The Y chromosome, on the other hand, is missing a part, and that makes it look more like a Y. StuRat (talk) 20:54, 10 February 2016 (UTC)[reply]
No, that's incorrect. According to our article on X_chromosome it was so-named because "...Henking was unsure whether it was a different class of object and consequently named it X element, which later became X chromosome after it was established that it was indeed a chromosome. The idea that the X chromosome was named after its similarity to the letter "X" is mistaken. All chromosomes normally appear as an amorphous blob under the microscope and only take on a well defined shape during mitosis." And according to our article on Y chromosome, that name was chosen simply because it came after "X". Matt Deres (talk) 21:04, 10 February 2016 (UTC)[reply]
Interesting, but I bet those temporary names would have soon been replaced, had they not turned out to physically match the appearance of each during mitosis. (To me, the more obvious terms would have been "male" and "female" chromosomes.) StuRat (talk) 21:08, 10 February 2016 (UTC)[reply]
The ZW_sex-determination_system also doesn't have chromosomes that look like letters, and the letters don't stand for anything there either. The other main sex-determination system is X0_sex-determination_system, but I'm not sure if the X looks like an X there or not. SemanticMantis (talk) 22:10, 10 February 2016 (UTC)[reply]
Your bet would be foolish. All chromosomes look like an X (during early mitosis). Why aren't they all called X according to your 'logic' then? Fgf10 (talk) 08:04, 11 February 2016 (UTC)[reply]
For the same reason you don't give all your kids the same name (unless you're George Foreman). Because it would obviously be confusing to call them all the same thing. Of the sex chromosomes, only one type looks like an X and the other resembles a Y. StuRat (talk) 16:21, 11 February 2016 (UTC)[reply]

Does DEMKO approve Schuko (CEE 7/7) plugs?

I owned an old washing machine which had a Schuko plug (the "French-German compromise" CEE 7/7): among various certification labels (VDE, CEBEC, ÖVE...) there was also the symbol of DEMKO, even though Denmark did not accept Schuko plugs until very recent times. Can someone tell me why there was a DEMKO certification label on that plug?--Carnby (talk) 21:13, 10 February 2016 (UTC)[reply]

Yes. The Schuko plug originates in a patent granted in 1930 to a Bavarian manufacturer, Bayerische Elektrozubehör AG. The company's ambition, now partly realized, was to create a Europe-wide standard. It would be natural to seek individual European national approvals, especially in countries bordering Germany that are markets for German goods, at the earliest opportunity so that the approval logo could be included on the injection-moulded plug. DEMKO, the national body for testing of electrical products sold in Denmark, existed already before the Schuko patent(s) and could issue its D-Mark approval at any time. However, since 1978 electrical products no longer need to carry the D-Mark for sale in Denmark. Safety note: A Schuko plug for a metal-cased washing machine is safe to use with an earthed Schuko wall socket, but it creates a safety hazard if plugged into a different, non-earthed 2-pole socket. AllBestFaith (talk) 13:47, 11 February 2016 (UTC)[reply]

Climate averages of Bacău

The page about the Romanian city of Bacău still has no climate averages; could someone please tell me where I can find a reliable source about climate averages for Bacău region?--Carnby (talk) 21:20, 10 February 2016 (UTC)[reply]

Have you tried Weather Underground (weather service)? I think they usually have this information somewhere for many places. It's usually my first stop for weather info. --Jayron32 21:23, 10 February 2016 (UTC)[reply]
AFAIK Wunderground does not show reliable climate averages (WMO recommends at least 30 years of daily record[ing]s)--Carnby (talk) 21:55, 10 February 2016 (UTC)[reply]
The Romanian Wikipedia has this at ro:Bacău, cited to the Administrația Națională de Meteorologie (which would be your best bet for further info):
Climate data measured at the Bacău weather station
Month: Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
Minimum temperature (°C): −4.13, −4.58, −0.30, 5.04, 10.18, 13.92, 15.83, 15.95, 10.37, 5.60, 0.85, −1.45
Maximum temperature (°C): 2.38, 2.50, 9.68, 15.73, 22.35, 25.82, 28.77, 28.45, 21.84, 16.43, 8.30, 3.86
Smurrayinchester 14:56, 11 February 2016 (UTC)[reply]

For how long can bacteria and viruses live outside the body (not in laboratory conditions)?

For how long can bacteria and viruses live outside the body, not in laboratory conditions? For example, if someone has influenza or a bacterial disease and sneezes, spreading the bacteria or viruses onto a bed / table / chair etc. (other places that people often touch), would someone touching those places be infected? (People say that HIV, for example, is destroyed within seconds of leaving the body. Is that true?) 93.126.95.68 (talk) 21:23, 10 February 2016 (UTC)[reply]

It varies a lot, depending on the pathogen. Since you mentioned HIV specifically, no, that is not true. Or at least it is not generally true that the virus always is destroyed within seconds of leaving a body.
See this study published in 2007 [49]. Here's also a nice overview of virus survival in the environment [50]; it discusses several different groups. SemanticMantis (talk) 22:03, 10 February 2016 (UTC)[reply]
Thank you for the information. The study is amazing. 93.126.95.68 (talk) 22:44, 10 February 2016 (UTC)[reply]

Does chlorine destroy viruses like it does bacteria?

Can chlorine destroy viruses like it does bacteria? If it can, what is the mechanism? 93.126.95.68 (talk) 23:00, 10 February 2016 (UTC)[reply]

This is a substantially complicated subject. The short answer is yes, and it varies. There are LOADS of resources if you google chlorine virus inactivation. Can you make your question more specific? Or at least convince us this isn't a homework question? Vespine (talk) 00:15, 11 February 2016 (UTC)[reply]

Yes, it can. Chlorine reacts with double bonds (see Halogenation). Bleach (sodium hypochlorite, NaOCl) works in a somewhat similar way. I won't get into how the reaction works on the molecular level, but viruses, like bacteria, contain double bonds between two carbon atoms. Chlorine reacts with those double bonds. The usual result is a carbon-carbon single bond with a chlorine atom on each carbon. Disruption of the double bonds either destroys the virus's protective coat or its ability to reproduce, or both. Roches (talk) 00:23, 11 February 2016 (UTC)[reply]

Thank you. I asked just to know whether cleaning an area with chlorine also works against viruses. Today, when I was cleaning the work surface in the kitchen, the question arose in my mind. No homework at all. 93.126.95.68 (talk) 01:27, 11 February 2016 (UTC)[reply]
The answer is still "It depends." Some viruses have thick protein coats which require a higher concentration of chlorine to inactivate them. Generally, according to a US Centers for Disease Control study, many of the enteroviruses (among the viruses they cited were Hepatitis A, Poliovirus, the Noroviruses (implicated in outbreaks of food-borne illness), and Rotavirus) are "moderately" resistant to chlorine's disinfectant effects, compared to bacteria. However, sodium hypochlorite-based cleaners such as the "Clorox" brand disinfectants are over ten times more effective than disinfectants using alcohol, phenol, or quaternary ammonium compounds at killing both bacteria and viruses.
The micro-organisms most resistant to chlorine disinfection, according to this study, are the protozoa, and some of these can cause very nasty diseases: Entamoeba histolytica, Giardia intestinalis, Toxoplasma gondii and Cryptosporidium parvum were cited in particular as being both highly resistant to chlorine disinfection and persistent to various degrees in water supplies, with Cryptosporidium parvum being the most troublesome micro-organism found in water supplies; it caused the largest waterborne-disease outbreak ever documented in the United States, making 403,000 people ill in Milwaukee, Wisconsin in 1993. loupgarous (talk) 07:18, 11 February 2016 (UTC)[reply]
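A standard way that water-treatment engineers quantify the "it varies" above is the CT concept: disinfectant concentration (mg/L) multiplied by contact time (minutes) for a target level of inactivation. The sketch below is purely illustrative; the CT figures are ballpark placeholders for free chlorine, not regulatory values, and real numbers depend strongly on pH, temperature, and the exact organism. Still, the ordering bacteria < viruses < protozoa discussed in this thread falls straight out of the arithmetic:

```python
# The CT concept: required disinfectant "dose" is concentration (mg/L)
# multiplied by contact time (minutes) for a target log-inactivation.
# The CT figures below are ILLUSTRATIVE BALLPARK values for free chlorine
# (2-log, i.e. 99%, inactivation), NOT regulatory numbers.

ILLUSTRATIVE_CT = {              # mg*min/L
    "E. coli (bacterium)": 0.05,
    "poliovirus (virus)": 2.0,
    "Giardia cyst (protozoan)": 100.0,
}

def required_minutes(ct_value: float, residual_mg_per_l: float) -> float:
    """Contact time (min) needed at a given free-chlorine residual."""
    return ct_value / residual_mg_per_l

for organism, ct in ILLUSTRATIVE_CT.items():
    print(f"{organism}: {required_minutes(ct, 0.5):.1f} min at 0.5 mg/L")
```

At a typical household-bleach-solution residual, the bacterium is gone in seconds, the virus in minutes, and the protozoan cyst only after hours, which is why Cryptosporidium is such a problem for chlorinated water supplies.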

What is the physiological reason for inappetence?

In a lot of conditions, especially infections, there is inappetence. What is the physiological reason for that? (I know, for example, that fever is produced to help destroy the bacteria and viruses.) I thought the explanation was that the body wants to fight the pathogen, and eating interferes with that because the body needs to invest energy in digestion. Am I right? 93.126.95.68 (talk) 23:10, 10 February 2016 (UTC)[reply]

I don't think "inappetence" is an actual English word, or at least not one commonly used in medicine or biology. The usual technical term is "anorexia".
Unfortunately, for a lot of people, that word has become synonymous with anorexia nervosa, and in fact anorexia is a link to that article. Our article for what you want is at anorexia (symptom). That's probably where you should start looking. --Trovatore (talk) 23:40, 10 February 2016 (UTC)[reply]
Indeed, inappetence redirects to Anorexia (symptom) (which is different from anorexia).--Scicurious (talk) 23:44, 10 February 2016 (UTC)[reply]
I'd just use the common term "lack of appetite". That's clear to everyone, except perhaps a geologist. :-) StuRat (talk) 00:20, 11 February 2016 (UTC)[reply]
If you want to search the technical/medical literature, it's probably good to know the name, which is "anorexia". You can use "-nervosa" to filter out that condition.
It seems "inappetence" actually is a word, at least according to Wiktionary, but I still think you are not likely to find much in English under that name. --Trovatore (talk) 00:23, 11 February 2016 (UTC)[reply]
Yes, it's in the OED with cites from 1691 to 1887. Dbfirs 11:59, 11 February 2016 (UTC)[reply]
In the case of an intestinal infection, like the flu, the body can't always tell it from food poisoning, so avoiding any more (potentially bad) food until the condition clears is the wise course of action. StuRat (talk) 00:24, 11 February 2016 (UTC)[reply]
Please don't answer science questions in terms of natural teleology. Wisdom is a function of conscious reasoning, not unconscious bodily reactions. --76.69.45.64 (talk) 19:51, 11 February 2016 (UTC)[reply]
  • Without having any idea of the answer, given that I focused on botany in my undergrad Bio major, the OP's question was well formed, and anorexia as a psychological condition has quite a different meaning from mere physiological inappetence due to a temporary infection. I find the above responses vary between irrelevance and rudeness. μηδείς (talk) 02:45, 11 February 2016 (UTC)[reply]
    Anorexia nervosa is a psychological condition. Anorexia by itself is lack of appetite. --Trovatore (talk) 03:47, 11 February 2016 (UTC)[reply]
In fact, our article, which was linked above by Scicurious over 3 hours before Medeis's reply (so I guess it is one of the rude or irrelevant replies), includes links to the International Statistical Classification of Diseases and Related Health Problems and Medical Subject Headings (okay, these are Wikidata, but I'm pretty sure they would have been there before any reply) for the symptom, and several references (I think 4) which discuss anorexia of infection. Nil Einne (talk) 14:26, 11 February 2016 (UTC)[reply]
  • Purely as a thought experiment, perhaps your body has decided that the costs/dangers of bringing in new food and other possible issues, such as poisons and pathogens, outweigh the short and long term disadvantages of burning the body's reserves. And +1 to Medeis. Greglocock (talk) 02:58, 11 February 2016 (UTC)[reply]
The word "anorexia" literally means lacking appetite, but it's very commonly used as an abbreviation for anorexia nervosa, so its use this way could cause confusion. ←Baseball Bugs What's up, Doc? carrots→ 04:23, 11 February 2016 (UTC)[reply]
Nevertheless I believe it is the usual term in medicine, in English, for lack of appetite. However both words get plenty of hits on Google Scholar, so I can't be sure. --Trovatore (talk) 04:31, 11 February 2016 (UTC)[reply]
While I know counting search hits isn't generally useful when they run into the thousands, for me 'anorexia -nervosa' on Google Scholar gets a few hundred thousand results. 'Inappetence' gets around 10,000, but many of these seem to be about animals. You need to include something like 'inappetence patient' or maybe 'inappetence human', and that reduces the results further. Doing something like 'inappetence -cat -dog -bovine -reindeer -sheep -cattle -porcine -cats -dogs -rabbit -horse -salmon -goat -rats -poultry -pigs -monkey' still seems to find quite a few non-human results. Even for animals, 'anorexia cat' seems to find a lot more results than 'inappetence cat', although not all results relate to anorexia in cats. Possibly dog is a better example, since you avoid discussions of CAT scans and Cognitive analytic therapy, but I'm not a dog person. Nil Einne (talk) 16:26, 11 February 2016 (UTC)[reply]
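Long exclusion queries like the one above are tedious to type by hand. A small illustrative Python helper (the term lists are just the ones mentioned in this thread; nothing here is specific to Google Scholar beyond its '-term' exclusion syntax) can generate them:

```python
def build_query(keyword: str, exclude: list[str]) -> str:
    """Build a search-engine query string, prefixing each excluded
    term with '-' (the exclusion syntax Google Scholar accepts)."""
    return keyword + "".join(f" -{t}" for t in exclude)

# The animal-term list from the post above:
animals = ["cat", "dog", "bovine", "reindeer", "sheep", "cattle",
           "porcine", "cats", "dogs", "rabbit", "horse", "salmon",
           "goat", "rats", "poultry", "pigs", "monkey"]

print(build_query("inappetence", animals))
print(build_query("anorexia", ["nervosa"]))  # prints "anorexia -nervosa"
```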
I should also add that the references I found are the first ones I could find, but are probably not the best ones. BiologicalMe (talk) 13:32, 11 February 2016 (UTC)[reply]

Thank you very much! The information about the factors is very interesting! 93.126.95.68 (talk) 18:37, 11 February 2016 (UTC)[reply]

February 11

Is it normal to never get angry?

I've been annoyed but never angry. Ennyone57 (talk) 03:49, 11 February 2016 (UTC)[reply]

It is for you. GangofOne (talk) 06:25, 11 February 2016 (UTC)[reply]
Emotions, like almost all mental phenomena, are highly subjective and hard to quantify, so it's hard to give any empirical comparison of whether your mental state with regard to anger (or almost any emotion) is atypical. All of that said, human beings clearly vary quite considerably in how they react to vexing or personally offensive stimuli. You may want to take a look at our articles affect (psychology), affect display and blunted affect, though note that each of these focuses more on behaviour than on mental states (again, going back to the deep issues with trying to study the emotions themselves, which many cognitive scientists feel may present some by-nature-insurmountable difficulties). I will say this much: if you feel that you have no problem with the intensity of your other emotional states, I (personally) wouldn't waste any time feeling "abnormal" for a lack of particularly strong anger. Some people just run cool by nature, and the result is often a very positive influence on those around them. That said, if your lack of intensity of emotion in this, or any, context makes you feel uncomfortable, incomplete or confused, a qualified psychiatric professional may be able to help you sort those feelings out. Unfortunately, our policies here prevent us from digging too deep into that topic, since it impacts at least somewhat on our "no medical advice" standard. Snow let's rap 06:36, 11 February 2016 (UTC)[reply]
Macmillan Dictionary defines ANGER as the strong feeling you get when you think someone has treated you badly or unfairly, that makes you want to hurt them or shout at them. That definition may be extended to the case of someone close to you being treated badly. If the OP considers their own reaction to such an event, which in this stressed world is not hard to visualize, then that qualifies as the OP's own anger reaction. It need not present visible symptoms or have to match the anger reactions of other people. AllBestFaith (talk) 13:06, 11 February 2016 (UTC)[reply]
This seems like a good functional definition, though it doesn't actually get at whether the process internally is different for the OP. I wonder if a more meaningful approach wouldn't be to do comparative measurements with fMRI or something. Such studies exist [54][55] though it seems dicey to measure "genuine" rage except in weird scenarios like the first. I mean, as much as in speech we might correlate the feeling you get when you read an article about camps in North Korea to the feeling you would have if you actually caught your wife's rapist between a blind corner and a baseball bat, I don't know if it's really the same emotion at all - how much of it lies in the actual intent to do actual harm? (AFAIR there is an aspect of repression from the frontal lobe in all this, but I'm not sure that "without it" it is "the same thing") Wnt (talk) 13:53, 11 February 2016 (UTC)[reply]
I think the amygdala is the main center in charge of emotions like this, and the adrenal gland produces the main associated hormones. If you don't feel much anger, you might not feel much fear either, as in the fight-or-flight reflex. A bit is good, but we don't have to fight or flee sabre-toothed tigers nowadays. Dmcq (talk) 16:38, 11 February 2016 (UTC)[reply]
I guess I'd have to ask how you know you're not angry? It's a bit like asking whether the color you see as "red" looks the same to me. How would you ever know? It's quite possible that your feeling of annoyance "feels" the same as the feeling I'd describe as "anger" - but that your external manifestations are kept more firmly under control. It's very difficult to compare inner sensations between individuals like that. SteveBaker (talk) 19:02, 11 February 2016 (UTC)[reply]
For anyone wanting to dive more into this topic, the fancy-pants philosophical term for these "inner sensations" is qualia. --71.119.131.184 (talk) 23:33, 11 February 2016 (UTC)[reply]

Making a slushie without sugar

Slush (beverage)#Sugar states that sugar is needed to act as an antifreeze. My question then, is if some other "edible antifreeze" could be used (excluding salt, because I don't think anyone would want that, even if it worked). StuRat (talk) 16:18, 11 February 2016 (UTC)[reply]

Glycerol. DuncanHill (talk) 16:26, 11 February 2016 (UTC)[reply]
Or you could try using ethanol. DuncanHill (talk) 16:26, 11 February 2016 (UTC)[reply]
Well, of course Stu should try using ethanol, but given that it is a less viscous liquid than water, rather than a solid like sugar that dissolves in water, it might defeat the slush goal. I am partial to protein shakes, which I make with skim milk, which has its own inherent sugar. Perhaps Stu can hint at what his underlying goal is? Oh, and I also love milkshakes made with Breyer's low-carb/no-sugar-added ice cream. μηδείς (talk) 19:24, 11 February 2016 (UTC)[reply]
Two words: margarita, daiquiri. --Trovatore (talk) 19:26, 11 February 2016 (UTC)[reply]
I like the ability of a slushie to cool me down more quickly than just a drink with ice cubes (no doubt because I actually consume the crushed ice in the case of a slushie, rather than waiting for it to melt first). However, I don't want all that sugar. I drink zero calorie iced peppermint "tea", no sugar or artificial sweeteners, so if I could make that into a slushie, that would be ideal. StuRat (talk) 19:51, 11 February 2016 (UTC)[reply]
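The "antifreeze" role of sugar (and of the glycerol or ethanol suggested above) is ordinary colligative freezing-point depression: ΔT = i·Kf·m, with Kf ≈ 1.86 °C·kg/mol for water. A quick illustrative sketch, using the ideal-solution approximation (which gets rough at slushie-strength concentrations):

```python
# Colligative freezing-point depression:  dT = i * Kf * m
# Kf (water) ~ 1.86 degC*kg/mol; i = van 't Hoff factor (1 for these
# non-electrolytes).
KF_WATER = 1.86  # degC * kg / mol

MOLAR_MASS = {"sucrose": 342.3, "glycerol": 92.1, "ethanol": 46.07}  # g/mol

def freezing_point(solute: str, grams: float, kg_water: float = 1.0,
                   i: float = 1.0) -> float:
    """Freezing point (degC) of the solution in the ideal-solution limit."""
    molality = (grams / MOLAR_MASS[solute]) / kg_water  # mol / kg water
    return -i * KF_WATER * molality

# 100 g of each solute dissolved in 1 kg of water:
for s in MOLAR_MASS:
    print(f"{s}: freezes at about {freezing_point(s, 100):.2f} degC")
```

Per gram, ethanol and glycerol depress the freezing point far more than sucrose because their molar masses are much smaller, which is why they can stand in for sugar in a low-calorie slush.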

Does the mask protect the sick person, or protect others from the sick person?

I saw people in Eastern Europe wearing masks on their faces. My question about this is: does the mask protect the person who puts it on his face, or does it protect the people around him, who are worried about sneezing and coughing? This issue isn't clear to me. In addition, how can it be that very small viruses and bacteria cannot get out through the mask during a sneeze or cough, which are considered powerful? 18:42, 11 February 2016 (UTC) — Preceding unsigned comment added by 93.126.95.68 (talk)

A mask could either protect the wearer from the world, or the world from the wearer. Our article Surgical mask says that people in Japan who are ill often wear masks to reduce the risk of passing the disease on. Our article has a photo of a situation in the USA where people were not permitted onto public transport unless wearing a mask - and clearly that would be to prevent them from passing on disease rather than for their personal protection from an external source. A simple mask won't prevent all bacteria and viruses from spreading but certainly it would reduce the degree of risk. Our article points out a surprising 'bonus' benefit which is that they "remind wearers not to touch their mouth or nose, which could otherwise transfer viruses and bacteria after having touched a contaminated surface". SteveBaker (talk) 18:54, 11 February 2016 (UTC)[reply]
Do you really mean Eastern Europe? I have only seen people in East Asia using them; at least, I have never seen them around Poland or Ukraine. --Scicurious (talk) 21:27, 11 February 2016 (UTC)[reply]

At what time of year were you there? In the summer or the winter? Of course, I'm not saying that all the people here, or even most of them, wear masks as you can find in East Asia, but it's not uncommon here, especially at this time of year. 93.126.95.68 (talk) 00:16, 12 February 2016 (UTC)[reply]

I sometimes see people wearing masks here in Auckland, New Zealand, and they are without exception Asians. Akld guy (talk) 00:27, 12 February 2016 (UTC)[reply]

Why do doctors give saline to the patient instead of water?

I know that saline is close to the physiological tonicity of our blood (isotonic), while plain water is hypotonic, but my question is about the administration of liquids through the vein, which is different from the administration of liquids through the mouth. So if I understand correctly, water that comes in through the mouth is later transformed into an isotonic solution. By what mechanism does that happen? 93.126.95.68 (talk) 18:48, 11 February 2016 (UTC)[reply]

Great question! Blood contains a balanced mixture of contents from the food you eat (which contains more salt than you need on most modern diets) and from water you drink. In addition, your body gets rid of excess water and/or salt as needed through urine to keep its tonicity at the right level. Blythwood (talk) 19:28, 11 February 2016 (UTC)[reply]
The answer to your headline question is that it's sterile. Doctors don't give saline to drink; they use saline solution in a drip (through the veins, as you put it) in some cases to rehydrate patients (and also to administer medication). I'm not too sure about your question re hypertonic vs hypotonic; as I understand those terms, they are relative to the overall balance of water and salts/sugars in your blood. A biologist will give you a better answer, but in a healthy person I think osmosis will balance the water your cells need, and your internal organs (kidneys in this case) will process and excrete the excess salts. Mike Dhu (talk) 19:59, 11 February 2016 (UTC)[reply]
  • Both the question and the answers above are a bit confusing, but if I'm not mistaken you're asking why it's fine to drink water, but you have to use isotonic saline for intravenous ('through the vein') administration? You are entirely correct that normal saline is isotonic to our blood, i.e. it doesn't disturb the finely regulated levels of solutes in the blood too much. Water is indeed hypotonic to blood, and directly infusing it would rapidly lead to things like hyponatremia or hypokalemia. (Note: there are medical reasons to use hyper- or hypo-tonic saline.) Why doesn't this happen when you drink water? The answer is that it does happen. See water poisoning. The trick is to eat food or another source of salts (oral rehydration therapy) along with the water. This will be digested and absorbed into the blood stream, which, together with the regulatory mechanisms in the kidney, maintains healthy levels in the blood. Of course, with our modern Western diets we generally get too much sodium, leading to problems such as high blood pressure. It's a fine balance. Fgf10 (talk) 20:29, 11 February 2016 (UTC)[reply]
Yes, this is exactly what I meant to ask. Thank you for the answer. But according to what I know (and you can correct me if I'm wrong), normal people can survive on water alone for some days. Then, according to what you're saying, they would need the trick of eating something with the water, so how can they survive for long? I thought about two other possible options: 1) The body has a store of salts, as it has for sugar, and when hypotonic water comes in, the body secretes salts. 2) The body knows how to take the water and divide it up: it takes the salts it needs for an isotonic liquid, and the rest of the H2O it removes through the urinary system. I would like to get your opinion on this. 93.126.95.68 (talk) 00:12, 12 February 2016 (UTC)[reply]
It's the second. The kidneys regulate the amount of solutes excreted in the urine, to maintain homeostasis. As to why isotonic solutions are used for IVs, it's because if the tonicity of your blood becomes out of whack, it can kill you. Cellular processes will be disrupted, and cells can even rupture. --71.119.131.184 (talk) 00:57, 12 February 2016 (UTC)[reply]
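The "isotonic" claim in this thread is easy to check with back-of-the-envelope arithmetic: normal saline is 0.9 g of NaCl per 100 mL, and each NaCl formula unit dissociates into two osmotically active ions. A minimal sketch, assuming ideal, complete dissociation:

```python
# Back-of-the-envelope osmolarity of normal (0.9%) saline.
MOLAR_MASS_NACL = 58.44   # g/mol
IONS_PER_FORMULA = 2      # NaCl -> Na+ + Cl- (assume complete dissociation)

def osmolarity_mosm_per_l(grams_per_litre: float) -> float:
    """Approximate osmolarity of an NaCl solution in mOsm/L."""
    return grams_per_litre / MOLAR_MASS_NACL * IONS_PER_FORMULA * 1000

saline = osmolarity_mosm_per_l(9.0)   # 0.9 g per 100 mL = 9 g/L
print(f"0.9% saline: ~{saline:.0f} mOsm/L (blood plasma: ~285-295 mOsm/L)")
```

The result, about 308 mOsm/L, sits close to the roughly 285 to 295 mOsm/L of blood plasma, which is exactly why the 0.9% concentration is the one used for intravenous drips.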