
Wikipedia:Reference desk/Science


Welcome to the science section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


January 14

Electric motors and hybrid vehicles

Hybrid in the sense of a flying car or a car boat, something along those lines. If you were to make these things using electric motors, do you need separate electric motors to drive the wheels and the propeller/jets? Or could you in theory use a single electric motor to provide power to both the wheels and the propellers/jets? ScienceApe (talk) 04:32, 14 January 2016 (UTC)[reply]

Yes you could use just one motor and a complex transmission, or you could devote one motor to each wheel/prop. If your design never uses both forms of propulsion at once then you may come in cheaper/lighter with a single motor solution. Greglocock (talk) 05:13, 14 January 2016 (UTC)[reply]
"propeller/jets" - probably not jets. The hard part for small planes is payload weight is traded for fuel. --DHeyward (talk) 05:35, 14 January 2016 (UTC)[reply]
I took "jets" to mean the boat kind, as in a Jet Ski, not the airplane kind. StuRat (talk) 05:45, 14 January 2016 (UTC)[reply]
Terrafugia has been pitching their "Transition" aircraft, which uses one engine: "Running on premium unleaded automotive gasoline, the same engine powers the propeller in flight or the rear wheels on the ground." Their proposed TF-X will have multiple electric engines. They just need somebody to design it, build it, program it, test it, operate it, fund it, ...
Nimur (talk) 17:33, 14 January 2016 (UTC)[reply]
For the flying car, weight is critical, so you really don't want an electric motor to start with, because they, along with the required batteries, have a lower power-to-weight ratio than gasoline, jet fuel, etc. Also, a car can't be designed with the same lightweight components you use in a plane, or it would be too fragile for the road. So, an electric-powered flying car is quite impractical (it might be possible, but that's not the same as practical).
For the amphibious vehicle, electric power would be a bit more practical, as power-to-weight ratio is less of a concern. There are basically two types I am aware of, hovercraft and ducks. The power-to-weight ratio would be less of a concern in the second type. StuRat (talk) 05:42, 14 January 2016 (UTC)[reply]
Electric motors can be split up very easily and effectively, down to multiple very small motors instead of one big one, because they are in essence very simple machines. Combustion engines, on the other hand, are far more complicated, because they need lots of additional parts like an injection system, a starter, and often even special parts like a choke for cold starting. Thus it's much harder to replace one big combustion engine with multiple small ones.
Now in addition to that, it's much easier and more efficient to just lay two electric cables than to add a transmission run to every location where you want a central motor's power delivered. That's why, for example, each of the Curiosity (rover)'s 6 wheels has its very own motor. --Kharon (talk) 00:18, 15 January 2016 (UTC)[reply]
There's also quite a bonus in torque if you drive the wheels and propeller of that vehicle directly with electric motors instead of using a transmission (where you get friction losses, and perhaps even losses in torque owing to the gearing or teleflex cables used to transmit power).
Speed and torque can both be controlled at each motor by pulse-width modulation circuitry, which is cheaper and easier than a physical transmission. If the point of the exercise is to get more motion per watt, electric motors to drive the wheels and the propeller of the vehicle could well be lighter than a transmission to do the same thing from a single motor. loupgarous (talk) 19:58, 15 January 2016 (UTC)[reply]
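A minimal sketch of the duty-cycle arithmetic behind the pulse-width modulation mentioned above, assuming an idealized brushed DC motor; all constants are illustrative, not taken from any real drive:

```python
# A minimal sketch (assumptions, not from the thread): the duty-cycle
# arithmetic behind PWM control of an idealized brushed DC motor.
V_SUPPLY = 48.0   # supply voltage in volts (illustrative)
K_T = 0.05        # torque constant, N*m per ampere (illustrative)
K_E = 0.05        # back-EMF constant, V per rad/s (numerically equals K_T in SI)
R_WINDING = 0.1   # winding resistance in ohms (illustrative)

def motor_state(duty, speed_rad_s):
    """Steady-state average voltage, current and torque for one duty cycle."""
    v_avg = duty * V_SUPPLY                        # PWM averages the supply
    current = (v_avg - K_E * speed_rad_s) / R_WINDING
    torque = K_T * current
    return v_avg, current, torque

for duty in (0.25, 0.5, 1.0):
    v, i, t = motor_state(duty, speed_rad_s=100.0)
    print(f"duty={duty:.2f}: V_avg={v:5.1f} V, I={i:6.1f} A, torque={t:5.1f} N*m")
```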
An electric drive does not need a transmission or gearbox. Electric motors have a high and nearly constant efficiency. Batteries cannot store as much energy as fuel does, and weight and volume are the limiting factors for traction batteries in a vehicle. Combustion engines have a brake specific fuel consumption that varies with load and rotational speed, and lower efficiency than electric motors. A transmission or gearbox transforms the ratio of rotational speed and torque so that the combustion engine keeps operating in the mode of lowest possible fuel consumption. There are several variants of hybrid vehicles, and patents protect the use of some solutions. The one used in Japan connects two motor-generators (electric motors which can be used as generators as well) through a planetary gearbox, using it to sum or subtract torque between the motors and the drive axle. The combustion engine is attached to the smaller of the motor-generators; a clutch is installed only to protect the gears. The combustion engine just adds torque to the system, and this torque is used both to charge the traction battery and to drive the vehicle; the ratio is controlled by the two motor-generators. Charging the battery from a generator loses energy according to the efficiency ratio of each of the converters, and batteries are charged with a higher voltage than their output voltage. Driving the vehicle directly from the combustion engine is therefore more efficient when the combustion engine is being operated in an optimal mode. The motor-generators also maintain the ratio between vehicle speed and engine operating mode. This video shows it: the combustion engine speeds up to its maximum power output, not to its highest rpm. Kicking the accelerator down, the vehicle still speeds up while the combustion engine operates at constant rpm, making a garage mechanic think at first that there's something wrong with the clutch, which is not true. --Hans Haase (有问题吗) 01:52, 16 January 2016 (UTC)[reply]
Uhh just so you know, that video is from a video game. You can tell by the "grass". ScienceApe (talk) 20:10, 16 January 2016 (UTC)[reply]
Sorry ScienceApe, indeed, a great simulation; here are some from a real camera.[1][2][3][4] This violent driving[5] – which I do not endorse – shows how the combustion engine reduces rotational speed when not needed. Faster than 50 km/h (~30 mph) it never turns off, due to the high rotational speeds that would otherwise occur in the planetary gear. --Hans Haase (有问题吗) 12:51, 18 January 2016 (UTC)[reply]
Similar to the Japanese patent using motor-generators on a planetary gear, a German patent uses a hydraulic drive over a planetary gear in machines for food production.[6] It also shows how the Japanese hybrid operates in reverse gear on an empty battery or with a cold combustion engine. --Hans Haase (有问题吗) 12:16, 16 January 2016 (UTC)[reply]
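For readers who want the arithmetic behind the power-split arrangement described above, a hedged sketch of the planetary-gear kinematics (a Toyota-style layout is assumed; the tooth counts are made up for illustration):

```python
# Hedged sketch of power-split kinematics (Willis equation). Layout assumed:
# motor-generator MG1 on the sun gear, engine on the carrier, wheels/MG2 on
# the ring. Tooth counts are illustrative, not from any patent.
Z_SUN, Z_RING = 30, 78

def sun_rpm(engine_rpm, ring_rpm):
    # Willis equation: Z_s*w_sun + Z_r*w_ring = (Z_s + Z_r)*w_carrier
    return ((Z_SUN + Z_RING) * engine_rpm - Z_RING * ring_rpm) / Z_SUN

# Hold the engine near an efficient speed while road speed (ring rpm) varies;
# MG1 on the sun absorbs the difference -- no clutch slip involved.
for ring in (0, 1000, 2000, 3000):
    print(f"ring={ring:4d} rpm -> sun (MG1) = {sun_rpm(2000, ring):7.1f} rpm")
```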

History of puberty blockers

When was the first one developed? When did they become widely available? -- Brainy J ~~ (talk) 04:44, 14 January 2016 (UTC)[reply]

This article from 2006 [7] is the earliest ref in our current article. It cites this 1983 article [8] describing a different treatment of precocious puberty. I do not know if that was the first puberty blocker, but it's a start. If you need access to these articles, ask at WP:REX. Sorting through the citations of the 1983 paper and our precocious puberty article (or even just reading them carefully, which I did not) may answer your questions more conclusively. SemanticMantis (talk) 16:17, 14 January 2016 (UTC)[reply]

Platinum electrode

What electrochemical cell voltage would be considered a safe upper limit, below which oxidation and hence dissolution of the platinum electrode is negligible? I'm running an electrowinning setup using a complex leachate, containing multiple components. I've been using 4.0 V, but that is a complete guess. Plasmic Physics (talk) 10:53, 14 January 2016 (UTC)[reply]

Standard electrode potential (data page) indicates the standard reduction potential for Pt2+ --> Pt is +1.188 volts. --Jayron32 16:08, 14 January 2016 (UTC)[reply]
I don't think it works that way, besides, I've had no obvious degradation of the anode. Plasmic Physics (talk) 20:37, 14 January 2016 (UTC)[reply]
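As a side note on the table value quoted above, a minimal sketch of how that standard potential would shift with Pt²⁺ activity via the Nernst equation; the activities used are assumed values:

```python
# Minimal sketch: shift of the Pt2+/Pt reduction potential with ion activity
# via the Nernst equation. E0 = +1.188 V is the table value quoted above;
# the activities below are assumed, illustrative numbers.
import math

R, T, F = 8.314, 298.15, 96485.0  # J/(mol*K), K, C/mol
E0, n = 1.188, 2                  # standard potential (V), electrons in Pt2+ + 2e- -> Pt

def nernst(activity_pt2):
    # E = E0 - (RT/nF) * ln(1/a) for the reduction Pt2+ + 2e- -> Pt
    return E0 - (R * T) / (n * F) * math.log(1.0 / activity_pt2)

for a in (1.0, 1e-6, 1e-12):
    print(f"a(Pt2+) = {a:g}: E = {nernst(a):.3f} V")
```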

What's the difference between transvestites and transsexuals?

When you see one, can you tell: this is a transvestite but not a transsexual? Are transvestites just part-time transsexuals? --Scicurious (talk) 13:57, 14 January 2016 (UTC)[reply]

We do have articles transvestite and transsexual - the former could use some work though. It sounds from those like "transvestite" was coined not really that long ago to mean cross-dressing, but its creator used it to refer to more long-term transsexuality? Wnt (talk) 14:35, 14 January 2016 (UTC)[reply]
I'd say transvestism is about behavior and transsexualism about gender identity. --Llaanngg (talk) 15:27, 14 January 2016 (UTC)[reply]
When you just see anyone, you can't necessarily tell that much about them. Sexual orientation and gender identity are not inherently visible. Note also that trans people can be homosexual or heterosexual or bisexual or even asexual. Gender identity, sexual orientation and other factors combine into a lot of different sorts of people. You may also enjoy background reading on notions of gender and sex, as well as cisgender, transgender, genderqueer, or attraction to transgender people.
Some people choose to present a visual image that helps signal their status to others, while other people do not. The general notion of this in biology is Signalling theory. Depending on where you live, you may have seen many transsexual or transgender people and not known it, e.g. Passing (gender).
To answer your question directly - If you see someone who you think is cross-dressing or transsexual, the only way to know for sure if they identify as either is to ask them. But that is very rude. If I were to meet you as a stranger in public, I wouldn't ask who you like to fuck or why you wear the clothes you wear, and I'd think you'd prefer it that way :) In the USA at least, the polite thing to do is to reserve questions of this nature for people with whom we are already closely familiar. SemanticMantis (talk) 15:59, 14 January 2016 (UTC)[reply]
Our article on Transvestism says that it is, regardless of its underlying motivation, a behavior, dressing as a member of the opposite sex. Our article Transsexual defines the term as referring to those who identify with a gender inconsistent with their physiological sex, and wish to physically change to the gender with which they identify.
How to tell the two apart? You can't. Transsexuals at some point often are transvestites - especially after sex reassignment therapy or other medical interventions aimed at changing the patient's ostensible sex to what the patient identifies with.
Transvestites are not "part-time" transsexuals. A transsexual is always a transsexual, never "part-time." Now, the reverse is possible - a transsexual may cross-dress only part of the time. Or, like the movie director Ed Wood, a transvestite may be heterosexual but have comfort issues which compel him or her to dress as the opposite sex. It's worth mentioning that women in Western countries often wear male attire without being called "transvestites." In some other countries today, this is not only transvestism, but culturally deprecated and even illegal. loupgarous (talk) 21:54, 14 January 2016 (UTC)[reply]
I wouldn't say that after sex reassignment transsexuals are often transvestites. Quite the contrary. They are physically, mentally and legally a member of their target sex, and will very probably dress like a typical member of that sex.
I also would not say that Western women wear male attire. Attire once worn by men became unisex, with no gender mark. Women in trousers, boxer shorts, shirts, or suits are all wearing these unisex clothes. Denidi (talk) 01:55, 15 January 2016 (UTC)[reply]
Well, pantaloons weren't always considered unisex - see women and trousers. Which brings us to an important point that since the definition is cultural, it changes, and there may be varying motives, pragmatism ranking very high on the list. Even non-transvestites might be affected when the other sex clothing is simply unpleasant. Wnt (talk) 17:10, 15 January 2016 (UTC)[reply]

popping zits

In nature are zits supposed to be popped or left alone? Which is most beneficial from an evolutionary standpoint? — Preceding unsigned comment added by 62.37.237.15 (talk) 20:15, 14 January 2016 (UTC)[reply]

Check out danger triangle of the face. It is rare, but popping acne or furuncles in that area can lead to infections that spread to other areas, causing Cavernous sinus thrombosis, or to the brain, causing meningitis. SemanticMantis (talk) 20:41, 14 January 2016 (UTC)[reply]
This is definitely one of those "No Medical Advice" questions. We aren't allowed to offer advice on diagnosis, prognosis or treatment of medical conditions...so we can't advise you on how to treat pimples - even if the treatment is something as seemingly mundane as popping/not-popping them. Beware of arguments from an evolutionary standpoint - evolution may not care whether you wind up with smooth skin or something that looks like the surface of the moon. These things tend to be cultural in nature. SteveBaker (talk) 20:57, 14 January 2016 (UTC)[reply]
I'm not asking for medical advice, I'm just asking what nature intended. How is this any more medical advice than asking if nature intended broken bones to heal or not? I don't have zits, I'm not asking this for myself. I'm asking generally as a scientific question. 62.37.237.15 (talk) 21:08, 14 January 2016 (UTC)[reply]
Or to put it a different way; have the millions of years of human evolution favored popping or not popping zits, from a purely evolutionary (therefore survival) standpoint. No cultural issues needed. 62.37.237.15 (talk) 21:12, 14 January 2016 (UTC)[reply]
Nature does not intend anything.
And apparently both zit-poppers and zit-non-poppers managed to survive. So, evolution did not make up her mind about this. But as said above, this is a bad perspective: you can be in a social environment that strongly prefers the one or the other, in the same way as people might have their preferences toward deformed/normal feet, tanned/fair skin, or thin/fat bodies. --Scicurious (talk) 21:52, 14 January 2016 (UTC)[reply]
Quick and dirty answer? Pop your zits in the presence of a potential mate, and your chances of passing your genes on through sex drop precipitously. This answers your question regarding evolution. Popping zits would seem to reduce the popper's chances of transmitting his or her genes to future generations. That being said, previous posters' remarks about "zit-poppers and zit-non-poppers" managing to survive imply a genetic basis for the behavior which hasn't, as far as I'm aware, been investigated, much less proven. It's just a gross habit that would only get you laid if you found someone who thought it was attractive and was aroused somehow by it. Think that's ever going to happen? loupgarous (talk) 22:03, 14 January 2016 (UTC)[reply]
lol, I'm pretty sure that I could pop my zits (I rarely have them, yay yay for testosterone blockers!) in front of most of my mates (male or female) and they wouldn't really care. My ex-girlfriend didn't really have zits either, but I'm sure that she would have popped them in front of me, and we would still have great sex afterwards. Yanping Nora Soong (talk) 22:21, 14 January 2016 (UTC)[reply]
De gustibus non disputandam... loupgarous (talk) 23:50, 14 January 2016 (UTC)[reply]
Regarding the claim that "Pop your zits in the presence of a potential mate, and your chances of passing your genes on through sex drop precipitously. [...] Popping zits would seem to reduce the popper's chances of transmitting his or her genes to future generations": the second statement does not follow from the first, because the smart zit-popper doesn't pop zits in the presence of a potential mate, but before meeting a potential mate, thereby accomplishing the zit-popping purpose of minimizing the zit's appearance, thus increasing the popper's chance of transmitting genes. —SeekingAnswers (reply) 05:26, 16 January 2016 (UTC)[reply]
From an evolutionary standpoint, nothing is "supposed" to do anything. Nature is not a conscious agent. The only question that makes sense is, "How does this behavior affect the fitness of the organism?" I suspect popping pimples might slightly decrease an organism's fitness, because it can lead to infection, but the risk is not that great, so I imagine the overall selection pressure is pretty tiny. --71.119.131.184 (talk) 22:16, 14 January 2016 (UTC)[reply]
I doubt that popping zits can lead to a non-treatable infection nowadays. Scicurious (talk) 22:32, 14 January 2016 (UTC)[reply]
This is a ref desk, what you doubt or don't doubt is completely irrelevant. Vespine (talk) 23:23, 14 January 2016 (UTC)[reply]
That's just an expression. I could have said: I don't have any ref at hand right now, but I don't believe popping zits can lead to a non-treatable infection nowadays. Scicurious (talk) 23:40, 14 January 2016 (UTC)[reply]
That's exactly the problem. Not having any references, just an opinion. A reference like [www.webmd.com/skin-problems-and-treatments/teen-acne-13/pop-a-zit Before You Pop a Pimple] from WebMD, a website I normally trust, collides with your assumptions. Denidi (talk) 23:54, 14 January 2016 (UTC)[reply]
I have myself on occasion offered an "I doubt something", but ONLY if the thread had not already had some relevant replies leading towards an opposite conclusion AND, even more critically, was not related to something that could have serious negative health repercussions. Vespine (talk) 00:51, 15 January 2016 (UTC)[reply]

Eka-, dvi-, tri-, and chatur-

Mendeleev's 1871 periodic table. Double sharp (talk) 08:03, 16 January 2016 (UTC)[reply]

"Eka-" is sometimes used as a prefix meaning "one row below in the periodic table". For example "ununtrium" is sometimes called "eka-thallium". Have "Dvi-", "Tri-", and "Chatur-" been used in similar ways?? These are the corresponding prefixes for 2, 3, and 4. Georgia guy (talk) 22:07, 14 January 2016 (UTC)[reply]

See Mendeleev's predicted elements. At least rhenium was called dvi-manganese (because eka-manganese, i.e. technetium, wasn't known either). Icek~enwiki (talk) 00:22, 15 January 2016 (UTC)[reply]
There is and was no reason to use the higher numbers when the lower number could be used; hence you might see element 115 called eka-bismuth today, but certainly not dvi-antimony. The only cases where they were ever used AFAIK was exactly as Icek suggests: e.g. dvi-manganese, since eka-manganese was not known either. Given the rows of blanks in Mendeleyev's 1871 table where the rare earths should go, though, it's conceivable that rhenium would have become tri-manganese instead until the rare earths were separated out of the main body of the 8-group table. But there is no reason at all AFAICS to go to "chatur-" and beyond. Double sharp (talk) 08:03, 16 January 2016 (UTC)[reply]
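As a toy illustration of the naming scheme (not any standard chemistry tool), a few lines that build such placeholder names from the Sanskrit prefixes:

```python
# Toy illustration (not a standard chemistry tool): building Mendeleev-style
# placeholder names from the Sanskrit numeral prefixes discussed above.
PREFIXES = {1: "eka", 2: "dvi", 3: "tri", 4: "chatur"}

def placeholder_name(known_element, rows_below):
    """e.g. placeholder_name('manganese', 2) -> 'dvi-manganese' (rhenium)."""
    return f"{PREFIXES[rows_below]}-{known_element}"

print(placeholder_name("thallium", 1))   # eka-thallium  (element 113)
print(placeholder_name("manganese", 2))  # dvi-manganese (rhenium)
```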

solubility of zwitterions in organic or lipophilic solvents

What are the guidelines for whether a zwitterion (especially an alpha amino acid -- but not necessarily one of the 20) will be soluble in solvents like dichloromethane or cyclohexanol? (Chloroform or ether is also fine, I guess, since solubilities in them are tested more often.) For example, L-DOPA at its isoelectric point is weakly soluble in water, but insoluble in ether or chloroform. Which puzzles me -- how are you supposed to do acid-base extraction of an amino acid into an organic solvent if an amino acid at its pI is weakly soluble in water but even less soluble in an organic phase? Would adding a phase transfer catalyst like tetrabutylammonium bromide increase the solubility of AAs in organic solvents at the pI -- or would it reverse as soon as you tried to separate the two phases? Yanping Nora Soong (talk) 22:27, 14 January 2016 (UTC)[reply]

This article refers to the use of lanthanide complexes not only to extract zwitterionic amino acids, but to do so selectively by the chirality of the desired amino acid.
And this article discusses reverse micellar extraction of zwitterionic amino acids.
The Google search term I used to locate these articles is "extraction of amino acids zwitterions," and many articles came up which you may want to look at for more information. loupgarous (talk) 00:05, 15 January 2016 (UTC)[reply]
Thanks! That's a more useful keyword. I'm not sure if I'm understanding these "ion exchange" methods correctly (especially since I don't have any ion exchange resins at the moment). I'm looking at patents such as this one. [9] Does this patent imply I can dissolve zwitterionic amino acids in most weakly polar or non-polar organic solvents by dissolving 2M (or 1% w/w) quaternary ammonium salts (like TBAB) into the organic solvent? Then I add the sodium salt of my amino acid (whether in the aqueous layer or as a dry salt) -- sodium bromide precipitates and I get a solution of the tetrabutylammonium salt of my amino acid in cyclohexanol or dichloromethane? I'm trying to make sure I'm understanding these procedures correctly. Yanping Nora Soong (talk) 01:14, 15 January 2016 (UTC)[reply]
The patent specifies "a substantially immiscible extractant phase comprising an amine and an acid, both of which are substantially water immiscible in both free and salt forms." Now, you're proposing (if I understand you correctly), to use quaternary ammonium salts, which are generally cationic detergents and very miscible in water. So you've already departed from the procedure defined in the patent, I think.
I'd encourage you to read the sources the Google search "extraction of amino acids zwitterions" turns up for ideas that fit your specific requirements more closely. loupgarous (talk) 05:29, 15 January 2016 (UTC)[reply]
In the "previous art" section they discussed the use of trioctylmethylammonium chloride. There is also mention of others using Aliquat 336, which AFAIK is often interchangeable with TBAB for phase transfer catalysis reactions. I have tetrabutylammonium bromide (TBAB). AFAIK, TBAB has a high critical micelle concentration, so it's pretty inaccurate to call it a detergent and also why it's favored for PTC. They actually mention the use of quats in that patent (as well as their organophosphate anionic counterparts), and I think they actually use them, they just combine quats, lipophilic anions and other surfactant stabilizers in the same organic phase. Yanping Nora Soong (talk) 08:48, 15 January 2016 (UTC)[reply]

Do amino acids sometimes combine in water?

I mean, without all the hardware that a cell has that allows it to combine them into proteins, do amino acids sometimes combine by chance just by bumping into each other, if you shake the water they float in for long enough? --Lgriot (talk) 22:32, 14 January 2016 (UTC)[reply]

If you ask whether that happens at all: Definitely, even if you don't shake. The molecules are in thermal motion anyway. And conversely, peptides sometimes break apart (i.e. are hydrolyzed) without enzymes catalyzing this reaction.
Maybe more interesting is the question of what fraction of amino acid molecules can be expected to be free amino acids and which fraction will be in peptides. See chemical equilibrium for a general introduction. For a dilute solution of amino acids, the equilibrium state has far more free amino acid molecules than peptide-bound ones. In a very concentrated solution, water for hydrolysis is not so abundant, and you'll find a higher fraction of peptides at the equilibrium.
Icek~enwiki (talk) 00:31, 15 January 2016 (UTC)[reply]
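To make that concentration dependence concrete, a toy sketch under stated assumptions (an invented effective condensation constant, water activity taken as 1):

```python
# Toy equilibrium sketch for the concentration dependence described above:
# fraction of amino acid bound in dipeptides for 2 A <-> AA + H2O, treating
# water activity as ~1 in dilute solution. K_EFF is an assumed illustrative
# value (condensation is unfavorable in water), not a measured constant.
import math

K_EFF = 0.04  # effective constant in M^-1, illustrative

def bound_fraction(total_molar):
    # Solve K = x / (c - 2x)^2 for the dimer concentration x (a quadratic).
    a = 4 * K_EFF
    b = -(4 * K_EFF * total_molar + 1)
    c = K_EFF * total_molar ** 2
    x = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)  # physically valid root
    return 2 * x / total_molar  # fraction of monomers locked in dimers

for conc in (0.001, 0.1, 10.0):
    print(f"{conc:6.3f} M total: {100 * bound_fraction(conc):.4f}% in dipeptides")
```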
Short answer: they don't. Peptide bond formation is much harder than simple Fischer esterification, because if you try to catalyze the reaction by deprotonating the amino group (activating the nucleophile), you deactivate the carboxylic acid as an electrophile (it becomes a carboxylate), and if you try to catalyze the reaction by acid (activating the carboxylic acid group), you deactivate the nucleophile (the amino group gets protonated, losing its lone pair). You can create an amide or peptide bond by heating your reagents at 250 °C, but the problem is that these harsh conditions often cause a lot of side reactions to occur, at a risk of degradation or oxidation of the side chain. Yanping Nora Soong (talk) 00:57, 15 January 2016 (UTC)[reply]
Selective peptide coupling is an expensive task and an entire industry of its own: see peptide synthesis. Yanping Nora Soong (talk) 00:59, 15 January 2016 (UTC)[reply]
I'm reminded of the Urey-Miller experiment. If there were a simple way to string those amino acids into peptides, that would be very exciting, but it's not really so. Now by contrast, I remember hearing that polymerizing hydrogen cyanide gives rise to actual polypeptides (once water is added) - here's a source [10] but I'm not sure it's the definitive one. Now as peptide bond says, water will spontaneously hydrolyze the bond on a very slow time scale, liberating substantial energy; and this implies that the reverse of the reaction does occur, but because of the energy difference, it would occur in a very, very small proportion of the total molecules (I'm afraid I'd have to reread Gibbs free energy, at least, to try to guesstimate what proportion that is.... but AFAIR 8 kJ/mol is a biological way to say "fuggedaboudit") Wnt (talk) 01:26, 15 January 2016 (UTC)[reply]
Polyaspartic acid can be made by simple pyrolysis of aspartic acid followed by hydrolysis (an easy undergrad lab experiment). The pathway for this polypeptide is not general for all the encoded amino acids, requiring at least some aspartic acid- or glutamic acid-like component (see also [11], which proposes this sort of reaction as relevant to prebiotic processes). DMacks (talk) 04:29, 15 January 2016 (UTC)[reply]
Remember, thermodynamic energy release is different from the kinetic activation barrier.
But if ΔG = −RT ln K = −8 kJ/mol, then K = e^(−ΔG/RT) = e^(8000 J/mol ÷ (8.314 J/(mol·K) × 298 K)) = e^3.23 ≈ 25
So the equilibrium constant for hydrolysis is around 25, and the equilibrium constant for the reverse reaction would thus be around 0.04. This is actually kind of impressive, but the energy barrier is much higher -- I would guesstimate around 20-30 kcal/mol -- or around 80-120 kJ/mol. Yanping Nora Soong (talk) 08:56, 15 January 2016 (UTC)[reply]
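A minimal check of that arithmetic:

```python
# Quick check of the arithmetic above: K from dG = -RT ln K.
import math

R = 8.314             # J/(mol*K)
T = 298.0             # K
dG_hydrolysis = -8e3  # J/mol, the ~8 kJ/mol figure quoted above

K_hyd = math.exp(-dG_hydrolysis / (R * T))
print(f"K (hydrolysis)   ~ {K_hyd:.1f}")      # ~25
print(f"K (condensation) ~ {1 / K_hyd:.3f}")  # ~0.04
```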
If you want to understand the equilibrium dynamics of solutions of amino acids, perhaps our article on Michaelis-Menten kinetics would be helpful. loupgarous (talk) 20:31, 15 January 2016 (UTC)[reply]
This was a quick and dirty calculation. I'm well-aware of MMK. Of course, polymerization beyond forming more than just 1 peptide bond would be a different story... Yanping Nora Soong (talk) 00:57, 16 January 2016 (UTC)[reply]
The OP said "long enough", which is a pretty broad license to ignore kinetics altogether for purposes of the answer. Wnt (talk) 03:02, 17 January 2016 (UTC)[reply]

January 15

Is there such as "aspiration center" in nerve system?

(I'm not surely back) Like sushi 49.135.2.215 (talk) 00:48, 15 January 2016 (UTC)[reply]

I don't quite understand your question; further information about the context would be needed. Maybe what you are looking for is an aspiration center. This is a physical location in a hospital where some medical procedure (like needle aspiration biopsy) is performed. --Denidi (talk) 01:20, 15 January 2016 (UTC)[reply]
I don't know what's up either, but this paper [12] discusses "fine needle aspiration biopsies of the central nervous system", so it might be helpful to OP. SemanticMantis (talk) 16:09, 15 January 2016 (UTC)[reply]
"Aspiration" has two three very different meanings. Aspiration can mean "ambition" (as in my son aspires to be a scientist). Aspiration can also mean "drawing in" or "removing by suction" by creating a negative air pressure difference in a hollow surgical instrument. And, finally, aspiration can mean the act of drawing in breath. Which one is it? :) Dr Dima (talk) 17:52, 15 January 2016 (UTC)[reply]
Aspiration in the first sense (that is, ambition) is thought to be functionally associated with frontal lobe; or at least one of the symptoms associated with prefrontal cortex / frontal lobe damage is the lack of ambition. Aspiration in the third sense - breathing in - is governed by the respiratory center in the brain stem. --Dr Dima (talk) 18:17, 15 January 2016 (UTC)[reply]

Is there such as "motive entropy"?

(I'm not surely back) Like sushi 49.135.2.215 (talk) 00:49, 15 January 2016 (UTC)[reply]

As above, the question is difficult to understand. Motive is usually a term used in law or psychology while entropy is a term usually used in physics, specifically thermodynamics. I have not come across those two words being used together to describe anything. If this misses your question entirely, you could try asking the question in your native language and someone might be able to translate it for you. Vespine (talk) 04:59, 15 January 2016 (UTC)[reply]
Two seconds of googling reveals it to be a phrase used by Carnot, hence something to do with thermodynamics. Greglocock (talk) 08:28, 15 January 2016 (UTC)[reply]
Motive can also be an adjective meaning "relating to motion and/or to its cause". Just like a locomotive is not generally a reason why loco people commit crimes, it's unlikely that "motive entropy" has anything at all to do with the law or psychology term. ;) --Link (tcm) 11:42, 15 January 2016 (UTC)[reply]
Of course it can! Fail on my part. Vespine (talk) 22:50, 17 January 2016 (UTC)[reply]
Hm, electromotive force, magnetomotive force, Projectile#Motive_force all speak to the usage in physics of "motive" in a different sense than the psychological/behavioral sense of motivation. Some of these terms (esp. motive force) may be losing currency, but were very popular not that long ago. SemanticMantis (talk) 16:06, 15 January 2016 (UTC)[reply]
Here [13] is a 2003 article, freely accessible, that explains a concept of "motive entropy", with references.
Here's another recent (2008) paper that defines the concept formally (eq. A20). The idea seems to be to separate "thermal" and "motive" portions of entropy, the latter of which is independent of temperature (or perhaps adiabatic?). The physical scope of these articles is a bit over my head, but these should provide good sources for anyone who wants to explain further. (Also, if anyone wants to help me out on a related sub-question - is this distinction at all analogous to latent heat vs. sensible heat? I'm fine with the math but fairly ignorant of thermodynamics.) SemanticMantis (talk) 16:06, 15 January 2016 (UTC)[reply]

Norovirus and Pathogenic Escherichia coli

Does immersion in boiling water inactivate these two pathogens, and how long does it take? Edison (talk) 05:09, 15 January 2016 (UTC)[reply]

Yes. I found this article, which has some details. All common enteric pathogens are readily inactivated by heat, although the heat sensitivity of microorganisms varies. It appears that most viruses take just seconds (~10 seconds) to be inactivated in boiling water. Just keep in mind that merely putting a potato (for example) into boiling water for 10 seconds doesn't guarantee all virus will be inactivated. There could be a virus caught in a crack or cavity that is too small for water to get into, and the potato can act like a heatsink for long enough to keep the virus from reaching a high enough temperature. Vespine (talk) 05:53, 15 January 2016 (UTC)[reply]
Agreed. This is why food has to be cooked right through. One more thing to mention is that even though cooking may kill the pathogenic organisms, it won't necessarily inactivate their toxins. -- The Anome (talk) 08:42, 15 January 2016 (UTC)[reply]
Douglas Baldwin's rather wonderful book on sous vide has both the full pasteurisation time/temperature tables for the internal temperature of the food (in the section linked), and timings for how long it would take to reach those temperatures given the meat, thickness, and water bath temperature (in the individual sections for the meats). E coli is considered in the beef, lamb and pork section. The point about toxins is a very valid one - and there are some pathogens (such as Clostridium botulinum) which will just spore up at cooking temperatures, and then continue to reproduce when the temperature drops. MChesterMC (talk) 09:22, 15 January 2016 (UTC)[reply]
The general recommendation in food preparation is to avoid the "Danger zone", which is reported as 5–60 °C, which would imply that holding food above 60 °C is generally regarded as safe; since boiling is at 100 °C, you're probably good. --Jayron32 13:22, 15 January 2016 (UTC)[reply]
Except that food in boiling water is not at 100 °C (or if it is, it's horribly overcooked!). Looking at my ref above, the pasteurisation times for actual cuts of meat are much higher than the government pasteurisation tables - because it can take several hours for the centre of a joint to get to a safe temperature. This will be a lot quicker at boiling than at typical sous vide temperatures, but the boiling water is the heat source, not the thing you're trying to kill bacteria in. Your best bet is to get a meat thermometer and measure the internal temperature of your food - if it's above 70 °C, you're good. If it's been above 60 °C for 10 minutes, or 65 °C for 1 minute, you're good. Add 5 °C to each of those for poultry. Of course, full pasteurisation is overkill for most foods - steak cooked at medium or below won't be pasteurised throughout unless it's done very slowly, but for the most part it doesn't matter, since most of the bacteria are on the surface, which gets hot enough to kill them.
Also, while holding food at 60 °C is safe (since most dangerous bacteria can't grow), just getting food to 60 °C and then eating it may not be - the pasteurisation time at 60 °C is about half an hour for fatty poultry, so a short time at 60 °C isn't going to change much. MChesterMC (talk) 14:43, 18 January 2016 (UTC)[reply]
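For the curious, a hedged sketch of the first-order kill model that underlies pasteurisation tables like the ones linked above; the D-values are placeholders, not real data:

```python
# Hedged sketch of the first-order kill model behind pasteurisation tables:
# survivors N(t) = N0 * 10^(-t / D), where D is the time at a given
# temperature for a tenfold reduction. The D-values below are illustrative
# placeholders, not figures from Baldwin's book or any official table.

def log_reductions(hold_min, d_value_min):
    """Number of factor-of-10 reductions achieved by a hold time."""
    return hold_min / d_value_min

# Illustrative pathogen: D = 5 min at 60 C but only 0.5 min at 65 C.
for temp_c, d in ((60, 5.0), (65, 0.5)):
    t = 10.0  # minutes held at the core temperature
    print(f"{t:.0f} min at {temp_c} C: {log_reductions(t, d):.0f}-log reduction")
```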
I see that a restaurant chain plans to reduce the incidence of E. coli and norovirus illness as follows: "Onions will be dipped in boiling water to kill germs before they're chopped.... Cilantro will be added to freshly cooked rice so the heat gets rid of microbes in the garnish." Past outbreaks of food-borne illness have identified irrigation water or produce-washing water which might be contaminated as a source of germs, and presumably if the contamination is only on the surface of the plant material, there would be a decrease in the likelihood of illness, if not a guarantee. Edison (talk) 16:06, 15 January 2016 (UTC)[reply]

Question about Quarks

Hello! I was thinking about particle physics, when a thought occurred to me: "How did quarks form in the early universe? I know that we are pretty certain that they are elementary, so why were they there in the universe, at that time? Are they little balls of energy, or did some force (like gravity) cause space-time to collapse on itself to form quarks?" So how did they form, then? By the way, I only know basic physics, so explain any "complicated stuff" to me. Megaraptor12345 (talk) 15:53, 15 January 2016 (UTC)[reply]

I'm not sure we have a very good idea. You might want to read up on baryogenesis (and by that I mean not just the Wikipedia article, but as a concept to focus your research on outside of Wikipedia as well), but the main take-away I've always had is that we have very general, broad, and not-at-all in focus ideas about what went on during this epoch of the early universe, and that there are several competing (consistent but as yet not well supported) theories on what really went down. --Jayron32 17:09, 15 January 2016 (UTC)[reply]
You actually have to page back quite a few epochs from the quark epoch to reach the legendary grand unification epoch, which lies somewhat west of the Wicked Witch of the West, perhaps. Our article is not very informative due to the lack of an accepted grand unification theory. But it wasn't until the quark epoch that they (mostly) lacked more massive competition like W and Z, I suppose. Wnt (talk) 17:15, 15 January 2016 (UTC)[reply]
Sorry, what is W and Z? Megaraptor12345 (talk) 22:05, 15 January 2016 (UTC)[reply]
The bosons in the image below, I think. SemanticMantis (talk) 22:28, 15 January 2016 (UTC)[reply]
I'll have to check it out on my next trip to the Big Bang Burger Bar. --Jayron32 17:30, 15 January 2016 (UTC)[reply]
@Megaraptor12345: The W and Z bosons are mentioned in the previous epoch before the quark epoch, the Electroweak epoch. Note that electroweak theory has been tested and is pretty well agreed upon ... whether that means that period of the universe is well agreed on, I don't know. Wnt (talk) 23:09, 15 January 2016 (UTC)[reply]
Six of the particles in the Standard Model are quarks (shown in purple).
It is believed that in the period prior to 10⁻⁶ seconds after the Big Bang (the quark epoch), the universe was filled with quark–gluon plasma, as the temperature was too high for hadrons to be stable. Such a quark–gluon plasma was reportedly produced in the Large Hadron Collider in June 2015. However, the quark has not been directly observed; it is a category of Elementary particle introduced in the Standard model as part of an ordering scheme for hadrons that appears to give verifiable predictions of particle interactions, but as yet we have no agreed explanation of why there are three generations of quarks and leptons, nor can we explain the masses of particular quarks and leptons from first principles. AllBestFaith (talk) 18:25, 15 January 2016 (UTC)[reply]
This is all very well, but I am not sure I put my question right. Let me start again. How did matter form? I thought in my original question that quarks were the original particle, but they were not, were they? So how did any sort of matter form? Did it appear as a personification of energy or as globs of some force? Megaraptor12345 (talk) 22:14, 15 January 2016 (UTC)[reply]
Quarks will appear spontaneously through pair production and other interactions when enough energy is available. They are just a particular mode of vibration of the quantum field, and all of the vibrational modes are coupled together in complicated ways, so if there's enough energy there will unavoidably be quarks. If inflationary cosmology is correct (which it may not be), that energy came from the inflaton [sic] field, and where that came from is anyone's guess. -- BenRG (talk) 22:50, 15 January 2016 (UTC)[reply]
What is a "quantum field"? Megaraptor12345 (talk) 16:49, 16 January 2016 (UTC)[reply]
It's a field (physics), like the electromagnetic field but with more oscillatory degrees of freedom that correspond to particles other than the photon. It isn't really the field that's quantum, it's the world that's quantum. Because the world is quantum, oscillations of any field are quantized, and each quantum of oscillation energy is called a particle. The properties of the field tell you everything about physics at ordinary energies; all particles and interactions are oscillations of the field. The physics is described by the Standard Model Lagrangian and (the quantum version of) Lagrangian mechanics. -- BenRG (talk) 21:50, 16 January 2016 (UTC)[reply]
More specifically, it's one of the set of fields predicted by quantum field theory. This video focuses on the Higgs mechanism, but touches a bit on QFT, as you need to understand the basics to understand why physicists care about the Higgs. --71.119.131.184 (talk) 08:09, 18 January 2016 (UTC)[reply]
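To make "enough energy is available" slightly more concrete, a small illustration computing pair-production thresholds (twice the rest energy) from published particle masses:

```python
# Rough illustration of "enough energy": the minimum energy to create a
# particle-antiparticle pair at rest is twice the rest energy, E = 2*m*c^2.
# Rest energies below are standard published values in MeV.
REST_ENERGY_MEV = {
    "electron": 0.511,
    "up quark (current mass)": 2.2,
    "muon": 105.7,
    "Z boson": 91187.6,
}

for name, rest_energy in REST_ENERGY_MEV.items():
    print(f"{name:25s}: pair threshold ~ {2 * rest_energy:10.1f} MeV")
```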
This isn't my field, but here's the perception I have: The universe is a bit like a three ring circus. New kinds of physics come in, stay a while, and eventually are ushered offstage. The way this ushering occurs is that there is physics for very high-energy particles that occurs on a very short time scale and physics for lower energy particles that occurs on a longer time scale. If proton decay occurs, and there is no end of cosmic expansion/heat death of the universe etc., then one day all of our protons and neutrons may be considered just some weird high energy physics that happened during the first few moments after the Big Bang, before the universe settled into steadier assemblages of neutrinos interacting at a more measured pace. Whereas if you look far enough back, there was a time when an ordinary photon of radiated heat had enough energy to make Z and W particles, and perhaps quarks were a minor constituent - something like neutrinos to us in that they didn't carry as much mass and so might have seemed relatively irrelevant ... though I'm not sure the analogy really makes sense due to quarks' charge - certainly I have no guess of how relevant charge was in the era before electromagnetic and weak forces were distinguishable. While free quarks can't exist in our space, back then there was so much energy crammed so close together that the quarks were in quark-gluon plasma, which is tolerable, so basically any thermal photon could make quarks and antiquarks spontaneously. But I don't think anyone knows just how many acts there were at the beginning of this circus, or even if it had a beginning, rather than just smaller and smaller intervals of time in a convergent series. Wnt (talk) 23:09, 15 January 2016 (UTC)[reply]

Crude Oil

Will the crude oil price drop further? — Preceding unsigned comment added by 59.88.196.26 (talk) 17:44, 15 January 2016 (UTC)[reply]

Maybe, maybe not. Please see WP:CRYSTAL; we cannot speculate on this. Here [14] is a relevant article from the Economist, published today, that you might be interested in. Other respondents may choose to provide references that speculate on this, but they should not speculate here. SemanticMantis (talk) 17:50, 15 January 2016 (UTC)[reply]
The report I heard on NPR said that prices may well drop further, yes. They based that on current stockpile levels and the slow rate at which oil production facilities (and producers of alternative substitutes, like natural gas) currently are reducing production. Eventually, those production levels will be brought down to match demand, and then you can expect prices to stabilize. StuRat (talk) 22:10, 15 January 2016 (UTC)[reply]
The price of oil, or of any commodity for that matter, will either go up, or down, or remain stable. Guaranteed. ←Baseball Bugs What's up, Doc? carrots 07:58, 16 January 2016 (UTC)[reply]
Today sanctions against Iran were dropped, allowing them to sell their substantial oil reserves on the world market. This can be predicted to lower prices due to increased supply. StuRat (talk) 22:05, 16 January 2016 (UTC)[reply]

If a person has an MBA, is it wrong to say he's an economist?

Moved to Wikipedia:Reference_desk/Humanities#If_a_person_has_an_MBA.2C_is_it_wrong_to_say_he.27s_an_economist.3F.

Questions about EM radiation

I've been wondering about some things and I'd appreciate any insight into them.

1) I often see visible light described as electromagnetic radiation. If I have a green light lit and had a radio capable of receiving at 563THz, would I receive anything? I assume that I wouldn't, since a radio deals with electrons and not photons.

2) I expect that in any case where a device is powered entirely from RF energy (e.g. a crystal radio), the current draw of the device must be somehow apparent at the transmitter. If this is true, is any conductive object which is connected to ground also drawing power from the transmitter? Does a powered, tunable radio draw less current from the transmitter when not tuned to the transmitter versus when it is tuned to it? Is the load actually measurable?

3) I recall a time when I was transmitting an AM signal into a dummy load and monitoring it with a handheld receiver. As I moved around, I noticed places in the room where the carrier was strong, but the modulation was weak. I observed the same when transmitting an FM signal using an antenna. If the signal wasn't simply just too strong for the receiver, what would cause this? SphericalShape (talk) 00:16, 16 January 2016 (UTC)[reply]

  1. A radio that receives at 563 terahertz is a photodetector. You can't build whip-antennas that small; your "receiver" would have to look and act like a photodiode.
  2. Yes, everything affects everything else; the electromagnetic impedance as seen by a transmitter is minutely affected by every natural and man-made "receiver" in the environment, all the way out to infinity distance; but the farther away, the smaller the effect; and the farther away, the longer the propagation delay. (The behavior of an object a trillion light-years away has a trillion year propagation delay in effecting any change to the impedance observed at the transmitter). For almost all practical purposes, these effects are so small that they are negligible; the ensemble averages out into the background, which is the impedance of free space. Sometimes, engineers use tuned, matched coils: this is about the only case where the effect of the receiver-antenna's loading is non-negligible: those systems are designed for power transfer. Chances are, the effective load caused by your crystal radio (drawing nanowatts of power) did not register on the VU meter tracking the power loading at the AM station fifty miles away. When you start throwing amplified circuits, and tunable radios, into the mix: well, the formal scientific method to analyze this is electrical engineering. The effective load on an amplifier's output is isolated from the input; but there is still some tiny, tiny effect: when you power on your powered radio, the antenna's impedance gets a slight nudge one way or the other; and just as before, that change propagates all the way back as a difference in the load for the radio transmitter. On the whole: these effects are absolutely tiny. The transmitter sees more load-variance based on the wind blowing molecules of air around, than due to all the cumulative effects of all the radio receivers out to infinity and back - and even that is smaller than the load-variance due to a hundred other effects, like the thermal noise inside the transmitter's power supply.
  3. Any number of effects might explain this subjective observation; everything from frequency-dependent tuning, to faulty equipment; but it's futile to speculate based on an anecdote.
Nimur (talk) 01:24, 16 January 2016 (UTC)[reply]
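To put rough numbers on point 2, a back-of-envelope sketch using the Friis transmission equation; the station power, distance, and antenna gains are all assumptions, and free space is an idealization of what is really ground-wave AM propagation:

```python
# Back-of-envelope support for point 2: power intercepted by an ideal
# isotropic receive antenna in the far field, via the Friis transmission
# equation (free-space idealization; real AM reception is ground-wave).
# Station power, distance, and unity gains are all assumed values.
import math

P_TX = 50e3          # transmitter power, watts (assumed 50 kW AM station)
FREQ = 1e6           # 1 MHz, in the AM broadcast band
C = 3e8              # speed of light, m/s
DIST = 50 * 1609.34  # 50 miles in metres

wavelength = C / FREQ
# Friis: P_rx = P_tx * G_tx * G_rx * (lambda / (4*pi*d))^2, with gains = 1
p_rx = P_TX * (wavelength / (4 * math.pi * DIST)) ** 2
print(f"received power ~ {p_rx:.2e} W "
      f"= {100 * p_rx / P_TX:.2e}% of the transmitter's output")
```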
I always thought (on an intuitive level, since I don't have any systematic knowledge in the area) that radios didn't load the transmitter at all, that this had something to do with "near" and "far" fields and that it was the chief difference between radio transmission and stuff like transformers, induction chargers and NFC tags. Asmrulz (talk) 05:53, 16 January 2016 (UTC)[reply]
This is a matter of practical semantics. If the theoretical effect is so small that even our best measurement equipment can't see it ... does this constitute "no effect"? Well, in the same sense that there is "no chance" that you could win the lottery... we're overloading the meaning of "zero" to mean both "zero" and "almost zero." The semantic problem is very subtle: there is a practical meaning of "almost zero," and there is an even more rigorously-defined theoretical mathematical definition of almost zero; and I admit some guilt in conflating both cases with "exactly zero." (This is because the English language is an inapt choice for certain specific kinds of descriptions). This ambiguity of natural language leads to a very common semantic problem, and if we aren't very careful with our terminology, it can lead us to believe an incorrect conclusion, or to find a paradox where none really exists.
So, to directly address Asmrulz's concern: receiver-antennas in the far field have almost zero effect on the transmitter. How tiny is that effect? It is so small that we probably can't measure it; but we can surely calculate it by solving for the complete electromagnetic field equations, at all points, and then performing some mental and algebraic gymnastics to mathematically transform the result into something that looks like an impedance.
This is where real radio engineering intuition is needed: if we start from pure theory of physics, it might take us days to perform such a calculation, only to find out at the end that our result can be safely ignored. Instead, if we start from practical experience, we know the approximate result of the difficult calculation, and therefore we don't actually perform it. This intuitive leap - knowing the answer by gut-feel - is a constantly-recurring theme in applied electromagnetics. It is one reason that other engineers call high frequency radio and antenna work a "black art."
Nimur (talk) 17:10, 16 January 2016 (UTC)[reply]
My intuition is that of a hobbyist who loves electronics über alles but struggles with quadratic equations, not to mention PDEs. Perhaps in my next life... Asmrulz (talk) 19:16, 16 January 2016 (UTC)[reply]
Does a TV detector van work by this method or Van Eck phreaking (or is it even real?) Wnt (talk) 11:26, 16 January 2016 (UTC)[reply]
From the picture it looks to be real. The antenna would be detecting UHF or VHF transmissions from the TV local oscillator. The TV is a Superheterodyne receiver. Graeme Bartlett (talk) 11:32, 16 January 2016 (UTC)[reply]
I tend to disbelieve the practical utility or historicity of the TV Detector Van; but these vans are an important part of the common culture and mythos amongst paranoids and tin-foil-hat wearers. Surely, the technology could have existed; and in all seriousness, it probably was used at some time or other. But, proposing that it was, or is, part of an ongoing mass-surveillance effort - well, that's much more tenuous a claim.
Of all the effects one could use to remotely determine if a television set is presently powered, the effect of its antenna would be far from the easiest. There are so many stronger signals, where do we begin enumerating them? The conventional cathode ray tube emits all kinds of characteristic radio energy: the flyback transformer carries a lot of power, and some of that is radiated outward. That signal, or the high voltage generated by the electron gun, is probably the easiest signal to detect. Heck, the time-varying load on the AC power supply mains would be easier to spot.
Obviously, new televisions do not use cathode ray tubes: if there really is a TV detector van, and it really does operate in this century, then it probably uses "some other method." Were I in charge of designing such an invasive technology, I'd simply remark that it is incredibly easy to hide a "bug" on a circuit board these days; a sophisticated integrated circuit in today's technology is so small, it could piggy-back on the back side of the solder-pad of a passive component and nobody would even look for it; or you could hide it inside the software in any of the dozens of independently-programmable computers that can be found on any modern electronic device. Your television's power cable probably has more compute-power than a PDP-11; and it's being built by some third-party vendor of bargain-basement commodity electronics technologies: such commodity vendors might be in a dire financial way and therefore susceptible to outside "funding opportunities." The evil genius at Mass Surveillance, Inc., could simply repurpose the fuel budgets from the surveillance vans, use the funds to pay off the vendor, and presto - every television could be carrying a cooperative surveillance device, broadcasting a tiny unique identifier to self-report its activity via wireless link. The days of being able to detect, let alone counter, such electronic surveillance technologies are long over. If the baddies wanted to surveil you, they shall do so; and you won't even notice it. When is the last time anybody ran a malware check, or inspected the open-source software, on their TV's power supply controller?
Nimur (talk) 17:47, 16 January 2016 (UTC)[reply]
Quibbles with the above are that the high voltage in a CRT television is not "generated by the electron gun" but instead is rectified via a Voltage multiplier from a pulse winding on the Flyback transformer, and that identifying payment evaders of a decreed TV licence fee is a legitimate part of law enforcement. The article Television licence shows that funding sources in different countries vary between licence fee and advertising. A rhetoric that licence funding is imposed by "evil...baddies" is, at best, ignorant that TV broadcasting always needs to be funded somehow or, at worst, unsourceable covert conspiracy speculation. AllBestFaith (talk) 20:24, 16 January 2016 (UTC)[reply]
For what it's worth, I do not believe that the UK's television license fee is evil; it's a complex policy that might actually promote better-quality, less-biased broadcasting, and I have often wondered if that policy could be implemented in the United States. Broadcast economics has always worked very differently on this side of the pond, and our government has a different legal and historical relationship with broadcasters, and with respect to our "free press" in general; so the issue is not simple at all. That's a conversation for another time.
Nor do I actually believe that any government is conducting mass surveillance in the fashion described above - certainly not for the purposes of verifying television-usage. I hope my comments are not construed in that way. To clarify my intended point: if any malicious entity - government or otherwise - wished to conduct mass surveillance, for any purpose, the most efficacious methods in this century probably need not involve driving around in vans.
With respect to your other quibbles - point conceded; I was a bit sloppy in my paraphrased description of the CRT. Interested readers should refer to our article for more details. Nimur (talk) 04:17, 17 January 2016 (UTC)[reply]
Every new broadcasting station has to conduct field measurements throughout its coverage area to establish its radiated signal strength and the relative strength of interfering signals. In the USA this work is naturally done by engineers in a vehicle; in the UK, in a prominently labelled TV detector van.
This explains the basics of field strength calculations.
UK detector vans are typically equipped with panoramic display receivers such as this Eddystone combo, on which the internal oscillator of a nearby TV receiver is traceable as a radiation spike 38.9 MHz below a vision carrier frequency. A number of these vans must be kept in service also for investigating reports of illegal transmitters (including espionage devices) and interference. I can attest that during the British GPO monopoly control of broadcasting, my complaint about interference to TV reception was met by visits by an enthusiastic engineer carrying a range of signal tracing and filtering equipment, all covered by the standard licence fee. It is obvious that licence collecting authorities find it cost effective to maximise the public expectation that unlicenced receivers will be traced, while seldom actually expending resources on general surveillance. A non-technical lay person in 1950s Britain might not easily distinguish between a dipole or a ladder on the roof of the ubiquitous telephone service vans and suspect that they were all TV detector vans! On Swedish TV I have seen placards that say "We are inspecting <name of town> with a receiver detector. Thank you for keeping your TV licence renewed."
This thread drifted into politically charged speculation about malicious surveillance hardware, but the article Conditional access describes non-covert methods and hardware used to protect broadcast TV content pre-emptively. AllBestFaith (talk) 16:51, 17 January 2016 (UTC)[reply]
1) Your assumption that a radio deals in electrons, not photons, is incorrect. There is no dividing line between light and radio waves - they are both types of EM radiation and are both made up of photons. Although, as Nimur said, a standard radio antenna won't pick up light-frequency photons, you could in theory build a nano-scale antenna that would do just that. See the experimental device called the nantenna. It's very inefficient and impractical, but in principle it does what you describe. --Heron (talk) 13:25, 16 January 2016 (UTC)[reply]
2) In practice, no to all your questions. The transmitter launches radio waves into free space and doesn't know what happens to them afterwards. The rest of the world acts as an almost perfect absorber unless the transmitter has the misfortune of being surrounded by tinfoil. It doesn't matter whether the energy is absorbed by a crystal set, a transistor radio or a tree, the transmitter just sees its energy disappearing into a bottomless pit. Incidentally, I have to quibble with Nimur's statement that the impedance of free space is an average of the impedances of all the stuff in the universe - it's not, it's the impedance of any piece of empty space. The wave from the antenna immediately sees the impedance of free space when it leaves the antenna, not after it's had time to bounce around the universe and average everything out. --Heron (talk) 13:44, 16 January 2016 (UTC)[reply]
In case there was any confusion about my statement: the impedance of free space is not caused by the loading effects of all objects in the far field. Rather, the antenna sees a load, which is a superposition of free space plus any other loading effects. This value averages out to the impedance of free space because the additional loading effects are generally negligible. I apologize that my statement was confusing. Nimur (talk) 15:13, 16 January 2016 (UTC)[reply]
Heron is correct to point out that the impedance of free space is a physical constant that is fixed by definition relative to the S.I. base units. It is seen by a transmitting antenna both "immediately" and over prolonged time as radiation propagates away, never to return; I mention exceptions to this scenario in response no. 2 below.
1. 563 THz or 563×10¹² Hz is the frequency of green monochromatic light emitted by a common DPSS laser pointer. Many other frequency distributions that stimulate the medium cones of the retina can give the same perception of green, because the eye is not a precise spectroscope. A few specialized radio receivers used in Radio telescopes detect electromagnetic radiation at millimeter and submillimeter wavelengths, see [15] and [16], but these are far below visible or infrared light frequencies. Among the many types of Photodetector that can respond to green light, types such as photoresistors, photovoltaic cells, photomultipliers, photodiodes, phototransistors and others convert incoming photons to electron current, which could constitute a non-tunable receiver.
2. The antenna of a Transmitter is designed to deliver electromagnetic energy into the Impedance of free space (i.e. the wave-impedance of a plane wave in free space), which equals the product of the vacuum permeability or magnetic constant μ₀ and the speed of light in vacuum c₀, i.e. about 376.73 ohms (a one-line numerical check of this constant follows this reply). Theoretically any object with a different impedance that intrudes into the space around the transmitter causes a reflection at the point of mismatch, and therefore a mismatch effect at the input to the transmitter antenna. Usually in broadcasting the powers reflected by receiver antennas are negligible and undetectable in comparison with the power delivered to space. Exceptions include large metal structures near a transmitter that may necessitate adjustment of the antenna matching circuit using an SWR meter, and deliberate analysis of reflected radio waves, which is the basis of Radar. The instruments known as Grid dip oscillator and gate dip oscillator may be regarded as small transmitters with inductive antennas that are sensitive to power absorbed in any nearby tuned circuit, and are useful for finding its resonant frequency by trial-and-error tuning. A tuned receiver front-end draws most power from the antenna when its resonant frequency equals the transmitted frequency. The power dissipated in an RLC circuit can be calculated if its Q factor is known.
3. The situation where signal is detected near a dummy load suggests the load is imperfectly matched to the transmitter or imperfectly screened, so the OP detected residual leakage. It is unlikely to have reduced AM though the sound volume from most AM radios decreases when the signal strength is weakened, e.g. by rotating the handheld radio. Most FM is transmitted at wavelengths of 3 m or less, see FM broadcasting, so where there are reflecting conductive surfaces moving a receiver by such a small distance can change the relative phasing of multipath interference which may locally distort, weaken or cancel reception. AllBestFaith (talk) 14:06, 16 January 2016 (UTC)[reply]
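As promised above, a one-line numerical check of the free-space impedance in point 2. This is a minimal Python sketch of our own (the variable names are illustrative, not from any source in this thread), multiplying the defined pre-2019 SI values of μ₀ and c₀:

```python
import math

# Impedance of free space: Z0 = mu0 * c0, both constants exact by
# definition in the pre-2019 SI.
mu_0 = 4 * math.pi * 1e-7    # vacuum permeability, H/m
c_0 = 299_792_458            # speed of light in vacuum, m/s

Z_0 = mu_0 * c_0             # wave impedance of a plane wave in free space
print(f"Z0 = {Z_0:.2f} ohms")   # prints: Z0 = 376.73 ohms
```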
Just let me add that Near and far field is our article about what Asmrulz mentioned above. – b_jonas 12:52, 18 January 2016 (UTC)[reply]

January 16

Cartilaginous fish outcompeting bony fish as predators

Bony fish (Osteichthyes) vastly outnumber cartilaginous fish (Chondrichthyes) in terms of both number of species and biomass. But in the ocean, more cartilaginous fish than bony fish are apex predators. Why would cartilaginous fish have outcompeted bony fish in that particular ecological niche? —SeekingAnswers (reply) 03:12, 16 January 2016 (UTC)[reply]

You may be cherry picking your data a bit. The apex predators in the sea would have to include whales, but you excluded them by specifying fish. So, once you eliminated whales, then sharks are a fairly large portion of the apex predators that remain, and they happen to be cartilaginous fish. StuRat (talk) 03:33, 16 January 2016 (UTC)[reply]
That's not cherry picking, since I never asked about bony or cartilaginous fish vis-à-vis mammals; I asked specifically about bony and cartilaginous fish vis-à-vis each other. The question remains why cartilaginous fish would outcompete bony fish in that niche. —SeekingAnswers (reply) 05:03, 16 January 2016 (UTC)[reply]
Predators are typically vastly outnumbered by their prey. If they weren't, the predators would die out. Evolutionary pressure may have favored bony fish for survival of the prey, while sharks were just fine as they were. And are. ←Baseball Bugs What's up, Doc? carrots→ 07:53, 16 January 2016 (UTC)[reply]
  • Look at r/K selection theory. Sharks have a low reproductive rate with small clutches fertilized internally, while bony fish produce a huge number of eggs, with the newborns often being planktonic. This means that in niches filled by smaller fishes, bony fish will tend to have an advantage. You can see the exact same thing by comparing the more primitive conifers, which comprise the tallest and oldest trees, with the angiosperms and their advanced systems of pollination and seed dispersal. If you consider plant succession, angiosperm "weeds" will colonize open land first, but conifers like the Douglas fir will tend to be among the apex species. μηδείς (talk) 17:55, 16 January 2016 (UTC)[reply]
@Medeis: Yes r/K may come in to it, as may size at birth. But IMO that alone doesn't really explain the issue (if there even is an issue to explain ;). You might be interested to know that plant succession can go the other way too. E.g. Loblolly pines in the Carolinas come in first after clear cuts or fires, and the climax community has much more hardwood broadleaf species (See e.g. Christensen and Peet, 1984, or most any of the studies on the Piedmont or Duke Forest). The fast growth of many conifers is tied to early successional status, and is also why they are such an important timber source. A professor of forestry once told me that this can be seen as a broad, general, trend: east of the Rockies, conifers tend to be earlier successional, while in the west they tend to be late successional. As to the OP @SeekingAnswers: I think this is a very interesting question, but I do think the premise should be clarified and perhaps challenged a bit. Are you saying that most cartilaginous fish species are top predators? Or that most top predator species are cartilaginous fish? Or are you claiming that, among species, a higher percentage of cartilaginous are top predators, as compared to that figure for bony fish? Some of these might be true, but none of them are obviously true to me, and the hypothetical reasons should differ for each one. I'm fairly busy this week, but if you contact me on my talk page I can send more refs later. SemanticMantis (talk) 16:06, 18 January 2016 (UTC)[reply]
Characteristic shape of Juniperus virginiana in old field succession
What I was suggesting (and really to make the point you'd need an essay) is that there was an earlier stage at which most of the existing "fish" niches were occupied by Chondrichthyes; i.e., sharks, rays, and chimeras, all of which had internal fertilization and produce relatively large eggs compared to bony fish. (I am not sure about the reproductive habits of the extinct spiny sharks and placoderms.)
When the teleosts arose (comprising the vast majority of bony fishes), they had largely external reproduction (see Teleost#Reproduction and lifecycle) with a huge number of strategies for dispersal. While the teleost Mola mola lays up to 300,000,000 eggs, sharks like the great white (see Great white shark#Reproduction) have only a few young after a very long gestational period.
True sharks existed in the Silurian period, and animals like the six-foot predator Cladoselache appeared in the Devonian, while teleosts only appeared some 100 million years later, in the Triassic. Sharks already filled K-selected niches at that point, hence the teleosts spread like "weeds" (see my angiosperm analogy above) into many r-selected niches, a great number of which might never before have been occupied.
My point with conifers was not to point out that they are only apex-succession flora; they are not. Where I grew up, a sight like that at the left would be typical of a plot 5-10 years after colonization--but eventually the oaks will encroach. I am quite familiar with pine barrens, and the fact that if fires are suppressed in those areas the pines will be replaced by oaks and other late-succession hardwoods. My point is that there are no conifer weeds, just like there is no shark equivalent of duckweed or crabgrass. μηδείς (talk) 18:32, 18 January 2016 (UTC)[reply]
There are perhaps a few edge cases, but I agree that there aren't any squarely ruderal conifers today. However, there used to be [17], [18]. SemanticMantis (talk) 19:18, 18 January 2016 (UTC)[reply]

Why are women bad at chess?

In chess rankings there's only 1 or 2 women in the top 100 players. Does anybody know why? 2.102.185.25 (talk) 06:07, 16 January 2016 (UTC)[reply]

Maybe all but those 1 or 2 simply don't like chess. That doesn't mean that women are inherently "bad" at chess. ←Baseball Bugs What's up, Doc? carrots→ 07:49, 16 January 2016 (UTC)[reply]
Maybe the OP's phrasing of women being 'bad' at chess was misleading, but it still begs the question why there are so few elite chess players who are women. Isn't it also true that men's brains are generally wired to be more logical than women's brains, or has that theory been debunked now? 95.146.213.181 (talk) 18:59, 16 January 2016 (UTC)[reply]
If it was simply a difference in interest, then you would need something like 50 to 100 times as many men interested in chess as women to predict that 1 or 2 out of 100 figure. Is there really this much of a discrepancy ? Somehow I doubt it. StuRat (talk) 22:10, 16 January 2016 (UTC)[reply]
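To put a rough number on the participation argument, here is a small Monte Carlo sketch (our own illustration, not from any of the posters above). It assumes, purely for the sake of argument, that both groups draw playing strength from the same distribution and differ only in how many take up the game:

```python
import random

# With identical skill distributions, the expected share of women in the
# top 100 roughly equals their share of the player pool, so ~2 per 100
# implies roughly a 50:1 participation ratio.
def average_top100_women(n_men: int, n_women: int, trials: int = 100) -> float:
    total = 0
    for _ in range(trials):
        players = [(random.gauss(0, 1), "M") for _ in range(n_men)]
        players += [(random.gauss(0, 1), "F") for _ in range(n_women)]
        players.sort(key=lambda p: p[0], reverse=True)
        total += sum(1 for _, sex in players[:100] if sex == "F")
    return total / trials

print(average_top100_women(n_men=50_000, n_women=1_000))  # ~2.0 on average
```

Under these (strong) assumptions, roughly 2 women in the top 100 would indeed require about a 50:1 participation ratio, which is the arithmetic behind StuRat's figure.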

This debate is as old as the hills, with many hypotheses and speculations, but there are no definite answers (and I don't expect there will ever be). Here is a Scientific American article blaming it on the negative effect of stereotypes, while chess grandmaster Nigel Short claims girls are not "hard-wired" in their brains to play chess well (personally having a 3:8 score against Judit Polgar). Personally I'm most sympathetic to the hypothesis that 100% confrontational and 0% cooperative games are far more attractive to men than to women: without a broad base, the pyramid of female chess players will not grow high, so most of the leading chess players will always be men, far beyond the gender ratio among total players. --KnightMove (talk) 08:34, 16 January 2016 (UTC)[reply]

It's difficult to disentangle "not interested in" and "not good at". If one group of people are dramatically less interested in some activity than some other group - then, inevitably, they will appear to be less good at it because of the lower probability of the most talented members of the group being involved. On the other hand, if some group is less good at something, they'll be less likely to participate in it. Sorting out which of those it is, is extremely difficult.
This applies in varying degrees to chess, mathematics, physics, computer programming and a range of other activities in which women are severely under-represented. One might argue about correlation and causation here - are they being actively discouraged in some manner - are they being passively left out in some manner - or are they simply less interested in those subjects for some reason of genetic pre-disposition - or are they (perhaps) actually less good at it? It may be a combination of such things.
It's equally possible to find groups where men are under-represented, or you can pick any other social, ethnic or religious group and come up with similar biases in similar areas of human interest.
SteveBaker (talk) 15:52, 17 January 2016 (UTC)[reply]

Actually, I play in Union Square, Manhattan a lot, where I've met some pretty genius-y women chess players. In fact, I met a nine- or ten-year-old girl who has been training for less than a year and holds a ~1600 rating. Yanping Nora Soong (talk) 05:03, 18 January 2016 (UTC)[reply]

Follicular lymphoma

[it looks like a new user posted this without a heading] Wnt (talk) 13:23, 16 January 2016 (UTC)[reply]

§information on follicular lymphoma — Preceding unsigned comment added by 24.210.25.88 (talk) 12:54, 16 January 2016 (UTC)[reply]

Well, we have an article on follicular lymphoma. Please say what aspect you're most interested in. Wnt (talk) 13:23, 16 January 2016 (UTC)[reply]

Is the “ankle” on both sides of the foot?

A little above the foot, on both sides, there are projections. One is on the inside of the leg and the second is on the outside of the leg. My question is whether both sides are called the "ankle" or just one of them? 92.249.70.153 (talk) 12:56, 16 January 2016 (UTC)[reply]

The ankle is a broad region that includes them both and much more. You're thinking of the medial malleolus (on the inside) and the lateral malleolus (on the outside). Wnt (talk) 13:27, 16 January 2016 (UTC)[reply]

Radio tuning by analog-to-digital conversion?

In the "Questions about EM" thread above I was reminded that, as "Superheterodyne receiver" puts it, "Virtually all modern radio receivers use the superheterodyne principle." But nowadays the instructions per second of many computer CPUs are up to 3 GHz, which is the top of the UHF band of the radio spectrum. I assume a system designed specifically for one purpose might even be faster. So is it possible to simply plug some kind of digitizer directly into the antenna feed, using no superheterodyne, and make a complete transcript of the entire radio-spectrum signal out there (well, up to UHF that is) like making a digital recording of sound? This should have some amusing features, can you confirm?

  • no TV detector van would work on it, apparently.
  • records every frequency at the same time.
  • can make custom algorithms to try to recover faint signals out from interference - even testing one after another, trial and error, until you find something to wring a distant TV program out of your record of its airing.

So do these things exist, or are they at least possible? Wnt (talk) 13:20, 16 January 2016 (UTC)[reply]

Software-defined radio, [19] -- Finlay McWalterTalk 14:29, 16 January 2016 (UTC)[reply]
Radio receivers with an untuned front end comprising only an ADC do exist; see this Wikibook. What can be received this way is limited by:
1. The sampling rate of the ADC
The Nyquist–Shannon sampling theorem sets an absolute limit on the radio frequencies that can be analyzed in the digital record. Higher frequencies contribute only noise (see 3. below), and in practice there must always be some pre-filtering to reduce them. If the ADC samples at 48 ksamples/sec one must not expect to detect frequencies higher than 20-24 kHz, which is VLF, and no amount of subsequent high-speed digital processing can overcome this limit.
2. The quantising resolution in bits of the ADC
Commonly used ADC resolutions are 12 - 24 bits for audio (48 ksamples/sec) and 8 bits for video (20 Msamples/sec). An ADC resolution of Q bits imposes a Signal to Quantising Noise Ratio of
SQNR = 20 log₁₀(2^Q) ≈ 6.02·Q dB
in the receiver circuit. A wanted signal is receivable only if the ratio of its power to the peak sum of interfering signals is significantly more than the reciprocal of SQNR. (A short numerical sketch of points 1 and 2 follows this reply.)
3. Interference and noise
These limit terrestrial radio communication more severely than free-space loss, and must not overdrive the ADC. Modern radios still depend on highly selective analog tuned circuits, and for that reason most tunable radio receivers are single- or dual-conversion superheterodyne designs. Advances in the instruction rate of CPUs that handle swings of several volts between binary 1's and 0's should not be confused with the state of the art in analog-to-digital converters, which as yet cannot compete with the microvolt signal-to-noise performance of an analog radio receiver front end. The OP who is interested in long-distance TV reception may find this article informative, and (legal advice follows) should not let their Television licence lapse. AllBestFaith (talk) 15:38, 16 January 2016 (UTC)[reply]
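As a numerical illustration of points 1 and 2 above, here is a short Python sketch of our own; the sample rates and bit depths are made-up examples, not specifications of any real converter:

```python
import math

def nyquist_limit_hz(sample_rate_hz: float) -> float:
    """Highest frequency analyzable without aliasing (Nyquist-Shannon)."""
    return sample_rate_hz / 2.0

def ideal_sqnr_db(q_bits: int) -> float:
    """Ideal quantising SNR for a Q-bit ADC: 20*log10(2**Q), about 6.02*Q dB."""
    return 20.0 * math.log10(2 ** q_bits)

# Audio-rate, video-rate and a hypothetical UHF-rate converter.
for rate, bits in [(48_000, 16), (20_000_000, 8), (6_000_000_000, 12)]:
    print(f"{rate:>13,} S/s, {bits:>2} bits: "
          f"Nyquist limit {nyquist_limit_hz(rate):>13,.0f} Hz, "
          f"ideal SQNR {ideal_sqnr_db(bits):5.1f} dB")
```

Note how the hypothetical 6 GS/s converter needed to digitize up to UHF directly gives only 12 bits (~72 dB), far short of what a narrowband analog front end achieves on microvolt signals.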
The performance of a wideband receiver is necessarily worse than the performance of a narrow-band receiver: see our article on Gain–bandwidth product. By extension, the performance of an ultra-wideband receiver is "ultra-worse."
In practice, what this means is that signal levels will be either too low or too noisy for your analog-to-digital converter (ADC). If you can spend a lot of money to buy a better ADC, you can improve the situation; but ultimately, there is a theoretical reason that explains why a heterodyned radio built on the same technology will still outperform your wide-band version. Where we stand with today's technology: most of the time, we can get the performance we need by using heterodyned radios with digitally controlled tuners, e.g. phase locked loop circuits built on CMOS integrated circuit technology. That's what you'd find if you took apart your cell phone or computer's WiFi radio circuitry. For unusual frequencies, we still use external tuners and mixers.
Nimur (talk) 17:57, 16 January 2016 (UTC)[reply]
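To make the tradeoff concrete, here is a tiny sketch (our own, with a hypothetical 1 GHz gain-bandwidth product): for a front end with a fixed gain-bandwidth product, the usable gain falls in proportion to the bandwidth you try to cover at once.

```python
# Gain-bandwidth tradeoff: usable gain ~ GBW / bandwidth for a fixed product.
GBW_HZ = 1e9   # hypothetical 1 GHz gain-bandwidth product

for bandwidth_hz in (1e4, 1e6, 1e8):   # narrowband through ultra-wideband
    max_gain = GBW_HZ / bandwidth_hz
    print(f"bandwidth {bandwidth_hz:>13,.0f} Hz -> maximum gain {max_gain:>9,.0f}")
```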

how do I add the partition coefficient data to an article?

I would ask on a different forum, but this is really specific science editing stuff -- I have some sourced data on log P values of different compounds like carbidopa, etc., but I can't see any documentation on how to add log P data on a pharmacological compound. I see that chembox has one but I don't want to start a whole new chembox on a pharmacological article just to include its log P data. I think the partition coefficient data would be useful for researchers working on extracting amino acid compounds. Yanping Nora Soong (talk) 14:04, 16 January 2016 (UTC)[reply]

{{Infobox drug}} does not have a logP field. But that template (as all templates) has a talkpage where you can discuss possible improvements/additions. DMacks (talk) 19:37, 16 January 2016 (UTC)[reply]
If you want to discuss changing chemistry articles, you can talk about this at Wikipedia talk:WikiProject Chemistry. Graeme Bartlett (talk) 20:44, 16 January 2016 (UTC)[reply]

How often do mentally ill people know they are mentally ill?

Common wisdom might claim that the crazy don't know they are crazy. However, are there serious studies about how common it is to be deluded about your mental health, maybe broken down by illness? --Scicurious (talk) 14:16, 16 January 2016 (UTC)[reply]

The psychiatrists and neuronormative community like to call this "insight". I have been accused of having very poor insight (on my medical records) during my 71-day stay in a Beth Israel Medical Center psych ward because I argued I didn't need to be involuntarily held for treatment. After getting my hearing adjourned twice, the judge finally released me. A lot of people in the neurodivergent community have differing views on what having "insight" means. Yanping Nora Soong (talk) 15:03, 16 January 2016 (UTC)[reply]
Thanks for the concept, your post is very insightful.--Scicurious (talk) 15:27, 16 January 2016 (UTC)[reply]
If you think about it, most people don't think they're crazy -- it's the default assumption. People prefer to believe that they are a sane person -- perhaps even the one sane person -- living in an insane world. Thus people with depression see the world as objectively bleak and horrible, paranoids see the world as full of people plotting against them, and many people with schizophrenia see their intrusive thoughts as being beamed in from the outside, instead of believing that they might be mistaken about the nature of reality. This is similar to people's beliefs about the relationship between their own religious and political views and those of others: leftists and rightists look at one another's beliefs, and think these people must be either evil or insane to believe these things. -- The Anome (talk) 15:15, 16 January 2016 (UTC)[reply]
The question remains. Many among the mentally ill have ups and downs. A share of them will know they are out of their mind. How big is this share?--Scicurious (talk) 15:27, 16 January 2016 (UTC)[reply]
Are you considering diagnosed or undiagnosed mentally ill people, or both? -- The Anome (talk) 15:37, 16 January 2016 (UTC)[reply]
There has to be some sort of diagnostic. --Scicurious (talk) 22:13, 16 January 2016 (UTC)[reply]
The world is run by people pointing doomsday weapons at each other, people play tourist in space while others starve, there are 30 empty homes for every homeless person and in every single community the police treats them as the criminals to be watched, for the want of mass-printed pieces of paper ... surely everyone is insane, even if only a few of us know it. (Which doesn't mean we're any saner than the rest) Wnt (talk) 02:59, 17 January 2016 (UTC)[reply]
Another problem is that many mental health issues, such as depression, anxiety, OCD, etc. come in degrees. Is someone who is perhaps a little bit depressed but remains independently functional, mentally ill? An argument could be made either way. People who may have some difficulties, but can otherwise manage in everyday life, are less likely to consider themselves mentally ill regardless of what a psychiatrist might conclude. On the other hand, people with severe problems that make normal life impossible are more likely to be aware of and acknowledge that a problem exists (provided they are capable of coherent thought and communication at all). So a lot is going to depend on the group of people you are talking about and the severity of the illness. Dragons flight (talk) 15:50, 16 January 2016 (UTC)[reply]
There's also a big difference between people who are delusional, and people who are not. Would you consider people who are mentally ill (in the sense of meeting the criteria for a clinically defined disorder), but not delusional, "crazy"? Also: there are lots of people out there who are cranky, bitter, antisocial, unhappy, etc. but don't meet the current criteria for any recognized mental illness: would you count them as being "crazy"? -- The Anome (talk) 16:33, 16 January 2016 (UTC)[reply]
It has to depend on the nature of the mental condition - and what you mean by "know". For example - I have Asperger syndrome (a variety of Autism) which is a form of mental "illness" - and as you can clearly tell from the fact that I'm writing this, I'm well aware that I have it - and the limitations it imposes upon me are entirely self-evident to me. The fact that something was "wrong" was entirely obvious to me even before I obtained a diagnosis and had a label attached to it. But that's just one (relatively mild) condition.
So let's consider something much more severe - like paranoid schizophrenia perhaps. John Forbes Nash, Jr.'s story is a classic case: According to our article, initially..."Nash seemed to believe that all men who wore red ties were part of a communist conspiracy against him; Nash mailed letters to embassies in Washington, D.C., declaring that they were establishing a government."...so at the time, he clearly didn't know that he had a problem - he thought he was perfectly sane. However, we're told that later: "Only gradually on his own did he "intellectually reject" some of the "delusionally influenced" and "politically oriented" thinking as a waste of effort. By 1995, however, even though he was "thinking rationally again in the style that is characteristic of scientists," he said he felt more limited." - so by then, he clearly knew he had a problem - he'd quite deliberately stopped taking the medication (because it blurred his ability to think clearly enough to do mathematics) - and yet the problem was still present. His earlier inability to understand that he had a problem is self-evident - his later ability to intellectually reason that some of the things he definitely could see were unreal is a clear demonstration that one can have a very severe mental illness and be fully aware that one has it. In the movie about his life, A Beautiful Mind (film), the closing scenes have him working in a university, talking to some students - and he opens the conversation by inquiring which of them are "real". It must be very bizarre to lead an existence where one has to ask such questions and distrust one's own senses to that degree.
Nash's case is a good one - at some point in his life, he was clearly "crazy" (to use a politically-incorrect term) and utterly unaware of it. Later, he was still clearly in the grips of the illness - and refusing treatment - yet was fully aware that some portion of his experience of the world was delusional. I think this demonstrates that the answer to this question is "Maybe".
But there is also a question about what you mean by "knowing". Knowing that something is true because a doctor tells you and you trust their judgement is one thing, but knowing that one has a problem from internal reference alone is quite another. In my case, the latter was clearly the case - but in Nash's case, it's unclear whether he would ever have figured out that he was delusional without being informed of it by people he trusted.
So the answer (as is so often the case) is an ambiguous "it depends..."
SteveBaker (talk) 15:29, 17 January 2016 (UTC)[reply]
A confounding factor for any study that would be regarded as useful regarding insight is that with mental illness insight can often vary during a person's lifetime. For instance, lack of insight is perhaps more likely in people that experience a mental health problem such as a psychosis for the first time. Then even once a diagnosis has been established, mental confusion and failure of insight can come and go depending on the person's condition such as with cyclothymia. --Modocc (talk) 15:39, 18 January 2016 (UTC)[reply]

what is the partition coefficient of 1-octanol defined as?

Not a silly question, I hope. For example, cyclohexanol's reported log P is +1.23. Does this mean it is more hydrophilic or less hydrophilic than 1-octanol? Yanping Nora Soong (talk) 17:10, 16 January 2016 (UTC)[reply]

It would help to provide the source, but note that the info box in our article on cyclohexanol says 3.6 g/l dissolve in water, vs. 0.46 g/l for 1-octanol. I'd hazard a guess it is more hydrophilic. Wnt (talk) 17:39, 16 January 2016 (UTC)[reply]
You mean you want to know what an octanol-water coefficient really means at a technical/mathematical level? DMacks (talk) 19:34, 16 January 2016 (UTC)[reply]
Rather, what is octanol's own octanol-water coefficient? It surely can't be 0 (log P = negative infinity?). Does that make sense? If we define octanol to be miscible with octanol, then octanol's "solubility" in octanol is 6.3 M, and 3.6 g/L implies an aqueous solubility of 27.6 millimolar. That means that octanol's own water-octanol log P is -2.36?
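For clarity, the arithmetic of the post above as a Python sketch, using the numbers exactly as quoted in this thread (which, as the next reply notes, may themselves be off):

```python
import math

# Back-of-envelope octanol/water log P for 1-octanol from the thread's
# own figures; these inputs are the posters' numbers, not vetted data.
MW_OCTANOL = 130.23             # g/mol, 1-octanol
c_in_octanol = 6.3              # mol/L, neat 1-octanol ("solubility" in itself)
c_in_water = 3.6 / MW_OCTANOL   # mol/L from the quoted 3.6 g/L (~0.028 M)

log_p = math.log10(c_in_octanol / c_in_water)   # octanol/water partition
print(f"estimated log P = {log_p:+.2f}")        # ~ +2.36 (water/octanol: -2.36)
```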
The solubility of 1-octanol in water can be easily found in the literature, no need to imply or derive it (and it probably cannot be calculated without knowing additional parameters anyway). You might be off by about an order of magnitude. DMacks (talk) 20:59, 17 January 2016 (UTC)[reply]
I'm basing it off the infobox data given to me by Wnt. I am talking about the theoretical consideration of the octanol/water partition coefficient of 1-octanol. What is it defined as? What is the log P of 1-octanol? Yanping Nora Soong (talk) 05:00, 18 January 2016 (UTC)[reply]

Alcohol and exercise

Apart from the calories in alcohol, what are the downsides to drinking alcohol after exercise or strenuous work (let's say 4 pints of lager a few hours after)?

I guess that drinking would have an impact on the body's ability to rebuild muscle, tendons etc. Maybe it would also affect the glycogen stored in muscles, vitamin/mineral supplies and hydration, although I think these are all restored within a few hours of exercise if you've had an appropriate meal and non-alcoholic drinks?

Thanks, Mike — Preceding unsigned comment added by 95.146.213.181 (talk) 18:43, 16 January 2016 (UTC)[reply]

Alcohol is quickly metabolized into sugar, therefore I would expect the effects to be similar to eating lots of sugar. So probably not good when sedentary, where that sugar will be converted into fat. On the other hand, exercising after drinking might be a good way to burn off those calories, as long as the exercise can be done safely. StuRat (talk) 22:13, 16 January 2016 (UTC)[reply]
@StuRat: Ahem ... try ethanol metabolism. The conversion to acetyl-CoA has more in common with fat catabolism. For diabetics, alcohol is not as bad as glucose. Wnt (talk) 02:53, 17 January 2016 (UTC)[reply]
Why do you think alcohol consumption would impact the body's ability to recover from exercise? I'm asking to find out what information you're basing this on. I'm not aware of there being a significant impact. The biggest short-term effect I can think of, aside from the psychoactive effects of alcohol, is dehydration, which you touched on. Alcohol interferes with antidiuretic hormone, increasing urine production (something many drinkers are familiar with). This can lead to dehydration if you don't drink enough water to make up for the loss; dehydration is thought to be one of the things responsible for the effects of hangovers. Now, with all that said, there's a different issue here: four pints of lager a day is borderline excessive drinking. For long-term health, anyone consuming that much alcohol regularly should reduce their consumption. --71.119.131.184 (talk) 07:28, 18 January 2016 (UTC)[reply]
I'm not basing the impact on the body's ability to recover on any info, it was an assumption on my part. If the body is working to process the alcohol, isn't it using resources to deal with that rather than recovering from exercise? 95.146.213.181 (talk) 18:03, 18 January 2016 (UTC)[reply]

January 17

Is one of the following option right?

I found this question on Facebook (chemistry group), but I'm not sure if one of the given options is right.

"salt" in chemistry is:
a) compounds that have ionic bonding
b) compounds that consist of elements of the halogen family, no matter what is the type of the bonding (ionic or covalent)
c) compounds that have the Cl element
d) compounds that have no metals
According to our article: "In chemistry, a salt is an ionic compound that results from the neutralization reaction of an acid and a base." and I don't see this option here. Are the options wrong? 92.249.70.153 (talk) 05:08, 17 January 2016 (UTC)[reply]
a. They are compounds with ionic bonding. Not all salts result from the neutralization reactions of acid and a base. For example, how would you explain the formation of tetrabutylammonium bromide? Yanping Nora Soong (talk) 05:30, 17 January 2016 (UTC)[reply]
Thanks, so if I understand you well the mistake is in the article here. Am I right? 92.249.70.153 (talk) 06:35, 17 January 2016 (UTC)[reply]
No, our article (Salt (chemistry)) provides a definition that is generally considered correct.
Most introductory chemistry textbooks will use a definition very similar to the one you find in our article. There are corner cases and subtleties of definition. Most importantly, if you study more chemistry, you will learn that the definition of acid and base is trickier than it first seems. In introductory chemistry, you will focus on the standard definitions and standard chemical reactions; but as you dive deeper, each successive complication necessitates a refinement of many definitions.
In some sense, we call this style of formal education a "lie-to-children," but that's not entirely fair. If you want a complete and total definition of the word "salt" in chemistry, you'll have to read hundreds of books and thousands of research papers. If you want the definition in one sentence, our article (Salt (chemistry)) does a great job introducing the concept in its lede.
If you strongly feel that the opening definition in that article is incorrect, then:
  • Find multiple reliable, encyclopedic sources to back you up. An internet-quiz on a social forum is not really a reliable source.
  • Engage with the regular contributors at Talk:Salt (chemistry) and discuss your proposal.
  • Reach consensus and make a change.
In this case, I do not recommend making the change first, because most educated chemists and scientists will agree that our article's lede definition is generally correct.
Nimur (talk) 17:53, 17 January 2016 (UTC)[reply]
The generic form of the given definition is:
[H⁺][B⁻] + [A⁺][OH⁻] → [A⁺][B⁻] + HOH
so it's trivially easy to see what one would react with what to give any arbitrary cation/anion result. Just because you happen to know how to make the A+ from something other than A itself doesn't have any relationship to the fact that AB can be made starting with AOH. DMacks (talk) 20:52, 17 January 2016 (UTC)[reply]
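A concrete instance of the generic form, for ordinary table salt: [H⁺][Cl⁻] + [Na⁺][OH⁻] → [Na⁺][Cl⁻] + HOH, i.e. hydrochloric acid plus sodium hydroxide gives sodium chloride plus water. The same pattern identifies an acid HB and a base AOH for any target salt [A⁺][B⁻].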
The Wikipedia definition seems a little weird to me, but the text from ionic compound sheds some light on it: "Ionic compounds containing hydrogen ions (H⁺) are classified as acids, and those containing basic ions hydroxide (OH⁻) or oxide (O²⁻) are classified as bases. Ionic compounds without these ions are also known as salts and can be formed by acid-base reactions." So the definition given is kind of a roundabout way of saying that if you have an ionic compound and you get rid of any H⁺ and OH⁻ present by reacting them, you get a salt. But the thing is, a salt can result from reacting an acid and a base, but it doesn't have to, if you don't neutralize all equivalents of H⁺ or OH⁻. (Also I suppose there must be some cute example where you have an OH⁻ in a cage of carbon or something so it won't directly react with the acid?) I don't see an obvious reason not to adapt the ionic compound definition and say that a salt is an ionic compound that doesn't contain H⁺ or OH⁻. This is really semantic, not a matter of true acid or base nature as per Lewis acid, given that AFAIK lithium tetrachloroaluminate is a salt, even though it is hazardous as an acid that will readily react with water. [20] I would assume that mixing lithium hydroxide and hydrogen tetrachloroaluminate will not produce much of a yield of lithium tetrachloroaluminate + water, since the reverse reaction occurs so readily, so its classification under the definition currently used in the salt article seems very iffy. Comments? Wnt (talk) 14:35, 18 January 2016 (UTC)[reply]

what are the compounds that occur between two metals called?

According to what I'm reading now in the "Chemistry Essentials for Dummies" book (p. 72), ionic compounds occur between a metal and a nonmetal, while covalent compounds occur between two nonmetals. So my question is: what is the compound that occurs between two metals called? 92.249.70.153 (talk) 05:14, 17 January 2016 (UTC)[reply]

Update: I found the answer right there. It's called metallic bonding. 92.249.70.153 (talk) 06:36, 17 January 2016 (UTC)[reply]
Yup. Metallic bonding is one of the major types of chemical bonds. DMacks (talk) 11:43, 17 January 2016 (UTC)[reply]
To be strictly correct here, we're confusing two different terms. A chemical compound is different from a chemical bond. A compound is a bulk material, while a bond is a type of force of attraction between particles. For example, something like sodium sulfate is usually classified as an ionic compound, but it has both ionic bonding (between the sodium ion and the sulfate ion) and covalent bonding (between the sulfur and oxygen atoms within the sulfate polyatomic ion). Metallic bonding, by its very nature, does not really fit into the "compound" thinking for many reasons, and we don't often use the term "metallic compound" in the way we use terms like "ionic compound" and "molecular compound". We usually use terms like "pure metal" or alloy to describe metallic bonding where all atoms are the same, vs. one with multiple metallic elements. It has to do with the nature of metallic bonding, the so-called sea of electrons model. Ultimately, alloys exist in the fuzzy boundary between compounds, homogeneous mixtures, solutions, etc. As a student of chemistry, it is far less important whether one classifies an alloy as a compound or a mixture, and far more important that one understands what is going on at the atomic level. --Jayron32 02:10, 18 January 2016 (UTC)[reply]

Science (Physics?) Question

I have limited math skills and zero physics skills. My question is complicated. To begin: light has no mass, and as objects approach light speed they become more massive (E=mc squared?); however, gravity bends light, and gravity acts on mass, therefore light must have mass to be bent by gravity. Or am I going astray in my understanding of light, mass and gravity? I am 68 and not in school, but interested in astronomy (mostly self-taught). Thank you for your help.------ Dennis H

There are several ways to understand this. One complicated one is that gravity bends the universe, so light, traveling in a straight line, ends up traveling along a curved path. I personally find it much easier to understand that gravity actually affects ENERGY (for example, an object moving very fast has lots of energy, so it will be affected by gravity more), and since light has energy, obviously it would bend. (A complication: gravity changes the speed of things, but light cannot change speed, so how is it able to be affected?) Ariel. (talk) 06:39, 17 January 2016 (UTC)[reply]
I think when we say light has no mass, that means no rest mass, but there is also a type of "mass" that's due to relative motion. StuRat (talk) 07:03, 17 January 2016 (UTC)[reply]
See Gravitational lens for a fairly non-technical explanation, and Two-body problem in general relativity for something a bit more advanced. On the question of whether light has mass, see Mass in special relativity. Tevildo (talk) 11:58, 17 January 2016 (UTC)[reply]
You've made a good observation. General relativity describes gravity as a warping of spacetime, and consequently, it predicts gravity will affect even things with no rest mass, like photons. This is a significant difference from Newtonian gravity, and observations of light from distant stars being bent by the Sun's gravity were a major piece of evidence that convinced many scientists of the accuracy of general relativity. These videos by PBS Space Time are a really good primer on relativity, and I highly recommend them. --71.119.131.184 (talk) 06:27, 18 January 2016 (UTC)[reply]
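As a worked example of that bending: general relativity gives the deflection of light grazing a mass as θ = 4GM/(c²b). A minimal Python sketch (standard constant values; nothing here is from the posts above) reproduces the famous ~1.75 arcseconds measured for starlight grazing the Sun:

```python
import math

# GR light deflection: theta = 4*G*M / (c**2 * b), b = impact parameter.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458      # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m (light grazing the limb)

theta_rad = 4 * G * M_SUN / (C ** 2 * R_SUN)
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"deflection at the solar limb: {theta_arcsec:.2f} arcsec")  # ~1.75
```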

1966 Palomares B-52 crash

Did those Mk28-type hydrogen bombs in the 1966 Palomares B-52 crash contain non-nuclear explosives because it was a non-combat mission? Or was there some other reason the nukes weren't armed and didn't explode? Our article doesn't seem to clarify that. --93.174.25.12 (talk) 10:40, 17 January 2016 (UTC)[reply]

All nuclear bombs contain a conventional, non-nuclear explosive as well as the fissile nuclear core. That chemical explosive compresses the core, and that starts the fission explosion. When the article says "the non-nuclear explosives in two of the weapons detonated upon impact with the ground", it means those explosives detonated, but they didn't set off the nuclear explosives they were attached to. -- Finlay McWalterTalk 10:53, 17 January 2016 (UTC)[reply]
Ball bearing safety system in a British nuclear weapon
The article doesn't say why the chemical explosives didn't trigger the nuclear physics package. Presumably they have some safety mechanism to prevent inadvertent nuclear detonation - but I can't find out specifics of what that might be for this bomb variant in either the B28 nuclear bomb or Python (nuclear primary) articles. I know some nuclear bombs keep an inert material (in some cases steel ball-bearings) in the void inside the core - these had to be removed to "arm" the bomb, presumably in-flight during an actual nuclear bombing raid. This is the mechanism used in the British Violet Club and related bombs; presumably US weapons had some analogous system. -- Finlay McWalterTalk 11:16, 17 January 2016 (UTC)[reply]
Two things. One, every nuclear bomb contains non-nuclear explosives. The chemical explosives are what assemble the fission core into a critical mass, when they are detonated. See nuclear weapon design. Two, nukes aren't armed unless you're planning to set them off. This accident demonstrates why. Most nuclear weapons contain multiple safety devices to keep them from going off unless you're quite sure you want them to. For instance, nuclear missile warheads include devices that only arm the warhead when they detect the acceleration from being launched. --71.119.131.184 (talk) 11:41, 17 January 2016 (UTC)[reply]
In some designs it's important that the pressure wave from the chemical explosives is symmetrical, otherwise it won't compress the core enough to make it critical. If an impact accidentally sets them off, they'll probably fire first on one side of the sphere, whereas in an intentional detonation they're fired electrically, all at once. I have no expertise in the matter, but it makes sense that this could prevent the nuclear explosion from happening. --76.69.45.64 (talk) 19:43, 17 January 2016 (UTC)[reply]
The deal is that with conventional explosives, the materials are inherently unstable - always on the verge of an explosion. Whack a bomb the wrong way and KABOOM! But with nuclear weapons, it requires considerable finesse to bring the nuclear material together fast enough to get them to critical mass without the increasing temperatures as you approach criticality blowing the bomb apart before it can properly explode. This failure is called a fizzle. Almost any fault in the way the bomb goes off can cause this - so an accidental full-on nuclear explosion due to a damaged bomb is highly unlikely.
That said, a fizzle can be a very dangerous outcome in itself. Although all of that explosive power won't be unleashed, the conventional explosives and the heat of fizzle can cause horribly radioactive material to be spread over a large area resulting in contamination that would be a serious problem to clean up.
But even for a fizzle to happen, the conventional explosives have to explode - and that is no more likely than in a conventional bomb. Probably less so because of the extra care and attention that's paid to the safety of the design and construction of nuclear devices. Conventional explosive bombs with faulty fuses rarely explode spontaneously - even after 50 or more years buried in soil or rubble.
SteveBaker (talk) 20:34, 17 January 2016 (UTC)[reply]
Nitpick: your statement isn't true for all conventional explosives. Some are designed to be very stable. Many plastic explosives can be lit on fire and not explode. --71.119.131.184 (talk) 06:14, 18 January 2016 (UTC)[reply]

Bigger microclimates

In areas with generally uniform topography (whether flat or consistently hilly), what factors can produce climatological anomalies that are hundreds of square miles in area? Go to File:2012 USDA Plant Hardiness Zone Map (USA).jpg and look at Ohio; there's a big light-blue blob just northeast of Columbus, for reasons that I can't understand. The nearby city of Mansfield is large enough that it generally appears on statewide weather maps (the ones showing current or predicted temperatures for the state's larger cities), and it's routinely the coldest of any such city, despite lying in a region that mixes flat farmland with low-relief wooded hills no closer to major waterbodies than the surrounding terrain. The state's other light-blue areas are part of large zones or are the effects of smaller microclimates (see Milligan, Ohio for the area southeast of Columbus), with nothing comparable to the Mansfield area. Nyttend (talk) 15:35, 17 January 2016 (UTC)[reply]

A topographic map (e.g., here) shows that this area is a few hundred feet (~100 m) higher than the surrounding region. Not exactly the Cascade Range, but enough relief to have a modest climate influence. Shock Brigade Harvester Boris (talk) 15:55, 17 January 2016 (UTC)[reply]
That blue blob is the Tibet of Ohio. 1400-151x feet above sea level! (for comparison the Empire State Building antenna is 1,504 feet above sea level) [21] Sagittarian Milky Way (talk) 16:11, 17 January 2016 (UTC)[reply]
But the region that includes the state's high point, northwest of Columbus a short distance, has a climate similar to the surrounding region; the local ski resort (see File:Mad River Mountain and Valley Hi.jpg) exists because of snow-making machines, not because the area gets additional cold weather. And going to Boris' map — you also don't have a colder zone in Belmont County and areas north of there in the far east, which is the state's largest area of 1300+ feet, even when you get back from the river and its potential warmer microclimate. Nyttend (talk) 16:22, 17 January 2016 (UTC)[reply]
Just a guess but those two regions look steeper than the blue blob (especially the lowest of all three), causing faster drainage of cold air? Also, the highest point in Ohio is in a city park 2 miles from downtown (heat island effect?), and only 29-40 feet higher. Sagittarian Milky Way (talk) 16:45, 17 January 2016 (UTC)[reply]
I strongly doubt that it's a heat-island effect; look at the location, 40°22′13″N 83°43′12″W, and it's easy to find other 1500+ spots out in the township, e.g. 40°22′21″N 83°39′24″W near the spot marked "New Jerusalem" on the USGS topo map, while only the highest spots in Mansfield are above 1300 feet, and Mansfield, being a good deal larger, is more likely to generate the heat island effect, although I doubt a large effect; the final sentence of Urban heat island#Causes says that a 1-million-person city may create a 2-5ºF difference in mean annual temperature, and the two cities are 13K and 47K respectively. Nyttend (talk) 01:17, 18 January 2016 (UTC)[reply]
Just a few thoughts: binning continuous data into discrete chunks can always produce artifacts, e.g. discretization errors. The USDA hardiness zones for 2012 are computed via mean annual minimum temp, 1976-2005. Such temperature information at that resolution is the effect of downscaling, which involves all kinds of mathematical voodoo (which usually works well, but should not be universally blindly trusted, as that can lead to false precision errors in the gridded data).
Now, the good folks at USDA are clever, and I'm not saying the whole thing is an artifact. It probably is a bit cooler there. But perhaps the nature of the data product, combined with the high elevation, may make this anomaly more apparent on the map than it is in reality. I would not be surprised if 75% of the blue region you mention is only 1 °F lower in mean annual min than a wide swath of the surrounding green. Finally, you may get a bit more out of looking at older hardiness maps. As you probably know, these zones are changing, and this previous version does not have that feature. Here [22] you can see how they have changed, and also note the weird banding structure in the diffs (I have no idea why those bands show up, but it is almost certainly not anomalous, and illustrates how these things often defy simple intuition - climate science is hard stuff!) SemanticMantis (talk) 15:52, 18 January 2016 (UTC)[reply]
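A tiny sketch of the binning point above (our own toy example; the 5 °F-wide bands merely stand in for the real USDA zones): two sites with nearly identical mean annual minima can straddle a bin boundary and so appear a whole zone apart on the map.

```python
def zone(mean_annual_min_f: float) -> int:
    """Map a continuous mean annual minimum onto a discrete 5-degree band."""
    return int(mean_annual_min_f // 5)

site_a, site_b = -0.1, 0.1         # hypothetical sites only 0.2 F apart
print(zone(site_a), zone(site_b))  # -1 vs 0: a full-zone difference on the map
```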

universal basic income

Why do most variations of universal basic income assume that everyone will suddenly become utopians overnight instead of remaining feckless, lazy addicts? The human mind can't take endless free time; a strong work ethic only comes about through necessity for basic survival. — Preceding unsigned comment added by DannyBIGjohnny (talkcontribs) 18:04, 17 January 2016 (UTC)

This question, as phrased, does not appear to be a request for scientific reference material. Would you like to rephrase it, or do you need help finding an internet discussion forum on that topic?
Nimur (talk) 18:19, 17 January 2016 (UTC)[reply]


There are a lot of assumptions in your question:
  • "The human mind can't take endless free time" - Firstly, how do you know that? People retire from work all the time - and remain perfectly sane despite having "endless free time". Secondly, what makes you think that people without work have "free time"? Perhaps they are taking care of children or a sick relative...maybe they are using their time to invent The Next Great Thing?
  • "a strong work ethic only comes about through necessity for basic survival" - Again, how do you know that? Plenty of people work harder than necessary for "basic survival" in order to have a better-than-basic life.
  • "remaining feckless, lazy addicts" - Why do you think people who don't get that universal basic income are "feckless", "lazy" or "addicts"? That is also far from the true in every case.
To answer the part of the question that seems to matter, read Basic income pilots which lists the outcomes of Basic Income experiments around the world. The three that were tried out in the USA had really good outcomes. The early studies found only 17% less paid work being done among women, 7% among men. The gender difference probably implies that women found themselves able to stay home and look after their children...so "feckless" certainly doesn't seem to have been a significant result. They found that the money was not squandered on drugs and luxury goods...so much for "addicts". There was an increase in school attendance. Another study reported reduced behavioral and emotional disorders among the children, an improved relationship between parents and their children, and a reduction in parental alcohol consumption. Again, contradicting your expectations.
I doubt many people think that a universal basic income would result in a "utopia", but it's fairly clear that we would expect a significant number of benefits to accrue to society as a whole. SteveBaker (talk) 20:17, 17 January 2016 (UTC)[reply]
Social benefits, although not exactly the same, are also a testing scenario for the idea. Countries with them, including those with generous cash-in-hand social benefits, did not succumb to all the forms of vice. There is plenty of empirical hard data, beyond ideological worldviews, with which to analyze the effect of introducing a basic income scheme. Denidi (talk) 22:03, 17 January 2016 (UTC)[reply]
In case you are not aware, you have posted this question to a place that exists almost solely because of motivated people who are volunteering their time to a cause they believe in. You are probably less likely to run into people here who believe the "default" human condition is "feckless, lazy addicts". Vespine (talk) 23:21, 17 January 2016 (UTC)[reply]
You need money to make money, and if you don't have enough to begin with you might not be able to work your way up. Especially if a means-tested welfare system means working more doesn't actually result in a net increase in wealth. Those problems shouldn't apply in the case of a universal basic income, and the advocates of such would argue that some/most examples of people (apparently) "remaining feckless, lazy addicts" are actually the result of the first two problems mentioned. 62.172.108.24 (talk) 15:49, 18 January 2016 (UTC)[reply]

How can black holes form?

I know this has probably been asked before or is in a wikipedia article but I can't find the answer.

To an observer it takes an infinitely long time for matter to pass the event horizon of a black hole. So how does the black hole form in a way we can be aware of it or its effects? If it takes an infinite amount of time for matter to get there, how can it 'exist' to us? I've read about the time dilation effect, and I think I understand the basics but how can two black holes collide to form a supermassive black hole when from our perspective that would take an infinite amount of time?

I hope my question makes sense! Thanks 95.146.213.181 (talk) 19:56, 17 January 2016 (UTC)[reply]

For an external observer they never collapse completely, staying in a sort of frozen state with the radius close to the gravitational radius. Ruslik_Zero 20:14, 17 January 2016 (UTC)[reply]
Exactly. The infinities don't come about until the event horizon has formed - and once it has, it's meaningless to talk about what's happening "inside" while still considering events from the perspective of an outside observer. SteveBaker (talk) 20:21, 17 January 2016 (UTC)[reply]

OP here, thanks for the quick replies. I'm not concerned about what's happening inside the event horizon (or do I need to understand that before I understand what happens outside it?), I still don't understand how they can form from our perspective as outside observers. Could you give some links for a layman to understand please? I've read the wikipedia article on black holes and under the growth sub-heading it states 'Once a black hole has formed, it can continue to grow by absorbing additional matter'. How can it do that, if it takes an infinite amount of time?

I'm sorry if I'm not explaining my question clearly (and I realise that much greater minds than mine, or even the ref desk, know how black holes form). To put it another way, as the density of a 'proto-black hole' approaches that of a black hole, to us (and the rest of the universe) matter moves into it at slower and slower speeds. The bit I don't understand is how, from our perspective, matter moving into the proto-black hole can get there to form a black hole.

Thanks 95.146.213.181 (talk) 20:53, 17 January 2016 (UTC)[reply]

Leonard Susskind explains this by using the uncertainty principle to show that from outside we cannot tell whether a particle falling into a black hole is still outside the event horizon or not. As something approaches the event horizon, a photon or particle probing its position from outside has to become more and more energetic to determine where the infaller is, until the energy required is more than the mass-energy of the infaller or the black hole, resulting in the probe destroying what we are trying to observe. Graeme Bartlett (talk) 21:22, 17 January 2016 (UTC)[reply]
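A rough back-of-envelope version of that argument as summarized above (our own illustrative numbers, not Susskind's): localizing an infaller to within Δx takes a probe of energy on the order of ħc/Δx, which for small enough Δx exceeds the infaller's own rest energy.

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 299_792_458          # speed of light, m/s

def probe_energy_j(dx_m: float) -> float:
    """Order-of-magnitude photon energy needed to resolve a position to dx."""
    return HBAR * C / dx_m

mass_kg = 70.0                   # hypothetical human-sized infaller
rest_energy = mass_kg * C ** 2   # ~6.3e18 J
for dx in (1e-3, 1e-20, 1e-45):  # ever tighter localization near the horizon
    e = probe_energy_j(dx)
    print(f"dx = {dx:6.0e} m -> probe energy {e:8.2e} J "
          f"({e / rest_energy:7.1e} x rest energy)")
```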
Let me try to give a few different perspectives on this...
  • The event horizon is a surface in spacetime. Spacetime doesn't change, it just is. Event horizons don't form, they just are.
  • It's physically meaningless to say that an event horizon forms at a particular time "relative to an outside observer" because of the relativity of simultaneity. You can draw surfaces in spacetime and decree that they represent the "now" and that everything on the surface happens at the same time, but there's more than one way to do it and they're all meaningless. When people say that the event horizon hasn't formed yet, they're probably thinking of the "now" as a constant t in Schwarzschild-like coordinates. If you instead use Eddington–Finkelstein-like coordinates, then the event horizon does form at some particular time "for you".
  • Independently of whether the event horizon "exists now", it is true that you will never see anything cross the event horizon, because by definition it's the boundary of the region of spacetime you'll never see. But it's rather solipsistic to say that something never happens just because you never see it happen. In an exponentially expanding universe like the one we seem to inhabit, there is a cosmological horizon and we will never see anything beyond it, sort of like a black hole turned inside out. If nothing outside that horizon happens, then the universe is a perfect sphere with us at the exact center. Even in special relativity, if you accelerate uniformly forever, there is an event horizon behind you (called a Rindler horizon) and you will never see what happens beyond it, but you don't have the power to prevent that half of the universe from existing just by accelerating away from it. These event horizons behave just like black hole horizons, even emitting Hawking radiation (in the case of uniform acceleration it's called Unruh radiation).
  • Classical systems can only asymptotically approach a ground state (in this case a perfectly spherical hole with no hair), but quantum systems emit a final photon/graviton/whatever and reach the ground state at a finite time. For black holes formed from collapsing stars, I think the time from seeing an "almost collapsed" star to seeing the final photon/graviton is a small fraction of a second, though I really should have a source for that. After that, you have a black hole as surely as you have a hydrogen atom in the ground state. (This is probably related to Susskind's argument in Graeme Bartlett's reply.)
  • Quantum black holes eventually evaporate. In Hawking's original semiclassical treatment, you see the hole finish forming at the same time as you see it finish evaporating (not because they happen at the same time, but because they happen on the same lightlike surface, and the light all stacks up and reaches you at the same time). I'm not sure that picture is accurate, though, in part because of the previous bullet point. -- BenRG (talk) 21:42, 17 January 2016 (UTC)[reply]
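As a concrete illustration of the "frozen" picture flagged in the list above: a clock hovering at radius r outside a non-rotating mass runs slow relative to a distant clock by the factor √(1 − r_s/r), where r_s = 2GM/c² is the Schwarzschild radius, and the factor goes to zero as r approaches r_s. A minimal sketch in Python, assuming a one-solar-mass black hole purely for illustration:

    import math

    # Gravitational time dilation for a static observer at radius r
    # outside a non-rotating (Schwarzschild) black hole.
    G = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.99792458e8     # speed of light, m/s
    M_sun = 1.98892e30   # solar mass, kg (assumed black-hole mass)

    def schwarzschild_radius(mass):
        return 2 * G * mass / c**2

    def dilation_factor(r, mass):
        """d(proper time)/d(coordinate time) for a hovering clock at radius r."""
        rs = schwarzschild_radius(mass)
        if r <= rs:
            raise ValueError("no static observers at or inside the horizon")
        return math.sqrt(1 - rs / r)

    rs = schwarzschild_radius(M_sun)   # ~2.95 km for one solar mass
    for r in (10 * rs, 2 * rs, 1.01 * rs, 1.0001 * rs):
        print(f"r = {r/rs:.4f} r_s: clock rate = {dilation_factor(r, M_sun):.6f}")
    # The rate tends to 0 as r -> r_s, which is why a distant observer
    # sees infall appear to slow down and freeze at the horizon.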
These are good questions. One thing I'd like to point out is we never "see" beyond the event horizon. We can't, with currently accepted physics, meaningfully say anything about what happens beyond the event horizon. We detect black holes by detecting their effects on things outside their event horizons, such as their gravitational effects on other objects. A singularity not "hidden" by an event horizon would be a naked singularity, which is a topic of discussion in theoretical physics, with debate over whether such a thing could actually exist. Also I'll recommend these two videos by PBS Space Time which discuss black holes. You'll need some background knowledge (and there are links to some other videos that may help), but they're intended to be accessible to laypeople. --71.119.131.184 (talk) 06:12, 18 January 2016 (UTC)[reply]

OP here, thanks for all the responses. I still haven't wrapped my head around things, I think I need to read up a lot more to understand your answers! Your answers have been very much appreciated :-) Mike 95.146.213.181 (talk) 18:10, 18 January 2016 (UTC)[reply]

How much has human DNA changed over the centuries

Genetically, how different are we from our ancestors of 10,000 years ago? We would look different due to diet and environment, but were we, DNA-wise, essentially the same as today? I suppose if we go 60,000 years back in time, as we left Africa, we would not see Caucasians or Asians, but what else is new? --Denidi (talk) 22:45, 17 January 2016 (UTC)[reply]

Based on the size of the human genome and the known mutation rate, one expects roughly 1 new mutation in coding regions and ~60 new mutations in non-coding regions (including regulatory sequences) of the human genome per generation. That mutation rate will accumulate noticeable variation over thousands of years. Of course, mutations that prove detrimental will be selected against, so the true number of accumulated mutations may be somewhat lower than a simple generation count would suggest. Dragons flight (talk) 00:07, 18 January 2016 (UTC)[reply]
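To see how those per-generation figures add up over the 10,000-year window asked about, here is a quick arithmetic sketch in Python; the 25-year generation time is an assumption, and selection is ignored:

    # Rough accumulation of new mutations along a single line of descent,
    # using the per-generation figures quoted above.
    years = 10_000
    years_per_generation = 25          # assumed average generation time
    coding_per_gen = 1                 # ~1 new coding mutation per generation
    noncoding_per_gen = 60             # ~60 new non-coding mutations per generation

    generations = years // years_per_generation        # 400 generations
    total = generations * (coding_per_gen + noncoding_per_gen)
    print(f"{generations} generations -> roughly {total:,} new mutations")
    # ~24,400 mutations, which is tiny against a ~3.2-billion-base-pair
    # genome; hence we'd still be essentially the same species.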
What mutations prove detrimental has changed. Denidi implicitly mentioned one factor. We would not see Caucasians or Asians, because light skin (via either of the two light-skin genetic changes) was detrimental in the African sun but beneficial in the European or Asian mid-latitude sun. Within the past century, modern medicine has reduced the lethality of various conditions and diseases. Robert McClenon (talk) 00:38, 18 January 2016 (UTC)[reply]
See Human evolution#Recent and current human evolution, which gives the examples of lactase persistence and resistance to diseases carried by domesticated animals. I suspect another example would be the increasing frequency of short-sightedness, which until a few centuries ago would have been a major disadvantage but since the invention and common availability of spectacles is no longer selected against.-gadfium 00:52, 18 January 2016 (UTC)[reply]
I disagree as to myopia. After division of labor, it was no longer a major disadvantage. It only dictated what occupational role the person could fill. They couldn't hunt. They could perform crafts. In an early literate society, it was possible that the nearsighted person could become a scribe, and being a scribe was a high-status occupation in early literate societies in which literacy was the exception rather than the rule. However, it does illustrate that, in general, technology changes what are harmful conditions. Nearsightedness wasn't one, in a society with division of labor. Robert McClenon (talk) 01:47, 18 January 2016 (UTC)[reply]
They would likely have more body hair. Head lice are supposed to have developed "30,000–110,000" years ago, as a result of humans having lost body hair in most places, leaving an isolated habitat for lice on the head. So, that puts the transition in the 60,000 years ago range you are interested in. StuRat (talk) 05:06, 18 January 2016 (UTC)[reply]

January 18

ASASSN-15lh

Considering what our sources on ASASSN-15lh say, does it mean that if it was in our galaxy, the light from this hypernova would be seen in the northern sky, even if it exploded in the southern sky? If yes, what would its rough intensity be compared to the southern sky? Brandmeistertalk 09:19, 18 January 2016 (UTC)[reply]

Well, what I saw said it would be brighter than the full moon. But the full moon is not apparent on the other side of the world, just as a fully set sun can't be seen on the night side of the Earth. Perhaps you could see some odd lighting on the dark part of the Moon, but otherwise, if the hypernova was below the horizon, you should not see anything. Graeme Bartlett (talk) 09:48, 18 January 2016 (UTC)[reply]
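For a rough sense of scale, apparent brightness follows the standard distance modulus m = M + 5·log10(d/10 pc). A sketch in Python, assuming a peak absolute magnitude of about −23.5 for the hypernova (an illustrative figure) and −12.7 for the full moon, with dust extinction ignored:

    import math

    # Apparent magnitude from absolute magnitude via the distance modulus:
    # m = M + 5*log10(d / 10 pc). Lower (more negative) means brighter.
    def apparent_magnitude(abs_mag, distance_pc):
        return abs_mag + 5 * math.log10(distance_pc / 10)

    M_hypernova = -23.5   # assumed peak absolute magnitude (illustrative)
    full_moon = -12.7     # typical apparent magnitude of the full moon

    for d_pc in (100, 1_000, 10_000, 30_000):   # sample galactic distances
        m = apparent_magnitude(M_hypernova, d_pc)
        note = ("brighter than the full moon" if m < full_moon
                else "dimmer than the full moon")
        print(f"d = {d_pc:>6} pc: m = {m:6.1f} ({note})")
    # Within ~1 kpc it would outshine the full moon; even clear across the
    # Galaxy it would still rival the brightest planets (extinction ignored).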
"If it was in our galaxy" is somewhat meaningless, given that our galaxy is 100–180 light years across. If it was nearby we wouldn't have much time in which to enjoy the spectacle.--Shantavira|feed me 10:14, 18 January 2016 (UTC)[reply]
Erm, double check your numbers; 100-180 thousand light years. Fgf10 (talk) 10:35, 18 January 2016 (UTC)[reply]
If we see a supernova in our Galaxy it would probably be nearer rather than further, because dust blocks anything that's not close or up in the sticks, astronomically speaking (page 348). Specifically, that link says most of the dust lies in a layer about 100 parsecs (326 light years) thick, and Earth is inside it. The part of the Milky Way that's visible is actually less densely populated with stars than the dark rift of dust running down the middle. This is why it's easier to see a much more distant galaxy through city light pollution than our own Galaxy: because we are not in Andromeda's central plane but see through it at a glancing angle, the line of sight passes through more of the dust-free upper or lower suburbs before it must be stopped by dust. Sagittarian Milky Way (talk) 12:14, 18 January 2016 (UTC)[reply]
Also, you can see some southern stars from the Northern Hemisphere. At the North Pole you cannot see any, but as you approach the equator most of the southern stars will be above the horizon at some time of day. I suppose you are asking because most of the Milky Way is in the southern sky. Graeme Bartlett (talk) 11:07, 18 January 2016 (UTC)[reply]
The Milky Way is half in each hemisphere, as is required for a great circle. Anywhere but the Arctic you should see more than half reasonably well. The galactic center is in the southern hemisphere (Sagittarius), so more than half the naked-eye stars or supernovae (long-term average) likely are in the southern hemisphere. But if by southern sky you mean "invisible or hard to see at middle latitudes" that might not be the case, as mid-northern latitudes can see the direction of the galactic center. It would require a majority of stars or visible Milky Way supernovae to be on the southern side of -40° to -50° declination or so, and that might not be possible no matter how amazing the sky that far south is. Sagittarian Milky Way (talk) 12:14, 18 January 2016 (UTC)[reply]
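Whether a given patch of sky is visible at all from a given latitude is simple geometry: from latitude φ (north positive), an object at declination δ rises above an ideal horizon at some time of day whenever δ > φ − 90°. A small sketch in Python, ignoring refraction and local obstructions:

    # Whether a star at declination dec (degrees) ever rises above the
    # horizon for an observer at latitude lat (degrees, north positive).
    # Refraction and local horizon obstructions are ignored.
    def ever_visible(dec, lat):
        if lat >= 0:
            return dec > lat - 90    # northern-hemisphere observer
        return dec < lat + 90        # southern-hemisphere observer

    # Example: the galactic centre sits near declination -29 degrees.
    galactic_centre_dec = -29.0
    for lat in (60, 45, 30, 0):
        print(f"lat {lat:+d}: galactic centre ever visible? "
              f"{ever_visible(galactic_centre_dec, lat)}")
    # Even from 60 N it barely clears the horizon, matching the point above
    # that mid-northern latitudes can see the direction of the galactic centre.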

Cost of Space Missions and Materials

The price of many space missions is available, but not how they arrived at that amount of money. How much goes to advertising? How much do the materials cost? If I plan my own mission, how do I know how much it will cost? And on the same note, where can I find information about the materials that spacecraft are made of? (The questions are for hypothetical mission planning) 77.125.0.41 (talk) 14:41, 18 January 2016 (UTC)[reply]

Don't know what you mean by advertising. This is not generally a commercial product or service. Mars One might be an exception, since they appear to spend most of their funds in marketing themselves. For real programs, go to a project article, for example Space Shuttle program and follow the links. --Denidi (talk) 16:28, 18 January 2016 (UTC)[reply]
Our article on Space advertising covers launch and in-flight/in-space advertising, often a way to bring money into a government space program, not an expense. We don't seem to have any information on private space-launch providers advertising in Earthly sources to attract customers. Rmhermen (talk) 17:43, 18 January 2016 (UTC)[reply]
Indeed, there must be some client-hunting. However, flying to space is such a niche industry, with so few players, that everyone must be aware of the existence of all the other providers and buyers. Some sort of corporate relationship management must exist, though, since there is a series of ancillary providers of products and services. — Preceding unsigned comment added by Denidi (talkcontribs) 18:57, 18 January 2016 (UTC)

CO2 pressure in a soda drink

At what pressure is the CO2 in a carbonated beverage? Does it make sense at all to talk about the CO2 pressure before opening the can/bottle? I ask because the CO2 would be dissolved, and not a gas. --Needadvise (talk) 16:18, 18 January 2016 (UTC)[reply]

It varies depending on the brand and the packaging or dispensing method. Slate notes that 12-week-old, poorly stored plastic bottles can lose 15% of their CO2 compared to an aluminum can packed at the same time and pressure.[23] Fountain sodas can have their pressure manually adjusted. Rmhermen (talk) 17:33, 18 January 2016 (UTC)[reply]
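It does make sense to speak of a pressure before opening: the sealed headspace gas sits at the CO2's equilibrium partial pressure, which Henry's law ties to the dissolved concentration, p = K_H·c. A rough sketch in Python; the carbonation levels and the Henry's-law constant are typical textbook values used purely for illustration:

    # Equilibrium headspace pressure of CO2 over a carbonated drink,
    # via Henry's law: p = K_H * c (valid for dilute solutions).
    K_H = 29.4          # Henry's constant for CO2 in water at 25 C, L*atm/mol (approx.)
    molar_volume = 22.4  # litres of gas per mol at STP (approx.)

    def headspace_pressure(volumes_co2):
        """Pressure (atm) for a drink carbonated to the given 'volumes' of CO2,
        i.e. litres of CO2 gas (at STP) dissolved per litre of liquid."""
        c = volumes_co2 / molar_volume   # dissolved CO2, mol/L
        return K_H * c

    # Typical soft drinks are carbonated to roughly 2.5-4 "volumes".
    for v in (2.5, 3.0, 3.5, 4.0):
        print(f"{v} volumes of CO2 -> ~{headspace_pressure(v):.1f} atm at 25 C")
    # Roughly 3-5 atm at room temperature; cold liquid holds CO2 more
    # readily, so the same drink shows a lower pressure when refrigerated.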

Asian eyes (epicanthic fold) and protection from the sun

Is it true that Asian eyes (with the epicanthic fold and single eyelid) provide greater protection from the sun? I've heard this before, but has there ever been a study indicating a lower incidence of eye-related diseases among East Asians? ScienceApe (talk) 16:27, 18 January 2016 (UTC)[reply]

Epicanthic fold states that the reason is unknown. Some features have no use, so it would be no surprise if no explanation exists. We can speculate that their eyes are more 'squinty' to block out more direct rays of sun. But also we could say that they have more fat to protect against cold weather. These theories could be completely wrong, but they seem to at least have some intuitive merit. --Denidi (talk) 16:33, 18 January 2016 (UTC)[reply]
Sure, it could be due to a lot of things, or maybe nothing (e.g. genetic drift). Maybe sexual selection played a role. Especially in human biology and evolution, we should all be wary of a just-so story. SemanticMantis (talk) 17:35, 18 January 2016 (UTC)[reply]

The specific gravity of urine is typically reported in either g/cm3 or kg/m3. Can anyone tell me which unit of measure is correct for the reference ranges provided in the article? — Preceding unsigned comment added by JohnSnyderDTRRD (talkcontribs) 18:46, 18 January 2016 (UTC)[reply]

Technically, specific gravity doesn't have units. Instead, it's the ratio of the density of a substance to that of a reference substance, usually water. That said, the density of pure water is normally taken as 1 g/cm3, so the specific gravity and the density in g/cm3 are usually numerically identical. (The density of water is 1000 kg/m3, so orders-of-magnitude considerations should probably have led you to rule that out.) - P.S. If you're attempting to use these numbers for anything important, I wouldn't trust the values given on Wikipedia. Get them from a more reliable source, one which you're confident that you can interpret correctly. -- 19:02, 18 January 2016 (UTC) — Preceding unsigned comment added by 160.129.138.186 (talk)
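A tiny sketch of that unit bookkeeping in Python (the sample value 1.020 is illustrative only):

    # Specific gravity is dimensionless: SG = density / density of water.
    # Since water is ~1 g/cm^3 (= 1000 kg/m^3), SG is numerically equal
    # to the density in g/cm^3 and 1000x smaller than it in kg/m^3.
    WATER_G_PER_CM3 = 1.0
    WATER_KG_PER_M3 = 1000.0

    sg = 1.020   # illustrative specific gravity within a typical urine range
    print(f"SG {sg} -> {sg * WATER_G_PER_CM3} g/cm^3 "
          f"= {sg * WATER_KG_PER_M3} kg/m^3")
    # A reference-range figure like '1050' would only make sense in kg/m^3;
    # '1.050' reads as g/cm^3 (or as the bare specific gravity itself).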