
Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 67.169.83.209 (talk) at 02:15, 21 January 2014 (→‎Food chemistry, eggs, milk, and salt). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the science section of the Wikipedia reference desk.

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


January 15

Are there any antibodies or antigens that can be inherited through genes?

194.114.146.227 (talk) 07:06, 15 January 2014 (UTC)[reply]

When people refer to "antibodies" they are typically referring specifically to proteins affected by V(D)J recombination, which cannot be inherited genetically (although a mother can pass some antibodies directly to her infant child). However, there are other components of the immune system, such as the complement system, which are heritable. In brief, the complement system is a set of genes that are pre-encoded to recognize and combat certain pathogenic microorganisms, although they still work better if accompanied by a typical antibody response. See also Pattern recognition receptor and antigen. Someguy1221 (talk) 08:38, 15 January 2014 (UTC)[reply]

Thank you for the answer. 5.28.161.105 (talk) 11:31, 15 January 2014 (UTC)[reply]

Another interesting feature, which is heritable but prone to variation, is human leukocyte antigen (which is the human version of the major histocompatibility complex). —Cyclonenim | Chat  10:37, 16 January 2014 (UTC)[reply]

Ampere vs. Coulomb

Why does the Système International use the Ampere, the unit of Current, as a base unit instead of the Coulomb, the unit of Charge? I would think that charge would be the basic fundamental quantity, comparable to length, time and mass, while current is simply charge/time. Inkan1969 (talk) 16:33, 15 January 2014 (UTC)[reply]

Arguably, electric charge is a more derived property and is more difficult to directly measure than electric current. Besides, you can have electric current even if no charge moves. S.I. defines electric current in terms of its mechanical effect, not in terms of its constituent motion of charge. Nimur (talk) 16:45, 15 January 2014 (UTC)[reply]
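Nimur's point about the mechanical definition can be made concrete. The (pre-2019) SI definition of the ampere was stated in terms of the force between two long parallel wires: 1 A in each wire, 1 m apart, gives exactly 2×10⁻⁷ N per metre of wire. A quick sketch of that arithmetic (function and variable names are illustrative):

```python
import math

def force_per_length(i1, i2, d):
    """Force per unit length (N/m) between two long parallel wires
    carrying currents i1, i2 (in amperes) separated by d (in metres),
    from the Ampere force law F/L = mu0 * i1 * i2 / (2 * pi * d)."""
    mu0 = 4 * math.pi * 1e-7  # vacuum permeability, N/A^2
    return mu0 * i1 * i2 / (2 * math.pi * d)

# The classic definition: 1 A in each of two wires 1 m apart
# yields exactly 2e-7 newtons per metre of wire.
print(force_per_length(1, 1, 1))  # 2e-07
```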
Thank you for the answer.Inkan1969 (talk) 17:08, 15 January 2014 (UTC)[reply]
Inkan1969's question is a good one. I have often pondered the same thing. The formal definitions of the Ampere and the Coulomb establish that the Ampere is the fundamental unit, and the Coulomb is the derived unit. However, I imagine that it is easier for a young student to comprehend electric charge (the Coulomb) first; and then to comprehend electric current (the Ampere). I'm not a teacher, but if I were a teacher of physics to teenagers I would first introduce the concept of electronic charge and the Coulomb, and secondly introduce the concept of electric current and the Ampere and the properties associated with an electric current. Dolphin (t) 05:16, 16 January 2014 (UTC)[reply]
Which is considered the base unit may change with the proposed redefinition of SI base units. The "why" is probably determined more by measurability considerations than by which could be considered more fundamental. —Quondum 07:14, 16 January 2014 (UTC)[reply]
Yes - having worked in a metrology lab at one point I can confirm that the choice of base units has very little to do with the ease with which the concepts can be taught and is only somewhat related to how fundamental they are as concepts (basically you want to reduce the interdependence of the base units as much as practical, which does lead one towards more fundamental units in general). The main consideration is indeed measurability - specifically minimising the measurement uncertainty and ensuring the measured value is stable over time and between different laboratories (which is why there is so much effort going on to redefine the kilogram as something based on fundamental constants, rather than as the mass of a block of platinum-iridium in Paris). Equisetum (talk | contributions) 14:01, 16 January 2014 (UTC)[reply]
It would be logically more reasonable if the kilogram could be defined by counting some number of atoms of some particular isotope - and charge be defined by counting the number of excess electrons present. This would reduce those units to simple integer numbers with essentially perfect and unchanging precision. We've already done that with the definition of a second ("the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom"). The problem is in getting the required precision for normal use - and the theoretical reproducibility. Making a 1 kg mass out of some specific number of 100% pure, single-isotope atoms is a lot harder than making a lump of metal that's more or less the same weight as some other lump of metal. Similar problems exist with dishing out some exact number of electrons to make a standard charge. So in many ways, we're constrained to definitions of units that are practically realizable - and that's more of a technological limitation than it is one of picking the best set of units and definitions conceptually. SteveBaker (talk) 14:36, 16 January 2014 (UTC)[reply]
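The counting approach described above can be put into rough numbers (a back-of-envelope sketch; the constants are standard CODATA values, and silicon-28 is chosen because it is the isotope actually used in atom-counting kilogram experiments):

```python
# Back-of-envelope: how many particles would "counting" definitions involve?
N_A = 6.02214076e23      # Avogadro constant, 1/mol
m_si28 = 27.9769265e-3   # molar mass of silicon-28, kg/mol
e = 1.602176634e-19      # elementary charge, C

atoms_per_kg = N_A / m_si28     # atoms in 1 kg of pure silicon-28
electrons_per_coulomb = 1 / e   # excess electrons making up 1 coulomb

print(f"{atoms_per_kg:.3e} atoms in 1 kg of Si-28")        # ~2.15e25
print(f"{electrons_per_coulomb:.3e} electrons per coulomb") # ~6.24e18
```

Counting ~10²⁵ individual atoms without error is exactly the precision problem the comment describes.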

What is the scientific basis for the claim that studying individually is better than studying in a group?

What is the scientific basis for the claim that studying individually is better than studying in a group? And under what conditions is this so? 140.254.227.120 (talk) 16:45, 15 January 2014 (UTC)[reply]

Who makes this claim, and in what context? It seems like too much of a generalisation to make any 'scientific' assertions one way or another. AndyTheGrump (talk) 16:52, 15 January 2014 (UTC)[reply]
But a study has shown that that is the case! I just can't seem to find the actual article again. I swear it's not a dream! :( 140.254.227.120 (talk) 17:08, 15 January 2014 (UTC)[reply]
I was hoping I could find some support or doubts on that study, since scientific studies are usually tentative. 140.254.227.120 (talk) 17:10, 15 January 2014 (UTC)[reply]
When an article in the media starts with "A recent study shows", it's best to take it with a giant grain of salt. The p-value threshold for "significance" was deliberately set as a low bar, so that studies would pick up things worthy of further examination - NOT at all to determine whether they are actually real or true. To do the latter, you have to independently replicate the results a few times. Actually, having a quick look, the Misunderstandings section of our p value article seems to cover it pretty well. Vespine (talk) 21:53, 15 January 2014 (UTC)[reply]
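The point that p < 0.05 only flags candidates, rather than establishing truth, is easy to see by simulation: when there is no real effect at all, about 5% of experiments still come out "significant". A minimal sketch (assuming unit-variance data and a naive z-test; the numbers are illustrative):

```python
import random

def one_null_experiment(n=200):
    """Two groups drawn from the SAME distribution (i.e. no real effect).
    Returns True if a naive z-test on the difference of means would
    still call the result 'significant' at p < 0.05 (two-sided)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5           # std error of the difference, assuming unit variance
    return abs(diff / se) > 1.96  # 1.96 = two-sided 5% threshold

random.seed(1)
trials = 2000
rate = sum(one_null_experiment() for _ in range(trials)) / trials
print(rate)  # close to 0.05: "significant" findings despite zero real effect
```

This is why a single unreplicated study clearing the 0.05 bar tells you little on its own.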
I could certainly see both advantages and disadvantages to group study sessions:
Advantages: others can explain things to you which might be unclear from the lectures and/or book and/or labs.
Disadvantages: not tailored to your own weaknesses. Presumably you will study more where you know you are deficient.
In other cases, it might depend on the study group. One that encourages you to study when you want to stop would help, while a group that is always goofing off is not at all helpful. Also, I might apply the same rule I apply to learning chess, where I think it's best to play those who are just slightly better than you. They have something to teach you, yet are not too advanced to understand. Also, real experts often don't want to bother explaining "the obvious". StuRat (talk) 22:14, 15 January 2014 (UTC)[reply]
As a teacher I would recommend to students that they do both. Their benefits would be complementary. HiLo48 (talk) 22:19, 15 January 2014 (UTC)[reply]
Citing the anonymous authority of a study someone might remember does not constitute a scientific approach. The result of such a study would itself be qualified by whether it was done by an individual or a group. A scientific method of studying the claim would be to design and carry out a Blind experiment that removes the influence of preconceived notions. This falls within the article about Educational research and a hypothesis that why one study method is better than another falls under Educational psychology. 84.209.89.214 (talk) 22:26, 15 January 2014 (UTC)[reply]
The trouble with this sort of study, for things that are as individual as study methods is that the study can usually only conclude that "on average" one method is better than the other for the population studied. This might just mean that method A is better for 40% of the study sample, for 30% of people it makes no detectable difference, and for 30% of people method B is better. Interpretation is further complicated by the fact that you can't guarantee, without close reading of the relevant paper, that the study is sampling a population of which you are a member - maybe the study was on humanities students, and you are a physics student. The results of these studies are potentially useful for schools and education authorities who need to find the approaches which are effective for the maximum number of people while being cheap enough to implement, but as an individual you are much better off trying different techniques and seeing which work best for you (ideally in a semi-formal manner to remove as many biases as possible). Equisetum (talk | contributions) 12:15, 16 January 2014 (UTC)[reply]
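The 40/30/30 illustration above can be simulated directly: a population where method A helps some people and method B helps others can still show "A is better on average". A sketch with hypothetical effect sizes (all numbers invented for illustration):

```python
import random

random.seed(0)

def simulate_population(size=10000):
    """Each person has a true per-person effect of method A minus method B.
    40% are helped more by A, 30% see no difference, 30% are helped more
    by B -- the hypothetical split from the comment above."""
    effects = []
    for _ in range(size):
        r = random.random()
        if r < 0.4:
            effects.append(+5)   # A better for this person
        elif r < 0.7:
            effects.append(0)    # no detectable difference
        else:
            effects.append(-3)   # B better for this person
    return effects

effects = simulate_population()
average = sum(effects) / len(effects)
share_b_better = sum(e < 0 for e in effects) / len(effects)
print(f"average A-minus-B effect: {average:.2f}")                     # positive: "A wins"
print(f"share of people for whom B is better: {share_b_better:.0%}")  # yet ~30%
```

The study's headline ("A beats B on average") is true, while B remains the better choice for nearly a third of individuals.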

January 16

Science behind Makara Jyothi

There lies a big mystery behind the Makara Jyothi (a light that glows on the day of Makara Sankranti near the Sabarimala hills, Kerala, India). No proper scientific explanation has yet been given. Most people believe it is a divine light, while many criticise it as an 'artificial light created to cheat people'. Please solve the mystery and help Wikipedia's users. Kindly use the following links to learn more about it: http://sinosh.wordpress.com/2008/08/26/makarajyothy/ http://www.spiderkerala.net/resources/4618-Divinity-Makarajyothi-questioned-Is.aspx http://ayyappadevotionalsongs.blog.com/2010/12/01/the-science-behind-makara-jyothi-in-sabarimala/ Makara_Jyothi — Preceding unsigned comment added by Kathir kishan (talkcontribs) 12:57, 16 January 2014 (UTC)[reply]

I see no mystery. As stated in the article (and ignoring the blog sites you linked to, where anyone can write anything) Makara Jyothi is a star.--Shantavira|feed me 13:33, 16 January 2014 (UTC)[reply]
Indeed. According to our article on the star Sirius (technically, it's a binary star): "The star is referred as Makarajyoti in Malayalam and has religious significance to the pilgrim center Sabarimala." - which makes sense because Sirius is the brightest star in the sky (after the sun, of course). This star has many names in many cultures - and many of them give it religious significance and time ceremonies to its rising and setting at particular days of the year. So there is really no mystery to resolve here. A complicating factor for people writing about this ceremony is that there is an entirely different ceremony carried out a few miles away called Makaravilakku which also entails worshipping Sirius - as well as lighting massive fires to celebrate various aspects of the star. These might very well be visible over the horizon in the hilly countryside of that part of the world and might add to the whole mystical aspects of it all. SteveBaker (talk) 14:24, 16 January 2014 (UTC)[reply]

Books like "Summa Technologiae"

Dear Gentlemen.

I am looking for more books like "Summa Technologiae" from Stanislaw Lem. Literature that deals with the possible far future and the technologies of such eras.--92.105.189.138 (talk) 14:52, 16 January 2014 (UTC)[reply]

Depends on how far is "far". The Diamond Age is a fun book set in the next ~80 years or so, and has a lot of interesting stuff about how nanotechnology interacts with society. Also a good adventure. If you want much farther future, one classic is Isaac Asimov's Foundation_(novel), as well as its sequels. --But really, in my opinion, most of the best Science fiction isn't really about the specifics of technology, it's about human nature and societies. For example, the classic A_Canticle_for_Leibowitz covers thousands of years, but has very little to say about specific technology. SemanticMantis (talk) 15:02, 16 January 2014 (UTC)[reply]
Diamond Age is a great choice - although it focusses entirely on nanotech. I'd like to recommend another book by the same author: Anathem - which describes a world where technology causes horrific boom/bust cycles and the future is largely held by a group of geek/monks who largely eschew technology and who (in a sense) hold the world together by virtue of nothing more than clear thinking and dedication....and it's also a great story! It's rather hard to come up with books where future technology is actually a good thing rather than causing problems. SteveBaker (talk) 15:19, 16 January 2014 (UTC)[reply]
From your description, Anathem sounds exactly like Star Wars!165.212.189.187 (talk) 15:31, 16 January 2014 (UTC)[reply]
Really? Well, the books couldn't be more different. The monks of Anathem are pacifists and work passively and quietly in the background to make change. I don't think the Jedi of StarWars are much like monks and the rest of the universe seems to be at a more or less static technological level rather than going through boom/bust cycles. The monks of Anathem are reclusive in the extreme - with some sects cutting themselves off from the outside world (and even other monks) for 1000 years at a time. Anyway - trust me, Anathem and StarWars are about as opposite as any two SciFi universes that I could imagine! SteveBaker (talk) 15:40, 16 January 2014 (UTC)[reply]
Jedi don't use guns, and do a lot of "behind the scenes" social work. "Reclusive in the extreme"; you mean like Yoda? 165.212.189.187 (talk) 19:22, 16 January 2014 (UTC)[reply]
Death Star: boom->BUST!165.212.189.187 (talk) 19:25, 16 January 2014 (UTC)[reply]
For an example of where technology turns out to be a very good thing, see Asimov's short story The_Evitable_Conflict. He was a rather optimistic guy :) We don't have a list of techno utopian novels, but there are some leads in Technological_utopianism. In contrast, there are tons of entries at List_of_dystopian_literature, many of which are broadly science fiction, and discuss technology to some extent. SemanticMantis (talk) 15:45, 16 January 2014 (UTC)[reply]
Technology has reached an "interesting" level in Iain M. Banks's Culture series. His writing is also more literary than most of the classical hard-core SF writers, so it might be a better match for Lem's books than, say, Asimov or (shudder) David Weber. The Player of Games is often recommended as an entry to the series, and is excellent. --Stephan Schulz (talk) 07:43, 17 January 2014 (UTC)[reply]

Extrapolation of energy components and the end of scarcity

1. How reliable are the dotted line projections on this graph?

2. What is the proper way to extrapolate the sum total of the components of the graph?

3. What is the proper way to extrapolate the individual components of the graph?

4. When renewables overtake fossil fuels, does that mean the NAIRU can be reduced? Tim AFS (talk) 15:17, 16 January 2014 (UTC)[reply]

  1. How reliable are the dotted line projections on this graph? - they are highly susceptible to political change. If the major politicians of the world truly understood the global climate change problem - then oil, coal and gas ought to start to head downwards - but if they continue to ignore the problem then at least natural gas consumption will increase. The solar power increase is a technology-driven thing, and that's somewhat unknowable. Nuclear power is a matter of public opinion more than science - and one event like Fukushima is enough to set back development for another decade until it fades from public memory.
  2. What is the proper way to extrapolate the sum total of the components of the graph? - just plot a new graph that is the sum of the others with an extrapolation curve that is the sum of the other extrapolation curves.
  3. What is the proper way to extrapolate the individual components of the graph? - In the absence of any new knowledge of the future (eg some new technology that's due to come on-stream at some particular date), there are standard mathematical techniques as outlined in extrapolation. But in fields like this, it's common for people who are expert in the field to point out those kinds of things that will alter the mathematical expectation in some way. For example, we think we have a fair idea about how much oil there is left in the ground and what the cost and difficulty that extracting it would entail - and that might cause the graph to dip downwards in the future, even though the demand for the stuff might continue unabated.
  4. When renewables overtake fossil fuels, does that mean the NAIRU can be reduced? - that's a tough call. I don't have a good answer for you. But beware though - the vertical axis is a logarithmic one - it tends to make renewables look much more important than they really are and it strongly emphasizes change where there may not be much - and de-emphasizes drastic changes by turning exponential growth into a more gentle curve. This graph makes it look like hydro and nuclear are a huge chunk of the total energy supply - when in reality there is ten times less hydro power than there is oil - and thirty times less than all fossil fuels put together! Wind contributes only about 1% as much as oil and 0.3% as much as all fossil fuels combined!
SteveBaker (talk) 15:28, 16 January 2014 (UTC)[reply]
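The procedure in points 2 and 3 can be sketched numerically: fit each component separately (here with a least-squares exponential via a log-linear fit), extrapolate each one, and then sum the extrapolations. The data below is made up purely to illustrate the mechanics, not real energy figures:

```python
import math

def fit_exponential(years, values):
    """Least-squares fit of v = a * exp(b * t), done as linear
    regression on log(v); returns a function of the year t."""
    n = len(years)
    logs = [math.log(v) for v in values]
    mean_t = sum(years) / n
    mean_y = sum(logs) / n
    b = sum((t - mean_t) * (y - mean_y) for t, y in zip(years, logs)) / \
        sum((t - mean_t) ** 2 for t in years)
    a = math.exp(mean_y - b * mean_t)
    return lambda t: a * math.exp(b * t)

# Made-up historical series for two components (illustrative only)
years = [0, 1, 2, 3, 4]
solar = [1.0, 1.5, 2.2, 3.4, 5.1]   # fast growth
hydro = [40, 41, 41, 42, 43]        # near-flat

models = [fit_exponential(years, series) for series in (solar, hydro)]
# Extrapolate each component to year 8, THEN sum -- the sum of the
# fits, rather than a single fit of the summed data.
total_in_year_8 = sum(m(8) for m in models)
print(round(total_in_year_8, 1))
```

Note the caveat in point 3 still applies: a mechanical fit like this ignores any outside knowledge (resource limits, new technology) that an expert would fold in.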
This is admittedly un-sourced speculation, but I don't really see how the projection for solar is remotely plausible. There are some pretty serious technical challenges with increasing the efficiency of solar cells, and I don't really see how the sum total of energy produced by solar power could be expected to increase linearly without relying on an (as yet non-existent) major technological breakthrough. It seems as though this chart conflates two issues, especially relevant to the solar example: 1) increased efficiency of solar cells and 2) increased coverage/usage of solar cells. Even if you get 100% coverage, you're still going to hit a ceiling unless you also develop better photovoltaics. (+)H3N-Protein\Chemist-CO2(-) 17:15, 16 January 2014 (UTC)[reply]
Oh - it's *FAR* worse than that. The vertical axis is on a log scale - so they are predicting exponential growth at an insane rate. I don't pretend to know whether those graphs are correct - and that's not really what we're being asked here. SteveBaker (talk) 20:00, 16 January 2014 (UTC)[reply]
The historical exponential rates depicted are not insane, and while there are always good reasons that no growth lasts forever, the potential for moving out into the Solar System has convinced me that the projections individually and collectively are sane and accurate enough for my purposes, which is depicting the changes in these production levels. Tim AFS (talk) 01:55, 18 January 2014 (UTC)[reply]
There is no technical reason that one can't grow solar power 100-fold solely by deploying 100-fold more capacity even without improvements in efficiency. As of 2010, you could increase US solar power 300-fold just by putting solar panels on every rooftop. That's not even considering that there is enough sun in the desert of the American southwest to meet all of the world's electricity demand. Now putting a solar panel on every roof or blanketing the desert wouldn't be cost-effective right now, but it doesn't actually require more efficient solar cells, just cheaper ones. Solar PV has a long history of falling prices, and with the recent expansion of Chinese production, that trend has actually accelerated. A solar module today costs only about 25% of what it cost five years ago, which is fueling a lot of growth, and if prices continue to fall then projecting solar as a significant component of the global energy mix might not be unreasonable. Dragons flight (talk) 18:22, 16 January 2014 (UTC)[reply]
The problem is that you can only collect energy from solar panels during daylight - and perhaps only usefully collect in summer at some latitudes. In order to make practical use of all of that energy, you need a means to store it efficiently - and that does not yet exist on anything like the scale required here. But in any case, that same argument could be applied to wind, tidal, hydroelectric, geothermal and nuclear - yet those graphs are not anywhere close to that crazily steep. SteveBaker (talk) 20:00, 16 January 2014 (UTC)[reply]
Thank you so much! Please see Sustainability#Energy and [1], [2], and [3]. Tim AFS (talk) 22:48, 16 January 2014 (UTC)[reply]
I should have also mentioned that solar and wind are much less expensive than hydroelectric and nuclear, and the former are falling while the latter rise in cost. Tim AFS (talk) 01:58, 18 January 2014 (UTC)[reply]
How much would it cost to subsidize renewables overtaking fossil in half the time, compared to flood and crop insurance costs? Tim AFS (talk) 22:48, 16 January 2014 (UTC)[reply]
It is a hard thing to estimate, but one can look at the difference between solar / wind costs and coal / gas costs. Based on current costs, replacing fossil fuels over the next 30 years would add roughly a $100 billion / yr to the cost of electricity. The hope is that as renewables continue to scale up, the added cost will come down. Global insured losses due to natural disasters (all types) average $20 - 50 billion / yr (though some years can top $100 billion). At present, it is probably cheaper to insure losses than to fund the wholesale conversion to renewables, though that probably won't always be the case. Dragons flight (talk) 22:13, 17 January 2014 (UTC)[reply]
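The order-of-magnitude estimate above can be reproduced as simple arithmetic. The inputs below are illustrative assumptions chosen only to show the shape of the calculation (a cost premium per kWh applied to the fossil-fired share of generation), not sourced figures:

```python
# Illustrative back-of-envelope (assumed inputs, not sourced data):
fossil_electricity_twh = 15000  # assumed fossil-fired generation per year, TWh
premium_per_kwh = 0.007         # assumed renewable cost premium, $/kWh

kwh = fossil_electricity_twh * 1e9          # TWh -> kWh
added_cost_per_year = kwh * premium_per_kwh
print(f"${added_cost_per_year / 1e9:.0f} billion per year")  # $105 billion per year
```

With these assumptions the premium lands near the ~$100 billion/yr figure quoted above; shrinking the premium (as renewable costs fall) shrinks the total proportionally.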
@Dragons flight: Thank you! Does that include government flood insurance plans and self-insurance costs? Please let me ask the same question a different way: What proportion of the world's $650 billion per year of fossil subsidies should be moved to renewables in order to minimize total energy, flood mitigation, and food spending? I am convinced that the answer is on the order of 85% because raising the cost of transportation fuel raises the cost of food, but only in a very small proportion. Tim AFS (talk) 01:55, 18 January 2014 (UTC)[reply]
  • Was this graphic created by a bot? The "Author" is listed as a redlink user:AI.Graphic, here [4].
    Note that exponential growth looks linear on a semi-log plot, while linear growth looks logarithmic.
    Anyway, I looked at the source and found this 45 page pdf [5], and this interactive charting tool [6]. There doesn't seem to be a similar chart in the pdf, nor can I get the tool to display the info broken down that way. The tool does show an exponential increase in all renewable sources over the time period 1967-2012. I did a few spot checks against the tabular data in the pdf, and it looks OK so far.
What I can't find anywhere in either document is projections, let alone exponential extrapolation. Naively (without getting into geopolitics, etc.), I'd say the use of exponential extrapolation here is unwarranted unless further justification is provided (which would constitute WP:OR, unless you find other WP:RS that specifically talk about extrapolating this stuff). Linear extrapolation would be much more conservative, but the graph is probably most informative without any projections at all. As it stands, it runs the risk of misrepresenting the source data. In summary, I think the best course of action is to keep only historical data on this plot, and use other sources for projections (there are indeed hundreds of such sources, but that's a different can of worms). SemanticMantis (talk) 21:19, 16 January 2014 (UTC)[reply]
I do not know who created the graph. Tim AFS (talk) 22:50, 16 January 2014 (UTC)[reply]
Sorry, I thought you might know since it has so few links, and two are from you. I found that an IP editor claimed authorship here [7], but it would probably be difficult to track them down, if they are only associated with a redlink user and an IP. Anyway, I stand by my claim that the projections in the graph are generally unreliable, and should not be included in an article unless further justification and references are given. SemanticMantis (talk) 23:16, 16 January 2014 (UTC)[reply]
Well, in nature most growth is exponential unless it hits a snag. Much the same is true in economics - notice how we always talk about GDP growth in percent - a constant 2% growth rate results in an exponential curve. More realistically, most of these "exponential curves" are probably the lower half of a logistic curve, but it's generally hard to find out where in the curve we are. In the diagram, solar, geothermal and wind seem to match exponential curves very well for the last few years (the solid parts of the curve). So yes, the exponential growth will stop (after all, the sun puts out only 3.846×10^26 W, so that's a hard upper limit on the maximum solar and wind can provide). But it is very hard to say where the exponential growth will stop. --Stephan Schulz (talk) 12:26, 17 January 2014 (UTC)[reply]
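The "lower half of a logistic curve" point is easy to see numerically: early on, a logistic curve and a pure exponential with the same growth rate are nearly indistinguishable, and only later does the logistic flatten toward its ceiling. A sketch with illustrative parameters:

```python
import math

def logistic(t, capacity=1000.0, rate=0.5, x0=1.0):
    """Standard logistic growth: starts at x0, grows at `rate`,
    saturates at `capacity`."""
    return capacity / (1 + (capacity / x0 - 1) * math.exp(-rate * t))

def exponential(t, rate=0.5, x0=1.0):
    return x0 * math.exp(rate * t)

for t in (0, 4, 8, 12, 16):
    print(t, round(logistic(t), 1), round(exponential(t), 1))
# Early on the two track each other closely; the logistic then
# flattens toward its capacity while the exponential keeps climbing.
```

So a few years of clean exponential fit (the solid parts of the curves) cannot, by itself, tell you how far below the ceiling you are.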

Animal mating

During copulation, is there a biological mechanism that helps guide the semen to the appropriate external orifice of the female without entering the WRONG orifice (namely, the anus)? 140.254.227.253 (talk) 18:06, 16 January 2014 (UTC)[reply]

I'm talking about mammals here. It doesn't have to be specifically humans, since I've observed dogs and cats engage in copulatory acts, and apparently, the male dog's penis somehow finds its way to the correct orifice. 140.254.227.253 (talk) 18:10, 16 January 2014 (UTC)[reply]
I will answer for non-human mammals. There is not one single mechanism, but rather a suite of physical adaptations and evolutionary processes at play. In simplest terms, any individuals that don't successfully copulate cannot pass on their genes and traits to the next generation, so at a first approximation, we could say that a large percentage of unsuccessful copulation is maladaptive, and such individuals will tend to die out/go extinct (there are some caveats here). Have a look at Animal_sexual_behaviour, Reproductive_system#Mammals and mate choice for the general background. It's a bit unhelpful, but for most mammals, successful copulation by delivering semen to a vagina is simply instinct, as well as a path of least resistance. We do have an article Homosexual_behavior_in_animals, but note that many of those examples don't involve penis-in-anus copulation. SemanticMantis (talk) 19:00, 16 January 2014 (UTC)[reply]
For the caveats, there are ways that non-reproductive animals can pass on their genes and traits. In particular all ant species and many bee species have sterile workers, that still can contribute to evolution and natural selection. This evolution of sterility was confusing even to Darwin (Eusociality#Paradox), but it can be explained through haplodiploidy and kin selection, but that's a bit off-topic to your question. SemanticMantis (talk) 19:00, 16 January 2014 (UTC)[reply]
That makes sense. Instinctive behaviors are much easier to explain in nonhumans than in humans. I merely asked this question out of curiosity, based on the sex positions article. One might wonder how oral sex and anal sex evolved, when those are not procreative acts. One might also wonder how a male and female can engage in vaginal sex in so many different sex positions, or whether there is a quick-and-easy, intuitive, instinctive way of impregnation. 140.254.227.55 (talk) 21:28, 16 January 2014 (UTC)[reply]
The simple (but again not that informative) explanation for oral and anal sex in humans is "because humans are creative, there is no big cost, and some people enjoy those sex acts" -- But the big thing to realize is that, though sex is the sole method of reproduction in mammals, it can also serve other purposes as well, e.g. pair bonding. I recommend "Why is Sex Fun? The Evolution of Human Sexuality" by Jared Diamond, google books here [8]. I haven't read it yet, but the author is generally highly regarded, and the table of contents indicates that it discusses many of your questions. SemanticMantis (talk) 22:01, 16 January 2014 (UTC)[reply]
Cool. I've heard of Jared Diamond before on PBS, wherein he talks about Guns, Germs, and Steel. 140.254.227.55 (talk) 22:15, 16 January 2014 (UTC)[reply]
By the way, is human sexual foreplay required in successful copulation (that is, erection), or is that only for intimacy? 140.254.227.55 (talk) 21:55, 16 January 2014 (UTC)[reply]
No it isn't necessary but according to this it may improve the chances of conception. If you watch other primates copulating they don't usually bother with foreplay at all. Richerman (talk) 22:13, 16 January 2014 (UTC)[reply]

Marking of university exams

How are university exams in the UK marked compared to school GCSE and A-Level exams, where examiners simply follow a mark scheme? I.e. without a mark scheme, how is the allocation of marks decided? Clover345 (talk) 18:46, 16 January 2014 (UTC)[reply]

British universities use a system of internal moderation followed by assessment by External examiners. The systems used are overseen by The Quality Assurance Agency for Higher Education (QAA) who ensure the procedures used are fair and equitable. See: [9] Richerman (talk) 22:55, 16 January 2014 (UTC)[reply]

Microwaveable meal storage

How long can a microwaveable ready meal be safely stored in a fridge after it has been opened but not microwaved? 82.40.46.182 (talk) 19:11, 16 January 2014 (UTC)[reply]

It would certainly depend on whether you mean *opening the box* or *removing the plastic screen-cover*, the environmental conditions in the kitchen or cooking area, the type of preserved food, and the use-by date printed on the food package. 140.254.227.55 (talk) 21:03, 16 January 2014 (UTC)[reply]
The plastic screen cover opened and stored in a fridge. 82.40.46.182 (talk) 23:47, 16 January 2014 (UTC)[reply]

Delaying heat-death by pausing stars?

Is it conceivable that a star could be stopped and later restarted in the year 3000? --78.148.110.69 (talk) 22:39, 16 January 2014 (UTC)[reply]

"We don't answer requests for opinions, predictions or debate". AndyTheGrump (talk) 22:40, 16 January 2014 (UTC)[reply]
If you mean conceivable using human technology, the answer is most definitely no. Human technology cannot even control a terrestrial weather system or tsunami, which involve the movement of a tiny fraction of Earth's mass, which in turn is 1/300,000 the Sun's mass. --Bowlhover (talk) 23:59, 16 January 2014 (UTC)[reply]
There are many reasons why this is impossible - but one of the more serious is that stars are immensely heavy - to the point that they would collapse into a tiny dense core (a neutron star - or even a black hole). What prevents that from happening is the radiation pressure of all of the light coming from the star. So if you could somehow prevent it from consuming fuel by nuclear fusion, it would immediately collapse under its own weight. SteveBaker (talk) 03:19, 17 January 2014 (UTC)[reply]

Thanks guys. Especially you, Grump. <3 78.148.110.69 (talk) 21:57, 17 January 2014 (UTC)[reply]

Of course we will be able to turn stars on and off. After all, reality is only a projection of the "real" universe, so once we are able to pull back the curtain, anything will be possible. I'm only half kidding. The more we find out about how the universe works, the stranger it becomes. It might be that we have it all wrong and one day some wise guy will show us that if you just look at the world sideways ... — Preceding unsigned comment added by 50.43.12.61 (talk) 07:16, 20 January 2014 (UTC)[reply]

January 17

Is celiac disease a genetic disorder? It's not listed in List of genetic disorders.Fundon1 (talk) 00:35, 17 January 2014 (UTC)[reply]

Wikipedia has an article you just linked to, which answers that exact question. --Jayron32 03:04, 17 January 2014 (UTC)[reply]
Let me add that Wikipedia's "list" articles are usually not constructed in a very systematic way. Basically editors add whatever they think of. Looie496 (talk) 03:08, 17 January 2014 (UTC)[reply]

Food capsules of the future?

Is it currently possible to compress all the things an adult human needs from food into a capsule to replace eating regular meals? — Preceding unsigned comment added by 174.65.117.118 (talk) 00:38, 17 January 2014 (UTC)[reply]

The closest is Nutraloaf which when fed to prisoners instigates riots.Fundon1 (talk) 00:44, 17 January 2014 (UTC)[reply]
One of the things that humans need from food is the pleasure of eating, often communally, and especially in celebration. Can't imagine a capsule delivering that. HiLo48 (talk) 00:59, 17 January 2014 (UTC)[reply]
If by capsule you mean something very small, the answer is no. The densest form of calories is pure fat, such as vegetable oil, and it takes about 2 cups of vegetable oil to provide as many calories as an ordinary adult needs for a day. If you substitute things that provide nutrients such as protein, vitamins, etc, this will get considerably larger. Looie496 (talk) 02:45, 17 January 2014 (UTC)[reply]
You also need a fair quantity of dietary fiber, and you can't get much of that into a capsule.--Shantavira|feed me 08:27, 17 January 2014 (UTC)[reply]
Possibly in a horse pill size... --Auric talk 17:27, 18 January 2014 (UTC)[reply]
I recall consuming "Space Food Sticks" as a future-astronaut-wannabe; this was supposed to be that sort of thing. ~E:71.20.250.51 (talk) 17:05, 19 January 2014 (UTC)[reply]

Blonde pubic hair and eyebrows

is it possible that a white person have blonde pubic hair and blonde eyebrows? — Preceding unsigned comment added by 70.31.22.79 (talk) 01:52, 17 January 2014 (UTC)[reply]

Sure. --Jayron32 03:03, 17 January 2014 (UTC)[reply]
Sadly, our article Human hair color doesn't mention pubic hair. But Jayron's original research is accurate. HiLo48 (talk) 03:07, 17 January 2014 (UTC)[reply]

There's a well known quote from the film Diamonds are Forever which should give you a clue:

  • Bond: I tend to notice little things like that - whether a girl is a blonde or a brunette...
  • Tiffany: And which do you prefer?
  • Bond: Well, as long as the collar and cuffs match...

Also, if you look at the Albinism page you will notice that the albinistic people in the pictures have white eyebrows - not quite blonde but the principle is the same. Richerman (talk) 10:26, 17 January 2014 (UTC)[reply]


Actually, hair is never blonde, only blond. People can be blonde, if they're female. But I don't suppose that was your point. --Trovatore (talk) 10:53, 17 January 2014 (UTC) [reply]

Not in British English see:[10]. Richerman (talk) 15:20, 17 January 2014 (UTC)[reply]
Since English does not have grammatical gender, only sex (i. e., natural gender), and hair is not a feminine noun, I agree that it does not make sense to say blonde hair, whether the hair belongs to a female or male person. However, it does not make sense to call a woman blonde, either, as blond and brunet would be the only adjectives to exhibit this behaviour, since all others are invariable except for synthetic comparison. As an adjective, blonde is essentially only a variant form – in my POV: a pure misspelling – of blond. The only unarguably correct use of blonde is as a noun (a) blonde for "blond woman", although the obscurity of the use of the male counterpart (a) blond reveals a sexist bias, I suspect. See also my comment at Talk:Declension#English example for an adjective that declines. --Florian Blaschke (talk) 16:55, 18 January 2014 (UTC)[reply]
In British English 'blonde' is correct as a noun or adjective according to the OED and blond is only listed as an alternative spelling. I've never seen the spelling 'brunet' before - it gets a mention in the OED under the entry for 'brunette' but it says 'now chiefly US' Richerman (talk) 19:35, 18 January 2014 (UTC) [reply]


"Sexist bias" my ass. I think that's PC nonsense. How is being a "blonde" worse than being a "blond"? --Trovatore (talk) 21:39, 18 January 2014 (UTC) Ah, rereading your comment, I seem to have misinterpreted you. I still think it's a bit hypersensitive though. --Trovatore (talk) 21:42, 18 January 2014 (UTC) [reply]

The Movie Gravity.

1. When George Clooney untethers himself from sandra bullocks, why does he drift away from earth ? Shouldn't he be revolving around the earth just like a satellite ?

2. Wouldn't opening the hatch from outside drain all the air from the escape vessel? — Preceding unsigned comment added by 115.113.11.188 (talk) 10:22, 17 January 2014 (UTC)[reply]

I haven't seen the movie, but you might be interested to read Gravity (film)#Scientific accuracy.--Shantavira|feed me 10:36, 17 January 2014 (UTC)[reply]
SPOILERS: I have seen the movie, and yes, Clooney would not drift away in a real situation like that. As for the second, if you saw the film, you would hopefully have noted that the hatch never actually opened. Mingmingla (talk) 17:09, 17 January 2014 (UTC)[reply]
1. If Clooney was tethered to Bullocks but still accelerating (and stretching the tether) then breaking the link would make Clooney drift away.
2. he didn't do that, it was all Bullocks' dream. But no it wouldn't drain all the air that is still compressed in some sort of cylinder. Bullocks could open and close the oxygen supply, so there was air somewhere.
For watching films you need Suspension of disbelief. Gravity is not a documentary about how things work in space. The whole story is full of holes to make the story look more appealing. Traveling from the Hubble telescope to one space station and then to another, showing Bullocks in sexy hot-pants (instead of a more appropriate thermal suit below the space suit), and much more, makes the story progress. OsmanRF34 (talk) 18:05, 17 January 2014 (UTC)[reply]
Even the actress seems to have progressed to a plural person.  :) -- Jack of Oz [pleasantries] 23:29, 17 January 2014 (UTC) [reply]
Thanks Jack - that load of bullocks was annoying me too. :) Richerman (talk) 00:00, 18 January 2014 (UTC)[reply]
For suspension of disbelief you need a fiction which has its own rules and which abides by them consistently. The core rule underpinning most fiction is that the world is the same as ours unless otherwise stated; if the story changes that without justification, the suspension breaks, the immersion breaks, and the enjoyment is reduced.
For (1) it makes sense if you assume that they are spinning slightly so there is some centrifugal force pulling him away from the tethers. They did such a good job with the rest of the science that I bet the writers had this much better, but it got lost somewhere between storyboarding and the edit room floor. 109.70.142.60 (talk) 23:44, 17 January 2014 (UTC)[reply]
Neil Degrasse Tyson Tweeted several criticisms of the movie (although apparently he really liked it on the whole). Your first question is one of the issues he raises. In an interview with Huffington Post, director Alfonso Cuaron explained it this way:

What happens is she's grabbing the tethers and he comes with momentum. His momentum pulls her. They're moving together. There's a wide shot that shows they keep moving and you can see the background keeps on moving. What happens is, if he lets go, his force stops and the force of the tether takes over. And, look, by saying that, this is not a documentary. We took certain liberties. Part of the liberties we took were in the sense of we would stretch the possibilities of certain things.

--— Rhododendrites talk19:52, 19 January 2014 (UTC)[reply]

Eating when not hungry

People say eating when not hungry can lead to becoming overweight, but is this true even if the recommended daily calorie intake is not exceeded? 82.40.46.182 (talk) 12:44, 17 January 2014 (UTC)[reply]

Yes. Simply because you may not be using all the calories you're taking in. Recommended daily intake is vague at best. Many other factors, such as age, gender, level of activity, genetics, etc., can affect what YOUR RDI is. See Weight_gain 196.214.78.114 (talk) 13:27, 17 January 2014 (UTC)[reply]
Only in general though. Any number of external factors could be affecting the person's appetite. (Making them not feel hungry when they really do need calories, or vice-versa.)
Obviously, if the question is motivated by something personal you'd want to talk to an actual doctor. APL (talk) 22:57, 17 January 2014 (UTC)[reply]
I've heard two pieces of advice: "Never eat when not hungry" and "Always eat breakfast". Unfortunately, I'm never hungry when I first wake up, so those two are in conflict for me. I decided to go with the first bit of advice, and not eat breakfast. StuRat (talk) 23:59, 17 January 2014 (UTC)[reply]
Eating when bored may lead to distracted eating or panic eating.--Auric talk 17:21, 18 January 2014 (UTC)[reply]

Second-order differential equation in Simple Harmonic Motion

I am reading a physics book in which the following equation is given:

d²x/dt² = −(k/m)x

i.e., the second derivative of position (x) with respect to time (t) is equal to the negative of position (x) times the spring constant (k) divided by the mass (m). My book just says "it is a second-order differential equation and after solving this equation you get

x = A cos(√(k/m)·t)

". But it didn't mention how to solve it and only gave the answer. If possible, I want someone to show me the steps for solving a second-order differential equation. Britannica User (talk) 13:57, 17 January 2014 (UTC)[reply]
It sounds like you're reading a physics book that's a notch ahead of your mathematical preparation. By the time you read that book, its author assumes that you should have the equation for simple harmonic motion etched indelibly into your memory, and recognize its solution by inspection; maybe you skipped a few books ahead of your level. But how is the solution derived?
The solution is predicated on using Euler's formula to represent the cosine in terms of a complex exponential. Solve the differential equation by the separation of variables method, and recognize that the exponential function is a good ansatz because it equals its own integral (rather, it is linearly related to its own integral... and in the sloppy math we do in physics, that means it's equal, because we don't worry about a scale factor or a constant offset - those are just coordinate changes). Apply the physical constraints (which are stated implicitly, because no initial condition was provided), and drop the sinusoidal term (representing an arbitrary phase lag). This sort of thing is covered in gory detail in a book on methods of solution for ordinary differential equations.
So, this is just another case where "thinking like a physicist" means "knowing the math so well that you do all of the above by inspection." Most people need two or three years of mathematical training beyond their initial integral calculus work before they can just see this sort of thing. Some people, no matter how much time they pour in, never get there. And every few centuries, we get a Newton or a Fourier and they just get it. Nimur (talk) 14:23, 17 January 2014 (UTC)[reply]
Oh, and if you don't like Euler's formula, or if you're a skeptic who doesn't want to use eiωt as your trial function, then you can re-express your ansatz as a Taylor expansion-style infinite sum of polynomial terms, and it can be proved that the series will converge by solving explicitly for the polynomial coefficients. You will also prove that the exponential would have been a good choice. So, you can solve this equation numerically, too. In fact, there are many ways to find solutions to this very common equation. Nimur (talk) 14:30, 17 January 2014 (UTC)[reply]
I am a 10th grade student, but grade doesn't matter. I know about complex numbers, Euler's formula, differentiation (ordinary as well as partial), integration, and how to solve first-order ordinary differential equations; however, I don't know how to solve second-order differential equations. A book on calculus says: "There is no general analytic method for finding solutions to second-order and higher differential equations. Solutions of such differential equations are often obtained by educated guessing, or numerical methods". If the solutions are obtained by educated guessing, then I noticed that there can be many possible solutions of the second-order DE I mentioned above. Therefore, why did the author of my physics book just stick to a single solution? How did he know only that solution would fit that equation? Britannica User (talk) 15:47, 17 January 2014 (UTC)[reply]
Not a problem... just don't panic if there are some complex parts in the math. With enough time and effort, you'll get it.
In general, if a solution space is spanning the problem space, we can guarantee the solution is unique. In methods for solving differential equations, we construct an orthonormal set of basis functions from which we construct the solution. Even if we guess the answer - that is, if we use an ansatz function - we can still prove it is correct. Purist mathematicians have lots of tricks to help us make good guesses; but your book is correct, sometimes there is no analytic method when you've got very hard equations. (Sloppy physicists just learn how to guess right, and prove it later).
This probably sounds jargon-y, but what it means is that we have a formal, mathematical way to prove that our solution is complete and that it is the only possible solution for this problem. Here, we would use A cos (ωt) + B sin (ωt) and we know B=0, and ω2=k/m because we solve the equations explicitly. B could be nonzero if initial conditions were specified. As physicists, we don't care, because the function is periodic, so we can slide around the time we define for "t=0" until B is also zero. Nimur (talk) 15:52, 17 January 2014 (UTC)[reply]
  • If you want to learn the technique that applies here, the key thing to know is that the equation is linear. There is a general method for solving linear differential equations of all orders. Our article unfortunately is written at a level of abstraction that you probably won't be able to handle, but any textbook that covers differential equations will have a section on this class of problem. Looie496 (talk) 16:07, 17 January 2014 (UTC)[reply]
Indeed. The simple harmonic oscillator equation is especially easy to solve because it is both linear and homogeneous. Your book which said "There is no general analytic method for finding solutions to second-order and higher differential equations" probably had in mind more general *non-linear* differential equations, which are much more difficult to solve explicitly. Gandalf61 (talk) 16:27, 17 January 2014 (UTC)[reply]
And for the simple harmonic oscillator, there are many methods to analytically find the solution; separation of variables, for example. You can also take the Laplace transform (or any other differential transform) and solve algebraically in the transform domain, and then apply the inverse transform to find the result. This specific equation, x’’ = ±Cx has been analytically solved for hundreds of years in thousands of different ways. Nimur (talk) 16:38, 17 January 2014 (UTC)[reply]
  • We have an article specifically on the Harmonic_oscillator that may help. It is often easiest to find the Characteristic_equation_(calculus), which reduces the problem to simple algebra. The solution behavior depends on the roots of this equation: each root leads to a solution, and the general solution is a linear combination of these solutions with arbitrary real coefficients. You might also look at some other calculus books: both of mine discuss harmonic oscillators in a chapter on elementary differential equations. Hope this helps. OldTimeNESter (talk) 17:15, 17 January 2014 (UTC)[reply]
  • You can do the following:
    1. Introduce a new function: v = dx/dt
    2. Now we have: v·(dv/dx) = −(k/m)x
    3. The above equation can be solved by simple integration: v² = C − (k/m)x², where C is a constant
    4. Taking into account the definition of v we have: (dx/dt)² = C − (k/m)x²
    5. Or: dx/√(C − (k/m)x²) = dt
    6. The latter equation can be again solved by simple integration: t = ∫ dx/√(C − (k/m)x²)
    7. The latter integral you can calculate yourself as an exercise.
Ruslik_Zero 19:49, 17 January 2014 (UTC)[reply]
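As Nimur noted above, this very common equation can also be solved numerically. Here is a short Python sketch (the parameter values k = 4, m = 1, A = 1 are arbitrary, chosen only for illustration): it integrates x″ = −(k/m)x with a standard velocity Verlet step and compares the result against the analytic solution x(t) = A cos(√(k/m)·t) mentioned in this thread.

```python
import math

# Numerically integrate x'' = -(k/m) x with the velocity Verlet scheme
# and compare against the analytic solution x(t) = A cos(omega t),
# where omega = sqrt(k/m). Parameter values here are arbitrary.
k, m = 4.0, 1.0          # spring constant and mass (illustrative values)
omega = math.sqrt(k / m)
A = 1.0                  # amplitude fixed by the initial conditions x(0) = A, v(0) = 0
dt, steps = 1e-4, 50000  # time step and step count (t runs from 0 to 5)

x, v = A, 0.0
for n in range(steps):
    a = -(k / m) * x                  # acceleration from the equation of motion
    x += v * dt + 0.5 * a * dt * dt   # position update
    a_new = -(k / m) * x
    v += 0.5 * (a + a_new) * dt       # velocity update with averaged acceleration

t = steps * dt
print(x, A * math.cos(omega * t))  # the two values agree very closely
```

The numeric and analytic values match to well within 1e-5, which is one way to convince yourself that the cosine really is the solution without trusting the book's assertion.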

How do people have sex in "impossible sex positions"?

It's a short story, written by a well-regarded author, William H. Coles, here. The short story only mentioned it briefly, yet it got me thinking what type of character Denise was like and what she could have placed in her story. How do people know if a particular sex position is "impossible"? When there is no penetration or something? 140.254.227.125 (talk) 15:28, 17 January 2014 (UTC)[reply]

I think the "back-to-back" position would fall into this class! -- Q Chris (talk) 15:38, 17 January 2014 (UTC)[reply]
Please explain the qualifications that allowed you to make your judgment. 140.254.227.125 (talk) 16:12, 17 January 2014 (UTC)[reply]
See hyperbole. Were a student to regularly describe lovers copulating while hanging from chandeliers or standing on their hands, her creative writing instructor may well describe her as being "fond of [writing about] intercourse in impossible positions" even though such positions may be possible with sufficiently strong and well hung lighting fixtures and acrobatic participants. --ToE 05:39, 18 January 2014 (UTC)[reply]
(Was your use of the term "well hung" really necessary?) HiLo48 (talk) 22:12, 18 January 2014 (UTC)[reply]
Generally we consider gravity in determining impossibility. Most of the impossible positions require zero gravity, a series of straps, or an underwater environment to be feasible.--Auric talk 17:18, 18 January 2014 (UTC)[reply]

How to calculate probability of a complex system

It's clear that if you have a die or a bag full of red and black balls you can see how probable one result or the other is. But what if you cannot run several tests (as in the case of a nuclear reactor or a space shuttle), and you have an extremely complex system: how can you calculate the probability of failure? OsmanRF34 (talk) 17:58, 17 January 2014 (UTC)[reply]

You have to look at all the possible failure modes, figure out the probability of each, and then combine those. For example, failure mode and effects analysis of an electrical system involves looking at a break in every wire, to determine which breaks will cause serious problems, and which will not. In many cases multiple failures are required to cause a disaster, so then those probabilities must be combined. StuRat (talk) 18:11, 17 January 2014 (UTC)[reply]
If you want a simplified example, let's say there are two ways in which a system can fail, and the first requires 2 things to go wrong, while the 2nd requires 3, and for convenience, we will say each component has a 1% chance of failure per year:
P1 = 0.01 × 0.01 = .0001
P2 = 0.01 × 0.01 × 0.01 = .000001
Ptotal = P1 or P2 = 1 - (1-P1)×(1-P2) = 0.0001009999
Note that Ptotal is very close to just being the sum of P1, P2, etc., for small values, so sometimes that approximation is used. Here Ptotal was the chance of a disaster in one year, but you could get the chances for a decade or a century just by combining them in the same way as the last step above, where P1 would then be the probability of any failure in year 1, P2 in year 2, etc., and you can just keep adding more terms like that to get the total probability over the full time period.
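The arithmetic above can be reproduced in a few lines of Python (a sketch of the same toy example, with the same assumed 1% per-component failure probability):

```python
# A sketch of the failure-mode combination described above. Each failure
# path requires all of its component failures to occur, and the system
# fails if any path does. Component failures are assumed independent.
p = 0.01  # assumed per-component failure probability per year

p1 = p * p          # path 1: two components must both fail
p2 = p * p * p      # path 2: three components must all fail

# P(path 1 or path 2) = 1 - P(neither path occurs)
p_total = 1 - (1 - p1) * (1 - p2)
print(p_total)      # ≈ 0.0001009999
# For small probabilities the plain sum is a close approximation:
print(p1 + p2)      # ≈ 0.000101
```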
One limitation of this type of analysis is that it assumes that each component failure is independent of the rest. However, this isn't always the case. You can have a cascade failure, where one failure causes others, like an electrical grid where one wire going down puts too much power on the rest, and then they fail, too. You can also have a common outside cause for multiple failures, like an ice storm causing many electrical lines to go down. For a real example, the Fukushima Daiichi Nuclear Power Plants lost primary reactor power, backup electrical power from the grid, and the backup generators, all due to the same earthquake and tsunami. Not having any power then made it difficult and dangerous for them to open and close valves manually, which made it impossible to control the reactors.
With this in mind, the historic failure records of a complex system may be more useful in figuring out the overall risk. However, this isn't possible for a new system, unless it is similar to existing systems. And, even then, quite a minor change can have a huge effect on the overall risk. For example, had the backup generators been stored at the highest elevation in the complex, they would have been above the waterline and thus would have functioned to prevent the disaster.
I should probably also mention that human factors figure in to risk, as well. Specifically, once a system has worked well for many iterations, everyone becomes complacent, and safety standards drop, until a disaster happens, which causes them to tighten up on safety, until many more safe iterations, when safety standards start to drop again. This can lead to a cycle of disasters, relatively evenly spaced out, over years. The Space Shuttle disasters are an example of this. StuRat (talk) 18:34, 17 January 2014 (UTC)[reply]
To give an example of historic failure rates, there have been three major incidents in the nuclear power industry (Three Mile Island, Chernobyl, and Fukushima) out of 14500 reactor-years of operation. There are currently about 450 reactors in operation, which would suggest that a major incident might be expected to occur on average every 11 years. Of course, one might hope that each major failure leads to improvements that make future failures less likely. The fact that Fukushima occurred 25 years after Chernobyl might support that conclusion (or it might just have been good luck). From a practical perspective, one makes the systems that one understands as safe as possible, and hopes doing that is good enough to protect against problems that no one planned for. Dragons flight (talk) 22:36, 17 January 2014 (UTC)[reply]
I don't think I'd include Three Mile Island as a major incident. It was a minor incident blown out of proportion by the media. Nobody died. StuRat (talk) 00:05, 18 January 2014 (UTC) [reply]
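The back-of-the-envelope rate estimate above (three incidents, 14,500 reactor-years, ~450 reactors) can be checked quickly in Python:

```python
# Check the quoted historic incident-rate estimate: 3 major incidents
# over 14,500 reactor-years, with roughly 450 reactors now operating.
incidents = 3
reactor_years = 14_500
reactors_operating = 450

rate_per_reactor_year = incidents / reactor_years
expected_years_between_incidents = 1 / (reactors_operating * rate_per_reactor_year)
print(round(expected_years_between_incidents, 1))  # prints 10.7, i.e. roughly one per 11 years
```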


January 18

Brain death and life support

Hi all. After reading the news today about Jahi McMath's body slowly deteriorating while on life support, and the fact that her brain will slowly turn into liquid, I was curious. Does the body generally function normally despite being on life support and having no brain activity? What parts of the body tend to fail first and why? Why can't she maintain her body temperature (apparently hers is always low)? Any other random information on this subject would be interesting. Thanks! Justin15w (talk) 02:00, 18 January 2014 (UTC)[reply]

I think that theoretically you could keep a person alive for the rest of their natural life on total life support. However, in practice, this is quite difficult to achieve. Consider something as simple as bed sores. A healthy person would feel the need to adjust their position, and roll over in their sleep, and the problem would be solved. Somebody brain dead doesn't roll over, and therefore their blood doesn't circulate properly, eventually causing sores, then tissue death, then gangrene and blood poisoning and death. Now if the staff constantly adjusts the patient's position, this can be avoided. And this is just one of dozens of simple problems that can happen. Add them all up and the chance of living to their normal life expectancy is quite low. (The staff would also need to use electrical stimulation to exercise all the muscles, perhaps occasionally administer a stimulant to get the heart pumping faster, then lift them out of bed to an upright position, etc.) StuRat (talk) 02:22, 18 January 2014 (UTC)[reply]
Good point. Not to mention complications from constant mainlining of medication and total parenteral nutrition. Justin15w (talk) 02:38, 18 January 2014 (UTC)[reply]
There's more going on than that. The brain controls a lot more than just breathing. It controls body temperature, salt balance, blood pressure, some digestive functions, some immune functions, etc. Once the brain dies, many other things go out of balance, and inevitably this leads to general system failure. Also the breakdown of the dead brain tissue generates toxic byproducts that damage other tissues. Looie496 (talk) 02:47, 18 January 2014 (UTC)[reply]
So how did Ariel Sharon remain alive for 8 years while in a persistent vegetative state? Look at the case of Elaine Esposito - and that was before we had "modern" technology. It would be interesting to know where the OP got the idea that Jahi McMath's brain would slowly liquefy. Was that from a reputable source, or press speculation rather like we have here? Bedsores are the very least of the problems. There is plenty of technology available involving variable pressure mattresses that prevents the problem of localised pressure leading to bedsores. The idea of electrically stimulating the muscles to maintain their tone is a curious and incorrect notion. Passive exercises would be used to prevent contractures. The line "I think that theoretically you could keep a person alive for the rest of their natural life" - what is natural about being kept alive in a coma? Reminds me of an incident I witnessed when an anxious patient told the consultant that he was worried his heart might stop. The consultant listened to his heart with a stethoscope and then said, "Your heart is fine, it will last you for the rest of your life". The patient was greatly reassured. There are so many variables in this question and in the problem of keeping people "alive" that it is relatively difficult to give a general answer. Richard Avery (talk) 08:20, 18 January 2014 (UTC)[reply]
Yes, it was an LA Times story, linked here: http://www.latimes.com/local/lanow/la-me-ln-jahi-mcmath-body-deteriorating-20140108,0,4831276.story#axzz2qrNbVpsf Justin15w (talk) 15:52, 19 January 2014 (UTC)[reply]
A Vegetative state such as Sharon's is not the same as Brain death. You may not be conscious, but your damaged brain may well be working enough to regulate your body. Rojomoke (talk) 11:08, 18 January 2014 (UTC)[reply]
I agree absolutely and I was not making comparisons between the condition of Jahi McMath and that of Ariel Sharon. I was correcting some misleading information/uninformed speculation about the nursing care of a person in a long term comatose state. Richard Avery (talk) 15:01, 18 January 2014 (UTC)[reply]
Not every facility has the equipment or staff required to prevent bed sores. And yes, they are just one of many possible problems, which is what I said. As for passive exercise (just moving the limbs periodically), that would be OK if the person is never expected to recover, but electrical stimulation of muscles will prevent muscular atrophy, which is important if they are in a coma and expected to recover. Otherwise, when they awake and can't walk, they may get discouraged and suffer from depression.
Another issue with temperature maintenance in brain dead or coma patients is that a person who is just asleep will shed covers if they overheat or snuggle under them if they get cold. These simple temperature control mechanisms no longer work for the comatose and brain dead. StuRat (talk) 15:25, 18 January 2014 (UTC)[reply]
  • There are very important differences between (1) a persistent vegetative state such as Sharon was in, (2) coma, and (3) brain death. In a vegetative state the brain is largely functional, but not functional enough to support consciousness. A person in this state shows substantial activity, including sleep-wake cycles, eye movements, withdrawal from painful stimuli, bowel movements, etc. What is lacking are purposeful movement and communication. In true coma, which rarely lasts for longer than five weeks, there are no sleep-wake cycles, no eye movements, etc. Only some very basic reflexes are functional. Coma almost always resolves within a month either into a vegetative state or into brain death. In brain death, all of the electrical-activity-generating tissue within the brain is dead. Not just non-functional, but actually dead. Looie496 (talk) 17:46, 18 January 2014 (UTC)[reply]
    • It's also important to consider which part of the brain. The operation of basic life functions occurs in the brain stem; while most of the stuff that we consider a "person" (i.e. one's personality, emotions, memory, etc.) occur in other parts of the brain, primarily the Cerebral cortex. Thus, a body with a functioning brain stem can be kept alive almost indefinitely, but that's really not a living person if there's nothing working outside of that, it's just a bunch of tissue. --Jayron32 04:46, 19 January 2014 (UTC)[reply]

Powder beetle bite humans.

Request for medical advice
The following discussion has been closed. Please do not modify it.

Got a beetle bit a year & 5 months ago. Had Dr.' s look at it at OHSU about 20 of them and they didn't know what it is now it is eating my hand away. The VA don't even know for sure what it is.Please help I can't be the only one in the world with this. I still have a few of the bugs in a jar. — Preceding unsigned comment added by 199.15.221.104 (talk) 10:27, 18 January 2014 (UTC)[reply]

I'm sorry, but the Reference Desk is unable to offer medical advice. Rojomoke (talk) 11:11, 18 January 2014 (UTC)[reply]

Silicone Casting

Long rambling introduction of how I came to be in this position, that you can skip if you want: I originally bought a 3D printer some time ago, when they were just getting interesting, and since then have worked at producing and selling a range of different simple things, keyrings and figurines mostly, but more recently I thought to use my contacts in the costume industry to supply a few simple plastic parts there, items mass produced, strong and light weight, it seemed a good idea. Soon after, I was approached by someone that was interested in creating similar items for the same industry, out of silicon, and they asked if I would be able to produce moulds for them to cast these shapes. Initially I was a little reluctant, since it would allow them to compete with me in the same area, but I saw the value in working together and went along with it, designing a few simple moulds for them to use. However, they have since hit some issues of their own, and it is looking like they will not be able to go through with their plans at all. Which leaves me thinking, there is only a certain limited demand for printed plastic items, any chance to expand my business is helpful at this point, so perhaps I can pick up where they left off?

I am interested in casting items out of silicon (by which I mean the soft, slightly rubbery plastic stuff, I am not sure if it is related at all to the element of the same name) however, I am not entirely sure how to go about doing so, I know the basics of casting in moulds, but not the specifics for this particular material, and the internet is not being helpful, any search leads me to casting of other materials within silicon moulds, which seems more common. So, I wonder whether I might get a better answer here, from real people. Any suggestion of where I might find instructions on this method? Suggestions of good places to buy materials and equipment would also be helpful. From what I have seen, it is an area of some demand, and a wide range of possibilities, so a good opportunity for me if I can take it.

Thank you,

213.104.128.16 (talk) 22:32, 18 January 2014 (UTC)[reply]

I don't know much about casting, but the rubbery material you're considering is silicone, not the element silicon. DMacks (talk) 22:42, 18 January 2014 (UTC)[reply]
Ah, that makes sense, thanks. Still doesn't help with finding information, but it solves that little mystery. 213.104.128.16 (talk) 00:03, 19 January 2014 (UTC)[reply]
The specific subset of silicones you're looking for is silicone rubber. As for the casting process, this will depend on the viscosity of the material in its uncured state: if it's viscous, you'll have to use pressure casting (die casting), but if not, then simple permanent mold casting will do. In either case, you'll have to use the platinum-catalyzed curing system, cure-in-mold -- the other method just takes too darn long! 67.169.83.209 (talk) 06:56, 19 January 2014 (UTC)[reply]
Yes, and of course silicone contains silicon, hence the name. I've often thought that most plastics used in the kitchen should be replaced by silicone, as it doesn't absorb smells and colors like plastics do, and doesn't melt or degrade at the same low temperature as plastics. For example, I have nice glass bowls suitable for refrigerator or freezer storage, microwaving, and serving, but they have plastic covers which can't take microwaving and even warp in the dishwasher. One shortcoming of silicone seems to be that you can't get the same vibrant colors as plastics, but I could live with that. StuRat (talk) 23:10, 18 January 2014 (UTC)[reply]
In general, 3D printing is a horribly slow and expensive way to make almost anything in any reasonable quantity. Filament plastics cost between $30 and $40 per kg, whereas granulated or pelleted plastics are around $1.50 to $2 per kg. A 3D printer can easily take an hour to make something that an injection molding machine can make in a fraction of a second. So 3D printing is a non-starter for mass-produced items.
The benefits of 3D printing only come when the quantities are small and the losses due to the expensive plastics and L-O-N-G print times are outweighed by the cost of making a mold that's only going to be used a handful of times. So (depending on your market and the nature of what you make) there is a cross-over point at which making a mold and doing some kind of casting starts to make sense.
The precise point at which that happens depends entirely on the nature of what you're making - its volume and shape complexity being the biggest factors. Usually, when you design something that you expect to sell more than (say) a hundred of - you should design it in such a way as to make molding easy.
However, the useful intersection comes sooner when you can make a mold USING a 3D printer. This gives you the convenience of going direct from CAD drawing to product - and makes creating a custom mold very cheap. The issue is making the mold resilient enough to withstand making a large enough number of finished objects. Plastic molds don't last long with any mechanically pressurized casting system - so you might find yourself needing to make many, many molds using your printer. Poured resin (for example) casts well, and a plastic mold works well for a lot of castings. Metal casting into plastic molds generally doesn't work at all because the plastic melts - but you can find low melting point alloys that will work - although you may have to do things like spinning the mold ("spincasting") to force the metal down into all of the details - and then the pressures involved will still trash your molds after not much time.
So for silicone - the issue is whether you can simply pour the stuff in (which should work) - or whether you have to force it in (which probably won't).
If you have to force it in, then you may need to make another step. 3D print the original object - take a mold from that using one of a variety of possible techniques - then use the mold for casting the finished object.
I suspect that silicone is a material right on the edge between being usable with a 3D printed mold and not...and the critical deciding factor is the shape of the object.
If you're making things like key fobs - then I'd recommend looking at other materials. SteveBaker (talk) 16:53, 20 January 2014 (UTC)[reply]


That makes sense, most of what I make is in small quantities of custom-designed items, and I have made molds with the thing before, though those were sold to another company so I have no experience of exactly how the product works for casting items. The end result of the printing is a pretty tough, long-lasting plastic, mine have been through some rough testing before and come out surprisingly well from it, so I suspect I'll get at least quite a few silicone parts from a single mold. And all going well, perhaps I can move on to casting other materials, to get cheaper mass-produced items rather than the slow, expensive output my printer gives by itself.
The impression I'm getting from this whole discussion is that I can get the silicone rubber in more or less viscous types, the less viscous being easier to pour into the mold, of course. I am still not entirely clear on the curing process, but this is a good start. 213.104.128.16 (talk) 19:07, 20 January 2014 (UTC)[reply]

January 19

Bleach

If bleach is so harmful to human health, why is it used so widely as a cleaning product and even in swimming pools we swim in? Clover345 (talk) 12:21, 19 January 2014 (UTC)[reply]

It's a matter of use and concentration. Dihydrogen monoxide is deadly in large quantities, too, and yet it's in nearly every kind of food and even indispensable for our water supply. I don't think that bleach is used in swimming pools - chlorine, which is an active ingredient in many kinds of bleach, is also used for disinfecting swimming pools, but I think the delivery mechanism is quite different. I could be wrong, though - chlorine bleach, ubiquitous in the US, is very rare in Germany. --Stephan Schulz (talk) 12:29, 19 January 2014 (UTC)[reply]
See bleach for more information. There are lots of different bleaches.--Shantavira|feed me 13:56, 19 January 2014 (UTC)[reply]
I believe Sodium hypochlorite is used in swimming pools sometimes, but generally in home swimming pools rather than larger public ones which may use direct chlorination (i.e. with chlorine gas). We do have an article Swimming pool sanitation which suggests something similar although also suggests only large commercial public swimming pools use direct chlorination.
(Of course not everyone uses chlorination; bromination is another alternative and there are other possible methods.)
However people may or may not use sodium hypochlorite in the form of bleach (by which I mean something sold as bleach in the laundry or cleaning section). I think pool supply stores may generally have more specialised products; these may include tablets or powder/granules which I think are normally calcium hypochlorite or sometimes may be lithium hypochlorite, or sodium hypochlorite solutions in higher concentration probably not marketed as bleach. But provided you avoid junk [11], I'm not sure there's any advantage to choosing any of these; I imagine the best is whatever gives you the appropriate concentration for the lowest cost and is easy to store and safely apply. (The only real concern seems to be the calcium contributing to water hardness [12] and of course the difference between storing and applying a solid vs a liquid. I believe you normally have to predissolve the solids to ensure they get properly dissolved.)
I acknowledge, as per the article, swimming pool sanitation can be complicated, with regular testing etc. required. It seems one common addition is cyanuric acid, or alternatively products which contain the already chlorinated cyanurates Trichloroisocyanuric acid or Dichloroisocyanuric acid (these are commonly called stabilised chlorines in pool supplies), as they greatly increase the half-life of the chlorination (increased sunlight stability). And you also have to deal with the pH etc.
But it does seem to me there's also a degree of lack of knowledge. For example, there seems to be a concern over the build-up of cyanuric acid when you use the chlorinated cyanurates and the need for water replacement or higher levels of chlorine [13] [14]. But I don't understand why this is a problem; it seems to me the logical thing is to use stabilised chlorines initially and then, once you reach a suitable level, only add unstabilised chlorines (whether bleach, sodium hypochlorite solutions for pools, calcium hypochlorite tablets without cyanuric acid or whatever) until cyanuric acid levels start to drop to a level below what you want. (This site [15] seems to agree.) Alternatively it may be easier to add the cyanuric acid separately.
So it wouldn't surprise me if people are using calcium hypochlorite or even sodium hypochlorite solutions sold by pool supply companies when they are actually paying more to achieve the same chlorination as they would achieve using ordinary sodium hypochlorite bleach, simply because they never considered it, i.e. not for convenience or anything else.
BTW AFAIK the situation is similar for disinfection of water. Municipal water supplies usually use direct chlorination. But simply adding sodium hypochlorite bleach is recommended for smaller scale operations such as after a natural disaster or in areas without potable water supplies. In fact, the CDC recommends sodium hypochlorite over calcium hypochlorite tablets because of concerns over the quality and consistency of tablets. [16] I would have thought the same applies to bleach/sodium hypochlorite but I guess you don't have to worry about size and perhaps the variation is lower.
Nil Einne (talk) 19:12, 19 January 2014 (UTC)[reply]

Many people don't realise that one of the popular sterilising fluids for babies' bottles [17] is basically a dilute solution (16.5%) of Sodium hypochlorite (bleach). The manufacturers do say, however, that "The purification process during the manufacture ensures complete removal of all heavy metal ions, which would normally act as a catalyst to chemically break down many hypochlorites, causing instability". This product can also be used, at the right dilution, for purifying drinking water. Richerman (talk) 19:39, 19 January 2014 (UTC)[reply]

Bleach is, according to the EPA, a method of last resort for purifying drinking water. I learned the same advice from "extreme survival"-style camping: bleach is more effective than most of your ordinary camping-gear (like a filter or iodine droplets) at killing the nasties in your drinking water; it's more potent, lasts longer, acts faster, and it can kill you if you use too much in your water purification process. Nimur (talk) 20:06, 19 January 2014 (UTC)[reply]
You seem to have indented under the wrong reply - I didn't suggest using household bleach to disinfect water. The 16.5% solution is much safer to use. Richerman (talk) 20:54, 19 January 2014 (UTC)[reply]
There would seem to be very little difference between using a weak bleach solution (reading the FAQ, it's 2% *bleach*, 16.5% *salt*) and just using less bleach. The advantage of the baby bottle cleaner seems to come from removing all the heavy metal ions, favouring a safer decomposition of the bleach. Since the ions would presumably be present in the water anyway, you lose this advantage whether you use the solution or just straight bleach. MChesterMC (talk) 09:48, 20 January 2014 (UTC)[reply]
Sorry my mistake, it is a 2% solution. However, using a weaker concentrate (which is actually non-toxic according to their blurb) means that there is no chance of poisoning yourself if you get the dilution wrong. Richerman (talk) 17:16, 20 January 2014 (UTC)[reply]
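For readers following the numbers in this thread, the dilution arithmetic is just C1V1 = C2V2 (concentration times volume is conserved when you add water). A minimal sketch in Python; the 5% strength assumed here for household bleach is purely illustrative, as real products vary, so check the label:

```python
def stock_volume(c_stock, c_target, v_final):
    """Volume of stock solution needed to make v_final of the
    target concentration (same units throughout): V1 = C2*V2/C1."""
    return c_target * v_final / c_stock

# Litres of an assumed 5% household bleach needed for 1 L of a 2% solution:
print(stock_volume(5.0, 2.0, 1.0))  # 0.4 L of stock, topped up to 1 L with water
```

The same function gives the amount of a 2% steriliser concentrate needed for any weaker working dilution.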

Organic livestock more humane?

Someone told me that organic livestock is more humane from an ethical point of view than regular livestock. Is this true? I've always been skeptical of organic food claims being more healthy and better for the environment, so I'm also skeptical of this claim. ScienceApe (talk) 17:42, 19 January 2014 (UTC)[reply]

Exposing "organic livestock" to an environment with a high degree of organic chemistry (extra amounts of pesticides and fertilizers, etc.) seems not to be very humane or ethical.   71.20.250.51 (talk) 18:04, 19 January 2014 (UTC)[reply]
There is no reason for organic raising to be more ethical. The animals can still be slaughtered inhumanely and kept in terrible conditions without pesticides. The two are unrelated. Mingmingla (talk) 18:18, 19 January 2014 (UTC)[reply]
  • Let me point you to our article on organic beef. The part of the USDA rules that is relevant is that organic beef cattle have to be born and raised on pasture, and more importantly, must have unrestricted outdoor access. That means they don't spend time in feedlots, which are usually pretty unpleasant. Looie496 (talk) 18:21, 19 January 2014 (UTC)[reply]
(e/c) And presumably such animals benefit from, for example, not being fattened artificially. We could do with an article on organic livestock, it is mentioned only briefly in our organic farming article. The free range article is probably the closest we have.--Shantavira|feed me 18:24, 19 January 2014 (UTC)[reply]
Actually, the USDA says "Due to the number of variables involved in pasture-raised agricultural systems, the USDA has not developed a federal definition for pasture-raised products" which means, for example, that an animal which lived its entire life in a pasture and an animal which saw one once on the way to slaughter could both be labelled as "pasture" cattle. Matt Deres (talk) 19:26, 19 January 2014 (UTC)[reply]

The standards for USDA and UK Soil Association are here and here. Basically you need to check the label to see which standards apply. There are, of course, other labels that guarantee ethical standards for livestock that are not necessarily organically reared, such as some of the ones shown here. Richerman (talk) 18:56, 19 January 2014 (UTC)[reply]

One of the criteria the USDA lists for Organic Beef is the cattle, "Never receive antibiotics". My friend told me that consuming animals being given antibiotics can lead to antibiotic resistance. Is this true? ScienceApe (talk) 16:33, 20 January 2014 (UTC)[reply]

The thing to think about is antibiotic resistance in bacteria, not in humans or cattle. And yes, *every* use of antibiotics technically favors antibiotic-resistant bacteria. This is why they don't use antibiotic cleaners in space! This is also why doctors stress not quitting an antibiotic course early. If you want more info on this topic, you should probably open a new thread. SemanticMantis (talk) 16:47, 20 January 2014 (UTC)[reply]
(ec) It is possible for the bacterial population in your gut to become resistant, but even if it that didn't happen the indiscriminate use of antibiotics causes resistant organisms to evolve and get into the environment see:[18]. Richerman (talk) 16:54, 20 January 2014 (UTC)[reply]

Damaged molecules?

Resolved
 – 17:43, 20 January 2014 (UTC)

A "nutrition expert" interviewed on a TV health segment mentioned that commercial processing of vegetable oils "damages the oil molecule". This sounds like BS to me; either a molecule is what it is, or it isn't, right? Or, is it possible to "damage a molecule" such that it loses its health benefit? ~:71.20.250.51 (talk) 21:28, 19 January 2014 (UTC)[reply]

Well, sure; molecules can get changed in all kinds of ways. Just consider what a hamburger patty goes through as it goes from being raw to being cooked to being burnt to being nothing but charcoal. Eating that lump of black char is unlikely to provide much in the way of vitamins. Could you provide more detail about what the oil supposedly went through to become less nutritious? Did it have to do with hydrogenation? Matt Deres (talk) 21:50, 19 January 2014 (UTC)[reply]
That changes molecules from one molecule to other molecule(s). Processing doesn't "damage" an oil molecule (right?). -Re: "more details" --this post is in response to what seemed like mumbo-jumbo from an "expert" on TV (the processed oil you buy at a store is no good, etc.); this was on network programming, not an infomercial. ~:71.20.250.51 (talk) 22:04, 19 January 2014 (UTC)[reply]
Sure. Damage = change from useful to less useful. If I take a hammer to a window, I change it. Into something less useful. Changes to oil molecules that make them less useful could reasonably be described as damage. --Jayron32 23:41, 19 January 2014 (UTC)[reply]
You can "damage" some molecules - like when you denature proteins (cooking egg whites, for instance). I don't think this is possible for oils. The usual problem with oils is when the molecules are changed into other molecules by adding hydrogen to it. Trans fat is created this way. 75.41.109.190 (talk) 22:33, 19 January 2014 (UTC)[reply]
So, (let's say) you take oil from the Borobudur Ubatuba plant, and send some to the EvilFood Industrial Processing Corp., and some to the GreenyGood Food Coddler's Co-op., you could end up with two different Borobudur Ubatuba oils? ~:71.20.250.51 (talk) 00:09, 20 January 2014 (UTC)[reply]
  • The most common processes that damage oils and other lipids are known as rancidification -- that article will give you more information. However there are also other processes that can occur. For example if you put oil together with something that is strongly alkaline, the result is soap -- something you don't want in your food. Looie496 (talk) 05:15, 20 January 2014 (UTC)[reply]

This is becoming an argument about terminology. Sure, if you "change" a molecule, you'll have to either change the number and type of atoms within it (which makes a "different" molecule) - or you might possibly make it be folded differently.
Cyclohexane (to pick a very simple example) can fold into several different shapes (called "Chair", "Half Chair", "Boat" and "Twist-boat") - those forms all have the same chemical formula, the same number and type of atoms - and they are even connected together in the same way - so they are all "Cyclohexane". But I suppose that if you wanted "chair-Cyclohexane" and some process turned it into "boat-Cyclohexane" - you might say that it had been "damaged" by the process because the chemical properties have been slightly changed...but it's only a matter of terminology to say whether that's the "same" molecule or a "different" molecule -and "damaged" is such a loaded word in a situation where the molecule can trivially be flipped back into the preferred form. When we get to something as complicated as an "oil" - molecules will be changing and flipping shape all the time - so if you just do nothing whatever to a bucket of the stuff, it would slowly change (by some definitions of the word "change"). Perhaps that can be considered "damage" or perhaps "improvement" - or "unchanged" - depending on what use you're planning to put it to.
But even when the number of atoms changes - we have a terminology issue in the case of any long-chain molecule. We simply don't consider a molecule with 10,000 repeated units to be a "different" molecule from one with (say) 9,500 units. So you could take a polyethylene molecule, chop it in half - and you'd still say that you have "polyethylene" - even though the properties have changed and the strict chemical formula is different.
So I'm not sure this question is really very meaningful without a lot of clarification about the terminology we're using - and the purpose to which it'll ultimately be used.
SteveBaker (talk) 16:27, 20 January 2014 (UTC)[reply]
Thank you everyone, this does clarify the subject for me. I hadn't considered Protein folding, Molecular clustering, etc. Perhaps the "expert" wasn't totally off-base afterall. ~Eric:71.20.250.51 (talk) 17:43, 20 January 2014 (UTC)[reply]
Well, I wouldn't go that far! Nutritionists (especially the ones interviewed on TV shows) aren't chemists! There is no such thing as "an oil molecule" - vegetable oil is a complicated mixture of many different chemicals - and clearly it's very likely that any sort of processing will change both configuration and chemical formula of the resulting molecules. The big question is whether this constitutes "damage" - in the sense of the oil not being so useful after this processing stage. Clearly, these commercial processes wouldn't be applied to the oil if there was no benefit to the manufacturer - perhaps they're improving shelf life - or making the stuff clearer and more visually appealing in a bottle on the supermarket shelves. It's plausible that something they do to "improve" it from their perspective also "damages" it from the perspective of the nutritionist by (say) removing some flavors, vitamins or fibre. I'd bet that almost any change could be viewed as "damage" from someone's point of view.
The nutritionist's answer is (IMHO) premium bullshit because it's a vague and lazy description. Had we been told "commercial processing lowers the boiling point, making it harder to cook without it smoking"...or..."commercial processing removes the longer chain molecules, ruining the texture of bread made with it"...or whatever - then I'd listen and take note - but this sounds far too much like a blanket "Commercially processed anything is unspeakably horrible" kind of a claim, tossing in the word "molecule" because it sounds scientific - which is common amongst people who speak about food on mindlessly stupid TV health shows!
SteveBaker (talk) 00:28, 21 January 2014 (UTC)[reply]

A doubt about metric expansion of space: did the galaxies that are now receding from us at, for example, 100 km/s always recede from us at that speed, or did they recede more slowly when they were nearer? If so, this would conflict with the estimates of the universe's age, because those were calculated assuming that the recession was constant. 95.235.225.100 (talk) 22:06, 19 January 2014 (UTC)[reply]

No one assumes the recession is constant through time. The rate of expansion changes through time due to both the slowing effect of gravitational attraction and the accelerating effect of dark energy. Dragons flight (talk) 05:28, 20 January 2014 (UTC)[reply]
Coming up with an estimate of the age of the universe by simply assuming that the Hubble parameter has had about the same value during the life of the universe works surprisingly well. Even Alan Sandage's 1958 estimate for H of 75 (km/s)/Mpc amounts to a Hubble time of 13.05 billion years, and the current measured value for H of 67.80±0.77 (km/s)/Mpc amounts to a Hubble time of 14.43 billion years, both of which are surprisingly close to the current best measurement for the age of the universe of 13.798±0.037 billion years. However, it's been a long time since the determination of the age of the universe was done so crudely by professionals in the field.
Yes, it was discovered in 1998 that the expansion of the universe is accelerating slightly (see Accelerating universe), but the surprise in that observation wasn't that the Hubble parameter hasn't been precisely constant, but rather that the expansion of the universe wasn't decelerating somewhat. I.e., before 1998's observation, the presumption was that the deceleration parameter was likely positive, not that the deceleration parameter was zero.
The best current measurements of the age of the universe use observational data from NASA's Wilkinson Microwave Anisotropy Probe and the Planck spacecraft, and use the complicated ΛCDM model of the universe, which is a lot more sophisticated than simply assuming that the Hubble parameter has been constant, and estimating the age of the universe as its inverse. Red Act (talk) 06:50, 20 January 2014 (UTC)[reply]
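The crude inverse-Hubble-parameter estimate described above is easy to reproduce. A quick Python sketch (constants rounded; this is only the naive Hubble-time calculation t = 1/H, not the full ΛCDM fit used for the 13.798 Gyr figure):

```python
def hubble_time_gyr(h0_km_s_mpc):
    """Naive age estimate: invert the Hubble parameter H (in (km/s)/Mpc)
    and convert the resulting time to billions of years."""
    MPC_KM = 3.0857e19   # kilometres in one megaparsec
    YEAR_S = 3.15576e7   # seconds in one Julian year
    t_seconds = MPC_KM / h0_km_s_mpc
    return t_seconds / YEAR_S / 1e9

print(hubble_time_gyr(75.0))   # ~13.0 Gyr, from Sandage's 1958 value of H
print(hubble_time_gyr(67.80))  # ~14.4 Gyr, from the value quoted above
```

Both outputs land within about half a billion years of the modern ΛCDM age, which is the "surprisingly well" point made above.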

I meant whether, for example, 6.9 billion years ago (half the universe's age) the same galaxies were receding from us at (approximately) half their present speed, or whether their velocity was more or less the same as at present. Thanks for answering. 93.45.32.204 (talk) 16:56, 20 January 2014 (UTC)[reply]

Their past velocities relative to us would be similar to present. Dragons flight (talk) 01:41, 21 January 2014 (UTC)[reply]

Situations where a law change allowed researchers to gather useful data due to before/after comparisons?

I'm looking for examples where important social science data was able to be gathered because a new law went into effect, which allowed for before/after comparisons.

I'm especially interested in examples where the new law was initially rolled out to only a subset of the population, and that population subset was determined by something random like the last digit of their SSN or license plate, and this allowed researchers to treat it as a [inadvertent] randomized controlled trial.

Anywhere in the world is fine. --Hirsutism (talk) 22:22, 19 January 2014 (UTC)[reply]

I'm not sure this is exactly what you're after - because it's not "social science" and not exactly a "law" that was involved. But when the 9-11 attacks in September 2001 forced every passenger plane in the USA (and most in Canada) to be grounded for three days, researchers were finally able to deduce what effect their contrails had on the weather - showing a variation of between one and two degrees centigrade while the aircraft were grounded. This is an experiment that would be impossible to conduct without a fortuitous governmental imposition like that. Fortuitous things like long power-outages reveal all sorts of interesting things such as changes in birth rate, crimes and even literacy (parents read to their TV-deprived children after that massive power outage in the Northern USA - and enough of them stuck with doing it after the power came back on to make a measurable difference to childhood literacy rates over the next year or so)...but again, that's not a law that was passed. SteveBaker (talk) 16:09, 20 January 2014 (UTC)[reply]
  • What you are looking for is a specific type of Natural experiment. That term is very common in biological sciences (e.g. before/after analysis of a hurricane), but it applies equally well to the social sciences, when the researchers are not in control of the treatment or grouping. Here is an example I recall, "Does Daylight Saving Time Save Energy? Evidence from a Natural Experiment in Indiana" [19], that made use of different counties changing their time at different times. It's in an economics journal, and is focused on energy, but the ultimate drivers are sociological processes. Anyway, with the right term in mind, searching google scholar for /"natural experiment" law sociology/ (e.g. here [20]) produces myriad relevant hits, touching on child labor, recycling practices, etc., so that should get you started. SemanticMantis (talk) 17:07, 20 January 2014 (UTC)[reply]

January 20

Some articles are unclear

I cannot understand what is said by the articles protist and algae. The article protist includes algae in it and says that protists are eukaryotic. But the article about algae includes cyanobacteria, which are prokaryotic. These articles' facts oppose each other. The article about cyanobacteria includes the cyanobacteria in the bacteria domain. But in the article algae there are sentences like this:

  • Most algae except cyanobacteria contain chloroplasts

So if algae are protists (as mentioned) and protists are eukaryotes (as mentioned), then cyanobacteria should not be included in algae because they don't have a membrane-bounded nucleus. What can we do? --G.Kiruthikan (talk) 04:47, 20 January 2014 (UTC)[reply]

Did you read the lead of the algae article? I think it answers your questions. Basically the answer is that the protist article should have said that they include some types of algae. Looie496 (talk) 05:09, 20 January 2014 (UTC)[reply]
"It depends who you ask, and when," to some degree. Have a look at some of the most common classification schemes in use today. Some articles are surely using different schema. We can even find and cite sources that give contradictory correct answers!
I'm actually much more interested in this problem that Wikipedia articles contradict each other!
When I went to lower-grade school, our text-books used the five-kingdom classification method (bacteria, protists, fungi, plants, and animals). This was mostly consistent with what I "knew" to be correct; I had read, cover-to-cover, my home library copy of the 1967 World Book Encyclopedia, and it used a similar categorization. By the time I went to high-school biology, our text-books had switched to the "2.5 kingdom" (archaea, bacteria, everything-else) schema. Everything seemed wrong! Even the waterbears were in the wrong chapter of my high-school book, and there can be no doubt what they are! I got into a lot of trouble with my teacher by making noise about that issue. (If only more people could get so passionate about these important flaws in school biology textbooks!)
What had happened is that over the years, new scientific research has enabled us to classify the same organisms in many different ways. There is no single canonically-correct way to classify an organism; and if you assume that some microorganism must be either a protist or a cyanobacteria, you'd better be very sure you know exactly how you've defined those classifications. Sometimes, superficial treatments about science portray a level of consistency that doesn't really exist in the research community. Molecular biologists may prefer one classification scheme, while zoologists prefer a different scheme, while ecological conservation policy-makers use a totally different type of taxonomy. Classification schemes for organisms are not "facts;" they are positions that are put forward by prominent researchers. Old-fashioned publications, like paper encyclopedias and schoolbooks, had an editorial board who would convene, and there would be a top-down commandment specifying that one set of "facts" was canonically correct and reflected the current state of scientific knowledge (at least for the purposes of that year's publication). That's not how science works - if anything, Wikipedia is giving you more "correctness" by presenting contradictory information and letting you make the critical judgement.
If you are classifying an organism, you need to know which scheme you're using, and what basis is the standard for determining taxa. You should be aware that different sources and authors may use different schema. Particularly at Wikipedia, we are an encyclopedia edited by many individuals, and with very little coherent editorial oversight; nobody here is "the Chief Editor" who commands top-down that all articles shall use schema X. So, at a superficial inspection, our articles may contradict each other. Nimur (talk) 10:30, 20 January 2014 (UTC)[reply]

How do resistors suppress RF interference?

Ever since I started messing with cars I've heard that automobile spark plugs and their wires contain resistors. Supposedly this prevents RF noise from interfering with radio reception. On Wikipedia's Spark Plug page I found this statement:

The central electrode is connected to the terminal through an internal wire and commonly a ceramic series resistance to reduce emission of RF noise from the sparking.

That's great. Why does it work? — Preceding unsigned comment added by 50.43.12.61 (talk) 06:48, 20 January 2014 (UTC)[reply]

Same reason that a shock absorber reduces mechanical vibrations. 67.169.83.209 (talk) 09:52, 20 January 2014 (UTC)[reply]
It works via (electromagnetic) transmission line theory. A wave is produced, travels down the transmission line, hits the end of the line, and bounces back. Each time it travels down the line it releases RF, and it does this trillions of times, bouncing back and forth endlessly. To stop this, put a resistor in series with the transmission line. The resistor is tuned so that, to the wave, it looks like an infinite transmission line. As the wave travels down the line, it is absorbed by the resistor and does not bounce back, because as far as the wave is concerned it is travelling down an infinite transmission line. 220.239.51.150 (talk) 10:39, 20 January 2014 (UTC)[reply]
Schematic showing how a wave flows down a lossless transmission line. Red color indicates high voltage, and blue indicates low voltage. Black dots represent electrons. The line is terminated at an impedance-matched load resistor (box on right), which fully absorbs the wave.

— Preceding unsigned comment added by 220.239.51.150 (talk) 10:46, 20 January 2014 (UTC)[reply]
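The "tuned so it looks like an infinite line" point above can be sketched numerically. In standard transmission-line theory, a resistive termination R on a line of characteristic impedance Z0 reflects a fraction Γ = (R − Z0)/(R + Z0) of the incoming voltage wave; the 50 Ω value below is an illustrative assumption, not a spark-plug-wire figure.

```python
# Illustrative sketch (not a full spark-plug model): the voltage reflection
# coefficient at a resistive termination of a transmission line is
#     gamma = (R - Z0) / (R + Z0)
# A resistor "tuned" so that R = Z0 gives gamma = 0: the wave is fully
# absorbed and never bounces back to radiate more RF.

def reflection_coefficient(R, Z0):
    """Voltage reflection coefficient at a resistive termination."""
    return (R - Z0) / (R + Z0)

Z0 = 50.0  # assumed characteristic impedance in ohms (for illustration only)

print(reflection_coefficient(0.0, Z0))   # short circuit: -1.0 (full inverted bounce)
print(reflection_coefficient(1e12, Z0))  # near-open circuit: ~ +1.0 (full bounce)
print(reflection_coefficient(50.0, Z0))  # matched resistor: 0.0 (no bounce, no ringing)
```

The matched case is exactly the situation the answer above describes: to the wave, the resistor is indistinguishable from the line continuing forever.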

Wavelength of electromagnetic radiations emitted by different excited lead isotopes

Do the different isotopes of lead emit electromagnetic radiation of different wavelengths, or do all four isotopes emit electromagnetic radiation of the same wavelengths when they are decaying? — Preceding unsigned comment added by 27.62.251.166 (talk) 13:11, 20 January 2014 (UTC)[reply]

Yes, different isotopes have slightly different atomic spectra due to differences in nuclear structure. However, these effects are much larger in lighter elements. In lead, which is the heaviest stable element, it may be hard to measure the difference in wavelength between isotopes, but in principle it should be possible. See this article from Encyclopædia Britannica. The naturally occurring isotopes of lead are observationally stable (see isotopes of lead), so they do not typically decay, but when you compare e.g. Pb-210 with Pb-211 or Pb-214 the energies associated with the decay process differ significantly. - Lindert (talk) 13:47, 20 January 2014 (UTC)[reply]

I want to know whether it is fixed that a particular isotope would emit electromagnetic radiation (EMR) of only a certain specified range of wavelengths (i.e., the wavelength of EMR emitted by Pb-210 is different from that of Pb-211), or whether the wavelengths of EMR emitted are independent of the isotope used. 106.216.118.149 (talk) 14:45, 20 January 2014 (UTC)[reply]

The gamma rays emitted by the decaying nuclei all have different frequencies. However, when lead decays it turns into different isotopes of different elements, so your 210Pb gives off gamma rays that belong to bismuth-210m, at an energy of 271.3 keV. In the table in the isotopes of lead article you can see the numbers with m1, m2... after them; these are states belonging to lead, and they give off the gamma rays listed, all at different energies.
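Since the question is phrased in terms of wavelength while decay tables list energies, it may help to show the conversion λ = hc/E. A minimal sketch, using the 271.3 keV line mentioned above and standard CODATA constants:

```python
# Converting a gamma-ray energy to a wavelength via lambda = h*c / E.
# The 271.3 keV value is the line quoted in the thread above.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def wavelength_m(energy_keV):
    """Photon wavelength in metres for a photon energy given in keV."""
    return H * C / (energy_keV * 1e3 * EV)

print(wavelength_m(271.3))  # ~ 4.57e-12 m, i.e. a few picometres
```

So each gamma energy in the isotope tables corresponds to one definite wavelength, which is why different decay energies mean different emitted wavelengths.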

Food chemistry, eggs, milk, and salt

Hello, I witnessed some curious behavior making breakfast recently, and hope you can help clear it up. I whisked two eggs together in a clear measuring cup, until fairly homogeneous. I then added a bit (15-20cc) of whole milk, and did not stir. Both eggs and milk were roughly the same temperature, straight from the fridge. The resulting mixture had a distinctly inhomogeneous, marbled look, which is as expected (basically like this [21], but even less mixed, and with less milk. The important part is the borders are very sharp).

When I sprinkled a few shakes of salt on top, the veins of milk on the surface started wiggling and writhing, making the previously sharp and stable milk/egg border roil. It really caught my eye, as it reminded me of Diffusion-limited_aggregation or vortex shedding or some other clever pattern formation thing.

What's going on here? Something to do with surface tensions? Ions? Would there be any real "reactions" leading to a reaction-diffusion system? Anyway, I highly suggest you take a look next time you make scrambled eggs. Thanks! SemanticMantis (talk) 18:11, 20 January 2014 (UTC)[reply]

I'm not sure, but I think this may have to do with the changes in osmotic pressure caused by addition of salt. 67.169.83.209 (talk) 02:15, 21 January 2014 (UTC)[reply]
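If the osmotic-pressure suggestion above is right, a back-of-the-envelope van 't Hoff estimate (Π = iMRT) shows the pressures involved are substantial even at fridge temperature. The concentration below is an illustrative assumption about the salty layer, not a measurement:

```python
# Back-of-the-envelope sketch of the osmotic-pressure idea, using the
# van 't Hoff relation Pi = i * M * R * T for a dilute solution.
# The local NaCl concentration (0.15 mol/L) and fridge temperature (277 K)
# are assumed values for illustration only.
R_GAS = 8.314  # gas constant, J/(mol*K)

def osmotic_pressure_Pa(i, molarity_mol_per_L, temp_K):
    """van 't Hoff osmotic pressure estimate in pascals."""
    return i * (molarity_mol_per_L * 1000.0) * R_GAS * temp_K

# NaCl dissociates into two ions, so the van 't Hoff factor i is about 2.
print(osmotic_pressure_Pa(2, 0.15, 277.0) / 1000.0)  # ~ 691 kPa
```

Several atmospheres of osmotic pressure across the milk/egg boundary could plausibly drive visible water movement, though surface-tension (Marangoni-type) effects could also contribute.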

Foo yung

Are Chinese foo yungs healthy? Clover345 (talk) 22:18, 20 January 2014 (UTC)[reply]

"Yes" and "no". They essentially share ingredients with omelets. Your question has been answered on several sites; e.g.: an authoritative one→[22] ~:71.20.250.51 (talk) 22:46, 20 January 2014 (UTC)[reply]

Rubbing Alcohol

I was talking to my neighbor recently, and he recommended applying Rubbing_alcohol to a sore muscle, and vigorously rubbing it in to provide relief. Thus, the name rubbing alcohol. The article here makes no mention of such a use, or why it is called rubbing alcohol. I have no intention of trying this, but I was curious. Is this a common, safe, or valid use of the product? Cthulhu42 (talk) 22:36, 20 January 2014 (UTC)[reply]

The bottle I have handy has instructions that say exactly that, so I assume it's at least common and/or safe. As to how valid it is, I don't know. When the alcohol evaporates, it will cool your skin, but I don't think much of that effect would even reach your muscles, let alone provide any real benefit. Matt Deres (talk) 00:29, 21 January 2014 (UTC)[reply]
It says here " According to Medical Dictionary, the name "rubbing alcohol" stems from its use in the past as a medicinal rubdown, although this is not as common of an application now". It also describes the reasons for that usage under the heading 'Liniment for muscle aches'. Richerman (talk) 00:35, 21 January 2014 (UTC)[reply]