Wikipedia:Reference desk/Science
:There is an error in this question. The questioner makes an observation: People tap the nozzle when removing it from a car. Then, the questioner makes an assumption: People are tapping the nozzle because they want to get every last drop of gas out of the nozzle. Finally, the questioner asks questions based on the assumption being true. What if the assumption is false? I tap the nozzle when removing it from the car, but not because I want to get every last drop from the nozzle. I do it because there is usually a drop or two of gas on the nozzle. As you move it from the car to the pump, they can fall. If you are not very careful, they can land on your leg or foot. Just one drop of gas on your pants and you will have a bad aroma follow you throughout the day. So, it is best to tap a couple of times. It is the same as using a urinal: you want to tap a couple of times so you don't get a drip on the inside of your pants. [[Special:Contributions/209.149.113.66|209.149.113.66]] ([[User talk:209.149.113.66|talk]]) 12:48, 17 September 2015 (UTC)
::I never "tap" and it's never yet dripped on my clothes. --[[User:Dweller|Dweller]] ([[User talk:Dweller|talk]]) 14:12, 17 September 2015 (UTC)

:::Therefore, nobody else in the entire history of gas pumping has ever tapped to keep from having fuel drop on their clothing. You need to take part in more scientific studies, since your practices are universal for all of humanity. It would really cut out a hell of a lot of time spent observing human behavior. All anyone has to do is ask what you do. [[Special:Contributions/209.149.113.66|209.149.113.66]] ([[User talk:209.149.113.66|talk]]) 14:27, 17 September 2015 (UTC)



Welcome to the science section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


September 13

Difference between a large multiple star system and a small globular cluster?

Is there some magical number of gravitationally bound stars that marks the boundary line here? Anybody got a reference? Hcobb (talk) 03:31, 13 September 2015 (UTC)[reply]

This has a table: globular clusters have a minimum of 10,000 solar masses, open clusters between 10 and 10,000. The text mentions that the smallest open clusters contain fewer than a dozen stars. The known multiple star systems seem to go up to only seven stars. Ssscienccce (talk) 06:19, 14 September 2015 (UTC)[reply]
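For readers who want the rule of thumb as a formula, here is a minimal sketch (Python) of the mass-based split described above. The 10 and 10,000 solar-mass boundaries are the ones quoted from that table; treating them as hard cutoffs is an illustrative assumption, since real classification also weighs density, age, and orbital dynamics.

```python
def classify_grouping(total_solar_masses):
    """Rough classification by total mass, using the boundaries quoted above.

    Illustrative only: the boundaries are fuzzy, and astronomers also
    consider density, age, and how the stars are gravitationally bound.
    """
    if total_solar_masses >= 10_000:
        return "globular cluster territory"
    if total_solar_masses >= 10:
        return "open cluster territory"
    return "multiple star system territory"

print(classify_grouping(50_000))  # globular cluster territory
print(classify_grouping(500))     # open cluster territory
print(classify_grouping(5))       # multiple star system territory
```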

Mostly cloudy/partly sunny

The weather prediction for here on Sunday is "mostly cloudy". For Monday it is "partly sunny". What is the difference between these? Bubba73 You talkin' to me? 05:31, 13 September 2015 (UTC)[reply]

"Mostly cloudy" is slightly cloudier than "partly sunny". These are subjective terms by weather forecasters, not rigorously objective terms. Cullen328 Let's discuss it 05:43, 13 September 2015 (UTC)[reply]
Resolved
Thanks. Bubba73 You talkin' to me? 05:45, 13 September 2015 (UTC)[reply]
I beg to differ: Mostly Cloudy and Partly Sunny are both standardized technical vocabulary used by civilian weather forecasters in the United States. In civil weather services, these terms ("partly sunny," "mostly cloudy," and so on) are based on octants of cloud coverage. In aviation weather services, the corresponding terms "scattered" and "broken" are used. Consider reading Sky Conditions; FAA Handbook 7900 Surface Weather Observations; and of course, AC 00-45G Aviation Weather Services. The terminology is not only carefully defined; sky condition is (nowadays) typically determined by machine, using ASOS sky condition equipment, and reported in standard format.
"Mostly cloudy" means 5 to 7 oktas will be covered by opaque cloud; "partly sunny" means 3 to 5 oktas. The first of these forecasts implies the existence of a ceiling; the second forecast indicates that a ceiling is less likely.
Nimur (talk) 07:43, 13 September 2015 (UTC)[reply]
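A compact way to see Nimur's definitions is as a lookup over oktas (eighths of the sky covered by opaque cloud). The sketch below uses the 3-5 and 5-7 okta ranges quoted above; how to break the tie at exactly 5 oktas is my assumption, since the official NWS tables resolve that overlap more precisely.

```python
def forecast_term(opaque_oktas):
    """Map forecast opaque-cloud coverage, in oktas, to the civil terms
    discussed above. Boundary handling at exactly 5 oktas is assumed."""
    if not 0 <= opaque_oktas <= 8:
        raise ValueError("oktas run from 0 (clear) to 8 (overcast)")
    if opaque_oktas == 8:
        return "cloudy/overcast"
    if opaque_oktas >= 5:
        return "mostly cloudy"
    if opaque_oktas >= 3:
        return "partly sunny"
    return "mostly sunny or clear"

print(forecast_term(6))  # mostly cloudy: implies a ceiling
print(forecast_term(4))  # partly sunny: a ceiling is less likely
```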
That helps clear it up. In the table, partly cloudy and partly sunny are in the same category. I had assumed that partly sunny was more like mostly cloudy. Bubba73 You talkin' to me? 15:39, 13 September 2015 (UTC)[reply]

Note: The following answer was accidentally deleted in the edit where I added my own answer. I apologize for my error, which was not deliberate. Cullen328 Let's discuss it 05:17, 14 September 2015 (UTC)[reply]

...which illustrates the important distinction between a "weatherman" and a meteorologist! Not everybody who reads out the weather report on television news is a scientifically trained forecaster. People who care about the accuracy of their weather information rely on products of the National Weather Service (at least, in the United States). Although there is a plethora of copycat websites and TV reporters, with fancy graphs and data visualization eye-candy, there is no substitute for hearing the report from an actual meteorologist or briefer!
I would go so far as to say that television news - let alone a web search engine result - does not constitute a reliable encyclopedic source for weather information. We have a National Weather Service, and it publishes all its reports, as well as several great books on weather theory and practice, at no cost. Nimur (talk) 07:35, 14 September 2015 (UTC)[reply]
You may find this perspective interesting. Short Brigade Harvester Boris (talk) 13:15, 14 September 2015 (UTC)[reply]
Learn how to read METARs and you'll never have to worry. They even have decoders. Here are current METAR locations and conditions in the U.S. [1] --DHeyward (talk) 05:57, 16 September 2015 (UTC)[reply]
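If you do want to read raw METARs rather than rely on a decoder site, the sky-condition groups follow a fixed pattern. Below is a minimal decoding sketch for the common cloud-cover contractions; it deliberately ignores vertical-visibility (VV) groups, cloud-type suffixes such as CB, and every other part of a METAR, so treat it as an illustration rather than a complete parser.

```python
# Standard METAR cloud-cover contractions and their okta ranges.
COVER = {
    "SKC": "sky clear", "CLR": "no clouds below 12,000 ft",
    "FEW": "few (1-2 oktas)", "SCT": "scattered (3-4 oktas)",
    "BKN": "broken (5-7 oktas)", "OVC": "overcast (8 oktas)",
}

def decode_sky(groups):
    """Decode space-separated sky-condition groups such as 'SCT035 BKN070'."""
    for group in groups.split():
        cover, height = group[:3], group[3:]
        description = COVER.get(cover, f"unrecognized contraction {cover!r}")
        if height.isdigit():
            # Cloud-base heights are reported in hundreds of feet above ground.
            print(f"{description} at {int(height) * 100:,} ft AGL")
        else:
            print(description)

decode_sky("SCT035 BKN070")  # scattered at 3,500 ft; broken at 7,000 ft
```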

Source gated transistor

The BBC reported today about a school student who had done research on source gated transistors. I don't know what these are (and am not asking for a direct answer to address that). Despite Google returning lots of links to information about them, it seems that Wikipedia has never heard of them either. Is there an article in which the term "source gated transistor" can be placed so that this apparent ignorance goes away? Bazza (talk) 09:29, 13 September 2015 (UTC)[reply]

Here's the article that is referenced by BBC News: Self-Heating Effects In Polysilicon Source Gated Transistors (2015). It cites five other publications when it defines "source-gated transistor," and it looks like the term has been used in publications since at least 2003. Nimur (talk) 15:04, 13 September 2015 (UTC)[reply]
Thanks. I don't doubt the validity of the term; but where in Wikipedia should it be referenced? This is not a subject area I am anything like familiar with: I came to Wikipedia hoping to read a sentence or two on why this type of transistor differs from others. Is there someone who knows better who can either start a new article, or include mention of this in an existing one? Bazza (talk) 16:36, 13 September 2015 (UTC)[reply]
Could be added to thin-film transistor. This Google Books result has some info: Thin Film Transistor Technologies; other info here and here.
Like a FET, an SGT controls the current between source and drain with the gate voltage, but whereas in a FET this happens along the length of the channel, in an SGT the current is controlled at the metal-semiconductor contact at the source. As I understand it, the SGT seems to be a slower but more efficient and more robust alternative to the present TFT (used in, for example, LCD displays). The source-controlled nature reduces the impact of process variability, making them cheaper to manufacture. Possible applications include large-area electronics made with inexpensive but imprecise patterning techniques. Ssscienccce (talk) 20:29, 13 September 2015 (UTC)[reply]

Added to FET article for now. Hcobb (talk) 21:53, 13 September 2015 (UTC)[reply]

Resolved
Excellent, thank you. Wikipedia Search now looks not quite as ignorant ;-). Hopefully someone will feel the need to expand that section sometime in the future. Bazza (talk) 10:32, 14 September 2015 (UTC)[reply]
Looking forward to listing practical applications. Hcobb (talk) 13:39, 14 September 2015 (UTC)

Over-sampling in archaeology?

In List of bog bodies, "Esterweger Dose Child", "Girl of the Bareler Moor" and "Ballygroll Child" each mention "oversampling", "over-sampling" or "over sampling" (respectively) as a potential cause of loss of remains post-excavation. I'm kind of at a loss as to what, exactly, this would entail. Surely even a few hundred years ago there wasn't such a great need for destructive testing of bog bodies that more than half of the bones of a body would be destroyed? Or is oversampling something else?

Has anyone even seen this phrase used in the context shown here? Is this a common term in the field? 97.90.151.30 (talk) 09:34, 13 September 2015 (UTC)[reply]

I have heard this term ("oversampling," "destructive sampling," and so on) extensively in the context of archaeology.
For lack of a better resource, here is a pamphlet, Archaeology: Science and the Dead. It's published by the Anglican Church, but it's a very neutral publication that outlines the ethical, scientific, and legal issues related to scientific testing on human remains. In fact, the specific topic of disrupting ancient "bog-bodies" in England is a significantly debated topic: the Advisory Panel on the Archaeology of Burials in England (APABE) was specifically formed by the Church of England in response to a government request to more appropriately deal with ancient and prehistoric human remains archaeology in the British Isles.
Destructive sampling affects non-human remains, too. I recall that one of the great debates over the Shroud of Turin was whether it should be cut up for submission to, say, radiocarbon dating: some people considered it too valuable to destructively sample. Even ordinary artifacts must balance the utility of destructive sampling against its effects on irreplaceable artifacts. As an example, here are published guidelines for one museum: Destructive Analysis Policy and Procedures for the archaeology museum at the University of Michigan.
Nimur (talk) 15:17, 13 September 2015 (UTC)[reply]

Single-letter amino acid representation

I am familiar with the single-letter representation of amino acids. I've come across an SmD peptide which is described as:

AARG-sDMA-GRGMGRGNIF

(1) I understand what the capital letters represent, but not what the lowercase "s" represents.
(2) What do the hyphens in "-sDMA-" mean?

Reference: Mahler et al, 2005 - PMID 15642993.

Thanks, --NorwegianBlue talk 10:16, 13 September 2015 (UTC)[reply]

According to that article, it is "a symmetrical dimethylarginine (sDMA) residue", citing PMID 15642139 regarding this idea. We have an asymmetric dimethylarginine article but no symmetric dimethylarginine article. The symmetric variant has one methyl group on each of the two pendant N of the arginine sidechain, rather than both methyl groups on one. The hyphens set off "sDMA" as a single residue rather than suggesting that "sD", "M", and "A" are regular one-letter codes in the chain. DMacks (talk) 10:26, 13 September 2015 (UTC)[reply]
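The hyphen convention DMacks describes is easy to mechanize. A small sketch: split on hyphens, treat any token containing lowercase letters as a single modified residue (like sDMA), and expand the rest as one-letter codes. The lowercase-letter heuristic is an assumption that happens to work for this paper's notation, not a general rule.

```python
def parse_peptide(sequence):
    """Expand a hyphen-annotated peptide string into a residue list."""
    residues = []
    for token in sequence.split("-"):
        if not token.isupper():
            residues.append(token)   # modified residue, e.g. "sDMA"
        else:
            residues.extend(token)   # run of standard one-letter codes
    return residues

print(parse_peptide("AARG-sDMA-GRGMGRGNIF"))
# ['A', 'A', 'R', 'G', 'sDMA', 'G', 'R', 'G', 'M', 'G', 'R', 'G', 'N', 'I', 'F']
```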
Thanks! --NorwegianBlue talk 10:33, 13 September 2015 (UTC)[reply]
Resolved

Time traveller's age

Could someone remind me how the problem of a time traveller's reversed age while travelling to the past is solved in sci-fi? For instance, unless the time travel occurs in a localized bubble of present, a 29-year-old time traveller from 2090 travelling backwards to 2062 would be rendered one year old, making the return impossible (not to mention older times before his birth). From what I see, the Novikov self-consistency principle assumes that you would meet yourself, and according to the grandfather paradox, the only other possible solution, assuming the possibility of time travel, is a parallel universe. Maybe Asimov or Gamow tackled this? Brandmeistertalk 17:32, 13 September 2015 (UTC)[reply]

There are literally dozens of ways that this is handled in different science fiction tropes - no one of them is 'definitive' or 'correct', they are just fiction. Since mainstream science doesn't admit the possibility of time travel, there is no meaningful answer to your question. (This may not prevent various 'fringe' types from coming here and babbling on about wormholes and tachyons and other things that are about as real as unicorns and fairies.) SteveBaker (talk) 18:04, 13 September 2015 (UTC)[reply]
Yes. Backward time-travel is impossible, as far as we know. Assuming it is possible makes for entertaining plot lines in fictional stories. As one of my old math teachers said, "If you start with incorrect assumptions, you're liable to get interesting results." ←Baseball Bugs What's up, Doc? carrots 20:49, 13 September 2015 (UTC)[reply]
As far as science goes, the closest thing to reverse time travel might be under the many worlds hypothesis, if you could find a parallel universe just like our own, but with the desired time offset, and just jump right into it. That wouldn't create any time paradoxes, as no actual time travel occurs, it only appears to. StuRat (talk) 22:17, 13 September 2015 (UTC)[reply]
The notion of parallel universes seems only slightly less far-fetched than unicorns and spaghetti monsters. ←Baseball Bugs What's up, Doc? carrots 23:24, 13 September 2015 (UTC)[reply]
Actually, no - the notion of the Many-worlds interpretation of quantum mechanics is actually one of the leading theories right now. A 1995 survey of 70 leading experts came out with 58% of them saying that they expected it to be true. What is as far-fetched as unicorns and the FSM is the idea that you, as a person, can jump between parallel universes (although you do continually get replicated into near-identical universes). So you're right that StuRat's approach to getting some kind of time travel is unreasonable - but you're very, VERY wrong about parallel universes being on the level of unicorns, etc. SteveBaker (talk) 16:05, 14 September 2015 (UTC)[reply]
(Caveat: If parallel universes exist - then unicorns almost certainly exist in an infinite number of them...but not in our universe.) SteveBaker (talk) 16:13, 14 September 2015 (UTC)[reply]
Why would you assume a time traveler becomes younger when traveling to the past? According to general relativity, if traversable wormholes exist, time travel would be possible. Ssscienccce (talk) 04:11, 14 September 2015 (UTC)[reply]
Yeah - and if unicorns existed, you could rub their horns and make arbitrary wishes. Which is precisely as likely as worm-holes existing that let you teleport/time-travel/unlock-the-secret-to-eternal-youth/whatever. There are a large and very annoying number of so-called documentaries on the Science and Discovery channels that insist on talking about wormholes as if they were some kind of established science - and they just aren't. To the contrary, in fact. SteveBaker (talk) 16:05, 14 September 2015 (UTC)[reply]
I don't understand the question. The "localized bubble of present" is part of the definition of time travel. What travels through time is your 29-year-old body including your 29-year-old brain with its encoded memories of the year 2090. You can come up with not-quite-100%-absurd ways that that might happen (such as wormholes or rapidly rotating cylinders). Regressing in age is something else entirely. It's magic or super-advanced rejuvenation technology, not physics. -- BenRG (talk) 03:54, 14 September 2015 (UTC)[reply]
And if you jumped back 10 years, somehow magically becoming 10 years younger as a result, then your body (and therefore your memories) would be exactly what they were 10 years ago...then you'd have no way to tell that you'd ever been in our present. That wouldn't be time travel...it would be completely indistinguishable from normal reality. SteveBaker (talk) 16:09, 14 September 2015 (UTC)[reply]
It would still be different if either 1) your original position after travelling is where you time travelled from; or 2) there are now two of you. Well, 2 would normally imply 1, otherwise you end up with a rather nasty death. If you have 1 without 2, then this also means the original you disappears.

I'm not sure if you can come up with a reason why you will age backwards etc., but will either not affect the other "you" or at least would end up in the same position as what you started from, particularly since "position" is actually a fairly tenuous concept when you're talking about time travel. But then again, a lot of the rules appearing in time travel fiction don't make much sense, and I say this as someone who enjoys soft science fiction, including the occasional time travel story.

I do agree reverse aging etc isn't what's normally meant by backwards time travel in fiction etc anyway. Probably at least partially because it leads to far less interesting scenarios. (Time travel does normally include 1 and 2, if the time travelled is short enough for them to come up.)

Nil Einne (talk) 13:07, 15 September 2015 (UTC)[reply]


(2) without (1) would probably result in a rather messy explosion as the density of normally incompressible fluids instantly doubles. This is also a problem when teleporting à la Star Trek, because at the instant you arrive, you have to somehow displace all of the air that was previously at that location. Many details of these processes are left to guesswork...the soles of your shoes would have to be pre-compressed to accommodate small rocks under your feet - or else you'd have to teleport in an inch or two above the ground and have a weird shock as you drop down onto it.
The other issue with time travel is that people generally seem to materialize in the same place that they were in when they activated the machine (think "Back to the Future", for example)...but the Earth will have revolved and moved in its orbit and the Sun will have moved around the galaxy...so where exactly would you be if you *only* travelled into the past? In the vacuum of space, I think...although with considerations of relativity and the expansion of the universe...I'm not so sure what would happen. So you need a machine that can not only move you in time - but also in space. Also, is momentum conserved? Because you're on the surface of a rapidly spinning planet - if you aren't very careful, you'll materialize with a velocity sufficiently high to vaporize you and everything nearby! And now you have all the problems of the Star Trek transporter all over again.
Then there is the issue of what happens if you change something in the past. Science fiction never seems to agree on this one (often, as with Star Trek, they aren't even consistent within one fictional universe). So if you go back to the time of the dinosaurs and stomp on a butterfly...
  1. You can't return to the present, so what happens there is moot.
  2. When you return to the present, it's as though you never stomped the butterfly.
  3. When you return to the present, everyone evolved from lizards, everything is different - but you haven't changed.
  4. When you return to the present, everyone evolved from lizards - but everything else in the world is more or less the same, we still have cars made by Ford and the (lizard) president is still called "Obama".
  5. When you return to the present, everyone evolved from lizards - and so have you.
  6. When you return to the present, everyone evolved from lizards - and now there are two of you (one is a bit "lizardy", the other not).
  7. When you return to the present, everyone evolved from lizards - but everyone remembers how it used to be before you changed it, so you get the blame for the "lizard-thing".
  8. When you return to the present, it's as though you never stomped the butterfly, but you remember that you did.
  9. When you return to the present, it's as though you never stomped the butterfly, you remember that you did - but your on-board computer mysteriously didn't capture any information about that.
  10. When you return to the present, you don't remember a thing about the trip - but your on-board computer contains information about you stomping the butterfly.
  11. At the instant you stomp the butterfly, the future you changes into a lizard-man, so the 'you' in the dinosaur era also changes - so everything stays perfectly consistent. But maybe you create oscillations in the timeline by changing the future in such a way that lizard-you doesn't stomp the butterfly - so things return to how it was if you hadn't stomped the butterfly - so now human-you DOES stomp the butterfly - so now you're lizard-you and...Aaarrgggghhhh!
  12. You can't stomp on the butterfly, no matter how hard you try - something always happens to stop you.
  13. You stomp the butterfly, cause a temporal anomaly and you instantly cease to exist.
  14. You stomp on the butterfly, cause a temporal anomaly and the entire universe ceases to exist.
  15. You stomp the butterfly, causing the time machine's anomaly-detector circuit to blow up, trapping you in the past.
  16. You change the future, and S-L-O-W-L-Y the effect of that catches up with you (pictures of loved ones fade over hours in Back to the Future).
  17. You end up creating a fully consistent alternate timeline/universe.
  18. The consequences of stomping the butterfly gradually blend back into normality - so the future is never changed measurably (the opposite of "The butterfly effect"!)
  19. The time cops come with a replacement butterfly and sort everything out.
  20. The time cops have the forethought to arrest you just BEFORE you stomp the butterfly.
  21. Events unfold in such a way that you cancel out the effect of stomping the butterfly - so the present is unchanged.
I'm sure there are many more. SteveBaker (talk) 15:01, 15 September 2015 (UTC)[reply]

Rainbow width

I saw a rainbow today and it looked wider than other rainbows that I recall seeing in the past. Does the larger width necessarily mean that the conditions that were causing it were closer to me than other rainbows? Is there any correlation between rainbow width and other factors? Dismas|(talk) 22:09, 13 September 2015 (UTC)[reply]

Some other editors are bound to correct me if I am wrong, but the width of rainbows is always the same due to the physics involved. The reason this rainbow may have appeared wider is the juxtaposition of other things in the landscape, which gives the mind a sense of scale. So on a prairie it looks normal, but if the Rocky Mountains are in view it looks wider. Much like the moon appears larger when very close to the horizon.--Aspro (talk) 23:25, 13 September 2015 (UTC)[reply]
There are, of course, secondary and tertiary rainbows, pastel-coloured supernumerary rainbows, and twinned rainbows (I've seen several of these at the same time), but you are unlikely to have mistaken these for a single wider rainbow. As Aspro said, the colour width is a fixed angle (about two degrees for the primary and three degrees for the secondary), but the brain interprets this as different widths depending on the surroundings. Hold your thumb at arm's length to make a comparison. Dbfirs 07:12, 14 September 2015 (UTC)
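Dbfirs's thumb comparison is easy to check with a little trigonometry: a band of fixed angular width spans a linear width proportional to its distance. The 0.7 m arm length below is an assumed figure for illustration.

```python
import math

def apparent_width(angle_deg, distance_m):
    """Linear width spanned by a band of the given angular width."""
    return 2 * distance_m * math.tan(math.radians(angle_deg / 2))

# A ~2 degree primary bow next to your outstretched thumb (arm ~0.7 m):
print(f"{apparent_width(2.0, 0.7) * 100:.1f} cm")  # ~2.4 cm, about a thumb width
# The same 2 degrees painted on rain 2 km away:
print(f"{apparent_width(2.0, 2000):.0f} m")        # ~70 m wide band of colour
```

The same fixed angle therefore covers a hugely different linear width depending on how far away the rain is, which is exactly the scale cue the surrounding landscape distorts.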
See moon illusion. --65.95.178.150 (talk) 14:23, 14 September 2015 (UTC)[reply]

Thank you. Dismas|(talk) 23:25, 14 September 2015 (UTC)[reply]

September 14

about boyles law

R.bhusal98 (talk) 10:21, 14 September 2015 (UTC) State and explain Boyle's law.[reply]

Try Boyle's law. Mikenorton (talk) 10:24, 14 September 2015 (UTC)[reply]

Potato effect

Does a potato peel contribute to the oiliness of the skin ? — Preceding unsigned comment added by 175.101.24.162 (talk) 11:02, 14 September 2015 (UTC)[reply]

Potatoes contain virtually no oil.[2]--Shantavira|feed me 13:46, 14 September 2015 (UTC)[reply]
  • The preparation of baked potatoes by restaurants usually consists of oiling the skin, then wrapping the potatoes in aluminum foil, then baking them at 450 °F for at least an hour before serving (and they are kept rather hot for the rest of the shift by the waitresses who mind the potato drawer even after they have been baked). This may contribute to the notion that the skins are oily. μηδείς (talk) 19:24, 14 September 2015 (UTC)
Maybe the original questioner is asking whether consuming potato peels makes a human's skin oily? That's the way I read the question at first. Deli nk (talk) 19:39, 14 September 2015 (UTC)[reply]
I think the questioner is asking: "Would the rubbing of potato peel on the skin contribute oil to the skin?" Perhaps the questioner can say a little more about this. A Google search for rub potato peel on skin gets lots of hits. Bus stop (talk) 20:34, 14 September 2015 (UTC)[reply]

Automated hypothesis generation and cell culture machine

Over a year ago, I read in a magazine (I think it was Lab Times; I'm not sure how widely it's distributed) of a machine someone had built that generates its own hypotheses, tests them, and forms new hypotheses based on the results. I'm unable to find information on this now. I know there are automated cell culture machines, but this one worked with code a person had written to enable a much greater degree of autonomy. Has anyone else heard of this? --129.215.47.59 (talk) 13:00, 14 September 2015 (UTC)[reply]

Is it this? Tgeorgescu (talk) 13:12, 14 September 2015 (UTC)[reply]
No, that's not it. This machine generated its own data using cell cultures. --129.215.47.59 (talk) 14:05, 14 September 2015 (UTC)[reply]
Could it be the Robot Scientist "Adam" as described in [3] and [4], with some popular press coverage at [5] and [6]? It uses models of metabolic processes and gene homology information to predict what will happen in cell cultures of yeast knockouts, and then actually performs the experiment with an automated liquid handling system. They've since moved on to drug discovery (a number of links are in the Robot Scientist Wikipedia page). -- 160.129.138.186 (talk) 14:46, 14 September 2015 (UTC)[reply]
I think that might be it; I'm not sure. It's close enough, anyway. Thanks! 129.215.47.59 (talk) 13:48, 15 September 2015 (UTC)[reply]

Brain capacity

Why doesn't the brain's memory get full? It seems that what our memory can hold (no matter how you calculate it) is much bigger than what we need for survival. Even setting aside the myth that we use only 10% of the brain, it seems like there is a lot of unused memory out there (or in there). --Scicurious (talk) 19:24, 14 September 2015 (UTC)[reply]

Obviously, we don't remember everything, so it wouldn't be unreasonable to suggest that our brains have a limited capacity to remember and that limit is routinely reached. Also, I don't see any reason to think our memory is bigger than what we need for survival. In the environment in which humans evolved, being able to remember more (exactly where food sources were found in past years, exactly where predators were encountered in the past, etc.) would be beneficial to survival. Deli nk (talk) 19:36, 14 September 2015 (UTC)[reply]
Our brains constantly delete old memories, consolidate others, and just generally juggle lots of things around. Also there are different types of memory that store different things. The exact details of how memory works are a fertile area of neuroscience research. We have tons of articles on memory, many of which also point you to a plethora of sources. If you want to really get a deeper understanding you'll have to do some reading. Hey, it's a good way to exercise that memory! --71.119.131.184 (talk) 19:53, 14 September 2015 (UTC)[reply]
I'd be surprised if you didn't already have some impression of this fact, given the nature of your comments, but for the sake of clarity for the OP, it is worth noting that the brain doesn't "delete" memories. This kind of nomenclature is inaccurate and problematic when applied to the brain (and neural networks broadly) because it implies that a memory is coded data stored in a discrete memory block, as with a classical computer. Rather, in the case of the brain, experiential information is stored as a product of associations between neurons, which are used, as you note, by various modules of the brain in many different fashions. Thus memories are never really deleted, though their potency (that is, the likelihood of their recall and their effect on future cognition and behaviour) may dwindle as the associations between the relevant neurons change (some growing stronger, some weakening). But it's not a binary phenomenon, as "deleted" implies, and it's not uncommon for memories to remain but change in character. Snow let's rap 01:14, 15 September 2015 (UTC)[reply]
Snow let's rap, I am very surprised that you, or anyone else for that matter, did not answer the main inquiry, "brain capacity". We have an article that claims "It is estimated that the human brain's ability to store memories is equivalent to about 2.5 petabytes of binary data"; see link here. Void burn (talk) 02:33, 16 September 2015 (UTC)
I removed that from the article because it's from a Scientific American Q&A that cites no source, peer-reviewed or otherwise. Landauer estimated 10^9 bits of factual memory (not procedural) based on experimental tests of people's recall ability. -- BenRG (talk) 05:59, 16 September 2015 (UTC)[reply]
Memories are lossy. Today you call your bank...for the 30 seconds it takes to dial the number, you remember it - but 30 seconds later, it's gone. For a few minutes you remember the name of the guy you talked to and every detail of what he said. An hour later, you recall the important content of the call - but his name is gone. A day later, you recall that you called the bank and at what time, and whatever content mattered. A week later, you recall calling the bank some number of days ago...but when exactly is gone. A month later - you might not recall calling the bank - but the important things you learned from them are still there. A year later, you have no memory of the event at all...it might as well never have happened. A decade later, you don't recall even having an account at that bank.
This is true of everything. More important events stay around longer - the day you got married, the day your kid was born...those exist in some clarity for a long time. My memory of my first day at work at my first job is entirely gone - and I have just a couple of 'snapshots' of my first day in school.
How about an analogy: Think of it like a store of photographs on the hard drive of your computer. If you keep every photo you ever took, it would get full very quickly. So within a few months you decide to delete the ones you don't care about anymore - you go through and delete the ones that had your thumb over the lens, the ones that were out of focus. But what if you did that for years? Eventually, you'd have to start tossing out photos of friends at parties and stuff like that. But after decades, you'd be making hard decisions about throwing out *some* of your wedding photos in order to keep a couple of pictures of your kid's first birthday - and then some of those would need to be erased to make room for his high school graduation. You might get clever and decide to reduce the quality settings and re-save some photos at lower quality so they take up less space. Do that a few times and they gradually get blurry. But, you'd never consider throwing out ALL of your wedding photos - just as your brain will never throw away all memory of that day.
With that kind of strategy, even with a limited amount of storage space, you could keep enough photos at reasonable resolution that are very important to you, plus a good number of lesser importance that you'd store at lower resolution, but some events would simply have to be deleted entirely. But you'd never truly run out of disk space.
SteveBaker (talk) 14:15, 15 September 2015 (UTC)[reply]
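SteveBaker's photo-archive analogy is essentially a priority-based cache with lossy compression. Here is a toy sketch of that strategy; every name and number in it is invented for illustration, and it is not meant as a model of real neural storage.

```python
def compact(memories, budget):
    """Keep total size under budget by repeatedly degrading the least
    important memory ('re-saving at lower quality') and dropping any
    memory whose quality reaches zero. Purely illustrative."""
    while memories and sum(m["size"] for m in memories) > budget:
        memories.sort(key=lambda m: m["importance"])
        victim = memories[0]
        victim["size"] //= 2           # halve the 'resolution'
        if victim["size"] == 0:
            memories.pop(0)            # erased entirely
        else:
            victim["importance"] += 1  # resist degrading it again soon
    return memories

store = [
    {"name": "wedding", "importance": 9, "size": 8},
    {"name": "call to the bank", "importance": 1, "size": 4},
    {"name": "first day at school", "importance": 5, "size": 6},
]
print(compact(store, budget=10))
# The bank call vanishes, school survives blurry, the wedding stays sharp.
```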
My memory is as bad as yours, but that can't be inherent in the brain's design. There are people who can recall everything that happened to them on every day of their lives. All/most young chimpanzees seem to have eidetic short-term memory, versus only a small fraction of human children, so lousy memory might even be a uniquely human trait. -- BenRG (talk) 18:13, 15 September 2015 (UTC)[reply]
Estimates of the capacity are based on the total number of synapses. One bit per synapse would give about 100 Tbit, or roughly 12 TB. Some divide it by ten because they think most neurons have other functions. Others have argued that neurons can select the neurons they connect to, and that synapses aren't digital; they can encode more values (I think that was the origin of the 2.5 PB). This article on Slate mentions a few estimates; I don't know why he concludes that 100 trillion data points would equal 100 TB and not 100 Tbit.
I would argue against the assumption that we have more memory than needed. Evolution would not produce a memory that offers no advantage.
That we can learn new things doesn't necessarily mean we have "unused" storage. All neurons are connected; there's no memory waiting until the rest is filled. It seems logical that memories aren't limited to meaningful connections only: if some combinations of the signals that generate the memory make sense, there will be many others that are meaningless, and the associations created by them will fade away, never used or perceived. In children, such associations could include meaningful connections, like concepts they haven't learned yet.
A similar argument has been used to explain why we have so much memory: "In particular, there are reasons to believe the capacity of our memory systems to store perceptual information may be a critical factor in abstract reasoning. It has been argued, for example, that abstract conceptual knowledge that appears amodal and abstracted from actual experience (25) may in fact be grounded in perceptual knowledge (e.g., perceptual symbol systems; see ref. 26). Under this view, abstract conceptual properties are created on the fly by mental simulations on perceptual knowledge. This view suggests an adaptive significance for the ability to encode a large amount of information in memory: storing large amounts of perceptual information allows abstraction based on all available information, rather than requiring a decision about what information might be necessary at some later point in time (24, 27)." http://www.pnas.org/content/105/38/14325.full
Sparse distributed memory could explain why we seem to have more memory capacity than needed. Accuracy of recall would depend on the saturation of the memory. Ssscienccce (talk) 07:30, 16 September 2015 (UTC)[reply]
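For concreteness, here is the back-of-envelope arithmetic behind the synapse-count figures in this thread. The 10^14 synapse count and the bits-per-synapse values are exactly the contested assumptions under discussion, not settled numbers.

```python
synapses = 1e14          # ~100 trillion synapses, the usual ballpark
bits = synapses * 1      # the simplest assumption: 1 bit per synapse
print(f"{bits / 1e12:.0f} Tbit = {bits / 8 / 1e12:.1f} TB")  # 100 Tbit = 12.5 TB

# Working backwards, the popular 2.5 PB figure implies far more than
# one bit per synapse:
pb_bits = 2.5e15 * 8     # 2.5 petabytes expressed in bits
print(f"{pb_bits / synapses:.0f} bits per synapse")          # ~200
```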
How much we can remember is not the same as how many bits it would take to encode the low-level structure of the brain. According to magnetic storage there are "a few hundred" magnetic domains per magnetic region on a hard drive platter. It's not clear to me if each magnetic region is independently readable/writable as a bit, but conservatively guessing that it is, and adding in error correction, you'd get an estimate of several petabytes for the capacity of a modern hard drive. Encoding the analog direction of the field in each domain would take you to ~100 petabytes. But the amount you can actually write to it and read back reliably is <10 TB.
Landauer (who I linked above) got ~10^9 bits from tests of what people can remember. Brady et al. (which you linked) cites Landauer and estimates that (a minimum of) 17.8 bits per image is needed to explain their subjects' performance. The images were presented at intervals of 3.8 s, so that's higher than Landauer's numbers, but only by a small factor. -- BenRG (talk) 18:15, 16 September 2015 (UTC)[reply]
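Putting numbers on that "small factor": converting both behavioral estimates to a common rate makes them easy to compare. The 70-year lifetime used to spread Landauer's total is my assumption for illustration.

```python
landauer_bits = 1e9                    # Landauer's lifetime estimate
seconds = 70 * 365.25 * 24 * 3600      # an assumed ~70-year lifetime
print(f"Landauer: {landauer_bits / seconds:.2f} bits/s")   # ~0.45 bits/s

brady_rate = 17.8 / 3.8                # bits per image / seconds per image
print(f"Brady et al.: {brady_rate:.1f} bits/s")            # ~4.7 bits/s
```

So the picture-memory experiments imply roughly ten times Landauer's average encoding rate, which is indeed a small factor next to the petabyte-scale structural estimates.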

Namesake vocalization by birds

Hello! Wikipedia articles about birds have a box that summarizes taxonomical information, conservation status, etc., alongside providing a picture and audio. I just came across the eastern whip-poor-will's page and I am intrigued by a caption in the bird's box saying "Namesake vocalization". There seems to be no Wikipedia article explaining what that means. Although one can make a fairly good guess due to the mentioning of the bird name's onomatopoetic origin right at the beginning of the main article's text, I wonder whether there is a way of grouping birds according to that caption. Are there more birds that do namesake vocalizations? I tried to search for more, but failed to adequately perform that search. How to search for more? A list would be very helpful! Kind regards, stovariste — Preceding unsigned comment added by 67.249.183.98 (talk) 19:51, 14 September 2015 (UTC)[reply]

Perhaps that could be reduced from "Namesake vocalization" to simply "Vocalization". I'm not sure. (Eastern whip-poor-will) Bus stop (talk) 20:11, 14 September 2015 (UTC)[reply]


What an interesting question! Let's break this into two parts for clarity.
1. How to search for this? Eastern_whip-poor-will does say "namesake vocalization", but that's just the caption, not part of the standardized infobox templates. So while searching for "namesake vocalization" in the search box technically works, there isn't any easily viewable category of pages that have this kind of content, or pages that are about birds with onomatopoetic names. Most of the search results on [7] are spurious, but it did lead me to Lesser scaup in particular, and the scaups in general, which I think is exactly the kind of thing you're looking for. I don't know how to create categories, but someone here (or at WP:HELP) could probably help us create Category:Birds with namesake vocalizations or Category:Birds with onomatopoetic names. Once the category is created, it's easy to include a page in the category; you just put a little link at the bottom of each page.
2. What other birds have onomatopoetic names? A few I know of: the Chickadees and the Towhees definitely count. It's not clear whether e.g. screech owl fits a strict definition of onomatopoetic names, as we don't just call them "screeches" and "owl" is not onomatopoetic. Hummingbirds are surely named from the sound, but it's a sonation, not a vocalization, so I'll list it as being only somewhat relevant.
I'm not sure, but I don't think "namesake vocalization" is all that standard of a terminology among either birders or ornithologists. Sure, it has some use, but (in my opinion, WP:OR) this google search [8] would have far more relevant hits if the terminology were fairly standard.
Finally, though our audio content is variable in coverage and quality, the Cornell Ornithology lab has the very nice "all about birds" website and database that you can use to listen to calls that we don't have here on WP. Here is their page on Eastern Towhees [9]. SemanticMantis (talk) 20:30, 14 September 2015 (UTC)[reply]
List_of_onomatopoeias#Animal_and_bird_names reminded me of killdeer, and also has a few others I didn't know - like Dodo - though that one is more of a hypothesis than a known fact... SemanticMantis (talk) 20:35, 14 September 2015 (UTC)[reply]
In England we have Yaffle, an archaic country name popularised by a children's television series. Alansplodge (talk) 21:12, 14 September 2015 (UTC)[reply]
We also have the cuckoo and crow, which I would venture to say are more widely known… ‑ iridescent 16:40, 15 September 2015 (UTC)[reply]

Hello again! Thanks for the comments and rich discussion! The list of onomatopoetic animal and bird names is definitely helpful; I wasn't aware of it! Let me argue that calling a sound a namesake vocalization is much more precise than saying that an animal has an onomatopoetic name. First, one is not left with guessing which sound or vocalization has given rise to the name. Second, a namesake vocalization conveys that the name is not entirely deformed by historic (diachronical) processes. The classic example here is the English word pigeon, which, according to Saussure, is derived from Latin pipio, which, in turn, is derived from yet another precursor that finally leads to its onomatopoetic origin. I mean, even if a namesake vocalization is a concept that cannot be backed by original research whatsoever, it seems worthwhile establishing ... ahoy! --Stovariste (talk) 08:54, 15 September 2015 (UTC)[reply]

Nuke Mars now, ask me how.

Isn't Mars being hit by 5 Megatons of nuclear powered radiation every second already? Hcobb (talk) 22:26, 14 September 2015 (UTC)[reply]

And the point is?--Jubilujj 2015 (talk) 22:29, 14 September 2015 (UTC)[reply]
What could human action reasonably add to that? Just matching that rate would use up all of our nukes in less than half an hour. Hcobb (talk) 22:33, 14 September 2015 (UTC)[reply]
I assume you are talking about Elon Musk's proposal. Well, think about Earth for a second. We're hit by about 10^17 watts of solar power, on the order of 25 megatons of TNT every second. Yet you would notice if even a kiloton-range nuke detonated over your home. The idea is to put a concentrated amount of energy right at the poles, instead of spread across the planet. This concentration of energy, which does not happen naturally, would (in Elon Musk's estimation) vaporize enough water to give Mars something of an atmosphere. I'm not equipped to address the feasibility of the idea myself. Someguy1221 (talk) 23:17, 14 September 2015 (UTC)[reply]
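The 10^17 W figure is easy to reproduce from the solar constant and the Earth's cross-section; in megatons of TNT per second it comes out around 40, the same order of magnitude as the 25 Mt quoted above (which presumably accounts for albedo or uses a slightly different constant).

```python
import math

solar_constant = 1361            # W per square metre at the top of the atmosphere
earth_radius = 6.371e6           # metres
intercepted = solar_constant * math.pi * earth_radius ** 2
megaton_tnt = 4.184e15           # joules in one megaton of TNT

print(f"{intercepted:.2e} W")                       # ~1.7e17 W
print(f"{intercepted / megaton_tnt:.0f} Mt TNT/s")  # ~41 Mt per second
```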
Greenhouse effect, although risky on Earth and a disaster on Venus, would be vital to the process of terraforming Mars, as Carl Sagan wrote a few decades ago. Presumably there's more than one way to do it. ←Baseball Bugs What's up, Doc? carrots 23:33, 14 September 2015 (UTC)[reply]
Elon Musk's "proposal" was made during his appearance last Wednesday on The Late Show. Video and transcript are here: Elon Musk on Tesla, SpaceX and Mars Terraforming, with the Mars part starting at 02:17. It is possible that he mentioned this "fast way" of terrafromnig as a joke (playing into Colbert's "You're a super villian" theme), and he gave no indication of the number and yield of nuclear devices necessary or if he had even read a study that offers any such numbers. We have the article Terraforming of Mars, but it does not mention the nuclear option. -- ToE 10:50, 15 September 2015 (UTC)[reply]
However, our general Terraforming article does mention Martyn J. Fogg and his book Terraforming: Engineering Planetary Environments (1995), and one of his proposals [disclosure: I'm a friend of his and have the book] was to detonate (large-scale) nuclear explosives in (presumed) subterranean (sub-arenean?) carbonate rock deposits in order to return to the atmosphere their considerable CO2 and H2O content. Fogg however emphasised that this would be problematic if colonists or scientific bases were already present. {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 13:11, 15 September 2015 (UTC)[reply]
Thank you! Fogg's The Terraforming Information Pages (included in Terraforming#External links) includes a Zubrin & McKay 1993 paper Technological Requirements for Terraforming Mars which includes a section "Activating the Hydrosphere" that states: Activating the Martian hydrosphere in a timely fashion will require doing some violence to the planet, and, as discussed above, one way this can be done is with targeted asteroidal impacts. Each such impact releases the energy equivalent of 10 TW-yrs. If Plowshare methods of shock treatment for Mars are desired, then the use of such projectiles is certainly to be preferred to the alternative option [4] of detonation of hundreds of thousands of thermonuclear explosives. After all, even if so much explosive could be manufactured, its use would leave the planet unacceptably radioactive. (With footnote four being Fogg's 1992 "A Synergic Approach to Terraforming Mars".) -- ToE 18:29, 15 September 2015 (UTC)[reply]
I would argue it's better to leave the water sequestered at the poles for future colonists to mine, rather than have an extremely thin water vapor content in the air, mixed with radioactive isotopes. I don't see the entire planet being terraformed anytime soon (unlike in the movie Total Recall, where the Martians apparently designed a system capable of terraforming Mars in 30 seconds, but forgot to hit the on button). However, greenhouses full of growing plants might be possible. StuRat (talk) 16:31, 15 September 2015 (UTC)[reply]
Or, if we're discussing SF solutions, manipulate an ice asteroid to crash on Mars, like in Niven's Protector (novel).Sjö (talk) 11:03, 16 September 2015 (UTC)[reply]
This doesn't solve the gravity problem. Mars is smaller than Earth. It seems we could find an altitude on Earth whose conditions would equal the residual atmosphere on Mars; I suspect the equivalent altitude is very high and thin, and Earth doesn't grow much at that altitude (but it's dry, has ice, etc. - think Himalayas). If we did warm up the Martian atmosphere, it would simply escape to space until temperature and density returned to the current steady state. --DHeyward (talk) 06:10, 16 September 2015 (UTC)[reply]
I remember reading a claim that the Moon could hold an atmosphere for 100,000 years - though I can't find it now and in any case, because of the scale height, it would have to be one hell of an atmosphere (going really high up). Not sure if you'd have to give it a decent spin and a magnetic field for that number, either. But Mars is thought to have naturally held an atmosphere for some time during its early development. Indeed, I've seen claims that the planet is in something of an Ice Age now, with the brines frozen that might at other times be apparent as flowing liquid. Note that water vapor is a greenhouse gas, so warming the water already present ought to provide some increase in greenhouse warming. Methane is a much better greenhouse gas, and a component of some of the objects that might be lobbed in Mars' direction. Wnt (talk) 00:49, 17 September 2015 (UTC)[reply]
Yes, Fogg's book (see above) demonstrated that once created, a breathable-pressure atmosphere would persist for hundreds of thousands of years before needing replenishment, by which time we'd likely come up with ways to do so. A "revived" atmosphere might have the wrong proportions of gases, but that could be answered by a simple scuba-like apparatus, provided the atmospheric pressure was sufficient to obviate the need for whole-body pressure suits.
There are also questions about whether Mars' surface gravity is sufficient to keep humans healthy. Investigations by the Mars Society have suggested that the lower limit is around 1/3 g, putting Mars at 0.38 g near the limit, but this is a very difficult matter to research. See Colonization of Mars for more details. {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 14:21, 17 September 2015 (UTC)[reply]

Could dogs evolve language?

Most people prefer pets that "connect" with us, and many pet owners talk constantly to their dogs. That has led to a possible preference for linguistically competent dogs. Could it be that we are artificially making dogs evolve to understand human language? Could we breed dogs to understand human language better on purpose? --Jubilujj 2015 (talk) 23:10, 14 September 2015 (UTC)[reply]

Yes, we've been breeding dogs to get better at understanding us (and vice versa) for a long time now. See Dog_communication, Dog#Intelligence.2C_behavior_and_communication. selective breeding, Human–animal_communication and Interspecies_communication. As to your last question, I think we already have bred our dogs to improve their understanding of human communication, if not "language" in the strict sense. See Dog_behavior#Behavior_compared_to_other_canids for some other examples of this. There are many decently good claims to showing that specific dogs understand specific words, and various other "linguistic" accomplishments [10] [11] [12]. It's probably too far to say that current dogs understand a human language well, but I think WP:OR it's safe to say that some dogs have behavior consistent with understanding of some aspects of human language, some aspects of gaze, some aspects of emotion. At some point this gets philosophical, but my answer to your first question is "yes" and the second is "probably." For some interesting context on domestication experiments, see also Domesticated_silver_fox, which is still ongoing! SemanticMantis (talk) 23:50, 14 September 2015 (UTC)[reply]
Of course dogs could evolve language...so could frogs and ants. The future of evolution is entirely unpredictable.DrChrissy (talk) 00:01, 15 September 2015 (UTC)[reply]
@DrChrissy: the topic here is artificial evolution, not natural evolution. I don't believe it is unpredictable, but there are obviously some constraints.--Jubilujj 2015 (talk) 00:37, 15 September 2015 (UTC)[reply]
Specifically what you are talking about is selective breeding rather than natural selection, but DrChrissy's point still stands, because it is not as easy to isolate the influence of different selective forces as you may assume. Ignoring the "sci-fi" possibility of genetic engineering (which may indeed be a possibility in areas like this in the not-too-distant future), an experiment designed to grant dogs "true" linguistic capability through only selective breeding would likely run on the order of tens of thousands of years at minimum, if not massively longer, slowly improving the relevant neurophysiology one iteration/generation at a time. That's a long time to assume that other selective pressures wouldn't be involved. It also implies giving dogs a certain level of awareness and inviting them into the "cognitive niche", the consequences of which are impossible to predict. So really, Doc's answer is as close to as reliable as you are likely to get on this highly speculative question.
Is it possible that dogs (or any species) could give way to an evolutionary path that results in an independent evolution of complex natural language? The answer is clearly yes, because it happened once already. But asking for the exact likelihood with regard to any given species strains the predictive capabilities of even the most knowledgeable expert, and I think that was DC's point. Snow let's rap 01:38, 15 September 2015 (UTC)[reply]
America's Funniest Home Videos has shown a dog saying "I want my mama!", still very dog-sounding but uncannily well pronounced and English-like. Sagittarian Milky Way (talk) 02:03, 15 September 2015 (UTC)[reply]
Note that communication isn't limited to spoken language. Dogs seem to be one of the few non-human animals that understand pointing, for example. Some are even bred to point, to silently communicate the location of prey animals. StuRat (talk) 16:20, 15 September 2015 (UTC)[reply]
Would a dog that evolved to the point it could talk still be considered a dog? RJFJR (talk) 16:27, 15 September 2015 (UTC)[reply]
I suppose that depends on what you mean by "talk". If they could hold an intelligent conversation with a person, that would require major changes to their brains, and that would make them a new species. On the other hand, if they just learn to mimic a few words, like many birds can, that's not so remarkable. Note that a bird saying "Polly want a cracker" doesn't know it is named Polly, a cracker is the thing it eats, and what "want" means. It just knows when it makes that sounds it gets a nice treat. StuRat (talk) 16:45, 15 September 2015 (UTC)[reply]
I guess dogs and cats do not decode the words in speech, but they get very precise information about feeling and condition from the sound: angry, happy, proud and so on. Many animal species pick this up well, some from a long distance, whenever they hear anybody's voice. The meaning of your words is recognized as a noise together with your behavior: is the animal you are talking to one that knows what will happen on hearing this voice, or are you talking to somebody else? When your tone of voice changes with mood, the animal just detects similarities, but does not expect or rely on the same meaning and consequences. If the sound of the animal's name is not part of other words you use, it triggers the animal's attention whether you are watching it or not; if the same sound is part of other speech, you need to focus on the animal. If you call your resting cat by its name, you will see its tail begin to move. --Hans Haase (有问题吗) 17:53, 15 September 2015 (UTC)
Dogs have been bred to express behaviors that exist in wolves. Pointing, barking, herding, etc., can all be found in wolves. The traits humans reward are submission and specific behaviors that already exist. The interesting thing is that if all the purebred dogs are released and become "wild", the mutt converges to a specific size, but it's not a wolf. --DHeyward (talk) 07:44, 16 September 2015 (UTC)[reply]

September 15

information re subject listed on wikipedia

I am researching drugs related to shallow angle glaucoma. One, xylometazoline, is marketed under many names in different countries, naturally, and by several different companies. A product, Otrivin, is a nasal spray, and seems to be the only one that carries a warning about the effects of this drug. This is not mentioned in the info on Wikipedia. My question: is it appropriate for me to do research and then list it on the site, with the research history? I ask because I am researching a case where someone has permanent eye impairment due to a lack of appropriate/necessary information. Thanks, Suzanne — Preceding unsigned comment added by 115.188.152.155 (talk) 00:18, 15 September 2015 (UTC)

It depends on how you are doing your research. Wikipedia does not accept original research, for instance. But if by "research" you mean reading published literature, and compiling information, that would be acceptable if the sources are good. See the guideline on reliable sourcing for medical topics. Someguy1221 (talk) 00:42, 15 September 2015 (UTC)[reply]

can durex lube be used for an ultrasound?

or does it have to be a special lube?--Hijodetenerife (talk) 02:36, 15 September 2015 (UTC)[reply]

As far as I can tell there is no such thing as "durex lube"; Durex is a company that makes a wide range of lubricants, water- and non-water-based, with flavors, colors and other additives. Ultrasound gel is used to provide an air-free bond between the ultrasound probe and the skin. In my experience, personal lubricants are "waterier" than ultrasound gel and would spread and disperse more quickly. IMHO it would probably work, but not as well. Vespine (talk) 03:58, 15 September 2015 (UTC)[reply]

Data analysis: olive oil cuts risk of cancer by 62%

Does olive oil reduce the risk of cancer by 62%? What could the authors of the study (http://www.latimes.com/science/sciencenow/la-sci-sn-mediterranean-diet-olive-oil-breast-cancer-risk-20150914-story.html) have messed up? Disregarding fabrication of data (assuming good faith), and with a sample of 4,282 women and 35 cases of breast cancer, could that all just be a coincidence? --Scicurious (talk) 07:15, 15 September 2015 (UTC)[reply]

They concede that more trials are needed. "In particular, they wrote, future studies should include more women and more cases of breast cancer. All of the women in the PREDIMED study were white, between the ages of 60 and 80 and had Type 2 diabetes or at least three risk factors for cardiovascular disease, such as high blood pressure, too much 'bad' cholesterol or a history of smoking. That means these findings may not apply to women who are younger, in better health or from other racial groups." Bus stop (talk) 07:43, 15 September 2015 (UTC)[reply]
Interesting paper. They did control for weight, which was my first concern, among other factors such as smoking, drinking, age and family history. From the data itself, I think the most curious thing about it is that the participants in the olive oil group had the lowest incidence of cancer no matter how well they stuck to the diet. That's a thing that simply shouldn't happen, but I don't see any obvious reason for this. The authors claim a dose response, but I don't believe the data actually shows this with confidence. Regarding the chance of those 35 cases of breast cancer distributing as they did, it would be very unlikely for this to happen by chance. I do feel like a six-year study period is too short. The development time of many cancers from formation of the primary tumor to the appearance of symptoms can be twice that long, which makes me wonder if they're looking at something else – not cancer risk, but detectability or something along those lines. Someguy1221 (talk) 07:44, 15 September 2015 (UTC)[reply]
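To make the "unlikely by chance" point concrete: the thread doesn't give the per-arm case counts, so the split used below is purely hypothetical, and the published analysis used adjusted hazard ratios (Cox regression) rather than a raw binomial test. A minimal sketch, assuming an olive-oil arm of 1,427 of the 4,282 women and that each of the 35 cases falls into that arm independently with probability 1427/4282:

    # Hypothetical check: if 35 cases fell at random across 4,282 women, how
    # often would an arm holding ~1/3 of them see only a handful of cases?
    # The "observed" counts below are illustrative, not the study's figures.
    from scipy.stats import binom

    n_cases = 35
    p_arm = 1427 / 4282                    # assumed arm size (about one third)
    for observed in (4, 8, 12):            # hypothetical case counts in the arm
        p_low = binom.cdf(observed, n_cases, p_arm)
        print(f"P(arm sees <= {observed:2d} of {n_cases} cases by chance) = {p_low:.4f}")

The expected count under pure chance is about 11.7, so a count near 12 is unremarkable, while one as low as 4 would occur well under 1% of the time — which is the sort of calculation behind calling the observed skew unlikely to be coincidence.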
I always want to ask "versus what?". They just said "versus a regular low-fat diet". But they also said "The women in the extra virgin olive oil-heavy Mediterranean diet group got 22% of their total calories from the oil, on average". This makes me want to know where the control group got the calories to replace that 22%. For example, if they ate more animal meat, then that could have caused more breast cancer in that group. StuRat (talk) 16:14, 15 September 2015 (UTC)[reply]
These are all valid questions, and the answers are not clear if you just read the popsci coverage like the LA Times link above. If you want to know the answers, you have to be willing to do some additional reading. The research article in question is freely accessible, and even linked from the LA Times link above. Dietary information for the groups is described and summarized in the supplementary content "eTable", available here [13]. If you want to know more than what is published in the article or the supplementary material, or the other published papers documenting the methods and results of the PREDIMED trial, then you'd be better off asking the researchers than anyone here. SemanticMantis (talk) 21:43, 15 September 2015 (UTC)[reply]
According to prof. dr. Martijn B. Katan (one of the most cited authors in his field), evidence about nutrition is messy and "one study means no studies", i.e. only lots of studies provide enough evidence for health claims in nutrition. Source: his book, Wat is nu gezond? He said, e.g., that there are preliminary results suggesting that chocolate and red wine are healthy, but we will know for sure only 20 or 30 years later. That's the time required for doing proper research on such claims. Tgeorgescu (talk) 01:32, 16 September 2015 (UTC)[reply]
Scicurious, maybe Keys was the one who messed up: "What if It's All Been a Big Fat Lie?" [14] --Stefan-S talk 14:50, 16 September 2015 (UTC)[reply]
I saw a documentary about an experiment upon two MDs (twins). One was forbidden to eat sugars and the other was forbidden to eat fats. At the end of the experiment, they were both worse off than before, with the fat-eating MD more dangerously so. The idea (supported by Katan's book) is to have diverse, balanced meals and avoid one-sided eating. Tgeorgescu (talk) 03:51, 17 September 2015 (UTC)[reply]

Static shock while cycling under a power line

Last Saturday, I cycled under a high-voltage power line, close to the pylon supporting it. As I passed under it, I heard a faint static crackling sound, and when I inadvertently touched the metal of the handlebar, I got a slight shock. I think I've received such a shock every time I've cycled under this particular power line, but never when I've cycled under any other power line. What would cause this (and why only from this particular power line)? (If it is relevant, the road was on a bridge.) Iapetus (talk) 10:01, 15 September 2015 (UTC)[reply]

[Image: a lightbulb glowing because of the electric field from the power lines]
Power lines carry high-voltage AC, which means that they radiate quite strong electric/magnetic fields at 50 Hz. These move electrons around your body, so you get electric charges forming on different parts of your body (this is called electrostatic induction). Cyclists are surprisingly well electrically insulated - they sit on rubber seats, hold rubber grips and cycle on rubber pedals - so the charge has nowhere to go... until you touch some metal, and electrons rush away to balance the charges of the two objects. You can read about how the British National Grid tries to prevent so-called "microshocks" here, although ultimately the only answer is "keep the electrical wires high up". Presumably the bridge lifts you closer to the power lines than you would otherwise be. However, what they recommend is making sure you're touching the metal before going near the power lines. That way, the charge is continuously discharged in a very weak current, rather than in a single spark. (It's also worth noting that cyclists will occasionally get shocks anyway, even if they don't go anywhere near power lines, if they happen to be wearing clothes that easily generate a charge, like certain synthetics do.) Next time you go through in the dark, try taking a compact fluorescent light bulb with you. If the EM fields are especially strong here, the bulb should glow – you can then compare this to the glow seen under other power lines. Smurrayinchester 14:01, 15 September 2015 (UTC)[reply]
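To put rough numbers on the microshock mechanism just described — every figure below is an assumed order-of-magnitude value, not a measurement of any particular line or rider:

    # Back-of-the-envelope estimate for an electrically "floating" cyclist
    # under a high-voltage line. All values are assumed, typical magnitudes.
    E_field = 5e3      # V/m: ground-level field under a ~400 kV line (assumed)
    height = 1.5       # m: effective height of the rider's body (assumed)
    C_body = 100e-12   # F: rough capacitance of an insulated human body (assumed)

    V_induced = E_field * height              # ~7.5 kV with no discharge path
    energy = 0.5 * C_body * V_induced ** 2    # energy released in one spark

    print(f"Induced potential ~ {V_induced / 1e3:.1f} kV")
    print(f"Spark energy      ~ {energy * 1e3:.1f} mJ")

A couple of millijoules is tiny compared with a mains shock, but it is above the level most people can feel as a static zap, which fits the "slight shock" described. A bridge that lifts the rider closer to the conductors increases E_field, and the spark energy grows with the square of the induced voltage.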
I appreciate this question being asked. I've only a few times gotten minor shocks from the bicycle, and every time it's been while I was riding under a high-voltage power line. I always suspected that the local atmosphere was electric, but I never bothered to research it; I just learnt not to touch the metal while under high-voltage power lines. Nyttend (talk) 00:15, 16 September 2015 (UTC)[reply]
I think it's more likely that the head is closer to the power line than the bike is. The shock raises the bike's potential to match the head's. In some countries the voltage is high enough that birds cannot land on the lines, as the electric field is strong enough to kill them. Helicopters that bring linemen to inspect power lines use a pointy wand to reduce the potential difference to zero. It draws a 3-foot arc as it's applied and removed. Watch this crazy job. --DHeyward (talk) 06:26, 16 September 2015 (UTC)[reply]

Warning markings mimicry - why don't all animals do it?

I have a question about warning colouration in animals. Some animals advertise their dangerousness (poisons etc.) with bold colour markings. Other animals take advantage of this with bold colour markings of their own. My question: why haven't all of the animals in that environment adapted with their own fake warning colouration? It seems a very cheap way of avoiding predation (changing a few colour pigments vs manufacturing poisons, among other things). This would of course result in the signal being muddied in so much noise that the markings would no longer have any purpose. But evolution does not have that kind of foresight, so that's not the reason. Or is it possible that this does happen, and all the examples of warning coloration are an evolutionary 'snapshot', before all the other animals exploit the signal and it becomes useless? 129.96.86.179 (talk) 10:39, 15 September 2015 (UTC)[reply]

I think your last point is the answer, or close to it. If mimicry is useful, then (more) animals may evolve it. But the more animals that do, the less meaningful the colours become. If too many animals end up evolving warning colours, then one or more of the following are likely to happen:
a) intelligent predators (those capable of learning) will realise that warning colours are meaningless and ignore them. This will obviously result in some of them eating poisonous animals. Depending on how damaging this is, they may either evolve resistance, evolve an ability to distinguish between the real and mimicked dangers, or decline due to too many of them getting killed or injured.
b) intelligent and observant predators will learn to distinguish real hazards (e.g. wasps) from fakes (e.g. hoverflies).
c) unintelligent predators (or those that are genetically hard-wired to avoid warning colours) will avoid everything (and therefore starve to death, go extinct, and be replaced by predators that follow a more useful strategy).
d) Any predator that currently ignores warning colours (whether due to stupidity or immunity) will carry on as before.
Also bear in mind that warning colours do have a cost - you stand out more, so are more likely to be eaten by predators that can resist your (alleged) poison, as well as those that haven't yet learned to avoid it.
The net result of all this is that there comes a point where evolving warning colours wouldn't help an organism to survive, and may even be counter-productive. At that point, any mutant individual that had gained warning colours will be less likely than its drab peers to pass on its mutant genes, so the species as a whole will not evolve mimicry.
I'm not sure what is a good reference for this, but Dawkins's The Selfish Gene has a lot about game theory and how this can affect what strategies evolve, how they can change, and particularly how what is useful depends on what everyone else is doing. On Wikipedia, Evolutionary_game_theory may be useful - and of course Mimicry. Iapetus (talk) 12:05, 15 September 2015 (UTC)[reply]
For general reference, this phenomenon is known as Batesian mimicry. {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 13:16, 15 September 2015 (UTC)[reply]

The more mimic organisms (the harmless prey) there are, the less useful the markings are for the model organism (the organism with powerful defenses). In other words, it's cheap for the mimic, but it's harmful for the model organism. It's also not a question of intent. In apostatic selection, it's the predator that dictates how its prey evolves. Aposematism (warning coloration) depends on one very important thing: that the predator remembers that the prey taste bad or are poisonous. The easier it is for predators to remember, the better. But when predators eat mimics that look like the model organism but don't taste bad, they get conflicting signals. The model organism loses the protection it gained from the warning coloration, because predators will start eating it too due to previous experience from eating the mimics.

This is reflected in the other kind of mimicry, Müllerian, in which two or more organisms that are BOTH unpalatable/poisonous/dangerous begin to look like each other, again due to the way predators choose prey. It's easier for them to remember when the "things I should not eat" all look alike. The ones which look different get eaten; the ones which look like the ones they remember to be bad get ignored.

Prey organisms thus tend to form mimicry complexes, where bad-tasting prey evolve to look like each other, while they in turn are mimicked by other harmless organisms. The effectiveness of a mimicry complex relies on the ratio between the model organisms and the mimics. The more models, the more effective the complex is. So as soon as there are more mimics than models, they begin to evolve away from each other again, as predators once again learn to distinguish between them.-- OBSIDIANSOUL 13:47, 15 September 2015 (UTC)[reply]
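The ratio argument above can be made concrete with a toy simulation. The numbers below are invented purely to illustrate the negative frequency dependence — they aren't calibrated to any real species:

    # Toy model of a Batesian mimicry complex: the mimics' protection erodes
    # as they come to outnumber the defended models. Illustrative numbers only.
    models = 1000.0    # defended (bad-tasting) prey; held constant for simplicity
    mimics = 10.0      # harmless mimics, initially rare

    for gen in range(41):
        frac_model = models / (models + mimics)
        p_avoid = frac_model                 # predators honor the warning signal
                                             # in proportion to how often it is honest
        growth = 0.6 + 0.8 * p_avoid         # per-generation growth factor (assumed)
        if gen % 10 == 0:
            print(f"gen {gen:2d}: mimics ~ {mimics:7.0f}, protection = {p_avoid:.2f}")
        mimics *= growth

In this sketch the mimic population stabilizes near the model population (about 1,000 here): once mimics are no rarer than the models, the diluted signal no longer pays, matching the point about the model:mimic ratio above.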

Note that like Iapetus said, predators "remember"/"learn" etc. through either true memory in an individual (in higher animals) or through selective pressure (i.e. they begin to prefer prey with certain appearances after generations of trial and error get hardcoded into their genes)-- OBSIDIANSOUL 14:07, 15 September 2015 (UTC)[reply]
Note that evolution is a truism. It has no predictive value, as every observation can/will be explained even if the observations are completely opposite. --DHeyward (talk) 08:07, 16 September 2015 (UTC)[reply]
Seriously, DHeyward? Do you have any evidence whatsoever to support your utter bullshit? μηδείς (talk) 03:01, 17 September 2015 (UTC)[reply]
Hi all, OP here. Thanks for the detailed answers. The stuff about mimicry complexes looks particularly interesting. Although DHeyward, I think you are quite wrong when you say 'every observation can/will be explained even if the the observations are completely opposite'. If (for example) there were animals with predation-deterring warning colouration, and there were NO animals at all that exploited this with mimicry, this would be VERY difficult to provide an evolutionary explanation for. Therefore, not all observations are equally explicable via evolution. "Just-so stories" are real, but (as a layperson) I do think those concerns can be exaggerated. 129.96.87.62 (talk) 03:03, 17 September 2015 (UTC)[reply]
There is plenty of literature regarding the dynamics of mimicry complexes/rings if it interests you (e.g. [15], [16], [17], [18]). How they evolve and diverge is quite fascinating. For example, in some cases, Batesian mimics actually evolve their own defenses (though usually lesser than the original model organism's) and thus "graduate" to becoming Müllerian mimics, in essence switching from a parasitic relationship to a mutualistic one, thus gaining the protection of the mimicry complex while not completely devaluing its usefulness. There are also alternative hypotheses on the evolutionary origins of aposematic coloration, including inducing predator neophobia rather than experience.-- OBSIDIANSOUL 03:55, 17 September 2015 (UTC)[reply]

Dopamine chemistry

I'm currently reworking the dopamine article with the aim of bringing it to GA and perhaps eventually FA status. I wrote a short section on the chemistry of dopamine, but it isn't satisfactory and I'm not sure I'm capable of making it so. The basic problem is that chemistry is perhaps my weakest area. More explicitly, there are two things I could use help with: (1) Just reading it over to make sure that the material makes sense to a person who understands chemistry. (2) Sourcing. I scraped up the information from various places, but I don't know of a good Wikipedia-quality source. I suspect that all the information there can be found in a good biochemistry textbook, but I don't have easy access to any. If anybody could assist me with this, I'd be grateful. (We probably don't want more detail than the stuff I wrote -- I just want to make sure it's correct and appropriately referenced.) Looie496 (talk) 13:39, 15 September 2015 (UTC)[reply]

I would suggest looking for an active member of Wikiproject Chemistry here: Wikipedia:WikiProject Chemistry/Participants and asking if they would be willing/available to help you out. Somebody from the overlapping (in this case) project Wikipedia:WikiProject Pharmacology/Participants may also be able to provide assistance.--William Thweatt TalkContribs 05:41, 17 September 2015 (UTC)[reply]
I'm not a chemist, but giving it a brief glance the statement "dopamine is the simplest possible catecholamine" doesn't seem correct. For example, 3,4-dihydroxybenzylamine has one less carbon atom in the amine chain and thus would be a "simpler" catecholamine. --Paul_012 (talk) 08:41, 17 September 2015 (UTC)[reply]

Would bread spoil if it is placed in a 100% sterile environment?

Would it be able to last forever? Or would it just go stale? Is there a difference between staleness and spoilage? What do people mean when they say something "goes bad"? 140.254.70.25 (talk) 14:47, 15 September 2015 (UTC)[reply]

Staleness comes from crystallization of certain compounds within the bread. Bread requires no bacteria to go stale, just time. Staling covers the process. --Jayron32 14:51, 15 September 2015 (UTC)[reply]
Bread also contains oils, which will go rancid. Bacteria can cause oils to go rancid, but it's not the only way. StuRat (talk) 14:58, 15 September 2015 (UTC)[reply]
When DSV Alvin famously sank in 1968, the crew's bologna sandwiches were on board the craft. It sank to the seafloor, about 1500 meters deep, in the anoxic zone, where cold water and very low oxygen makes for a very hostile environment. The submarine was recovered about a year later... and the sandwiches were intact! Here's an explanation from the SEAS "Ask-a-Scientist" program, sponsored by the National Science Foundation:
Nimur (talk) 15:36, 15 September 2015 (UTC)[reply]
Military-style Meals, Ready-to-Eat (MREs) often include sandwiches that last 3 years. [19]. However, Spock might be right when he said: "It's bread, Jim, but not as we know it."--Aspro (talk) 15:44, 15 September 2015 (UTC)[reply]
You can tell anyone who's ever lived in Boston by the fact that they don't freak out at canned bread, which keeps virtually forever until you open the can. The origin story the tour guides like to tell is that, since so many of the stricter religious groups in Massachusetts forbade baking on religious holidays, enterprising bakers took to canning their bread to preserve it; I do not for one minute believe this, since even the most bone-headed puritan could have figured out "bake it the day before", and I strongly suspect it originated as a food for ships' crews. ‑ iridescent 16:28, 15 September 2015 (UTC)[reply]
Note that the Puritans rather strongly rejected religious holidays except for Sundays. The tour-guide story is further stupidified because there's a relevant biblical account (Exodus 16, beginning around verse 22), in which the Israelites were outright required to do more work on Friday to store extra food for the sabbath. This is a well-known portion of the Torah/Pentateuch, and most of the children would have been familiar with it, as well as the adults; it's not some obscure statement hidden in an otherwise on-another-topic prophecy. Nyttend (talk) 17:05, 15 September 2015 (UTC)[reply]
Above the nutrition label at that link it says it "Contains Diary". Well, I certainly don't want bread made from shredded diary pages. I might get diarrhea (or, if they shredded a ship's log instead of a diary, I might get logorrhea). StuRat (talk) 16:38, 15 September 2015 (UTC) [reply]

Once the yeast has done its work, it is killed by baking. What remains is food for mold, whose spores contaminate both the bread and the air around it. As the humidity of bread is high, the mold can grow; it has good conditions in the bread. Cookies and pastry that are kept dry expire much later, and they are also preserved by their sugar content. Burger bread is sweet as well. --Hans Haase (有问题吗) 17:32, 15 September 2015 (UTC)[reply]

A neat trick is to take stale chocolate chip cookies and put them in a plastic bag with a piece of bread. Voila, overnight the cookies are magically healed. --DHeyward (talk) 06:55, 16 September 2015 (UTC)[reply]
It certainly is a nice trick, but I have never known chocolate chip cookies to survive being eaten long enough to ever go stale. Do you have to lock them away in a safe vault first? Perhaps we ought to exchange homes if you suffer from stale cookies. You can also use my gym membership card, as I can't be bothered to walk there any more because I always come home to an empty fridge.--Aspro (talk) 17:07, 16 September 2015 (UTC)[reply]

Total tooth extraction

I had a wisdom tooth pulled today, due to problems caused by improper eruption, so of course I'm temporarily prohibited from doing any chewing on that side and a bit restricted in what I can drink, but I'm allowed to chew food on the other side. I've heard of people having all teeth pulled in one sitting (apparently to facilitate the installation of dentures), which sounds rather inconvenient in the short term: how would you eat? Do you get put on IVs for a while? Do you just have to gorge yourself beforehand, so that you're not hungry later? Nyttend (talk) 16:56, 15 September 2015 (UTC)[reply]

From the lede of our article liquid diet: A liquid diet is a diet that mostly consists of liquids, or soft foods that melt at room temperature (such as gelatin and ice cream). A liquid diet usually helps provide sufficient hydration, helps maintain electrolyte balance, and is often prescribed for people when solid food diets are not recommended, such as for people who suffer with gastrointestinal illness or damage, or before or after certain types of medical tests or surgeries involving the mouth or the digestive tract. -- ToE 17:04, 15 September 2015 (UTC)[reply]
(ec) You would have to use a blender and a straw. Not pleasant if you try to eat a hamburger that way, but not bad if you stick to liquid meal replacements. I would expect even that to sting at first, but soon you could handle liquids, while solids would hurt for much longer. StuRat (talk) 17:08, 15 September 2015 (UTC)[reply]
If you search online for after-extraction care instructions you will find orders not to drink from a straw for from a few days to a couple of weeks, due to concerns that doing so will dislodge blood clots and interfere with healing. This is particularly important following the removal of upper wisdom teeth, as their nerve can extend up into the maxillary sinus, and inadequate healing may lead to a permanent oroantral fistula. Closed mouth (or pinched nose) sneezing and vigorous blowing of the nose are similarly proscribed. -- ToE 18:01, 15 September 2015 (UTC)[reply]
I've bolded your important correction, hope you don't mind. Usually it doesn't matter so much when Stu gets something wrong because of his guesses, but in this case I felt extra attention was warranted. SemanticMantis (talk) 19:29, 15 September 2015 (UTC)[reply]
My dentist's generic how-to-act-after-extraction instructions sheet included a warning not to use a straw, but I figured that perhaps my dentist was more cautious than Stu's. Definitely obeying my dentist's instructions :-) Nyttend (talk) 00:11, 16 September 2015 (UTC)[reply]
This is OR and 40 years ago, but when my mother had all her teeth removed, she had a set of false teeth implanted into the holes left. The instructions to her were, from memory, nil by mouth before the op (under a general anaesthetic), then tepid liquids for a day, soft foods for 2 or 3 weeks afterwards (mashed potato and gravy, soup, yogurt, that sort of thing). --TammyMoet (talk) 18:05, 15 September 2015 (UTC)[reply]


September 16

When you die, what will happen to the microbes that live on your body?

Suppose you die today. What will happen to your body when you die, on a microscopic level? Will your microbes still use you as a food source until you return to the soil? Will they find a new host? How did your microbes get on your body in the first place? Did they get on your body when you chewed your first solid food or fed on your mother's milk? Does everybody have the same set of microbes in their gastrointestinal tracts? 71.79.234.132 (talk) 01:19, 16 September 2015 (UTC)[reply]

Answers in order:
  1. Decomposition
  2. Some will eat you, some will die.
  3. Not likely, since they will be six feet below potential new hosts, though they might colonize any insects or worms that find their way to you.
  4. We are always covered in microbes at all times. Initial colonization will occur at birth through contact with unclean people and objects.
  5. See above.
  6. No. People typically have very similar sets of microbes to one another, but there is substantial variation from person to person.
See Human microbiota for more information. Someguy1221 (talk) 01:24, 16 September 2015 (UTC)[reply]
Before burials and cremations were invented, did dead human bodies just lie on the ground to be eaten by the scavengers? If they lie there long enough, can the microbiota spread to other human beings in the tribe? 71.79.234.132 (talk) 01:35, 16 September 2015 (UTC)[reply]
Any contact with a microbe carries a chance of being colonized by that microbe. However, actually determining the inter-relatedness of different individuals' microbiota has not been possible until recently, with the development of genome sequencing and especially metagenomics. So while it's certainly possible to be colonized from the microbes in a cadaver, there is no way to know whether this was ever a common occurrence. If you take a group of modern people, however, you can collect genomic sequences of their microbiota and construct a phylogenetic tree that, combined with information about each individual's life history, may allow you to determine how and when individuals were colonized. Someguy1221 (talk) 01:47, 16 September 2015 (UTC)[reply]
Just a quick note: cremation as we know it today is far newer than burial. A similar effect is produced by burning a dead body on a funeral pyre, but that's a much more primitive process, and your microbiota would have a lot better chance of contact with a live body than in the (hopefully) more sterile environment of a cremation facility. Nyttend (talk) 02:22, 16 September 2015 (UTC)[reply]

Torture

When a person is tortured using a skull splitter (as famously used on Cavaradossi in Act 2 of Puccini's Tosca), what are the physiological effects on the victim? 2601:646:8E01:9089:D010:57E9:169C:D80B (talk) 02:53, 16 September 2015 (UTC)[reply]

Surely this would vary from victim to victim? Not familiar with the opera or the device (and a Google search for "skull splitter" finds basically nothing except the Orkney Skull Splitter brand of beer), but lots of other types of torture will have radically different effects on different people, so we should be able to expect the same here. Nyttend (talk) 03:33, 16 September 2015 (UTC)[reply]
The OP is asking specifically about physiological effects, which I don't think would vary THAT widely. When you put someone on a rack, their joints dislocate; that's the physiological effect, which would be pretty much the same person to person. However, like the above, I am not at all familiar with a "skull splitter", so I'm not too sure exactly how it works or what it does, apart from, presumably, splitting one's skull, which sounds unpleasant enough. Vespine (talk) 03:43, 16 September 2015 (UTC)[reply]
Trepanning came to mind as soon as I read "skull splitter". This no-longer-accepted medical procedure had at least one effect with wide variation in the results: some patients survived, and others didn't. This is why I was guessing that Cavaradossi might not have physiological effects similar to everyone else subjected to the same device. Nyttend (talk) 03:49, 16 September 2015 (UTC)[reply]
Trepanning is a procedure still in use. Rmhermen (talk) 21:37, 16 September 2015 (UTC)[reply]
[un-indent] FYI, the "skull-splitter" was an iron or steel ring (usually with 2 sets of spikes, one on each side) which was gradually tightened around the victim's skull, squeezing it. (The head crusher was similar, but applied pressure from the top.) 2601:646:8E01:9089:D010:57E9:169C:D80B (talk) 04:38, 16 September 2015 (UTC)[reply]
So, what question are you asking that your imagination cannot answer? It sounds pretty straightforward. Vespine (talk) 22:58, 16 September 2015 (UTC)[reply]
For one thing, whether the pain would become unbearable BEFORE significant brain damage occurs, or the other way around. 2601:646:8E01:9089:917A:B8AA:82B4:50AB (talk) 04:37, 17 September 2015 (UTC)[reply]
Here is a rare video clip of a similar process. ←Baseball Bugs What's up, Doc? carrots 01:40, 17 September 2015 (UTC)[reply]

Is Sexual Repression real?

Are there serious scientific studies about sexual repression? The Wikipedia article by the same name is not completely developed, so there seems to be nothing on the science behind this phenomenon, and I'm seriously doubting it exists as a medical condition. I don't get it. What's the difference between repression and self-denial/altruism? Could it be that the latter is voluntary and desirable while the former is involuntary and undesirable? 71.79.234.132 (talk) 04:04, 16 September 2015 (UTC)[reply]

The Wikipedia article on sexual repression is mostly dealing with people being prohibited from expressing their sexuality by cultural or legal standards, which would of course be a sociological and not psychological phenomenon. Looking for scientific use of the phrase is prone to bring up a lot of research and history related to discredited 19th century theories on sexuality (female hysteria and things of the sort). From a modern scientific perspective, it is recognized that there exist psychological conditions that cause a person to lose interest in sex. Such a loss of interest, which you might describe as sexual repression, is a common symptom of depression [20]. Anxiety disorders can also cause the condition vaginismus, which causes sex to become extremely painful for the female partner, and can lead to a loss of sexual interest. Someguy1221 (talk) 04:28, 16 September 2015 (UTC)[reply]
Then, is it still called "repression" if one prohibits oneself from engaging in certain behaviors by cultural and legal standards and feels perfectly happy and satisfied that way? Or does "repression" suggest the suppression is inherently bad? 71.79.234.132 (talk) 04:54, 16 September 2015 (UTC)[reply]
To my knowledge, "sexual repression" always has a negative connotation. A person who simply has no interest in sex would be called asexual. This is distinct from someone who has an interest in sex but chooses not to partake and is satisfied with that decision; such a person would simply be practicing abstinence. Someguy1221 (talk) 09:36, 16 September 2015 (UTC)[reply]
There are indeed many scholarly studies about sexual repression. As said above, it is commonly viewed through the lens of cultural and legal systems more than as a medical disorder, though there is also some work from a more psychological or social-psychological angle. Here's one about sexual repression in Maoist China [21], and here's one about how sexual repression changes through generations [22]. Here's a paper that discusses how masturbation was culturally repressed, seen not only as a "sin" [23] but also (incorrectly) blamed for many real medical illnesses. Here's a book chapter [24] about sexual repression in Sambia culture. Many more articles are available via Google Scholar. You can ask at WP:REX if you would like a full copy and cannot otherwise find one. SemanticMantis (talk) 14:32, 16 September 2015 (UTC)[reply]

Graduate responsibility

Why is it that graduates are increasingly being given more responsibility in the workplace straight out of uni? For example, I've heard of uni graduates being given multi-million-pound projects to manage, etc. 2A02:C7D:B917:9700:D8F6:4A2D:F9A7:DE4E (talk) 17:31, 16 September 2015 (UTC)[reply]

The Ref Desk isn't likely to be able to answer this in any event (open-ended "why" questions are notoriously difficult to reference), but the first step is to establish whether your premise is valid. Are recent grads being assigned more large-budget projects to manage now than they were 10 or 20 years ago? Your "I've heard" isn't actually evidence that there's anything to even investigate with respect to the "why". — Lomn 19:08, 16 September 2015 (UTC)[reply]
I agree with the above answer that this isn't really a science ref desk question. However, I don't really see any reason to strongly doubt it. If a grad gets "given" a multi-million pound project to manage, maybe they were an exceptional student, they already had a work placement as part of their curriculum, or they have good mentors and supervisors. Does this happen more now than in the past? I don't see why not. In my field of IT, certainly, there is a constant demand to do more for less: more is demanded from everyone each year, there is less time given to "train" people, and new employees are expected to "come up to speed" sooner and with fewer resources. That's just the very nature of capitalism. I don't see why more would not also be expected of people straight out of uni. Vespine (talk) 00:12, 17 September 2015 (UTC)[reply]
The original post seems to be referring to the business world, so my personal experience may not apply; however, my impression of academia in the US is that it has become significantly harder, not easier, for young people to receive large grants. Dragons flight (talk) 10:18, 17 September 2015 (UTC)[reply]
See complex question for the Wikipedia article on the problem with the question as asked, and why no one can actually answer it. Questions based on a premise that itself has not been established (like "When did you stop beating your wife?") cannot be answered in any meaningful way. --Jayron32 23:40, 16 September 2015 (UTC)[reply]
Ok, but remember not every complex question is also a loaded question. The former is hard and tricky, the latter is basically acting in bad faith. "Have you stopped beating your wife?" is acting in bad faith in a way that this question is not... SemanticMantis (talk) 01:30, 17 September 2015 (UTC)[reply]
Who says they are? ←Baseball Bugs What's up, Doc? carrots 01:36, 17 September 2015 (UTC)[reply]

Terahertz astronomy for detecting nucleic acids near Enceladus?

If life exists in the oceans of Enceladus, it may have a shared origin with Earth and would then contain RNA. Is it possible with current technology to get a yes-or-no answer about trace RNA in the plume by beaming terahertz maser pulses at the plume and reading back the resulting reflection/fluorescence by some sort of submillimeter astronomy? Wnt (talk) 19:40, 16 September 2015 (UTC)[reply]

I don't know, but these people [25] seem to think the idea has promise, even going one better, claiming detection of life bits that don't share a common origin with terrestrial life.
See also the 35 papers that cite that one [26]. This [27] seems like a nice somewhat recent overview, though at a skim I don't see anything about astrobiology. SemanticMantis (talk) 20:32, 16 September 2015 (UTC)[reply]

Is Anti-persistence forecast skill statistically significant sometimes?

Since persistence forecasting (PF) basically says that weather systems will stop moving forever, eventually you should approach the point where there has been, on average, one crossing of the average line, and then this should work better: if it's warmer than average now, forecast colder than average; if it's colder than average, forecast warmer. This would probably have some meteorological skill, as you've got some extra information by knowing the average time between zero crossings. Enough to be statistically significant in some or all cases? (Like certain climates or seasons of climates? It would probably work better if you only make forecasts when the temperature reaches a local minimum or maximum whose sigma level breaches a certain rarity (1 sigma? 1.5 sigmas?).) For any climate, shouldn't there be a range of days in the future where a contrarian forecast beats both "regression towards the mean" (climatology) and "it'll stay the same forever" (persistence)? Forecasting primers seem to jump straight from persistence to either a list of clues longer than "after x days, switch to anti-persistence", or "use a weather map", which requires an artifact like a smartphone, telegraph or satellite reports, or paper (or at least having used one in the last few days). They don't take this simplest idea to its logical conclusion and exhaust all methods simpler than remembering a bunch of patterns. Really, logically, after twice the average number of days between average-line crossings, persistence should be better than climatology again (though not as good as before); after 3 times as long, anti-persistence should be better than climatology again; after 4 times as long, the persistence forecast is best; and so on. But the number of days between actual crossings of the average line is so variable in many (all?) climates that I'm not sure if even persistence II's skill (two half cycles) would be statistically significant above climatology. Sagittarian Milky Way (talk) 23:28, 16 September 2015 (UTC)[reply]

This seems really rambly and hard to follow; can you focus and clarify a bit? For this question – "For any climate shouldn't there be a range of days in the future where a contrarian forecast beats both "regression towards the mean" (climatology) and "it'll stay the same forever" (persistence)?"
– sure, that's technically possible, but conventional research over the past half century or so has determined that it can't be done reliably. Chaos theory, Lyapunov exponents and all that. I think you might get much better answers if you spend a little while to make a clearer question :) SemanticMantis (talk) 01:23, 17 September 2015 (UTC)[reply]
I'm not really sure what you mean, but I think what you are getting at is: If it is warm today, does that make it more likely that it will be cold at some point in the future? In general, the answer is usually no. The most skillful forecasts are usually somewhere between "the weather stays the same" and "the weather returns to the mean", but predicting "the weather goes to the opposite condition" is generally not an improvement on any time scale. To give an analogy, imagine that each day you pick a random number between -1 and 1 and write it down. The weather might be imagined as something like the average of the picks over the last several days. Sometimes the seven day average will go high, and sometimes low, but on average it returns towards zero. However, since each random pick is independent of what came before, there is nothing that forces low numbers to automatically follow high. Hence there is no more skill in predicting that low totals follow high totals than in predicting that tails should follow heads when flipping a fair coin. Dragons flight (talk) 13:24, 17 September 2015 (UTC)[reply]
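Dragons flight's coin-average analogy is easy to test directly. The sketch below scores the three rules — persistence, climatology and anti-persistence — by mean squared error at several lead times; it is a toy model of the analogy, not of any real climate:

    # "Weather" = 7-day running mean of independent daily picks in [-1, 1],
    # as in the analogy above. Compare three forecast rules by MSE per lag.
    import numpy as np

    rng = np.random.default_rng(42)
    picks = rng.uniform(-1.0, 1.0, size=500_000)
    weather = np.convolve(picks, np.ones(7) / 7, mode="valid")

    for lag in (1, 3, 7, 14):
        now, future = weather[:-lag], weather[lag:]
        mse = {
            "persistence": np.mean((future - now) ** 2),       # "stays the same"
            "climatology": np.mean(future ** 2),               # "returns to the mean"
            "anti-persistence": np.mean((future + now) ** 2),  # "flips to the opposite"
        }
        best = min(mse, key=mse.get)
        scores = ", ".join(f"{name}={val:.4f}" for name, val in mse.items())
        print(f"lag {lag:2d}: {scores} -> best: {best}")

In this model the autocorrelation is never negative, so anti-persistence never wins: persistence is best out to about half the averaging window, and climatology thereafter. Anti-persistence would only beat climatology at a lag where the autocorrelation dropped below -0.5, which requires a strongly oscillatory climate signal rather than the variable, irregular crossings the question describes.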

September 17

Gains and losses at petrol pumps

I often see people vigorously tapping the nozzle of the petrol pump to shake off every last drop before replacing the nozzle at the pump. If we assume petrol costs just £1 a litre (it's currently a little more than that for unleaded, where I live, but this keeps it simple), what would be an approximate monetary value for what these people are gaining and impatient people like me are losing?

If we pretend that every second you're doing something other than working you're losing wages (yes, I know), and if we assume the jiggling takes about 10 seconds and someone is paid at just the UK minimum wage of £6.50 per hour, how much petrol would they need to save in order to make it worthwhile? And what if they're paid at the average wage of c.£3000 per month? Based on these calculations, is there an hourly wage at which jiggling the nozzle becomes more worthwhile than extra time spent working? (A worked version of this arithmetic appears after this post.)

Finally, is there a good scientific reason not associated with money for doing this jiggling? Would these drops of very flammable material present a significant additional fire [or other] risk, in an environment where there is already an inherent significant risk? Does it make a difference that it would be cumulative over time, or does the fast evaporation of petrol nullify this factor?

Thanks all.

The curious scienceandmathsidiot, --Dweller (talk) 10:19, 17 September 2015 (UTC)[reply]
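Taking the question's own figures at face value, the break-even arithmetic is quick. One assumption is needed that the question doesn't supply: the £3,000 monthly salary is converted to an hourly rate using roughly 162.5 working hours per month (37.5-hour weeks):

    # Break-even petrol volume for 10 seconds of nozzle-tapping, using the
    # figures stipulated in the question above.
    price_per_litre = 1.00    # GBP, as stipulated
    tap_seconds = 10

    wages = {
        "minimum wage (GBP 6.50/h)": 6.50,
        "average wage (GBP 3000/month)": 3000 / 162.5,  # assumes 37.5 h/week
    }
    for label, hourly in wages.items():
        lost = hourly * tap_seconds / 3600       # wages "lost" while tapping
        litres = lost / price_per_litre          # petrol needed to break even
        print(f"{label}: GBP {lost:.3f} lost -> {1000 * litres:.0f} ml to break even")

That works out to roughly 18 ml at minimum wage and about 51 ml at the average wage — far more than the few millilitres that realistically cling to a nozzle — so on pure wage arithmetic the tapping never pays for itself, and any real justification has to come from avoiding drips, as the replies below suggest.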

I've long thought that people (who, after all, never actually see petrol) think it's much more viscous than it is (perhaps people imagine it's like ketchup, with blobs of petrol stuck to the inside of the nozzle that must be dislodged). In actuality petrol is about half as viscous as water. Once the pump is off, the only place there could be any is in the local minimum of the hose, so lifting the hose does yield a small amount (I've tested it while filling a can, and gotten about 5 or 10 ml at most from this). -- Finlay McWalterTalk 10:54, 17 September 2015 (UTC)[reply]
As mentioned above, I'm usually pretty lazy about the whole business, but I think I have sometimes seen more than just a few drops fly out of the nozzle as I remove it. --Dweller (talk) 11:22, 17 September 2015 (UTC)[reply]
The economic benefit isn't in conserving that last drop of fuel, but in stopping it getting on your car's bodywork. Gasoline works very well as a paint stripper (and even more so as a wax stripper), and if you let that stray drop run down the side of your car rather than tap it off, your car will streak very quickly, which will cost considerably more to repair than the nominal value of the time you spend shaking off drops. ‑ iridescent 11:02, 17 September 2015 (UTC)[reply]
That is precisely the kind of reply I was hoping for, and why this isn't on the Maths desk, where I was originally going to post it. Thanks. --Dweller (talk) 11:22, 17 September 2015 (UTC)[reply]
Incidentally, Diesel fuel is more viscous than water (roughly double) - and its heavier fractions don't evaporate after a spill. So it makes more sense to shake the Diesel dispenser than the petrol one. -- Finlay McWalterTalk 11:06, 17 September 2015 (UTC)[reply]
There is an error in this question. The questioner makes an observation: people tap the nozzle when removing it from a car. Then, the questioner makes an assumption: people are tapping the nozzle because they want to get every last drop of gas out of the nozzle. Finally, the questioner asks questions based on the assumption being true. What if the assumption is false? I tap the nozzle when removing it from the car, but not because I want to get every last drop from the nozzle. I do it because there is usually a drop or two of gas on the nozzle. As you move it from the car to the pump, they can fall. If you're not very careful, they can land on your leg or foot. Just one drop of gas on your pants and you will have a bad aroma follow you throughout the day. So, it is best to tap a couple of times. It is the same as using a urinal: you want to tap a couple of times so you don't get a drip on the inside of your pants. 209.149.113.66 (talk) 12:48, 17 September 2015 (UTC)[reply]
I never "tap" and it's never yet dripped on my clothes. --Dweller (talk) 14:12, 17 September 2015 (UTC)[reply]
Therefore, nobody else in the entire history of gas pumping has ever tapped to keep from having fuel drop on their clothing. You need to take part in more scientific studies, since your practices are universal for all of humanity. It would really cut out a hell of a lot of time spent observing human behavior. All anyone has to do is ask what you do. 209.149.113.66 (talk) 14:27, 17 September 2015 (UTC)[reply]

Unidentified twig

Can anybody help identify this?

[Photo of the twig.] It came off a tree in Yorkshire, England, but I don't know anything more than that.

It looks to me like a stalk of emerging fruit, but I don't know what; though it makes me think of Magnolia. --ColinFine (talk) 11:14, 17 September 2015 (UTC)[reply]

Looks to me like an acacia bud. How long is it? Ignore that; just realised it's almost certainly mistletoe if it came from a tree. ‑ iridescent 11:20, 17 September 2015 (UTC)[reply]
Hi, @Iridescent. Is that a positive identification, or based on my loose use of 'off'? I have not spoken to the person who found this, but I have no reason to suppose that it is not a scion of the tree, as opposed to an epiphyte or parasite. --ColinFine (talk) 11:52, 17 September 2015 (UTC)[reply]
Not a positive identification, but a "looks likely to be". Do a Google image search on "phoradendron buds" and compare. ‑ iridescent 11:57, 17 September 2015 (UTC)[reply]