
Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 76.104.28.221 (talk) at 19:10, 9 June 2012 (→‎I don't understand how to mount the camera on a barn door tracker: new section). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the science section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

June 4

What is the word for different outcome probabilities?

What is the basic phenomenon called whereby anything in the universe at different locations will have different outcome probabilities? Such that a solid block that melts will disperse into droplets that move in different directions and have different sizes, and not be absolutely uniform? Electron9 (talk) 01:31, 4 June 2012 (UTC)[reply]

Usually a solid block that melts turns into a pool of liquid. Could you make it a bit more clear what you are asking, please? Looie496 (talk) 03:11, 4 June 2012 (UTC)[reply]
Chaos? — kwami (talk) 03:40, 4 June 2012 (UTC)[reply]
See order and disorder (physics). StuRat (talk) 05:43, 4 June 2012 (UTC)[reply]
Second law of thermodynamics or Entropy. Phase space is used to represent all configurations & states a system can have. SkyMachine (++) 07:22, 4 June 2012 (UTC)[reply]

Transit of Earth, as seen from Mars?

I am only a basic amateur in this, so tell me if my reasoning is correct. As all the planets pass in great arcs around the Sun, then each planet must experience eclipse events with all other planets closer to the Sun than it is. (This would be true even if the planets did not move in a plane, as they do in reality.) Thus, Earth has both Transits of Venus and of Mercury. I am arguing that Mars would experience transits of both these, as well as a Transit of Earth. Is this correct?


Similarly, Jupiter would experience transits of Mercury, Venus, Earth and Mars. And Pluto would have transit events of all the planets. Have these events been considered and calculated? Which would be the rarest eclipse, for that is what they are? Would there be total eclipses with some transits, as the Sun appears smaller for the outer planets? When all the moons are taken into account, how many transits and partial / total eclipses are there altogether in our Solar System? What is the rarest and most spectacular? There must be cases where there are simultaneous eclipses involving 4 or more bodies. Myles325a (talk) 08:03, 4 June 2012 (UTC)[reply]

You are not the first person here to have thought about this. Have a look at Transit of Earth from Mars and, more generally, Astronomical transit and the navigation box "Transit visibility from planets superior to the transiting body" near the end of each article. None produces anything like a total eclipse, Jupiter as seen from Saturn being the greatest at 5-6 percent, according to Transit_of_Jupiter_from_outer_planets#Saturn. Thincat (talk) 09:49, 4 June 2012 (UTC)[reply]
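The 5-6 percent figure can be sanity-checked from the ratio of angular sizes. A rough back-of-the-envelope sketch (the radii and orbital distances below are round textbook values, and circular, coplanar orbits are assumed):

```python
# Rough estimate of how much of the Sun's disc Jupiter covers as seen
# from Saturn during a transit. Circular, coplanar orbits assumed;
# radii and distances are round textbook values.

AU = 1.496e8  # km

R_SUN = 695_700        # km, solar radius
R_JUPITER = 69_911     # km, Jupiter's radius
D_JUPITER = 5.20 * AU  # Sun-Jupiter distance
D_SATURN = 9.58 * AU   # Sun-Saturn distance

# Angular radii as seen from Saturn (small-angle approximation).
ang_jupiter = R_JUPITER / (D_SATURN - D_JUPITER)
ang_sun = R_SUN / D_SATURN

# Fraction of the solar disc's area that Jupiter obscures.
obscured = (ang_jupiter / ang_sun) ** 2
print(f"{obscured:.1%}")
```

On these numbers the obscured fraction comes out near 5%, consistent with the quoted 5-6 percent if that figure refers to the fraction of the disc's area rather than its diameter.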
... and according to this, as seen from Earth, there will be a simultaneous transit of Venus and Mercury on 26 July 69163. Thincat (talk) 10:36, 4 June 2012 (UTC)[reply]
I'll be sure to mark my calendar so I don't miss it. :-) StuRat (talk) 00:47, 5 June 2012 (UTC) [reply]
We have a series of articles covering several of the planets: Solar eclipses on Pluto, Solar eclipses on Jupiter, Solar eclipses on Mars, Solar eclipses on Uranus and Solar eclipses on Saturn. Apparently, most of the planets can experience total eclipses from their respective moons -- Ferkelparade π 10:56, 4 June 2012 (UTC)[reply]
Without having done the math or research, I'd expect that a Neptunian transit viewed from Pluto (and its corollary, the Plutonian transit viewed from Neptune) won't occur due to the 2:3 orbital resonance between those bodies and Pluto's high inclination. Specifically, while you'll be able to draw a rough Sun-Neptune-Pluto line on a regular basis, it'll be in pretty much the same spot every orbit and that spot is not likely to coincide with the line where the Sun, Neptune, and Pluto are actually co-planar. — Lomn 14:08, 4 June 2012 (UTC)[reply]

"Seed for one year, weed for seven"

As I struggle this Jubilee weekend to weed, in the rain, the wilderness that counts as my back garden, my partner can be heard from the kitchen quoting an old saying that is meant to spur me into action. In truth it just annoys the bejesus out of me. The old saying "seed for one year, weed for seven": how true is this? How many times do I have to pull up weeds before they finally give up the ghost and decide that they're not welcome in my garden? -- roleplayer 09:34, 4 June 2012 (UTC)[reply]

What's your life expectancy? ←Baseball Bugs What's up, Doc? carrots 09:38, 4 June 2012 (UTC)[reply]
37×10⁹ should cover it. Benyoch ...Don't panic! Don't panic!... (talk) 09:46, 4 June 2012 (UTC)[reply]
I was going to say "6.02×10²³", but then I realized your yard was infested with weeds not moles. DMacks (talk) 09:55, 4 June 2012 (UTC)[reply]
*Facepalm* Oh dear. -- roleplayer 09:59, 4 June 2012 (UTC)[reply]
Bravo! Brammers (talk/c) 13:09, 4 June 2012 (UTC)[reply]
I've never heard of that saying, but it reminds me of Exodus 23:10,11. Plasmic Physics (talk) 13:21, 4 June 2012 (UTC)[reply]
Exodus 23:10,11 is not related in any way except that it mentions the number 6. --169.232.178.111 (talk) 07:39, 6 June 2012 (UTC)[reply]
What exactly does it say in Exodus 23:10,11? Not all of us are Christians (I'm not) - I assume this is from one of the Christian bibles? Wickwack121.221.26.41 (talk) 15:03, 4 June 2012 (UTC)[reply]
Surprisingly, the internet is not just a porn machine. -RunningOnBrains(talk) 15:14, 4 June 2012 (UTC)[reply]
Speak for yourself. Reading Old Testament passages gives me screaming orgasms. Throw-away-account-02783457342 (talk) 16:55, 4 June 2012 (UTC)[reply]
There is one Bible, but many translations, not all coherent. Exodus is one Book contained within the Bible. Plasmic Physics (talk) 23:44, 4 June 2012 (UTC)[reply]
I think the saying is that perennials will only need to be planted every seven years or so, but need to be weeded constantly. I'm not sure the seven is meant to be a scientifically stringent and tested number, just that when you garden, planting happens much less often than weeding. Indeed, the major job of a gardener, above all else, is weeding. The actual planting of the seed is a minor amount of the labor involved in the endeavour. So, again, don't look for a scientifically-proven truth to the aphorism; instead look at it as a general idea behind gardening; when you plant a garden expect most of your time and effort to be spent pulling out undesirable plants. --Jayron32 13:55, 4 June 2012 (UTC)[reply]
The only way I've heard of that saying is in relation to the poppy, not to all seeds. That may put your mind at rest. Or you can learn to love your weeds, which after all are mainly wild flowers in unwanted places :) --TammyMoet (talk) 15:02, 4 June 2012 (UTC)[reply]
I had a friend once who used to say that weeds were just flowers that grow where you don't want them to. Imagine my pleasure when the question of what the difference was between weeds and flowers came up on Simon Mayo's drivetime programme on BBC Radio 2, and the expert they brought on to answer the question basically said what my friend had always said in jest. -- roleplayer 18:04, 4 June 2012 (UTC)[reply]
So, as your friend and the BBC guest dude are correct, all you need to do is convert your interest to the non-preferred flower(s) or plant(s) (aka weed(s)) and promote their growth in lieu of your present choice, and you will be guaranteed a successful albeit useless crop that doesn't need weeding. Please, don't thank me, just send cash. Benyoch ...Don't panic! Don't panic!... (talk) 01:03, 5 June 2012 (UTC)[reply]
My garden has spent the last twelve years of my tenure doing its own thing in spite of any effort on my part to do anything different with it. In some parts of the world ivy is considered rustic and quaint. In my garden it's bloody annoying. And don't get me started on bloody bindweed. The only thing preventing me from taking a flametorch to it is the slim chance I might accidentally burn my own house down. -- roleplayer 01:12, 5 June 2012 (UTC)[reply]
I wonder whether anyone has ever considered genetically engineering a fungus which kills only ivy, with a double kill switch coded into it? Plasmic Physics (talk) 01:21, 5 June 2012 (UTC)[reply]
Some years ago I saw this discussed on Gardeners World on the BBC. It refers to what happens once you leave the weeds to set seed, and the guy reckoned there was a lot of truth in the adage. His explanation was that only about 50% of the weed seeds that drop onto the soil germinate in any one year - the rest are left in the soil. So, assuming you pull up all the new weeds every year, then each year the number of seeds left in the soil will reduce by 50%. After seven years this will reduce the remaining seeds to a negligible amount. He demonstrated this by dividing a pile of seeds by half and then half again and, after doing this seven times, there were indeed very few left. Who says you can't learn from watching television? And as for bindweed - glyphosate gel painted on the leaves is the only answer, I believe. Richerman (talk) 01:51, 5 June 2012 (UTC)[reply]
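The halving argument from the programme can be written out directly. A minimal sketch (the 50% annual germination rate is the gardener's rule of thumb, not a measured constant, and the starting seed count is made up):

```python
# Seed-bank decay under the "half the seeds germinate each year" rule.
# Assumes every seedling is pulled before it can set new seed.
seeds = 10_000  # hypothetical starting seed bank

for year in range(1, 8):
    seeds //= 2  # half germinate (and are weeded out), half stay dormant
    print(f"after year {year}: {seeds} seeds left")

# After seven halvings only (1/2)**7, i.e. under 1% of the original
# seed bank, remains dormant in the soil.
```

The exact numbers don't matter; the point is that repeated halving drives the dormant seed bank toward zero in roughly seven years, matching the adage.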
How does that relate to a genetically engineered fungus? Plasmic Physics (talk) 03:14, 5 June 2012 (UTC)[reply]
ok, I've outdented it - is that better?
An indent on the same level will do, that way it doesn't look like an answer to my comment. Plasmic Physics (talk) 08:47, 5 June 2012 (UTC)[reply]

Body Vs Face

I have been around to many cold places. While we dress for the cold, we cover the entire body below the chin, plus the ears and the head. The only parts that are exposed are the cheeks, nose, eyes and most of the forehead. We do not feel any shivering or extremely cold sensation when these parts are exposed, but when you remove one sweater, you start shivering. Why is that? What is so special about the face? How is it able to thermoregulate so well? — Preceding unsigned comment added by 117.193.137.186 (talk) 12:17, 4 June 2012 (UTC)[reply]

The face is just small, so not much heat is lost through it. Shivering isn't localised; it happens when your core body temperature is too low. It doesn't matter where you are losing heat from, just the total amount of heat you are losing. --Tango (talk) 12:53, 4 June 2012 (UTC)[reply]
Agreed. The face still becomes quite cold much like your feet if you happen to be in a cold basement without shoes or socks on. But you don't shiver then either even though your feet feel very cold to the touch. Core body temp being lowered is what triggers shivering. Dismas|(talk) 12:56, 4 June 2012 (UTC)[reply]

Well, I mean, why doesn't it feel uncomfortable? I am sure you would feel really uncomfortable if your feet were in the basement. — Preceding unsigned comment added by 117.193.137.186 (talk) 13:53, 4 June 2012 (UTC)[reply]

If your nose and your cheeks don't feel uncomfortable when you're out in the cold, then there's something wrong with you :P 109.97.179.91 (talk) 14:56, 4 June 2012 (UTC)[reply]
To a degree. Several mitigating factors apply: (a) It is what you are used to. We don't cover our faces unless conditions are arctic, so we are used to the face being colder. When I was in primary school, we didn't wear shoes - we went barefoot. Winter temperatures routinely got down to +1 or −1 °C, summer +40 °C. But at high school, they compelled us to wear shoes - I've worn them ever since when out of the house. But I still don't sense heat or cold in my feet - if I need to go outside briefly, I don't bother with anything on my feet. (b) As alluded to above, we put on sufficient clothing to keep the whole body comfortable. This means that although the face is a small area, while wearing winter clothing it is the only skin available for the body to regulate temperature with, so more blood flows in facial skin than would be the case if you took a coat or whatever off so heat could be lost elsewhere. So, even though the face is exposed to cold air, the skin is warmer than it would be if you took some clothing off. (c) Humans have been wearing clothing of some sort in cold areas, even if just animal furs, for probably millions of years - plenty of time for any slight advantage in not being bothered by a cold face to act as an evolutionary pressure. Wickwack 121.221.26.41 (talk) 14:58, 4 June 2012 (UTC)[reply]
Interesting. Where and when were you allowed to go to school barefoot ? I'd think you'd have lots of cases of plantar warts and foot injuries. StuRat (talk) 18:51, 4 June 2012 (UTC) [reply]
Australia, semi-rural area, late 1950s. Attitudes were very relaxed back then. Children were allowed to be children and were expected to walk to school. Nowadays, Australia is over-the-top safety conscious like the USA. All school kids are compelled to wear a complete school uniform including proper shoes and a large floppy hat, plus suncream, and parents drive them to school. I'd never heard of plantar warts until your post. I've never had what is shown in the WP article. Foot injuries certainly did occur, but no real problem. It's compensated for by having feet completely and properly grown. Those of us who went barefoot to school went barefoot everywhere else and have broader feet, larger toes, and higher arches. It was always considered a bad idea to go barefoot on farms with animals. Wickwack 23:50, 4 June 2012 — Preceding unsigned comment added by 120.145.46.131 (talk)
On top of blood flow there is also the hot air you exhale which helps to warm your face. I have never heard of anybody having got frostbitten on their nose. --80.112.182.54 (talk) 16:33, 4 June 2012 (UTC)[reply]
Well Medline says it's one of the most vulnerable parts of the body! There are plenty of pictures on Google Images. --TammyMoet (talk) 16:53, 4 June 2012 (UTC)[reply]
Yes, when we have to go out in extremely cold weather we usually cover our faces, too, as far as possible, or wear a hood that keeps the coldest air away from our faces. Because breathing out warms our noses, the problem is usually only serious when there is a strong wind along with the cold temperature. Dbfirs 17:01, 4 June 2012 (UTC)[reply]
The nose is definitely subject to frostbite, as some mountain climbers have learned. That's pretty extreme conditions, though. ←Baseball Bugs What's up, Doc? carrots23:39, 4 June 2012 (UTC)[reply]

In addition to the overall heat loss, there is the concern about localized cooling causing frostbite. I think the face has evolved to have more blood flow, specifically because it's so exposed. Also, the cheeks may be warmed a bit by exhaled air in the mouth. In addition to extremely low temperatures, wind can also play a major factor in frostbite. You may find yourself turning your face away from the wind to prevent that. StuRat (talk) 18:48, 4 June 2012 (UTC)[reply]

Some very good answers already that have covered almost all of this. To focus on one aspect of the original question, I would say that you do feel a cold sensation on the face, but you are accustomed to ignore it (almost certainly by habit, and perhaps by evolution too).

To make that clearer, take a look at the two pics in this BBC news article about British attitudes towards clothing and the cold. In the top pic, the schoolboys are obviously greatly enjoying themselves despite large parts of their bare legs being exposed to icy winds, and snow and ice all around them. In the pic lower down, young ladies enjoy freezing rain on a "night out", despite just about all their legs being exposed. In some times, places, recreations, or professions, it's pretty much de rigueur for ladies to wear extremely short skirts even while their upper bodies are carefully wrapped in warm clothing.

If you dressed me in a short skirt and suggested I go on a "night out" in the middle of winter, I would certainly feel an "extremely cold sensation" in unexpected areas. I'm just not used to such attire. But on the other hand, in recent very cold (by British standards) times, I happily donned a few extra sweaters, coat, hat, scarf and gloves, but didn't think about trying to wear a second pair of trousers or trying to obtain long woollen underwear. (The latter was used extensively by the British during major wars in cold climates when decent shelter was not easily found, but isn't common these days).

Why do I think we're accustomed to ignore cold sensations in certain areas partly by evolution? Well, as others have said, the shivering response is to deal with the core temperature dropping. Cold in your legs, hands or face is not a problem (from an evolutionary perspective) unless either it contributes to a core temperature problem, or it results in frostbite, or it interferes with the function of the body parts in question.

When humans first learned to use animal furs to shield themselves against cold, draping something over the shoulders, hanging down to protect the whole torso (and thus the core temperature), perhaps secured with a simple belt, would have been the first easy step. Making the equivalent of trousers, or decent boots, would take a whole lot more expertise, skill, and time; and useful gloves even more so. (Note that the ancient Greeks, who sometimes gave up on major military operations because the weather was cold and rainy, mostly fought barefoot and considered trousers to be something that women and barbarians wore. In their world, if you're a hoplite and therefore important, you own a cloak and you wrap yourself in it when the weather gets cold.)

So from an evolutionary point of view, the human body perhaps isn't really expecting the face, hands, and maybe knees or feet to be shielded from cold. (Incidentally, the first of those two pics linked above shows that the boy on the ice is well kitted out with hat and gloves, but the one trying to pull him off the ice has neither). The face has to be clear to allow vision (more important than hearing), and covering the nose and mouth not only slightly impedes breathing, but also promotes build-up of damp from the exhaled air. Just like bare legs, humans become used to bare face easily enough. --Demiurge1000 (talk) 23:54, 4 June 2012 (UTC)[reply]

I somewhat agree with Demi here that learning to ignore it is a factor. I grew up in Malaysia, but when I came to Auckland, even during winter I usually just wear a short-sleeve lightweight cotton or polyester T-shirt and long pants. Particularly in Auckland, our winters aren't really that cold (although our houses are potentially colder than in colder areas due to poor insulation), but still cold enough that I think even people from colder areas will often wear some sort of jacket. Going from a warm area (say a restaurant, or under a duvet on a bed) can lead to a bit of discomfort at first, perhaps even some shivering, but usually after a while I don't notice it much. Nil Einne (talk) 08:46, 6 June 2012 (UTC)[reply]

Blood donation as a free medical test?

Question 1: Donated blood is screened for various infectious diseases. When a disease is detected, is the donor contacted? (I realize high-sensitivity screening tests produce a large number of false positives and require specific tests for confirmation.)

If the answer to the above question is "yes", then I have a follow-up question.

Question 2: Please consider the following two sets of infectious diseases:

Set A: Infectious diseases not detected during regular medical check-ups

Set B: Infectious diseases screened for in donated blood

The intersection of Set A and Set B can be screened for "free" by simply donating blood. Of course it's not really free (you're actually paying with your blood), but it's essentially a good deed and a free preventive check-up. Why isn't this more popularized? I see it as an excellent sales pitch for blood donation. Is there any legal, medical, or moral reason against it?
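The set framing above maps directly onto ordinary set operations. A toy sketch (the disease names here are placeholders invented for the example, not a claim about any real screening or check-up panel):

```python
# Hypothetical illustration only: these sets are made up, not a real
# screening panel or check-up panel.
not_in_checkup = {"disease_x", "disease_y", "disease_z"}   # Set A
in_donor_screen = {"disease_y", "disease_z", "disease_w"}  # Set B

# Diseases a donor would learn about only via the donor screen:
free_screening = not_in_checkup & in_donor_screen
print(sorted(free_screening))
```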

Standard disclaimer: I am not asking for medical advice. I do not have any of the above-mentioned infectious diseases. I have not, nor will I in the future, donate blood (needles frighten me terribly). I am not advocating for the abuse of the blood donation system in any shape, way, or form. I'm asking purely out of curiosity after reading WP's excellent article on blood donation. Throw-away-account-02783457342 (talk) 16:51, 4 June 2012 (UTC)[reply]

Our article, which hasn't been modified since May, seems to more or less answer a lot of what you asked, so if you read it I'm not sure why you're asking. For example, it says 'The donor is generally notified of the test result'. Also, you seem to be missing the obvious, which our article also mentions: besides false positives, screening can have false negatives (e.g. if the infection can't be detected yet), even if they are rare. To put it a different way, there's a reason that in many countries there are restrictions on who can donate, even if the extent of these restrictions is sometimes controversial (both of these are mentioned in our article); I think it's rare that people suggest there should be no restrictions. This leads to an obvious reason (which, ironically, our article also mentions in a fashion) why donor agencies would not want 'a free preventive check-up' to be a sales pitch. They don't want to be primarily encouraging people who think they might have an infectious disease to donate. What they want is people who are unlikely to have an infectious disease. So all in all, it doesn't sound like you read the article properly or are thinking this through. Why would it be so difficult for people to get a screening test without having to donate blood? If these tests are fairly expensive, then this has implications for the cost of the blood donation process, so unfortunately that means they're probably not done (another thing our article mentions). So in reality, in most countries the same screening tests are available without donating blood, and no one needs to encourage people to risk infecting recipients by using free screening as a carrot for blood donation. (Another thing our article mentions: many people think blood donation should rely on unpaid volunteers. While it doesn't really discuss the reasons, one of them is the fear that paid donations would encourage donors to lie, or would otherwise attract riskier donors.) Nil Einne (talk) 17:43, 4 June 2012 (UTC)[reply]
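The false-positive caveat raised in the original question can be quantified with Bayes' rule. A sketch with purely illustrative numbers (the prevalence, sensitivity, and specificity below are invented for the example, not real figures for any actual test):

```python
# Positive predictive value of a screening test via Bayes' rule.
# All three inputs are illustrative, not real figures for any test.
prevalence = 0.001    # 0.1% of donors actually infected
sensitivity = 0.995   # P(test positive | infected)
specificity = 0.995   # P(test negative | not infected)

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)

# Probability that a positive screening result reflects real infection.
ppv = true_pos / (true_pos + false_pos)
print(f"P(infected | test positive) = {ppv:.1%}")
```

Even with a test this accurate, the low base rate means most positives are false (here roughly five of every six), which is why screening results are followed by specific confirmatory tests before anyone is told they are infected.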
Thank you for the prompt response. Yes, I read the sentence "The donor is generally notified of the test result". It doesn't sound very certain to me, so I'm asking for clarification. "Generally" has the meaning of "in disregard of specific instances" according to Merriam-Webster, and I'm precisely asking about one of those specific instances, so I thought the general statement might not apply to it.
I used Set A and Set B because I thought it better to have the discussion in the abstract, but now it seems the discussion can't continue until I name the disease. The only disease I'm aware of that's in both Set A and Set B is Hepatitis C. According to the WP article, transmission of Hepatitis C from blood transfusion is 1 in 10,000,000, which is essentially the false negative rate you mentioned. Personally I think 1 in 10,000,000 is negligible and can be left out of the discussion entirely. If I have misinterpreted that statistic (quite likely), or you feel the false negatives are still an issue worth discussing, then please feel free to point it out.
Regarding your points on donor restrictions and "primarily encouraging people who think they might have an infectious disease to donate", I perfectly understand where you're coming from. My initial question sounded very much like suggesting "let's invent a loophole so that people who abuse drugs intravenously and people who engage in risky sexual behaviors can screw the system." Please take my word that that was not my intention at all. I learned in a recent news article[1] that Hepatitis C is quite prevalent in North America and was shocked to find out that it's not commonly screened for in the US. Consequently most carriers of Hepatitis C do not even know they have it, and no amount of annual check-ups will reveal it, until it's too late. Donor restrictions won't have any effect on these silent carriers (since they are completely oblivious of it). Setting aside the people who caught Hepatitis C through injection drug use, the other 40% of Hepatitis C carriers won't even have a reason to suspect they could have caught an infectious disease, so they are definitely not "people who think they might have an infectious disease". Again, I'm not advocating for people lying on their blood donation forms or abusing the blood donation system in any way. I'm saying people who have no reason whatsoever to suspect they have an infectious disease, who have never injected drugs, who have never engaged in risky sexual behavior (just normal folks like you and me), should donate blood. It helps both society and themselves. This group is roughly 2.5 million people in the US.
Regarding your question "Why would it be so difficult for people to get a screening test without having to donate blood?", the answer is that in North America it's almost impossible to get tested for Hepatitis C unless you specifically ask for it. And since most people aren't even aware of the disease, they don't know to ask for it. After all, if their physician, who has years of medical experience, didn't bring it up, it's kind of presumptuous and Chicken Little-ish to say "test me for X" just because they read the WP article on X. Throw-away-account-02783457342 (talk) 18:26, 4 June 2012 (UTC)[reply]
You can't really have complicated policies, and expect the general public to follow them. "Don't donate just to get a test" is managably simple. "Do donate to get tested only if you have no reason to believe that you need to be tested" is too much of a mixed message. I was once deferred from donating blood for a year because I had gotten a rabies vaccination. Now, they consider preemptive vaccinations (like, before travelling) to be fine, but not vaccinations in response to exposure to a potentially-rabid animal (the vaccine is safe, they're just worried that you might be infected still, I guess). However, they don't have an exception for if you later find out that the animal didn't actually have rabies. I assume that the policymakers knew how to formulate a policy that would've worked better if it were followed to the letter, but they were probably also worried about making a policy that people could follow in the first place. Paul (Stansifer) 00:09, 5 June 2012 (UTC)[reply]
You need really to specify what country you're talking about. In the UK potential blood donors are specifically told at http://www.blood.co.uk/can-i-give-blood/donor-health-check/ question 11: Please don’t give blood if you THINK you need a test for HIV or Hepatitis or if you have had sex in the past year with someone you think may be HIV positive or Hepatitis positive. Although the chances of infected blood getting past our screening tests is very small, our tests do not always show if you are infected. This is why we must take care in choosing donors and why you must not give blood if you are infected. We rely on your help and co-operation. Other countries have other rules. The UK system does say it will inform you of positive results. Myself, I gave up blood donation a couple of months ago and took up platelet donation instead (the UK requires either/or). I can recommend it. Tonywalton Talk 00:41, 5 June 2012 (UTC)[reply]
I'm interested in answers all over the world. AFAIK most developed and developing countries screen for Hepatitis C in donated blood, yet none of those countries screen the general population for Hepatitis C. Throw-away-account-02783457342 (talk) 10:59, 5 June 2012 (UTC)[reply]
Just out of interest, I gave a platelet donation today and noted down what it says in the Notes to Donors here in the UK. They test donations (whole blood as well as platelets) for hepatitis B, hepatitis C, HIV, syphilis and HTLV. They also test for such diseases as malaria and West Nile virus if the donor has been in parts of the world where these are prevalent, within certain time frames. If any of these are detected they do inform the donor and "offer advice and support". The caveat above ("Please don't give blood if you THINK you need a test…") is repeated. Tonywalton Talk 21:31, 7 June 2012 (UTC)[reply]

Geologist versus engineer

Which type of scientist do environmental consulting firms hire more of? Would a Hydrogeology concentration in a geology degree make me as desirable to environmental consulting firms like Parametrix as an environmental engineering degree? Thanks a lot. — Preceding unsigned comment added by 99.146.124.35 (talk) 17:29, 4 June 2012 (UTC)[reply]

It's hard to say. If you want to know, call the company directly and ask them what you should study if you want to be considered for a job there or at a similar firm. They will tell you what they are looking for. After all, asking the person who knows the answer directly is more likely to produce the correct answer than asking random strangers on the internet; the odds of finding a hiring manager from the company in question that way are vanishingly small. --Jayron32 18:06, 4 June 2012 (UTC)[reply]

Why didn't anyone try to look for the transit of Venus in the pre-telescopic era?

Geocentric theory presumes the two inner planets to be closer to the Earth than the Sun. They pass the Sun hundreds of times a century. Venus never misses by more than c. 17 sun-widths. Therefore it might have been worth trying to catch one. Maybe all those epicycles didn't predict well enough. Still, every time it looked like it might happen, I would look to see if anything interesting happened. Maybe they assumed that it couldn't be seen against the Sun because its glow would be overpowered by the Sun (it's not obvious even against blue sky, after all), and that it had no size. They would not be looking for a shadow. Still, I'd just wonder if anything magic happens. Very powerful astrology going on. Sagittarian Milky Way (talk) 21:27, 4 June 2012 (UTC)[reply]
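How often a pre-telescopic observer would have had a chance can be estimated from synodic periods, the interval between successive inferior conjunctions, which are the only configurations at which a transit can occur. A sketch using standard sidereal orbital periods:

```python
# Synodic period: time between successive inferior conjunctions of an
# inferior planet as seen from Earth. Sidereal periods in days.
EARTH = 365.256
MERCURY = 87.969
VENUS = 224.701

def synodic(inner, outer=EARTH):
    """1/S = 1/P_inner - 1/P_outer, for an inner planet seen from an outer one."""
    return 1 / (1 / inner - 1 / outer)

print(f"Mercury: inferior conjunction every {synodic(MERCURY):.0f} days")  # ~116
print(f"Venus:   inferior conjunction every {synodic(VENUS):.0f} days")    # ~584
```

So conjunction opportunities come up every few months for Mercury and every year and a half or so for Venus; the catch is that, because of orbital inclination, the vast majority of these conjunctions miss the solar disc entirely, which is why actual transits are so much rarer than conjunctions.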

In fact our article Transit of Venus states that in the Dresden Codex the Mayans charted Venus's full cycle, yet despite their precise knowledge of its course, they never mentioned even its theoretical possibility (I don't know if the Mayans thought it had angular size or not). Sagittarian Milky Way (talk) 22:27, 4 June 2012 (UTC)[reply]

I don't think they would have had the means to calculate when a transit would happen, so they would need to spend a lot of time looking before they saw one. Even then, they would struggle to see it without magnification. The best thing they could look through would be some kind of smoked glass, which wouldn't give a very clear view. --Tango (talk) 21:59, 4 June 2012 (UTC)[reply]
Eh, I saw it at sunrise with no magnification (heck, eyeglasses for nearsightedness even make the view slightly smaller). Still, they only had to look once at sunrise and once at sunset every 0.8 years for... oh, up to 120 years. A 1 in 2 chance of catching it. But there are two of them. Sagittarian Milky Way (talk) 22:17, 4 June 2012 (UTC)[reply]
Were you using blurry smoked glass? --Tango (talk) 23:29, 4 June 2012 (UTC)[reply]
No, luckily it was pretty hazy and it was right at rising over a relatively low horizon. Of course, if you used blurry smoked glass (which you shouldn't do unless the sun is very low), I don't think you could keep it still enough for a glass blemish to fool you. It was so long ago that memory might not be reliable anymore, but now that I think of it, it may have been only an indication, though it was absolutely there. Gray. And I bet the "minification" is only somewhere around 80–90%. Sagittarian Milky Way (talk) 17:53, 5 June 2012 (UTC)[reply]
Maybe it is because they did not have a heliocentric theory of the Sun. The Earth was the center of their universe. Two blobs in the same region of sky may not have excited them. SkyMachine (++) 23:45, 4 June 2012 (UTC)[reply]
The locations of celestial objects definitely did excite people in the geocentric days. That's what astrology is all about. --Tango (talk) 12:48, 5 June 2012 (UTC)[reply]
Maybe they did see it but went blind before they could enter it into historical record. But your answer on the imprecision of calculating exactly when it would occur is most probably the reason no one prior to Horrocks is known to have observed it. It helps to have a reason to observe it which is what Horrocks had, to attempt to work out the geometry of the solar system. SkyMachine (++) 10:10, 6 June 2012 (UTC)[reply]

Why didn't they just put a color filter on the eyepiece?

1673 woodcut illustration of Johannes Hevelius's 8 inch telescope with an open work wood and wire "tube" that had a focal length of 150 feet to limit chromatic aberration.

They made these up to 600 feet long, with a tower to hold the lens and a 600-foot string to keep the eyepiece pointing to the right place. Sagittarian Milky Way (talk) 22:05, 4 June 2012 (UTC)[reply]

A color filter will dramatically reduce the brightness of any image obtained. In modern times, you can simply increase the exposure time to capture an image, but back then it wasn't an option. If you couldn't see the image with your eyes, you couldn't see it. Incidentally, there were high quality color filters back then, in the form of stained glass. Someguy1221 (talk) 23:19, 4 June 2012 (UTC)[reply]
In any case, chromatic aberration cannot be corrected merely by using coloured filters, as the question assumes. If it could, amateur and professional astronomers would not spend considerable sums of money on telescopes with achromatic and (better) apochromatic lenses, which are much more expensive than comparable telescopes without such lenses. While "coloured fringes" are the most obvious symptom of chromatic aberration, it also degrades the entire image. In addition, astronomers often want to see the unaltered colour(s) of what they're looking at, or use very particular colours of filter to increase the contrast of otherwise difficult-to-see details. {The poster formerly known as 87.81.230.195} 90.197.66.109 (talk) 01:33, 5 June 2012 (UTC)[reply]
A good enough colour filter will solve chromatic aberration. If there isn't a range of colours, then they can't be focused differently. The reason they aren't used is because, as Someguy says, it would dramatically reduce the brightness. The less chromatic aberration you want, the narrower a range of colours you need to use, and the dimmer the image would be. --Tango (talk) 12:51, 5 June 2012 (UTC)[reply]
[2] See, they make moon filters with far less transmission than the deepest yellow filter. Can't they at least get something out of seeing in yellow? Or.. tie three telescope tubes together, give them red, green, and blue color filters and use mirrors and/or lenses to join the beams together. Maybe that was too much for the early 18th century. But somehow aerial telescopes were more practical. Sagittarian Milky Way (talk) 18:26, 5 June 2012 (UTC)[reply]
You would have the same difficulty with the mirrors and/or lenses that you are using to join the beams together as you did with the original telescope. You wouldn't get much more brightness either, since you're just getting three narrow parts of the spectrum rather than one, which still adds up to being a very small amount of the spectrum. And you wouldn't get anything close to true colour - you can make pretty much any colour by mixing red, green and blue, but the amounts of red, green and blue you need aren't the same as the amounts in the original image (for example, if there is some yellow in the image, you'll need to replace it with equal parts red and green, which your device wouldn't do). --Tango (talk) 12:17, 6 June 2012 (UTC)[reply]
Then add more tubes and colors, that's still probably more practical and/or cheaper than this. Sagittarian Milky Way (talk) 20:00, 7 June 2012 (UTC)[reply]
The OP seems to be confusing the state of optical science in the 16th–17th centuries with its current state, around 4 centuries later. Those hugely long telescopes (long because of their long-focal-length objectives) were used because at the time they hadn't discovered, or lacked the craftsmanship and/or money to obtain, a better way to minimise chromatic aberration. Compound objective lenses (with components of two different glasses which somewhat cancelled out each other's effects) were one solution; another was the reflecting telescope, invented by optical pioneer Isaac Newton for this very reason. {The poster formerly known as 87.81.230.195} 90.197.66.109 (talk) 21:33, 6 June 2012 (UTC)[reply]
The achromat lens had to wait for the mid-1700s, but the article doesn't seem to mention anyone trying to look at at least the very bright Moon or planets through yellow stained glass (that colour appearing brightest and closest to white due to the luminosity function), or Newtonian reflectors obsoleting 300-foot refractors. Somehow it took a very long time after its invention by Newton for the reflector to be presented to a scientific society, as shown in the article. Maybe they couldn't make reflective enough mirrors back then (but they could still look at the Moon), or parabolic mirrors the size of those aerial telescopes' objectives were much harder to make or silver than sphere-surfaced lenses. [www.stargazing.net/naa/scopemath.htm] If 1–10% of the light was transmitted they could still see 9th–11th magnitude stars with their 8 inch (a magnitude is a 5th root of 100). That's still a lot of stars, more than many binoculars show. The darkest extended objects you could see with a near-perfect telescope are maybe magnitude 22 per square arcsecond of surface brightness. Compared to that, the Moon and the then-discovered planets are about a million to a billion times brighter. So somehow they saw such an unfocused rainbow blur of colors with sane focal ratios, but no one thought of using colored eyepiece glass or something? Sagittarian Milky Way (talk) 20:00, 7 June 2012 (UTC)[reply]
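The 5th-root-of-100 arithmetic in the post above can be checked with a short sketch. The limiting-magnitude formula is a common amateur rule of thumb (treated here as an assumption, roughly the kind of formula the linked stargazing.net page gives), not an exact law:

```python
import math

def limiting_magnitude(aperture_mm):
    # Common rule of thumb: m_lim ≈ 2 + 5*log10(aperture in mm)
    return 2 + 5 * math.log10(aperture_mm)

def magnitude_loss(transmission):
    # One magnitude is a brightness factor of 100**(1/5) ≈ 2.512, so a
    # filter passing fraction T of the light costs 2.5*log10(1/T) magnitudes.
    return 2.5 * math.log10(1 / transmission)

m8 = limiting_magnitude(203)                # 8-inch aperture ≈ 203 mm
print(round(m8, 1))                         # ≈ 13.5
print(round(m8 - magnitude_loss(0.01), 1))  # 1% transmission → ≈ 8.5
print(round(m8 - magnitude_loss(0.10), 1))  # 10% transmission → ≈ 11.0
```

This reproduces the 9th–11th magnitude range claimed above for 1–10% transmission through an 8-inch telescope.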

Peregrine Falcon G-forces

I could use some help figuring out how Peregrine Falcons withstand G-forces when pulling out of a dive. I have been talking about this on the talk page of WikiProject Birds under the section Peregrine Falcon. Please help. Nhog (talk) 16:48, 4 June 2012 (UTC)[reply]

Moved here from the ref desk talk page. ←Baseball Bugs What's up, Doc? carrots22:18, 4 June 2012 (UTC)[reply]
Existing discussion of this subject can be found here, at WP:BIRDS. I suggested that Nhog ask about it here... --Kurt Shaped Box (talk) 22:27, 4 June 2012 (UTC)[reply]

There's surprisingly little information on this topic; I can understand why User:Nhog is having trouble.

I tried a google search on "Peregrine Falcon acceleration" which yielded a couple of barely-relevant hits, but they seem to talk mostly about acceleration and speed, not deceleration.

Do you have an estimate of the actual deceleration, in g?

It may be that the deceleration experienced by the falcon, though seemingly "high" (in other words, substantially greater than 1, or substantially greater than that encountered by any other bird) is not actually high enough to be dangerous. For example, our G-force article says that ordinary (non test pilot) humans can tolerate up to maybe 5g without too much trouble. Is a falcon pulling out of a dive experiencing a lot more than that? (And yes, I know, birds are not humans; I'm just talking rough orders of magnitude here.)

Steve Summit (talk) 01:13, 8 June 2012 (UTC)[reply]

It would not be hard to estimate the maximum g two different ways: (1) measure the trajectory from a video; (2) estimate the maximum force the wing can generate and the mass of the bird (big hint: Cd is unlikely to exceed 1 by any significant margin). A high school student might struggle with the maths involved, or they might not. Greglocock (talk) 02:36, 8 June 2012 (UTC)[reply]

Thanks for all this. I will try asking National Geographic to see if they have anything, and if not maybe they can research it. If you can help please do. Nhog (talk) 18:43, 11 June 2012 (UTC)[reply]
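Greglocock's method (1) can be roughed out with a centripetal-acceleration estimate: a pull-out approximated as a circular arc at speed v and radius r gives a = v²/r. The dive speed and turn radii below are illustrative guesses, not measured falcon data:

```python
# Rough pull-out g-load from a circular-arc trajectory.
G = 9.81  # standard gravity, m/s^2

def pullout_g(speed_ms, radius_m):
    # Centripetal acceleration v^2/r, expressed in multiples of g.
    return speed_ms**2 / (radius_m * G)

print(round(pullout_g(90, 50), 1))   # 90 m/s dive, 50 m radius → ≈ 16.5 g
print(round(pullout_g(90, 150), 1))  # gentler 150 m radius → ≈ 5.5 g
```

The point of the sketch is that the answer is very sensitive to the assumed turn radius, which is exactly what a video measurement would pin down.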

1.15 AU to survive RGB tip

Dr. Schroeder and Smith point out that in order for a planet to survive the tip of the RGB, its current orbit has to be 1.15 AU or greater, because Earth will be swallowed up due to tidal interactions, which basically counteract the sun's loss of gravitational mass; their diagram actually shows Earth being swallowed up when the sun extends to 0.9 AU. If the sun reaches 1.2 AU, then how will a 1.15 AU planet survive the sun's tidal interaction, the thing which slows down a planet's velocity? Did they actually make a mistake in the calculation? Is it supposed to be 1.30 AU, or is 1.15 right? It seems weird and confusing. They never showed us the variables they used.--69.226.45.43 (talk) 22:26, 4 June 2012 (UTC)[reply]

While tidal effects cause the Earth's orbit to decay, constant mass loss from the sun (which will accelerate as it approaches the red giant phase) causes the orbit to expand. The authors' claim is that present-day Earth would need an orbit of 1.15 AU in order to move far enough away from the sun to avoid engulfment. Someguy1221 (talk) 23:04, 4 June 2012 (UTC)[reply]
(Edit conflict; I see that Someguy1221 has answered the main question, but I put too much work into this not to post it dammit! :D)
First off, you are neglecting the fact that the Sun is going to lose a substantial amount of mass as it expands, and so Earth's orbit will also expand in response. However, remember that any scientific discourse about events so far in the future is going to involve speculation and large uncertainties. There are many different factors competing, and science can't say for sure which factors will win out (i.e., whether Earth will be consumed or not).
These factors will serve to expand Earth's orbit:
  1. the Sun entering the Red Giant phase will give off much more intense solar wind, which will impart an outward force
  2. this intense solar wind will result in an overall reduction of the Sun's mass, leading to wider planetary orbits
  3. Yarkovsky effect (though I believe this will be negligible even as the sun/earth distance decreases dramatically)
While these will work to contract Earth's orbit:
  1. Tidal decay
  2. Increased drag as the Sun's corona expands
And there are some factors that will have unknown effects:
  1. The orbits of the terrestrial planets may be unstable on these timescales, leading to possibilities of collision with other planets, switching of orbits, or even escape from the solar system
  2. Unknown unknowns!
We know that there will likely be a net expansion of the orbits of the terrestrial planets, but to what degree remains highly uncertain. So, in general, I really have a problem with any definitive statements being made about an event which is forecast to happen further in the future than the solar system is old. The best answer is "Earth might be consumed by the sun in its Red Giant phase".
However, in response to your original question, they state "any hypothetical planet would require a present-day minimum orbital radius of about 1.15 au" (emphasis mine). Thus they are talking about a planet which would today have a 1.15 AU orbit, and so in the future would have a much larger orbit. -RunningOnBrains(talk) 23:19, 4 June 2012 (UTC)[reply]
Do we know exactly how big the sun will be at the tip of the RGB, and how much mass it will have lost by then? Is the 33% mass loss figure well pinned down, or is it more of a speculative guesstimate? Is it possible that the sun at the tip of the RGB will have lost up to 45% of its mass, which might put Venus's and Earth's orbits at 1.30 and 1.80 AU? Can the sun lose more than 45% of its original mass by the tip of the RGB? Yes, I definitely know the sun's loss of mass will cause the planets' orbits to change. I was wondering how it is possible for Venus to escape destruction: if Venus escapes engulfment by a wider orbit, that would only be 1.08 AU, which is not enough to avoid destruction of the planet; that is too low. Can the sun end up much bigger than 1.2 AU? Can we really pin 1.2 AU down? The site says Mars will most likely survive, not 100% guaranteed, so there is a chance it won't, but clearly better than 50/50.--69.226.45.43 (talk) 00:26, 5 June 2012 (UTC)[reply]
As I alluded to above, there is no way to know such things for certain. We can make estimates based upon theory and models and observed behavior of presumably similar stars, but all that only gets us just that: estimates. We really don't have many direct observations of star radius; all our knowledge of star sizes is based upon estimates of luminosity and other observable factors. Only 3 other stars besides the sun are actually close enough to be resolved by telescopes (see List of stars with resolved images)! It is also important to note that at the apex of the AGB, stars similar to the Sun will undergo extreme variations in temperature and luminosity, which implies that radius will vary greatly as well. So it's in these "pulses" that are mentioned in the article you link above that the Earth will have the greatest possibility of being consumed; and these pulses are likely to be rather chaotic in nature. As I said before; there really is no definite way to know. Perhaps as we get more and better observations of other stars and computer models get better our certainty one way or the other will rise, but I suspect it will never be 100% certain. Well, at least until 8 billion years from now ;) -RunningOnBrains(talk) 02:08, 5 June 2012 (UTC)[reply]
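For what it's worth, the orbit numbers being traded above follow from the usual textbook assumption that mass loss is slow compared to the orbital period (adiabatic), in which case a planet's semi-major axis scales as a_f = a_i · M_i/M_f. The mass-loss fractions below are the ones discussed in this thread, not predictions:

```python
# Adiabatic mass loss: semi-major axis expands as a_f = a_i * M_i / M_f,
# i.e. dividing by the fraction of solar mass remaining.

def expanded_orbit(a_initial_au, mass_loss_fraction):
    return a_initial_au / (1 - mass_loss_fraction)

print(round(expanded_orbit(0.72, 0.33), 2))  # Venus, 33% loss → ≈ 1.07 AU
print(round(expanded_orbit(1.00, 0.33), 2))  # Earth, 33% loss → ≈ 1.49 AU
print(round(expanded_orbit(0.72, 0.45), 2))  # Venus, 45% loss → ≈ 1.31 AU
print(round(expanded_orbit(1.00, 0.45), 2))  # Earth, 45% loss → ≈ 1.82 AU
```

These reproduce the 1.08 AU figure for Venus at 33% loss and the roughly 1.30/1.80 AU figures at 45% loss mentioned above; tidal decay and drag, which this ignores, pull the other way.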


June 5

What is the difference between Julian and Georgian calendars?

Resolved

This is related to a question I asked on the Maths reference desk, but I figure this is more of an astronomy thing. I need to calculate ages for a bunch of people given their ages on certain dates. The dates are old style. As far as I understand, the only difference between the Julian and Georgian calendars is that in the former case there is always a leap year at the turn of the century. So I would figure that as long as both dates are old style, I could use an online calculator somebody has made for the Georgian calendar. However, doing so causes a discrepancy in the results that makes the estimation impossible. I tried to do the estimate for a different person from the one described before, living in a different place, for whom an exact date for all reference points was available. I get the discrepancy exactly on the turn of the century – it is possible to do the estimate with data from before and after that point, but using all data points produces an impossible result, which is off by months, not just one day. (The person was 17 on 10.03.1782, 30 on 30.05.1795, 46 on 04.09.1811, 50 on 17.04.1816, 60 on 02.08.1826; thus the earliest he could have been born is August, but the latest April of the same year.) Is all the data wrong or am I missing something? ~~Xil (talk) 02:04, 5 June 2012 (UTC)[reply]

The use of the Julian calendar with its excessive leap years for so long resulted in a calendar drift of about 1 day every 133 years. This means you need to subtract a particular number of days when moving from old style to new style; see Gregorian calendar#Difference between Gregorian and Julian calendar dates. It wouldn't account for a discrepancy of months, though. From reading the other question, it sounds like a simple case of your data being wonky. :) FiggyBee (talk) 03:43, 5 June 2012 (UTC)[reply]
I said I expected them to be a little off, because I imagined the date might as well refer to when the collected data were written down and submitted to the authority, and the first set I tried had the exact date missing for one reference. However, I did a bit of research – they refer to age on the date of the census, the dates in different settlements are different, and originally the data was double-checked; the whole thing was used for tax collection, so I wouldn't expect it to be that much off (in the first case the first reference was almost a year off from what consistent data suggested, but seemed more realistic given the age of the person in the previous census). Rather, something is wrong with the method of calculation. ~~Xil (talk) 06:53, 5 June 2012 (UTC)[reply]
If your dates cross 1752, then there was the problem with the change in the start of the year from March 25th to January 1st. See Old Style and New Style dates for details. This doesn't account for the error in your example of 1782, unless the recorder was still using old-style dates (common in Quaker records). Dbfirs 07:15, 5 June 2012 (UTC)[reply]
The dates are from Russian Empire, which used old style untill 20th century, the date for the new year, though, apparently was changed in 1700 ~~Xil (talk) 07:57, 5 June 2012 (UTC)[reply]
Just google "julian to gregorian converter" and "distance between two dates". Sagittarian Milky Way (talk) 17:08, 6 June 2012 (UTC)[reply]

Another problem is that the date when the year number changed was not always January 1. For example, in England up to and including 1751 the year number changed on March 25. See Calendar Act of 1750. I don't know if the Russian empire used January 1 as the date to increase the year number during the period you are interested in. Jc3s5h (talk) 17:38, 6 June 2012 (UTC)[reply]

Yeah, it did; they switched to January 1 in 1700, and I don't think it would have any influence on age counting – say you are born on 20th August: you will still have your birthday on the same day even if the new year starts on a different date (when you switch to Georgian, though, your birthday is moved by about two weeks). I am quite at a loss as to what else could be wrong here. I'd expect inconsistencies officials make to be consistent in themselves. Also I find it weird that the problem occurs exactly at the turn of the century, which to me suggests the leap year could be the culprit. I tested the method by seeing if it was possible to work back to my own birthday, which is a Georgian calendar date, and it seemed viable, as I did come up with a six-month period during which I indeed was born. I figure the best way to test it would be to find an exact birth date in someone's vital records and see if I can calculate that, but unfortunately many of the vital records also state age, not birth date, so I haven't yet come across one that will do. ~~Xil (talk) 00:11, 7 June 2012 (UTC)[reply]
Just a word that it's the Gregorian calendar, named after Pope Gregory XIII. There has never yet been a Pope George. -- ♬ Jack of Oz[your turn] 06:13, 7 June 2012 (UTC)[reply]
There's been a John-Paul, though, so we're just waiting on George and Ringo... - Nunh-huh 08:53, 7 June 2012 (UTC)[reply]
I am well aware of that; don't know why I wrote something else :) At any rate, I managed to find a Julian to Gregorian converter, which proved that using Gregorian dates also results in BS. Also, I think I figured out why (I'll explain on the Maths desk), so as far as the calendar is concerned – thanks for confirming what the difference is. ~~Xil (talk) 10:00, 7 June 2012 (UTC)[reply]
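For anyone attempting this by hand, one way to sidestep converter quirks is to go through the Julian Day Number: convert each old-style date to a JDN, then to Gregorian (or just difference the JDNs directly to get ages in days). This sketch uses the standard integer-arithmetic formulas, with the year taken to change on 1 January:

```python
# Old-style (Julian) → new-style (Gregorian) via the Julian Day Number.
# Dates are (year, month, day) tuples.

def julian_to_jdn(year, month, day):
    # Julian-calendar date to Julian Day Number (integer arithmetic).
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - 32083

def jdn_to_gregorian(jdn):
    # Julian Day Number back to a Gregorian-calendar date.
    a = jdn + 32044
    b = (4 * a + 3) // 146097
    c = a - 146097 * b // 4
    d = (4 * c + 3) // 1461
    e = c - 1461 * d // 4
    m = (5 * e + 2) // 153
    day = e - (153 * m + 2) // 5 + 1
    month = m + 3 - 12 * (m // 10)
    year = 100 * b + d - 4800 + m // 10
    return year, month, day

# One of the census dates above, Julian 10 March 1782, in new style:
print(jdn_to_gregorian(julian_to_jdn(1782, 3, 10)))      # → (1782, 3, 21)
# Ages in days are just differences of Julian Day Numbers:
print(julian_to_jdn(1795, 5, 30) - julian_to_jdn(1782, 3, 10))  # → 4829
```

Since both census dates are old style, differencing the JDNs never involves the Gregorian calendar at all, which avoids the turn-of-the-century leap-day trap entirely.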

Tubular lights for DIY lamp

There is a computer game (Deus Ex: Human Revolution) which features a candle stick with an electric light. I rather like it and would like to attempt to create something similar myself. I was wondering what would be suitable to best mimic the sticks of light. Neon tubes? EL tape coiled around a cylinder? --2.120.147.92 (talk) 07:13, 5 June 2012 (UTC)[reply]

Sort of like this? --TammyMoet (talk) 07:44, 5 June 2012 (UTC)[reply]
Good call, Tammy - I think LEDs are the right way to go with this. Another option (note that this is educated guesswork rather than experience) would be to find some clear polycarbonate rods or cylinders, then either sandblast them or use some other form of abrasive medium to scuff the surface so they'll scatter light when lit from below. I suspect a 2 W or 3 W LED would be more than enough, depending on how bright you want it. There's a website called Hackaday which has a lot of people on it who will have all sorts of ideas about how to do the project, and crafting expertise to go with it. Might be worth paying a visit to their forums. Brammers (talk/c) 10:54, 5 June 2012 (UTC)[reply]
If you want a tubular light where the whole cylinder has fairly uniform lighting like in the screenshot (which doesn't look much like a candle to me), instead of something like Tammy showed (which does), you may want to look into people making light saber lookalikes. There are plenty of plans and discussions on how to do it online; I think I even mentioned some on the RD before. Your base will obviously be different (and I presume you don't care about things like extend/retract effects, dueling durability or sound effects [3]), but that's a fairly minor thing to modify. Nil Einne (talk)
Here you go http://www.instructables.com/id/Flickering-LED-Candle-1/ or http://www.instructables.com/id/YAFLC-Yet-Another-Flickering-LED-Candle/ and there are several others on the site too. --TrogWoolley (talk) 18:21, 5 June 2012 (UTC)[reply]
Right, but that's like what Tammy showed, something that really simulates a candle, and not like what's shown in the screenshot the OP provided, for which, as I said, you can get a lot of help from those who make light sabers. The OP hasn't clarified which one they meant (but your reply is indented under my reply). Nil Einne (talk) 08:19, 6 June 2012 (UTC)[reply]
Thanks Nil Einne; that's a great idea. I've checked out some of the tutorials and now I've got good guidance. 2.120.147.92 (talk) 10:16, 6 June 2012 (UTC)[reply]

Growing computer memory using bacteria - Why use gold?

While reading this article, it struck me... Why use gold?

"Next, they imprinted a block of gold with a microscopic chessboard pattern of chemicals. Half the squares contained anchoring points for the protein. The other half were left untreated as controls. They then dipped the gold into a solution containing the protein, allowing it to bind to the treated squares, and dunked the whole lot into a heated solution of iron salts."

Is it because it's not magnetic? It's a good conductor? It lends your project a certain flair? Why? Thanks, Dismas|(talk) 07:56, 5 June 2012 (UTC)[reply]

The best conductors are silver, copper, and gold, in descending order. Both silver and copper have antibacterial properties, so gold would be the practical choice. Gold is also much more corrosion resistant than the other two, so that's probably a plus in the laboratory setting. Anonymous.translator (talk) 11:09, 5 June 2012 (UTC)[reply]
Corrosion resistance is pretty important when you're making microscopic metal structures and then dipping them in an aqueous solution. --Srleffler (talk) 17:20, 5 June 2012 (UTC)[reply]
Thiols bind to gold fairly well, so it's a common trick to use a cysteine residue to anchor a protein to a gold substrate. Not too many other metals have that property without having other chemical effects as well (corroding, excessively reactive with your protein or other chemicals in general, etc.) DMacks (talk) 15:03, 5 June 2012 (UTC)[reply]

Mono- and di-glycerides of fatty acids - Trans fats?

According to some questionable sources found via Google, mono- and di-glycerides of fatty acids (E471) are just a sneaky way of not listing trans fats in the ingredients of certain products. My question is, when MAGs/DAGs are produced from vegetable oils, how do the manufacturers know they aren't producing MAGs/DAGs with a trans-fatty acid side chain? --Markr4 (talk) 12:31, 5 June 2012 (UTC)[reply]

This is more of a legal question than a scientific one, and I haven't yet found where "E471" is defined in a standard. Certainly looking at a wholesaler site, there are different grades [4]. One says "90% stearate", for example, which means much of it is not trans fat. Since it is called "distilled monoglycerides", there should be no deliberate partial hydrogenation over a catalyst... I think. And many have sources like palm oil and tallow which should be nearly saturated to start with. But I don't know how well its quality would be defined or enforced... in theory, a monoglyceride is just a one-chain fat, with no specification of whether it's trans or not. Wnt (talk) 20:33, 6 June 2012 (UTC)[reply]
Thank you for the reply. When I see mono/diglycerides in the list of ingredients of a product, it normally also lists the product as Suitable for Vegetarians, so at least for those products it obviously isn't derived from tallow. Some other sources say that palm oil is expensive, so manufacturers are using MAGs/DAGs derived from cheaper oils which may be less saturated and therefore more prone to being trans fats.
I'm surprised by the lack of information about this though, and I wonder if the plethora of epidemiological studies that look at the effects of fats on heart disease have controlled for MAG/DAG intake... --Markr4 (talk) 14:36, 7 June 2012 (UTC)[reply]
I would expect that the amount of E471 used is far less than the amount of fat in a typical product (it is an emulsifier which has quite different properties from fat). Icek (talk) 19:16, 7 June 2012 (UTC)[reply]

Transit of Venus-inspired question

Today's transit of Venus got me wondering what other astronomical phenomena are coming up in the next few years. Do we happen to have such an article? A Quest For Knowledge (talk) 17:26, 5 June 2012 (UTC)[reply]

The best I can suggest is to start from Category:Astronomical events of the Solar System, and explore the subcategories. Each subcategory has lists within it, such as Lists of solar eclipses. There are so many dozens of these "events" that having a master list of everything would get cumbersome. Between conjunctions, syzygys, eclipses, transits, etc. etc. there's probably something "interesting" monthly. Sadly, Jack Horkheimer has passed; he had an excellent weekly TV program that aired on PBS that did a great job of highlighting exciting and interesting astronomical events. The show still exists, but without Jack it has lost much of its excitement (IMHO). See Star Gazers. If you don't have access to PBS on your TV, you may be able to find episodes online. --Jayron32 17:38, 5 June 2012 (UTC)[reply]
Hmmm...List of astronomical phenomena visible from Earth might not be unwieldy as long as the inclusion criteria include:
  1. Must be visible from Earth with the naked eye.
  2. Must be notable on its own. A Quest For Knowledge (talk) 17:48, 5 June 2012 (UTC)[reply]
The first Google hit on "Astronomical events" is http://www.seasky.org/astronomy/astronomy-calendar-2013.html. There are pages for all years from 2010 to 2020. PrimeHunter (talk) 17:50, 5 June 2012 (UTC)[reply]
The Sky at Night is a similar TV programme in the UK. It's monthly and each episode includes a segment on naked eye/binocular astronomy in the next month. --Tango (talk) 18:35, 5 June 2012 (UTC)[reply]


Occultation of Regulus by asteroid 163 Erigone on March 20, 2014, visible near New York. Count Iblis (talk) 19:32, 5 June 2012 (UTC)[reply]


Supernova explosion of Betelgeuse:

"The explosion will be so bright that even though the star in the Orion constellation is 640 light-years away, it will still turn night into day and appear like there are two suns in the sky for a few weeks."

Count Iblis (talk) 19:42, 5 June 2012 (UTC)[reply]

This slide show was in today's Telegraph: Only goes up to 2014 though. --TammyMoet (talk) 19:54, 5 June 2012 (UTC)[reply]
I was not aware of this until recently, but while comet orbits are well known after their discovery, whether a comet will be barely visible or spectacular even during the day is wildly unpredictable. Comet Kohoutek in the 70s was touted as the greatest astronomical spectacle of my parents' generation, but it turned out to be quite a dud; while Comet McNaught (C/2006 P1) caught people quite by surprise and became the brightest comet in 40 years (unfortunately, not visible to those of us in the Northern Hemisphere). So heck, there could be the greatest comet in modern history visible next month for all we know! -RunningOnBrains(talk) 01:00, 6 June 2012 (UTC)[reply]
Actually it was if you knew where to look. I saw its braided tail above the southern horizon at twilight in Coventry, UK. Granted it was faint, but I saw it for at least a week before it finally disappeared. A stunning sight. There is a photo of the phenomenon taken from Switzerland in the gallery in the article. --TammyMoet (talk) 09:32, 6 June 2012 (UTC)[reply]

Particle in a box

Let's say under Quantum Mechanics you have a particle in a box. Let us say that at some point in time you measure the momentum of the particle to be p1. Obviously, the wavefunction has collapsed to this state of definite momentum. Now, let us wait a sufficiently long time until the wavefunction has evolved such that there are nonzero probabilities of measuring other momenta. Now, let us say we measure a new momentum p2. Why does this not violate the notions of conservation of energy and/or conservation of momentum? — Trevor K. — 18:30, 5 June 2012 (UTC) — Preceding unsigned comment added by Yakeyglee (talkcontribs)

Momentum, like any other conserved quantity, commutes with the Hamiltonian, so the result doesn't change under time evolution. Count Iblis (talk) 19:28, 5 June 2012 (UTC)[reply]
Two points, which are really saying the same thing: (1) If you know the particle is in the box then the uncertainty in its position is finite, and so Heisenberg's uncertainty principle says that the uncertainty in its momentum must be non-zero. (2) How do you propose to measure its momentum with exact precision without disturbing it ? Gandalf61 (talk) 11:48, 6 June 2012 (UTC)[reply]
This is a trickier question than I thought at first. Momentum is not conserved in the particle in a box. The particle's momentum changes when it bounces off the walls and there's no corresponding change in the momentum of anything else. There are no states of definite momentum, either (as Gandalf said, that would violate the uncertainty principle). So there's no way to measure the momentum exactly in the first place, and no reason to expect the same answer when you try again later. -- BenRG (talk) 21:21, 6 June 2012 (UTC)[reply]
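BenRG's point about there being no momentum eigenstates can be made concrete with the standard textbook solutions for the infinite square well of width L:

```latex
% Stationary (energy) eigenstates of the infinite square well:
\psi_n(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right),
\qquad
E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}, \qquad n = 1, 2, 3, \ldots
% Each \psi_n is an equal-weight superposition of the momenta
% +n\pi\hbar/L and -n\pi\hbar/L, so it is an energy eigenstate but not
% a momentum eigenstate: once the wall potential is included in the
% Hamiltonian, [\hat{p}, \hat{H}] \neq 0 and momentum is not conserved.
```

So a momentum measurement can only ever localize the particle's momentum to within the uncertainty-principle bound, and repeated measurements need not agree.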

Why does a power station need cooling?

Whereas a steam locomotive doesn't? (Possibly related side question: why isn't the steam coming from a locomotive or power station used to pre-heat the water, to save coal?) Joepnl (talk) 22:27, 5 June 2012 (UTC)[reply]

To the 1st question, I suppose it's just a factor of power output. If a steam locomotive could generate hundreds of megawatts, I imagine it would also need cooling. To the 2nd question, steam is most certainly used to pre-heat the water in power plants and locomotives using a Feedwater heater. Vespine (talk) 22:52, 5 June 2012 (UTC)[reply]
A steam locomotive does need constant cooling. That's why there were water towers at train stations back in the day. The water turns into steam and carries the excess heat off, while doing useful work. Anonymous.translator (talk) 22:57, 5 June 2012 (UTC)[reply]
A steam locomotive doesn't need cooling because it's an open cycle steam engine, and the power increase from passing the exhaust steam through a condenser isn't enough to offset the increased weight and complexity. (There were condensing steam locomotives, but they tended to be used only in unusual conditions). I'm not aware of any locomotives that pre-heated the feedwater, rather, the steam was sent through a blastpipe to greatly increase the draft (and thus the combustion effectiveness) of the firebox. --Carnildo (talk) 23:31, 5 June 2012 (UTC)[reply]
Our feedwater heater article discusses its use on locomotives. DMacks (talk) 00:07, 6 June 2012 (UTC)[reply]
Steam locomotive has lots of detail on the operation of those machines. The water towers along the tracks were for refilling the water tanks on the train, not for cooling as such. Where there was a gap, towns were constructed specifically to provide water for the locomotives, hence the term "tank town" for any small town. ←Baseball Bugs What's up, Doc? carrots23:53, 5 June 2012 (UTC)[reply]
Well, in a way, they were for cooling, in that cool water was taken in and hot steam was let off while the engine ran. The only way to avoid this cooling effect would be to recondense the steam, which, as previously noted, was rarely done in locomotive steam engines. StuRat (talk) 01:36, 6 June 2012 (UTC)[reply]
That's possible. In any case, the tank towns' primary purpose was to resupply the locomotives with water. ←Baseball Bugs What's up, Doc? carrots02:38, 6 June 2012 (UTC)[reply]
Right, but it's really the same thing, in that the reason they needed to resupply water is that the old water got hot and boiled off. StuRat (talk) 03:14, 6 June 2012 (UTC)[reply]
I reckon that the reason is that it is easier to control internal temperature for a steam locomotive than it is for a power station. It is a matter of scaling laws. A larger engine takes more time to change its temperature in response to a change in fuel. Plasmic Physics (talk) 03:25, 6 June 2012 (UTC)[reply]
Similarly, which melts faster: a one ton ice block, or a ton's worth of ice cubes spread out? Plasmic Physics (talk) 03:27, 6 June 2012 (UTC)[reply]
When Plasmic Physics and StuRat cook their vegetables in a saucepan or steamer (http://en.wikipedia.org/wiki/Steamer_(appliance)), they must think the water is for cooling the said vegetables, unlike the rest of us. Wickwack120.145.177.252 (talk) 08:11, 6 June 2012 (UTC)[reply]
What are you retorting about? Plasmic Physics (talk) 09:21, 6 June 2012 (UTC)[reply]
The water in the pan cools the pan, not the vegetables. The vegetables cool the water (if you put cold potatoes into boiling water, it stops boiling for a while). If you had an empty pan on a hot ring, it would become red-hot like the ring is. The water keeps the (inner surface of) the bottom of the pan down to 100°C (or a bit more if you've added salt) until it boils away. So the water cools the boiler, just like the water in a steam train cools the furnace by turning to steam. TrohannyEoin (talk) 11:46, 6 June 2012 (UTC)[reply]
So what? That is not why water is used (water is used in cooking vegetables as it conducts the heat to the vegies), and does not imply you need to have a cooling system. It has nothing to do with the OP's question. Water is NOT used in a steam engine to cool the furnace - if you want a cool furnace, don't light the coal. Rather, the purpose of the water is to be heated so it is steam and is thereby a usable (expandable and compressible) working fluid. Wickwack58.164.238.58 (talk) 12:20, 6 June 2012 (UTC)[reply]
That is exactly what I said, a power station needs a cooling system to actively regulate the temperature. A locomotive doesn't, because of its size it responds much faster to simply changing the amount of fuel consumed. Plasmic Physics (talk) 12:26, 6 June 2012 (UTC)[reply]
The temperature in the boiler of a power station is regulated by the cooling tower? Sounds rather indirect. I would say that the cooling tower is needed as part of the equipment to help condense the steam for re-use in the boiler. Several components operate as a system to achieve the desired boiler temperature. Steam locomotive boilers and combustion chambers must be rugged and lightweight, and relatively small, and as a result are way less efficient than steam power plants. Steam locomotives often had once through use of the water, necessitating water towers every so many miles. To avoid stopping at the water tower in a "tank town," railroads in the 1870's introduced the use of a track pan between the tracks and a scoop which could be lowered to force water up into the tender, giving rise to the term "jerkwater town." It's a place where the train doesn't even stop for water. My copy of the OED states (incorrectly) that the term "jerkwater" meant the engine would stop and the crew would "jerk" water from a stream with a bucket to fill the tender's water tank. Edison (talk) 14:55, 6 June 2012 (UTC)[reply]
The purpose of a power station's cooling tower is to dissipate heat from the condenser; the condenser, in turn, is used to create a vacuum at the exit of the turbine which increases the efficiency of said turbine. A condenser with cooling system is a big, bulky thing (one sized for a steam locomotive is about the size and weight of a train car), so trains didn't use them often, and simply accepted the lower efficiency that resulted. Regulating the temperature of the boiler in both steam locomotives and power stations is done through the simple fact that unpressurized boiling water cannot be hotter than 100C: so long as the boiler tubes are completely immersed in water, the boiler cannot overheat. --Carnildo (talk) 00:09, 7 June 2012 (UTC)[reply]
Locomotives even in the 19th century used temperatures above 100C in a pressurized boiler. 100 pounds per square inch, corresponding to 338 degrees F, was typical for early locomotive boilers in the US, and 50 PSI in Britain, corresponding to 298 degrees F. The boiling water and steam in the boiler have tremendous potential energy. It was not just the equivalent of an open kettle of boiling water. High pressure is required for the steam to be useful in pushing the piston and making the engine go. A power plant is likely to operate at 1000 F, not 212F. Edison (talk) 04:39, 7 June 2012 (UTC)[reply]
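Edison's boiler figures check out against the Antoine vapour-pressure equation for water. A quick sketch (the Antoine constants are standard handbook values, an assumption on my part, and the quoted boiler pressures are taken as gauge pressures):

```python
# Saturation temperature of water at boiler pressure, via the Antoine equation
# in its high-temperature range (~100-374 C). Constants are standard handbook
# values, not figures from this thread.
import math

def water_boiling_point_c(p_mmhg):
    A, B, C = 8.14019, 1810.94, 244.485
    return B / (A - math.log10(p_mmhg)) - C

PSI_TO_MMHG = 51.715
ATM_PSI = 14.696  # gauge pressure + one atmosphere = absolute pressure

def boiler_temp_f(gauge_psi):
    t_c = water_boiling_point_c((gauge_psi + ATM_PSI) * PSI_TO_MMHG)
    return t_c * 9 / 5 + 32

print(round(boiler_temp_f(100)))  # ~338 F, the early US figure quoted above
print(round(boiler_temp_f(50)))   # ~298 F, the early British figure
```

The higher the boiler pressure, the hotter the water can get before boiling, which is exactly why a pressurized boiler is not "just an open kettle".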
In both the steam locomotive engine and boiling vegetables examples, water cooling is an essential part of the process, just not a part we normally think about. Without it, the engine and pot would get too hot and be damaged (having left a pot on too long, I can attest to the damage that causes). StuRat (talk) 15:09, 6 June 2012 (UTC)[reply]
Water cooling is not an essential part of the process. Yes, you can damage a pot if you boil it dry, but then the food won't be cooking at that stage either. Water is essential to the process of cooking vegies as it is the medium that conducts heat from the pot to the vegies. You could make a pot out of tungsten (melting point ~3400 C) and use it dry. The stove heat won't hurt it, but with no water, the food won't cook either. Or, you could use a temperature sensor in a feedback system (some modern stoves have this) to prevent the pot from being overheated, even if dry. But, again, the food won't cook without the water to conduct the heat to it. Neither is water used in a steam engine to cool it. It's used because it is a convenient low cost compressible and expandable working fluid when heated. Wickwack58.164.238.58 (talk) 16:09, 6 June 2012 (UTC)[reply]
The food would be cooked without water, but it would be cooked entirely too much, into cinders, on the bottom. The water is used to redistribute heat to cook the top of the food more than it would get without water, and to COOL the pan to a temp that won't scorch the bottom. (This is what redistribution of heat is, cooling some areas and heating up others.) StuRat (talk) 23:03, 6 June 2012 (UTC)[reply]
Regarding a steam locomotive take a look at Steam engine safety. If the water level in the boiler dropped too low the burner would melt or weaken the boiler and it would explode. The water in the steam engine's boiler was both the working fluid for the engine and also the coolant. The energy required to vaporize the fluid prevented the boiler from going above the melting point of the metal. You could build the pressure vessel and burner out of tungsten, but if your burner temperature exceeded the melting point of tungsten, you would have to have a working fluid to cool it. Otherwise your boiler would melt. Of course if your burner temperature is too low to melt or even weaken the tungsten, then your working fluid is only needed to run the engine. The cooling for a steam train came as the steam was used to work the engine and when the steam was vented into the atmosphere or was re-condensed. Tobyc75 (talk) 23:49, 6 June 2012 (UTC)[reply]
A nitpick; the danger is not that the boiler melts, but that it expands unevenly to the point that it buckles and ruptures. FiggyBee (talk) 00:33, 7 June 2012 (UTC)[reply]
The failure mode called "crown sheet failure" happened when the fireman let the water level drop so much that there was not a layer of water on top of the top sheet of the firebox. Then the steam above it would not adequately cool it, and it would soften and be pushed downward by the steam so that it opened, separating from the stay bolts which connected the sheet to the top of the boiler, and the heated boiler contents would flash into steam through the firebox into the cab, scalding the crew to death in a most horrible way. This could also happen in a crash, when mechanical stress caused a boiler failure. Another cause was an accumulation of scale on top of the crown sheet. Steam locomotives were high maintenance devices, with frequent internal inspections of the boiler recommended. Edison (talk) 04:55, 7 June 2012 (UTC)[reply]
I agree with StuRat about boiling food. Boiling is not the only way to cook food: you can steam veggies, or grill them, or sauté them in a pan (e.g. onions). A bit of liquid for thermal contact is useful, but a tablespoon of oil in a frying pan is more than sufficient. Boiling is used as a cooking technique when precise temperature control is desired: the boiling water maintains a very accurate 100°C temperature throughout the pot by actively cooling any part of the pot that exceeds the boiling temperature (conversion of water to steam is of course endothermic). This makes it easy to cook the vegetables without burning them, and doesn't require as much attention from the cook as other cooking methods would.--Srleffler (talk) 17:15, 8 June 2012 (UTC)[reply]
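The reason boiling water pins the temperature so effectively comes down to latent heat. Some rough arithmetic with standard handbook values (my own illustration, not figures from the thread):

```python
# Why a pot of boiling water is such an effective temperature clamp:
# vaporizing water absorbs far more energy than merely heating it.
C_P = 4.186      # kJ/(kg*K), specific heat of liquid water (handbook value)
L_VAP = 2257.0   # kJ/kg, latent heat of vaporization at 100 C (handbook value)

heat_20_to_100 = C_P * (100 - 20)   # ~335 kJ to warm 1 kg from 20 C to boiling
boil_away = L_VAP                   # ~2257 kJ to then boil that 1 kg away

ratio = boil_away / heat_20_to_100
print(round(ratio, 1))  # ~6.7x: the pot lingers at exactly 100 C for a long time
```

The same latent heat is what makes the boiler water an effective heat sink for the firebox crown sheet, as discussed above.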
  • Surface condenser has the answer: "In thermal power plants, the primary purpose of a surface condenser is to condense the exhaust steam from a steam turbine to obtain maximum efficiency and also to convert the turbine exhaust steam into pure water (referred to as steam condensate) so that it may be reused in the steam generator or boiler as boiler feed water.
    The difference between the heat of steam per unit mass at the inlet to the turbine and the heat of steam per unit mass at the outlet to the turbine represents the heat which is converted to mechanical power. Therefore, the more the conversion of heat per pound or kilogram of steam to mechanical power in the turbine, the better is its efficiency. By condensing the exhaust steam of a turbine at a pressure below atmospheric pressure, the steam pressure drop between the inlet and exhaust of the turbine is increased, which increases the amount of heat available for conversion to mechanical power. Most of the heat liberated due to condensation of the exhaust steam is carried away by the cooling medium (water or air) used by the surface condenser."
    When James Watt invented the separate condenser, the main point was that this increased the efficiency of steam engines, and made them more practical. It also reduced the need for water – something that was less of an issue in railway locomotives which could take on water while running. See Steam locomotive#Condensers and water re-supply and Condensing steam locomotive. dave souza, talk 16:37, 6 June 2012 (UTC)[reply]
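The efficiency gain from condensing below atmospheric pressure can be bounded with a back-of-envelope Carnot calculation. The temperatures below are illustrative assumptions on my part, not plant data:

```python
# Ideal (Carnot) efficiency bound for a heat engine between two temperatures,
# showing why condensing the exhaust below 100 C helps.
def carnot_efficiency(t_hot_c, t_cold_c):
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15  # convert to kelvin
    return 1.0 - t_cold / t_hot

open_cycle = carnot_efficiency(540, 100)   # exhaust vented at atmospheric pressure
condensing = carnot_efficiency(540, 40)    # exhaust condensed under vacuum

print(f"{open_cycle:.1%} vs {condensing:.1%}")
```

Real Rankine-cycle plants fall well short of these ideal figures, but the direction of the effect is the same: a lower condenser temperature (and hence pressure) increases the heat available for conversion to work.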


The OP posted under the title "Why does a power station need cooling?" and then posted "Whereas a steam locomotive doesn't?" and "why isn't the steam coming from a locomotive or power station used to pre-heat the water?" This is two questions: 1) why do power stations have cooling (as in cooling towers) and not locomotives? and 2) why don't locomotives use (waste) steam to preheat the water (before boiling it). Nobody has answered these specific questions. Carnildo has a clue, and Dave Souza has some good understanding. Plasmic Physics has no idea - perhaps he's seen a photo of a loco, but he doesn't know how they work. StuRat has got himself muddled. Here are some on-target answers:-
(1) The reason why power stations have cooling and locos generally don't stems from the fact that power stations stay in one place, and locos move about on rails, which sometimes go round curves. Power stations have cooling systems (condensers), feed water preheating, combustion air preheating, and all manner of extra bits of hardware, each adding little bits of efficiency, and in some cases improving the service life. It doesn't matter if all this hardware takes up space and has weight - it's all sitting on solid foundations in a big parcel of land, and doesn't move.
The worth of a railway loco is in pulling carriages or wagons i.e., its drawbar pull. Maximum drawbar pull is a simple function of the driving wheel ratio (ratio of wheel diameter to crank pin offset, which sets the stroke), the piston diameter, the steam pressure, and the number of pistons. To support this drawbar pull, there must be sufficient weight on the driving wheels, otherwise they will spin/slip on the rails.
It happens that all the hardware to supply this steam pressure, at an adequate steam flow rate, is generally much greater in weight than the weight required to prevent slip. If you look at a photo of a typical loco, you see a number of small wheels, used to help support the weight, that aren't driving wheels. A loco is confined to a small frontal area, to fit on the track/road width, fit under bridges, etc. You could make it longer, but you can have no more than 4 or 5 pairs of driving wheels, or it won't go round the bends. It follows from all this that weight in a railway loco must be kept to a minimum, and its length must not be too long. It would be nice to add condensers, preheaters and all the other tricks, but they will add weight & take up space. More weight in the loco (or its tender(s)) means less weight in carriages or wagons it can pull. Experience has found that it's best to keep it simple.
(2) The reason locos generally don't preheat the water is, because of the need to keep it simple and keep weight and length down, the onboard water storage and the boiler vessel is the same unit. To preheat water before it enters the boiler vessel, you need to store the water in a separate tank. Separate tank locos (called "tank engines") do exist for special requirements (eg working long routes or routes where water is not available) where reduced drawbar pull must be accepted. There must be a pump to force water from the tank into the boiler vessel, which of course must be under the full working pressure. At a power station, all pumps can be driven by electric motors connected to the station auxiliary bus, at a net thermodynamic efficiency of ~40%. On a loco, any pump must be driven from either the driving wheels (thermodynamic efficiency around 12% at best), or a separate auxiliary steam engine (max efficiency perhaps 20%).
Incidentally, to use other efficiency-improving devices like condensers, you need pumps.
It should be noted that power stations generally operate 24 hours a day, 7 days a week. This forces the use of continuous feed arrangements - water is continually pumped into the boiler, air continually forced into the furnace, lube oil continually pumped through bearings. So, you've got pumps anyway, might as well use some of them to help with efficiency. Railway locos must run as a batch process - stock up with coal and water, replenish bearing cups with oil, and go on the day's trip (a few hours), then some down time. So, pumps aren't strictly necessary. Particularly in base-load power stations, it doesn't matter if it takes the best part of the day to get all the equipment, condensers, tanks, vessels, up to temperature and pressure - you only start a power station from cold under exceptional circumstances. But with a locomotive, you do it every day. It takes a while to get the coal burning properly, raise steam, and warm up the cylinders, as it is.
It should also be noted that a railway loco has a crew of only two - the driver (USA term: engineer) and the fireman. Both have plenty to do without being expected to monitor extra equipment. You don't want more crew. Apart from their wages, you would have to increase the size of the loco to make room for them. In a power station, more staff to manage all the extra bits of efficiency adding equipment can be justified, and there's heaps of room for them.
For a very interesting and well informed layman's view of a multitude of bright, stupid, dumb, and outright bizarre ideas that have been tried in railway steam locomotives, see http://douglas-self.com/MUSEUM/LOCOLOCO/locoloco.htm
Wickwack120.145.54.86 (talk) 02:24, 7 June 2012 (UTC)[reply]
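The "simple function" of drawbar pull mentioned in point (1) above is conventionally written as the starting tractive-effort formula for a two-cylinder simple locomotive, TE = c·p·d²·s/D. A sketch with made-up example dimensions (not any particular locomotive):

```python
# Starting tractive effort for a two-cylinder simple locomotive:
# TE = c * p * d^2 * s / D, with boiler pressure p (psi), cylinder bore d (in),
# stroke s (in), driving-wheel diameter D (in), and c ~ 0.85 allowing for
# pressure drop between boiler and cylinders.
def tractive_effort_lbf(boiler_psi, bore_in, stroke_in, driver_dia_in, c=0.85):
    return c * boiler_psi * bore_in**2 * stroke_in / driver_dia_in

# Hypothetical example: 200 psi boiler, 22 in bore, 28 in stroke, 69 in drivers.
te = tractive_effort_lbf(boiler_psi=200, bore_in=22, stroke_in=28,
                         driver_dia_in=69)

# Rule-of-thumb factor of adhesion of ~4: weight that must rest on the driving
# wheels to avoid wheelslip, which is why drawbar pull demands adhesive weight.
weight_on_drivers_lb = 4 * te

print(round(te), round(weight_on_drivers_lb))  # ~33,400 lbf and ~134,000 lb
```

The factor-of-adhesion line shows the trade-off described above: every extra pound of tractive effort demands roughly four pounds of weight on the drivers, so non-essential hardware directly eats into what the loco can haul.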
Can I just say, on your point two, WHAT THE HELL ARE YOU TALKING ABOUT? Railway locomotives most certainly do not carry all their available water in their boiler; in fact, they don't "use up" water in the boiler at all; it has to be kept at a fairly constant level for efficient (not too high) and safe (not too low) operation. Tank engines (which carry additional water in tanks around the boiler) have a much smaller additional water capacity (and hence a much shorter range) than tender engines, which typically drag around thousands of gallons of extra water in a trailer for that purpose. The transfer of water from these storage tanks to the boiler did indeed require a complex pump in the early days of railways, but from the mid 19th century solid-state injectors were universal. Additionally, feedwater heaters were quite common on steam locomotives in the 20th century, with the ultimate development being the Franco-Crosti boiler. FiggyBee (talk) 03:10, 7 June 2012 (UTC)[reply]
As I said, all manner of ideas have been tried at one time or another. But the bulk of locomotives don't use them, again for reasons I gave. Your argument is like saying all gasoline-powered motor cars should have superchargers and turbochargers, because a) it has been tried, and b) it can be shown theoretically that their use can give greater efficiency. But very few gasoline-engine cars have been made with super- and turbochargers, because, like railway locos, it's best to keep them simple. Tank engines were used where special circumstances required it - they were definitely NOT typical. Obviously, you can't preheat water unless the heater is somewhere between the source of water and the boiler vessel - that means either a tank engine (or the Franco-Crosti type which is a sort of tank engine with a high-pressure heated tank) or a condenser engine - neither is typical. Implicit in the OP's question (essentially why wasn't it done) is why wasn't it commonly done, not can it be done, nor has it been done. Stay focused on answering the question. As far as I can gather, there have been a handful of F-C types built, as against thousands upon thousands of the conventional type. Wickwack120.145.54.86 (talk) 03:26, 7 June 2012 (UTC)[reply]
No, the reasons you gave were that locomotives don't carry a separate feedwater stock, which is flat-out wrong; 100% of every water-boiling steam locomotive ever built carried a separate feedwater stock. Feedwater heating was usually accomplished by heating the water shortly before it was injected into the boiler. If you've ever seen a photo of a steam locomotive with a mass of pipework over the top of the firebox or going from the back all the way to the front, that's a feedwater heater. They were not universal, but they were not uncommon. FiggyBee (talk) 03:34, 7 June 2012 (UTC)[reply]
In case you're not getting it, look at this photo: File:PRR_K5_5698.jpg. See the thing with "Pennsylvania" written on the side? That thing is full of water (also, see the bump with the pipe coming out of it behind the chimney at the front? That is part of the preheater). The answer to "why wasn't it done" is "it was". The answer to "why wasn't it always done" is "in some times and places it was cheaper to run less efficient locomotives than to do more maintenance". The answer to no question is "because locomotives don't carry a feedwater source separate from the boiler", because that is nonsense. FiggyBee (talk) 03:49, 7 June 2012 (UTC)[reply]
Nah, I got it all right, as soon as I looked at http://en.wikipedia.org/wiki/Tender_(rail), while you were typing your last post. Yep, I certainly stuffed up answer (2). Not sure about how common feed water heaters were though. I have technical drawings of a few 1940's locos and feedwater heaters are not shown. The WP article on steam locos mentions them but the schematic http://en.wikipedia.org/wiki/File:Steam_locomotive_scheme_new.png does not mention them. Wickwack120.145.54.86 (talk) 04:10, 7 June 2012 (UTC)[reply]
As for the comments about using tungsten tanks/pots, that might well eliminate the need for cooling, but is, of course, prohibitively expensive. StuRat (talk) 04:48, 7 June 2012 (UTC)[reply]
And heavy! FiggyBee (talk) 05:03, 7 June 2012 (UTC)[reply]
And won't eliminate the need for water - because the water is NOT there for cooling - it's there to transfer the heat to the food (cooking) or to be the expansion medium (engines). Wickwack124.178.36.4 (talk) 08:17, 7 June 2012 (UTC)[reply]
Tank engines merely carry their water on the locomotive chassis (normally in boxlike tanks beside the boiler but sometimes in a 'saddle' tank over the boiler) rather than in a towed tender. There is no other distinction. Hayttom (talk) 15:19, 7 June 2012 (UTC)[reply]
Somehow the work of pumping in water to the boiler to replace that used by steam is far less than the work obtained from the steam as it pushes the piston or drives a turbine. I suppose the small volume of water pumped in against the boiler pressure is less work than the large volume of steam pushing the piston at the same pressure. Early engines might have used a steam engine driven from the boiler to pump in water. A pump which operated only when the wheels were turning would have been a disaster (boiler explosion when the water level dropped if the engine sat stationary for a while). Was it ever a human-powered pump? Later 19th century and later locomotives used a steam injector, where a cleverly designed valve managed to get a jet of steam to push water into the boiler without a piston pump or rotary pump. I am probably one of the few here who had the opportunity as a child to step into the cab of a working steam engine of a major railroad back in the era of steam. The locomotive fireman injected water strategically: it would not do to pump in a large volume of cold water when the locomotive was about to pull a train up a long hill, since it would be hard to maintain steam pressure and speed. Conversely, the fireman would add water when the engine was pulling into a station, since that avoided a possible buildup of steam pressure and the need to release excess pressure through a high pressure relief valve, wasting the coal and the employer's money. Economy was king in railroad operation, and there was always an awkward balance between adding complex gadgets to save coal, such as by preheating water and superheating steam and condensing the water, and the new failure modes they introduced, along with more maintenance expense, more training time for engine crews and maintenance employees, more tasks and monitoring for the crew, with increased chance of missing some problem such as low water level, as well as higher initial cost.
I own a couple of small toy steam engines which actually do operate as some above suggested, with the boiler filled half full of water and sealed, then a fire lit below it until steam pressure is enough to blow the whistle and operate the piston and flywheel, and any small load attached to it. Edison (talk) 16:51, 8 June 2012 (UTC)[reply]
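Edison's intuition above (the small volume of water pumped in against boiler pressure is far less work than the large volume of steam pushing the piston) can be put in numbers, since flow work is pressure times volume. A rough sketch using standard steam-table specific volumes (handbook values, an assumption on my part):

```python
# Feed-pump work vs. steam flow work at the same boiler pressure.
# A kilogram of steam occupies vastly more volume than a kilogram of water,
# so pumping the feedwater in costs only a small fraction of the work out.
P = 1.0e6          # Pa, roughly a 145 psi boiler
V_WATER = 0.00111  # m^3/kg, liquid water near saturation at 1 MPa (steam tables)
V_STEAM = 0.1944   # m^3/kg, dry saturated steam at 1 MPa (steam tables)

pump_work = P * V_WATER    # ~1.1 kJ/kg to push feedwater into the boiler
steam_work = P * V_STEAM   # ~194 kJ/kg of p*V flow work available from the steam

print(f"pump work is {pump_work / steam_work:.1%} of the steam's flow work")
```

So the feed pump (or injector) consumes well under one percent of the flow work the same mass of steam can deliver, which is why the arrangement pays.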
I mentioned driving the water inlet pump from the driving wheels as a possibility - I haven't heard of it being done. Nevertheless, it would be practical, apart from the extra thermodynamic losses, and boiler explosions definitely NOT a problem. When a steam locomotive is not moving, there is no consumption of steam in the pistons (some steam is still consumed in cylinder heating and may also be used to keep up the firebox draft). Crews obviously know this and don't put on coal when they are about to stop. There is nearly always some loss of steam though, as it is impossible to get firing exactly right. Not only can the driver control pressure by water injection, as you've said, if he doesn't, all that will happen is that the safety valve will open and blow off some steam. The books I have, especially History of Westrail by Fred Aflick, state that fuel consumption can vary considerably from crew to crew. There are 2 main reasons for this - (1) apparently it takes some skill and experience to lay out the coal in the firebox for optimum combustion, and (2) fuel consumption depends on the mental planning along the route. More coal should be put in the firebox before hills, as there is a time lag between more coal and more steam pressure. Similarly, less coal should be put on before getting to downhill runs or stop points. Too little heat at the right time and pressure drops, reducing efficiency. Too much heat at the wrong time, and steam will be vented off and wasted. Some crews are better at route planning than others, but all crews will vent some steam at one or more points along a route. Wickwack120.145.9.95 (talk) 14:09, 9 June 2012 (UTC)[reply]

June 6

Uranus & references to its original name

In the 'Naming' section of Uranus's page it states that Georgium Sidus was the name selected by Herschel. Are there any scanned textbooks or documents online from the time that show this being used in a list of the names of the planets? --Anonimasimio (talk) 09:02, 6 June 2012 (UTC)[reply]

Here are two Herschel documents about the planet,[5][6] neither including a list of names, but the latter includes in passing the names of both Jupiter and Saturn. Thincat (talk) 09:27, 6 June 2012 (UTC)[reply]
Interesting to see that Herschel called it The Georgium Sidus in writing. I guess what I'm most curious to find is the kind of list that would be given to school children to memorize. As in, here are the eight planets: Mercury... Saturn and Georgium Sidus. --Anonimasimio (talk) 09:31, 6 June 2012 (UTC)[reply]
Here is the Nautical Almanac for 1820 where the name is in a list and it is called "Georgian" (page 34, for example, of the PDF). The preface refers to "the Planet Herschel, called the Georgian Planet by us" (page 25). Hardly appropriate for school children though. Thincat (talk) 10:55, 6 June 2012 (UTC)[reply]
Thanks, Thincat. That's the first contemporary, non-Herschel reference I've seen. I'm digging this up because there is a modern story told by many of how Uranus was almost named 'George' but I've never been able to find anything from the time that actually uses that name. If anyone can find that reference, or any other contemporary usage of the planet's name before we settled on Uranus, I'd be grateful. --Anonimasimio (talk) 11:56, 6 June 2012 (UTC)[reply]
I have seen (in the library of a National Trust property, but I won't say which one, because I don't think we were supposed to be looking at the books) a school atlas that lists the planet as "Hershall" (It was a long time ago, but I'm pretty sure of the spelling). --ColinFine (talk) 15:38, 6 June 2012 (UTC)[reply]
Anonimasimio, I've found a publication on Google Books by Tiberius Cavallo in 1803 that refers to Uranus as "the Georgian Planet" in several places -- [7] He does, in a footnote, mention that the planet is also called Uranus or Herschel by some. There are a number of other titles from the 19th Century in Google Books that also refer to it as "the Georgian Planet", but I thought Cavallo was of sufficient stature to be an important one to mention. I didn't hunt for "Georgium Sidus", your original question, but you may have luck searching for it as well. Best of luck with it! Jwrosenzweig (talk) 06:02, 7 June 2012 (UTC)[reply]
Another citation perhaps worth mentioning is this book by Jacques Ozanam [8], which I point out only because it's an example of a non-British scientist who is clearly familiar with the name (and appears to be using it as the planet's standard name, although I don't know if this book is translated from a French original) -- anyway, most of the citations I'm finding are from scholars born in or working in England and Scotland, and I thought it was worth pointing out that "Georgium Sidus" had traveled across the Channel as at least one of the planet's names, if not its standard name. I'll also note that I'm seeing citations as late as the 1850s that seem to be using "the Georgian Planet" as Uranus's standard name, though it might be worth investigating whether these are just reprints of earlier editions, or if the name persisted in use that long. Jwrosenzweig (talk) 06:10, 7 June 2012 (UTC)[reply]
To clarify, Sidus is merely Latin for "star" (which in the old sense included planets). And Georgium is the accusative singular of Georgius, a proper name from the ancient Greek Γεώργιος, a name based on γεωργός (farmer) = γῆ (earth) + ἔργον (work). We recognize Γῆ to this day as Gaea, and thus, oddly enough, Uranus was very nearly named after the Earth! Wnt (talk) 17:20, 7 June 2012 (UTC)[reply]
What's the distinction between sidus and stella? ←Baseball Bugs What's up, Doc? carrots22:58, 7 June 2012 (UTC)[reply]
That's a question we should give serious consideration to. -- ♬ Jack of Oz[your turn] 05:03, 9 June 2012 (UTC)[reply]
  • Sidus is a constellation, but it can also refer to a single star or at the other extreme the entire night sky.
  • Stella means a star or a planet.
  • Sidus gives us words like 'sidereal' and 'consider'. Stella gives us 'stellar', 'constellation' etc. And the words 'star', 'Astarte' (= Venus), 'Ishtar' (= Venus), 'asterisk', 'asteroid', 'astro-' words (like 'astrolabe', 'astrology', 'astronaut', 'astronomy', 'astrophysics', etc), 'disaster' and 'catastrophe' are all related. It's written in the stars. -- ♬ Jack of Oz[your turn] 05:26, 9 June 2012 (UTC)[reply]

Battery charging time

I have installed a custom-made 12 V 750 W UPS with a 170 Ah battery. The charging current is 14 A, so how long will it take to charge the battery?

2. If I use a 24 V 1000 W UPS with two 170 Ah batteries, what will be the charging time? — Preceding unsigned comment added by 182.185.144.194 (talk) 11:12, 6 June 2012 (UTC)[reply]

To properly answer this question, you need to tell us more about the battery and the load - I'll say more about this but first the simple, and very inaccurate answer:
The battery capacity rating in ampere-hours (Ah) is defined as the product of current and hours from full charge to a defined flat condition, under conditions meant to be typical for the battery in question. The charge capacity is roughly the same as the discharge capacity. Thus, in your first case, time from flat to full charge = 170 Ah / 14 A ≈ 12 hours. Since the maximum output of a 750 W 12 V power supply is 750/12, ie 62.5 A, I assume your load equals output minus charge current, ie 62.5 - 14 = 48.5 A. Your second question cannot really be answered as you have not given the charging current. If it is assumed to remain 14 A, then the charge time is unaltered.
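The simple arithmetic in this answer can be put in a few lines of Python (an idealized sketch only: it ignores charge inefficiency, tapering, and temperature, so a real battery will take longer):

```python
# Rough figures from the discussion above (ideal-battery approximation).
capacity_ah = 170.0      # battery capacity, ampere-hours
charge_a = 14.0          # stated charging current, amperes

charge_time_h = capacity_ah / charge_a
print(f"Charge time (flat to full): {charge_time_h:.1f} h")   # ~12.1 h

# Continuous-bus current budget for a 750 W, 12 V supply:
psu_max_a = 750 / 12                 # 62.5 A maximum output
load_a = psu_max_a - charge_a        # current left over for the load
print(f"Implied load current: {load_a:.1f} A")                # 48.5 A
```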
In real batteries, the product of discharge (or charge) time and current is itself a function of current. The capacity increases as current is reduced. Eg for typical lead acid batteries, halving the current will not merely double the discharge (or charge) time - it can as much as treble it.
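This nonlinearity is commonly modelled by Peukert's law. A sketch, with the caveat that the exponent k = 1.3 used here is an assumed, typical ballpark for flooded lead-acid cells (check the datasheet for real batteries), and C must be the capacity quoted at the rated discharge time H:

```python
# Peukert's law: t = H * (C / (I * H)) ** k, where C is the rated
# capacity (Ah) at the rated discharge time H (hours), I is the actual
# current (A), and k is the Peukert exponent (1.3 assumed here).
def discharge_time_h(capacity_ah, rated_time_h, current_a, k=1.3):
    return rated_time_h * (capacity_ah / (current_a * rated_time_h)) ** k

# A 170 Ah battery rated at the 20-hour rate (8.5 A):
t_rated = discharge_time_h(170, 20, 8.5)   # 20 h at the rated current
t_double = discharge_time_h(170, 20, 17)   # doubling the current
print(t_rated / t_double)   # 2**1.3 ~ 2.46: halving current more than
                            # doubles runtime, consistent with the text
```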
In real batteries, the ampere-hour capacity is strongly temperature dependent. Capacity increases with temperature.
How long a charge takes or lasts is a function of how flat you can tolerate. There are industry standard cutoff voltages for each type of battery, so that you can make comparisons, but in practice your load may cease to work properly at a different voltage. Lead acid batteries in particular are damaged if flattened. Types intended for standby service will give a short life if routinely flattened more than 50 to 70 % of nominal capacity.
All this means that in order to meaningfully calculate a charge or discharge time, you need more comprehensive data about your battery, and you need to know more about your load. Battery manufacturers usually provide graphs of performance over a range of charge and discharge currents.
You mention a "custom made 12V UPS". Note that if you are using a nominal 12 V battery, and your power supply has a constant output voltage of exactly 12 V, it will not fully charge the battery. For instance, typical lead-acid batteries need to be charged to 2.3 V per cell (13.8 V for a nominal 12 V battery) at 25 C. Other types are similar.
If you are installing your own system, you should ensure you are thoroughly conversant with battery safety - particularly with ventilation requirements if using lead acid, and charging requirements if using lithium-ion types. Get it wrong and you can have explosions, poisonous gas, or both.
Wickwack58.164.238.58 (talk) 12:00, 6 June 2012 (UTC)[reply]
The load will be like this: 2 x 120 W fans and 1 x 65 W laptop, all rated at 220 V AC.
The cutoff is set at 10 V.
It's what we call Desi UPS http://www.wiredpakistan.com/forum/30-engineering-corner/
I have also attached a multimeter to the battery; the max value reached while charging is 13.8 V — Preceding unsigned comment added by 182.178.210.45 (talk) 13:08, 6 June 2012 (UTC)[reply]
Something's not right somewhere, or information is missing. Your stated load totals 305 W. Allowing for typical invertor efficiency, that means a DC current of ~28 A on a 13.8 V bus. For a 750 W system, 54 A is available. For a continuous bus system, charging current will equal 54 - 28, ie 26 A. But you said charging current is 14 A. This is possible if the UPS is of the switched bus type (battery on a separate charger circuit, and switched to feed the invertor when required), but you haven't told us about the 1000 W UPS. Is it continuous bus or switched bus? If switched bus, what is the charging current? Possibly the link you gave was meant to tell us more, but it is not functional. Wickwack58.164.238.58 (talk) 14:52, 6 June 2012 (UTC)[reply]
  • When I had the job of testing some early UPS units a few years ago, I noticed that the discharge rate was a brutal one in UPS mode, and much higher than the charge rate. This made sense, because the device is not really designed to operate on, say, a daily basis, but rather to operate in the occasional emergency. The battery is called upon to discharge at an extremely high rate, at the cost of decreasing the battery life, in order to keep initial cost down, but still provide a relatively large amount of AC power to the protected computer or device. Protecting the continued operation of the computer (don't lose the term paper you have just finished typing, or don't lose all the pizza orders held in the computer, or the financial transactions, or the control of the elevators in a highrise, or the air traffic control, or the power system operations) is judged more important than the well-being of the battery. Fewer ampere-hours could be obtained from the battery in this mode than if the discharge rate was lower. Lead acid batteries have very low internal resistance, so they can be called on to do heroic discharge rates for a short while, like cranking a car engine (hundreds of amps from a 70 amp-hour battery for a few seconds). Power interruptions are often a few seconds if automatic reclosing of distribution feeders clears the fault, or a few minutes if supervisory switching can solve the problem. If a PC is operating on UPS with no generator backup, the typical practice is to save your work and do an orderly shutdown rather than keeping on for an hour or whatever until the battery dies abruptly. Ten volts as the battery voltage when the discharge is ended, as 182.178.210.45 stated, seems an extreme state of discharge, and likely detrimental to the battery life (number of charge-discharge cycles) which can be expected. Wasn't a ten hour rate a classic rule of thumb (capacity divided by ten = charge rate and discharge rate)?
Even so, a typical charger tapers off the charge rather than continuing at the initial high rate, which would require raising the charging voltage over time. The website of the battery manufacturer might offer specific recommendations for charging protocol. Edison (talk) 14:31, 6 June 2012 (UTC)[reply]
You have it pretty right, Edison. However, the 10 hour rule used to be commonly quoted for small batteries intended for powering portable equipment such as cameras, tape recorders, and the like. It has largely gone by the wayside now, and was never used for larger sizes, nor for vehicle, industrial, or UPS batteries. Also, constant-current chargers became common when switchmode AC/DC conversion became economic. In continuous bus UPS designs (common because they're cheap, reliable, and simple), the current available for charging the battery is simply the AC/DC convertor output, which must be capable of sustaining the full rated UPS system load, minus the current drawn by the DC/AC convertor in supplying the load. Hence if for some reason the load is minimal, the full AC/DC convertor output is available to charge the battery. Wickwack58.164.238.58 (talk) 14:52, 6 June 2012 (UTC)[reply]
Do you have a reliable source for charge rates way higher than capacity/10 being the present day common practice? I agree that for occasional emergency use (UPS, emergency stairway lighting) a rate which discharges the battery in an hour or so at the cost of battery health and at the cost of ampere hours is common, but I question charging it really fast, since I expect damage to result. The AC/DC converter is ONLY used to charge the battery in many UPS systems, which run the load off the mains until mains failure, then quickly switch to inverter from the battery, fast enough that the PC stays online. Better and more expensive UPS systems indeed run from the inverter off the battery all the time, perhaps with a generator offline to start when the mains fail. Edison (talk) 04:29, 7 June 2012 (UTC)[reply]
For the medium to large scale UPSs (tens to hundreds of kW) I had experience with, the battery charging rate was roughly of the same order as the discharge rate, by application design. This is because they were continuous bus systems, so the charge current is essentially the rated output minus the actual load, as I said. This should be taken into account when choosing the battery. Switched bus systems can deliver a fixed charge current independent of actual load and are thus kinder to the battery, as you have said. And since the switching can be from raw AC input to invertor output, the AC/DC convertor can be much smaller and cheaper. However, continuous bus systems can be preferred for their reliability and inherently stable and spike-free output. Lead acid batteries in particular will not take kindly to rapid charge. Generally, the slower the charging, the longer the life and the greater the capacity. As far as I am aware, batteries designed for rapid charge are used for small portable applications like personal electronics, phones, small tools, and the like. For performance and service life at any charge rate, consult manufacturer's data. Batteries are made for a wide range of cycle conditions - even within the same basic chemistry (gel lead acid, nickel-iron, whatever). So I don't think a source with one simple rule of thumb or formula, beyond what I gave above, can be cited. Incidentally, where diesel backup is employed, it is usually because an outage cannot be tolerated, as in telecomms, business-critical server farms, or hospital operating theaters. This is quite different to a PC or minicomputer application, where you only need enough time to shut down in an orderly way (a few minutes). Diesel backup is usually provided with a UPS battery capacity for 3 hours or so.
This is so that, should the mains not come back on in that time (an unlikely occurrence) and the diesel fail to start, there is enough time to call out the mechanic, plus time for him to fix it immediately if he can, or request delivery of a portable genset. In such cases, the UPS battery will be very large and will inherently tolerate a lot of charging current. Wickwack120.145.54.86 (talk) 05:28, 7 June 2012 (UTC)[reply]
Which system will be better, 24 V or 12 V?
It depends on many factors. But, as a general rule, if the load power is the same, and the battery capacity in ampere-hours (Ah) is kept the same, then 24 V will be better, as the discharge current (and potentially the charge current) will/can be halved. I stress that this is only a very rough guide. You would normally expect to halve the battery Ah size as well on 24 V. Perhaps more reliable to state is that AC/DC conversion, and especially DC/AC conversion, is slightly more efficient at 24 V. Wickwack58.167.249.78 (talk) 12:05, 7 June 2012 (UTC)[reply]
What about charging time ? Will it be reduced in 24V system ? — Preceding unsigned comment added by 182.185.220.86 (talk) 11:25, 8 June 2012 (UTC)[reply]
I've already answered that to the limited general degree possible, given that you have not supplied sufficient information. Is the 24 V system to be switched bus or continuous bus? If switched bus, what is the charging current? 2 x 170 Ah batteries, or 2 x 85 Ah batteries, or what? And, as I said, battery capacity varies with operating conditions (cutoff voltage, charge current, temperature, etc). You need to look at manufacturers' data for batteries you are considering. Your questions are like asking us "If I now eat 2 kg of food each day and propose to eat 2.5 kg, how much will I weigh?" How should I know? It depends on so many factors that haven't been given. About all I can say is you will get fatter, if all other factors are kept constant. Similarly, I can't say what the charging time will be, beyond the rough guide I've already given. Wickwack121.215.41.248 (talk) 15:31, 8 June 2012 (UTC)[reply]
Charging current is 14 A, battery 2 x 170 Ah, cutoff voltage 10 V, room temperature is around 36 C. Manufacturers usually don't provide such extensive data. — Preceding unsigned comment added by 182.185.171.47 (talk) 09:04, 9 June 2012 (UTC)[reply]
In that case, since you have neither changed the ampere-hour capacity nor the charge current, the charge time is unchanged. However, the discharge time for the same load power will increase somewhere around 2.5 to 3 times, ignoring the 36 C temperature. Battery manufacturers certainly DO provide such "extensive data" - it is essential for the reasons I have explained. You just haven't asked in the right place. I have used such data myself. A 36 C temperature is excessive for an office environment. Wickwack124.178.139.104 (talk) 12:47, 9 June 2012 (UTC)[reply]

Source of uncertainty of the Earth's age ?

On the page Age of the Earth it is stated that the age is 4.54 ± 0.05 billion years. What are the main sources of the 1% uncertainty, and what is most dominant?

  • Is it the accretion time (100 million years is indeed 2% of the 5 billion years age) itself?
  • Is it the uncertainty of the accretion time?
  • Or is it the uncertainty of the dating methods?

Wolfsson (talk) 13:17, 6 June 2012 (UTC)[reply]

From the first citation on that page, the uncertainty seems to be from the dating methods. The time appears to refer to the end of the accretion of solid bodies such as Earth, which should have occurred at roughly the same time throughout the inner solar system.-RunningOnBrains(talk) 13:39, 6 June 2012 (UTC)[reply]
Also, I bet there is uncertainty into just what qualified as Earth. Did a molten ball of magma qualify ? StuRat (talk) 15:13, 6 June 2012 (UTC)[reply]
Given that the Earth is a molten ball of magma (except for quite insignificant parts), I'd say yes. --Stephan Schulz (talk) 17:48, 6 June 2012 (UTC)[reply]
Seems unfair on the inner core. Sean.hoyland - talk 17:54, 6 June 2012 (UTC)[reply]
The Earth doesn't currently contain much molten magma. Even the vast majority of the mantle is a rheid, not molten magma. Red Act (talk) 19:03, 6 June 2012 (UTC)[reply]

There was an interesting article a few weeks back that one of the longer-lasting isotopes had been found to have a different half-life than believed, and that the earth's age might need to be recalibrated. Anyone recall the article or isotope? μηδείς (talk)

No, and such a change at this point, given the progress of science in this field, seems highly unlikely. Uranium-lead dating is extremely solid in both theory and practice. Interestingly, our article on Calcium-aluminium-rich inclusion suggests that this figure may be a lower limit on the age of the Earth rather than the actual age.-RunningOnBrains(talk) 06:52, 7 June 2012 (UTC)[reply]
Yes, that kind of radioactive dating tells you when that particular rock formed. What we know is that the Earth must be at least as old as its oldest rock (everything melted during the formation, so any older rocks won't have survived intact). It is unlikely that the Earth is significantly older than its oldest surviving rocks. Our estimates of the age of the Earth are consistent with the ages of meteorites, which theory tells us are left over from the formation of the planets. It's also consistent with our estimates of the age of the Sun. It's very unlikely that our age estimates are completely wrong. --Tango (talk) 16:12, 7 June 2012 (UTC)[reply]
To expound further, I realize my original point was unclear: I'm not saying there couldn't be new evidence (or new theories consistent with the current evidence) that pointed to an older earth, but the idea that we have a half-life wrong is about as close to impossible as I'm comfortable to say. -RunningOnBrains(talk) 17:12, 7 June 2012 (UTC)[reply]

bond energy of ionic compounds

How do we define the bond dissociation energy or bond dissociation enthalpy for ionic compounds? I have read the definition of bond dissociation energy as "the bond dissociation enthalpy is the change in enthalpy when one mole of covalent bonds of a gaseous covalent compound is broken to form products (gaseous atoms) in the gas phase." The problem is that, firstly, ionic compounds don't occur in the gas phase; secondly, if we heat an ionic compound we will get ions (and the energy involved is the lattice energy), not atoms; and moreover, the definition covers only covalent compounds.[1]117.225.240.240 (talk) 14:03, 6 June 2012 (UTC)[reply]

See Lattice energy. Bond dissociation energy refers only to covalent bonding. --Jayron32 18:05, 6 June 2012 (UTC)[reply]
As a side note: boiling sodium chloride produces discrete molecular sodium chloride or 'chloridosodium'. Further heating to the point of ionization leads to dissociation into a gas plasma. This consists of a menagerie of ions, including sodium(1+) and chloride(1-), as well as polyatomic ions. Plasmic Physics (talk) 23:30, 6 June 2012 (UTC)[reply]

distribution of angular momentum

Most of the angular momentum of the Solar System, I seem to remember reading, is that of the orbit of Jupiter. But of course every rotating or revolving body has its own bit of a.m., and their vectors are not all parallel. Is the variance of these vectors (mass-weighted, of course) a meaningful concept? If so, is it known (or estimated)? If so, how big is it in radians?

And should it be expected to decrease over the eons? —Tamfang (talk) 16:48, 6 June 2012 (UTC)[reply]

  • It does seem like an interesting concept, although I doubt if it would be measured in radians. If we assume that the initial rotating cloud of gas was uniform in its rotation (not sure if this is true), then any variance must be caused by outside influences (passing comets, etc.). However, once there is even the slightest variation, it's possible for two objects to give each other more variance by passing near each other and knocking each other out of position (perhaps a bit more with each orbit). Pluto seems to be the (dwarf) planet with the most deviation of the inner planets, so something interesting must have happened there. Objects out in the scattered disc and Oort cloud are far more random in their vectors than those in the inner solar system and, to a lesser extent, the Kuiper belt (with exceptions in the inner solar system allowed for smaller objects, easily knocked out of the plane of the ecliptic, like the centaurs). The following chart may also be relevant. StuRat (talk) 19:07, 6 June 2012 (UTC)[reply]
Distribution of trans-Neptunian objects, with vertical axis showing inclination from the plane of the ecliptic, and horizontal axis showing distance from the Sun.
"Inner planet" usually refers to Mercury, Venus, the Earth and Mars. Pluto isn't an inner planet. Its orbital inclination is normal for Kuiper belt objects, as your chart shows. --Tango (talk) 19:39, 6 June 2012 (UTC)[reply]
OK, then how does one refer to the region inside of Pluto's orbit ? StuRat (talk) 22:56, 6 June 2012 (UTC)[reply]
Cis-Plutonian. But as the orbit of Neptune, rather than that of eccentric Pluto, is taken as the delimiter between the outer solar system and the next broad category of solar system objects, people would be more apt to talk about cis-Neptunian objects. List of trans-Neptunian objects shows there are about 40 known TNOs with perihelions closer than Neptune's, but none with an average orbital radius smaller than Neptune's. Tautologically, they wouldn't be TNOs if they were. -- Finlay McWalterTalk 09:19, 7 June 2012 (UTC)[reply]
Let's start by checking if what you remember reading is correct, since it's a simple enough calculation. I would guess the major contributors are going to be Jupiter's orbit and the Sun's rotation, since the Sun and Jupiter make up essentially the entire mass of the solar system. Jupiter's orbit gives an angular momentum of 779,000,000 km (orbital radius) × 13 km/s (orbital speed) × 1.9×10^27 kg (mass) = 1.9×10^37 km^2 kg/s. The Sun (assuming uniform density, which will overstate its angular momentum since a lot of the mass is concentrated in the core) has an angular momentum of 2/5 × m × r^2 × 2π/T (see List of moments of inertia; m, r and T are the mass, radius and rotation period of the Sun, respectively). That gives us 2/5 × 2.0×10^30 kg × (0.7×10^6 km)^2 × 2 × 3.1/25 days = 1.1×10^36 km^2 kg/s. So yes, Jupiter is the main contributor. I'll just quickly check Neptune, since it is so far away it might be higher than I expect: 4.5×10^9 km × 5.4 km/s × 1.0×10^26 kg = 2.4×10^36 km^2 kg/s. That is higher than I expected, but not as high as Jupiter's. --Tango (talk) 19:39, 6 June 2012 (UTC)[reply]
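These back-of-envelope figures can be reproduced in a few lines of Python, using the same round numbers and the same uniform-density assumption for the Sun:

```python
import math

# Orbital angular momentum L = m * v * r; solar spin from a uniform-sphere
# moment of inertia I = (2/5) m r^2 (an overestimate, since the Sun's
# mass is concentrated toward the core).
def orbital_L(mass_kg, radius_km, speed_km_s):
    return mass_kg * radius_km * speed_km_s          # km^2 kg / s

jupiter = orbital_L(1.9e27, 7.79e8, 13.0)
neptune = orbital_L(1.0e26, 4.5e9, 5.4)

sun_I = 0.4 * 2.0e30 * (0.7e6) ** 2                  # km^2 kg
sun_omega = 2 * math.pi / (25 * 86400)               # rad/s, 25-day spin
sun = sun_I * sun_omega

print(f"Jupiter orbit: {jupiter:.1e}")   # ~1.9e37 km^2 kg/s
print(f"Neptune orbit: {neptune:.1e}")   # ~2.4e36
print(f"Sun spin:      {sun:.1e}")       # ~1.1e36
```

Jupiter's orbit does indeed dominate by roughly a factor of ten.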

Awesome Science things in Europe?

So, not exactly a science question, but I'm hoping you can help. It's my dad's 60th soon and we want to take him somewhere in western Europe to see something awesomely sciencey. CERN is an obvious one but it seems like most of their stuff for tourists is museumy rather than seeing anything real. Trying to pull some strings to get into ATLAS but I'm not hopeful. Something like the JET Tokamak comes to mind but we live in the UK and it's not exactly an exotic location to spend more than a day. Any thoughts would be appreciated or reprimands for putting this here also welcome! 137.108.145.21 (talk) 18:50, 6 June 2012 (UTC)[reply]

You enjoy reprimands ? How about spankings ? :-) StuRat (talk) 18:57, 6 June 2012 (UTC) [reply]
I presume you've been to the National Space Centre in Leicester, or Jodrell Bank? --TammyMoet (talk) 19:33, 6 June 2012 (UTC)[reply]
The Deutsches Museum in Munich has a massive collection of all manner of science and industry, but it's not an active site of research, which seems more like what you're going for. --Mr.98 (talk) 01:00, 7 June 2012 (UTC)[reply]
You might try one of those bigger-on-the-inside-than-the-outside timey-wimey boxes. μηδείς (talk) 01:43, 7 June 2012 (UTC)[reply]
I know you say you want to visit western Europe, but a few locations in England stick out too much not to mention, even though they are probably more of a day trip rather than a nice stay away somewhere. Maybe if not this time, you can at least put them on a list for "next time". I'm in Australia and just over a year ago I stayed for a month with my brother who was living in London at the time. I did a day trip to Down House; I'm a big fan of Darwin so that was very high on my list of things to do, and I thought it was set up really well. The other two places I'd love to see would be Bletchley Park and the Cavendish Laboratory; those places obviously hold legendary status in the history of science. Not having seen them, I can't say how "interesting" their exhibits would be to people not already interested in their history. Vespine (talk) 04:09, 7 June 2012 (UTC)[reply]
I know it's not quite what you're asking, but I'm 62 and on the list of things I'd like to do before I die is to go on one of the northern lights cruises and hope to see the Aurora Borealis. Richerman (talk) 09:00, 7 June 2012 (UTC)[reply]
Have you considered Iceland? There are various geothermal energy experiences to appeal to the sciencey-minded, the opportunity to see active geysers, lava fields and other awesome volcanic things, and of course the possibility either of 24-hour daylight or the aurora, depending on when you travel. We're approaching solar maximum, so aurora-spotting has an improved chance of success next winter, particularly in higher latitudes. Karenjc 18:16, 7 June 2012 (UTC)[reply]
Some more ideas: 184.147.126.249 (talk) 18:39, 7 June 2012 (UTC)[reply]
UK (but probably not close to where you live!) - Callanish, at the right date to check the solar alignments for himself
UK (but really cool anyway) - Kew Bridge Steam Museum for the giant engines
France (or Sweden, or Germany, or Switzerland) - any built-to-scale Solar System models. The Swedish one is the biggest
Austria - Ars Electronica Center (fusion of science and art?)
Belgium - Euro Space Center
The Max Planck Institute of Plasma Physics's Wendelstein 7-X in Greifswald is pretty big and flash and sciency. Last time I was there they were running public tours. It is a bit out of the way though (4 hours drive north of Berlin). 101.171.127.244 (talk) 19:26, 7 June 2012 (UTC)[reply]

Locating the South Pole

In the days of Scott and Amundsen, how would an explorer identify his position as being exactly at the South Pole, and how accurate would this be? --rossb (talk) 22:52, 6 June 2012 (UTC).[reply]

I'd say the easiest way would be to use the motions of astronomical bodies. Can't say how accurate that would be. Plasmic Physics (talk) 23:11, 6 June 2012 (UTC)[reply]
Not so much "motions" as "positions" -- we note at history of longitude that "determining latitude was relatively easy in that it could be found from the altitude of the sun at noon with the aid of a table giving the sun's declination for the day." For the poles, that's enough right there -- latitude 90, longitude irrelevant (also, time of day functionally irrelevant). This site confirms that a sextant reading of the sun was used, and notes that Amundsen was within 200m of the true South Pole. — Lomn 23:25, 6 June 2012 (UTC)[reply]
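The noon-sight method described above reduces to simple arithmetic once the sun's declination is taken from an almanac table. A simplified sketch (it ignores refraction, sextant corrections, and the sun's semidiameter, the sign convention assumes a southern-hemisphere observer south of the subsolar point, and the example numbers are hypothetical):

```python
# At local noon, the sun's altitude = 90° - |latitude - declination|.
# For an observer south of the subsolar latitude this rearranges to:
#   latitude = declination - (90° - altitude)
def latitude_from_noon_sight(altitude_deg, declination_deg):
    return declination_deg - (90.0 - altitude_deg)

# Hypothetical December sight: declination -23.2°, observed noon
# altitude 23.2° -> latitude -90°, i.e. the South Pole, where the sun
# circles the horizon at a constant altitude equal to -declination.
print(latitude_from_noon_sight(23.2, -23.2))   # -90.0
```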
If I recall Huntford's book The Last Place on Earth correctly, in those few days of at-pole peregrination, Helmer Hanssen was probably the one who got closest to the geographic pole. It was my recollection that Scott carried a theodolite, which would have allowed him to make much more accurate measurements of the position of the Sun than Amundsen could with his sailor's instruments (and that the additional weight thereof was one of many contributory factors to Scott's failure). But, while there was a theodolite on the expedition, it seems it wasn't taken to the pole (picture and info). -- Finlay McWalterTalk 20:10, 7 June 2012 (UTC)[reply]
At night, they could track the celestial south pole to give a rough direction. But somehow I don't think that option was available - you'd be pretty foolish to attempt such an expedition during the polar winter. Plasmic Physics (talk) 23:34, 6 June 2012 (UTC)[reply]
For the next three days the men worked to fix the exact position of the pole; after the conflicting and disputed claims of Cook and Peary in the north, Amundsen wanted to leave unmistakable markers for Scott. After taking several sextant readings at different times of day, Bjaaland, Wisting and Hassel skied out in different directions to "box" the pole; Amundsen reasoned that at least one of them would cross the exact point, from Amundsen's South Pole expedition. If my visualisation of the trigonometry is right, when you're at the South Pole the sun stays at the same altitude all through the day (± a little depending on whether it's rising or setting). These days of course it's much easier; you know you're there when you trip over this thing. FiggyBee (talk) 23:31, 6 June 2012 (UTC)
Not really; that thing is not really at the pole - the pole drifts around. Yes, in case you're wondering, both the magnetic and actual poles drift. That is just an arbitrary point that someone chose when the pole actually used to be in the vicinity. LOL, it never even passed through that particular point either. Plasmic Physics (talk) 23:38, 6 June 2012 (UTC)[reply]
Most of the change comes from the ice underneath that barber pole drifting, to the tune of 10 meters per year. --Carnildo (talk) 00:17, 7 June 2012 (UTC)[reply]
Yes, but the Earth also moves irregularly on its axis, e.g. the Japan earthquake moved it quite a bit. Plasmic Physics (talk) 01:07, 7 June 2012 (UTC)[reply]
4 to 10 inches. Rmhermen (talk) 04:57, 7 June 2012 (UTC)[reply]
Wow, that's a lot! I guess I should have checked the data first. Plasmic Physics (talk) 05:08, 7 June 2012 (UTC)[reply]
I'd think we could tell that much change all over the world, if it happened during the quake. If not, by what mechanism does the Earth's axis slowly change as a result of the quake ? StuRat (talk) 06:25, 8 June 2012 (UTC)[reply]
Does anyone know what the flag to the left of the Union Jack is? ElMa-sa (talk) 19:50, 7 June 2012 (UTC)[reply]
It's either the flag of Belgium or the flag of Romania. Per this page, the flags at the ceremonial pole are those of the signatories of the Antarctic Treaty System. WP pictures of the site suggest that the flag arrangement is not constant. — Lomn 20:14, 7 June 2012 (UTC)[reply]

unidentified monocot in South Jersey

Can anyone identify or suggest an identification for this plant? It is one foot tall. I will be able to recognize the flowers if someone suggests the correct identification. A friend had one such plant in his yard in South Jersey last year, and now it has taken over the entire property. The blooms, when they come, are white and look like bluebells Hyacinthoides non-scripta, although the stalk is more erect, or like wild orchids of some type. Thanks. μηδείς (talk) 23:29, 6 June 2012 (UTC)[reply]

As a drive-by shout, I'm thinking some kind of Helleborine. Richard Avery (talk) 06:14, 8 June 2012 (UTC)[reply]
Looking at the links and pictures I can find on google that is close but I haven't seen anything that looks like an exact match. I'll have to see if I can get a picture sent to me after the flowers mature. μηδείς (talk) 15:52, 8 June 2012 (UTC)[reply]

June 7

I guess the question title doesn't really need much expansion; I live on the Isle of Man but oddly haven't actually bothered taking the Horse Tram at any point (I don't live in Douglas and there's no real point in it for normal transport anyway). I have been held up by it when driving along the promenade many times but never seen it traversing points.

So... the question is fairly simple. How on earth does it USE these points? Do they use "ramps" in the tracks? Do they force the horses to swerve violently with the coaches? Something else?

Much appreciate your response Egg Centric 00:34, 7 June 2012 (UTC)[reply]

P.S. As a secondary point, the original google maps link affected the spam filter. That's cause I chose short url which shortened it using goo.gl... anyway it was a nuisance wiki complaining about this! Egg Centric 00:34, 7 June 2012 (UTC)[reply]
It seems most likely that those points are no longer used, and they've simply removed the blades and tarmacked in the gaps. Alternatively, the only option I can see is lifting the tram at each end and moving the wheels across to the point rail, which I can see being a reasonable alternative to mechanical points (which would need to be quite complex if they're set in a road surface safe for pedestrians and other vehicles) if the trams are light and those points aren't used often or by in-service trams. I had a zoom up to the end of the line on Google Earth and the crossover at the terminus (the roundabout near the ferry terminal) seems to have normal switchblades. FiggyBee (talk) 00:57, 7 June 2012 (UTC)[reply]
That makes considerable sense, thank you. I also wonder how they do change the working points. Tell ya what, I'll try to get on one during the weekend and ask the folk involved what they're doing. And then incorporate it into the wiki article.
(Note: I am very busy this coming weekend so if I don't manage this, sorry in advance!) Egg Centric 01:22, 7 June 2012 (UTC)[reply]

chlorhexidine gluconate

Will chlorhexidine gluconate react with eugenol, zinc oxide, or clove oil to form anything harmful? --Wrk678 (talk) 07:51, 7 June 2012 (UTC)[reply]

And what are you planning to do with such a mixture? Someguy1221 (talk) 08:09, 7 June 2012 (UTC)[reply]
Maybe the OP is concerned their chlorhexidine-containing mouthwash will react with their Zinc oxide eugenol-containing dental work? If so, I suggest they contact the person who did the dental work or some other suitable professional. 2001:0:5EF5:79FD:20CB:1C04:833A:FA41 (talk) 15:53, 7 June 2012 (UTC)[reply]
I can't answer that medical question, but chlorhexidine and eugenol have been used in conjunction in research, e.g. [9] [10] Note that, as explained at [11], eugenol is not seen as entirely benign in some situations all by itself, and eugenol-free periodontal dressings have been developed. The real problem with any answer is that "harmful" depends on the context. A bottle of cyanide is not harmful... provided it stays in the bottle or under a fume hood. While water toxicity really does occur. The dose (and circumstances) makes the poison. Wnt (talk) 18:19, 7 June 2012 (UTC)[reply]

Drug Delivery

I have had this doubt for a long time. Suppose I have a pain in a certain area of my body - say X. I take a pain killer tablet. So the chemicals in the tablet dissolve into my blood stream. Now how does the drug get absorbed exactly in the painful area and relieve the pain? What is the mechanism that makes the drug get absorbed at the exact painful spot? An even more localized example is when you have a sore throat and a tablet works wonders! — Preceding unsigned comment added by 117.193.139.99 (talk) 08:14, 7 June 2012 (UTC)[reply]

A systemic painkiller (taken orally and then circulating in your blood) doesn't have to target the physical location or cause/origin of the pain for which you are taking it, it could just dull "your sense of pain". DMacks (talk) 08:17, 7 June 2012 (UTC)[reply]

Yes, I expected that reply. But say there is some problem with the liver/spleen and tablets need to be taken (I mean some localized disease), how does it work? It would be a real waste to have the drug in the entire blood stream rather than localize its concentration at the target site. So how does it happen? — Preceding unsigned comment added by 117.193.139.99 (talk) 08:29, 7 June 2012 (UTC)[reply]

Your typical non-steroidal anti-inflammatory painkiller (ibuprofen, aspirin, naproxen) functions by broadly inhibiting your body's ability to transmit localized pain signals. In this sense, it is acting locally, but it's acting locally everywhere. Opioids, on the other hand (as well as GABA analogues, although less dramatically) block your brain and spinal cord's ability to register pain signals. In that sense, the drugs are acting non-locally from one place, but are still present throughout your entire body assuming you took a pill. As you are probably aware, some painkillers can simply be injected directly to where they need to work, such as for dental procedures. In this case, since the drug stays and acts locally, much higher local concentrations of the drug can be achieved with minimal side effects.
As for liver effects, it turns out that most drugs you consume actually wind up in your liver, given its purpose of breaking down most unusual chemicals that you consume. But everything else, unless it is injected locally, pretty much ends up everywhere. It is a goal of medical science to develop drugs that target specific tissues, actually. But that's not to make the drugs cheaper. Rather, the hope is that since the drug will only be going where it is needed, there will be fewer side effects from what the drug would do to tissues that don't need it. Someguy1221 (talk) 08:36, 7 June 2012 (UTC)[reply]
(edit conflict) Some drugs are designed to bind selectively to certain types of tissues or cells. For example, they could have a high affinity for chemicals (and hence cells with these chemicals on their surface or vicinity) that are known to be produced in the case of a specific biochemical situation (inflammation, cancer, etc). In this situation, the drug does localize and concentrate to a certain site where it acts, it just disperses and circulates on the way to getting there. Or else they circulate randomly, but only act on cells that are in a certain state and therefore concentrate their effect there. DMacks (talk) 08:48, 7 June 2012 (UTC)[reply]
In my country, Australia, one of the commercial products based on Ibuprofen runs advertising clearly designed to suggest that the drug somehow knows where the pain is and goes directly there, rather than everywhere in one's body. I suspect it's bullshit. HiLo48 (talk) 16:53, 7 June 2012 (UTC)[reply]
We get the same adverts in the UK and I've been told by a doctor that it is nonsense. Perhaps it only acts near the site of the pain, but it goes everywhere your blood goes, just like anything else. --Tango (talk) 21:07, 7 June 2012 (UTC)[reply]
Since most substances taken orally affect your entire body evenly (with some rare exceptions, like radioactive iodine to treat thyroid cancer), this always seemed like a poor way to treat localized problems, to me. One of the worst ideas, IMHO, is the pill you take to help you grow hair, thus risking serious side effects to solve a cosmetic problem. The obvious alternative is to deliver a hair-growth med with a topical liquid or foam, applied to the areas with hair loss. StuRat (talk) 06:42, 8 June 2012 (UTC)[reply]

Carpal tunnel syndrome

would carpal tunnel prevent a career as a computer programmer? — Preceding unsigned comment added by 59.189.220.235 (talk) 14:08, 7 June 2012 (UTC)[reply]

I moved your question to its own section.Anonymous.translator (talk) 14:13, 7 June 2012 (UTC)[reply]
Not if you had voice command software like dragon breath165.212.189.187 (talk) 15:49, 7 June 2012 (UTC)[reply]
No, especially if it is treated. --TammyMoet (talk) 16:37, 7 June 2012 (UTC)[reply]
Also depends on whether it affects both hands severely. That's often not the case. HiLo48 (talk) 16:48, 7 June 2012 (UTC)[reply]
A programmer can reduce the risk of carpal tunnel syndrome by use of ergonomic equipment (wrist rest, mouse pad), taking proper breaks, and using keyboard alternatives such as digital pen and voice recognition. DriveByWire (talk) 20:54, 7 June 2012 (UTC)[reply]
Severe repetitive stress injury (of which carpal tunnel syndrome is one) can certainly force someone to change careers. If you're experiencing pain on a regular basis, you should definitely do something about it. I've been struggling with a relatively minor case for a while, and I believe that I tend to get better when I'm spending large chunks of time without touching a computer keyboard or pointing device. I sometimes use Dragon NaturallySpeaking to browse the web and read my email, but I don't think there's any way I could stand programming with it. I've seen a doctor and physical therapist through workers' compensation, which I highly recommend. I use Workrave to take breaks from computer usage, and I've started getting exercise to try to improve my overall health. I believe that I'll be completely back to normal eventually, but if I'd not taken action, I'd probably be permanently injured and no longer be a programmer. Paul (Stansifer) 22:04, 7 June 2012 (UTC)[reply]
Just speaking from experience, I've had CTS in various grades for 15+ years. In the beginning, when it was very painful, I wore wrist splints at the advice of a doctor, and worked to correct many orthopedic problems with my workspace (for a long time I had my mouse on a different level than the keyboard — a real no-no). Anyway, over the years it improved on its own with these adjustments, and today it only rarely manifests as a dull ache. (And the fact that I can't bowl — for whatever reason, bowling triggers every weird CTS symptom in me, and I temporarily lose feeling in multiple fingers. I wasn't any fan of bowling anyway, so no big deal as far as I'm concerned.) I still have a career in which I am constantly using computers and constantly typing. Results will vary given the individual and the severity of the case, but anecdotally, on the face of it I wouldn't conclude that CTS would make computer-based careers inaccessible, but one would really need to be proactive about managing the CTS and correcting the conditions that have created it in the first place. If you have CTS or suspect you do, definitely talk to your doctor about it, there are lots of relatively simple things that can be done to mitigate it, along with non-simple things if those don't work. Separately, I couldn't imagine coding with voice command software, personally, though I wouldn't be surprised to hear that some people can manage it. --Mr.98 (talk) 00:27, 8 June 2012 (UTC)[reply]

density

Is density more/less proportional to the distance between nuclei or the number of protons and neutrons in the nuclei, or are they equally proportional? What is the relationship? — Preceding unsigned comment added by 165.212.189.187 (talk) 15:18, 7 June 2012 (UTC)[reply]

Density is mass per volume. The nucleus is essentially all of the mass and that mass is essentially the total of the proton and neutron masses, so the mass (and therefore the density) is proportional to the number of protons and neutrons assuming changing these does not change the distance from one nucleus to the next (i.e., the ionic or covalent radius in the material). Volume is the third power of length, so the density is inversely proportional to the cube of the distance between nuclear centers. DMacks (talk) 15:51, 7 June 2012 (UTC)[reply]
"inversely proportional to the cube of the distance between nuclear centers." is just the simple-case situation of assuming a cubic lattice, etc. DMacks (talk) 18:59, 7 June 2012 (UTC)[reply]
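For intuition, the scaling DMacks describes can be sketched in a few lines of Python. This is a toy simple-cubic model with one atom per cell; the spacing and mass numbers below are illustrative, not measured values for any real material:

```python
import math

AMU_KG = 1.66053906660e-27  # atomic mass unit in kg (approx. one nucleon)

def density(mass_number, spacing_m):
    """Density (kg/m^3) of a simple-cubic lattice: one atom of
    `mass_number` nucleons per cube of side `spacing_m`."""
    return (mass_number * AMU_KG) / spacing_m ** 3

# Doubling the nucleon count doubles the density (same spacing)...
assert math.isclose(density(56, 2.5e-10), 2 * density(28, 2.5e-10))
# ...while doubling the spacing divides the density by 2**3 = 8.
assert math.isclose(density(28, 5.0e-10), density(28, 2.5e-10) / 8)
```

This is only the simple-cubic special case noted above; real crystals pack differently, which changes the constant factor but not the inverse-cube scaling.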

Could adding protons or neutrons ever decrease density of a material?165.212.189.187 (talk) 18:44, 7 June 2012 (UTC)[reply]

Density implies a finite amount of free space in each atom. Someone on this ref desk said there is no way to measure the amount of free space in an atom. Could someone clarify?165.212.189.187 (talk) 15:24, 7 June 2012 (UTC)[reply]

Density does not require finite free space, merely the ability to say "the vast majority of the mass is somewhere within a certain volume". That is, it doesn't matter whether it's a high-mass point particle (nucleus) surrounded by perfect vacuum, or if the mass is evenly distributed for a certain size, or a fuzzy blob that has no distinct boundary and becomes less dense as it extends outward, or even if we cannot actually describe in "real world" macroscopic ideas what is happening a little ways away from the center. Given a large enough space, we can still confidently say "the total mass is X and the total volume is Y within it" and calculate the average density for that object. When we talk about density of a chemical (unless you're doing x-ray crystallography or something), we're talking macroscopic, not just one or two atoms, so the mass distribution at the atomic level is orders of magnitude too small to make a noticeable difference. DMacks (talk) 15:57, 7 June 2012 (UTC)[reply]

How does the "certain volume" not imply a "certain volume (of free space)"?165.212.189.187 (talk) —Preceding undated comment added 18:41, 7 June 2012 (UTC)[reply]

Again, it doesn't matter what is in the space, whether it's an extended nucleus or a field generated by nuclear or electronic wavefunctions, or virtual particles or "nothing at all". DMacks (talk) 18:46, 7 June 2012 (UTC)[reply]

DMacks, I don't think you understand the question.165.212.189.187 (talk) 18:48, 7 June 2012 (UTC)[reply]

(edit conflict) That's true and I'm not trying to answer the question. My response is to dispute your "Density implies a finite amount of free space in each atom" premise on which the question and confusion appears to rely. DMacks (talk) 18:59, 7 June 2012 (UTC)[reply]
OK, I'll try. To define density you need only define the mass and the volume of the total "thing" you are trying to get the density of. The distribution of matter within the volume is irrelevant. When people talk about the density of a nucleus, they are defining a somewhat arbitrary volume within the atom to be "the nucleus". Someguy1221 (talk) 18:56, 7 June 2012 (UTC)[reply]

I am not concerned with the density of the nucleus, just the atom, or any amount of a certain element for that matter. Once you define the volume, haven't you also determined the (free) space that atom/material occupies?165.212.189.187 (talk) 19:20, 7 June 2012 (UTC)[reply]

(linking back to the previous discussion for context) True, once you have defined a volume of interest, you have (rather tautologically) determined the space that things within that volume reside in. Some of it is probably "free space", though it's not clear what you mean by that phrase, and as noted in the prior discussion, "free" will depend not only on the things themselves but also what they interact with -- see the prior discussion's example about neutrinos being able to consider pretty much anything to be "free space". There's also the problem, as noted before, that things at the quantum scale do not have precisely defined boundaries. How big is an atom? We can only speak statistically.
Ultimately, though, I think it will be most helpful if you clarify your intent/meaning regarding "free space" and its specific inclusion in your questions, as it appears to me that the rest of the questions have been ably answered. — Lomn 20:09, 7 June 2012 (UTC)[reply]
One thing to consider is that allotropes are made of the same "stuff" but can have very different densities, for example Allotrope_of_carbon. In these cases, "number of protons and neutrons" doesn't change but density does. Vespine (talk) 23:12, 7 June 2012 (UTC)[reply]

165, it just gets back to the problem that there is no obvious definition of "free space" in modern physics. At some level, you have to arbitrarily define the volume of a piece of matter. You can do this based on the statistical probability of locating a particle in a certain region of space, or on the distance over which a particle can exhibit a certain type of interaction, or some other equally arbitrary boundary. You get stuck at both ends, actually. Even when you're probing an empty vacuum, something you would consider "free space", there is always a probability of finding an electron that shouldn't be there. And if you probe right at the center of what you think is a proton, there is always a probability of finding nothing. Someguy1221 (talk) 00:20, 8 June 2012 (UTC)[reply]

Thanks, although you question my definition of free space (point taken), I question your definition of "nothing". Really, the electron that you find "shouldn't" be there?165.212.189.187 (talk) 12:53, 8 June 2012 (UTC)[reply]

Someguy is simply illustrating the problem with simplistic models when compared to the real world. In the context of a discussion about "empty vacuum", anything found there (such as the electron) "shouldn't" be there in terms of the model (else it's not empty). This is not a statement imputing motive to the electron, nor is it a statement that electrons shouldn't be found in nature, but rather a recognition that models tend to be imperfect abstractions. — Lomn 13:21, 8 June 2012 (UTC)[reply]

Acupuncture

Does acupuncture really work, or is it just a placebo effect? --108.227.31.161 (talk) 19:54, 7 June 2012 (UTC)[reply]

Most of the evidence is consistent with a very strong placebo effect. There's a lot of research outlined at Acupuncture. Some of the bigger findings include evidence that the location of needle placement is unimportant (evidence against the importance of meridians or particular points being associated with any particular malady) and that actual needles need not be used at all (poking, but not breaking, the skin with toothpicks performed as well as inserted needles). Add to that the fact that meridians and qi have never been shown to exist and that most of the successful trials involve only subjective outcomes (e.g., pain reports) and/or inadequate control groups, and the fingerprint is one of a placebo treatment. — Scientizzle 20:09, 7 June 2012 (UTC)[reply]
I'll add that "working" and "a placebo effect" are not necessarily exclusive. If a treatment can reliably and repeatedly achieve the desired effects, then it "works", even if we are pretty sure there is no valid underlying mechanism. SemanticMantis (talk) 20:12, 7 June 2012 (UTC)[reply]
I know the traditional reason for why acupuncture works is BS, but that doesn't automatically mean that it doesn't work. --108.227.31.161 (talk) 20:29, 7 June 2012 (UTC)[reply]
I've seen a summary of published research linking acupuncture to the release of endorphin. It was found that when test subjects were given something that blocks the action of opiates, the pain-relieving effect of acupuncture disappeared. I don't have a reference to the research but the endorphin article has a section that seems to say the same thing. So it seems that acupuncture is not (entirely) placebo. IMO, the reasoning of experiments using supposedly-sham acupuncture to show that acupuncture is based on placebo effect is flawed. The observations don't necessarily support the conclusion that the effects of acupuncture have no physiological basis. An alternate conclusion is that the traditional procedures and emphasis on the meridians are unnecessarily specific. It could be that the same physiological mechanism is triggered by the "sham" procedures. --98.114.146.125 (talk) 12:36, 8 June 2012 (UTC)[reply]
We do have an article on veterinary acupuncture, the subjects of which one might imagine would be less prone (although perhaps not immune) to the placebo effect. The article is dismally referenced, however, so all you can really take from it is that people think it works on animals too. But then people believe all kinds of nonsense. -- Finlay McWalterTalk 20:39, 7 June 2012 (UTC)[reply]
Pets can be prone to a kind of placebo-by-proxy. The owner thinks the pet should be getting better, so behaves differently, and that difference in behaviour makes the pet better (or seem better). You need a blind study, with the owner not knowing if the treatment has been done or not, to get useful results. I'm not sure if any of those have been done - our article doesn't say. --Tango (talk) 21:12, 7 June 2012 (UTC)[reply]
People, and animals, will often get better from a malady by themselves. A treatment may often be given the credit for an improvement in condition that would have happened anyway. This is one of the reasons why a double-blind placebo-controlled study is much, much better than anecdotal evidence. LukeSurl t c 22:19, 7 June 2012 (UTC)[reply]
If you are really interested and want to read more, the science based medicine blog has a number of very good posts evaluating acupuncture studies and their interpretations. Vespine (talk) 23:04, 7 June 2012 (UTC)[reply]
Strong OR notice here: I can not speak to whether acupuncture is legitimate science or a placebo effect, but having had numerous treatments, it has been effective for me in relieving pain, stress and other conditions.    → Michael J    06:20, 8 June 2012 (UTC)[reply]

I have had treatments and I liken it to the feeling you get after working out. They also say that the placement of the needles does matter because it causes a "micro wound" which triggers the white blood cells and other "healing" chemicals in our body to concentrate there to begin to heal that area.165.212.189.187 (talk) 12:58, 8 June 2012 (UTC)[reply]

I had acupuncture administered by my physiotherapist to treat a tear in my anterior supraspinatus tendon. The rationale was that tendons heal poorly due to limited blood flow, and inserting a foreign object triggers an immunoresponse that leads to an increase in blood moving into the area. 203.27.72.5 (talk) 07:10, 9 June 2012 (UTC)[reply]

Is this proof correct for the geometry file?

— Preceding unsigned comment added by Mitch the amateur scientist (talkcontribs) 21:18, 7 June 2012 (UTC)[reply]

It looks like a description of the square-cube law. But it doesn't make a formal claim, nor follow any kind of formal reasoning, so you can't really call it a proof. -- Finlay McWalterTalk 21:34, 7 June 2012 (UTC)[reply]
It's also abusing standard terminology: increasing a quantity "by addition" or "by factors (multiplication)" is not a clear or common way to phrase a geometric operation. Factorization has a very precise mathematical definition, explained in our article. Nimur (talk) 23:22, 7 June 2012 (UTC)[reply]
It looks like it is trying to say the increases are "additive" and "multiplicative", however it's all multiplicative just with different powers. So, it's neither a proof nor correct. I'm not sure what "the geometry file" means, either. --Tango (talk) 23:24, 7 June 2012 (UTC)[reply]
I think the person writing this appreciates that the area of a sphere is two-dimensional, and thus, when the radius is increased by a factor n ("additive"?) its area increases by a factor of n2. The "proof" is that he has sketched two perpendicular axes to represent this area. Of course, this falls short of a clear mathematical proof of the proposition, though someone's intuition is on the right track. I think it would be taken as more of a proof if he considers that the sphere can be divided into many little nearly-square sectors (square in the limit as it is divided up infinitely fine) in which case the proportionality to r and r2 can be proved plainly since they're a defined shape; otherwise the proof needs to incorporate a proof of the area of a sphere of a given radius. Wnt (talk) 11:25, 8 June 2012 (UTC)[reply]
  • proof of the Pythagorean theorem
    I would compliment the artist on the nice diagram, and point the OP to the notion of "proofs without words", such as this proof of the Pythagorean theorem. For more, see here [12], and/or google it. Indeed, the originally-posted image could be modified into an essentially rigorous proof, as others have suggested. Basically, it is not necessary to make formal verbal claims to have a "proof", if one is willing to rely on the reader's background a bit (after all, very few proofs are entirely self-contained). SemanticMantis (talk) 15:21, 8 June 2012 (UTC)[reply]
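The scaling under discussion (surface area goes as the square of the radius, volume as the cube) is easy to verify numerically; this short Python sketch uses arbitrary example values for the radius and scale factor:

```python
import math

def sphere_area(r):
    """Surface area of a sphere of radius r."""
    return 4 * math.pi * r ** 2

def sphere_volume(r):
    """Volume of a sphere of radius r."""
    return (4 / 3) * math.pi * r ** 3

r, n = 2.0, 3.0  # arbitrary starting radius and scale factor
# Scaling the radius by n multiplies the area by n^2 and the volume by n^3.
assert math.isclose(sphere_area(n * r), n ** 2 * sphere_area(r))
assert math.isclose(sphere_volume(n * r), n ** 3 * sphere_volume(r))
```

The same exponents appear for any shape, not just spheres, which is the content of the square-cube law mentioned above.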

Is Skin Cancer More Prevalent Now Than in the Past?

It seems that every summer we are warned of the harmful and dangerous effects of sunlight on unprotected skin. SPF numbers increase every year. Even the slightest exposure to sunlight is discouraged, often in a nearly hysterical tone.

What seems odd about this is that, until very recently, constant exposure to the sun was the norm for mankind. Whether it was building the pyramids, growing crops, sailing, etc., people spent their entire lives working in direct sunlight, without any more protection than clothing and a hat, if that.

I know that scientists, using modern medical technology, are able to determine the ailments of mummified Egyptian pharaohs. Is there any evidence that historic skin cancer rates were comparable to today’s cancer rates?Phidias007 (talk) 22:42, 7 June 2012 (UTC)[reply]

It might be interesting to look at the average age that skin cancer is likely to occur vs. the average length of life. That is, if the skin cancer rate were lower, it might be simply that something else bit them first. ←Baseball Bugs What's up, Doc? carrots22:54, 7 June 2012 (UTC)[reply]
My first thought was the same as Bugs', but according to our article on skin cancer people under 19 are the most likely to get skin cancer (at least children from the UK). The bigger factor perhaps is ozone depletion. Our ozone depletion article discusses the resultant skin cancer increase at length, but for some reason our skin cancer article only mentions ozone depletion once, in literally the very last sentence. Anonymous.translator (talk) 23:26, 7 June 2012 (UTC)[reply]
Oops, forgot to mention tanning beds as well. Anonymous.translator (talk) 23:28, 7 June 2012 (UTC)[reply]
You've misunderstood the article. It's comparing children in the UK with children elsewhere in Europe. There is no comparison between different ages (I've checked the source). I've reworded that bit of the article to make it clearer. My first thought was also the same as Bugs' and I suspect we are all correct. Serious infectious disease is so much rarer now that pretty much every other medical condition is more common than it used to be simply because more people are surviving long enough to get it. --Tango (talk) 23:37, 7 June 2012 (UTC)[reply]
My apologies. The wording was very confusing.Anonymous.translator (talk) 23:48, 7 June 2012 (UTC)[reply]
Northern Europeans, with their light skins, are far more vulnerable than any other population. It is only recently that large numbers of light-skinned Caucasians have lived at tropical and subtropical latitudes. See our melanoma article for more information. The ancient Egyptians were actually pretty dark-skinned. Looie496 (talk) 23:19, 7 June 2012 (UTC)[reply]
It wouldn't surprise me if skin cancer was quite prevalent in the past and people just ignored it. People get all sorts of harmless things in their skin, and get left with all sorts of scars after catching various poxes, that they wouldn't have noticed melanomas as anything remarkable. Melanomas usually only become a serious problem when they metastasise, and you're not going to know that the symptoms of the new tumour(s) have anything to do with the skin lesions. --Tango (talk) 23:41, 7 June 2012 (UTC)[reply]
We are told here in Australia that we have the highest prevalence of skin cancer in the world, largely because we've plonked a whole bunch of people with northern European ancestry into the sunniest continent on Earth. We've also had a sun, beach and outdoor sports loving culture for most of the life of this nation. And yes, we live longer than our ancestors, so the skin cancer has time to appear before we die of something else. HiLo48 (talk) 23:47, 7 June 2012 (UTC)[reply]
Another issue is that in pre-industrial times most Caucasians had constant exposure to the sun, so they built up tans gradually, and did not burn, which is what causes the most damage. Nowadays people live and work indoors and burn on weekends and holidays. μηδείς (talk) 03:18, 8 June 2012 (UTC)[reply]
Also, is it possible that skin cancer was confused with other diseases of the skin, in ancient times ? StuRat (talk) 03:56, 8 June 2012 (UTC)[reply]
Two broad thoughts: 1. Increasing SPFs is a matter of marketing as much as anything else. Beware of confusing advertising with reality. 2. Increased incidence has to be squared away with increased diagnostic ability or changed diagnostic criteria; just because you suddenly measure more of something doesn't mean its base rate has changed. --Mr.98 (talk) 11:47, 8 June 2012 (UTC)[reply]
Any data on UVA, UVB, and UVC energy [W/m²] over the last 100 years? That ought to give a serious hint. Electron9 (talk) 17:48, 8 June 2012 (UTC)[reply]
The amount of UV incident in a given area at the "top of the Earth's atmosphere" is a function of the sun's sunspot activity. This runs in cycles - over the last 100 years, the approx 11-year cycle on top of the gradual rise since the last Maunder Minimum. The variation due to this in the context of skin cancer is just about negligible. More important are the factors that affect attenuation of UV in the atmosphere. These are mainly the rise in particulate pollution since WW2, which decreases UV incident at the ground and is currently significant only in certain cities, and the depletion of the ozone layer in the last 20 years or so, which increases UV incident at ground level, more in some locations than in others. See http://en.wikipedia.org/wiki/Ozone_depletion, which does not include the actual UV increase due to ozone depletion, but does give data on the consequent increase in skin cancer. Incident UV affects the output of photovoltaic electricity generation. In some areas where ozone depletion is significant, output in recent years is of the order of 2 to 5% higher than expected, but in a quick search I could not find a definitive online reference. Wickwack124.178.139.104 (talk) —Preceding undated comment added 11:33, 9 June 2012 (UTC)[reply]

June 8

who edit Wikipedia in a sensitive subject... who is right ??

Hi: According to Wikipedia in English, on the page about "omega 3" they say that the good effects omega 3 brings to any mammal are really controversial... in fact they cite research saying that it is not really a big help for the human body.

In Wikipedia in Spanish, they say exactly the opposite: omega 3 is really a big help for the human body. Who is right?? Who edits these articles?? What if I am a mediocre doctor who just writes something that I learned 100 years ago?? Whom should I believe in a kind of article that involves a life risk?? Many thanks in advance. chau and sorry for my funny English — Preceding unsigned comment added by 186.2.50.237 (talkcontribs)

Because Wikipedia can be edited by anybody, you should always have doubt about what you see in a Wikipedia article. If you want to check, you should look at the sources that the article cites. If there are no sources, you should have a lot of doubt. Anyway, my understanding is that the English article is correct. (For convenience, our article is Omega-3 fatty acid.) Looie496 (talk) 03:02, 8 June 2012 (UTC)[reply]
It's difficult to evaluate the claims in the Spanish article as instead of citing sources, they simply provided links, and those links are now broken. Someguy1221 (talk) 04:10, 8 June 2012 (UTC)[reply]
... and I hope that even mediocre doctors don't use Wikipedia as their medical text! Dbfirs 07:28, 8 June 2012 (UTC)[reply]
The Spanish Wikipedia has horrible administrators; some of them revert all IP edits without reading them at all. Many requests are ignored. They take community decisions by votes instead of arguments... though I have to admit that there is a lot more vandalism there than here. 65.49.68.173 (talk) 16:28, 8 June 2012 (UTC)[reply]

Thanks to all of you. chau — Preceding unsigned comment added by 186.2.50.237 (talk) 04:41, 9 June 2012 (UTC)[reply]


I think the English Wikipedia is not so reliable on these sorts of medicine-related issues, because it gives far too much weight to the Institute of Medicine (IoM) reports; these reports are extremely conservative when it comes to accepting claims of benefit, while the burden of proof needed to include possible negative health effects is extremely low. While this may be a good thing for compiling reports meant as advice to health care workers, what we need on Wikipedia is a balanced approach, one that gives equal weight to equally reliable evidence. Count Iblis (talk) 17:53, 8 June 2012 (UTC)[reply]

Helium dihydride cation

Why can't this exist? I'm talking about the species HeH22+, isoelectronic with the trihydrogen cation.--Jasper Deng (talk) 05:38, 8 June 2012 (UTC)[reply]

It's too unstable. HeH+ is already the strongest known acid, i.e. it is more likely to dump a hydrogen ion than any other compound ever discovered. There are a handful of funky looking sources claiming you can get HeH+ to accept a hydrogen atom, but not a hydride ion. My guess is that even if you could, the second hydride would be dumped almost immediately. Someguy1221 (talk) 05:47, 8 June 2012 (UTC)[reply]
Careful. The relevant unit would be a simple proton (H+), not "hydride" (H−), which would instead give the neutral helium dihydride result. But the massive instability is certainly the key. Our helium hydride ion discusses the species mentioned by Someguy, and also evidence for and stability of HeH2+ and others in this monocationic series. One could certainly do some ab initio calculations on HeH22+ to see what would be happening there. DMacks (talk) 14:22, 8 June 2012 (UTC)[reply]
Whoops, my bad. Thanks DMacks. Someguy1221 (talk) 18:16, 8 June 2012 (UTC)[reply]
And just to prove I'm not just making stuff up, HeH22+ (CAS #12519-50-5) has been studied in this manner, and is unstable in normal situations but is stable in the presence of high magnetic fields (like "surface of a neutron star")--see doi:10.1103/PhysRevA.81.042503. Note that all refs I found consider it as a chain--not sure if exactly linear in all cases, but anyway more structurally related to beryllium hydride than the trigonal trihydrogen cation you mention. DMacks (talk) 14:46, 8 June 2012 (UTC)[reply]

Water on the line

I work 50m away from a railway line in SE England. Yesterday, after a day of rain, when a train passed along the line, there was a spectacular noise followed by plumes of steam that stretched for a couple of hundred metres and reached well above the tree line. I've lived and worked next to this stretch of track for 17 years and have never seen anything like this before. I have not heard on the news that a train-load of people have been fried on the London to Dover line yesterday afternoon, so was the train acting like a Faraday cage and were the passengers in any danger? 83.104.128.107 (talk) 15:51, 8 June 2012 (UTC)[reply]

There's some missing info in your question. Is this an electric train, powered by an electrical rail ? If so, the electricity would want to go towards earth/ground, and standing water by the non-charged rail, in conjunction with electrical connections between the rails through the train's undercarriage, might have allowed that. There would be little "motivation" for the electricity to go into the passenger compartments, so a Faraday cage isn't necessary. StuRat (talk) 17:20, 8 June 2012 (UTC)[reply]

What causes Brownian motion?

What makes gas molecules exhibit Brownian motion? Is it related to the electron shell? The proton/neutron core? Quarks? The energy at least intermediately has to be stored somewhere. Electron9 (talk) 16:15, 8 June 2012 (UTC)[reply]

The energy isn't stored anywhere. It's a manifestation of the thermal kinetic energy of the individual molecules impacting the larger object.
In simpler language, you should know that temperature is really a measure of the average kinetic energy of the individual molecules in a substance. For solids this is just molecules vibrating, but in a fluid like water or air the molecules are free to move around each other. So when you put a small enough particle in a fluid and look at it under a microscope, only a few molecules are going to be hitting the (relatively) larger object at any one time, and the odds are good that they won't be hitting symmetrically. Therefore they impart a small amount of net kinetic energy to the larger object, and it moves slightly. It is a random process, so averaged over infinite time there will be no net motion, but it can still jiggle the object around a great deal over short time scales. -RunningOnBrains(talk) 16:39, 8 June 2012 (UTC)[reply]
Short version of the above - Heat. Roger (talk) 16:51, 8 June 2012 (UTC)[reply]
No, it's not heat. Temperature and heat are very different concepts. 203.27.72.5 (talk) 07:22, 9 June 2012 (UTC)[reply]
If Brownian motion is plainly a manifestation of thermal vibration, what part of the atom is vibrating? The electron shell? The nuclei? Quarks? Electron9 (talk) 17:44, 8 June 2012 (UTC)[reply]
The entire atom or molecule. StuRat (talk) 17:47, 8 June 2012 (UTC)[reply]
Further: Any macroscopic analogy is going to be at least partly incorrect, because we are dealing with individual molecules and atoms here, which are subject to quantum mechanics. However, I will do my best.
In a gas or liquid, molecules are never still. They are constantly moving, at a velocity that can be predicted by the temperature of the gas. As a 2-dimensional analogy, imagine a whole bunch of billiard balls flying around a giant pool table, with no friction to slow them down. They will stay at a constant velocity until they hit either the walls of the table or another billiard ball, and then they will go off at a different velocity in a different direction. It is very nicely illustrated by the image I posted at right: You have to remember that there is no friction at this scale, so unless the material is cooled or warmed, the average thermal velocity of the molecules is going to stay the same. Now if you introduced a larger object to the table, say a bowling ball, it's going to be jostled by the constant collisions, and so if you were standing far enough away that you could only see the bowling ball, it would appear to be vibrating randomly, just like in Brownian motion. -RunningOnBrains(talk) 17:59, 8 June 2012 (UTC)[reply]
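RunningOnBrains's billiard-table picture can be sketched numerically. Below is a minimal 1-D toy model (my own illustration, not a calibrated physical simulation): at each time step the large object receives many small, randomly directed impulses, and because they rarely cancel exactly, the object jiggles.

```python
import random

def brownian_kicks(n_steps, kicks_per_step=50, kick=1.0, seed=0):
    """Toy 1-D Brownian motion: at each step the large particle
    absorbs many small +/- impulses from surrounding molecules."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        # the sum of many random +/- kicks rarely cancels exactly,
        # so the particle drifts a little each step
        x += sum(rng.choice((-kick, kick)) for _ in range(kicks_per_step))
        path.append(x)
    return path
```

Averaged over many runs the displacement is near zero, but the mean squared displacement grows roughly linearly with time - the classic signature of Brownian motion.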
Thanks! (maybe should be added to the article), does thermal vibration cause the nuclei and electron shell to vary their distance to each other? ie will the atom deform in some way like air does for sound? Electron9 (talk) 19:43, 8 June 2012 (UTC)[reply]
Not really, but here we're really getting to the point where macroscopic analogies break down, because electrons really aren't in one place at any time, and electron shells aren't really a physical object: see electron cloud. I am also probably extremely unqualified to speculate on the exact quantum mechanical processes which take place when two atoms collide; it probably depends strongly on which atoms we're talking about. But it can be safely described as a purely elastic collision for the point of describing Brownian motion.-RunningOnBrains(talk) 20:09, 8 June 2012 (UTC)[reply]
Actually, I disagree with Runningonbrains here. Absolutely, thermal vibration causes variations in the position of the different atomic particles relative to one another. As Runningonbrains rightly pointed out, the (average) velocity of the particles that make up an object can be predicted from the object's temperature. The velocities of the individual particles vary according to the Maxwell-Boltzmann distribution. The deformation of the atoms or molecules leads to a higher energy state, i.e. electrons repel one another, altering the shape of their orbitals. These high-energy states are relatively unstable, and if a more stable configuration can be assumed, then it will be. Many molecules that decompose do so more quickly at higher temperatures. This can be modelled by saying that those molecules that have high energies are deformed by the motion of their constituent particles and assume a more stable state by breaking chemical bonds. The number of molecules with high energies is a function of temperature, as predicted by the Boltzmann distribution. Electrons can even be heated so much that they leave the atom altogether, as in a thermal plasma. 203.27.72.5 (talk) 07:49, 9 June 2012 (UTC)[reply]
That would mean that the atomic nuclei (protons-neutrons) will deform in a plasma just like the electrons do at a lower temperature? Electron9 (talk) 08:27, 9 June 2012 (UTC)[reply]
Somewhat more important in practice is that some thermal energy is stored in the rotation of molecules at temperatures above about 600-700 K (i.e. angular momentum, as well as the linear momentum mentioned above), and in the lengthening of inter-atomic bonds with increasing temperature, which further increases the fraction of heat energy stored in rotation. This is evidenced by the fact that noble gases show specific heat independent of temperature, but large molecules have considerable variation in specific heat throughout the measurable temperature range. Only the fraction of heat energy stored in linear momentum drives Brownian motion. Wickwack124.178.139.104 (talk) 11:42, 9 June 2012 (UTC)[reply]

A side question: would a tube between two vessels, with a diameter just slightly larger than a single atom (or molecule), a length significantly less than the average Brownian motion distance, and a funnel on one side, make more atoms move to one side than the other? Especially when the mol/m³ is low. Electron9 (talk) 19:43, 8 June 2012 (UTC)[reply]

No. You're proposing a variation of Maxwell's demon, which violates the second law of thermodynamics (in this case, by creating a pressure and temperature gradient). The Brownian ratchet may also be of interest. — Lomn 20:01, 8 June 2012 (UTC)[reply]

Radiation

Do objects which absorb radiation reemit it? How does this work? — Preceding unsigned comment added by 176.250.228.38 (talk)

It depends a lot on what type of radiation you're talking about. All kinds of matter absorb and emit all kinds of radiation all the time. It is a continuous process. --Jayron32 18:54, 8 June 2012 (UTC)[reply]
Most things which absorb radiation will then give off radiation as well, although it may very well be a different form of radiation. For example, if you shine visible light (one form of radiation) onto a black object, it will radiate the energy back out, not as visible light, but as infrared light/heat. StuRat (talk) 19:02, 8 June 2012 (UTC)[reply]
There are chemical processes in which the electron shells surrounding atoms absorb radiation at one wavelength and then re-emit it at another wavelength - this is called fluorescence. And there are processes in which radiation is absorbed and its energy is converted into a different form - see pair production, photoelectrochemical processes, photosynthesis, photoelectric effect, photovoltaic effect, concentrated solar power. Gandalf61 (talk) 10:18, 9 June 2012 (UTC)[reply]

Rules of science

Often in Biology, not everything follows the known rules we've established. Can the same thing be said for physics & chemistry? 176.250.228.38 (talk) 20:00, 8 June 2012 (UTC)[reply]

Yes. I know in chemistry, there are lots of experiments that don't do what a hypothesis (based on literature precedent for similar experiments) says they "should"--exceptionally low yield, different geometric form, different part of a complex molecule reacts, nothing happens at all, or a totally different reaction occurs instead. There are probably a near-infinite number of combinations of experiments that would follow some not-yet-detected pattern, or an observed pattern that does not have any known underlying cause, where the data is all "out there" but nobody has even looked yet (i.e., to explain apparently random variations in yield, etc.). DMacks (talk) 20:06, 8 June 2012 (UTC)[reply]
This answer isn't exactly right. The answer to your question really depends on what question you're asking.
If you're asking if there are exceptions to the established laws of physics and chemistry, the answer is no. You're not going to get gravity to be different from one experiment to the next, and you're not going to get sodium and chlorine to react and form anything other than sodium chloride. They are called "Laws" for a reason.
However, if you're asking whether or not experiments can produce unexpected results, the answer is most definitely yes. You can never (probably) have an experiment that is completely controlled, where you know every bit of information about the initial conditions. You can get a different yield than you were expecting from a reaction, but this would be due to some contaminant you didn't know about, or some environmental factor that was different, like temperature or moisture in the air. Maybe the yield is highly sensitive to the initial ratio of reactants, and the experiment didn't control that ratio precisely enough. Maybe even some unforeseen quantum mechanical effect could change the expected results, if it's an especially complicated chemical reaction. Or, incredibly rarely, maybe your physics experiment has discovered a whole new particle or effect.
However, if you do exactly the same experiment every time, you will get exactly the same result. The reason that biology is such a messy science with many unexpected results is that there are just too many unknown factors to take them all into account; an organism is unimaginably complex, certainly not as easy to describe with simple laws as E=mc2 and F=ma.-RunningOnBrains(talk) 20:22, 8 June 2012 (UTC)[reply]
Another important point is that science is neither a set of laws, nor a primarily deductive enterprise. Science is a method for inductive reasoning. --Stephan Schulz (talk) 20:31, 8 June 2012 (UTC)[reply]
Running, your second third last paragraph sounds like determinism, which is not a necessary component of science, and indeed is contrary to some widely held interpretations of quantum mechanics. --Trovatore (talk) 20:36, 8 June 2012 (UTC)[reply]
Deterministic probabilities ? :-) Electron9 (talk) 21:52, 8 June 2012 (UTC)[reply]
I guess I interpreted the OP's question as a macroscopic sort of thing. You're right in that there is always some QM-related uncertainty (even if minuscule at macroscopic levels); but those would not be unexpected to an experimenter, and certainly follow the "known rule" as the OP put it. I would say that my final sentence above really sums up my point; maybe I should've just stuck to that :) -RunningOnBrains(talk) 22:46, 8 June 2012 (UTC)[reply]
It depends a lot on what the OP means by "rules". Most scientific "rules" are actually models of some sort, and all models are approximations of reality, so there will always be real examples that lie outside of the predictions of the model. --Jayron32 03:04, 9 June 2012 (UTC)[reply]


As a former analogue electronics engineer, now involved in certain aspects of chemistry, I must say there is an immense difference between electronics and chemistry at an engineering/design level. In electronics, everything is ultimately based on a limited number of component parts - resistors, capacitors, inductors, conductors, and active devices (transistors etc). These devices behave according to well-established simple laws - so simple that a 12-year-old can, if sufficiently interested, design a stereo (I did when I was 12). Real parts don't exactly follow these laws, but they are close enough. Understand those laws properly, and you can understand anything in analogue electronics.
Chemistry is very different. The "almost fundamental" component parts of chemistry are the atoms. The behaviour of atoms in any situation can (in theory) be predicted by quantum mechanics. In practice, that's just too hard, so the "laws" of chemical engineering are fortuitous theories like the kinetic theory of gases, and the theories of chemical kinetics with regard to reaction rates. These theories have so many gaps, exceptions, and approximations that, to a former electronics engineer, it is very frustrating. To calculate current in a circuit, I can always do that to at least 3 figures of accuracy - 6 figures, if I need it, is not hard. To calculate the rate of a chemical reaction, chemists are doing well if they get within the correct order of magnitude. As a further example, the kinetic theory of gases purports to give an understanding of specific heat (thermal capacity) and how it varies with temperature. It accurately gives the specific heat for noble gases (but who cares), and is roughly right for low-valency atoms, but seems to be very inaccurate otherwise.
In electrical engineering, if, say, a power company wants a $100M EHV transmission line and distribution system, the engineers do some calculations, order the materials, get it built, and it will work just fine. In chemical engineering, if a company wants a new $100M processing plant, the engineers do some calculations, scour the world for somebody who has done something like it, tweak the calculations, then build a pilot plant and muck about with it until it works. Then, with that experience, do more calcs, scale it up, then order all the materials etc and build the BIG ONE. Then sometimes find out it doesn't work at all well, and $100M has been wasted.
In short, chemistry must conform to valid scientific laws, but those laws are too difficult, so in practice, rough semi-empirical approximations are used. And things don't always go according to plan.
If you understand "basics" like chemical kinetics (and that is not at all easy), you still may not be able to understand real-world applications. By "understanding" I mean being able to calculate and predict accurately what will happen.
Ratbone124.182.45.112 (talk) 03:34, 9 June 2012 (UTC)[reply]
It's a subtle and important point - every science has different objectives to deal with different subject matter. Physics sets hard-and-fast absolute rules of what is impossible. Chemistry is more about figuring out what is practical. And biology is a science of the possible, where every "rule" has an exception. The continuum extends further in disciplines like psychology, where there is doubt if it is even a science, and perhaps, even to the tropes of fictional writing. Wnt (talk) 15:26, 9 June 2012 (UTC)[reply]

cephalosporin

are there any once a day oral cephalosporin antibiotics?--Wrk678 (talk) 22:56, 8 June 2012 (UTC)[reply]

Mercuroketones

Are any of these possible?

Can any mercuroketones (compounds containing a C=Hg double bond), such as those in the image, exist? Whoop whoop pull up Bitching Betty | Averted crashes 22:58, 8 June 2012 (UTC)[reply]

I don't really think so, because good luck getting mercury to hybridize its s and/or d orbitals to allow a covalent bond.--Jasper Deng (talk) 03:18, 9 June 2012 (UTC)[reply]
Look at Oxymercuration reaction in which a cyclic mercurinium ion is formed, Hg has lost an electron and has three bonds, two with adjacent carbons. Still no double bond though, and I could find no evidence of it on google either. Graeme Bartlett (talk) 07:28, 9 June 2012 (UTC)[reply]
Mercury can definitely form double bonds with oxygen, I don't know about carbon though. Plasmic Physics (talk) 12:02, 9 June 2012 (UTC)[reply]
"Mercuroketone" is probably not a good term for it, since mercury is not very electronegative. More likely "methylene mercury" (or other coordination/description of the carbon part, as usual for ligands on metals) or a "mercury carbene" (161 Google hits) complex or something like that. Hg+=CH2 (an apparent Hg(III) species), CAS#1234574-43-6, has been studied theoretically. But I'm also seeing lots of examples where your type of connectivity is written as a Hg(II) ylide, for example Hg+–CH2−, rather than a Hg=C double bond. As for some of the coordination examples, you have to be careful not to exceed an electron count of 18 (the transition-metal analog of the octet rule used in main-group elements). DMacks (talk) 14:42, 9 June 2012 (UTC)[reply]

June 9

"35dB-90dB[μV]" equal to 56 - 31000 mV ?

On the page "How to use the booster." it is said that "35dB-90dB[μV]" is the necessary voltage level for a 75 Ω antenna signal. Is that equivalent to 56 - 31000 mV? Electron9 (talk) 01:54, 9 June 2012 (UTC)[reply]

No, you are 3 orders of magnitude out. To calculate dB[μV], take the log of the voltage in microvolts and multiply by 20; thus 56 mV corresponds to 95 dB[μV] and 31000 mV (31 V) corresponds to 150 dB[μV]. 35 dB[μV] corresponds to 56 μV. 90 dB[μV] corresponds to 31.6 mV. However, 35 dB[μV] is rather high for the required input at the terminals of a TV set. A modern analogue TV should get a good picture with 20 dB[μV] or even less. 35 dB[μV] would be good at the input to the antenna distribution cable system in a high-rise building, where there is significant loss in the cable runs and in the splitters. A digital TV should in theory do rather better, but in practice you need to allow a good margin to avoid dropouts and freezes. Keit120.145.6.122 (talk) 06:36, 9 June 2012 (UTC)[reply]
Like I suspected then, at least the multiplier 20 was correct. Btw, do you have any sources regarding the 20 dB[μV] level? maybe there's even a general difference between analog (CVBS) and digital (DVB) in regard to minimal signal strength? Electron9 (talk) 06:49, 9 June 2012 (UTC)[reply]
I answered from memory. However, a quick web search turned up this paper (as well as a lot of useless chat rooms about TV!), which seems to cover things quite well: http://www.eecs.berkeley.edu/~sahai/Presentations/Dyspan_2005_tutorial_part_I.pdf. On page 82 it gives the good picture minimum level for a digital TV as -85 dBm (dBm is an impedance-independent measure referenced to 1 mW). This corresponds to 15.4 μV across 75 Ω, i.e., 23 dBμV. Keit120.145.6.122 (talk) 11:03, 9 June 2012 (UTC)[reply]
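The arithmetic in this thread can be wrapped in a few helper functions (a sketch; the 75 Ω impedance and the dB definitions are exactly as described above):

```python
import math

def v_to_dbuv(volts):
    """Voltage -> dB relative to 1 microvolt: 20 * log10(V / 1 uV)."""
    return 20 * math.log10(volts / 1e-6)

def dbuv_to_v(dbuv):
    """dB[uV] -> voltage in volts."""
    return 1e-6 * 10 ** (dbuv / 20)

def dbm_to_v(dbm, impedance=75.0):
    """dBm (power referenced to 1 mW) -> RMS voltage across an impedance."""
    watts = 1e-3 * 10 ** (dbm / 10)
    return math.sqrt(watts * impedance)
```

For example, dbuv_to_v(35) gives about 56 μV and dbuv_to_v(90) about 31.6 mV, matching Keit's figures, while dbm_to_v(-85) gives about 15.4 μV across 75 Ω as in the Berkeley slides.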

Why is the oldest person always 114 years old?

For some years now, every time the allegedly oldest person in the world dies, the person's age has been reported to be 114. Just today we saw this item, saying this woman was the oldest person in Europe, and she died today at the age of 114. Why always that same age and never 113 or 115? Michael Hardy (talk) 02:38, 9 June 2012 (UTC)[reply]

It's often 115, that number you saw was just for Europe. I would argue it's simply a statistical issue. If you look at the US Social Security Administration's most recent actuarial table, they calculate the probability of making it from 115 to 116 to be only 25%. And making it from 114 to 115 is only a measly four percentage points better. So I would look at that and say that you start with a fixed population maximum of people born in 1897, and have that population experience greater-than-exponential decay from age 10 onward (it is less-than-exponential prior to that). The reason the "oldest person at the moment" is almost always 115 is that the decay function, although a bit noisy at those ages, would predict less than one survivor for all ages past 114. You'll see that creep up in the future as the starting population for each given year is increasing, as is post-adulthood life expectancy. Someguy1221 (talk) 02:53, 9 June 2012 (UTC)[reply]
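Someguy's back-of-the-envelope argument can be made concrete. The sketch below uses illustrative per-year survival probabilities: only the 114→115 (≈29%) and 115→116 (25%) figures come from the comment above; the earlier entries are made-up placeholders in the same spirit, not real actuarial values.

```python
# Illustrative per-year survival probabilities at extreme ages;
# only the 114 and 115 entries echo the figures quoted above.
SURVIVAL = {110: 0.50, 111: 0.45, 112: 0.40, 113: 0.35, 114: 0.29, 115: 0.25}

def expected_survivors(alive_at_110, to_age):
    """Expected number still alive at to_age, starting from a cohort at 110."""
    n = float(alive_at_110)
    for age in range(110, to_age):
        n *= SURVIVAL[age]
    return n
```

With, say, 100 people alive at 110, the expected count is about 3 at age 114 but drops below 1 at 115 - which is why the reigning "oldest person" so often dies at 114 or 115.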
I have seen data suggesting that the maximum life expectancy is 130-150 years for a human, so 114 years is getting close. And thus the deterioration of the body is likely becoming exponential. Electron9 (talk) 03:02, 9 June 2012 (UTC)[reply]
Now, modifying the human might allow indefinite lifespan, like genetically rebuilding it and adding repair nanobots. So I'm assuming they're talking about without genetic engineering, cyborgization, or reanimation. (perfect preservation already existing, in the form of liquid nitrogen). So how do they propose living 27.5 years over the record? Calorie restriction? Sleeping through most of it?(/coma/hibernation/near death) Hysterectomy? Sagittarian Milky Way (talk) 18:54, 9 June 2012 (UTC)[reply]
You must be a young whipper snapper if you don't remember la chère Jeanne Calment. See list of the oldest verified people. μηδείς (talk) 03:15, 9 June 2012 (UTC)[reply]
She was a pistol. Her comments about Van Gogh are especially funny. ←Baseball Bugs What's up, Doc? carrots05:25, 9 June 2012 (UTC)[reply]
I guess the disagreement is probably due to low sample size, but this study suggests that mortality per year (i.e. chance of dying in any given year of life if you live that long) is 50% from age 110-115, and they speculate that that number may even increase beyond this age. -RunningOnBrains(talk) 04:41, 9 June 2012 (UTC)[reply]
The small sample sizes are, indeed, a problem. The mortality tables I use professionally are only based on actual data up to age 95. After that, there just isn't enough data to get robust results, so they arbitrarily extrapolate from age 95 up to age 120, which they set as having a mortality rate of 100%. A detailed explanation of the process can be found here (be warned, it is quite technical). --Tango (talk) 16:31, 9 June 2012 (UTC)[reply]

Baeyer's Reagent

Isn't Baeyer's reagent an alkaline solution of potassium permanganate? The article on it states it to be neutral. — Preceding unsigned comment added by Roshan220195 (talkcontribs) 10:21, 9 June 2012 (UTC)[reply]

I guess that it is a matter of time. A fresh solution of potassium permanganate should be neutral. As time passes, the permanganate decomposes slowly; as it does, the solution becomes more and more alkaline. Plasmic Physics (talk) 10:57, 9 June 2012 (UTC)[reply]
Plasmic Physics is right. To add the numbers to show why: A pure solution of potassium permanganate should be neutral, because potassium hydroxide is a strong base and permanganic acid is a strong acid, with a pKa of -2.5[13]. The salt of a strong base and a strong acid always forms a neutral solution. However, as the permanganate ion decomposes to the manganate ion, manganic acid has a pKa of about 7.4, making it a weak acid. --Jayron32 12:40, 9 June 2012 (UTC)[reply]
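Jayron's claim can be checked with the usual weak-base approximation: the conjugate base of a weak acid hydrolyzes, so [OH−] ≈ √(Kb·C) with Kb = Kw/Ka. A sketch (the 0.1 M concentration is an arbitrary illustration; the pKa of 7.4 for manganic acid is the figure quoted above):

```python
import math

KW = 1e-14  # water autoionization constant at 25 C

def ph_of_basic_salt(conc_mol_per_l, pka_of_parent_acid):
    """Approximate pH of a salt of a weak acid and a strong base,
    using [OH-] = sqrt(Kb * C), where Kb = Kw / Ka."""
    ka = 10 ** -pka_of_parent_acid
    kb = KW / ka
    oh = math.sqrt(kb * conc_mol_per_l)
    return 14 + math.log10(oh)
```

For 0.1 M manganate (pKa 7.4) this gives pH ≈ 10.2, i.e. noticeably alkaline. Applying the same formula to permanganate's pKa of −2.5 gives an [OH−] far below water's own 10⁻⁷ M, meaning hydrolysis is negligible and a fresh permanganate solution stays essentially neutral, as Jayron says.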
What is the decomposition mechanism for permanganate? O=[Mn-](=O)(=O)=O.O=[Mn-](=O)(=O)=O → O=[Mn-](=O)(=O)OO[Mn-](=O)(=O)=O → O=O.O=[Mn-](=O)=O.O=[Mn-](=O)=O ? Plasmic Physics (talk) 13:17, 9 June 2012 (UTC)[reply]
It isn't really decomposition, it's oxidation: The permanganate will oxidize just about anything, producing manganate and some sort of oxide, or elemental oxygen. The manganate will spontaneously disproportionate to permanganate and manganese dioxide, so given any trace reductant, there should develop an equilibrium between manganate, permanganate, and manganese dioxide which will account for the rising pH. --Jayron32 15:41, 9 June 2012 (UTC)[reply]

Electronic eavesdropping

How does one find a bug that has been put in a house or car? Kittybrewster 11:40, 9 June 2012 (UTC)[reply]

Make a simple visual search first of places where things can be quickly hidden. Then search using a radio scanner. Set the scanner sensitivity fairly low so you don't waste time on legitimate radio transmissions. See http://en.wikipedia.org/wiki/Scanner_(radio). Scanners are very good at picking up simple radio-transmitter bugs because when they reach the right frequency while auto-scanning, you'll hear your own voice(s), and/or the scanner will howl. Bugs placed by professional outfits or government agencies sometimes display great ingenuity and cannot be found with scanners. Books have been written about this. I'm not trying to imply anything about your good self, but be aware that there is a common psychological condition, often occurring in people who are otherwise normal, and sometimes brought on by stress, where people believe that they are being spied on when they are not. Sometimes businessmen think they are just so darn good that the opposition must surely be spying on them. Very, very few actually do so with bugs. Most industrial intelligence is obtained quite legally and simply by employing specialist researchers to scan documents in the public domain. I've learnt what I needed to know about the competition by sharing a beer in a pub, combined with monitoring employment adverts and press releases. Did you check our wiki article http://en.wikipedia.org/wiki/Covert_listening_device ? Also be aware that it is possible to write computer malware that activates the microphone in a laptop and monitors your voice, as well as your keystrokes, without you knowing, over the internet. Always have reputable computer security installed, and make sure your software firewall is set up with optimised port restrictions. If your PC has Vista, that's good - make sure that installing software requires an administrator password. Wickwack124.178.139.104 (talk) 11:55, 9 June 2012 (UTC)[reply]
Radio scanners will miss spread spectrum digital burst transmissions. Use a digital spectrum analyzer to find bugs. As for computers, disable recording devices like microphone and webcams physically and audit software. If you use any Microsoft software your computer is f-cked by design. Electron9 (talk) 18:53, 9 June 2012 (UTC)[reply]
Note that radio scanners only work for devices which transmit radio signals. Other approaches are a hardwired bug, with the wires going outside the house to where somebody taps into them, a device that records and is retrieved later, or one that uses existing communication channels, like the phones lines, cable, wireless internet, cell phones, etc., to transmit signals. StuRat (talk) 18:52, 9 June 2012 (UTC)[reply]
Measuring vibrating glass is not that uncommon a technique. Electron9 (talk) 18:55, 9 June 2012 (UTC)[reply]

Crepuscular rays

I saw some interesting atmospheric phenomena the other day (gallery is here; I've included all the shots I took, but the first, fourth, and fifth are the best). So, I'm assuming these are some kind of crepuscular rays, but the shape of them is what I'm curious about. I've seen crepuscular rays on countless occasions, of course, but the shortness of these is something new to me. The sky looked like someone had gotten crazy with a clone brush. What exactly is at work here? Are the rays only showing up in places where there's a certain amount of humidity/water vapour and then disappearing in the drier air below? These pictures were taken around 8am, facing (roughly) east; the sun is off-frame to the left. As you can see, there was a variety of clouds out that morning. I'm afraid all I had on me was my iPhone, so the quality is less than ideal. The images have not been manipulated in any way (other than the standard jpeg compression). Matt Deres (talk) 13:26, 9 June 2012 (UTC)[reply]

The weird thing is, sun rays (crepuscular rays) are lighter than the background, while yours appear darker. I might say they were smoke in the upper atmosphere blown into lines, but that doesn't explain why they would appear to radiate from the Sun. StuRat (talk) 18:47, 9 June 2012 (UTC)[reply]

How many times did sex evolve independently?

How many times did sex evolve independently? 82.31.133.165 (talk) 17:17, 9 June 2012 (UTC)[reply]

Are you sure that it did ? (As opposed to only evolving once and being passed down to all species which evolved from that one.) Our evolution of sexual reproduction article says, in the 2nd sentence, that "All sexually reproducing organisms derive from a common ancestor which was a single celled eukaryotic species.", and provides source(s) to back up that claim. StuRat (talk) 17:40, 9 June 2012 (UTC)[reply]
That sentence by itself wouldn't mean much more than that all sexually reproducing species are eukaryotes. More strongly, it seems very unlikely that meiosis -- the special type of cell division involved in sexual reproduction -- evolved more than once, since it requires a large number of special mechanisms in order to happen, and as far as I know those mechanisms are always implemented in essentially the same way. Looie496 (talk) 19:06, 9 June 2012 (UTC)[reply]

What is: "colonic sorting" in biology?

thanks.

As I understand it, it's a mechanism by which the colon of certain animals such as rabbits separates small particles and fluids from larger, less digestible particles. Looie496 (talk) 18:56, 9 June 2012 (UTC)[reply]

a simple table which sums up all types of reproduction?

1.monoecious: types, dioecious: male-female, male or female with intersex. thanks.

I don't understand how to mount the camera on a barn door tracker

Doesn't the angle between the camera body and the tracker matter? What are the guidelines for tilting and aligning the camera, once the polar finder has been aligned with the poles? 76.104.28.221 (talk) 19:10, 9 June 2012 (UTC)[reply]

  1. ^ NCERT Class 11 Chemistry textbook, Part One, p. 171. http://ncert.nic.in/NCERTS/textbook/textbook.htm?kech1=0-7