
Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 59.103.70.227 (talk) at 12:31, 6 January 2009 (Definition of "Life": new section). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the science section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


December 31

Best exercise for endorphins

What's the best type and program of indoor exercise for generating endorphins (and, ideally, continuing to do so for the next couple days)? I have access to a high-end campus gym. Will caffeine increase or decrease the effectiveness, taking into account that it will allow me to work out harder? Also, can I expect a change in endorphin levels to affect my academic performance? NeonMerlin 01:12, 31 December 2008 (UTC)[reply]

Have you read Endorphin? Lisa4edit (talk) 01:52, 31 December 2008 (UTC)[reply]
I have, and don't see an answer there. NeonMerlin 04:24, 31 December 2008 (UTC)[reply]
I think the whole "endorphin high" is sensationalised by the media. Ask an opioid addict what a real endorphin high feels like, because a bit of running isn't really going to make the difference to your brain that pop-science has made out. --Mark PEA (talk) 14:09, 31 December 2008 (UTC)[reply]
I agree with Mark and would also suggest that the type of exercise is going to make no noticeable difference to how you feel, but how often you take exercise might do. Ideally you will move around at different times during the day. Itsmejudith (talk) 17:56, 31 December 2008 (UTC)[reply]
Even though I feel that it certainly is sensationalised by the media (you probably won't get a big high every time you exercise), there is truth to the "endorphin high" concept. I myself feel happy and have more energy after I go for a strenuous run or swim, and several people I know who exercise say they feel the same way after doing moderate to strenuous exercise. Scientists have also shown that people who exercise (runners especially) have more endorphins and more anandamide (a chemical that mimics the effects of marijuana) in their bloodstream than those who don't.--Apollonius 1236 (talk) 23:15, 3 January 2009 (UTC)[reply]
Horizontal folk-dancing? Mattopaedia (talk) 03:46, 1 January 2009 (UTC)[reply]
Do you mean sex, Matt? I agree. Axl ¤ [Talk] 12:16, 2 January 2009 (UTC)[reply]
Absolutely!! It's probably the only form of exercise most people would look forward to the prospect of doing almost continually for a couple of days. ;-) Mattopaedia (talk) 22:19, 2 January 2009 (UTC)[reply]

One way to feel the effect of endorphins is to cry (hear terrible news), then eat a bar of chocolate, then jog for 10 mins. Polypipe Wrangler (talk) 04:05, 3 January 2009 (UTC)[reply]

I heard the best types of exercise for endorphins are ones that involve endurance and long periods of time working out such as running, swimming and cross-country skiing.--Apollonius 1236 (talk) 18:12, 3 January 2009 (UTC)[reply]

As for the question at the end about academic performance, I don't know if any study has been done to test possible correlation between the two. Keep in mind that "academic performance" is a broad area covering a wide variety of totally different skills, including memorization, writing ability, time management, etc. Off the top of my head, I would imagine that endorphin levels would mainly help with the social side of academic life (things like taking part in class discussions), but might actually distract from focusing on the other stuff — from my own experience of "runner's high", I would guess that my memorization ability was adversely affected. On the other hand, if you suffer from any depression, then you're already starting out at a "negative" level, and regular aerobic exercise, along with all the other stuff the experts recommend (a good diet, regular sleep, psychotherapy, etc), is a great way to bring your "happiness level" up to "normal" (whatever that is). Lenoxus " * " 15:54, 6 January 2009 (UTC)[reply]

Pressure reducing devices in city water mains

My friend works for the water company in my city, and she was telling me about these large machines that they have at certain locations along the water mains pipelines, which reduce the pressure. So, as she says, water enters it at one pressure (she measures pressure in feet) and exits it at a lower pressure. She couldn't explain how it worked, unfortunately, which was frustrating because I can't understand how such a device could possibly work. Wouldn't some other property of the water also have to be changed, as all the various properties are tied together by Bernoulli's law and other such fluid mechanics principles? Cheers, Maelin (Talk | Contribs) 01:42, 31 December 2008 (UTC)[reply]

A venturi tube? Changing the diameter of the pipe can change the pressure in the pipe, if the net flow of water is conserved. Also, water pressure will drop as various feeds tap off from the water-main, splitting the flow. Also, changing absolute elevation of the pipe (such as following the contour of a hill, or just changing the depth that the pipe is buried) can decrease pressure. Nimur (talk) 01:48, 31 December 2008 (UTC)[reply]
Reduced pressure zone device isn't explicitly built into the mains to reduce pressure, but does so anyway. (It's there to keep your swimming pool water from flowing back into the line.) Lisa4edit (talk) 02:00, 31 December 2008 (UTC)[reply]
Here are a couple of links that I found here and here that have some diagrams and animations. The basic principle appears to involve a valve that opens towards the high-pressure (inlet) side, with some kind of diaphragm on the outlet side that controls the movement of the valve. As the outlet pressure increases, the diaphragm moves outward and the valve moves towards the closed position. As the outlet pressure decreases, the diaphragm moves inward and the valve moves towards the open position. I'm not sure if the same principles apply to large-scale devices in water distribution systems. I'm no expert in this field (not even a novice). I'm surprised there is not a Pressure reducing regulator article. (To the fluid mechanics experts out there: Here's your chance to contribute!) -- Tcncv (talk) 04:02, 31 December 2008 (UTC)[reply]
There is a pressure regulator article, but it has few details on how they work. --Heron (talk) 18:01, 31 December 2008 (UTC)[reply]
The basic mechanism is pretty simple: on the low-pressure side, you've got a reservoir that, through a spring or other mechanism, maintains the desired pressure. Separating it from the high-pressure side is a valve that only opens if the pressure on the low-pressure side is below the target pressure. --Carnildo (talk) 23:50, 31 December 2008 (UTC)[reply]
[1] Kittybrewster 19:06, 2 January 2009 (UTC)[reply]
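For anyone who wants to see why the outlet side settles near the set point no matter how high the inlet pressure is, here is a toy numerical sketch of the spring-and-diaphragm feedback described above. The constants (set point, flow coefficient, downstream demand) are made up purely for illustration and are not taken from any real valve.

```python
# Toy model of a spring-loaded pressure-reducing valve: the valve opening
# shrinks as the outlet pressure approaches the set point, so the outlet
# settles near the target regardless of the inlet pressure.

def simulate(p_inlet, p_set, steps=500):
    p_out = 0.0                      # outlet-side pressure (arbitrary units)
    for _ in range(steps):
        # Diaphragm/spring: opening is proportional to how far the outlet
        # pressure sits below the set point, clamped between 0 and 1.
        opening = min(max((p_set - p_out) / p_set, 0.0), 1.0)
        inflow = opening * (p_inlet - p_out) * 0.1   # flow through the valve
        outflow = 0.05                               # constant downstream demand (assumed)
        p_out += inflow - outflow
    return p_out

for p_inlet in (50, 100, 200):       # widely different mains pressures
    print(p_inlet, "->", round(simulate(p_inlet, p_set=30.0), 1))
# In each case the outlet converges just below the 30-unit set point.
```

The point is only the feedback loop: higher outlet pressure closes the valve, lower outlet pressure opens it, so most of the inlet pressure is dropped across the valve itself.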

Laser-based short sight correction - what is the downside?

This question has been removed. Per the reference desk guidelines, the reference desk is not an appropriate place to request medical, legal or other professional advice, including any kind of medical diagnosis, prognosis, or treatment recommendations. For such advice, please see a qualified professional. If you don't believe this is such a request, please explain what you meant to ask, either here or on the Reference Desk's talk page.
This probably falls under the category of medical advice. We can direct you to laser eye surgery and cataracts, but you should primarily rely on your optician, optometrist, or other medical professional. If you need a second opinion, you should definitely stick to trained medical professionals. Nimur (talk) 04:32, 31 December 2008 (UTC)[reply]
LASIK may have answers for some of your questions - WikiCheng | Talk 04:34, 31 December 2008 (UTC)[reply]
Myopia might also help you see more clearly (in a purely proverbial sense ;-)). Lisa4edit (talk) 04:48, 31 December 2008 (UTC)[reply]

Reactivity of Salt

Reactivity of Sodium formate and calcium chloride salt towards water? —Preceding unsigned comment added by HairulanuarMohdzin (talkcontribs) 09:40, 31 December 2008 (UTC)[reply]

At any normal temperature, the water will dissolve these chemicals. Calcium chloride can absorb water from a humid atmosphere to make a solution. Calcium chloride can form a hydrated salt. Sodium formate I don't know about. Graeme Bartlett (talk) 11:48, 31 December 2008 (UTC)[reply]
The article Sodium formate notes that it is deliquescent - that is, it will absorb enough water from the air to form a liquid solution. (Much like calcium chloride will.) -- 128.104.112.113 (talk) 00:06, 1 January 2009 (UTC)[reply]

Time

What is used as the physical point of reference for the time of day? Is it the orientation of the sun and the earth? In other words, if all of the clocks in the world stopped, would they be reset by (for example) saying that it is 12pm GMT when the Greenwich Meridian is directly in line with the sun (i.e. the sun is highest in the sky over the Meridian)? YaniMani (talk) 11:30, 31 December 2008 (UTC)[reply]

I'm not sure if I quite understand your question, but if I do, the first part is answered at International Atomic Time and perhaps Leap second. It doesn't answer what happens if all the 300 clocks in various locations stop, but I doubt there is any established contingency since it's so unlikely it's not worth considering. It's probably more likely that the earth will be hit by 3 large asteroids in one day, and even that may not be enough to ensure none of the established clocks survive. And even if that happens there are still other atomic clocks that we could decide to use, including many in space. If the earth is destroyed, the question is moot anyway. In the unlikely event that somehow every single atomic clock stops working and yet there is still enough civilisation and technology to want an accurate global time standard, I'm sure we will work out something, but it would surely depend on what somehow caused every single clock to stop working. Nil Einne (talk) 00:40, 1 January 2009 (NZDT, UTC+13)
Thanks. I read them, but I am still not sure what the point of reference used is (they talk about correcting for rotation). I am of course not asking about whether there is a contingency for what happens if all the clocks stopped, it is just a theoretical situation to frame the actual question - i.e. what is the external point of reference that we would use? YaniMani (talk) 11:58, 31 December 2008 (UTC)[reply]
You have opened a great big can of worms. You'll never ask "What time is it?" again without wondering which time you're getting. When we ask what time it is, we mean civil time, which is really standard time, which is directly referenced to Coordinated Universal Time. In the United States, this time can be found on the air on WWV. The reference for UTC is indeed the Prime Meridian, but the position of the sun is too vague for really good timekeeping. The Wikipedia article on UT says "UT in relation to International Atomic Time (TAI) is determined by Very Long Baseline Interferometry (VLBI) observations of distant quasars." They fiddle with it as needed to keep Easter from moving into August, of course. --Milkbreath (talk) 12:30, 31 December 2008 (UTC)[reply]
Some further reference. Icek (talk) 16:01, 31 December 2008 (UTC)[reply]

When the sun is directly over the Prime Meridian, that should conceptually be the setpoint for all clocks, as it in fact was in the late 19th and early 20th century, before other standards such as atomic clocks. I understand that, for reasons perhaps having to do with the sensitivities of the French to British standards, or the earth not being perfectly symmetrical, the GPS system places 0 degrees longitude a short distance away from the engraved line in a brass plate which marks the center line of the telescope at Greenwich formerly used for the determination of time. So the sun being directly overhead of one of the historic prime meridians could be used as the synchronizing standard of clocks. Edison (talk) 19:14, 31 December 2008 (UTC)[reply]

Because the earth's orbit is elliptic and not circular (and other factors), the apparent position of the sun will vary throughout the year if viewed at the same time each day. See Analemma. -- Tcncv (talk) 04:30, 1 January 2009 (UTC)[reply]
...And consequently, the basis for time is not that noon UT is when the Sun is at the zenith at the Prime Meridian; it's the average (mean) time of this event. That's why UT used to be called Greenwich Mean Time. Note that I said "UT" and not "UTC". UTC, the time we now use, applies a fine-grained correction to UT; see leap second. --Anonymous, 08:57 UTC, January 1, 2009.
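To give a feel for how large that apparent-versus-mean difference gets over the year, here is a rough sketch using a common textbook approximation of the equation of time. The coefficients are approximate and the formula is my own illustration, not taken from the articles linked above.

```python
import math

# Approximate "equation of time": how many minutes apparent solar noon
# leads or lags mean (clock) noon on a given day of the year.
def equation_of_time_minutes(day_of_year):
    b = 2 * math.pi * (day_of_year - 81) / 364.0
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

for day in (1, 45, 105, 165, 225, 305):
    print(day, round(equation_of_time_minutes(day), 1))
# The offset swings between roughly -14 and +16 minutes over the year,
# which is why noon at Greenwich is defined by the yearly mean rather
# than by any single day's solar transit.
```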

To answer your question as asked: The definition of time has two components: the length of the second, and a nominal zero point. The current definition of the duration of a second is no longer related in any way to the movement of the earth or other celestial bodies. The starting point is related to a particular event (actually, a statistical average of multiple events) in the past. Therefore, for purposes of definition, the answer to your question is "no." There is no formal definition of a particular future celestial event as being at a defined time. All future celestial events will have observed times, not defined times. Now to answer your hypothetical question: if by horrible mischance we somehow lose track of time, how would we reset the clocks? This affects only the zero point, not the length of the second: we can create new clocks that accurately measure the length of the second. This means the new zero point must be agreed upon by convention. Depending on how long the clocks were stopped, the new agreement might try to relate "new time" to "old time" by picking a particular celestial event that was predicted to high precision with respect to "old time" and agreeing that the event is the reference point for "new time." But note that this would be a new agreement made by a committee, and would have no more (or less) absolute significance than our current definition. -Arch dude (talk)

Thanks for the correction about noon being subject to a correction depending on where in its orbit about the sun the earth is. Subject to that correction, the transit of the sun and the stars through the crosshairs of the Greenwich meridian could be used to re-establish exact time, down to the fraction of a second. The actual transit of the various stars was tracked each night, down to the fraction of a second, recorded, and used to determine standard time in the late 19th and early 20th century. By tracking many stars, and averaging the transits, the mechanical master clocks which sent out electrical signals on the hour could be regulated to a fraction of a second, as they were over 100 years ago. Edison (talk) 06:10, 2 January 2009 (UTC)[reply]


January 1

Date

What year is it? ~ R.T.G 12:42, 1 January 2009 (UTC)[reply]

I'm pretty sure it's 2009 everywhere at this point. In the Gregorian calendar, of course. --98.217.8.46 (talk) 12:54, 1 January 2009 (UTC)[reply]

No time machine available in this age, sorry. This is year 2009; you'd better live with it... What year were you coming from?--PMajer (talk) 13:42, 1 January 2009 (UTC)[reply]

It's still the Year of the Rat until 1/26/2009, when the Ox takes over --76.125.8.141 (talk) 16:47, 1 January 2009 (UTC)[reply]
It's also Year Heisei 21, in emperor era counting. Nimur (talk) 16:57, 1 January 2009 (UTC)[reply]
Take your pick, see year and list of calendars. I'd go for the 2008–2009 fiscal year. Also for date see date. Dmcq (talk) 22:31, 1 January 2009 (UTC)[reply]
@ Nimur: wouldn't it still be Heisei 20 till the New Year sometime in spring? Or do they use the Gregorian year for the era counting? 76.97.245.5 (talk) 01:01, 2 January 2009 (UTC)[reply]
According to Japanese era name, and this citation: "In historical practice, the first day of a nengō (元年, gannen) starts whenever the emperor chooses; and the first year continues until the next lunar new year, which is understood to be the start of the nengō's second year." I'm not actually familiar with the specifics but I can ask some Japanese colleagues and report back. Nimur (talk) 16:59, 2 January 2009 (UTC)[reply]

An optional use for leeches?

I was looking at some stupid stars advocating leeches as a detoxifier (pfff). Anyway, it got my mind rolling in a sort of similar direction: could those blood-sucking MOFOs be used in conjunction with a diet? I would imagine that there are strict homeostatic processes that keep the volume of blood and number of cells constant, so if blood was removed, energy would need to be expended to restore the balance. Does this theoretically make sense? 78.133.19.131 (talk) 13:00, 1 January 2009 (UTC)[reply]

Cut the leeches out and just ask if moderated blood loss could be a way to diet. I suspect the replenishment of a small amount of blood is not a huge caloric drain but I'm sure someone can answer for sure. --98.217.8.46 (talk) 14:05, 1 January 2009 (UTC)[reply]
I'd say yes, except for the usual problem that your body will detect that it's losing weight, think you're starving to death, then decrease your metabolic rate and increase your hunger tenfold, making you actually gain weight. Also, you'd need supplements to ensure that you're replacing everything which is lost with blood, such as iron. StuRat (talk) 15:50, 1 January 2009 (UTC)[reply]
Here is a British Medical Bulletin on metabolic rate changes as a result of injury (I'm not sure if controlled bloodletting counts as injury). It may provide some insight on the complex effects that StuRat mentioned. Nimur (talk) 17:02, 1 January 2009 (UTC)[reply]
Leeches have been used for curing diseases like paralysis, with supposedly good success, in Kerala, a southern state of India. Have a look at this and this and search for 'leech' - WikiCheng | Talk 03:53, 2 January 2009 (UTC)[reply]
That is totally unrelated to the question being asked. --98.217.8.46 (talk) 04:36, 2 January 2009 (UTC)[reply]
"blood sucking MOFOs"? Come now, let's avoid stereotypes. —Tamfang (talk) 19:14, 5 January 2009 (UTC)[reply]

This rings my bullshit detector, WikiCheng. Bastard Soap (talk) 11:23, 2 January 2009 (UTC)[reply]

It certainly does! The amount of blood you could lose without all sorts of other health problems (such as anaemia) is pretty small - the blood donation people say that you can safely lose (and recover) only about a half-liter every month. The energy required to do that is tough to estimate - but it doesn't seem like it would be very significant. No - I think you should stick to using leeches for storm prediction. SteveBaker (talk) 14:30, 2 January 2009 (UTC)[reply]
The Mayo Clinic web site asserts that you 'burn' about 650 calories when you donate a unit of blood: [2]. While most jurisdictions limit donors to one unit every eight weeks, that restriction has a margin for safety built into it. If we assume blood loss at twice the permitted rate, that runs about 160 calories per week, or about 20 calories per day. It's a trivial reduction that would probably be wasted. (You're going to be hungrier than usual every time your body notices it's short on blood.) You'd save about the same number of calories by switching from cream to milk in one cup of coffee each day, or by taking your tea with one lump of sugar instead of two. TenOfAllTrades(talk) 15:24, 2 January 2009 (UTC)[reply]
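For what it's worth, the arithmetic behind those last figures looks like this (using the ~650 kcal-per-unit number quoted above, and the doubled donation rate purely as an assumption):

```python
# Rough check of the calories-per-day figure quoted above.
kcal_per_unit = 650          # Mayo Clinic figure cited in the thread
weeks_between_units = 8 / 2  # assume twice the normally permitted rate of one unit per 8 weeks
per_week = kcal_per_unit / weeks_between_units
print(round(per_week), round(per_week / 7))   # ~163 kcal/week, ~23 kcal/day
```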

I know someone who contracted giardiasis, then deliberately didn't get treated, to assist with dieting. I've heard of people using a similar trick with tapeworms. Medical advice: I wouldn't recommend any of these methods. Axl ¤ [Talk] 09:59, 3 January 2009 (UTC)[reply]

Tapeworms work by consuming food in your gut before you have a chance to absorb it. However, that just makes you more hungry and causes you to eat more - so they may not help as much as you'd hope. If you have the mental strength not to eat more as the tapeworms consume your food - then you'd probably have no problem sticking to a medically more reasonable diet. Dunno about giardiasis - but the list of symptoms described in our article suggests that it's not going to replace the South Beach diet anytime soon! SteveBaker (talk) 13:26, 3 January 2009 (UTC)[reply]

do the congenitally blind understand literary descriptions?

There are some things I read, for example colors I don't know the names of, even vaguely (Carmine? Puce? Bimini?), where I don't know what they're talking about. But do the congenitally blind understand most visual descriptions? Do they know the same thing everyone else does, like what a mirror is, etc., or are these things to them like those weird colors are for me? —Preceding unsigned comment added by 79.122.79.41 (talk) 21:31, 1 January 2009 (UTC)[reply]

They understand as it is described to them only. This reminds me of Thomas Nagel's essay "What is it like to be a bat?" where he talks about the subjective nature of consciousness. Can you conceive of a bat's perception through echolocation? -- JSBillings 23:43, 1 January 2009 (UTC)[reply]
This is speculation, but I would imagine that over time they pick up what a visually based description might connote, based on sighted people's reactions to it, although they cannot directly relate to it. If there is a gap in their knowledge about the appearance of something, they may miss an important visual indicator that would be apparent to sighted people. An example would be "the baseball players' shadows were long when they arrived at the field." This description would immediately indicate a morning or evening setting, but a blind person who hasn't realized shadows change length over the course of the day will miss the time detail. If they were careful, they would ask themselves "what was the point about the shadows?" and ask someone what it means. Something more difficult might be a description of a woman wearing a red dress to a party, which would signal to a sighted person that she was trying to be seductive, compared to, say, the same woman wearing the same dress in blue. A blind person would probably realize that a red dress often indicates seductiveness only after connecting that detail with other description and then encountering the same literary red=seductiveness theme repeatedly. I would also imagine that the reverse holds true, in that sighted people miss details that blind people would pick up on. A description of a blind person whistling a monotone as they walk through a doorway would not indicate to a sighted person that they were probably trying to get a feel for the size of the room by listening for the reverberations of the sound. (or something like that...) 152.16.15.23 (talk) 00:29, 2 January 2009 (UTC)[reply]
As this article shows [3], they sometimes can't make sense of the image even if they can see again. A blind acquaintance of mine said he got a much clearer idea about what an optical illusion was after someone had shown him a relief of the Necker Cube. 76.97.245.5 (talk) 00:37, 2 January 2009 (UTC)[reply]
I came to different conclusions about the examples given. Red would be quite easy to associate with fire and then seductiveness. Shadows can be directly perceived by the heat of the sun. And going into a room I think they'd probably make a click or pip noise rather than a continuous tone to determine its overall size, it certainly works better for me. Interestingly on the last if it's quiet I seem to be able to hear which way a hall goes with my eyes closed quite easily even without making any noise. Dmcq (talk) 12:25, 2 January 2009 (UTC)[reply]

Also, just as a note - a lot of blind people aren't walking around in total darkness (i.e. seeing nothing); rather, there is a wide range in what the blind can 'see' (changes in light, silhouettes etc.) - it really depends on the individual. 194.221.133.226 (talk) 10:36, 2 January 2009 (UTC)[reply]

I was told, ages ago now, by a cognitive scientist that it depends very much on the nature of the blindness. If it is due to "mere" problems with the eyes then the blind person thought in visual and spatial metaphors just the same as a sighted one (though clearly certain concepts like colors were going to be problematic). If it was due to a problem with the visual cortex of the brain, then the ability to visualize things in their head (and their understanding of visual metaphors) would also be affected. --98.217.8.46 (talk) 03:14, 3 January 2009 (UTC)[reply]

Mechanical device that keeps something spinning at exactly the right speed

Here's something I've been wondering since I was a wee lad (well, teenager, at least). I've asked a number of people this over the years (though obviously no engineer), but never gotten the answer. I've finally decided to find out the answer, once and for all, with the help of you fine refdeskers.

An old-timey mechanical wind-up clock works on this principle: you wind up a spring until there's lots and lots of tension in that spring. That tension is slowly released to the various cogs and gears inside the clock, and finally transmitted to the dials on the face of the clock. But here's my question: the spring can be wound with various degrees of tension, but the clock still goes at the same rate. If you wind the spring to half its capacity, the gears in the clock should only go at half the speed they would if the spring were wound to full capacity (this is Hooke's law, no?) And yet, the dials on the face always move at constant speed, regardless of how much tension there is in the spring. There must be some mechanical device doing this.

The reason I keep wondering about this is that there are all sorts of machines that seem to be using this same device, and whenever I see them, I wonder once more. Old movie cameras, for instance, were powered by springs or a hand-crank, yet regardless of how much tension there is in the spring or how fast you rotate the crank, the camera has to rotate the shutter and feed the film stock at exactly 24 frames per second. Same thing with hand-cranked record players; they always have to move at the same rpm.

So, what is this magical mechanical device? How does it work? The only thing I can think of is that there is somehow something that applies a smaller counteracting force to the gears, just big enough to ensure that they always rotate at the same speed, and that increases and decreases at the same rate as the main force (so if the tension in the spring doubles, the counteracting force doubles too, but the difference between them stays constant), but I can't quite work out the details. Is that how it works? Do we have an article on this device? Does it have a name?

I'd be most grateful for any answer, as I said, this has been bugging me for two decades, at least. Belisarius (talk) 23:25, 1 January 2009 (UTC)[reply]

Older mechanical clocks used a Fusee, which was later replaced by improvements in technology such as the Pendulum clock and Escapement. Clocks are pretty neat, you can make a fairly reliable pendulum clock out of tinker toys if you want. The articles I mentioned are actually very interesting reads, if you've been curious about this for a while. -- JSBillings 23:33, 1 January 2009 (UTC)[reply]
An interesting device (a bit off point) is the centrifugal governor ...used in a spring-loaded record player and a spring-loaded telephone dial to limit the speed. hydnjo talk 23:58, 1 January 2009 (UTC)[reply]
In clocks the wound spring is used as an energy store to power the clock, but not to govern its speed (various other aforementioned mechanisms are used for that), and so the tension in the spring (until it approaches zero) has no effect on the speed at which the clock's mechanism ticks over. —Preceding unsigned comment added by 92.16.196.156 (talk) 00:10, 2 January 2009 (UTC)[reply]
For a rotating machine, Centrifugal governor is exactly right. For a time piece, you might look up Balance wheel and Pendulum.--GreenSpigot (talk) 02:28, 2 January 2009 (UTC)[reply]
Some devices like clocks had an escapement, which went tick-tock and used a limited amount of energy for each incremental movement of a clock or watch. Other windup devices used a centrifugal governor or flyball governor, similar to the one used on some earlier steam engines, to regulate or limit the speed. On a windup phonograph there was no intermittent escapement like on a watch, but a continuous rotation at a desired speed. If the speed tended to increase, as when the spring was wound tightly, the spinning caused the flyballs to move farther out and bend the springs to which they were attached, causing movement of the ring attached to one end of the springs and applying greater pressure to a braking mechanism, reducing the speed. On Watt's steam engine, the flyballs rotated about a vertical axis and governed the steam flow. On windup phonographs, the axis might be horizontal, and spring pressure rather than gravity was usually the force the centrifugal force of the balls worked against. The flyball governor worked well to prevent the fully wound spring from running the phonograph faster than was desired, but when the spring was nearly run down, it had used up all its range of regulation and could not prevent the mechanism from running slower and slower until it ground to a halt. Edison (talk) 06:03, 2 January 2009 (UTC)[reply]
The simplest governor to understand is a pendulum clock. At its simplest:
  • The spring is attached to the gear wheel.
  • A lever arm with a single triangular tooth on each end is wedged into the gear to stop it rotating.
  • A pendulum swings back and forth.
  • At the end of the pendulum's travel it pushes on the lever arm - letting it up just enough for the tooth on one end to disengage from the gear while the tooth on the other end engages and grabs the next tooth in the gearwheel. This allows the wheel to turn by exactly one tooth for every swing of the pendulum.
  • The gear wheel also drives the hands of the clock through successive reduction gears to get hours, minutes and seconds (and perhaps days, months, etc in fancy clocks).
  • In the process of doing that, the gear wheel's rotation applies a small, fixed amount of energy to the pendulum to keep it swinging.
  • Since the period of a pendulum swing depends on its length and not on how hard you push it, the pendulum swings at the same rate no matter how much energy is in the spring.
  • This limits the amount of energy the spring can release for each swing of the pendulum - so the clock runs at the same speed until there is insufficient energy in the spring to keep the pendulum swinging against the forces of friction and air resistance.
In most long-case 'grandfather' clocks, the pendulum is set up to swing once a second because that makes a pleasing 'heartbeat' and makes the subsequent gearing that drives the hands of the clock a little bit simpler. The repeated release-and-grab cycle of the spring-driven gearwheel is what makes the characteristic ticking sound you hear in most mechanical clocks. There is generally a small adjustment you can make to the weight on the end of the pendulum to adjust the length - and hence to make the clock tick faster or slower.
Clocks that chime generally have another spring mechanism for doing that which is held in check with another lever arm that's tripped when a tooth on a wheel attached to the minute hand reaches the top of the hour (or whatever).
My long-case clock has a 30 day movement - you wind up TWO springs - one for the chimes and one for the hands. On my clock, the chime mechanism actually runs down after about three weeks - but the hands run for well over 30 days - which is a pretty amazing thing for such primitive engineering and a 100 year old mechanism.
On mechanical wristwatches, the pendulum is replaced by a wheel that is spun back and forth by a 'hair spring' - replacing the force of gravity with a spring force - but the principle is the same. SteveBaker (talk) 14:12, 2 January 2009 (UTC)[reply]
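To put a number on "the period depends on the length, not on how hard you push it": a quick sketch using the small-swing pendulum formula T = 2*pi*sqrt(L/g). The two-second beat and g = 9.81 m/s^2 are just the usual textbook values.

```python
import math

g = 9.81          # m/s^2
period = 2.0      # a long-case clock "beating seconds" takes 2 s per full swing

# Length needed for that period: L = g * (T / (2*pi))^2
length = g * (period / (2 * math.pi)) ** 2
print(round(length, 3))   # ~0.994 m -- the classic "seconds pendulum"

# The energy of the swing never appears in the formula: feeding the pendulum
# more energy from the spring widens the swing slightly, but (for small
# swings) leaves the period unchanged.
print(round(2 * math.pi * math.sqrt(length / g), 3))   # recovers the 2.0 s period
```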


January 2

Electro magnetic theory

Find the vector magnetic field intensity in Cartesian coordinates at P2 (1.5, 2, 3) caused by a current filament of 24 A in the a_z direction on the z axis, extending from (i) z = 0 to z = 6, (ii) z = 6 to z = infinity, (iii) z = −infinity to z = infinity. (b) Given the electric scalar potential V = 80 z cos(x) cos(3 × 10^8 t) kV and magnetic vector potential A = 26.7 z sin(x) sin(3 × 10^8 t) a_x mWb/m in free space, find the fields E and H. —Preceding unsigned comment added by Antony salvin (talkcontribs) 04:57, 2 January 2009 (UTC)[reply]

Please do your own homework.
Welcome to the Wikipedia Reference Desk. Your question appears to be a homework question. I apologize if this is a misinterpretation, but it is our aim here not to do people's homework for them, but to merely aid them in doing it themselves. Letting someone else do your homework does not help you learn nearly as much as doing it yourself. Please attempt to solve the problem or answer the question yourself first. If you need help with a specific part of your homework, feel free to tell us where you are stuck and ask for help. If you need help grasping the concept of a problem, by all means let us know. Algebraist 05:07, 2 January 2009 (UTC)[reply]
OK, I've done it. Now what? Was there a question? Edison (talk) 05:49, 2 January 2009 (UTC)[reply]

What the Fuck is Fermet's Last Theorum?

Correct me if I'm wrong, but isn't it something along the lines of.... a mathematical proof? A proof that was "solved" by a Princeton mathematician (But not really)? He apparently had to "solve" some other math problem as well (Taniyama's Conjecture?) to untheorumize this. I'm a little confused about the whole deal. I'm not sure if this is some profound E=mc^2 thing or just some obscure mathematical property that only mathematicians enjoy tinkering around with.

Sorry about the colorful language, by the way. It's just that when I typed this into your search box, it came up empty. I've found almost every other thing I've searched for, no matter how obscure, so I was a bit upset. And I figured it couldn't be that obscure if a country boy like me has heard of it. But for the most part, you guys do a pretty good job. God bless, and thank you for your patience. Sincerely, --Sunburned Baby (talk) 05:22, 2 January 2009 (UTC)[reply]

See Fermat's Last Theorem (note spelling). Algebraist 05:24, 2 January 2009 (UTC)[reply]
Also for future reference, Google Search (and possibly other search engines) would have spotted your incorrect spellings and offered the correct search term. See [4]. Abecedare (talk) 05:38, 2 January 2009 (UTC)[reply]
Wikipedia's inbuilt search does that also. Algebraist 05:48, 2 January 2009 (UTC)[reply]
Sometimes Wikipedia's search feature directs you to another article that doesn't exist. ~AH1(TCU) 17:04, 2 January 2009 (UTC)[reply]
But not in this case Nil Einne (talk) 05:37, 3 January 2009 (UTC)[reply]
As for whether it's profound, no, but it is surprising, easily understandable to non-mathematicians, and seems deceptively easy to prove. That's why you have heard of it and not of, say, the much more profound P versus NP problem. --Bowlhover (talk) 07:10, 2 January 2009 (UTC)[reply]
On the other hand, the modularity theorem, which was considered intractable before Andrew Wiles proved a special case in his proof of Fermat's Last Theorem, is profound. However, you need a good grounding in analytic number theory to even understand the statement of the modularity theorem, whereas Fermat's Last Theorem can be understood with schoolboy arithmetic. Gandalf61 (talk) 13:13, 2 January 2009 (UTC)[reply]
The theorem itself is very easy to understand - it's a simple statement:
"If an integer n is greater than 2, then the equation an + bn = cn has no solutions in non-zero integers a, b, and c."
So, for example, we know that 3^2 + 4^2 = 5^2 because 9 + 16 = 25, right? In that case, 'n' is 2 - and there are plenty of equations that fit. But it's a bit surprising that there are NO equations that work when 'n' is bigger than 2...not for any non-zero values of a, b and c! Wow!
That's something that mathematicians had long SUSPECTED (because they never found any equations that worked for n>2 - and people had used computers to test that up to very large numbers) but they couldn't PROVE it. That's a big deal if you're a mathematician. It's very annoying to have something that simple that seems to be true - but you can't actually prove.
So - this guy Fermat (who was/is a very respected mathematician) scribbles a note in the margin of a book that says he's found a really cool proof - but he doesn't have room in the margin to write it down. Then he dies without writing the proof down anywhere. Since then, many, many mathematicians have tried to work out this simple and elegant proof. None of them succeed until just a few years ago when FINALLY someone manages to prove it - but the proof is long, horribly complicated and maybe there are only a handful of people on the planet who understand it. Worse still, it relies on several other recent proofs that are just as complicated and perhaps even harder to understand.
Did Fermat really come up with a simple/elegant proof? No. That's very unlikely indeed. Did he manage to prove it the way modern mathematicians have finally proved it? No - that's pretty much impossible. Most likely, Fermat made a mistake in his simple/elegant proof...or he had some other motive for writing that margin note.
But let me make this clear - the PROOF is hard to understand. The thing it proves is really, really simple.
The consequences of having the proof are that formal mathematics can now build on the fact that there are none of these Fermat equations - but the practical sciences have not been deeply affected either way - it's not a particularly useful piece of mathematics in itself. However, it's likely that some of the things learned along the way will eventually prove useful. This isn't as useful as E=mc^2 (which is physics - not mathematics) - it's nowhere near as useful as (say) Pythagoras' theorem. What it is - is an incredible piece of mathematical reasoning - one of the most difficult things a mathematician has ever done - a stunning intellectual achievement.
IMHO, we should stop calling it "Fermat's Last Theorem" because it's really clear that Fermat did nothing to help solve the problem, and arguably (by making it seem that a simple proof existed) wasted more valuable mathematicians' time than anyone else in history!
SteveBaker (talk) 13:47, 2 January 2009 (UTC)[reply]
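Just to illustrate the sort of computer checking mentioned above, here is a toy brute-force search (my own sketch; the search limit is arbitrary and tiny compared with what real searches covered):

```python
# Look for integer solutions of a^n + b^n = c^n with small a, b.
def count_solutions(n, limit=60):
    nth_powers = {c ** n for c in range(1, 2 * limit)}   # covers every sum we can form
    hits = 0
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            if a ** n + b ** n in nth_powers:
                hits += 1
    return hits

for n in (2, 3, 4, 5):
    print(n, count_solutions(n))
# n = 2 turns up plenty of Pythagorean triples -- (3,4,5), (5,12,13), ...
# n > 2 turns up none, exactly as the (now proved) theorem says.
```

Of course, no amount of this kind of searching proves anything for all integers; that's exactly why the actual proof was such a big deal.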
Yes, it has wasted a lot of mathematicians' time, but it also likely intrigued and motivated many to enter the field in the first place. So in my subjective judgment the theorem, and the romance that has surrounded the quest for its proof, has been a net positive. What can take its place? The P = NP problem, an "easy" Formula for primes ... Abecedare (talk) 10:00, 3 January 2009 (UTC)[reply]
Not only that, but, what with all the stuff invented to attack it, FLT has stimulated the creation of more interesting mathematics than any other problem I'm aware of. Algebraist 00:07, 5 January 2009 (UTC)[reply]
With respect, I believe there's a proof by negation approximately 8 lines in length which doesn't stretch beyond high school algebra (which, I believe, is entirely within Fermat's toolset) that involves expansion of the binomial theorem (n-inifically so, but it cancels out in the next step) midway, and I think one of those sum/difference of squares/cubes shorthands, and the rest is fairly boilerplate grade school proof. 98.169.163.20 (talk) 07:20, 3 January 2009 (UTC)[reply]
No doubt you would have included this proof here, but the margin was too narrow. - Nunh-huh 09:47, 3 January 2009 (UTC)[reply]
I suggest that we IMMEDIATELY start discussion on Village pump, proposing that wikipedia page margins be expanded. We cannot tolerate any more such losses! Abecedare (talk) 09:52, 3 January 2009 (UTC)[reply]
I don't think 98 is in danger of dying soon. I could of course be wrong Nil Einne (talk) 11:20, 3 January 2009 (UTC)[reply]
It was twenty years ago; my apologies, but that's all I remember, beyond that those were the key statements of the proof. Slightly better than Fermat's margins. 98.169.163.20 (talk) 23:59, 4 January 2009 (UTC)[reply]
You probably mean this non-proof: Let n be prime, and let a, b, c have no common factor. If a^n + b^n = c^n, then a+b, a+wb, a+w^2 b, etc. are all factors of c^n (where w is a primitive nth root of 1). Let p be a prime factor of a+b. Then p also divides c. If p divides any other a + w^k b, then p divides both a and b, so a, b and c have a common factor. Otherwise, from some other reasoning I can't remember, which I shall replace with handwaving for the purpose of this exposition, a, b and c still have a common factor. Q.E.D. mike40033 (talk) 02:55, 5 January 2009 (UTC)[reply]

"If an integer n is greater than 2, then the equation an + bn = cn has no non-trivial solutions in non-zero integers a, b, and c."

Let a,b,c all equal 1.--72.200.82.14 (talk) 04:28, 5 January 2009 (UTC)[reply]

1+1≠1. Algebraist 04:33, 5 January 2009 (UTC)[reply]

penetration of UltraViolet A rays in flesh

When a 5 mm diameter spot of UV-A rays is placed on a human body part (by a UV laser), what amount of energy (mJ/cm^2) of the UV spot is required for 5 cm penetration of the UV rays into the flesh? 123.201.1.238 (talk) 12:13, 2 January 2009 (UTC)crony[reply]

5 cm ? That's 2 inches. You'd need enough energy to vaporize most of the flesh above it. Are you sure you don't mean 5 mm penetration ? StuRat (talk) 16:40, 2 January 2009 (UTC)[reply]
5 mm penetration? You must be great with the ladies. ok... y'all can go back to answering the question seriously now. I will stop. --Jayron32.talk.contribs 21:40, 2 January 2009 (UTC)[reply]

Solar cell production capacity

Hi

I have been unable to find any data on the worldwide solar cell production capacity ("how many square meters of solar cell can be produced per day (or per month or per year)?").

I am comparing various means of "green" energy production. Having arrived at the conclusion that covering almost all roofs with 10% efficient solar cells would provide enough electricity for a typical first-world nation, I wonder what it would take to actually do this. What would it take to produce hundreds of square kilometers of solar cell? What are the bottlenecks in solar cell production?

Thanks in advance —Preceding unsigned comment added by 81.11.170.174 (talk) 15:39, 2 January 2009 (UTC)[reply]

Check out Photovoltaics and Deployment of solar power to energy grids - and look especially at the references at the bottom of both articles. SteveBaker (talk) 15:43, 2 January 2009 (UTC)[reply]
Thank you, I found the total peak power produced by all the solar cells made in 2007 by the largest solar cell manufacturers in one of those references. It's not a surface/year, but I can calculate what I want to know from power/year just as well. —Preceding unsigned comment added by 81.11.170.174 (talk) 16:17, 2 January 2009 (UTC)[reply]
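In case it helps anyone doing the same conversion: peak-power (Wp) ratings assume the standard test irradiance of roughly 1000 W/m^2, so a 10%-efficient cell delivers about 100 Wp per square metre. A sketch of the arithmetic, where the 4 GWp/year production figure is only an assumed example and not a sourced number:

```python
irradiance_stc = 1000.0     # W/m^2, standard test conditions
efficiency = 0.10           # 10% efficient cells, as in the question
wp_per_m2 = irradiance_stc * efficiency        # ~100 Wp per square metre

annual_production_wp = 4e9  # ASSUMED example: 4 GWp of cells produced per year
area_per_year_m2 = annual_production_wp / wp_per_m2
print(area_per_year_m2 / 1e6, "km^2 of cells per year")   # 40 km^2/year
```

At that assumed rate, covering several hundred square kilometres of roof would take on the order of a decade of worldwide output, which gives a feel for where the bottleneck sits.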
To answer your question about "bottlenecks", there are two main constraints to the take-up of PV. One is that the return on investment is rather low compared to other energy-saving measures available to households, e.g. extra insulation, solar water heating, condensing boilers. The other is that PV only generates electricity in daylight hours, so a form of storage is necessary. The usual method in domestic installations is to connect to the grid, so an inverter must be incorporated in the system, and not all jurisdictions allow for net metering, so again it may not be seen as cost-effective. Itsmejudith (talk) 17:28, 2 January 2009 (UTC)[reply]
Also, solar cells gradually lose power over their lifetimes and ultimately have to be replaced (which is costly). The business of sending unused electricity back to the grid during daylight and pulling it back at night is only working well right now because so few people do it. There could well be problems down the line when everyone (including the electricity generators themselves) has an excess of daylight power and a terrible lack of nighttime capacity. In a sense, all we're doing is pushing the storage problem back onto the electricity companies - which isn't exactly fair if 'net metering' prohibits them from charging us for the privilege of doing that. Hopefully, we end up with hydrogen-powered cars or something - so we use excess daylight power to generate hydrogen and fall back on something else for nighttime supply. Certainly in a scenario where some country decided to do what our OP suggests, there would be a major 24-hour-cycle storage issue. SteveBaker (talk) 19:36, 2 January 2009 (UTC)[reply]

One main bottleneck is the process of purifying the silicon. Polypipe Wrangler (talk) 04:16, 3 January 2009 (UTC)[reply]

DNA math

I need to know how many different combinations of human DNA there are before you actually get a human. I was once given this number and would like to know 1) if it is accurate, and 2) how that number would be arrived at (DNA = number times 10 to the 87th power)

I don't know what the number would be. As for how to calculate it:
Let n be the number of nucleotides in human DNA.
Let m be the number that you can change while keeping them human.
The number of possible strains of human DNA is 4^m*nCm
The number of possible strains that length is 4^n.
The fraction of strains at that length that result in a human is (4^m*nCm)/(4^n)=nCm/4^(n-m)
I don't know how high that would be, but I suspect it would be something more like one in 10^(10^(20)) — DanielLC 18:36, 2 January 2009 (UTC)[reply]
Our article on human genetic variation says "on average two humans differ at approximately 3 million nucleotides". The article goes on to say "Most of these single nucleotide polymorphisms (SNPs) are neutral, but some are functional and influence the phenotypic differences between humans. It is estimated that about 10 million SNPs exist in human populations, where the rarer SNP allele has a frequency of at least 1%". Even if there are only two alleles for each site, this gives 2^10,000,000 possibilities, or about 10^3,000,000. There is a big assumption in this calculation that sites can vary independently. But it still looks as if 10^87 is a big underestimate. Gandalf61 (talk) 18:58, 2 January 2009 (UTC)[reply]
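Just to check the conversion in that last step (assuming the same 10 million independent biallelic sites):

```python
import math

sites = 10_000_000                        # ~10 million SNP sites, two alleles each
exponent_base_10 = sites * math.log10(2)  # 2**sites rewritten as a power of ten
print(round(exponent_base_10))            # ~3,010,300, i.e. roughly 10^(3 million)
```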
You are also neglecting the effect of indels and copy number variations. Not every human has exactly the same number of base pairs in their DNA - current research indicates that features such as indels and copy number variation may have as much, if not more, effect on the differences between two humans than SNPs do. -- 128.104.112.113 (talk) 20:20, 4 January 2009 (UTC)[reply]
Notwithstanding the indel issue, this is impossible to calculate because m is entirely subjective. What does "keeping them human" mean? There will be many thousands (if not millions) of possible nucleotide substitutions that would result in spontaneous abortion. Do they count? After all, the outcome is still a human (albeit a dead one). How about those that result in terrible congenital malformations, where the child survives until birth but dies shortly after? Or how about those that live to 1, 2, 5, 10 or 15 years, but with zero quality of life? Where do you draw the line? And how do you quantify those changes that result in early miscarriage, given that 99.9% of the time they will never even be seen? Rockpocket 20:34, 4 January 2009 (UTC)[reply]

pituitary tumors

This question has been removed. Per the reference desk guidelines, the reference desk is not an appropriate place to request medical, legal or other professional advice, including any kind of medical diagnosis, prognosis, or treatment recommendations. For such advice, please see a qualified professional. If you don't believe this is such a request, please explain what you meant to ask, either here or on the Reference Desk's talk page.
--Milkbreath (talk) 14:10, 3 January 2009 (UTC)[reply]

How many species are there?

Our Species article doesn't say. Well, it gives a pathetically large possible range. Surely you can do better? Willy turner (talk) 22:47, 2 January 2009 (UTC)[reply]

New species are being discovered all the time, so it's impossible to give an exact number. These pages discuss it in detail [5] [6] [7] —Preceding unsigned comment added by 82.43.88.87 (talk) 00:14, 3 January 2009 (UTC)[reply]
Well... no. The thing is, it's impossible to know exactly how many species there are. New ones are discovered all the time while others go extinct. It's a big planet, and we simply don't know its contents well enough. (That said, maybe someone can be a little more exact than "between 2 and 100 million species", but I'm betting that not by much.) -- Captain Disdain (talk) 00:15, 3 January 2009 (UTC)[reply]
(Doh! Edit conflict)... I was going to say:
  • Not really. The problem is that taxonomy is a human classification system (we do love to pigeon-hole things!), and by virtue of that, the rules surrounding definition of species, as well as higher and lower taxonomic levels, are pretty much arbitrary. See my recent edits and the associated discussion on the mycology of pityriasis versicolor for an example of how this makes life really difficult in medicine. Counting species is like counting grains of sand on a beach, by a committee, that can't agree on what a grain of sand actually is, and what size shell fragments should be before they're too big to be sand, and if the really small bits are sand or perhaps they should be called sub-sand, and if grains of sand should be differentiated by colour, or when they should stop for a cup of tea, or what type of tea to have, and whether there should be biscuits or cakes with the tea .... Mattopaedia (talk) 00:29, 3 January 2009 (UTC)[reply]
Well summarized, I concur with Mattopaedia. It's roughly arbitrary to define a species. I recall being taught in 7th grade science that "breeding capability" was the definite and indisputable indication of species boundaries, but since then, I have learned that it is significantly more subtle. See Species problem for discussion. Nimur (talk) 01:39, 3 January 2009 (UTC)[reply]
A related phenomenon is ring species: there are multiple populations of a species, and they can all interbreed (sustainably, i.e. the offspring can interbreed again, unlike a mule, which is the offspring of a horse and a donkey) with geographically neighbouring populations, but if you take two populations from different ends of the geographical distribution range, they cannot interbreed. Icek (talk) 08:57, 3 January 2009 (UTC)[reply]
The precise definition of "species" is certainly a problem - but just look at what you're asking here: I mean, sure, it's unlikely that there are more than a dozen large land animals that we don't know about - but that's a negligible fraction of the species on earth. Look at a single grain of dirt under a microscope and you'll see hundreds and hundreds of bacteria - all sorts of different kinds - more than you could easily count in an hour. If you live somewhere where there are a lot of people then maybe all of the little animals and plants (and other things) that you are looking at have been classified and named - but probably nobody did the necessary testing to see if they are all unique "species" or not. Now consider that we haven't looked at grains of dirt from (say) every square mile of the earth's surface and classified every little squiggly dot that we see - we haven't looked at drops of water from every square mile of ocean and classified all of the diatoms, algae, bacteria and other little critters - we haven't taken air samples everywhere and done the same thing. We haven't looked at all of the species of gut flora in every kind of animal that we know. We're finding new species at the bottom of half-mile-deep coal mines - in deep ocean trenches and thermal vents - and under the ice at the south pole there are lakes of liquid water with who-knows-how-many tiny animals and plants. The problem is simply immense - and we've barely scratched the surface of counting them all. There was a documentary on TV the other day about a bunch of cave divers exploring a cave in some god-forsaken place - they said that they expect to find a dozen new species of fish/insects/amphibians in practically every cave they visit! So it's not surprising that we have no real clue as to how many species there are. There could easily be another billion species living two miles underground where we've never even drilled - let alone started to count species. SteveBaker (talk) 13:09, 3 January 2009 (UTC)[reply]
I would say "Go to Wikispecies where you may find out all about species." but at the moment it is just a list of Latin names in need of people who can give translation and description to these latin names (stuff like Elephant and Fungus are not available on search yet but are all listed by latin with pictures). ~ R.T.G 13:29, 3 January 2009 (UTC)[reply]
The problem is, there are so many species of plants, animals, and microorganisms we haven't even discovered! There are also many species going extinct every year, some before we even discover them, so we might never know for sure. ~AH1(TCU) 15:39, 3 January 2009 (UTC)[reply]
The article on Bacteria says there are 9000 known species but probably between 10,000,000 and 1,000,000,000 species still to be discovered. This helps to show why the estimates are so confused. --Maltelauridsbrigge (talk) 20:23, 3 January 2009 (UTC)[reply]
One problem is that a large fraction of species seems to be beetles. --76.167.241.238 (talk) 00:23, 4 January 2009 (UTC)[reply]
As the British Geneticist J.B.S. Haldane famously said when he was asked "What has the study of biology taught you about God?", he replied "I'm not sure, but he seems to be inordinately fond of beetles." SteveBaker (talk) 06:37, 4 January 2009 (UTC)[reply]
When we look at biodiversity, we find that there are nearly 2,500,000 species of organisms currently known to science. More than half of these are insects (53.1%) and another 17.6% are vascular plants. Animals other than insects account for 19.9% of species, and 9.4% are fungi, algae, protozoa, and various prokaryotes. This list is far from complete. Various careful estimates put the total number of species between 5 and 30 million; of these, only about 2.5 million have been identified so far.

January 3

What is it eating?

[Image caption: Chewing on something?]

This Nestor meridionalis meridionalis of the South Island of New Zealand was featured on the Main Page in the "Did You Know..?" section. When I first saw the photo, I thought it was eating a snake and I was consequently inclined to start a civilization; but on closer inspection, I found that the parrot in question doesn't eat snake. I'm not too sure, but that doesn't look like a Longhorn beetle grub, or a flower bud, or anything identifiable to me (maybe a melon fragment?). So, what is it eating? Nimur (talk) 04:43, 3 January 2009 (UTC)[reply]

Looks like a piece of capsicum to me.Mattopaedia (talk) 05:09, 3 January 2009 (UTC)[reply]
Given that New Zealand doesn't have snakes [8] and they would be another serious danger to our wildlife, it's a rather good thing it isn't eating one. Even more so on Stewart Island#Fauna. Nil Einne (talk) 05:29, 3 January 2009 (UTC)[reply]
Are we sure this is a wild parrot? Perhaps it's someone's pet and it's chewing on a red plastic toy of some kind? Just think of the kind of lens/camera you'd need to get that kind of a shot of a wild parrot - it doesn't seem very likely to me. But this is Wikipedia - we're a collection of real people, not some anonymous corporation. Hence you can go to the information page for the photo (just click on it) - look at the 'history' tab and see who posted the photo. Now you can go to that person's talk page and politely ask them all about their photo. This might not work - but on the whole we're a pretty friendly & helpful bunch here and I'm sure the photographer will be only too happy to talk about their (now very famous) picture. SteveBaker (talk) 12:54, 3 January 2009 (UTC)[reply]
Done. But again, it really looks like a slice of capsicum to me. Mattopaedia (talk) 14:54, 3 January 2009 (UTC)[reply]
This picture seems to have a circuitous history. I saw the exchange on the picture's most recent editor's talk page, and then followed the trail back to the original Commons entry, which was created by the Flickr upload bot. The description's link to Flickr appears to be invalid, but searching Flickr yielded what appears to be the original photo: [9]. HTH. --Scray (talk) 18:09, 3 January 2009 (UTC)[reply]
BTW, it's not obvious to me that the Creative Commons license was applied properly by the Flickr user to that photo, as required for use of that bot, but I am no copyvio expert. --Scray (talk) 18:15, 3 January 2009 (UTC)[reply]
Hmm, why did you need to search Flickr? The photo has clearly identified the Flickr source for a while. I followed the source before I posted my first message, and I just checked - it's still there, and there's no sign from the Commons history that it was ever removed. Nil Einne (talk) 16:08, 4 January 2009 (UTC)[reply]
Done? You do realise this photo came from Flickr, right? I don't see any comments on the Flickr page. Or did you PM? P.S. Personally I'm more inclined to believe it's some part of a native plant, although I can't say what. Nil Einne (talk) 16:08, 4 January 2009 (UTC)[reply]
Nowadays, I don't think many people have kākā as pets (not counting those in zoos or being looked after by DOC staff, of course). I'm not even sure if it's legal. I don't think DOC will take kindly to people catching them, and I'm not aware of any population of domesticated kākā. Also bear in mind that most NZ birds are fairly tame - one of their problems, given the lack of predators during most of their past. This bird was on Rakiura, as I mentioned above, so it's unlikely to have encountered much to scare it. (And it's also more likely you'll actually spot one.) Of course you still need a decent camera and to be a decent photographer, but these are becoming more common in this digital age. The photographer seems to be an extremely experienced wildlife photographer anyway. Nil Einne (talk) 16:01, 4 January 2009 (UTC)[reply]
AFAIK, it's legal to keep any 'pre ban' parrot (or its captive-bred offspring) as a pet. Even hand-fed Keas (which must be the ultimate 'difficult' pet) are available to buy, though I've only ever personally seen two advertised for sale in 15 years or so... --Kurt Shaped Box (talk) 01:17, 5 January 2009 (UTC)[reply]

According to the photographer, the picture was shot in the wild on Stewart Island; using, as per exif metadata, a Canon EOS 20D + the Canon 100-400mm USM IS telephoto lens. Here is another picture of possibly the same bird with something else in its beak. (By the way, the image license {{cc-by-2.0}} seems fine to me) Abecedare (talk) 00:13, 4 January 2009 (UTC)[reply]

Could it be a piece of fruit rind/peel? Perhaps the photographer was feeding it in order to get a good shot? --Kurt Shaped Box (talk) 16:12, 4 January 2009 (UTC)[reply]
Stick of rhubarb? Size looks about right. XLerate (talk) 21:19, 4 January 2009 (UTC)[reply]
Could it be part of a fuchsia flower? Tree fuchsias are plentiful on Rakiura. dramatic (talk) 22:12, 4 January 2009 (UTC)[reply]
I think that rhubarb is supposed to be toxic to parrots. Well, it's one of those 'common knowledge' things that you get told when you first purchase a psittacine, at least. --Kurt Shaped Box (talk) 01:04, 5 January 2009 (UTC)[reply]

Do we have any defense at all against meteors, black holes, etc.?

Like, big ones! In all seriousness, if astronomers detected a continent-sized meteor headed our way with, say, a year's warning - couldn't we do something? Or if a black hole was heading our way couldn't we use some kind of antimatter-type force to push it towards the moon? Or for that matter, if the moon ever broke free (from a black hole?) and started slowly drifting towards us, could we use some greater force to quickly push it away? When our sun expands during its death throes, how will life forms of that era deal with it?--Dr. Carefree (talk) 09:49, 3 January 2009 (UTC)[reply]

No problem. We have loads of asteroid deflection strategies.--Shantavira|feed me 10:15, 3 January 2009 (UTC)[reply]
Um, what? Antimatter-type force? Our current ability to produce antimatter is very limited and our ability to use it for practical purposes almost non-existent. And I'm not quite certain what the use of pushing a black hole towards the moon is. Plus the concept of us moving a black hole anytime soon, if ever, seems absurd to me. As to what humans will do billions of years from now, if they still exist and live on earth, I don't think many people have given it serious thought, although I'm sure there's something in science fiction. Nil Einne (talk) 10:51, 3 January 2009 (UTC)[reply]
In practical terms, not only do we have no defense - we don't truly know what form that defense should take (because we don't know enough about the meteors themselves) and we don't have the ability to detect meteors early enough to make a difference.
Let's look at those problems individually:
  • Detection: With a large mass on a trajectory that's pointing it at us, the earlier we do something, the easier it is. To take an extreme example: Suppose we only have a few minutes' warning...we might have to try to deflect a meteor when it's one earth diameter away from hitting us square-on at the equator...we'd have to bend its path by 30 degrees to make it miss us instead. To deflect a mountain moving at tens of miles a second through an angle of 30 degrees in just a few minutes would probably take more energy than we have on the entire planet - it truly can't be done. However, if we know that the meteor is a threat (say) 20 years before it hits us - then the amount of deflection we need is a microscopic fraction of a degree and a really gentle nudge would suffice to save our planet (a rough numerical sketch of this follows at the end of this post). So early detection - and (most important) accurate orbital parameter determination - is a massive priority, both because it gives us time (it might take 5 years to put together a strategy for deflecting this particular rock, building the spacecraft and getting it launched towards its target) and because it reduces the magnitude of our response.
  • Analysis: There are many possible categories of threat. Comets are mostly ice. Meteors come in several varieties - some are essentially solid chunks of metal, others are solid chunks of rock, still others may be loose collections of small boulders, pebbles or even dust. Right now, we don't know which is which - which ones are the most common - whether large, dangerous objects are predominantly of one kind or another. We know (for example) that Comet Shoemaker-Levy 9 broke up into about twenty pieces as it descended towards Jupiter - if we'd had to deflect that comet and we'd sent (say) a single large nuclear bomb then a whole range of disastrous possibilities come to mind: (a) The comet might break up before our rocket gets there and we can now only deflect one out of about twenty large, dangerous chunks. (b) Our bomb might actually do nothing more than break up the comet prematurely without deflecting its course at all. So until we "know our enemy" - we're kinda screwed. We need to send lots of probes out to look in detail at a statistically reasonable collection of comets and meteors - and do lots of science to figure out what's out there.
  • Deflection/Destruction: The problem is that breaking up a large meteor or comet without deflecting its path doesn't help us. The total damage to the earth from a single rock the size of a mountain that weighs a million tonnes is precisely the same as for a million car-sized rocks weighing one tonne each, or for a few hundred million basketball-sized rocks weighing a few kilograms each. Simply smashing the meteor into pieces doesn't help at all! (The number of movies that get this fact wrong is truly astounding!) So we have to think in terms of deflection - not destruction. If we have enough time (see "Detection" above) then something as simple as a heavy spacecraft that flies along parallel to the course of the meteor for a few years and provides a REALLY subtle gravitational shift - might be enough. That's great because it works just as well with a flying rubble heap as it does for a mountain of nickel-iron or a million-tonne dirty snowball. However, getting big, heavy things out of earth orbit and flying as fast as a meteor requires a heck of a lot of fuel and a huge amount of up-front planning. We certainly don't know about these threats early enough to do that reliably. So then we're left with hitting the thing hard with a big bomb, hitting it hard with an 'impactor' or nudging it more gently with rocket motors. None of those things will work for a flying rubble pile. For solid bodies - that'll work. We can build a probe with a rocket motor on it. Make a soft landing onto the object and start firing our rocket. So long as the object is strong enough to take that pressure without breaking up - or without our rocket sinking into the surface or tilting sideways and deflecting the rock in the wrong direction - that could work. But it's more complicated than that if the object is spinning (as many of them are) - because now the rocket has to fire intermittently when the rock is at the correct orientation or else the miracle of 'spin stabilisation' (which is what makes bullets fly in a nice straight line) will frustrate our efforts.
So it's safe to say that right now, we're defenseless on all three levels. Our detection ability is getting slowly better - we have surveyed some of the very largest rocks - and we're tracking their orbits. Perhaps we can now see mountain-sized rocks soon enough - but something a lot smaller than a mountain (a rock a couple of hundred feet across, say) can take out a city - and we're nowhere even close to being able to track those soon enough or accurately enough. NASA have sent out probes to several meteors and comets to take a close look at them - and we've even tried firing an 'impactor' at one of them...but we have a long way to go. A lot of people are thinking about deflection/destruction strategies...but no governments are building rockets and putting them into storage ready for the day when we'll need them - and funding for the entire process is distinctly lacking.
At some point we (as a species) need to seriously consider having a colony somewhere away from the Earth. There is always the possibility of the ultimate planet killer coming along that's too fast, too large, too unstable and/or too close to do anything about. Having a colony of humans living on (say) Mars with a self-sufficient life-style and a large enough gene-pool is the ultimate way to ensure the survival of the species come-what-may.
SteveBaker (talk) 12:43, 3 January 2009 (UTC)[reply]
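To put rough numbers on the early-warning point above, here is a minimal back-of-the-envelope sketch in Python. The one-Earth-radius miss distance and the single sideways impulse are illustrative assumptions, not figures for any real object:
 # Rough sideways velocity change (delta-v) needed so an incoming object
 # drifts about one Earth radius off a collision course, for a given warning time.
 # Assumes a single impulsive nudge applied perpendicular to the flight path.
 EARTH_RADIUS_M = 6.371e6
 SECONDS_PER_YEAR = 3.156e7
 def required_delta_v(warning_years):
     """Sideways delta-v (m/s) to accumulate a one-Earth-radius miss."""
     return EARTH_RADIUS_M / (warning_years * SECONDS_PER_YEAR)
 for years in (0.01, 1, 20):
     print(f"{years:>5} yr warning -> ~{required_delta_v(years):.3f} m/s sideways nudge")
 # 20 years of warning needs only ~0.01 m/s; a few days of warning needs
 # tens of m/s, which for a mountain-sized mass is an enormous impulse.
The same arithmetic is why the "gravity tractor" idea mentioned above (a heavy spacecraft flying alongside for years) can work at all: given enough warning, it only has to supply a tiny, steady nudge.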
If you'd like to learn more about it and help create "coursework", there's some activity on Wikiversity at v:Earth-impact events/Discussion, as well as an opportunity to use your imagination and research skills for colonizing off-planet at v:Lunar Boom Town. (And SteveBaker: mind if I copy over what you just wrote above? Good detail there!) --SB_Johnny | talk 12:56, 3 January 2009 (UTC)[reply]
All contributions to the ref desk fall under the GFDL - so long as you are using them under those same terms, you are welcome to take whatever you need. SteveBaker (talk) 13:45, 3 January 2009 (UTC)[reply]
Surely it would help to break it up, as small objects can be burned up in the Earth's atmosphere; the amount of mass burned off an object will be proportional to its surface area, which dramatically increases if the object is broken up. To take my point to its logical conclusion, an asteroid of a given large mass would cause significant damage, whereas the same mass of dust colliding with the earth would most likely just fluoresce in the atmosphere as it "burned". —Preceding unsigned comment added by 84.92.32.38 (talk) 14:12, 3 January 2009 (UTC)[reply]
The net kinetic energy that has to be absorbed by the Earth system (atmo included) remains exactly the same. While a single large rock would probably cause more damage (at least more localized damage) based on impact, even the distributed vaporization of a massive asteroid would be a catastrophe. A school bus sized rock, sure -- vaporize it. A dino killer? That won't work. — Lomn 14:52, 3 January 2009 (UTC)[reply]
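As a quick sanity check of that energy argument, here is a small sketch in Python (the 20 km/s encounter speed and the million-tonne total mass are assumed round numbers, not data about any particular object):
 # Total kinetic energy depends only on total mass and speed, not on how the
 # mass is divided up - so fragmentation alone doesn't reduce the energy delivered.
 SPEED = 20_000.0      # m/s - assumed typical encounter speed
 TOTAL_MASS = 1.0e9    # kg - i.e. a million tonnes (assumed)
 one_big_rock = 0.5 * TOTAL_MASS * SPEED ** 2
 million_fragments = 1_000_000 * (0.5 * (TOTAL_MASS / 1_000_000) * SPEED ** 2)
 print(f"one rock      : {one_big_rock:.3e} J")
 print(f"10^6 fragments: {million_fragments:.3e} J")  # identical, ~2e17 J either way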
Our main capability in asteroid defense is early warning. Here is an incomplete list of early warning systems that I found:
We currently know of around 5,500 Near Earth Objects (see the NEOP page), and hundreds more are being discovered every year. So as you can see, we humans have actually put a fair amount of effort into detecting impact threats from space. No one has ever tested any potential asteroid deflection systems, but as Steve says, early detection is key. --Bmk (talk) 15:44, 3 January 2009 (UTC)[reply]
That doesn't seem quite right... if gazillions of little pieces vaporize in the atmosphere or even hit the earth (somewhat more slowed down by atmospheric friction), they wouldn't also vaporize large amounts of actual earth material (which would cause snowstorms in Havana, etc.). Am I missing something? --SB_Johnny | talk 15:33, 3 January 2009 (UTC)[reply]
This is little as in much smaller than a mountain, not little as in the size of a pebble. The pieces will still be much too large to burn up in the atmosphere. — DanielLC 19:11, 5 January 2009 (UTC)[reply]

(edit conflict)A meteor the size of a continent would be larger than the Moon. The largest asteroid we know of, Ceres, is only about 1/4 the diameter of the moon, and it's big enough to be called a dwarf planet. There are plenty of other potential risks; for example, a rogue star passing through the inner solar system would disrupt the trajectories of many asteroids and comets, flinging some toward Earth. The chances of a large black hole passing through the solar system are very small; it would be more massive than the sun but smaller than New York. Small black holes could evaporate via Hawking radiation. Remember that the asteroid/comet that probably caused the Cretaceous-Tertiary extinction (the one that killed the dinosaurs) was only about 15 km (9 miles) in diameter. As for the sun expanding, we could maybe move to Mars, but chances are we won't even survive that long. Most estimates predict there is a "high probability" for us to become extinct within the next three million years or so. Anyway, an asteroid bigger than, say, Lake of the Woods would probably crash through the Earth's crust, exposing its mantle, causing further problems. There are plenty of other potential doomsday events that could affect us in the near future (try exitmundi, [warning, popups]), and many of those pose a larger threat to us than the likelihood of an asteroid hitting Earth (which will, with 100% probability, eventually happen). In fact, one potentially catastrophic scenario is already unfolding, and could affect us in our lifetime, yet many are refusing to do anything about it. It's called global warming. ~AH1(TCU) 15:35, 3 January 2009 (UTC)[reply]

The sun won't just slowly expand to engulf the earth - it'll put out huge pulses of hard radiation and do all sorts of other ill-behaved things along the way. There is probably no place in the solar system where we could survive that event. However, the sun blowing up is a fairly predictable event. We can give a fairly accurate prediction as to when that'll happen - and doubtless if our descendants survive that long - they'll know to a very accurate degree when this is going to happen. So they'd have time to do something to escape to another solar system. The problem with meteors and comets is that they are more random - and the best response is to try to deflect them somehow. We'll certainly have a few thousand or even millions of years notice that the sun is going to give up on us - but we'd be lucky to get 20 years notice of an earth-smasher en route. Black holes are not worth bothering about - there are no big-but-slow ones nearby - and we don't care too much about small ones. Fast-but-big ones are impossible to deal with - if one of those comes by, there is nothing we can do. Meteors and comets are in the middle ground - they DO wipe out entire ecologies (the Dinosaurs - and there was a report out a couple of days ago suggesting that the Clovis people of North America were wiped out by a comet/meteor impact) - and we could do something about it with present-day technology if we put our minds to it. The odds of you or me personally being wiped out by one of these things are tiny - but it's one of the top risks for humanity as a species - so I think we should spend commensurate effort on solving the problem. Global warming should be higher on our list - but comet/meteor protection ought to be up there in the top ten goals of humanity over the next 100 years. SteveBaker (talk) 02:31, 4 January 2009 (UTC)[reply]

We have one rock-solid defense against asteroids: Bruce Willis. —Preceding unsigned comment added by 79.122.10.173 (talk) 14:12, 4 January 2009 (UTC)[reply]

Pesticides

I was looking at pesticides recently and one thing which surprised me was the number of different products which are the same brand and seem to be the same thing.

For example there were a number of different products which were 12g/litre of Permethrin in the form of an emulsifiable concentrate, with the same total volume. Prices varied by a few cents in some instances. These were nominally intended for different purposes, e.g. No Silverfish beetles, No cockroaches, No ants, with appropriate instructions as to how much to dilute them (all with water only) and how to apply them, although they sometimes gave instructions for some other purposes. Am I right that these are almost certainly the exact same thing but with a different label? Or is it likely they have other ingredients to aid in how they adhere to surfaces or whatever? There was one for spiders which was a higher concentration (50g/litre), which I can see the need for.

There were also a bunch of ready-to-use sprays for similar purposes, most of which were 4g/litre of permethrin in the form of a ready-to-use liquid (why they recommended different dilutions for the concentrates but the RTU liquids are all the same, I'm not sure). These were the same quantity and IIRC the bottles were similar; I don't think they sprayed differently or anything. The prices varied more, by about $2 or so. Similar to above, there could be differences in surfactants etc., and particularly in this case in what the containing liquid is, although at least two of them said "Will remain active on inert surfaces for up to two months". Or is it likely these are more or less exactly the same product?

I do have photos of some of the products and you can also see them online e.g. [10] [11] [12] [13] and note they are registered as different products although under the same code HSR000265 (mentioned on the bottle) for the concentrates or HSR000263 for the RTU liquids.

Finally, another thing I've noticed is certain wasp powders, e.g. [14], have Permethrin in the same concentration (10g/kg or mg/g) as flea powder for carpets. Beyond perhaps differences in particle size and in applicators on the bottle, am I right these are more or less the same thing? (Some are in higher concentration [15] but I'm guessing again there's little difference otherwise.) Just to be clear, I'm not referring to the stuff meant to be applied to pets, which is probably regulated differently (here in NZ they are regulated as veterinary medicines, as opposed to the carpet powder, wasp powder etc., which are regulated as pesticides, HSR000262).

P.S. It's possible some of the above have different cis/trans ratios for their Permethrin but any that did mention the ratio were 25:75 Nil Einne (talk) 11:55, 3 January 2009 (UTC)[reply]

It's certainly possible that the surfactants might be different for products labeled for indoor, horticultural, and veterinary uses. It's also possible that it's simply a marketing decision. I'm not sure how the regulators in NZ define the codes, but I know for the OMRI (wow, no article! OMRI rates products for the National Organic Program in the US) ratings in the US, it's a product-by-product registration. --SB_Johnny | talk 13:01, 3 January 2009 (UTC)[reply]

Why does kale taste so good? Does it have lots of iron or something? Because personally, I can't get enough of it. I can't understand why anyone wouldn't want to 'eat their greens'. There are people I used to know who almost never eat fruit or vegetables of any kind. I can understand fruit, because it's too sweet, but vegetables? No way, there's loads of good vegetables. Something like kale almost tastes better to me than any other food.--Veritable's Morgans Board (talk) 16:16, 3 January 2009 (UTC)[reply]

Our taste buds are each a little different; like human fingerprints, it's likely that no two people have the exact same taste for food. Hence, some people love the taste of certain foods more than others, because that particular food...well, arouses the taste buds, I guess you could say.
Another reason you might not be able to get enough of it is if it evokes pleasant feelings. If the first time you ate kale was when a beloved relative served it, that may help; especially if that person is deceased and it helps you keep their memory alive. (And, if it was early enough in your childhood, you may not recall specifically that this is who served it to you first. For instance, I associate SpaghettiOs with my great-grandmother, because she always had some Chef Boyardee food around when I'd go to see her.) It sounds like an unusual reason for something to taste good, but it's all part of how amazingly interconnected the body is. Somebody or his brother (talk) 16:51, 3 January 2009 (UTC)[reply]
I should point out that most of what we call 'taste' is really 'smell'. Our taste buds give us only fairly crude information about flavor. (That's why things taste different - or "like cardboard" when our noses are stuffed up with a heavy cold.) SteveBaker (talk) 18:05, 3 January 2009 (UTC)[reply]
And overtly linking this to the OP, perception of smell is highly personal. This individuality is likely to be based on both experiential and genetic factors. --Scray (talk) 19:56, 3 January 2009 (UTC)[reply]
If I recall right, there's also been some research done that found something specific in dark-green vegetables (broccoli, spinach, kale, etc) that divides people. Some people are genetically far more sensitive to the taste of certain compounds in them and hence find them much more bitter (and hence more disgusting). [16] ~ mazca t|c 20:00, 3 January 2009 (UTC)[reply]
I'm quite convinced that's true of fish. So many people that I know will rave on about how wonderfully 'fresh' a particular lump of fish is - when I can barely taste the stuff at all - let alone determine freshness. Sushi is just kinda slimy and uninteresting for me - but my wife is a fanatic for the stuff - detecting amazingly subtle differences between one restaurant and the other. On the other hand - I'm pretty good at identifying types of wine and beer - so my taste mechanism obviously works well at some level. We know of all sorts of genetic differences that cause 12 different varieties of color-blindness - why should we be surprised at fishyness-tastelessness and veggy-tastelessness? SteveBaker (talk) 02:18, 4 January 2009 (UTC)[reply]
I'm afflicted with the ability to taste something nasty (metallic) in cilantro. I share this useless superpower with one parent and not the other. —Tamfang (talk) 19:59, 5 January 2009 (UTC)[reply]
I think the difference usually discussed is the dislike of some foods by supertasters, not taste-blindness. But loss of tasting ability also occurs with aging or with ageusia (I'd never heard of that problem before). Rmhermen (talk) 00:59, 6 January 2009 (UTC)[reply]

"Blind spot" in face recognition?

Is there a name for the phenomenon in which a subject has difficulty telling apart specific pairs of faces (of different individuals) when most people in the general population have no difficulty telling one face from the other in those pairs? (The subject in question doesn't have a general problem recognizing or telling apart faces, only difficulty w.r.t. specific, idiosyncratic pairs.) —Preceding unsigned comment added by 173.49.15.111 (talk) 18:22, 3 January 2009 (UTC)[reply]

See Prosopagnosia. Of course, you may not recognise it, as it looks like a thousand other articles. --Cookatoo.ergo.ZooM (talk) 19:28, 3 January 2009 (UTC)[reply]
Thanks for prosopagnosia reference, but the kind of confusion I was talking about is both selective and idiosyncratic, in that the subject generally has no problem recognizing and telling apart faces, except for specific pairs that most people won't find particularly similar or confusing. --173.49.15.111 (talk) 20:58, 3 January 2009 (UTC)[reply]
It's perfectly possible that this is a mild form of that condition. It seems that there is specific 'circuitry' inside the brain that is specialised for facial recognition. Prosopagnosia is a complete and utter failure of that circuitry - but it seems reasonable that there might be a partial failure that might make (say) recognising the shape of the mouth and nose work just fine - but eyebrows, eyes and ears fail miserably. It's tough to know - but 'mild' or 'partial' prosopagnosia would probably be acceptable terminology here. SteveBaker (talk) 02:10, 4 January 2009 (UTC)[reply]
The ability to recognize faces doubtless runs along a continuum, rather than an all-or-none distribution. Some of us are at the 95th percentile and others at the 5th percentile. Those at the 5th percentile may function pretty well, but be lousy as eyewitnesses, or as a doorman, receptionist, bartender or salesman who is expected to greet "regulars" or club members by name. Likewise a clergyman, teacher, policeman, bounty hunter or politician would benefit from a high level of memory for and recognition of faces. A workaround might be to remember verbally that "bushy eyebrows" is Mr. Smith, or similar cues, where someone good at facial recognition would just automatically recognize Mr. Smith. A psych experiment published in a journal a few years ago (I do not have the cite at hand) showed the power of the normal person to recognize faces. The experimental subjects looked through an unfamiliar high school annual one time, looking at each face once. They then showed a good ability to distinguish faces they had seen from unseen faces from other similar annuals. At the other extreme, a clerk might see a person 10 times over a month and not recognize them the next time. Edison (talk) 05:28, 4 January 2009 (UTC)[reply]

Bolbidia

The following is from Aristotle's discussion of cephalopods: One of them is nicknamed by some persons the nautilus or the pontilus, or by others the 'polypus' egg'; and the shell of this creature is something like a separate valve of a deep scallop-shell. This polypus lives very often near to the shore, and is apt to be thrown up high and dry on the beach; under these circumstances it is found with its shell detached, and dies by and by on dry land. These polypods are small, and are shaped, as regards the form of their bodies, like the bolbidia. I can't seem to find anything about what a "bolbidia" is. Does anyone know? 69.224.37.48 (talk) 20:34, 3 January 2009 (UTC)[reply]

Most commentators on the Historia animalium seem to assume that a bolbidion is the same beastie as the bolitaina mentioned by Aristotle in the passage immediately preceding the one you quote. In any event, I think the best one can say is that it's some sort of small octopus; trying to identify it with any particular species is probably futile, especially since the word is apparently attested only here and in the Hippocratic corpus. Deor (talk) 00:24, 4 January 2009 (UTC)[reply]
The ancient Greeks didn't bother much with careful observation of nature or experimentation or things of that ilk. They felt that if you couldn't just think it up out of fresh air and prove it with some kind of math - then it wasn't worth considering. So it's likely that Aristotle's observations on marine life were sketchy to say the least! SteveBaker (talk) 02:06, 4 January 2009 (UTC)[reply]
Actually, Aristotle was one who did bother with observation, particularly with regard to marine life. He used to hang out with fishermen to get specimens. The problem is that he, and the fishermen, knew exactly what βολβίδιον denoted and we don't (other than that it's a small octopus), since the art of taxonomic description hadn't been invented yet. Deor (talk) 03:41, 5 January 2009 (UTC)[reply]

World steel supply irradiated?

I saw this statement in the Wiki article Scuttling of the German fleet in Scapa Flow:

"The remaining wrecks lie in deeper waters, in depths up to 47 meters, and there has been no economic incentive to attempt to raise them since. Minor salvage is still carried out to recover small pieces of steel that can be used in radiation sensitive devices, such as Geiger counters, as the ships sank before nuclear weapons and tests irradiated the world's supply of steel."

Regarding the statement about irradiated steel, is this true ?

Thanks,

W. B. Wilson (talk) 20:35, 3 January 2009 (UTC)[reply]

I've heard it elsewhere, so it is plausible, but it involved very low levels of radiation. I do not recall the source. Edison (talk) 20:52, 3 January 2009 (UTC)[reply]
Google steel radiation battleship and you get some sources: IEEE mentions pre 1945 battleship steel as a bulk shielding material for delicate experiments detecting "cosmogenic neutron flux." Other materials of interest to such researchers include 400 year old lead. Another reliable source says that at many U.S. Dept of Energy sites, pre WW2 steel is used for shielding. Edison (talk) 20:59, 3 January 2009 (UTC)[reply]
Thanks! W. B. Wilson (talk) 21:11, 3 January 2009 (UTC)[reply]
Wow - that sounds AWFULLY bogus. I don't believe this is the reason. Sure, there obviously IS a reason for using that old steel - but I can't believe it's because of nuclear weapons.
Surely the amount of nuclear-weapon-derived radiation irradiating that 60 year old steel between the time you pull it out of the ocean and the time you form it into an instrument is comparable to the amount that modern steel picks up during the brief time between smelting the ore and casting it (remember - that iron ore is millions of years old and has been protected from atom bomb test contamination by being buried under hundreds of feet of dirt - which has got to be as good protection as 40 feet of water). The amount of time that the metal is above ground and exposed to nuclear waste contamination is going to be pretty comparable in either case. If the metal can pick up contaminants between digging the ore and making it into steel - then the finished instrument is going to be totally useless after just a few weeks because it'll pick up that same contamination during daily use.
It just doesn't make logical sense.
I'd be more inclined to suggest that steel that's been shielded from naturally occurring radiation by a large amount of water would have reduced amounts of radioactivity simply due to the half-life of whatever radioactive elements are there naturally. This would result in any radioactivity naturally present in the original iron ore having dropped off considerably over the past 60 years. Meanwhile, iron ore that's been buried in the ground (which produces background radiation from natural occurring uranium, radon gas, etc) could maybe have background levels of radiation that are unacceptably high.
I don't really see how atom bomb testing can have very much to do with it...but I could easily be wrong. If I am wrong - then I'd still lay odds that the fallout from Chernobyl was far more significant than those old bomb tests.
SteveBaker (talk) 02:02, 4 January 2009 (UTC)[reply]
Did you take a look at the sources I cited and other reliable sources from the Google search? Apparently fallout or other products of nuclear explosions since 1945 do in fact make their way into modern steel. It is not just the age of the steel per se, or the fact that it was in the ocean. Atmospheric nuclear tests after WW2 and Chernobyl may indeed have done more contamination than the 1945 tests and attacks, but that still leaves modern steel less useful for shielding sensitive detectors than steel from the pre-nuclear age. Edison (talk) 05:25, 4 January 2009 (UTC)[reply]
Yes - I did look at them - and I agree that this is what they seem to be saying - but if a reference says that 2+2=5 - then I'm going to have to at least stop and question it. This explanation doesn't make any kind of sense at all. I have no problem believing that modern steel is inferior - I just can't see how it can be artificial 'fallout' contamination that's causing that. I don't believe that those nuclear tests contaminated iron ore buried in mines that are hundreds of feet below solid rock...that just can't be true. It's also somewhat hard to believe that none of that fallout wound up floating down through 40' of water and landing on the decks of those sunken ships - or washed down into the ships from nearby rivers and beaches.
So if the ore is pristine when it's dug up - and even assuming that the metal from these wrecks is also pristine - we're only left with the time interval between mining our ore - refining it into steel and forging it into whatever the end user needs - during which time it's gonna get contaminated with all of the crud in our atmosphere and lying around on the surface layer of the soil, etc. But that's got to be comparable to the time between raising a lump of 1945 warship to the surface - cleaning off the barnacles - and reforging that into whatever the end user needs...and during that time, it's picking up contaminants at pretty much the same rate as the freshly mined ore. So it seems like there would be no advantage to using 'old' steel versus 'freshly-dug-up' steel.
I could understand not wanting to use steel made from recycled 1960 Cadillacs that have been sitting out in a scrapyard near ground zero in the Nevada desert.
So - yeah - I could be wrong - and your sources certainly suggest that - but I still don't understand how that's possible. I strongly suspect there is more to this than meets the eye.
SteveBaker (talk) 06:30, 4 January 2009 (UTC)[reply]
It is possible that steel which was melted and re-forged would have trace amounts of radioactive contamination throughout, but old steel which has not been remelted may have escaped it. Since steel does not occur naturally, the only way to get "clean" steel is to use steel that was forged before this "contamination" took place.
I have also heard this in the past, and I am unsure of its basis in truth, but that is the theory that popped into my head. RunningOnBrains 09:33, 4 January 2009 (UTC)[reply]
The reliable sources (specifically, the IEEE paper), do not cite reasons why the "pre-1945 battleship steel" is preferable or different - so there's a leap of logic to assume that this material has in any way been affected by nuclear testing. It may be different for a huge variety of reasons - different composition, different testing standards, something to do with aging process, ... etc. Nimur (talk) 14:32, 4 January 2009 (UTC)[reply]
The Health Physics journal does, however. How reliable it is, I don't know. Nil Einne (talk) 15:35, 4 January 2009 (UTC)[reply]
[Graph caption: Atmospheric carbon-14 levels. Spike represents the end of aboveground nuclear testing in late 1963.]
Not sure about the reliability of statements about radioactivity from atmospheric contamination, but I'm willing to say that they're plausible. Iron ore is usually refined in a blast furnace, which strikes me as a very efficient way to move very large volumes of modern, radioactives-contaminated air through the ore.
Meanwhile, the level of (radioactive) atmospheric carbon-14 just about doubled in the mid-1960s due to aboveground nuclear weapons testing. (Levels are still about 10% above 'natural' background carbon-14, which is generated mostly by cosmic rays.) Measuring the abundance of (excess) carbon-14 in tooth enamel (which lasts essentially for a lifetime, once laid down in one's adult teeth) and comparing to atmospheric abundance of carbon-14 is a recognized forensic technique ([17]) for determining the age of human remains. It's accurate to better than plus or minus 2 years.
With a half-life of about 5,700 years, the level of carbon-14 in the pre-WWII steel won't have been significantly reduced by radioactive decay. That said, there are no doubt other radioisotopes released by the aboveground testing. I don't know what they are, or what their relative abundance is, or whether they would also find their way into steel. Just food for thought. TenOfAllTrades(talk) 16:08, 4 January 2009 (UTC)[reply]
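For a sense of scale on that half-life point, here is a quick decay calculation in Python (the 5,730-year half-life is the standard figure for carbon-14; the 70-year interval standing in for "pre-WWII until now" is an assumption):
 # Fraction of carbon-14 remaining after a given time, from the half-life.
 HALF_LIFE_YEARS = 5730.0   # carbon-14
 elapsed_years = 70.0       # roughly "pre-WWII steel" to the time of this discussion (assumed)
 remaining = 0.5 ** (elapsed_years / HALF_LIFE_YEARS)
 print(f"fraction of C-14 remaining after {elapsed_years:.0f} years: {remaining:.4f}")
 # ~0.99, i.e. less than 1% has decayed - age alone barely changes C-14 levels.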
Right, the problem isn't whether the iron ore is contaminated. It's whether the residual contamination in the atmosphere is enough to make its way into any steel processed in our modern, somewhat-contaminated atmosphere. And nobody is claiming the steel is glowing green—it's very minor contamination that only comes into play involving instruments that are sensitive enough to detect such contamination. And Steve, I think you underestimate the amount of residual, long-term fallout produced by the US and Soviet atmospheric testing programs, which detonated probably around 400-500 megatons worth of weapons from 1945-1963. It is a significant and measurable amount. --98.217.8.46 (talk) 19:51, 4 January 2009 (UTC)[reply]
There was a time when children enjoyed "snow cream" made of snow, sugar, milk, eggs and vanilla. Then in the late 1950s or early 1960s parents stopped making it because they read about the danger of strontium-90 in the snow from atmospheric nuclear testing [18]. If steel was recycled, air used in the process would have brought these same nuclides into the steel, certainly in greater amounts than in 1944. If new steel is made from iron ore, there is no way to exclude the contamination lying on the ground when the ore is mined by blasting away the earth and digging up the ore. The difference is probably between a very low level in modern steel and a vanishingly small level in pre-1945 steel. Surface contamination would not put the radioactive particles in the bulk of the steel. Edison (talk) 23:04, 4 January 2009 (UTC)[reply]

sexual reproduction would never work.

God and I disagree about whether human reproduction works. The idea that you can combine two people's DNA to get a third working human is ridiculous. Would you combine two pieces of software, taking half the bits from one, half the bits from another, to get working offspring software? No... It would segfault as soon as you ran it. I estimate that fewer than 1 out of 100 humans would be born alive and well if DNA were really being 'combined' from the mother and the father. —Preceding unsigned comment added by 79.122.54.197 (talk) 23:37, 3 January 2009 (UTC)[reply]

Do you have a question? Deor (talk) 23:48, 3 January 2009 (UTC)[reply]
Yes. I'd like to clear up my confusion, hence sharing my arguments with you.
(To original Anon poster): Reality would disagree with you. -- Flyguy649 talk 23:50, 3 January 2009 (UTC)[reply]
I disagree strongly with reality on this point. The fact that it happens doesn't imply that it's possible.
But that's the definition of possible. When something can and does happen. --Russoc4 (talk) 00:07, 4 January 2009 (UTC)[reply]
then I suppose a woman U.S. president is not possible, since you said possible is what can and does happen. —Preceding unsigned comment added by 79.122.54.197 (talk) 00:52, 4 January 2009 (UTC)[reply]
I guess that wasn't what I meant. I mean that action implies definite possibility. It is possible for there to be an African American president of the US... it happened. --Russoc4 (talk) 01:23, 4 January 2009 (UTC)[reply]
it will happen in about 17 days. Didn't happen yet. In any case, you need to specify that anything that is, is possible, but that just because something isn't, doesn't mean it's impossible. - Nunh-huh 01:29, 4 January 2009 (UTC)[reply]
Comparing DNA to software is a false analogy. This is where your logic is going wrong. Read up on meiosis. Also, sometimes mixing DNA can go wrong, see: nondisjunction. --Mark PEA (talk) 00:02, 4 January 2009 (UTC)[reply]
So then if the current understanding of reproduction is wrong, what do you suggest is going on? --Russoc4 (talk) 00:06, 4 January 2009 (UTC)[reply]
To clarify Mark's answer, reproduction is not like cutting half out of each parent and sticking it together. Firstly, DNA replicates. The replicated versions of DNA are what go on to form a new human. I'm not convinced you'd understand if we went into detail here since you don't understand the basic concept, but there are plenty of good explanations out there if you search Google. —Cyclonenim (talk · contribs · email) 00:30, 4 January 2009 (UTC)[reply]
thanks for watching out for my interests, but why not try me. anyway DNA is just like software, the genetic code is completely equivalent to 2 bits per base pair -- there is NO OTHER INFORMATION it contains. Anyway you're technically right that replication happens before combination, since the reason those haploid cells exist in my testicles or your fallopian tubes at all is because they have reproduced from other cells. But then, when my sperm hits your egg, God would have these two haploid cells combine into a diploid cell (before starting to split etc), through sexual combination. Which is patently ridiculous -- otherwise you could just take a haploid version of two pieces of software, combine them willy-nilly, and get a new piece of software. Nice try God. —Preceding unsigned comment added by 79.122.54.197 (talk) 00:42, 4 January 2009 (UTC)[reply]
as an aside, your statement "there is NO OTHER INFORMATION it contains" is only approximately true. see epigenetics. - Nunh-huh 01:33, 4 January 2009 (UTC)[reply]
Don't tell God what to do with his software. bibliomaniac15 00:49, 4 January 2009 (UTC)[reply]
(edit conflict)Another analogy: science has trouble explaining how bees can fly. ~AH1(TCU) 00:51, 4 January 2009 (UTC)[reply]
are you saying you disagree with God that bees can fly? What does that have to do with my question? Start your own thread!
Most software is a waveform, DNA is matter. They are not "just like" each other.--OMCV (talk) 01:12, 4 January 2009 (UTC)[reply]
DNA encodes genes. the code is almost binary. —Preceding unsigned comment added by 79.122.54.197 (talk) 01:15, 4 January 2009 (UTC)[reply]
DNA can exist without hardware, computer software can not. There is a ton more INFORMATION in a DNA base pair than two bits.--OMCV (talk) 01:21, 4 January 2009 (UTC)[reply]
I strongly disagree. There is NO more information in a base pair than two bits because physically, chemically and in every other way two atoms of the same isotope of an element are identical. Two adenine bases are quite utterly indistinguishable from each other (unless there is some weird isotope present in one or more of the atoms - and I REALLY don't think that codes for anything special). Hence there is no place for more than two bits of information to reside. Your claim is bogus - wrong, wrong, WRONG! Furthermore - the DNA *IS* the hardware - no different from a punched card, a flash memory chip or a hard drive in that regard. It's meaningless to say that the hardware can exist without hardware. With appropriate DNA synthesis techniques, we could store Wikipedia on a DNA molecule with two bits per base-pair.
The information contained in a DNA strand is independent of the 'hardware' DNA molecule (we did the human genome project thing and stored the information from the DNA strand onto a bunch of CD-ROMs). That's no different from the information contained in a flash-memory chip. The distinction (and it's a pretty subtle one) is that "software" is the information that's stored on a chip or a disk drive someplace and "DNA" is the storage mechanism itself. In the case of computers, we can copy the software easily from one piece of hardware to another - and in the case of the information on the DNA strand, we can copy it (painfully) by gene-sequencing onto different hardware (a CD-ROM maybe) - or copy it easily by the rather brute-force approach of making an exact copy of the "disk drive" that the information is stored upon. Sure, we slightly blur the terms "DNA" and "Genome" where we rarely blur "RAM Chip" and "Software".
Can DNA exist without hardware? No - because the DNA *IS* the hardware. Can we store the information that's stored on the DNA without the DNA hardware? Yes! We already did that with many species when we gene-sequenced them - but no information can exist without being stored SOMEHOW in either matter or energy. Can we store software without hardware? No - just like the information on the DNA, it needs some sort of hardware to hold it. But that could be handwriting on a piece of paper - or photons shooting down an optical fibre. Your distinction is bogus - and grossly misleading. SteveBaker (talk) 03:44, 4 January 2009 (UTC)[reply]
Every base pair has vastly more information attached to it than the most complex computer subroutine. Let's remember that it's not possible to simulate a single atom in a system containing more than one electron. There are thirty or so atoms and hundreds of electrons in every nucleotide, containing information about potential bonding and countless other physical properties. The fact that every one of these subroutines is identical and can be grossly simplified to one of four symbols does not negate any of this information. It's similar to claiming a stick figure or perhaps a nine-digit number can accurately represent a person. At times such a simplification can be useful, for example in XKCD, but it's still a simplification, and its limitations have clearly not been appreciated in this situation. The question posed here did not concern the ability of DNA, software, or computers to store data but the functionality of the systems. Specifically, successful breeding that renders functional offspring. The functionality of data is directly related to the hardware it's stored in. In this case there is DNA, a type of hardware (I agree) with embedded data, versus computer software, which is comprised of bits independent of its hardware. It is not surprising that a system with an interrelated data/hardware system displays greater functionality, such as self-replication, than a software system that simply resides in its hardware. To look at it from another angle, we could also ask why two paper dolls (mostly hardware) can't breed to form a new paper doll while people can. As I mentioned, there are similarities between people and dolls (stick figures), but for some reason most people think that sounds dumb. That was my point.--OMCV (talk) 05:07, 4 January 2009 (UTC)[reply]
It's very clear that you don't understand the first thing about information theory (and I happen to have a degree in the subject). And it's too hard and too late for me to explain it to you. You have not understood the distinction between the "information" and the "substrate" in these two cases. That blurring of boundaries is throwing off your thinking and causing you (honestly) to talk gibberish. Suffice to say that it's as irrelevant that a DNA base-pair has all of that internal 'state' (due to atoms and electrons and 'stuff') as it is irrelevant that the transistors that make up one 'bit' in your RAM chip are made of atoms and electrons and stuff. It's not the amount of stuff inside that matters - it's the amount that's stored and reproduced when the information is 'expressed' (copied, transferred, used). In the case of DNA, it doesn't matter what the spin on the 9th electron of that 3rd carbon atom inside that base pair is because that information doesn't get passed on to the other strand of the DNA as it's copied - and it doesn't cause any difference in how the gene that it's a part of forms a protein molecule. So just as the precise state of the atomic structure of one bit of your RAM chip doesn't affect how Internet Explorer runs - so the internal state of that base-pair doesn't affect how the DNA works. Hence two bits per base pair - period. This is a VITAL property of systems like software and DNA - and it's the reason computers use digital circuits and not analog circuits for running software...if every teeny-tiny quantum fluctuation affected how your software ran - it wouldn't work repeatably and it would be useless. Similarly, if the precise electron configuration of a DNA base-pair mattered when copying it or expressing a gene as a protein - then the DNA would make different proteins all the time and we'd die within a few milliseconds! Accurate reproduction and expression is a property of purely DIGITAL systems - and DNA is exactly that - a quaternary-digital system. Two bits per base pair - period - nothing else matters.
As for self-replication. We most certainly can (and do) make self-replicating software (we call them "viruses"...and there is a reason for that!). It turns out not to be very useful because computers can run multiple instances of a single copy of the code - but that's "an implementation detail" that doesn't affect any of the arguments...if it did, we'd make our software replicate - it's trivial to make it do that.
If you still think otherwise then I'm sorry but you simply don't know enough science to comment intelligibly on such a technical matter.
SteveBaker (talk) 06:07, 4 January 2009 (UTC)[reply]
I don't have a problem with self-replication. It's the idea of two pieces of software replicating with EACH OTHER that I find ridiculous. And no, we don't have examples of two different viruses of the same "species" (close enough to reproduce 'sexually') combining with each other willy-nilly, so that if they have fifteen offspring, each will be a healthy, working piece of software, and also subtly different from its siblings. It's a ridiculous thought, and I can't believe God would have people combine in such a way. It's absurd. As I said, I estimate fewer than 1 out of 100 offspring combined in this way from two people's DNA would be a healthy, functioning human. 79.122.10.173 (talk) 13:49, 4 January 2009 (UTC)[reply]
It's irrelevant that the code is almost binary; that doesn't make it software. DNA is not made of electrical signals, ones and zeroes and on and off, but chemicals. Not only that, but if we have to compare it to numbers, it's not two different digits, it's four (for the four bases). You can't say DNA is like binary code when it simply isn't. —Cyclonenim (talk · contribs · email) 01:24, 4 January 2009 (UTC)[reply]
Software isn't "made of electrical signals" either - I can store software in my brain (I can recite the quicksort algorithm from memory in 10 different computer languages - so it's not even in 1's and 0's until after I type it into a computer and have the compiler compile it). I can store software as magnetic signals (most of my software is on a magnetic disk at this very moment) - Computer software is transmitted to the Mars rovers every day using a radio link - the software is 'made of' photons for the many minutes it takes to get to Mars...and I could go to any one of a hundred companies around the world and have them store a short piece of software as a DNA molecule (See: Gene synthesis) - although at about $0.25 per bit, I'm not sure I'd be storing anything very large that way! Then we can go the other way - we can gene-sequence a DNA molecule - get the long list of A's, G's, T's and C's - and replace each one of those letters with '00', '01', '10' or '11' and our genome information is just 1's and 0's. We can send it to Mars on a radio beam, copy it to our hard-drives - and (interestingly) pay a company $0.25c per bit to turn it BACK into DNA again. There quite simply ISN'T a distinction between software and the bits that make up a DNA molecule. You are confusing the information with the thing that holds the information. SteveBaker (talk) 03:58, 4 January 2009 (UTC)[reply]
Yes, DNA codons consist of three nucleotides, where four different nucleotides are possible. --Russoc4 (talk) 01:29, 4 January 2009 (UTC)[reply]
So it's a base-4 coding system - quaternary - not binary. However, that's a trivial distinction - they are both 'digital' codes and that's what matters. In the early days of computers, there used to be computers that worked in base-10. We software engineers habitually 'pretend' that binary digits are grouped into three or four so we talk about base 8 (octal) and base 16 (hex) numbers without caring too much how they are represented 'under the hood'. Many communication systems use base-4 coding schemes. That DNA happens to use base-4 is quite utterly irrelevant in terms of this analogy. You can analogize the DNA 'genetic code' with a piece of software code and the comparison stands up quite well: Nucleotides are like bits (except they are in base-4), Codons are like machine-code instructions, Genes are like subroutines, Chromosomes are like compilation units, the entire Genome is like a computer program (and a surprisingly small one IMHO). The analogies are very close (and I don't think that's an accident). However, the OP's analogy fails quite utterly - but not for that reason. My complete answer (which explains this point) is below. SteveBaker (talk) 03:07, 4 January 2009 (UTC)[reply]

I once found that if I added subroutines to a program from a library of Fortran subroutines, each did what it was supposed to do and the overall program worked. This might be a better analogy to the genetic combinations of sexual reproduction than your false analogy of taking a few bits from one program and a few bits from another. Each of the combined units has to have a certain degree of completeness, like a gene, and not just random base pairs like your analogy would imply. You need complete modules. Edison (talk) 01:31, 4 January 2009 (UTC)[reply]

(EC)I'll bite a bit. Meiosis is not like trying to put together two random programs; it's more like taking two versions of a program that were once the same but have been further modified or tweaked by different developers and then put together. However, the coding for each module within the software would have to be about the same size and in roughly the same place within the overall program. (I'm not a programmer; I'm a biologist. Pardon my gross simplification/poor analogies of programming. And I'm thinking Windows and its various components in my analogy.) Human chromosomes are by and large extremely similar; each is around 96-99% identical to its sister chromosome. (This is from the recent sequencing of the haploid human genome reported in Nature earlier in 2008; I've got to run, but I'll get the exact numbers and ref later.) So during meiosis, the two chromosomes swap genetic material but the genes are essentially in the same order, with only minor differences between them. You aren't trying to put together a fly with a human; that wouldn't work. Or Windows crossed with Mac OS X. I'll post more later, but I have to run now. -- Flyguy649 talk 01:38, 4 January 2009 (UTC)[reply]


The thing our OP is missing is that the two humans are ALMOST identical - they differ only by the tiniest percentage of their DNA (remember - human DNA differs from chimpanzee DNA by less than 1% - so a 1% difference is a LOT!) But if you attempt to cross-breed (say) an Aardvark with an Apple Tree - then the odds of getting a successful living plantoid-creature from the random gene shuffling are indeed almost exactly zero. I don't know where this 1 in 100 estimate comes from - but I can't believe there is any science whatever behind it - so that's what we call "a wild guess" and not "an estimate"!
Our OP's computer program analogy is also faulty. If you took two functioning computer programs that were both derived from the same original source code and did almost exactly the same thing - and were identical except for a few different lines of code and a couple of different constants - then you could indeed take random subroutines from one and put them into the other and the resulting code would stand a very good chance of working. (In the software business, we do this all the time when we do a 'merge' using a version control system such as subversion - and the resulting hybrid of my latest version and my co-worker's latest version does indeed work 99% of the time.)
More importantly for the survival of our offspring - there is a lot of redundancy in the genome - most genes are present in two copies, one on each chromosome of a pair - so if one copy fails, the other takes over. That's how someone can be a 'carrier' of something like sickle-cell anaemia without suffering any consequences from it. It's only when both parents have one defective copy of the gene that their offspring stands a 1 in 4 chance of getting two bad copies, in which case the disease manifests itself.
So you may rest easy - this is no problem at all to understand - no deep mysteries - just simple genetics. SteveBaker (talk) 01:41, 4 January 2009 (UTC)[reply]
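A quick way to see where that 1 in 4 comes from is to enumerate the possible allele combinations directly - a toy Python sketch, with allele symbols invented just for this illustration ('A' = working copy, 'a' = defective copy):

    # Two carrier parents (Aa x Aa); each passes one allele at random to a child.
    from itertools import product
    from fractions import Fraction

    parent1 = ['A', 'a']
    parent2 = ['A', 'a']
    offspring = list(product(parent1, parent2))        # four equally likely combinations
    affected = [pair for pair in offspring if pair == ('a', 'a')]
    print(Fraction(len(affected), len(offspring)))      # 1/4 - both copies bad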
I thought the Windows analogy was good. People can have a random assortment of different versions of different DLLs and it all still works. Sort of. By the way, genetic algorithms used in software implement something like sex, and they are quite good at evolving useful designs. Dmcq (talk) 12:14, 4 January 2009 (UTC)[reply]
Nice one! Genetic algorithms are an excellent response. Remove the "reproductive" component and the algorithm will not improve. --Scray (talk) 15:55, 4 January 2009 (UTC)[reply]
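For anyone who hasn't met them, here is a bare-bones genetic algorithm in Python. It evolves a bit string towards all 1s ("OneMax"), and every parameter (population size, mutation rate, fitness function) is made up purely for illustration:

    import random

    LENGTH, POP, GENERATIONS = 32, 40, 60

    def fitness(genome):
        return sum(genome)                      # count the 1s

    def crossover(a, b):
        cut = random.randrange(1, LENGTH)
        return a[:cut] + b[cut:]                # the "reproductive" step

    def mutate(genome, rate=0.01):
        return [bit ^ (random.random() < rate) for bit in genome]

    population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP // 2]         # keep the fitter half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children

    print(fitness(max(population, key=fitness)), "out of", LENGTH)

The crossover line is the 'reproductive' component; take it out and you are left with plain mutation-plus-selection hill climbing.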

In the computer code analogy, mixing bits or machine language instructions from two different programs is likely to end badly. But if the programs were in Basic, and one was displaying a picture while the other was playing music, or printing text, mixing the instructions would often yield a program which would execute, occasionally with interesting results. There could still be structural errors which would prevent execution, but sometimes the result would be something like the combined execution of the two programs. If one program said print "hello" and the other said to sound a "beep", these two events could happen in turn. Loops or "goto" statements could produce random results when intermixed. Edison (talk) 22:49, 4 January 2009 (UTC)[reply]

So here's where genetics and programming as we do it differ, especially as regards the poster's presumed dilemma. In order for there to be two humans creating a third human, they must be functional "programs." That is, PRINT "HELLO WORLD" and PRINT "HELLO World" are a priori valid programs, otherwise there would not be the issue at hand. Additionally, DNA when it is copied has a number of "protections" "built-in" such that it is very unlikely that in copying PRINT -> PRINT you'll yield anything other than PRINT. It's still possible, and something along those lines may contribute to stillbirth, cancer, and spontaneous abortions. But the data isn't just held in quaternary form - every value is duplicated, so not only does a mutation have to spontaneously occur AND sneak past the error checker (which was called p13 when I was in school, although apparently they went around and renamed these things since then), it also must occur at two sites! Even giving a 50-50 probability (which is surely outrageously in the argument's favor) for all that, it's still less than a 1-in-8 chance for a single point mutation (contrast with 100:1, the OP's claim). Also in our favor is that a lot of our genetic material is junk DNA, or functionally irrelevant - say, my eye color. Even if I mutate a brand spanking new eye pigment, I'm still a functional human being. Finally, DNA - as best I understand it - doesn't seem to be a blueprint in the terms we'd think of it - "Make a five foot long spine, see page 15 for details," but rather, it's more like a progressive algorithm (although I seem to have that name wrong - if anyone remembers the term, by all means, edit and claim) where "any" set of data can fit in. That is, if there was a point mutation in "five foot long spine", it would of necessity be a valid product (as opposed to, for example, a cluster resulting in "dive tool rong line") - although perhaps not necessarily "human".

Or, in short, while you're entirely likely to get PRINT "HELLO WoRlD" from the two parents, it's not terribly likely you'll just hop on over to "MOV AX,GGGG". It can and does happen, but consider Levenshtein distance as applied to two functional inputs. 98.169.163.20 (talk) 23:55, 4 January 2009 (UTC)[reply]
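Since Levenshtein distance has just been invoked, here is the standard dynamic-programming version in Python (nothing genetics-specific about it):

    def levenshtein(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                # deletion
                                curr[j - 1] + 1,            # insertion
                                prev[j - 1] + (ca != cb)))  # substitution (free if equal)
            prev = curr
        return prev[-1]

    print(levenshtein('HELLO WORLD', 'HELLO WoRlD'))   # 2 - the two "parents" are close
    print(levenshtein('HELLO WORLD', 'MOV AX,GGGG'))   # a much bigger jump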

While SB and 98 have given good responses and others have briefly mentioned this, a key point is that you don't just randomly get bits and pieces of DNA from your parents. You get whole chromosomes. When we're talking humans, your parents by and large don't really have different genes. They have different alleles, in other words different versions of the same gene. Most of these differences have little effect. To use a programming example, let's say you have a subroutine which in response to a signal adds question marks to the end of sentences (that signal is sent by other subroutines which only send the signal when the sentence is a question). Parent A has two functioning copies of this subroutine, Parent B has two defective ones. If Parents A and B mate, the resulting program could have either two copies of one version, or one of each. If it has at least one copy of the subroutine, it will add the question marks. If it has none it won't. The latter will be a little annoying but is not critical. We know that because parent B had it and parent B wouldn't exist if the problem was critical. And that's a key point. It's not possible that parents are missing key subroutines (or genes) because they wouldn't themselves exist if they were (it is possible they only have one copy of a key subroutine or gene). P.S. One clear example of the flaw in the OP's thinking is this "otherwise you could just take a haploid version of two pieces of software, combine them willy-nilly, and get a new piece of software". Except that a mother and father aren't two pieces of different software. They are both humans. We're not talking about a Dreamfall and NOD32 'mating' but two different versions of NOD32 with very minor differences mating. Nil Einne (talk) 10:49, 5 January 2009 (UTC)[reply]
Yes - exactly. And (as I pointed out) - software engineers do PRECISELY that as a routine part of their jobs. Already this morning I took a version of the software that I've been working on for a couple of weeks - and a version of the same software that someone else has been working on for several weeks and used the 'merge' feature of the subversion version control software to merge them together. The resulting program runs like a charm. Sometimes it doesn't come out like that - but generally, it does. This is precisely analogous to some set of DNA data that started off in a common ancestor of husband and wife - which has been altered (very slightly) by evolution over the past dozen generations so that the husband's version of those DNA strands is a little different from his wife's. The resulting "merge" of those two sets of information is (just like my merged software) very likely to work just fine. More importantly (as others have pointed out) - the way cellular biology works - there are two copies of most of the information in the DNA - and you get one from each parent - so there is a fair bit of redundancy that makes the system much more fault-tolerant than my software merge analogy. That's reflected in the fact that perhaps one in fifty of my software merges results in a program that either won't compile or crashes in some way - even though both 'parent' versions of that software compiled and ran just fine...whereas far more than 49 in 50 children grow up without significant DNA defects. SteveBaker (talk) 15:18, 5 January 2009 (UTC)[reply]
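Here is the question-mark-subroutine example from a couple of posts up as a literal Python sketch - the function names and structure are invented for this illustration only:

    import random

    def working(sentence):        # functional allele of the "add question mark" routine
        return sentence + '?'

    def broken(sentence):         # defective allele: does nothing
        return sentence

    parent_a = [working, working] # two functioning copies
    parent_b = [broken, broken]   # two defective copies

    # A child inherits one copy, chosen at random, from each parent.
    child = [random.choice(parent_a), random.choice(parent_b)]

    def phenotype(copies, sentence):
        # One working copy is enough - this is the redundancy discussed above.
        return working(sentence) if any(c is working for c in copies) else sentence

    print(phenotype(child, "Is this a question"))   # always ends in '?' for this cross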
It's an invalid analogy. DNA is more akin to data than to executable code. If you have two well-formed XML files, both of which conform to the same set of strict XSD rules, you absolutely can combine the two XML files to produce a third valid XML file. 216.239.234.196 (talk) 13:36, 5 January 2009 (UTC)[reply]
No - I can't agree with that. There is VERY little difference between 'code' and 'data'. XML is virtually a programming language. Very often, what's data for one program (eg a '.html' file for Apache) is also code for another (JavaScript for Firefox). Sufficiently complex data file formats (like XML) can have bugs in them that are similar in nature to coding errors. I can't accept that analogy. SteveBaker (talk) 15:18, 5 January 2009 (UTC)[reply]
Executable code contains machine level instructions that a CPU will execute. Data does not. Think of code as action; data as information. 216.239.234.196 (talk) 17:50, 5 January 2009 (UTC)[reply]
Actually, code and data are indistinguishable. If you change a file extension from .exe to .bmp, what was once a program is now a weird-looking picture. That's how buffer overflows work: if you enter the right data, it becomes code and gets executed. Franamax (talk) 18:39, 5 January 2009 (UTC)[reply]
So I repeat - is JavaScript code or data? It doesn't contain "machine level instructions" because it's interpreted, not compiled - so by your first criterion, it's data (and Apache would agree with you - but Firefox would not). It does cause 'action' - so by your second criterion it's 'code' - but an XML file can also be "action" if it's interpreted as (say) a route for a robot to follow. You could have a tag in the XML that tells the robot to repeat the previous route - or to repeat that route until it hits a wall...before you know it, your XML "data" has become "code". I suppose one could resort to the Church-Turing thesis and say that anything that acts equivalently to a Turing machine is "code" and everything else is "data" - but the BIOS ROM in your PC (because it cannot modify itself) is not code by that definition. Think about programming in Logo. You say things like "left 90 ; forwards 10 ; right 90" to make a robot move around - is that a route like our robot's XML route-description-file? Is it code? Sorry - but your distinction is blurry as all hell - and there is a very good reason for that - some things are both code and data - some things are clearly just code and others just data - and yet other things change their nature from code to data and back again depending on the context.
In a sense, DNA is code that is "executed" by the RNA molecule 'computer' - in another sense it's data that the RNA "program" processes. It's a lot like JavaScript in that some processes are treating it as data (the process that duplicates a DNA molecule for example doesn't care what 'instructions' the DNA strand contains - it's just data) - while other processes treat it as code - with a proper instruction set (such as the process that creates proteins). Read our article genetic code - it describes DNA as a set of 'rules' - which is pretty close to saying 'code'. A codon is made of three bases. Each base represents a 2-bit value - so there are 4x4x4=64 possible codons. The analogy with bits and machine-code instructions is completely compelling. The 64 codons contain instructions to the RNA to grab a particular amino acid and hook it onto the end of the protein - but the set also includes instructions that stand for "START" and "STOP". The analogy is even more complete than that - some organisms have different RNA "computers" that use different machine-code sequences from ours - just as the old Z80 computer had a slightly different instruction set from the (broadly compatible) Intel 8080. The RNA "computer" even has things like exception handling to deal with illegal instructions and prevent the system from "crashing" if a bad DNA sequence is encountered. We implemented all of these things in our computers before we understood how DNA worked - it's an amazing case of "convergent evolution" where cellular processes and electronics have converged on almost identical solutions. SteveBaker (talk) 18:39, 5 January 2009 (UTC)[reply]
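To make the instruction-set analogy concrete, here is a toy 'translate until STOP' loop in Python. The table is only a small fragment of the real genetic code, and the whole thing is of course a cartoon of what the cell's translation machinery actually does:

    CODON_TABLE = {
        'AUG': 'Met',                            # also the START codon
        'UUU': 'Phe', 'UUC': 'Phe',
        'GGU': 'Gly', 'GGC': 'Gly',
        'AAA': 'Lys', 'GAA': 'Glu', 'UGG': 'Trp',
        'UAA': 'STOP', 'UAG': 'STOP', 'UGA': 'STOP',
    }

    def translate(mrna):
        start = mrna.find('AUG')                 # look for the START "instruction"
        if start == -1:
            return []
        protein = []
        for i in range(start, len(mrna) - 2, 3): # step through codons, 3 bases at a time
            amino = CODON_TABLE.get(mrna[i:i + 3])
            if amino is None or amino == 'STOP': # crude "illegal instruction" handling
                break
            protein.append(amino)
        return protein

    print(translate('GGAUGUUUGGUAAAUGA'))        # ['Met', 'Phe', 'Gly', 'Lys']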

Congrats to the OP for a provocative question topic... I presume from the way the question was posed that the intent was to stir up a great argument and it has certainly been fun to read. However, as stated by several others already, the analogy is false. While it may be true that DNA is "information" and that some aspects of the DNA code are similar (and in many ways analogous) to computer code, we're missing a huge point here. Humans are diploid organisms, meaning that there are two nearly exact copies of each chromosome, and two nearly identical copies of (almost) every gene being expressed at the same time. It isn't as though fertilization slaps together random halves of two parental genomes to get a new whole (it was already suggested that the OP review meiosis for further clarification).

  • My question back to the OP is, Why don't computer systems work in such a way as to execute two nearly identical copies of each program simultaneously?

I assume (as a non-expert in computer programming) that it would be a lot more difficult to do that way. THAT is why the initial question is a false analogy. Computer systems are the equivalent of a haploid genome. To pose the original question, you would need to have a computing environment in which every computer was a diploid system (new DOS = Diploid Operating System?) running two versions of each program (gene) simultaneously. Allow the user of each computer system to make small alterations (mutations) in the programs to optimize them for their own uses. THEN, select two computers (perhaps via an online dating service?), randomly choose one version of each program running on those computers (the haploid gamete), compile the two sets of programs together into a new, unique diploid software combination (the diploid offspring), and determine whether the resulting software could operate a new computer. Keep in mind that you'd have to limit users to tweaking the existing programs rather than writing new programs within a given operating system, unless you wanted to create a new species of computer systems. I expect there would be changes made in programs that would be incompatible when both programs were being run simultaneously, but you might do better than 1:100 if you designed the system robustly enough. --- Medical geneticist (talk) 21:56, 5 January 2009 (UTC)[reply]

January 4

Engineering indexes

I'm trying to get a handle on the publication history and relative importance of an engineer in the area of civil and mechanical engineering. He does work in earthquake resistance of structures and (more recently) mitigation of explosion damage to buildings.

I'd rather not name the individual in question, since I don't think the poor guy wants to see a Ref Desk discussion about him pop up the next time he does a vanity search on Google. For testing purposes, some of the (recently-deceased) 'grand old men' of the field include George W. Housner and John Blume.

Can anyone who has some experience with civil engineering research point me to the typical indexes and sources one might use to check out an engineering researcher's background and academic credentials? (I know that in the biomedical sciences I'd hit PubMed/Medline, and that for pure math and physics I'd go straight to arXiv — is there an engineering equivalent?) TenOfAllTrades(talk) 01:32, 4 January 2009 (UTC)[reply]

Try Engineering Village (also known as Inspec, Compendex), Web of Science (also known as the Institute for Scientific Information (ISI) database), Scopus, or simply Google Scholar. These are all general engineering databases; I am not aware of any specialized to Civil Engineering alone. Abecedare (talk) 01:45, 4 January 2009 (UTC)[reply]

Frequency of mass

Since all mass has an associated energy according to Einstein's E=mc^2, and E=hf (according to Planck?), then can it be said that every mass has an associated frequency and is traveling at the speed of light through something? If not, why not?--GreenSpigot (talk) 02:25, 4 January 2009 (UTC)[reply]

De Broglie hypothesis. Algebraist 02:29, 4 January 2009 (UTC)[reply]
Aha! But is everything traveling at c?--GreenSpigot (talk) 02:44, 4 January 2009 (UTC)[reply]
No. Algebraist 02:50, 4 January 2009 (UTC)[reply]
All matter has an associated wave. For 'macroscopic' objects (like Aardvarks...I'm trying to work Aardvarks into as many answers as possible today!) - the frequency is insanely high. De Broglie says that the wavelength is Planck's constant divided by momentum (which is mass times velocity). Since Planck's constant is an insanely tiny number and we're dividing it by a macroscopic momentum - the wavelength is tiny and therefore the frequency is crazily high. 6.6×10^-34 divided by the momentum of a typical Aardvark (we'll go with 66 kg because it's a BIG aardvark and it's just fallen off a cliff so it's moving at 1 m/s relative to us) gives it a wavelength of about 10^-35 meters - and an Aardvark frequency of at least 10^35 Hz (why is this not mentioned in Aardvark?). Considering that the highest frequency we 'naturally' consider dealing with is cosmic rays at about 10^19 Hz - this is an outrageously high frequency! SteveBaker (talk) 02:54, 4 January 2009 (UTC)[reply]
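The arithmetic behind that, as a small Python sketch (the 'frequency' line is the crudest possible estimate, simply taking the wave to travel at the aardvark's speed; the E=hf value would be larger still):

    # de Broglie wavelength: lambda = h / (m * v)
    h = 6.626e-34          # Planck's constant, J*s
    m = 66.0               # one large aardvark, kg
    v = 1.0                # falling past us at 1 m/s

    wavelength = h / (m * v)       # ~1e-35 m
    frequency = v / wavelength     # ~1e35 Hz on this crude reading
    print(wavelength, frequency)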
Yes but the aardvark universe is outrageous. Does, therefore, all matter travel at 'c' in the space time continuum?--GreenSpigot (talk) 02:59, 4 January 2009 (UTC)[reply]
A wave packet...possibly a falling aardvark
Not exactly. You have to distinguish the concept of a 'wave packet' which has a group velocity that isn't tied to the wave's speed (which is indeed 'c'). The graphic (at right) gives some kind of a visualisation of what's going on. The big bunched-up blob is the particle and it is moving across the screen at whatever (sub-light) speed it needs to...but the high frequency squiggles inside are moving along at the speed of light. Technically - the particle is spread out over space - it's not a point as you might expect - and that's something that comes out in Schrödinger's equation as a 'probability density function'. The blob is showing us the probability that the particle is at such-and-such position. In the case of an electron (for example) this is a rather diffuse thing - its mass is tiny so its wavelength is rather long - so we have the whole Heisenberg uncertainty principle business going on - where we don't know where the particle is if we measure its momentum accurately (which you'll recall figures into that wavelength calculation from De Broglie) - or if we pin down its position accurately - we can't know its momentum. Our Aardvark however has a VERY small wavelength - so (fortunately for anthills everywhere) we know pretty much exactly where it actually is. SteveBaker (talk) 03:20, 4 January 2009 (UTC)[reply]
Just want to check, is it appropriate to say that a macroscopic object is associated with a well-defined wavelength, as per de Broglie? Not that I know much about this at all, but I felt it strange to describe an object by a wave with a wavelength much smaller than the object itself and therefore thought it applied only to particles. —Bromskloss (talk) 16:37, 5 January 2009 (UTC)[reply]

Advantage of large population

What are the advantages of the world population being as large as it is, and what would the potential advantages be of it being even larger? NeonMerlin 04:25, 4 January 2009 (UTC)[reply]

Well, "large labor pool" comes to mind, but since we're no longer engaged in massive slave labor-driven projects, that may not be such a huge advantage. Overpopulation is increasingly becoming a problem as it is; I'd be inclined to think that the advantages are far outweighed by the disadvantages. I'd be very interested in hearing arguments and facts to the contrary, though. -- Captain Disdain (talk) 04:42, 4 January 2009 (UTC)[reply]
I've been thinking a lot recently about the opposite case - the downsides of a falling population. But if I reverse my 'time arrow' - I can come up with some relevant thoughts on this topic:
Any possible benefits of population growth can't be in the form of production or consumption - because at best they cancel out - and at worst, consumption outstrips production because of limited resource availability. (Food, water, land, minerals, fuel, plants, animals - you name it!)
So the benefits of more population could only be in the areas of things like ideas or software - where one idea can serve any number of people and a piece of software can be copied as many times as you like - hence more people means more ideas and therefore more benefits.
Let's imagine a world where the population doubled:
Consider (say) the business of making cars. If you have twice the number of people - you need twice the number of cars but you have twice the number of people to work in car factories to make them - so no benefits.
But consider the business of making computer games. If you have twice the number of people writing them - then there are twice as many different titles on the store shelves - but the cost of physically manufacturing twice the number of disks is very tiny. So we'd have twice as much variety - twice as much choice. Since the total number of games sold would double - and the total number of people working on them would also double - the profit (and cost) per game wouldn't change. You'd expect to see twice the number of new movies made each year - twice the number of new operas written.
So if more choice is a good thing - then so is more population.
Sadly - doubling the population also doubles the amount of CO2 pumped into the atmosphere - doubles the rate at which we pull fish out of the oceans (which dramatically reduces the fish population - resulting in immediate disaster) - doubles the amount of fossil fuels we need - and therefore halves the amount of time until we run out. This would be a truly monumental disaster.
So I see almost 100% downside - but in areas where ideas matter - it's possible there would be some upside. Of course in some fields there is already too much in the way of new ideas. Scientists find it increasingly impossible to keep up with the latest news in their chosen fields because the journals where such things are reported are proliferating too greatly. Every year there are movies that I'd be interested to go and see - but there are more of them than I have time for. So ultimately - even this small upside loses its value. I suppose our notional car factory has twice the number of clever guys figuring out fuel saving strategies - so there might be some claw-back there - but I think it's small.
Interestingly, the small upside of a growth in ideas is a major problem with shrinking population size...if we could somehow halve the world's population (which would solve an AWFUL LOT of problems!) - the number of new movies, new computer games, clever new products from Apple corp - all of those things would slow down dramatically. With half the number of people making new TV shows - but the exact same number of hours to fill each week - the number of reruns on TV would skyrocket. That would be necessary too because with only half the number of people to advertise to - the advertisers can't afford to spend as much.
I think mankind could take care of that though - and in a sense things like Wikipedia are already doing that. By getting a better quality of life for this smaller population (because we have less to struggle with in terms of pollution and dwindling natural resources) - people could have more leisure time. We could improve things like the Internet (half the number of people means more bandwidth per person) - and have better penetration of arts and sciences from other cultures - and have a larger PERCENTAGE of the population generating interesting new content. Within limits - it's probably manageable.
SteveBaker (talk) 05:45, 4 January 2009 (UTC)[reply]
I suppose one could come up with a scenario where a large population is necessary to save the Earth, like a meteor heading towards Earth and everyone working to create the rockets and equipment needed to save the Earth. It's an extremely unlikely scenario, though.
There is a huge benefit to an ever-increasing population, however, in that more workers are then available to support the retired population. China is headed into a period where this law will work against them, as their One Child Policy has produced workers who now must support two retired parents, and, in some cases, four retired grandparents, in addition to their own children. That's a lot to ask of a single worker. While an ever increasing population isn't the same as a large population, it inevitably leads us there. StuRat (talk) 14:46, 4 January 2009 (UTC)[reply]
A large population may be necessary when people decide not to reproduce. This is happening in some European countries and the population is falling. Developing countries are still reproducing quickly but even their population increase will soon decelerate. ~AH1(TCU) 14:54, 4 January 2009 (UTC)[reply]
Except that people stop reproducing precisely because of the large population. This can be due to wanting to prevent the global problems caused by overpopulation, or due to more immediate concerns. For example, a lack of housing can make people reluctant to have children, if they know they will all be crowded into the same small apartment. StuRat (talk) 15:04, 4 January 2009 (UTC)[reply]
This is not a professional answer, rather some thoughts and additional questions. An ever-increasing population in which more workers are necessary to support the retired sounds a lot like a pyramid scheme. Given finite resources, wouldn't this eventually lead to collapse? Also, the idea that people stop reproducing precisely because of a large population seems like a good model given the assumption of very short lifetimes - won't our huge rate of resource consumption, combined with our long lifetimes lead to an overshoot above the equilibrium, followed by a shortage of resources? 41.243.38.111 (talk) 19:22, 4 January 2009 (UTC) Eon[reply]
I think it's more likely that people reproduce less because they often have guaranteed pension now and don't need to rely on their children to support them when they are old. Another important reason may be "the pill". Icek (talk) 14:15, 5 January 2009 (UTC)[reply]
SteveBaker said that increasing production and increasing consumption would cancel out. This is false. Cost does not increase linearly with production. Sometimes, the cost per unit decreases, hence Mass production. Software is the extreme form. Other times, the cost per unit increases, as with energy. The question is which happens more. Also, StuRat said that people stop reproducing because of a large population. IIRC, it's mostly based on how well off they are. People who are better off have fewer kids. — DanielLC 22:36, 4 January 2009 (UTC)[reply]
For your argument it is sufficient that the cost is not proportional to production; e. g. there is some fixed capital cost and linear operating cost. Icek (talk) 14:09, 5 January 2009 (UTC)[reply]
Well - yes - but that effect decreases with volume. Sure, a big car factory that only makes one car a day is going to be way less efficient than one that's utterly maxed out on production. However, at some point that factory can't make any more and you have to build a second factory - and then your fixed costs just doubled. So the idealised view isn't really right. Cars made by (to take an example I actually know about) the MINI company (a subsidiary of BMW) in just one heavily loaded factory in Oxford cost a comparable amount to similar cars made by Honda, Ford and others - yet their production is about a quarter of what each of their rivals makes each year. So in that case, there was little if any saving through volume because the other companies have to have multiple factories running. Also, as volume increases, the fraction of the cost of the item in terms of labor and raw materials starts to dominate the selling price. Sure, you save on the initial design costs - but those will become increasingly negligible as more cars are sold. I think the benefits of a larger population in terms of reduced cost per item are going to asymptote to almost nothing in most industries. (Although - as I said - there are exceptions such as the software, movie and TV businesses).
You don't have to build a second factory, you can just expand the bottlenecks. You don't have to design a new car. You can just build the old one. You don't even have to design a new factory. Your resources and advertising may be cheaper in bulk. There's a reason markets tend to get dominated by a few large corporations. On the other hand, as production in general increases, you have to mine less easily accessible ores. You can no longer get cheap power from things like geothermal energy. That sort of thing also happens on a smaller scale with individual businesses, I just can't think of any good examples at the moment. — DanielLC 18:59, 5 January 2009 (UTC)[reply]
Look at our diseconomy of scale article for more examples. I wish I'd written it myself. :-) StuRat (talk) 03:04, 6 January 2009 (UTC)[reply]

A related question: What is the lower limit for the population at which we could still have as high a standard of living as we have now in industrialized nations? We would need a sufficient number of experts from every profession (well, in an ideal world, maybe some professions wouldn't need to exist, like lawyers), but I have little idea about what number of people could be considered "sufficient". Icek (talk) 16:46, 5 January 2009 (UTC)[reply]

The tricky part is in areas of entertainment and such. You could probably reduce the world population to a few million and still keep agriculture and technology running. But we would be unlikely to be able to afford (for example) the Hollywood movie business. The latest James Bond movie cost $230,000,000. If we only had 2 million people - the cost of making that movie would be pretty similar to what it is today. But then everyone would have to pay $120 to go see the movie in order for it to break even...and that assumes that every single person wants to see it! That means that movies at that production price simply won't exist. However, with more or less unlimited natural resources - it's likely that overall prosperity would be vastly greater - and perhaps then $120 to see a movie isn't so unreasonable. But then consider that with around 3,000 times fewer people around than there are now - there would only be a handful of talented actors - maybe only one or two movie directors - it would be REALLY tough to keep that kind of activity afloat.
The problem with answering this clearly is that there are just too many variables. If we reduced the population by the same factor throughout the world - the consequences would be different than if all but a few countries were simply eliminated. Having just a couple of million people spread throughout the world would cause transportation nightmares - pushing everyone together into just one region the size of (say) a single US state would have different consequences because we'd be unable to take advantage of natural resources throughout the world.
SteveBaker (talk) 18:04, 5 January 2009 (UTC)[reply]
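The break-even arithmetic above, in two lines of Python (assuming, very generously, that every single person buys a ticket):

    budget = 230_000_000                                   # the Bond budget quoted above
    for population in (6_700_000_000, 2_000_000):          # roughly 2009's world vs. a tiny one
        print(population, round(budget / population, 2))   # ~$0.03 vs ~$115 per head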
If we're going to assume a good infrastructure everywhere or something else that doesn't exist in reality, increasing the standard of living elsewhere to make up for it would be trivial. If not, I doubt this is possible with a population of any size. — DanielLC 18:59, 5 January 2009 (UTC)[reply]

Acceleration?

In this video, the parameter "acceleration" is specified in a couple of places. For example, "Acceleration ramp from 0 to 30g." What does this mean? I can't think of anything else to control on a wave generator besides frequency and, perhaps, amplitude. Besides, in a simple harmonic oscillator, acceleration is not constant anyway. --VectorField (talk) 04:49, 4 January 2009 (UTC)[reply]

Well, it didn't actually say it was a simple harmonic oscillator - so I suppose an initial acceleration at the start of each cycle is possible. But I agree - it's a strange way to state the amplitude and/or wave-shape. I presume that this number comes about because of the mechanism they are using to shake the liquid...but it's hard to know. SteveBaker (talk) 05:14, 4 January 2009 (UTC)[reply]
It's common practice with shaker tables, like the one in the video, to specify the output in units of acceleration. This is because standards for vibration and shock tolerance, such as those in MIL-STD-810, are specified in units of acceleration. For a sinusoidal waveform, the stated acceleration is usually the RMS value, but for other waveforms it could be the peak. You can convert from acceleration to displacement by integrating twice:
a = A sin ωt ==> v = -(A/ω) cos ωt ==> d = -(A/ω²) sin ωt
where a is instantaneous acceleration, A is peak acceleration, ω is angular frequency, v is speed, and d is displacement. The last expression shows that peak displacement (what you called amplitude) is A/ω². Don't forget to convert your 30g figure into m/s².--Heron (talk) 12:31, 4 January 2009 (UTC)[reply]
RMS acceleration makes sense, but strikes me as a queer way of specifying amplitude. Perhaps acceleration is more relevant to durability than amplitude? So,
rms acceleration = 30g = 300m/s² -> max acceleration = 600m/s² -> displacement = A/ω² = 600m/s² /(2π 120hz)² = 1.06mm.
That sounds about right for a displacement, eh? --VectorField (talk) 00:42, 5 January 2009 (UTC)[reply]
Almost right. I think you want to multiply RMS by root 2, not 2, to get peak. As you say, acceleration, not amplitude, is what breaks things. In vibration testing, nobody cares what the amplitude is, as long as it's small enough to fit inside the test chamber. :) --Heron (talk) 21:47, 5 January 2009 (UTC)[reply]
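Putting the displacement formula above and the sqrt(2) correction together, the numbers from this thread work out like this (a quick Python check, nothing more):

    from math import pi, sqrt

    g = 9.81
    a_rms = 30 * g                 # 30 g RMS, in m/s^2
    a_peak = a_rms * sqrt(2)       # peak = RMS * sqrt(2) for a sine wave
    omega = 2 * pi * 120           # 120 Hz as angular frequency, rad/s
    d_peak = a_peak / omega**2     # peak displacement, from d = A / omega^2
    print(d_peak * 1000, "mm")     # roughly 0.7 mm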
It means that the acceleration starts at 0g and increases in a linear manner up to 30g. What's the problem with that?--GreenSpigot (talk) 01:24, 5 January 2009 (UTC)[reply]

Why is Jelly easier to swallow than water?

During the last week or two I've been suffering from the acute phase of Infectious mononucleosis/Glandular Fever. During this time many of the symptoms are quite like tonsillitis and swallowing is painful. Can anyone suggest a good scientific reason as to why Jelly is less painful to consume than water is? Noodle snacks (talk) 06:47, 4 January 2009 (UTC)[reply]

Well if you're a UK resident, and jelly is what kids eat with ice cream (as opposed to the US meaning of "jelly" which is "sweetened fruit preserve"), there are two effects: one is that jelly is almost always served cold, which numbs the throat. The other effect is the gelatine (or gelatine substitute) used to set the jelly, which will coat the throat and soothe the inflammation. --TammyMoet (talk) 10:52, 4 January 2009 (UTC)[reply]
Water can also contain some irritants, like chlorine products, used to kill off the little nasties. This is true of tap water and bottled water that the makers fill from tap water. In addition, bottled water may contain some irritating chemicals which have leached out of the plastic bottle into the water. StuRat (talk) 14:33, 4 January 2009 (UTC)[reply]
Such as Bisphenol A. However, what if it's previously boiled water? ~AH1(TCU) 14:51, 4 January 2009 (UTC)[reply]
That would kill bacteria in the water, and maybe reduce the amount of chlorine products, but I doubt if it would remove chemicals leached in from the plastic much. Just letting water sit for a while (in a glass container) also seems to allow chlorine products to vent off. StuRat (talk) 14:58, 4 January 2009 (UTC)[reply]
I'm not convinced the amount of Bisphenol A is enough to be much of an irritant. Indeed, the same goes for chlorine. Note also any jelly made from said water is likely to contain the same chemicals. However the surface area you're exposed to may be lower. IMHO the primary advantage, other than the ones Tammy mentioned, is that jelly is a softer product than water. Okay, it's difficult to compare since jelly is a solid and water a liquid, but IMHO it's true. Nil Einne (talk) 15:23, 4 January 2009 (UTC)[reply]
If by jelly we mean the fruit spread, then the plant which produces the fruit has filtered out many of the irritants, like chlorine, from the water. If by jelly we mean some concoction made with water and gelatin, then it has probably been left to sit long enough for most of the chlorine to outgas. StuRat (talk) 16:41, 4 January 2009 (UTC)[reply]
I'm still not convinced. According to [19] [20] (not the greatest sources but the best I could find) the half-life of chlorine in water is 1-5 hours. If you're talking about a hospital setting I doubt the jelly would have been made more than 6 hours beforehand. Even in a home it seems reasonable the jelly would often be consumed within 3-6 hours of it being made. Perhaps not entirely consumed, but the OP didn't say the jelly is fine if it's 2 days old but not if it's just recently made. More importantly, whether at home or in a hospital there's a very good chance the jelly would have been covered for most of the time. Furthermore, it seems entirely plausible the half-life of chlorine in set jelly, which is therefore solid/semi-solid, would be significantly higher. All in all, particularly given it's not uncommon for people to have an uncovered glass of water beside them for several hours, it seems entirely plausible to me the water will actually have less chlorine than the jelly. BTW the production of jam does require water although that wasn't what we were talking about. P.S. If you're in the US apparently a lot of water is now chloraminated and although I'm guessing the OP isn't from the US, in that case even leaving the jelly to set in the open air is not going to help you much. Nil Einne (talk) 10:20, 5 January 2009 (UTC)[reply]
I wasn't thinking of a homemade gelatin dessert, but one bought at a store (such as the two shown at the top of that article). Perhaps the original poster can tell us which one they meant ? StuRat (talk) 18:15, 5 January 2009 (UTC)[reply]
Another possible downside to drinking water is that hard water (such as well water), may contain high mineral levels, some of which can act as irritants. If the water is treated with a water softener, then it will have a high sodium level, instead, which can also be an irritant. StuRat (talk) 14:58, 4 January 2009 (UTC)[reply]
Alternatively, it may be the lack of stuff in the water. Pure water is bad for exposed (rough, raw) tissue. Try a lightly-salted water. Not good for hydration, though. Saintrain (talk) 19:52, 4 January 2009 (UTC)[reply]
Nothing at all to do with "stuff" or irritants, and everything to do with viscosity. Swallowing is a reasonably well-studied phenomenon, as it's necessary to evaluate stroke victims who have swallowing difficulties before permitting them to try to eat. In brief, the ease of swallowing varies with the amount you're trying to swallow at once (bolus volume), and by the consistency of what you're trying to swallow (liquid, semi-solid, and solid). Studies have shown that it's easier to swallow low bolus volumes, and it's easier to swallow semi-solid than liquid. [21]. - Nunh-huh 02:29, 5 January 2009 (UTC) (P.S. is British "jelly" the same as American "Jell-o"?[reply]
) <- Here's a closing parenthesis for you. Yes, I think you're right on the Jell-O theory, with the general name being gelatin dessert. StuRat (talk) 03:10, 5 January 2009 (UTC)[reply]
Many thanks for the ")". I have a chronic shortage. And for solving the "jelly" mystery for me...though in America it's traditionally accompanied by whipped cream, not ice cream! - Nunh-huh 03:30, 5 January 2009 (UTC)[reply]

Placebo

I was reading the wiki article on placebos and the placebo effect. I'm a major non-subscriber to non-western medicine. (Obviously the corollary is that I'm a major subscriber to western medicine.) I don't buy astrology, and I don't like religion or god, I don't believe in 'mystical energies' or in the healing power of faith.

My question therefore is--after reading about the placebo effect, people can actually get (somewhat) better just by thinking that they are? I don't really understand how that's possible, what's relieving the pain, or helping the suffering?207.172.70.176 (talk) 07:28, 4 January 2009 (UTC)[reply]

When given a placebo, there is no external, bioactive treatment relieving the pain, or helping the suffering. However, because your body thinks it has been given something that has that effect, it is "tricked" into thinking it is feeling better. While there are obviously limits to the placebo effect, it's remarkable how powerful, or influential, expectation is in how our body reacts to injury or treatment. Have you ever noticed that you can cut yourself without even noticing (suggesting, therefore, there is no pain), but the moment you become aware of the cut it begins to throb? You see this in children (and association footballers) all the time, when it takes a few moments for their brain to realise what just happened to them after they fall over, and then they begin to cry. This is essentially the placebo effect in reverse: when you expect something to be painful, you begin to feel pain. Likewise, when you expect something to remove pain, the pain is attenuated. It works because pain and suffering are essentially mental states (in comparison to nociception). You can train the brain (through meditation, for example) or trick it (by placebo) to bypass or escape these states completely. See also Health applications and clinical studies of meditation.
While it's relatively easy to appreciate how the interpretation of pain in the brain can be modulated by expectation of relief, it's harder to explain how physiological effects are mediated by placebo, but they are. For example, a notable study showed that men with an enlarged prostate who were given a placebo reported improvements such as faster urine flow, and many even suffered from side effects (including impotence, diarrhea and constipation) - the so-called "nocebo" effect. The mechanism of these physiologic responses to a placebo is unknown, but the researchers suggest that the expectation of relief could have resulted in smooth muscle relaxation around the bladder, colon, prostate and urethra, which resulted in the effects reported.
So while relief of symptoms is now well established, what isn't clear is whether anything can really be "cured" by placebo. People might feel better, but they aren't really "getting better" (in the example above, the patients' prostates were still enlarged, even though some of the adverse effects of the enlargement were nullified by the placebo). There are occasional claims that people have been cured of cancer by placebo, but precious little data to back them up. Rockpocket 08:18, 4 January 2009 (UTC)[reply]
I would note two distinct effects:
1) Placebos sometimes make people report that they are better, when they really aren't, as measured in any objective way. For example, somebody reports their flu is improved, but an analysis of their spittle finds the same viral count as before.
2) Placebos sometimes actually make people better, as measured in some objective way. There could be several mechanisms to explain this. Here are two:
a) It causes different behavior which helps reduce the disease. For example, if you have a rash, scratching it can make it worse. If the placebo convinces you that it doesn't itch any more, you stop scratching and the rash heals faster.
b) It reduces stress, which is known to trigger or worsen symptoms. The exact mechanism may be related to stress hormones in the blood, like adrenalin, which cause physical problems when they remain for extended periods.
I would expect that auto-immune diseases would benefit more from placebos, as they are more likely to be triggered by stress and behavior. StuRat (talk) 14:23, 4 January 2009 (UTC)[reply]
Also, a study shows that half of all American doctors have no ethical issues with prescribing placebos. ~AH1(TCU) 14:49, 4 January 2009 (UTC)[reply]
Another reality is that many symptoms simply resolve if one is patient. A placebo, like any other medicine, is given a little time to work by the taker. If the symptoms are self-limited, the placebo gets the credit. --Scray (talk) 15:47, 4 January 2009 (UTC)[reply]
I haven't done much reading on the subject, but it's generally believed that the opioid system is involved in the placebo effect, as an opioid receptor antagonist such as naloxone can block many of the effects of a placebo. Here's a ref: http://www.ncbi.nlm.nih.gov/pubmed/15820838 --Mark PEA (talk) 16:45, 4 January 2009 (UTC)[reply]

Animal cooperation and co-learning

It just struck me: can you teach animals to teach others, explicitly or not?

Train a rat that when an LED in a box turns on, if you don't do something in 15 seconds it will shock you. Then can you put another rat in the cage and have him watch the other doing that?

Similarly, what if you have two cages and two rats, but both levers have to be flipped for it to do anything?

I'm not asking for theories, I just want to know if anyone did this in the past, any reports on it... 68.37.71.40 (talk) 08:04, 4 January 2009 (UTC)[reply]

Animals rarely teach one another, but they often learn from each other. There is a subtle, but important difference: teaching is a form of altruism - it requires that the teacher modify their behaviour to explicitly assist the student. Learning simply means the student watches and copies, at no cost to the teacher. While there are loads of examples of animal learning, there are only rare examples of animals teaching one another: See PMID 16407943 for one such example. Another possible example is Bonnie, a 30-year-old female orangutan at the Smithsonian National Zoo. She made headlines when she worked out how to whistle (the first primate ever to do so [22]); then soon after another orangutan, Indah, a friend of Bonnie's, acquired the skill also. It's not entirely clear, though, whether Indah simply copied Bonnie, or there was genuine teaching involved. But the fact that no other primate has copied human whistling, despite significant efforts to teach them, suggests there could have been some teacher/student interaction between Bonnie and Indah. Interestingly, this was done spontaneously and without human influence. Whether animals can be taught by humans to teach each other is an intriguing question. Rockpocket 09:06, 4 January 2009 (UTC)[reply]
What about birds giving flying lesson to their young? That's surely teaching. --86.125.163.133 (talk) 12:15, 4 January 2009 (UTC)[reply]
Do you have any reliable examples of birds giving flying lessons to their young? Rockpocket 19:57, 4 January 2009 (UTC)[reply]
How about a mother predator who will catch prey, bring it back to her offspring, still alive, then let them practice catching it ? StuRat (talk) 14:12, 4 January 2009 (UTC)[reply]
That's learning, not teaching. The mother doesn't give feedback to the kids about how to improve, but rather demonstrates how to catch the prey and the children copy her. —Cyclonenim (talk · contribs · email) 14:38, 4 January 2009 (UTC)[reply]
I disagree. The mother will show them the proper technique (without the final kill), then encourage the young to do the same. If they don't, she will show them again and again, until they get it right. This "showing them" phase has no benefit to her, but is solely designed as a demonstration for her offspring. StuRat (talk) 14:52, 4 January 2009 (UTC)[reply]
Ah, very true. Good point. —Cyclonenim (talk · contribs · email) 17:22, 4 January 2009 (UTC)[reply]
Yes. Prey-handling skills have been demonstrated to be taught in meerkats (see PMID 16840701). But as Maelin points out, below, it all depends on how you wish to define teaching. A strict definition would require awareness of the ignorance of students and a deliberate attempt to correct that ignorance by the teacher. That is very difficult to demonstrate in animals. It's unlikely the adult meerkats are actually thinking about teaching (our impression that they are is just anthropomorphising on our part). Rather they are responding innately. The outcome may appear similar, but there are key mechanistic, behavioural differences. Rockpocket 19:53, 4 January 2009 (UTC)[reply]
One difficulty in the question is that it's not clear, even in situations where teaching appears to be going on, such as predators bringing wounded prey for their offspring, whether the teacher is actively aware that the goal of the exercise is learning. Real teaching is a remarkably complex cognitive process, when you think about it. It requires the teacher to be able to model, in her head, the mind of her student - to create a mental representation of another mind with different knowledge to her own. The teacher then needs to work out how her own actions could increase the knowledge in that modelled mind of her student. That is an incredible amount of cognitive work, and it's not at all obvious whether any other species are capable of it. A leopard dragging wounded prey back to her young may not, at any stage, be aware that her goal is to educate her cubs. It may be (indeed, probably is) simply a purely instinctive behaviour, with the mother never realising (nor, perhaps, even being capable of understanding) that she is helping her cubs to learn. We humans are truly amazing creatures. Maelin (Talk | Contribs) 15:32, 4 January 2009 (UTC)[reply]
Wow, some of the best theories on those types of experiments appear in the Saw movies but they involve only human behavior. The easiest to understand is probably in the wild: some chimpanzees eat termites by licking a stick and poking it into the termites' hole so that little termites will stick to it, and they learn that by watching each other. Also some types of monkey eat walnuts and they learn from each other to break them open using a rock (sorry, can't remember the exact monkey but they are strong little guys). All creatures possessing them have the capability to learn stuff but even your lovable Boxer dog will forget he has a neck and choke himself to death on his leash just because he is happy to be alive (i.e., yes they are all capable but some would rather die or feel pain forever than participate in a pointless electrocution box) ~ R.T.G 18:02, 4 January 2009 (UTC)[reply]

I once had students try putting a naive rat in a Skinner box with a highly trained rat, to see if the naive rat would learn from the trained one that pressing the lever repeatedly brought water as reinforcement. Sadly, the naive rat just took the reward when it arrived and never pressed the lever; then the rats would fight. This is unpublished and thus anecdotal. Humans or apes might be quicker to learn by observation. Edison (talk) 22:42, 4 January 2009 (UTC)[reply]

Washoe is said to have taught some of the sign language she knew to another chimpanzee. Whether that was a deliberate attempt on her part, or perceptiveness and copying behavior on the part of the other is unclear. This article discusses orangutans playing charades in order to get specific desired food items. The article points out that "charades relies upon an awareness of what others do and do not understand". Recent research suggests that dolphins may possess a concept of mind. Finally, this article gives a good overview of animals teaching their young, but it is pretty loose on the definition of what constitutes "teaching." 152.16.16.75 (talk) 02:09, 5 January 2009 (UTC)[reply]

Where/when was c first used as the symbol for the speed of light?

Can anyone tell me where/when c was first used as the symbol for the speed of light (or for an equivalent like Maxwell's ratio of the electrostatic to electromagnetic unit, which has the dimension of length/time)? I know modern notations of Maxwell's equations contain c, but I don't have access to Maxwell's Treatise to find out whether it was Maxwell, Fizeau or someone else who may have first used the symbol c.Yetanothername (talk) 08:16, 4 January 2009 (UTC)[reply]

Although I'm not usually one for short answers, I'll just have to say click here. So much more information than I could even paraphrase with any justice. It's dry, I know, but what better scientific material is there right? Good day to you and I hope that helps. Operator873 (talk) 12:41, 4 January 2009 (UTC)[reply]
Great link! --Scray (talk) 15:52, 4 January 2009 (UTC)[reply]

The link is superb, and exactly what I was looking for! (I had always doubted the validity of Asimov's remark, but I see there is also more basis for it than I suspected.) Yetanothername (talk) 19:10, 4 January 2009 (UTC)[reply]

Time dilation

In the time dilation article it says that:

"In special relativity, the time dilation effect is reciprocal: as observed from the point of view of any two clocks which are in motion with respect to each other, it will be the other party's clock that is time dilated. (This presumes that the relative motion of both parties is uniform; that is, they do not accelerate with respect to one another during the course of the observations.)"

But if it were so, then wouldn't the two effects cancel each other out, and leave us with no time dilation whatsoever under special relativity? Then why do physics textbooks say that we can travel into the future by traveling close to speed of light, when we see that Earth's clocks ticks slowly at us too due to the reciprocal time dilation effect?

Thanks.

76.68.9.253 (talk) 15:31, 4 January 2009 (UTC)[reply]

In a word, no. Take two observers who are not accelerating; say, a man on a space walk in the middle of nowhere, and a man in a spaceship. The man on the spacewalk sees the spaceship traveling at 0.4c past him, and (if he could somehow see a clock on the spaceship) he would see the spaceship's clock running more slowly than his wristwatch. However, remember that neither man is accelerating: this is the key point of special relativity! Because neither man is accelerating, there is no physical difference between the spaceship moving toward the spacewalk man at 0.4c, or the man moving towards the spaceship. Because of this "relativity" of non-accelerating frames, the time dilation effect works both ways. If the man in the spaceship were able to see the spacewalk man's wristwatch, he would see that the wristwatch is running slower than his own clock.
In the case where you would use near-light-speed travel to travel into the future, you would only notice this change if you returned to the place where you started at rest (say, Earth). This would require a great force accelerating you away from Earth to a high speed, then an even greater force to accelerate you back. It is these great accelerations which lead to a different time effect, which is covered by general relativity, not special relativity. -RunningOnBrains 16:59, 4 January 2009 (UTC)[reply]
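For anyone who wants to put numbers on it, the size of the effect is just the Lorentz factor - a quick Python check for the 0.4c example above (and 0.9c, which comes up just below):

    from math import sqrt

    def gamma(beta):               # beta = v / c
        return 1 / sqrt(1 - beta**2)

    print(gamma(0.4))              # ~1.09: a clock moving at 0.4c appears ~9% slow
    print(gamma(0.9))              # ~2.29: at 0.9c, each observer sees the other's
                                   # clock running at less than half speed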
Thanks for the detailed response. It cleared up a long-standing confusion I've had with the twin paradox (I guess I should have consulted that article in the first place). Actually I was heading in a different direction with my last question; allow me to rephrase it more clearly using the following scenario:

Let's say that about six months after a spaceship left earth, one of the astronauts traveling on the spaceship gives birth to baby A. By now the spaceship's engines are shut off and so the spaceship is coasting at a constant velocity 0.9c relative to earth ("for all eternity" -- as deemed by mission control), so that both earth and spaceship are now inertial frames with relative velocity 0.9c between them. Now, AT THE SAME TIME as baby A's birth on the spaceship, one of the mission directors gives birth to baby B on Earth. So baby A and baby B are born at the same time, and are traveling in their respective inertial frames of reference that are moving at 0.9c relative to each other at their birth and until their death. If both babies live to be 75 years old in their proper times, will B appear to die earlier than A to A's frame of reference, or will A appear to die earlier than B to B's frame of reference?

Thanks.

76.68.9.253 (talk) 18:24, 4 January 2009 (UTC)[reply]

That 'AT THE SAME TIME' is frame-dependent. Is it at the same time in earth's frame, or in the spaceship's frame? It can't be both. Algebraist 18:26, 4 January 2009 (UTC)[reply]
Let's say at the same time in earth's frame. And I also realized that I have phrased the final question incorrectly, allow me to rephrase it as follows:

If both babies live to be 75 years old in their proper times, will B see herself die ahead of A, since B sees A's clock dilated; or will A see herself die ahead of B, since A also sees B's clock dilated?

Thanks.

76.68.9.253 (talk) 20:07, 4 January 2009 (UTC)[reply]

After reading the Relativity of simultaneity article suggested by the Algebraist, I've reached a disturbing conclusion: B will see herself die ahead of A in B's frame of reference, A will see herself die ahead of B in A's frame of reference, and BOTH ARE CORRECT!!! In fact, since A's and B's deaths are separated in space, there will always exist frames of reference in which A died earlier than B, and frames of reference in which B died earlier than A, and both observations are correct. Am I right?

Thanks,

76.68.9.253 (talk) 20:16, 4 January 2009 (UTC)[reply]

Yes - the disturbing thing about relativity is the end of the concept of 'simultaneous'. Observers moving at different speeds will come to different conclusions about the order that events happen...and the death of the twins is no exception to that. There are yet weirder things to consider because it's not just time that gets distorted - mass and distance also get changed. The Ladder paradox (and the related man-falling-into-hole paradox) really hurt my brain. SteveBaker (talk) 21:52, 4 January 2009 (UTC)[reply]
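To see the frame-dependence numerically, one can push the two death events through the Lorentz transformation. The sketch below uses assumed numbers (ship coasting at 0.9c, baby A born 0.45 light-years from Earth, births simultaneous in Earth's frame, both babies living 75 proper years); it illustrates the scenario above rather than defining it uniquely:
<syntaxhighlight lang="python">
# Order of the two death events in Earth's frame and in the ship's frame.
# Units: years and light-years, with c = 1.  All specific numbers are assumptions.
from math import sqrt

v = 0.9
gamma = 1.0 / sqrt(1.0 - v ** 2)

# Events in Earth's frame, written as (t, x).
B_death = (75.0, 0.0)                              # B stays on Earth
A_birth = (0.0, 0.45)                              # assumed position of the ship at birth
A_death = (A_birth[0] + gamma * 75.0,              # 75 proper years = gamma*75 Earth years
           A_birth[1] + v * gamma * 75.0)          # ship keeps coasting at 0.9c

def to_ship_frame(event):
    """Lorentz transformation from Earth's frame into the ship's frame."""
    t, x = event
    return (gamma * (t - v * x), gamma * (x - v * t))

print("Earth frame: B dies at t = %.1f y, A dies at t = %.1f y" % (B_death[0], A_death[0]))
print("Ship frame:  B dies at t' = %.1f y, A dies at t' = %.1f y"
      % (to_ship_frame(B_death)[0], to_ship_frame(A_death)[0]))
# Earth frame: B dies first (75 y vs ~172 y).
# Ship frame:  A dies first (~74 y vs ~172 y).  The ordering really is frame-dependent.
</syntaxhighlight>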
Thanks for the article, SteveBaker. My brain is already popping. 70.52.150.155 (talk) 20:58, 5 January 2009 (UTC)[reply]

Asteroid orbital elements

Hi. I'm looking for orbital elements for the asteroid 99942 Apophis. I need an online source from which I can copy-and-paste in a manner similar to this. I can easily find comet data such as from here, but I cannot find the asteroid data, can someone help search for such a site? Thanks. ~AH1(TCU) 15:45, 4 January 2009 (UTC)[reply]

There is the JPL Small-Body Database browser. (Which I notice has fixed the problem with their precision field, since the last time it was mentioned here.) I think that gives you most of what you're looking for, but I'm no astronomer, so don't take my word for it. APL (talk) 16:30, 4 January 2009 (UTC)[reply]

Please identify this flower

Please identify the species of this flower, so that I can (or you can) include that in the image description page. --Kprateek88(Talk | Contribs) 16:37, 4 January 2009 (UTC)[reply]

It will dramatically assist identification if you specify the time and location that you took this photo. Nimur (talk) 17:49, 4 January 2009 (UTC)[reply]
Metadata on the description page says 28 December 2008 in Indore, India. Matt Eason (Talk • Contribs) 18:10, 4 January 2009 (UTC)[reply]
That's right. This was in some sort of a garden, so it's likely that this flower was planted there (as opposed to occurring naturally). --Kprateek88 (talk) 11:14, 5 January 2009 (UTC)[reply]

It looks like some kind of Coreopsis. There are several different species and cultivars that are commonly grown in gardens throughout the world.--Eriastrum (talk) 21:14, 4 January 2009 (UTC)[reply]

According to Wikicommons, where this photo can be seen, it is a species of cosmos. [23] . Richard Avery (talk) 08:47, 5 January 2009 (UTC)[reply]

Special relativity, time dilation, practice?

(In response to the Time dilation question above) So, equipment available today could use a laser to broadcast pulses and a receiver to time them at enormous speeds. My wireless receiver at home runs at 384,000,000 binary digits per second (it's pretty advanced, but it's an inexpensive Motorola bluetooth-type thing). Is it not possible to set up a supersonic craft with a transmitter/receiver, and another on the ground, which stay in line with each other and measure timing variations relative to speed? Surely if travel at the speed of light would cause an 8x reduction in time-speed (is it 8 in Einstein's theories?), the fluctuations at double the speed of sound (which planes are capable of) could be measured by today's extremely capable hardware? Any notable time-speed theory experiments? ~ R.T.G 17:42, 4 January 2009 (UTC)[reply]

We can do a lot better than that. Please see the GPS article. The GPS system works by placing extremely accurate clocks in extremely well-known orbits. Orbital speed is a lot faster than aircraft speed. The GPS system must account for both the slowing predicted by special relativity and the speed-up caused by general relativity. The fact that the system's accuracy requires these corrections is proof that the effects exist. -Arch dude (talk) 18:14, 4 January 2009 (UTC)[reply]
It does say that at GPS#Relativity thanks Arch dude ~ R.T.G 19:00, 4 January 2009 (UTC)[reply]
The specific experiment you describe has also been done several times - flying an atomic clock around the world on airliners is sufficient to produce measurable time differences with one sitting on the ground. According to [24]:
"Two scientists, Hafele and Keating, did the most direct test of relativity possible, in 1971 they flew one set of atomic clocks around the world on a commercial jet liner and then compared them to a reference set left behind on the ground. The scientists flew the clocks around the world twice, once east to west and then west to east."
"The atomic clocks on the planes flying east lost 184 nanoseconds ... They gained 125 nanoseconds due to the gravitational red shift. The planes flying west gained 96 nanoseconds due to their motion and gained 177 nanoseconds due to gravity. The measured effects were within 10% of the predicted effects which was within the 20% error in the experimental technique. (The effect of gravitational redshift has now been confirmed to better than 1%)"
SteveBaker (talk) 21:43, 4 January 2009 (UTC)[reply]
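For a rough sense of scale, here is a back-of-the-envelope sketch of just the kinematic (special-relativity) part for a clock flown once around the equator at airliner speed. The speed and route are assumed round numbers, and it deliberately ignores Earth's rotation and the gravitational term, both of which the real Hafele-Keating analysis has to include:
<syntaxhighlight lang="python">
# Kinematic time dilation for a clock flown around the world at airliner speed.
# Assumed values; Earth's rotation and gravitational blueshift are ignored here.
from math import sqrt, pi

c = 299_792_458.0          # speed of light, m/s
v = 250.0                  # assumed ground speed, m/s (~900 km/h)
R = 6.371e6                # Earth's radius, m
trip = 2 * pi * R / v      # one circumnavigation, s (~44 hours of flying)

gamma = 1.0 / sqrt(1.0 - (v / c) ** 2)
lost = (gamma - 1.0) * trip                # proper-time deficit of the flying clock
print("Kinematic effect per circumnavigation: %.0f ns" % (lost * 1e9))
# ~56 ns -- the same ballpark as the 100-200 ns differences quoted above.
</syntaxhighlight>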
It always seemed to me to be discussed as an unproven theory. Doesn't the fact that the direction of the plane affected the amount of difference show some proof that gravity and friction were not the only forces at work? Maybe not proof of a time-material but at least shows a strength in cosmic (or other) influences in motion. Wow. That will be their warp drive. ~ R.T.G 01:05, 6 January 2009 (UTC)[reply]
Oh - no - far from it! It's an exceedingly well proven theory - there have been hundreds (at least) of completely different tests of relativity - and they all come out right on the money. The reason the direction matters in the airplane experiment is because the earth is rotating with one plane and against the other - so their relative velocities are not the same. As the article says - the results were within 10% of the predicted value - within error estimates of 20% - which means "it worked!". There are plenty of scientific theories to be skeptical about - but relativity shouldn't be high on your doubt-list! SteveBaker (talk) 02:11, 6 January 2009 (UTC)[reply]

What is the difference between neuronal circuitry and neural circuitry?

Hi: I'm working on the Wiki article about the essay "Is Google Making Us Stupid?" and I have been unable to determine the difference between the adjectives neural and neuronal. The question has also been posed at WikiAnswers by someone other than me. If you could enlighten me about the differences I would appreciate it. Nicholas Carr, in his essay "Is Google Making Us Stupid?", says "Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory." However, in an email, he told me that "Given what we know now about neuroplasticity, it seems certain that internet use is changing our neuronal circuitry." So he even seems to use the terms "neural circuitry" and "neuronal circuitry" interchangeably. The same goes for "neural network" and "neuronal network", as well as "neural level" and "neuronal level"—terms which are used on page 117 of Norman Doidge's book The Brain That Changes Itself without any apparent differences. I can't see any at least. Sincerely, Manhattan Samurai (talk) 19:18, 4 January 2009 (UTC)[reply]

"Neural" means "of, relating to, or affecting a nerve or the nervous system" (Merriam Webster), whereas "neuronal" means "of, or relating to a neuron" (Wictionary - MW just redirects to "neuron"). That is, "neural" involves the large scale nervous system, whereas "neuronal" involves the small scale nerve cells (neurons). However, since the nervous system is made up of neurons, something that relates to or affects the neurons will relate to or affect the nervous system as well, so the to can be taken as synonyms in most cases. I'd use "neural" in most cases (being the older and more widely used word), only using "neuronal" when I wanted to stress the "on the cellular level" connotation. -- 128.104.112.113 (talk) 20:05, 4 January 2009 (UTC)[reply]
I agree. In my personal experience as a jobbing neuroscientist, they are often used interchangeably. I get the impression neuronal tends to be used more often when referring to specific, defined circuits (because it pertains to specific neurons), whereas neural is more often used when referring to more complex, undefined circuits (because it pertains to the nervous system). That said, there are plenty of examples of the opposite [25]. I'd just choose one, and stick with it. Rockpocket 20:17, 4 January 2009 (UTC)[reply]
Thanks. I too came away with the feeling that neuronal was low-level and neural was high-level. However, sticking to one or the other is out of the question considering everyone I have come across uses both terms. Manhattan Samurai (talk) 21:25, 4 January 2009 (UTC)[reply]
I meant stick with one type of usage in the article (unless in a quote), per WP:MOS. Since there is significant ambiguity, it probably doesn't matter which is used in the article, but consistency would certainly make it easier for the reader. Rockpocket 22:05, 4 January 2009 (UTC)[reply]

How often do transposons jump?

Depending on the type, transposons can either jump around in the genome or they can proliferate by copy-and-pasting. I wonder how often this happens. Will a typical active transposon in a cell of my body jump once an hour, or once a month, or is it a rare event happening only a few times in my body in my life, or even rarer than that? I suppose the rate depends on the type of cell? Thanks, AxelBoldt (talk) 22:12, 4 January 2009 (UTC)[reply]

That is a really difficult question to answer, because actually observing transposition as it happens is like looking for a needle jumping around a haystack. Moreover different transposons will have very different rates of transposition, so generalizing is probably not helpful.
Nevertheless, some studies in Saccharomyces cerevisiae Ty retrotransposons have been carried out (we only have an article on Ty5, but the one they used was Ty1). The experiments were quite complicated, and contain a few assumptions that may or may not be accurate, but the bottom line was that a Ty1 transposition would occur in 1% of cell divisions observed. Given the number of transposons in the genome (and assuming each could function equally well in their assay), they calculated that the mean rate of transposition per Ty1 element was between 10⁻⁴ and 10⁻³ per generation. Therefore, calculating in a few more assumptions, on average Ty1 elements transpose once every 2,000 to 20,000 hours. That's between once every 2.5 months and once every 3 years.
The authors predict that this rate estimate is probably erring towards the high side, and it could be more than 10-fold slower, but it is not likely to be faster. It's also worth noting that Ty1 appears to have a higher transposition rate than Ty2 in their assay system (by as much as 25:1), and so could be an unusually active transposon and therefore not typical. Then you have to ask yourself, how applicable is this to human transposons? Rockpocket 00:07, 5 January 2009 (UTC)[reply]
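As a sanity check on the arithmetic above, converting the per-generation rate into wall-clock time only needs a generation time; the sketch below assumes roughly 2 hours per budding-yeast generation (an assumption, since the generation time is not quoted above):
<syntaxhighlight lang="python">
# Back-of-envelope conversion of the quoted Ty1 rate (1e-4 to 1e-3 transpositions
# per element per generation) into hours, assuming a ~2 h yeast generation time.
generation_hours = 2.0     # assumed value

for rate in (1e-3, 1e-4):
    hours_per_event = generation_hours / rate
    print("rate %.0e/gen -> one event per ~%.0f h (~%.1f months)"
          % (rate, hours_per_event, hours_per_event / (24 * 30)))
# 1e-03/gen -> ~2,000 h  (~2.8 months)
# 1e-04/gen -> ~20,000 h (~28 months, a bit over 2 years)
</syntaxhighlight>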
Thanks a lot, that was a very thorough answer. If we take the rate "once every couple of years" as a rough estimate, another important piece of information would be the total number of active transposons in the human genome. Would you like to add some of this to the transposon article? I could also do it if you give me the reference for the paper you mentioned. Cheers, AxelBoldt (talk) 02:01, 6 January 2009 (UTC)[reply]
Sure, the primary reference is PMID 17815421 (though PMID 1662752 also discusses the data). Rockpocket 06:52, 6 January 2009 (UTC)[reply]
Rock, do you have any information on what actually causes a transposition event? It's always seemed to me that the jumpable transposon should be created either once per gene transcription (or once per DNA replication, depending on the type) - or never at all, if there's no promoter sequence for it. Is it just an accidental transcription, is there an active system to repress transposons, what? I may have just spotted an interesting paper at PMID 2851719, but if you have any general information, it's appreciated! Franamax (talk) 07:20, 6 January 2009 (UTC)[reply]
And to follow up on a brief look at the paper I just mentioned, "regulation of Ty transposition occurs at a posttranscriptional level" seems to imply an active system for repression - so do you know what that system is? and that paper involves dropping in GAL1 promoters to fire up the Ty elements - so again, what activates them in the wild? Franamax (talk) 07:27, 6 January 2009 (UTC)[reply]
I think the mechanism of transposition induction is likely to vary between Class I and Class II transposons. I'm far from an expert on the subject (though I do find them fascinating), but I know there are some examples of inducing agents. Copia has been shown to be responsive to a "variety of environmental stresses" [26] (possibly because it contains heat shock promoter-like sequences.) Both the Tys in cerevisiae and IS10 (in E. coli) are induced by DNA damage via UV light or 4-nitroquinoline-1-oxide treatment. [27] I'm not sure the mechanism for this has been elucidated, but it appears to be under the control of the SOS response in bacteria. [28] So it appears that at least one endogenous mechanism is stress related, which kind of makes sense, I suppose.
Both epigenetic and genetic mechanisms are involved in repression of transposition. An example of the former: biotinylation of histones appears to repress transposition in some systems [29]. In the case of the P element, the presence of genes that inhibit transposase is sufficient to inhibit transposition (and, it follows, genes that promote transposase expression will induce transposition). I don't know whether there is a co-ordinated "active repression system" per se, or whether it's simply the case that the transposons are by default quite stable, or that their RNA usually gets degraded by nonsense-mediated decay, and there have to be specific (and quite rare) circumstances for them to escape that and fully retrotranspose. Rockpocket 08:25, 6 January 2009 (UTC)[reply]
Thanks. Fascinating? Yes, maximally so. I like the idea that HSP's would induce transposons, along the lines of "we've got a problem here, time to shake things up to see if we can find a new solution". I recall a paper about induction of HSP90 in A. thaliana which produced generationally stable phenotype changes - you've provided a clue to a possible mechanism. I recall also that I've seen some evidence that the homeobox genes seem particularly resistant to transposon insertions (which probably is the only thing keeping us from getting angel wings :). A vastly interesting topic - we seem to be getting closer and closer to adding a {{historical}} tag to the Junk DNA article. :) Franamax (talk) 09:02, 6 January 2009 (UTC)[reply]

Hard-boiled eggs

When peeling a hard-boiled egg, sometimes the membrane is very hard to detach from the flesh, yet other times it comes away readily. What causes this difference, and is there any way to ensure that it peels away easily more often? (I am referring to hens' eggs in particular). DuncanHill (talk) 23:15, 4 January 2009 (UTC)[reply]

For hens' eggs, I was shown when I was a child that after boiling, you must quickly dunk the eggs in cold water specifically to make them easier to peel. I don't know if that works for roosters' eggs, so it is good you are only interested in hens' eggs. -- kainaw 01:23, 5 January 2009 (UTC)[reply]
Yes very good.... I of course intended the eggs of Gallus gallus, rather than those of any other fowl. DuncanHill (talk) 01:43, 5 January 2009 (UTC)[reply]
Yeah, that'll do the trick. The idea is that the soft insides of the egg contract in the cold water, while the hard shell doesn't. This makes it easier to peel the shell off. -- Captain Disdain (talk) 01:38, 5 January 2009 (UTC)[reply]
It's not the hard shell I'm worried about, but rather the soft and flexible membrane. DuncanHill (talk) 01:44, 5 January 2009 (UTC)[reply]
Are not roosters male birds that don't lay eggs?--GreenSpigot (talk) 01:45, 5 January 2009 (UTC)[reply]
Indeed, Kainaw was pointing out the redundancy of my specifying hens' eggs. DuncanHill (talk) 01:47, 5 January 2009 (UTC)[reply]
Sometimes a little redundancy doesn't hurt. At least you won't get hard-boiled snake's eggs. The dunking treatment is supposed to make the inside of the egg contract, while leaving the membrane adhering to the shell, so it is indeed designed to deal with the membrane. The trick is to dunk and then peel rather rapidly, as if you leave the egg sitting around for a while you lose the advantage of the contraction of its insides and the membranes re-adhere. - Nunh-huh 02:32, 5 January 2009 (UTC)[reply]
I just figured that he specified hen's eggs in case someone might have thought to ask for him to specify between chicken eggs and, for example, quail eggs. Dismas|(talk) 03:07, 5 January 2009 (UTC)[reply]
Well, yes, but the quail that lays the eggs is the hen... - Nunh-huh 03:55, 5 January 2009 (UTC)[reply]
Dismas is right, I meant chicken eggs, as opposed to those of other fowl. I was using the word hen in its common usage to mean a female Gallus gallus. By the way, we have cocks here, not roosters. DuncanHill (talk) 11:56, 5 January 2009 (UTC)[reply]
Ah, nothing like Ref Desk pedantry ("I am soooo clever!") to completely derail something. And if you hadn't specified chicken eggs, you'd probably get some response about how duck eggs might be totally different, and thus there's no way to answer the question without more information. (Can you tell I have grown quite tired of this sort of "wit"?) --98.217.8.46 (talk) 15:47, 5 January 2009 (UTC)[reply]
Your complaint would have some weight if the comment you are complaining about didn't supply an answer. However, as can be seen, the main substance of the comment contained an answer that has been verified by others as being correct. -- kainaw 02:52, 6 January 2009 (UTC)[reply]

Don't hard-boil chicken eggs. Eggs plus cold water, bring to a boil, remove from heat, wait for 17 (seventeen) minutes, remove eggs from hot water and put into cool water. Fresh eggs will always be more difficult to peel than 2 to 3 week old eggs - it's the law! hydnjo talk 02:38, 5 January 2009 (UTC)[reply]

I'd like to speak in favor of the 70° egg (see [30]). Some prefer 65° or 67° eggs, but for any of these you really need a water-bath with a thermostat. - Nunh-huh 02:46, 5 January 2009 (UTC)[reply]
The biggest factor I've found is using aged eggs, as per Hydnjo's suggestion: 2 to 3 weeks old. Anythingapplied (talk) 19:20, 5 January 2009 (UTC)[reply]
The only thing tastier than a hard-boiled Gallus gallus hen egg with a little salt on it is a devilled one of same. I've never had me fill of 'em. Yum! Edison (talk) 04:12, 6 January 2009 (UTC)[reply]

ZMF in special relativity

If the centre of mass of a two-body system has non-zero momentum, what is the simplest way to calculate the energy available for particle creation (the total energy of the particles in the ZMF less their rest energy) from the particles' energies in the lab frame?

The only method I can see that I am sure works is to calculate the particles' velocities from their energies, then calculate the velocity of the ZMF, then use the velocity-addition rule to find their velocities in the ZMF, and from those calculate their energies in the ZMF. But because of the need to calculate the velocity of the ZMF, the equations get extremely complex and messy, and I am sure there must be a simpler way. Is there a quick way to calculate the energy stored in the system as motion of the centre of mass? —Preceding unsigned comment added by 84.92.32.38 (talk) 23:31, 4 January 2009 (UTC)[reply]

Unfortunately I don't know an easy way, as quartic equations always seem to turn up. But at least they can be solved in radicals. Icek (talk) 18:09, 5 January 2009 (UTC)[reply]
There is a way to do this using four-vectors, in particular the four-momentum. The four-momentum is special because the Minkowski norm of a four-momentum vector is invariant under transformation into different frames of reference. I'll state (without proof) that the total energy available for particle creation is <math>E_{\mathrm{ZMF}} = c\sqrt{(P_1 + P_2)\cdot(P_1 + P_2)}</math>, where <math>P_1</math> is particle 1's four-momentum and <math>P_2</math> is particle 2's four-momentum.
Expanding this out using the distributive law (which holds for the Minkowski inner product), this becomes <math>(P_1 + P_2)\cdot(P_1 + P_2) = P_1\cdot P_1 + 2\,P_1\cdot P_2 + P_2\cdot P_2</math>. Evaluating the Minkowski inner products results in <math>P_1\cdot P_1 = m_1^2 c^2</math>, <math>P_2\cdot P_2 = m_2^2 c^2</math> and <math>P_1\cdot P_2 = E_1 E_2/c^2 - \vec{p}_1\cdot\vec{p}_2</math>. For reference, <math>P_1\cdot P_1</math> was evaluated in particle 1's rest frame, <math>P_2\cdot P_2</math> was evaluated in particle 2's rest frame and <math>P_1\cdot P_2</math> was evaluated in the lab frame. Also, note that <math>\vec{p}_1</math> and <math>\vec{p}_2</math> are ordinary three-vectors (i.e. normal momenta). If <math>\theta</math> is the angle between the two particles then the total energy available for particle creation is <math>E_{\mathrm{ZMF}} = \sqrt{m_1^2 c^4 + m_2^2 c^4 + 2\,(E_1 E_2 - |\vec{p}_1|\,|\vec{p}_2|\,c^2 \cos\theta)}</math>. Knowing the particles' rest masses and energies in the lab frame, you can calculate the magnitudes of their three-momenta in the lab frame and just substitute into the most recent equation.
To prove my previous statement (that the total energy available is just the square root of the sum of four-momenta dotted with itself), let the lab frame be the ZMF, so that the 3-momenta are equal in magnitude but opposite in direction. Then you should get <math>E_{\mathrm{ZMF}} = E_1 + E_2</math> (in the ZMF). Someone42 (talk) 11:24, 6 January 2009 (UTC)[reply]
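To make the four-vector recipe concrete, here is a small numerical sketch. The masses, lab-frame energies and angle are made-up example values (not from the question), units are GeV with c = 1, and the rest energies are subtracted at the end to match the questioner's definition of "available" energy:
<syntaxhighlight lang="python">
# Invariant-mass method for the energy available for particle creation.
# Units: GeV with c = 1, so E^2 = m^2 + |p|^2.  All numeric inputs are example values.
from math import sqrt, cos, radians

def available_energy(m1, E1, m2, E2, theta_deg):
    """Total zero-momentum-frame energy minus the rest energies."""
    p1 = sqrt(E1 ** 2 - m1 ** 2)          # lab-frame momentum magnitudes
    p2 = sqrt(E2 ** 2 - m2 ** 2)
    s = m1 ** 2 + m2 ** 2 + 2.0 * (E1 * E2 - p1 * p2 * cos(radians(theta_deg)))
    return sqrt(s) - (m1 + m2)            # invariant mass minus the rest masses

# Example: a 10 GeV proton hitting a proton at rest (p2 = 0, so the angle is irrelevant).
print("%.3f GeV available" % available_energy(0.938, 10.0, 0.938, 0.938, 0.0))
</syntaxhighlight>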

January 5

Possible Meteorite strike

Looking at southern Russia on Google Earth, two round lakes caught my eye: Bolshove lake at North 52°44'58.11", East 78°37'05.37", and another lake (I'm not sure of its name) at North 54°35'17.46", East 71°45'21.10". NASA has an icon in the area called KULUNDA STEPPE. Could these lakes be craters from meteorites? They are almost perfect circles. My email is <removed to prevent spam>. Thanks. —Preceding unsigned comment added by 5544mik (talkcontribs) 01:02, 5 January 2009 (UTC)[reply]

Not answering the question, but here's a quick link to the location: 52°44′58″N 78°37′05″E / 52.749475°N 78.618158°E / 52.749475; 78.618158. cycle~ ] (talk), 02:01, 5 January 2009 (UTC)[reply]
I'm no expert - but wouldn't there be raised edges if they were craters? Where did all of the ejected material go? The agriculture around these lakes appears to go right up to the edge of the lakes without interruption - which seems unlikely unless the edges of the lakes are dead flat...but I could easily be wrong. SteveBaker (talk) 04:07, 5 January 2009 (UTC)[reply]
While it's certainly plausible, I wouldn't say it's the most likely, or even one of the top 3 most likely, methods of formation for these lakes. Just because a lake is circular does not mean that it is a crater lake; I'm sure most lakes in flat-terrain areas are roughly circular in shape. They appear similar to Kettle ponds, although I doubt glaciers would reach that far south in recent eras. It could be an old shaft mine which has filled with water (although this is unlikely given the flatness of the nearby terrain). It could be a man-made reservoir or irrigation lake. It could also just be a natural dip in the terrain which is below the current water table. Without knowing the depth of the lake, it is hard to make a judgment in any case. -RunningOnBrains 20:40, 5 January 2009 (UTC)[reply]
An old eroded crater may not have a raised rim. —Tamfang (talk) 20:31, 5 January 2009 (UTC)[reply]
I don't see anything round enough to scream 'crater'. —Tamfang (talk) 20:31, 5 January 2009 (UTC)[reply]

Ray that stops internal-combustion engines

I was browsing through an old newspaper online and found an article about supposed Nazi secret weapons. The article (Hurt Doberer, "New Reich Weapon May Be Dust Bomb", New York Times, 15 October 1939, p. 36) went over some things that the Nazis had claimed to invent in the 1930s, and pointed out that none of them were that spectacular or original to the Germans. The last entry was:

Ray Z—the ray that could stop an internal combustion engine. (This secret also was shared. And, anyway, the ray, though effective when dealing with a near-by motor-car engine, has never been effective against an airplane.)

I've never heard of such a thing. I googled "Ray Z" and "Z-Ray" and found, well, nothing relevant. Anybody have a clue what this refers to? --98.217.8.46 (talk) 01:59, 5 January 2009 (UTC)[reply]

Electromagnetic pulse? Did they know how to generate one by non-nuclear means back then? I dunno. --Kurt Shaped Box (talk) 02:17, 5 January 2009 (UTC)[reply]
I figure that everyone who ever went to school to learn to work on radars (the big ones that track aircraft) will have heard a story about a radar killing engines or catching a camera shop on fire. The claim is that the electromagnetic radiation from the radar will cause all spark plugs to fire at once - killing the engine. I never attempted anything of the sort. However, I have seen fluorescent bulbs light up only through the power of the radar flowing through the air. Come to think of it, it also caused my radio to turn on and faintly play some music as the radar sweep passed by. I had it sitting on top of the cabinets above my work desk, so it was above the safe altitude. -- kainaw 02:53, 5 January 2009 (UTC)[reply]

There's no need to worry. If the Nazis do indeed invent such a weapon, our esteemed scientist Nikola Tesla has already invented a death ray capable of shooting planes out of the sky and annihilating entire armies. [31] 216.239.234.196 (talk) 13:27, 5 January 2009 (UTC)[reply]

---

So the answer is, basically, nobody knows, really? Except maybe radar or EMP, neither of which existed in 1939 at energies at which this would be possible (much less routine). What impresses me about the article is the smug way the author shrugs off this idea as something that both the Nazis and many other people had developed. Obviously law enforcement agencies today would be pretty interested in something that could easily stop car engines (it would make car chases a thing of the past, at no risk to the officer or the public). --98.217.8.46 (talk) 19:20, 5 January 2009 (UTC)[reply]

Could this have been written by the same NYT ignoramus who mocked Robert Goddard? Moral: never ever fully trust what you read in the papers about science. Clarityfiend (talk) 20:54, 5 January 2009 (UTC)[reply]
The cops are interested in the idea. See "Stopping Cars with Radiation" (2007). --Heron (talk) 21:37, 5 January 2009 (UTC)[reply]
That device attacks the microprocessors controlling modern cars, which weren't around during the second World War. I doubt even an EMP could have much of an effect on an old-fashioned internal combustion engine; EMPs affect tiny sensitive circuits used in computers and such, not levers and pistons and other mechanical components.
I don't know what "near-by motor-car engine" is supposed to mean, but I'd say any device that can stop a vehicle reliably is tremendously useful, unless "near-by" means going up to the car with hand tools and disassembling the engine. --Bowlhover (talk) 05:35, 6 January 2009 (UTC)[reply]

Are there any Nobel winning scientists in baseball?

Seems a bit far-fetched but, believe it or not, not all jocks are stupid. I figured at least one of them cured the cancer of a teammate or two. Has anyone ever abandoned our beloved game of baseball and followed a noble scientific pursuit? Has any big leaguer ever broken out a chemistry set in the locker room?--Baseball and and and Popcorn Fanatic (talk) 02:29, 5 January 2009 (UTC)[reply]

Frank Sherwood Rowland played at college level (for Ohio Wesleyan University), but the Laureate who was the best baseball player is probably Lester Pearson (who wasn't a scientist, but won the Nobel Peace Prize in '57). He was a semi-pro who played for the Guelph Maple Leafs. Rockpocket 03:32, 5 January 2009 (UTC)[reply]
Not a Nobel winner, but Kerry Ligtenberg (formerly of the Atlanta Braves, among others) has a degree in chemical engineering: http://mlb.mlb.com/team/player.jsp?player_id=117763 —Preceding unsigned comment added by 24.98.239.50 (talk) 03:44, 5 January 2009 (UTC)[reply]
Dr. Bobby Brown was an all-star third baseman for the New York Yankees, and studied medicine while still an active player. He became a cardiologist and surgeon with a successful practice for several decades, then returned to baseball after retiring from medicine as President of the American League. — Michael J 23:27, 5 January 2009 (UTC)[reply]

Titanium Knives vs. Stainless Steel

I saw some Titanium kitchen knives on sale at a houseware store this weekend. They were rather expensive, but very nice looking. Are there any pros or cons of using titanium for knives? Do they hold their edge longer than normal stainless steel? Are they sharper? --71.158.216.23 (talk) 06:24, 5 January 2009 (UTC)[reply]

This blog post seems to have some good points. Dismas|(talk) 07:00, 5 January 2009 (UTC)[reply]
Aside from the points mentioned there, titanium is also not as dense as steel; according to titanium and iron, its density is only 4.5 g/cm³, compared to iron's 7.9 g/cm³. (Stainless steel's density is similar to that of iron since it is mostly iron.) For that reason, it's sometimes used for mountain equipment. See the camping utensils here, for example. Of course, I doubt your houseware store was targeting mountaineers, but titanium offers a much greater advantage in mountain equipment than it does in a regular kitchen. --Bowlhover (talk) 09:08, 5 January 2009 (UTC)[reply]
According to our article on Mohs scale of mineral hardness, hardened steel is quite a bit harder than titanium (2-3 times harder on the scale of absolute hardness). I'm not a materials scientist, but I believe blade hardness would be directly correlated with how well it holds its edge. --Bmk (talk) 13:44, 5 January 2009 (UTC)[reply]
Harder materials are less likely to wear down, but more likely to chip or crack. Japanese swords often combine harder steel for the cutting edge with softer supporting steel, to get the best benefits of both. StuRat (talk) 18:06, 5 January 2009 (UTC)[reply]
I don't think I'll be using any Samurai swords in the kitchen! --71.158.216.23 (talk) 03:02, 6 January 2009 (UTC)[reply]
Not even a Ginsu ? But what if you need to cut a can in half and then slice tomatoes with the same knife ? StuRat (talk) 05:06, 6 January 2009 (UTC)[reply]

Daytime Conflict with clock time

The day is not exactly equal to 24 hours; it falls short by about 4 minutes a day. But the clock shows exactly 24 hours, so how is this difference reconciled? —Preceding unsigned comment added by 123.237.213.65 (talk) 07:54, 5 January 2009 (UTC)[reply]

See sidereal day. The clocks are based on the Sun's apparent movement; every 24 hours, the Sun returns to approximately the same place in the sky. Because Earth is also orbiting the Sun, it only takes 23 hours and 56 minutes for a star to return to the same position. That's why a sidereal day is 23 hours and 56 minutes. To see how this works, look at the diagram in the article I linked; it explains the concept much more concisely than I can using words. --Bowlhover (talk) 09:34, 5 January 2009 (UTC)[reply]
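The missing ~4 minutes drop straight out of the arithmetic: Earth makes one extra turn per year relative to the stars. A minimal sketch:
<syntaxhighlight lang="python">
# Length of the sidereal day: Earth rotates once more per year relative to the
# stars than it does relative to the Sun, so each sidereal day is slightly shorter.
solar_day_s = 86400.0
days_per_year = 365.2422            # solar days in a tropical year

sidereal_day_s = solar_day_s * days_per_year / (days_per_year + 1.0)
diff_min = (solar_day_s - sidereal_day_s) / 60.0
h, rem = divmod(sidereal_day_s, 3600)
m, s = divmod(rem, 60)
print("sidereal day = %dh %dm %.0fs (about %.1f minutes shorter than 24 h)" % (h, m, s, diff_min))
# -> 23h 56m 4s, about 3.9 minutes shorter than the solar day
</syntaxhighlight>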

Headphones jack or line out?

I am going to record the audio signal from a reel-to-reel tape recorder. It offers both a headphone jack and some kind of line-out DIN connector (an arc of 5 pins or so) on the back. Which option would give me the best result?

For AD conversion, I'm thinking about using a gramophone, which connects to a computer via USB, and which has a connector for taking in an external audio signal. If it makes my question easier to answer, we can assume that I'll connect the reel-to-reel directly to the simple sound card of the computer instead of to the gramophone. —Bromskloss (talk) 09:50, 5 January 2009 (UTC)[reply]

Probably the LINE connections would give the best impedance match - but IMHO, you should try both and see what gives you the best results - you won't break anything by doing that. Old tape recorders tend to be pretty well-behaved. The thing that is a pain to interface to is an old record player...but I guess you've already solved that one! SteveBaker (talk) 14:51, 5 January 2009 (UTC)[reply]
Thanks for taking the time. Impedance matching, does it really matter? The input impedance of the receiving equipment is supposed to be "very high" anyway, isn't it? As for the "pain", do you refer to RIAA equalization? It's convenient not having to fiddle with that. —Bromskloss (talk) 15:06, 5 January 2009 (UTC)[reply]
I would use the line out thereby bypassing the headphone amp (which may introduce some distortion). Impedances do not have to be matched as long as you are not overloading the output of the tape recorder. Direct input to the sound card should not be a problem.--79.75.49.50 (talk) 16:56, 5 January 2009 (UTC)[reply]
We seek to load up an output. The input impedance of the thing you are connecting to should be as high as or higher than the output impedance of the source, and never lower, to avoid distortion and damage. What does a gramophone have to do with electronic audio? That is a British term for a wind-up disc phonograph with purely acoustic output. Edison (talk) 04:07, 6 January 2009 (UTC)[reply]
Ah, I did have my doubts I was using the right word. So, phonograph it is. —Bromskloss (talk) 10:03, 6 January 2009 (UTC)[reply]

How long will it take for Pluto to clear its neighborhood?

I was just wondering if anyone's ever tried to figure out how long (presumably millions or billions of years?) it would take for Pluto (through collisions with meteors, comets, asteroids, etc.) to clear its neighborhood and become a "planet"? —Preceding unsigned comment added by 216.239.234.196 (talk) 13:19, 5 January 2009 (UTC)[reply]

I don't know the answer to your question - but I'm not sure that this would ever happen. Clearing the neighborhood would be a major deal - as our article on Pluto points out, Pluto's mass is only 7% of the mass of the other objects in its orbit. Earth's mass, by contrast, is 1.7 million times the remaining mass in its own orbit. So Pluto would have to get something like 14 times bigger before it could be considered to be a planet under the present rules. Of course that would depend on whether Charon would gain mass proportionately or disproportionately to Pluto - and whether that large weight gain would cause their orbits to misbehave and one to collide with the other or break apart and form yet more debris to be swept up. Worse still - Pluto/Charon's orbit around the sun is highly eccentric and tilted at a crazy angle (about 17 degrees) to the ecliptic - if much of the debris turns out to be in the plane of the ecliptic (as one might expect) then Pluto's rare crossings of the ecliptic plane (just once every 120 years or so!) would make the opportunities for it to pull in rocks in intersecting orbits somewhat unlikely. So my gut feel is that the answer has to be "a very long time" - possibly longer than the predicted life of the Sun. SteveBaker (talk) 14:44, 5 January 2009 (UTC)[reply]

how many horsepower can a horse pull with?

How many horsepower can a horse pull with? 1? —Preceding unsigned comment added by 79.122.29.166 (talk) 16:03, 5 January 2009 (UTC)[reply]

See Horsepower from a horse. Echinoidea (talk) 16:27, 5 January 2009 (UTC)[reply]
Ah, that explains it. I've often wondered why a 50 horsepower car has difficulty going up a steep hill, when 50 horses would have no trouble pulling the same car up the same hill. StuRat (talk) 18:00, 5 January 2009 (UTC)[reply]
No, that's not the reason. A car is not designed to climb a steep hill. A small tractor has a lot less power, but it can climb the hill because it has a much lower gear ratio and it has cleats on the tires. Similarly, horses can "gear down," and they have adaptive traction control (loosely speaking.) You can raise a large car up a steep incline, or even a vertical cliff, using a 1Hp winch, given the proper gearing. -Arch dude (talk) 02:19, 6 January 2009 (UTC)[reply]
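A quick bit of arithmetic behind the winch example (the car's mass is an assumed value):
<syntaxhighlight lang="python">
# How fast can a 1 hp winch lift a car straight up?  Power = m * g * v, so v = P / (m * g).
P = 745.7        # one mechanical horsepower, in watts
m = 1500.0       # assumed car mass, kg
g = 9.81         # gravitational acceleration, m/s^2

v = P / (m * g)
print("lifting speed = %.3f m/s (%.1f cm/s)" % (v, v * 100))
# ~0.05 m/s -- slow, but with enough gear reduction the 1 hp winch gets the job done.
</syntaxhighlight>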
I saw a comic once, which clearly stated that one horsepower is defined as the power of the prototype horse in a Parisian archive. ——Bromskloss (talk) 16:22, 5 January 2009 (UTC)[reply]
Note that there is a difference between horsepower and torque. --Russoc4 (talk) 02:25, 6 January 2009 (UTC)[reply]
When early steam engines were rated in "horse-power" the steam engine promoters were very conservative in rating the engines. Thus a factory owner or mine owner who replaced a horse driving a sweep arm around with a 1 "horsepower" steam engine was pleasantly surprised, and advised his friends to also buy steam engines. Edison (talk) 04:04, 6 January 2009 (UTC)[reply]

Racemates/racemic mixtures

In my exam specification we know that we have to learn about optical isomers, with dextroenantiomers and laevoenantiomers, and why reactions tend to produce racemic mixtures.

I understand these terms, but I do not know why racemic mixtures are formed. Why is one enantiomer not formed more than the other? Note this isn't homework, it's revision. Cheers :) —Cyclonenim (talk · contribs · email) 16:04, 5 January 2009 (UTC)[reply]

In many cases, the production may go via an SN1 reaction or similar, in which the intermediate molecule in the reaction actually loses a functional group temporarily - as the carbon centre, for a brief moment, only has 3 other groups attached to it, any existing chirality at that centre is lost. Even in the case of a bimolecular reaction (such as an SN2 reaction) there is often no real factor that causes the attacking group to come in from one side in preference to another. ~ mazca t|c 17:54, 5 January 2009 (UTC)[reply]
Er, in a normal SN2 reaction, there is usually a nearly complete preference for the attacking group to come in opposite to where the leaving group is. That type of reaction usually gives clean inversion of configuration, not racemization at all. DMacks (talk) 04:29, 6 January 2009 (UTC)[reply]

Genetics

Some traits come in two varieties (for example, Mendel's round and wrinkled peas with green and yellow colors). Do all traits for all species come in only two varieties? Justify the answer by explaining the relationship between genes and traits. —Preceding unsigned comment added by Shadnasa (talkcontribs) 17:35, 5 January 2009 (UTC)[reply]

This really sounds as if it breaks the "The reference desk will not do your homework for you" rule. -- Aeluwas (talk) 17:43, 5 January 2009 (UTC)[reply]
Please do your own homework.
Welcome to the Wikipedia Reference Desk. Your question appears to be a homework question. I apologize if this is a misinterpretation, but it is our aim here not to do people's homework for them, but to merely aid them in doing it themselves. Letting someone else do your homework does not help you learn nearly as much as doing it yourself. Please attempt to solve the problem or answer the question yourself first. If you need help with a specific part of your homework, feel free to tell us where you are stuck and ask for help. If you need help grasping the concept of a problem, by all means let us know.
All traits come in just two states, such as "blue eyes" and "not blue eyes". :-) StuRat (talk) 17:55, 5 January 2009 (UTC)[reply]
You could start by reading the Genetics article. The Quantitative trait article will at least give you the "yes/no" part of your answer. The rest of the question is going to require some basic understanding of genes/proteins and sounds like a good essay to work on. You will find sections on "multifactorial inheritance", "complex traits", "polygenic traits", etc. in any good genetics textbook. As a general rule, if the question says "all", you should automatically suspect that the answer is "NO". The goal here is for you to understand why not. --- Medical geneticist (talk) 19:06, 5 January 2009 (UTC)[reply]

Volume controller

I got a fancy new radio for Christmas and would like to listen to it at work; however, it does not have a headphone jack, only a "line out" jack. The volume knob on the radio does not affect the "line out" signal, and when I plug my headphones in it is super loud. My headphones do have their own volume knob, but the control is not fine enough and I have to turn it to its lowest setting just so it doesn't hurt my ears. So currently I have to choose between super loud, kinda loud, and off. I'm wondering if there is a device I can stick between the radio's line-out jack and my headphones which would allow me to have greater control over the volume, or is there another solution? — jwillbur 18:05, 5 January 2009 (UTC)[reply]

Considering how cheap headphones are, you could just buy a set with more precise volume controls. Specifically, look for a volume slider or knob, not separate "up" and "down" buttons, as that type never seems to have much precision. If you want a free solution, put something between the headphones and your ears which will absorb most of the sound. StuRat (talk) 18:44, 5 January 2009 (UTC)[reply]


A volume control is just a variable resistor - if you have any construction skills you could buy a 'potentiometer' (aka: a 'variable resistor' or a 'pot') and wire it up in series with your headphones. That would give you another volume control that you could use to cut the volume from the radio before sending it to the headphones - and the headphone volume control would still operate after that to adjust to personal preference. Sadly, I can't think of any devices that you can just go out and buy that'll do the job. Some radios have either a switch or a menu option to switch the output jack between 'LINE' and 'Headphone' levels - but I guess you've already checked that. SteveBaker (talk) 18:49, 5 January 2009 (UTC)[reply]
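For a feel of how much series resistance would be needed, the headphones and the added resistor form a simple voltage divider. A sketch with assumed values (roughly 32-ohm headphones and a few candidate resistances; the OP's actual line level and headphone impedance may differ):
<syntaxhighlight lang="python">
# Attenuation from a resistor wired in series with the headphones:
# level = R_phones / (R_phones + R_series).  All values are assumed examples.
from math import log10

R_phones = 32.0                               # ohms, assumed headphone impedance
for R_series in (0.0, 100.0, 330.0, 1000.0):  # candidate series resistances, ohms
    ratio = R_phones / (R_phones + R_series)
    print("series R = %4.0f ohm -> level x%.2f (%.1f dB)" % (R_series, ratio, 20 * log10(ratio)))
# A few hundred ohms in series knocks the level down by roughly 10-20 dB.
</syntaxhighlight>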
I would expect a headphone amplifier, like this one, to be an off-the-shelf solution. -- Coneslayer (talk) 20:47, 5 January 2009 (UTC)[reply]
Cheap off the shelf solution: [32]--GreenSpigot (talk) 01:29, 6 January 2009 (UTC)[reply]

Artificial Gravity

I heard an idea in a sci-fi book one time that to create artificial gravity in a space station, all you have to do is spin the space station at the right speed, so that the centripetal force pushing you into the outer edge of the space station matches that of gravity on Earth. If this is indeed possible, how fast would it have to spin to simulate Earth's gravity? I suppose it'd be a function of the diameter? Would it be possible to spin fast enough that it would rip apart anything that happened to be in the center? Now that I think about it, this all seems wrong to me, since if we choose the station to be the reference point, the station is stationary and the universe is spinning around it, so no force should even be created. Am I thinking about that wrong? Thanks. Anythingapplied (talk) 19:12, 5 January 2009 (UTC)[reply]

All this and more is covered in Artificial_gravity#Rotation. Except maybe the bit about tearing something apart in the center, which doesn't seem all that likely to me unless you were spinning it at ridiculously fast rates and the thing in the center was especially prone to being ripped apart (you'd have to spin it at a rate that would be much much much more powerful than any simulated earth gravity, yes? You'd be essentially just creating a giant centrifuge). --98.217.8.46 (talk) 19:13, 5 January 2009 (UTC)[reply]
Centripetal force can work to provide artificial gravity. However, there are some reasons why it hasn't been done so far:
1) Since the force of gravity reduces to nothing at the center, that means the apparent force of gravity changes when you move toward or away from the center. Also, for a small station, the distance between your head and feet causes a significant change in apparent gravity. This causes nausea.
2) A rotating station is more difficult to use for docking. Generally, docking would only be possible at the axis, allowing two ships at most, and they would still need to match the station's rotation to dock, which is difficult, but not impossible.
3) Items like solar panels and antennae, which need to point in one direction, either need to be moving constantly relative to the station, which makes for many moving parts to break down, or they would need alternative designs. For the antennae, you could have a separate, nearby, non-rotating antenna ship that always points toward the target, and uses a low-power signal to communicate with the rotating ship with the people on board. The rotating ship could use an omni-directional antenna. For solar panels, you could put them on all sides of the rotating ship, so some are always pointed towards the Sun. This would, of course, increase weight relative to energy produced, but would also provide for redundancy, eliminate the need to use energy to aim them at the Sun and would reduce complexity which would be likely to cause failures.
4) Space walks away from the axis wouldn't work, as the astronauts would be pushed away from the ship. Thus, you would need to stop the space station's rotation to do exterior maintenance. StuRat (talk) 19:30, 5 January 2009 (UTC)[reply]
I think the major reason we have not done this yet is that we have not put anything bigger than a large truck's worth of manned station up into orbit. Especially to overcome problem 1 above, you need a big station. For our small stations with small, highly trained crews that only stay up for a few weeks or months, the price of providing rotation is not worth it. If we ever get to build real habitats, rotation will likely be the means of providing simulated gravity. Von Braun's space station design was a torus with a diameter of 76 m, i.e. with similar outer proportions to the ISS, but much, much more livable space and a much larger crew. --Stephan Schulz (talk) 20:03, 5 January 2009 (UTC)[reply]
That's partly true, but I think the main reason is that we want to study all the negative effects of microgravity on the human body and how to combat them. Loss of muscle mass and bone strength are major issues. If the ISS was built as a rotating wheel, we would lose an opportunity to figure out a cure/treatment. 67.184.14.87 (talk) 23:54, 5 January 2009 (UTC)[reply]
That sounds rather unethical: "Let's damage the health of some astronauts so we can collect data". StuRat (talk) 23:58, 5 January 2009 (UTC)[reply]
This one doesn't quite fit in with the others: Microgravity (say 1% of normal gravity) is actually the best for getting work done, as massive objects (like satellites) can be moved easily and yet your tools don't float away. This might mean you want to spin the station slowly, instead of at the 1g speed. StuRat (talk) 20:12, 5 January 2009 (UTC)[reply]
The force doesn't disappear when you change reference frames, because the artificial gravity is due to the centripetal acceleration. This means that the spinning frame is non-inertial (as spinning frames always are), and so the force is maintained when the transformation between reference frames occurs.


Certainly 'spin gravity' works. There is absolutely no doubt about that. The problem is that (as already noted) the station has to be large enough that you don't notice significant difference between gravity at your feet and your head - but remember that once you are spinning the thing - these outer sections of the station are going to be pulling on the central 'hub' with the full force of gravity. So instead of the station being a typical lightweight construction - it's suddenly got to be built with all of the structural strength of something like a bridge on earth. Since we've already agreed that it also has to be large - you have something that's physically huge and has to be strong - so it's chunky too. That's going to make for a pretty major launch weight. The suggestion to use fractional 'g' is a good one - but it's essentially impossible to determine what fraction of a 'g' is enough to counteract the alarming dangers of staying up there for prolonged amounts of time. We know that zero 'g' is incredibly harmful - both to health while aboard the craft - and (more worrying) to the long-term health of the astronauts once the mission is over. We know that 1g is good...but what happens if you spend a year on the moon at 1/6th g? We really don't know because short of building a moon-base or an actual spinning space station - we can't do the experiment. It might be sufficient (for example) to have a pair of small cabins (each the size of a phone booth - say) attached to opposite ends of a long cable so that they are able to spin. The astronauts take turns to eat, sleep and exercise in 1g while they do all of their daily work activities in zero 'g'. That might be enough to keep them healthy - and it would be vastly cheaper than spinning the entire station. But we don't know. As for the issue of performing maintenance on a spinning space station - you'd have to use ladders and safety lines and all of the other apparatus that you need for doing that kind of thing down here on earth. That at least is something we understand! Alternatively - you could always have a large flywheel for storing the rotational inertia while the station (or perhaps just the two phone-booths) is spun down to normal speeds. That would consume relatively little energy - so altering the amount of artificial gravity and allowing transfers from the non-spinning parts of the station. Docking could be handled the same way - turning off the spin for however many days the docking is going on - and putting spin back on again once the shuttle departs. SteveBaker (talk) 21:57, 5 January 2009 (UTC)[reply]
Not only the gravity gradient (difference between feet and head) is of concern, but also the Coriolis effect. For constant apparent gravitational acceleration, the Coriolis effect is proportional to <math>1/\sqrt{r}</math> and the gravity gradient is proportional to <math>1/r</math>.
The tension in a rotating ring, or the maximal tension in a rotating rod, is in both cases about <math>\rho a r</math>, where <math>\rho</math> is the density of the material, <math>a</math> is the pseudo-gravitational acceleration (9.80665 m/s² is one standard gravity ('g')) and <math>r</math> is the radius. This and the article on tensile strength gives you an estimate of how large you can build the structure - with steel it would be a few kilometers.
If there is only 1 rotating wheel-like space habitat, then it's a problem to change the orientation due to gyroscopic effects - it's better to have 2 counter-rotating wheels on one axle. To prevent bending of the axle during re-orientation one could build opposing magnets at the rims of the wheels where the forces due to the torques are smaller than at the axle.
Icek (talk) 23:12, 5 January 2009 (UTC)[reply]
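Putting numbers on the a = ω²r relation used above, here is a minimal sketch of the spin rate needed for 1 g and the head-to-foot gradient at a few example radii (the radii and the 2 m height are assumed values):
<syntaxhighlight lang="python">
# Spin rate for 1 g of "artificial gravity" (a = omega^2 * r) and the head-to-foot
# difference for a 2 m tall astronaut, at a few example station radii.
from math import sqrt, pi

g = 9.81          # target apparent gravity, m/s^2
height = 2.0      # assumed astronaut height, m

for r in (10.0, 50.0, 250.0):                 # station radius, m (example values)
    omega = sqrt(g / r)                       # required angular velocity, rad/s
    rpm = omega * 60.0 / (2.0 * pi)
    gradient = height / r                     # fractional drop in apparent gravity at the head
    print("r = %4.0f m: %.2f rpm, head feels %.0f%% less gravity than feet"
          % (r, rpm, gradient * 100))
# Small radii need fast spins and have a big gradient; a von Braun-scale wheel
# (radius ~38 m) comes out at roughly 5 rpm.
</syntaxhighlight>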

I'll just make two points relating to the fact that artificial "gravity" due to rotation of a structure diminishes as you get nearer the center. First, this could be useful. If it's found that a level equal to, say, 60% or 100% of Earth gravity is necessary to maintain the health of the station occupants, but 1% is better for some kinds of work area, then all that has to be done is to place those work areas near the center of the station and the living quarters farther out. This might involve an elongated station rather than a wheel-shaped one that would put most of the usable space around the rim.

And second, the original poster asked if the station might come apart at the center. While a flaw anywhere in the structure could cause its destruction, this would be more likely to happen near the rim, where the forces are greatest. But such a thing isn't a major risk: the forces are well understood, just as gravity is on Earth, and it's only a matter of properly engineering the structure to resist them reliably. It might be harder than doing it on Earth because the costs of lifting materials into space are large, but it's basically just a matter of engineering. And while ordinary structures sometimes do fail here on Earth, it's a pretty rare event. --Anonymous, 00:00 UTC, January 6, 2009.

A Zebra Finch as a pet?

Does anyone here have one? What are they like as pets? My impression from seeing them is that they don't really do much, have no interest in interacting with people and are a bit like just having a sparrow in a cage. Whenever I've seen them, they just seem to jump from perch to perch, tweet, eat, drink and back away if a person gets too close. Am I wrong? --84.67.67.100 (talk) 23:14, 5 January 2009 (UTC)[reply]

The article on Zebra finches mentions a bit about their behaviour and taming them. Mattopaedia (talk) 00:37, 6 January 2009 (UTC)[reply]

Have you only seen them? You must hear them before you ever consider them as pets. Zebra finch vocalizations can be very loud and, to some people, very, very annoying. Make sure you like what you hear first ;) --Dr Dima (talk) 01:20, 6 January 2009 (UTC)[reply]

January 6

Looking back in time through space

In terms of looking into space, does anyone know, in light years, what is the furthest back in time man has peered? Are we talking thousands or millions of years here? 79.75.238.142 (talk) 02:37, 6 January 2009 (UTC)[reply]

More like billions: [33]. StuRat (talk) 02:43, 6 January 2009 (UTC)[reply]
13,699,600,000 years ago!
It's a substantial fraction of the time since the big bang. We've observed and mapped the "cosmic background radiation" - which, according to our article, was emitted when the universe was just 400,000 years old. We believe the universe is 13.7 billion years old - so the answer is something like 13,699,600,000 years. (OK - we should be rounding that to 13.7 billion). (That's "years" not "lightyears" - a light year is a measure of distance - not time). In some sense, it's not possible to look further back in time than that because before then the universe was opaque - photons were scattered before they could travel any appreciable distance - so we've pretty much seen as far back as it's possible to see. SteveBaker (talk) 05:15, 6 January 2009 (UTC)[reply]
If you consider detecting microwaves to be a method of "peering", the cosmic microwave background radiation has been travelling in space since the universe first became transparent to light 13.7 billion years ago, 400 000 years after the Big Bang. It isn't possible to detect light from any earlier time because earlier photons were continuously being emitted and scattered before travelling any appreciable distance.--Bowlhover (talk) 05:11, 6 January 2009 (UTC)[reply]

Potassium supplement dosage

I was at the health food store this weekend and looked at Potassium supplements. I was surprised that they were all 99mg and only supplied 3% of your RDA. Any one know why they are all capped at 99mg? A person would have to take 33 tablets to get the full recommended daily allowance! --71.158.216.23 (talk) 03:00, 6 January 2009 (UTC)[reply]

The reason is that an overdose of potassium can kill you, so they don't want to take any chances that the supplement, along with your normal diet, will do that. The 3% is just so they can claim their supplement has a valuable nutrient in it. Potassium is actually what they use in lethal injections (in much higher dosages, of course). See hyperkalemia. StuRat (talk) 04:53, 6 January 2009 (UTC)[reply]
That explains the reason why there would be a limit. If all the tablets are exactly 99 mg, then a likely reason for that particular size is that someone wrote the law or regulation so as to say "any tablet containing 100 mg or more of potassium requires a prescription" rather than "any tablet containing more than 100 mg of potassium requires a prescription". Perhaps at the time the next-largest size below 100 mg was 50 or 75 mg, and they assumed it would continue to be, but manufacturers saw a loophole and created 99 mg tablets in order to gain a competitive advantage and stay within the law.
In that paragraph I'm just guessing, but I do know about a similar occurrence in the field of railroads. In 1922 the Interstate Commerce Commission in the US was trying to reduce the number of train crashes due to signals being passed at danger, so they mandated the installation of measures such as automatic train stops on all railways that allowed trains to run at 80 mph or more. And the result is that to this day a large number of main US rail lines have a speed limit of 79 mph. --Anonymous, 07:56 UTC, January 6, 2009.

The Universe

Could it be possible that the universe is spherical, and that when one looks through a telescope in any direction one could see all the way around the universe, back to one's own position at Earth in the future, assuming light would bend around the universe and also assuming one had a telescope that powerful? Grimmbender (talk) 03:58, 6 January 2009 (UTC)[reply]

Sounds like you mean the universe being on a spherical surface not being a sphere itself. Consider standing on Earth: if you go forward a long distance along the surface you come back to where you are. But if you are underground and you move in a straight line (cartesian, not spherical) you wind up bursting through the surface and heading out into space. The only way "universe is a sphere" would lead to "seeing forward back to behind you" is if light somehow bounced (or tunneled, or whatever) around when it got to the edge. DMacks (talk) 04:21, 6 January 2009 (UTC)[reply]


It is possible that the universe wraps around itself like that (I think we'd be talking about a hypersphere or something) - we don't know for sure - but if it does, we'll never be able to do the experiment you're thinking about because the 'observable' universe appears to be smaller than the entire universe. Because the speed of light is the universal speed limit - we can only ever see or know about parts of the universe that are close enough for light to have travelled from there to here in less than the time since the big bang. Anything further away than that (including, perhaps, the back of your own head) is forever too far away to ever be visible. SteveBaker (talk) 05:05, 6 January 2009 (UTC)[reply]
"The back of one's head is inches from the eyes but too far away to be seen." DMacks sends contents of coffee mug out for tox-screen. DMacks (talk) 05:17, 6 January 2009 (UTC)[reply]
I haven't seen anything later than this and this. It looks like the Universe is a closed Poincaré dodecahedral space, it is a "small universe" with positive curvature, and yes, we can see all the way around it (same glowing spots from the CMB in different parts of the sky). This is difficult stuff, I can put the cites here or send copies of various papers and reviews to anyone interested. Franamax (talk) 07:41, 6 January 2009 (UTC)[reply]
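To put a rough number on the "observable universe" limit mentioned above, here is a deliberately naive Python sketch. It ignores cosmic expansion, which in fact pushes the comoving distance to the horizon out to roughly 46 billion light-years, but the qualitative point - a finite horizon - is the same.

# Naive horizon estimate, ignoring expansion: light travels one light-year per year.
age_years = 13.7e9                      # approximate age of the universe in years
naive_horizon_light_years = age_years   # distance light could have travelled in that time
print(f"naive horizon ~ {naive_horizon_light_years:.3g} light-years")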

Four fundamental interactions as four formulas

It's my understanding that the inverse square law applies to gravitation for most intents and purposes, but with famous exceptions such as the Mercury anomaly. I'm trying to understand (1) why exactly the relativistic understanding of gravity changed the actual calculations of orbit, and (2) whether all the fundamental interactions can be expressed as simply as gravitation (either Newtonian or Einsteinian) can. I know this is a "big" question, so feel free to contribute whatever you can — don't feel pressured to answer the "whole thing"! Also, let me know exactly how I am thinking about this incorrectly, as is usually the case with me and quantum physics. Lenoxus " * " 05:01, 6 January 2009 (UTC)[reply]

It's not quantum physics here, but general relativity. Mercury does obey the inverse square law for gravity, but space is behaving weirdly. I think Kepler problem in general relativity will answer most of your question (if you get the maths, which I don't just now ;-). --Stephan Schulz (talk) 09:11, 6 January 2009 (UTC)[reply]
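To make point (1) concrete, the leading-order relativistic correction can be written as a single formula: the perihelion of a nearly Keplerian orbit advances by 6πGM/(c²a(1−e²)) radians per orbit. A minimal Python sketch with standard textbook values for Mercury reproduces the famous ~43 arcseconds per century; the constants below are values I've plugged in for illustration, not figures from this thread.

import math

# Leading-order GR perihelion advance per orbit: 6*pi*G*M / (c^2 * a * (1 - e^2)).
GM_sun = 1.32712440018e20    # gravitational parameter of the Sun, m^3/s^2
c = 2.99792458e8             # speed of light, m/s
a = 5.7909e10                # semi-major axis of Mercury's orbit, m
e = 0.2056                   # eccentricity of Mercury's orbit
period_days = 87.969         # Mercury's orbital period in days

dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))    # radians per orbit
arcsec_per_orbit = math.degrees(dphi) * 3600
orbits_per_century = 36525 / period_days
print(f"{arcsec_per_orbit * orbits_per_century:.1f} arcsec per century")   # ~43.0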

Mech engineering related (new idea)

I want to do a project on solar pumps, in a new way: I want a hand pump to be operated automatically. My idea is that, just as the connecting rod in an IC engine links the piston and the crank, here a connecting rod would link a piston (perhaps with a long rod attached along its axis) to the handle of the hand pump, so that the linear motion of the piston moves the handle and we can get water without human effort. My question is whether we can achieve this. Please answer; it's urgent. —Preceding unsigned comment added by 210.212.223.138 (talk) 05:55, 6 January 2009 (UTC)[reply]
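What is being described is essentially a slider-crank linkage, so the basic geometry is straightforward; whether it pumps water in practice comes down to the torque and power the solar drive can deliver. As a purely illustrative sketch (the crank radius and rod length below are made-up numbers, not anything from the question), the piston position for a crank angle theta follows the standard slider-crank relation:

import math

# Standard slider-crank kinematics: crank radius r, connecting-rod length l.
# The piston position x is measured along the slider axis from the crank centre.
def piston_position(theta, r=0.05, l=0.20):
    """Piston displacement in metres for crank angle theta in radians (hypothetical r and l)."""
    return r * math.cos(theta) + math.sqrt(l**2 - (r * math.sin(theta))**2)

positions = [piston_position(math.radians(d)) for d in range(0, 361, 45)]
print(f"stroke = {max(positions) - min(positions):.3f} m")   # 2*r = 0.100 m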

Ice in craters of Mercury

There may be ice in craters in the north polar region of Mercury. How many craters possibly have ice in them? What are their diameters, and how deep are they? —Preceding unsigned comment added by Johnz Johnz (talkcontribs) 09:29, 6 January 2009 (UTC)[reply]

Atomic masses

Which element is chosen as the standard for measuring atomic masses? Why? —Preceding unsigned comment added by 59.103.70.116 (talk) 09:32, 6 January 2009 (UTC)[reply]

According to atomic weight, the standard is 1/12 of the mass of an atom of carbon-12. The isotope was chosen mainly because it makes a convenient reference for mass spectrometry and kept the new scale very close to the older oxygen-based scales. A historical account can be found in [34]. EverGreg (talk) 10:16, 6 January 2009 (UTC)[reply]
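As a small numerical illustration of the carbon-12 standard: one unified atomic mass unit is defined as 1/12 of the mass of a carbon-12 atom. The mass value below is a standard reference figure, not something given above.

# One atomic mass unit (u) is defined as 1/12 of the mass of a carbon-12 atom.
mass_c12_kg = 1.99264688e-26    # approximate mass of one carbon-12 atom in kg
u = mass_c12_kg / 12
print(f"1 u = {u:.5e} kg")      # about 1.66054e-27 kg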

Chemicals for surgical gloves

I am a small-scale manufacturer of surgical gloves and I want to improve my product, so I am looking for a chemical composition for (natural latex) surgical gloves. Thank you. Arijitkm (talk) 10:08, 6 January 2009 (UTC)[reply]

Natural latex is a polymer of isoprene. Graeme Bartlett (talk) 10:58, 6 January 2009 (UTC)[reply]

Definition of "Life"

How do we define "Life"? I have already looked it up in a number of books. One said "Life is a set of characteristics which distinguish living organisms from non-living objects", but I want a definition that does not rely on a contrast with non-living things. Please note this is not homework. Many thanks.