Wikipedia:Reference desk/Archives/Science/2008 August 18

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.

August 18

Confidence boosting

This question has been removed. Per the reference desk guidelines, the reference desk is not an appropriate place to request medical, legal or other professional advice, including any kind of medical diagnosis, prognosis, or treatment recommendations. For such advice, please see a qualified professional. If you don't believe this is such a request, please explain what you meant to ask, either here or on the Reference Desk's talk page.

- EronTalk 01:35, 18 August 2008 (UTC)


What is an example of a natural effect of heat transfer? —Preceding unsigned comment added by Ianf50 (talkcontribs) 07:22, 18 August 2008 (UTC)

What is your definition of "natural"? Does the fact that humans feel hotter in higher temperatures and humidities count? How about a lake warming up during the course of the day? If you're looking for meteorological phenomena, almost all such phenomena involve heat transfer or the lack thereof. Perhaps the articles convection, thermal radiation, and heat conduction would help. --Bowlhover (talk) 09:00, 18 August 2008 (UTC)
A natural consequence of heat transfer is that in an isolated system, temperature differences among different parts of the system tend to reduce over time. In other words, the system tends to move toward thermal equilibrium. -- (talk) 14:40, 19 August 2008 (UTC)
And fulfill the second law of thermodynamics. --Bowlhover (talk) 06:59, 21 August 2008 (UTC)
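The drift toward thermal equilibrium described above can be sketched numerically. This is an editor's toy illustration (the transfer coefficient and temperatures are made up): two equal bodies in an isolated system exchange heat at a rate proportional to their temperature difference, so both temperatures converge on the mean while the total stays constant.

```python
def equilibrate(t_hot, t_cold, k=0.1, steps=100):
    """Exchange heat between two equal bodies in an isolated system.

    k is an assumed per-step transfer coefficient (0 < k < 0.5).
    """
    for _ in range(steps):
        flow = k * (t_hot - t_cold)  # heat flows from the hotter body
        t_hot -= flow
        t_cold += flow
    return t_hot, t_cold

hot, cold = equilibrate(80.0, 20.0)
# Both temperatures converge on the mean (50): thermal equilibrium,
# while the sum (a stand-in for total energy) stays constant.
```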

Sperm cell

What is a sperm cell? Where is it found? BINITA GUPTA (talk) 09:20, 18 August 2008 (UTC)

See Sperm. Zain Ebrahim (talk) 09:23, 18 August 2008 (UTC)

Building an atomic bomb

If the US was able to build an atomic bomb in the 40s, shouldn't rogue states also be able to by now? Of course, they don't have the brilliant minds of that time; however, they also don't have to start from scratch. Mr.K. (talk) 10:36, 18 August 2008 (UTC)

Yes. The recipe for building an a-bomb can be considered common knowledge today. The challenge is buying the raw materials and building the infrastructure needed to produce it. This has been a backdrop for sanctions and actions against Iraq, Iran and North Korea. Another option is of course to buy a ready-made bomb. For instance, the US has leased nukes to Turkey. (See List_of_states_with_nuclear_weapons#Nuclear_weapons_sharing)
The article Nuclear proliferation may have more details. EverGreg (talk) 10:59, 18 August 2008 (UTC)
In the 40's, it took a 60,000 acre building and 1/6th of the entire US electricity generation capability to refine enough Uranium for just a couple of modest-sized bombs. It is indeed the issue of finding the raw materials that is the main obstacle to every nation having one. Uranium enrichment plants are LARGE and use a lot of power - that makes them hard to hide. They also take a long time to produce significant amounts of product. As for the actual bomb construction, the technical issues are spelled out in broad terms all over the place but some rather subtle details have to be gotten right to avoid the 'fizzle' that produced such a low yield in the North Korean nuclear test. The basic principles of bomb making were understood at the outset of the Manhattan Project - most of the effort went into resolving those fine details. SteveBaker (talk) 11:29, 18 August 2008 (UTC)
Though it should be noted the low yield for the NORK bomb was probably due to them trying to be especially fancy with it. If they had chosen a conservative design, I'm sure they could have pulled it off. The differences between the first atomic bombs (e.g. Hiroshima and Nagasaki) and a bomb small enough to fit onto the head of a missile are legion. Making a simple bomb, a gigantic kludgy thing like The gadget, is the easier thing to do. -- (talk) 11:38, 18 August 2008 (UTC)
(ec) The difficulty in building an atomic bomb is one part knowledge, one part experience, two parts political ability, and maybe two parts materials. (I just made up the parts.) For a simple atomic bomb, knowledge is the easiest part, and always has been since 1945. There was enough knowledge released only two days after Nagasaki (see Smyth Report) to have a pretty good blueprint for how to make an atomic bomb. Experience is something anyone can develop—given enough time and other resources.
Political ability has proven a major factor in whether nations have or have not acquired nuclear weapons. Sanctions, pressure from allies and enemies, treaties, export restrictions, etc., have proven the only real bulwark against endless proliferation. It has hardly been a perfect system but the number of proliferators, while rising over time, is still relatively small (and could be much worse).
Materials are the lynchpin of the whole operation (and such has been known for a long time—that was the thesis of the Acheson-Lilienthal Report). Without raw materials for bomb making, you'll never get one even if you have all the other ingredients. Enriched uranium is not as hard to produce today as it was in the 1940s, but it's pretty dang hard. Plutonium is the "easier" route but it's no walk in the park either.
So knowledge is a part of it. Not the entire thing, though. Is it easier for a nation to get a bomb today than it was in 1945? Definitely. Nuclear technology and knowledge have become far more widespread—a nuclear engineering textbook today can tell you how to run a reactor to generate as much plutonium as possible, and that's something that was known to only a few people back in the 1940s. But an atomic bomb is still an immense technical program. Not as immense as it once was, but still pretty immense. -- (talk) 11:35, 18 August 2008 (UTC)
One last little comment: if you look at pages like Fat Man and Little Boy and Nuclear weapon design you'll see LOTS of information that looks absolutely vital to making a bomb. At first glance it looks like EVERYTHING is out there. In reality though there is a lot of stuff that is not out there, but you have to be pretty well-trained to see where the big gaps are. For example, there's really nothing in the public literature about how plutonium behaves when under several megabars of pressure—as it is in the center of a bomb. This is not accidental—it's a classification category. If one were a bomb-producer, this is the sort of knowledge that would be necessary to investigate, and getting it wrong could have consequences for the effectiveness of your effort. If you don't know what the secrets are, it can be hard to judge how many secrets there still are. (I don't know what the secrets are either, but I know of the existence of some of the secret areas.) -- (talk) 11:44, 18 August 2008 (UTC)
I have to question the mention above of a 60,000 acre building in the 1940s. According to our article List of largest buildings in the world, the largest building now by area is 990,000 square metres (244 acres). Where was this gigantic building, and what became of it? Thanks. Wanderer57 (talk) 16:01, 18 August 2008 (UTC)
Perhaps that was a misquote of the statement in Manhattan Project that the "Oak Ridge facilities covered more than 60,000 acres". --Heron (talk) 18:00, 18 August 2008 (UTC)
Sorry - yes. 60,000 acres for the entire facility - but it was mostly buildings. I saw some contemporary photos a while back - they were enormous factories. SteveBaker (talk) 02:33, 19 August 2008 (UTC)
K-25 was the giant enrichment plant, in case anyone is curious. --Allen (talk) 04:09, 19 August 2008 (UTC)
At the time it was built, it is worth noting, it was the largest building in the world. So SteveBaker's overall point is correct. (Even if his statements about the actual acreage, and about it being "mostly buildings", are not quite true. Most of the Oak Ridge site was wilderness. It was intentionally quite isolated. See map. But really, we're splitting hairs here... it was one of many unprecedentedly large facilities developed during the Manhattan Project, at immense expense. They spent the equivalent of $5 billion modern USD per bomb. That's a lot.) -- (talk) 05:01, 19 August 2008 (UTC)

I am suspicious of the statement that 1/6 of U.S. electric generation was used to make the 2 atomic bombs, since at the time there was no transmission network capable of carrying that much electricity from the various generating stations to Oak Ridge, Paducah, Los Alamos, or wherever. Electricity was consumed much closer to the generating plant than this implies and there were no humongous generators (with outputs equal to the total generation of several average states) near the gaseous diffusion plants. Edison (talk) 05:12, 19 August 2008 (UTC)

I don't recall if it's 1/6th but it is something huge like that (this document says 1/7th). It's mostly going to Oak Ridge, not the other installations. It took a tremendous amount of electricity to operate Y-12 and K-25; ridiculous amounts. One of the main reasons they set it up at Oak Ridge was so they could be right next to the Tennessee Valley Authority, in order to suck up all that power. Power requirements were a major issue on the Manhattan Project (and continued to be well into the Cold War—in 1956, the three gaseous diffusion plants together consumed 12% of US electricity; more energy than was produced by the Hoover Dam, the Grand Coulee Dam, and TVA combined). -- (talk) 00:22, 20 August 2008 (UTC)

In very general terms, a uranium "gun type" bomb is very easy to build and I suspect most Wikipedians could build a functional bomb if they had access to the necessary amount of highly-enriched uranium. But we don't, so the world is safe from Wikipedian-built uranium gun-type bombs.

On the other hand, it's probably a lot easier to get plutonium, especially if you and your comrades aren't all too concerned about your personal safety during the extraction process (from nuclear reactor spent fuel). (And it would help to have "insiders" who arrange the nuclear reactor to breed more plutonium than it might otherwise produce.) But plutonium is only effective in an implosion-type bomb and to build one of those, you need to not only be as intelligent as your average Wikipedian but also a damned-fine machinist who can form precise bits of plutonium, high-explosives, neutron reflectors, and the like. A neutron-producing initiator is also pretty-much required for an implosion bomb, and so far, RadioShack doesn't carry those. So again, the world is mostly safe from Wikipedians bearing nuclear weapons.

Atlant (talk) 17:45, 19 August 2008 (UTC)

Plutonium is actually quite difficult to produce in bomb-ready form and bomb-ready quantities. It's "easier" than uranium enrichment in some ways but it's no walk in the park. It gets played down by people who imagine it just involves sticking uranium in some heavy water but if you look at the facilities they actually developed to deal with the plutonium problem (General Groves' Now It Can Be Told is quite good in this regard), you can see how difficult it actually was to produce the tiny amount of weapons-grade plutonium they did during the war. (And you have to be pretty concerned for your own safety when dealing with spent fuel; it's a short-term hazard as well as a long-term one). -- (talk) 00:22, 20 August 2008 (UTC)
I love the story (I think it's in one of Richard Feynman's books) about the room at the Manhattan Project that had a large shiny yellow sphere sitting on the floor, holding open the door. Feynman goes to pick it up so he can close the door and finds that it's heavy....insanely heavy...impossibly heavy! He asks about it and it turns out that it's a sphere of solid gold! They'd commissioned it from Fort Knox because they needed something of similar size and mass to the plutonium warhead for some test or other. When he asked why such a valuable thing should be used as a door stop, he was told that the actual plutonium sphere that it stood in for was tens of thousands of times more valuable - and to emphasise that, they decided that in comparison, the gold might just as well be used as a door-stop. SteveBaker (talk) 02:11, 20 August 2008 (UTC)
While we're on the subject of heavy stand-ins... have you ever held a substantial piece of uranium metal? It's amazingly heavy. It completely tricks your mind, which assumes, you know, that a little piece of black metal will weigh more or less as much as lead or steel or some other metal we usually handle, but man, is it ever heavier than it looks. It's unfortunately very hard to get larger than shard-sized pieces of uranium metal in the US (and no, I'm not talking about enriched, before some wiseguy asks—what I'd do to have a solid uranium doorstop!) -- (talk) 01:00, 21 August 2008 (UTC)
You don't. If it is at or exceeds "critical mass"/"critical size", you won't be here to use it as a door stop. (talk) 01:58, 23 August 2008 (UTC)
That's probably why 98... warned off the wise guys by saying "unenriched". How big is the critical mass of that? And if we're speaking about depleted uranium, I think the critical mass is (quasi-)infinitely large, as we prove by loading ammo bays with hundreds or even thousands of pounds of the stuff.
Atlant (talk) 13:10, 23 August 2008 (UTC)

Christmas suicide

Is it true that many more people commit suicide at Christmas, or is it an urban legend? Mr.K. (talk) 10:37, 18 August 2008 (UTC)

According to this TIME magazine article, it's the 11th of June. Fribbler (talk) 10:45, 18 August 2008 (UTC)
As with many questions of this nature, Snopes has the answer. -- Captain Disdain (talk) 10:46, 18 August 2008 (UTC)
The TIME magazine article referenced above is from the year 1932. I would venture to guess that those statistics are outdated and no longer very accurate. cheers, 10draftsdeep (talk) 14:06, 18 August 2008 (UTC)
It also probably doesn't help that they are from the middle of the Great Depression. Not so swell a time. -- (talk) 05:06, 19 August 2008 (UTC)
I doubt many more people in Japan, China or India commit suicide at Christmas... N.B. Our article on suicide actually mentions the holiday issue. It's usually helpful to read articles before asking questions. Nil Einne (talk) 10:34, 19 August 2008 (UTC)

Two wheels, spinning in different directions

Suppose that you put a bike-wheel on a stick and started spinning it. If you grasp the wheel by both hands, you'll feel the gyroscopic effect, making it harder to turn it around. What if you put another wheel on there, but spun it in the opposite direction at the same velocity? Would the gyroscopic effect double, or would it be cancelled out? My intuition tells me it would be doubled, since both wheels are independent of each other, and the effect of each is to stabilize the stick, but I've been told that that is incorrect, that they in fact cancel each other out. Which is correct? (talk) 11:46, 18 August 2008 (UTC)

A single gyroscope
Short answer: They cancel out.
Longer answer: In the picture (from Gyroscope) you have the wheel spinning about the red axis. If you twist the axle (say) clockwise around the green axis, then there is a resulting rotation about the blue axis. Now, take two of those, rotate one through 180 degrees so the two red axles are pointing in opposite directions - then the blue (output) axis of one wheel points up and the other points down and the two green (input) axes are lying on top of each other. So - when you twist the pair about the green axis, one wheel attempts to rotate about the upward pointing blue arrow and the other about the downward one. Since they both try to rotate (say) clockwise about those opposite arrows - the motion cancels out. SteveBaker (talk) 12:45, 18 August 2008 (UTC)
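The cancellation can also be seen from the angular momentum vectors directly. This is an editor's minimal sketch (the moment of inertia and spin rate are made-up numbers): each wheel carries L = I·ω along its spin axis, and counter-rotating wheels on a shared axle carry equal and opposite vectors, so the pair has zero net angular momentum.

```python
I = 0.15       # assumed moment of inertia of one bike wheel, kg*m^2
omega = 30.0   # assumed spin rate, rad/s

L1 = ( I * omega, 0.0, 0.0)   # first wheel, spin axis along +x
L2 = (-I * omega, 0.0, 0.0)   # second wheel, spun the opposite way
L_total = tuple(a + b for a, b in zip(L1, L2))
# L_total == (0.0, 0.0, 0.0): with no net angular momentum,
# twisting the axle produces no net gyroscopic reaction.
```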
Note that the canceling of such an effect by things rotating in opposite direction has very practical use in some helicopters (see Coaxial rotors). -- (talk) 13:00, 18 August 2008 (UTC)
I think that it's just the fact that it doesn't cause the helicopter to rotate counter to the direction of the rotor. I'd expect taking away the gyroscopic effect would make it harder to fly, not easier. — DanielLC 15:41, 18 August 2008 (UTC)
The reason for having counter-rotating coaxial rotors isn't because of gyroscopic effects - it's because you can avoid the need for a tail rotor. Stability in helicopters comes from the 'coning' of the rotor blades (where they don't spin in a perfectly flat plane - but rather follow the surface of an inverted cone) - this produces an effect like dihedral in a fixed wing aircraft - which confers much stability. SteveBaker (talk) 02:27, 19 August 2008 (UTC)
I was under the impression that you use the tail rotor to avoid gyroscopic effects? Aren't those the source of the torque that they counteract? And isn't the advantage of two rotors (among others) that when they rotate at opposite directions they counter the torque? Am I confused on the torque source? -- (talk) 04:53, 19 August 2008 (UTC)
No. Gyroscopic forces only come into play when the axle is twisted in a direction different from the one it's rotating in. If a helicopter is just hovering in still air, there are no twist forces on the main rotor disk - so there are no gyroscopic forces. However, the tail rotor is still working hard to stop the helicopter from spinning. The way to visualize what the tail rotor does is to imagine a toy helicopter with a motor spinning the main rotor. If you imagine the toy with the rotor spinning, and grab the rotor and hold it still - what would happen is that the body of the toy would start spinning in the opposite direction - right? Well, the air resistance (drag) on the rotor blades in a real helicopter has the same effect. As the drag tries to slow down the blades, there is a force on the fuselage trying to rotate it in the opposite direction. To avoid the helicopter spinning around faster and faster, you need some kind of force keeping it straight. That's what the tail rotor does. In effect it's a variable-pitch propeller that pushes on the tail to counter the tendency of the fuselage to rotate in the opposite direction to the rotors. By altering the pitch on the tail rotor you can increase or decrease that force and thereby steer the helicopter...but that's just a handy side-effect of its main function. This YouTube video and also this one show very graphically what happens when the tail rotor fails when the helicopter is just hovering...the helicopter starts to spin faster and faster.
There are gyroscopic forces at play in helicopter dynamics - when the helicopter rolls sideways or pitches forwards or backwards, there are gyroscopic forces because you're twisting the main mast at right angles to the rotation of the blades. When you pitch the helicopter forwards, it'll tend to roll to one side or the other - and when you roll, it'll tend to pitch forwards or backwards. But that force is comparable in size to the original roll or pitch movement - and it's easily countered with the cyclic pitch control. It helps that rotors are kept as light as possible - it's not a solid disk and it's very light compared to the mass of the helicopter itself.
SteveBaker (talk) 12:50, 19 August 2008 (UTC)

Mail sorting

How is address recognition done for postal mail? Many address labels are hand-written, often with appalling handwriting - and a misreading of, say, a house number, or a postal code could send a piece of mail to the wrong place. However, I hear of very few stories of misplaced mail - how do the postal services do it? Do they rely heavily on humans reading the address labels, or do they have spectacularly good OCR? In the latter case, how come "commercial" OCR is still far off being able to reliably read handwriting? — QuantumEleven 13:45, 18 August 2008 (UTC)

Is it really? I thought banks use it too, for reading transfer forms. That said, it's certainly easier to recognize postcodes and numbers (as long as you're sure of their position) than general text. Once the post reaches the destination village, there will be people reading the final address anyway. Also, I'd guess handwriting OCR simply doesn't sell to the general public. --Ayacop (talk) 14:27, 18 August 2008 (UTC)
Forms that are going to be read by OCR usually have a separate box for each character which makes the job much easier. Addressing an envelope is much more variable. --Tango (talk) 17:03, 18 August 2008 (UTC)
The postal service's business is getting mail to you efficiently. They will have invested very heavily in advanced OCR programs/processes, and will pay for the best-quality system they can (or rather the one that provides the best service at the lowest price). Software that businesses require to be correct in 99% (or whatever) of cases can command a huge price; compare that to small-office/home-use OCR, and it's a case of the company selling a lower-quality/less intensive version of its software for the home market, because selling its best product to home users would ruin its profit margin on business sales. The R&D costs of developing a new, improved OCR program are probably huge and need to be recouped somehow; since the software matters most to the likes of post offices and paper-intensive industries (e.g. banks with cheques), those customers can be charged a huge fee, but the relatively small home/small-business market cannot, so a watered-down version is sold to them. (talk) 14:55, 18 August 2008 (UTC)
Address OCR is helped by having a relatively small set of possible words. This makes it easier to guess the text than for general-purpose OCR. For instance if some letters in "Boston, Ma" are garbled, like "Bo**n, Ma" the only possible choices may be "Boston, Ma" and "Bolton, Ma". Then a street name in the rest of the address may exclude Bolton. A general-purpose OCR may have to consider words like "Born", "Boron", "Boomin", "Bow pen" as well as unknown words, making accurate OCR much harder. EverGreg (talk) 15:12, 18 August 2008 (UTC)
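EverGreg's dictionary-narrowing idea fits in a few lines of code. This is an editor's toy sketch (the city list is illustrative, and the garbled read is adjusted to "Bo**on" so the character counts line up): matching a partly unreadable string against a small vocabulary of valid names leaves only a couple of candidates, and another field such as the street name can then break the tie.

```python
import re

MA_CITIES = ["Boston", "Bolton", "Brockton", "Salem"]  # illustrative list, not real data

def candidates(garbled, vocabulary):
    """Match a read where '*' marks an unreadable character against valid names."""
    pattern = "^" + re.escape(garbled).replace(r"\*", ".") + "$"
    return [city for city in vocabulary if re.match(pattern, city)]

# "Bo**on" narrows four cities to two possibilities.
print(candidates("Bo**on", MA_CITIES))  # → ['Boston', 'Bolton']
```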
Is your mail really that accurate? I regularly have pieces lost and delayed for 3 weeks or more. Evidently a person in the mail sorting system thinks that Bermuda is in North Carolina and Southampton, Bermuda is Southampton, England. Consequently a significant portion of our mail is sent to the ends of the earth before being forwarded to us. I say it must be a person making this error because a computer would have been fixed by now. That 3 week delay is then compounded by the outrageously slow sorting in our domestic system, which can add another 2-3 weeks without batting an eyelid. A month and a half to deliver a postcard is pretty typical. Plasticup T/C 15:55, 18 August 2008 (UTC)
The country is supposed to go on its own line in all capital letters, after the rest of the address. So,
Southampton (postal code)

rather than

Southampton, Bermuda (postal code)

Have people writing you been doing this? --Random832 (contribs) 19:38, 18 August 2008 (UTC)

The initial sort for domestic mail is based solely on using OCR on the zipcode and is extremely fast (something like 12 envelopes per second). Looking for numbers in the last position on an address is considerably easier than reading arbitrary text, and should allow mail to reach the appropriate regional hub (or local hub if you include the full 5+4 zip). Illegible codes are handled by humans (actually by transmitting scans to computer screens rather than physical sorting). Yes, some mistakes get made which will get picked up at the regional hubs (when they try to process the rest of the address), and redirecting that mail can add a few extra days to transit. Once at the hub, OCR continues on the full text, but the list of valid addresses is considerably smaller which helps. Dragons flight (talk) 19:33, 18 August 2008 (UTC)
One thing that helps is that there is natural redundancy in the address. If you OCR both the street name AND the city name AND the zip/postal-code then you can check a bunch of things:
  • Does that street exist in that city?
  • Does that city use that postal code?
  • Does that street lie within that postal code?
  • Is there a building with that number on that street?
Even when something is misread - it's rather unlikely that it will be misread CONSISTENTLY. So a misread of the postcode is easily detectable if it doesn't match the city. Even if you misread both the city AND the postcode AND you do so in such a way that the incorrect postcode does indeed belong to the incorrect city - then there is still a good chance that the street name won't fall within the right postcode or indeed that the city won't even contain a street of that name. Providing the system can reject errors, humans can always pull in the slack and give you a reliable system.
SteveBaker (talk) 02:22, 19 August 2008 (UTC)
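SteveBaker's cross-checking list above can be sketched as code. This is an editor's illustration using tiny hypothetical lookup tables (the zip codes, cities, and streets here are placeholders, not real postal data): each field is validated against the others, so a single misread is very likely to trip at least one consistency test and get routed to a human.

```python
# Toy lookup tables — hypothetical, not real postal data.
CITY_OF_ZIP = {"02108": "Boston", "01740": "Bolton"}
STREETS_IN_ZIP = {"02108": {"Beacon St"}, "01740": {"Main St"}}

def consistent(street, city, zipcode):
    """Return True only if street, city, and zip all agree with each other."""
    return (CITY_OF_ZIP.get(zipcode) == city
            and street in STREETS_IN_ZIP.get(zipcode, set()))

# A misread zip fails the city check, so the error is detected
# even though each field on its own looks plausible.
```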
I might also point out that it's highly likely that the vast majority of mail the post office handles these days has typed addresses, often with bar codes. I get maybe two hand-written letters a month. I get maybe half a dozen printed items a day. The latter are obviously much easier to sort. -- (talk) 05:04, 19 August 2008 (UTC)

Strongest acid!

I know that Aqua Regia is the strongest of all acids. But among Nitric acid, Hydrochloric acid and Sulphuric acid, which one is the strongest? Anyone to answer is heartily thanked. (talk) 14:40, 18 August 2008 (UTC)

Aqua Regia is not the strongest of all acids. According to its article, fluoroantimonic acid is the strongest acid known (measured by the Hammett acidity function). Algebraist 14:43, 18 August 2008 (UTC)
According to Strong acid, Hydrochloric acid is the strongest of the ones you mention. Fribbler (talk) 14:46, 18 August 2008 (UTC)
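That ordering follows from the acids' dissociation constants. As an editor's sketch, here are approximate literature pKa values (lower pKa means a stronger Brønsted acid; the sulphuric acid figure is for its first dissociation, and exact values for such strong acids vary between sources):

```python
APPROX_PKA = {
    "hydrochloric acid": -6.3,  # approximate literature values;
    "sulphuric acid": -3.0,     # figures for very strong acids
    "nitric acid": -1.4,        # differ between sources
}

# Sort the three acids from strongest (lowest pKa) to weakest.
strongest_first = sorted(APPROX_PKA, key=APPROX_PKA.get)
# hydrochloric acid sorts first, matching the Strong acid article's answer.
```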
One problem with questions like this is that in casual conversation, it's easy to argue "Acid X is stronger than Acid Y because X not Y can dissolve Something". But acidity is only one of several issues involved in dissolving something. The great counterexample is that we store concentrated "strong acids" (sulfuric, nitric, hydrochloric) in glass bottles and they are stable there. But hydrofluoric acid, which is much less "acidic" than those others, dissolves glass very easily! DMacks (talk) 15:40, 18 August 2008 (UTC)
I have read that hypochlorous acid is the strongest acid in water, because it gives the largest quantity of H+ ions. —Preceding unsigned comment added by (talk) 15:52, 18 August 2008 (UTC)
Might be time to (re)read the hypochlorous acid article. DMacks (talk) 16:01, 18 August 2008 (UTC)
It's time to give a plug to the helium hydride ion, the strongest acid known. It can protonate any other substance. Graeme Bartlett (talk) 01:36, 19 August 2008 (UTC)

Related question, are there any acids that are strong enough to melt/dissolve things like they do in the movies? Like melting metal and things like that? ScienceApe (talk) 05:05, 19 August 2008 (UTC)

What would be the effect of such acids on the human body? Can acids really be used for torture like in EVE Online? Avnas Ishtaroth drop me a line 05:42, 19 August 2008 (UTC)
Hydrochloric acid can create an effect somewhat like burning of the skin, and could certainly be used for torture. Some of the acids mentioned above would be unsuitable: hydrofluoric acid is probably too lethal, while fluoroantimonic acid's tendency to react explosively with water would make it far too dangerous. Algebraist 08:03, 19 August 2008 (UTC)
Why would you bother to treat your detainees with costly acids when you can do it with water and get away with it? (talk) 16:21, 19 August 2008 (UTC)
So, does anyone have any pictures or videos of fluoroantimonic acid reacting with familiar household objects? :) --Kurt Shaped Box (talk) 16:41, 19 August 2008 (UTC)
That would be tricky, since you'd have to desiccate the air somehow to avoid the acid decomposing. Of course, it would decompose into hydrofluoric acid, which would then do horrible things to your household objects, and possibly (since the article mentions the decomposition reaction is explosive), yourself. Algebraist 16:53, 19 August 2008 (UTC)

science / about the reproduction of animals

name:rabbit young ones:? group:? adaptation:? —Preceding unsigned comment added by (talk) 15:33, 18 August 2008 (UTC)

homework? See Rabbit. - EronTalk 15:37, 18 August 2008 (UTC)
See also Young Ones, group, adaptation, reproduction, and how to pose a question.--Shantavira|feed me 08:20, 19 August 2008 (UTC)

Habitable planet/moon systems

There are many examples in sci-fi of habitable (and populated) planets with habitable (and often populated) moons. Based on our current understandings, how likely is it that such planet/moon systems exist (or can exist)? —Preceding unsigned comment added by (talk) 15:43, 18 August 2008 (UTC)

Yes. The first one to spring to mind is Talax in Star Trek: Voyager (Neelix is from one of its moons), although I suppose it's possible the moon was terraformed. If you allow terraforming, then there are loads of examples, but there are plenty without that (the number of times the phrases "M-class moon" and even "M-class asteroid" are mentioned in Star Trek is enormous!). In real life, however, I would expect it to be very unlikely. For a start, the mass of a planet capable of supporting human-like life has to be within a fairly narrow range (too heavy and you crush the people, too light and it can't hold an atmosphere), so you're probably going to have a double planet, rather than a planet/moon system. I'm not sure how likely it is for a double planet to be habitable - there would presumably be very extreme tides, but that wouldn't necessarily be a problem (there are examples of life on Earth that depend on tides to live). What kind of day/night pattern they would have, I don't know; the extreme tides would probably result in them being tidally locked, which would probably result in a very long day (similar in length to a month on Earth). That would tend to cause large temperature differences between night and day (the atmosphere would reduce them, though, so it wouldn't be like night and day on Earth's moon, where the temperature changes are on the order of hundreds of degrees). All this would probably result in very different life than we have on Earth, but it wouldn't rule out life of some form. --Tango (talk) 17:16, 18 August 2008 (UTC)
Tango, I'm not sure I follow you on day length in a tidally locked system. Wouldn't that depend on the angular momentum of the entire system? If the two equal bodies rotated about their mutual centre (barycentre?) every 24 hours (with the plane of rotation in the orbital plane), they would both have 24-hour days, wouldn't they? Franamax (talk) 22:36, 18 August 2008 (UTC)
Yes, but that would be a massive amount of angular momentum. I would expect the system to start with the same amount of angular momentum as other similar systems, and that will remain constant as the tidal forces adjust things towards a locked state. This is currently happening between the Earth and Moon (the Moon is already locked, the Earth will be in about 50 billion years [if it still exists by then, which is unlikely!]) and the predicted final period is about 47 days. (According to Orbit of the Moon.)--Tango (talk) 23:05, 18 August 2008 (UTC)
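The ~47-day figure Tango quotes can be roughly reproduced with a back-of-envelope conservation argument. This is an editor's sketch under simplifying assumptions (circular orbit, the Moon's own spin and solar tides neglected, round-number constants): hold the system's total angular momentum fixed at today's value and bisect for the separation at which Earth's spin rate equals the orbital rate.

```python
import math

G = 6.674e-11                       # gravitational constant, m^3 kg^-1 s^-2
M_E, M_M = 5.972e24, 7.348e22       # Earth and Moon masses, kg
I_E = 8.0e37                        # Earth's moment of inertia, kg*m^2
a0 = 3.844e8                        # current Earth-Moon separation, m
w_spin = 2 * math.pi / 86164        # Earth's sidereal spin rate, rad/s
mu = M_E * M_M / (M_E + M_M)        # reduced mass of the pair

def n(a):
    """Orbital angular rate at separation a (Kepler's third law)."""
    return math.sqrt(G * (M_E + M_M) / a ** 3)

# Total angular momentum today: Earth's spin plus the orbital term.
L_total = I_E * w_spin + mu * a0 ** 2 * n(a0)

def residual(a):
    """Zero when the system is fully locked (spin rate equals orbital rate)."""
    return (I_E + mu * a ** 2) * n(a) - L_total

lo, hi = a0, 10 * a0                # bracket, then bisect for the locked separation
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if residual(mid) < 0:
        lo = mid
    else:
        hi = mid

period_days = 2 * math.pi / n(0.5 * (lo + hi)) / 86400
# lands in the neighbourhood of 47 days, consistent with Orbit of the Moon
```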
Assuming their co-orbit lies more or less in the plane of the ecliptic (a reasonable assumption), then if they are tidally locked and of similar sizes, whenever the side of your planet that's perpetually closest to the other one is facing towards the sun (i.e. any time around midday), there would be a total eclipse of the sun. So the two sides of those planets facing each other would get vastly less sunlight than the sides facing away from each other. That (and the lack of tides) would result in some pretty freaky weather patterns. SteveBaker (talk) 02:09, 19 August 2008 (UTC)
The inclination would have to be really small to get total eclipses every day/month with any decent separation between the planets. Partial eclipses would be pretty common, though. --Tango (talk) 17:31, 19 August 2008 (UTC)
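To put a hedged number on "really small", here is a quick geometric sketch with purely illustrative values (an Earth-sized twin at the Moon's distance, seen from 1 AU):

```python
import math

# Compare the companion planet's angular radius with the Sun's; for totality
# at conjunction, the companion's disc must fully cover the Sun's, so the
# centres may be misaligned by at most the difference in angular radii.
# The co-orbit's inclination must stay below roughly that margin for an
# eclipse on every pass.

R_twin = 6.371e6   # companion's radius, m (Earth-sized, illustrative)
d = 3.844e8        # separation of the pair, m (Moon's distance, illustrative)
R_sun = 6.957e8    # solar radius, m
AU = 1.496e11      # distance to the Sun, m

theta_twin = math.degrees(math.asin(R_twin / d))  # companion's angular radius
theta_sun = math.degrees(math.asin(R_sun / AU))   # Sun's angular radius

max_inclination = theta_twin - theta_sun
print(theta_twin, theta_sun, max_inclination)  # margin is well under a degree
```

Even with the companion looming almost a full degree across (nearly four times the Sun's apparent size), the allowed misalignment is under a degree, so most randomly inclined co-orbits would give only occasional or partial eclipses.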
Extreme weather patterns, within a reasonable limit, may actually be a benefit to life. More activity implies more possibilities for complex organic molecules to gather, concentrate, and react. Evolution should also proceed faster because there's a greater pressure to adapt. --Bowlhover (talk) 07:36, 24 August 2008 (UTC)
Jupiter's moon, Europa, is now considered one of the prime candidates for extraterrestrial life in the solar system, due to the likelihood of liquid water under an icy surface and evidence of other useful chemicals. Of course, such life, if it exists, is almost certainly going to resemble bacteria more than it does humans. Given the number of extrasolar planets discovered so far that are several times larger than Jupiter (which is probably due more to selection bias than the actual distribution of planet sizes), it is not unfeasible that there may be moons the size of, say, Mars, which could be inhabited. Confusing Manifestation(Say hi!) 22:58, 18 August 2008 (UTC)
Yes, but Jupiter isn't habitable, so that doesn't fit the OP's requirements. --Tango (talk) 23:05, 18 August 2008 (UTC)
Not habitable by life like ours, perhaps, but I see no problem with floating bags of hydrogen living high in the atmosphere. StuRat (talk) 01:55, 19 August 2008 (UTC)
The amount of radiation emitted from Jupiter's interior is pretty daunting for life - also, there are strong VERTICAL currents in the atmosphere that would push any delicate floating gasbag down to crush-depth and then up to high altitudes where it would risk going "POP!". It's not impossible - but it's a really tough call. SteveBaker (talk) 02:09, 19 August 2008 (UTC)
Well, how likely is it that there are habitable worlds out there AT ALL? We really don't know. How many of them are inhabited (as well as habitable) is even more of a wild-assed-guess. But if we suppose that there are plenty of inhabited worlds - then the question as to whether significant numbers of them have inhabited moons is a more interesting question. There are some things we can say at the outset:
  • If the planet's orbit is in the 'habitable zone' of a suitable star - then its moon is also in the habitable zone...that's good!
  • If it is indeed a requirement for life that there be oceans with sizeable tides (this is a common theory - but it's not proven) - then the planet has to have a decent sized close-in moon in order to have life. Obviously, the moon will have tides too...big ones probably!
  • We know of two planet/moon systems (Earth/Luna and Pluto/Charon) within our solar system where the planet and the moon are of reasonably comparable sizes - to the point where they could justifiably be called "Binary planets" (remember - Pluto used to be called a "planet" - but it's much smaller than our moon - so we could reasonably call Earth/Moon a "binary planet"). In that case, both would at least have enough gravity to (in principle) sustain an atmosphere and perhaps an ocean. That's good too because it suggests that such "binary planets" are likely to be very common.
  • We believe that there is a strong chance that a large impact on one planet could send chunks flying off and arrive at another nearby planet. We also believe that primitive life (bacteria, viruses) could survive a journey on such a rock. So if life developed on one planet of a binary pair, it could certainly be transferred over to the other moon/planet if conditions were reasonable upon its arrival. This is also good for life developing on both objects more or less at the same time...and probably sharing much basic biology too. More good!
  • BUT: It seems that large moons such as Charon are "captured" relatively long after the formation of the solar system - and our own moon seems to have come about from an exceedingly violent collision of some other minor planet with the Earth. In neither scenario is it likely that both planet and moon will have the same composition. Our own moon has hardly any water and no atmosphere. Charon also has a totally different atmosphere and surface to Pluto. Hence if one object supports life - there appears to be absolutely no guarantee that the other will also have the right chemical make-up. But we're going on a sample of only two cases - so it's possible we just got unlucky in our Solar system. There is no easy way to know.
Conclusion: If life is commonplace then I don't think it's at all unreasonable for life to develop on a pair of objects such as a small planet with a large moon - or a true binary planet. But then maybe life isn't commonplace - and/or maybe pairs of objects with similar composition are somehow very unlikely - in which case, it would be exceedingly unlikely that you'd find this kind of thing going on. We just don't know...but we soon will.
Science fiction writers like this scenario because it puts two civilisations sufficiently close together that they can reasonably interact in ways that make for a good plot. Possibly the most extreme example of this (and a really great SciFi book if you haven't read it) is "Rocheworld" (by Robert Forward) where a pair of planets get so close that their surfaces have distorted into teardrop shapes and nearly touch each other.
SteveBaker (talk) 01:57, 19 August 2008 (UTC)
Hi. I once read a list of moons where life would be the most likely, likely meaning probably less than 0.001% chance. I've seen Mars's underground and Jupiter's atmosphere mentioned as likely candidates, and Saturn's moon Enceladus and even Neptune's moon Triton were on that list as well, in our own solar system, but it's more likely for life to exist in other solar systems, although we haven't really observed a perfectly ideal exoplanet just yet. ~AH1(TCU) 16:38, 19 August 2008 (UTC)
The fact that we haven't observed one doesn't mean there couldn't be large numbers of them - our techniques for observing exoplanets can't (or, at least couldn't until very recently) detect such small planets even if they were there. --Tango (talk) 17:31, 19 August 2008 (UTC)

light chains of immunoglobulin[edit]

why two types of light chains are required by the antibody —Preceding unsigned comment added by (talk) 15:48, 18 August 2008 (UTC)

See Immunoglobulin#Light chain. --JWSchmidt (talk) 00:59, 19 August 2008 (UTC)

Aero engine lubrication[edit]

trying to find out the lubricant used on Merlin aero engines during WW2-- (talk) 16:39, 18 August 2008 (UTC)

Our article Rolls-Royce Merlin says that early models used pure ethylene glycol, later models used a water/glycol mix. DuncanHill (talk) 16:52, 18 August 2008 (UTC)
Fraid not: Ethylene glycol coolant was circulated by a pump through this passage to carry o
Coolant is not lubricant-- (talk) 16:58, 18 August 2008 (UTC)
Sorry, misread your question. DuncanHill (talk) 17:09, 18 August 2008 (UTC)
There are some still flying. 'Phone the BBMF? Philip Trueman (talk) 17:15, 18 August 2008 (UTC)
According to "History of Aircraft Lubricants" on Google Books - it was a mineral oil with some "secret additives". The exact nature of the additive is unknown - except that it contains "anti-scuffing agents". Speculation is that it was "tri-cresyl-phosphate"...there is also discussion about it being an especially high viscosity oil. This seems to be a really complicated matter - with the composition of the lubricant changing over the life of each individual engine - and over the years as supercharger pressures increased. That book seems to be the bible for this - if you can get access to a copy, you'll know all you ever wanted to know - and probably a lot more!
     History of Aircraft Lubricants
     By the Society of Automotive Engineers
     Published by SAE, 1997
     ISBN 0768000009, 9780768000009
     164 pages
SteveBaker (talk) 01:25, 19 August 2008 (UTC)

Red meat[edit]

What causes "red" meat to be red? wsc —Preceding unsigned comment added by (talk) 17:21, 18 August 2008 (UTC)

Mostly myoglobin. -- Coneslayer (talk) 17:30, 18 August 2008 (UTC)
See also this previous discussion. Dostioffski (talk) 21:27, 18 August 2008 (UTC)


The mother article says that for most infants, the first word sounds like 'ma'; hence, most if not all languages have similar-sounding words for mother. Is there a reason why most infants say 'mama' first? Is it because 'ma' is easy for infants to pronounce?

If so, I wonder if infants that say much more difficult words first end up having a higher IQ or something! Coolotter88 (talk) 20:06, 18 August 2008 (UTC)

Interesting. While I know the stereotype is 'mama', I'm pretty sure I was reading not long ago that the most common first 'word', as it is the easiest sound, was 'papa' or 'dada'. Perhaps people 'count' 'mama' first? Hmmm, must search. (talk) 20:58, 18 August 2008 (UTC) Ignore that. Read this article and be enlightened! The whole thing is good, but cut to section 6 if you want to get straight to your answer. Or read Mama and papa :) (talk) 21:58, 18 August 2008 (UTC)
I believe infants usually start "speaking" by making repeated consonant-vowel combinations. Few of those would be recognised as words in most languages, so odds are good that "mama" will be one of the first "words" spoken, since there aren't many other options. How many other such words are there? "papa", "dada", "baba" (could be interpreted as "baby" by an optimistic parent) are all I can think of off the top of my head. Remember, it needs to be a word a parent could realistically expect their child to say, otherwise they'll accept it for the coincidence it really is. --Tango (talk) 21:13, 18 August 2008 (UTC)
FYI: My son learned "dada" as his first word - he didn't learn "mama" until he already had a vocabulary of perhaps 100 or more words (much to the annoyance of my wife!!) - but he used it to name all adult humans - so I think it was just a simple misunderstanding of the meaning of the word. SteveBaker (talk) 00:58, 19 August 2008 (UTC)


I was looking at this article and thinking it could do with some improvement. I could do with some help in finding keywords and directions to look in.

For one thing, it refers to wells being warmer in winter than summer. I suspect that this is down to groundwater being about the same temperature when the outside temperature is colder, and could also lead in to a mention and perhaps discussion of how humans perceive things largely through contrast. Hence lukewarm water feeling hot when you're acclimatised to cold and cold when you're acclimatised to hot. This seems relevant, but I can't think of the name of this phenomenon or similar things. Also, while I'm pretty sure this is what's going on with the well water, I'm not as sure as I'd like to be, nor do I have a source.

If anyone could offer links, sources, useful words, ideas, etc. that would be nice :) I'd ask for contributions to the article itself, but I'm sure you're all terribly busy... (talk) 20:55, 18 August 2008 (UTC)

Well, that article was apparently written in 1728 AD [1], so it could probably use an update :) Reading the original, they seem to actually be saying that well water is not warmer in winter, among other blazing refutations of the whole idea.
I'm not sure that you can usefully introduce the idea of temperature contrast as a retroactive explanation for the extent to which ancient Greek philosophers took the idea. It seems like more of a yin-yang approach than one of perception using the senses. Franamax (talk) 22:21, 18 August 2008 (UTC)
The original idea is clearly wrong. But you can see why they may have thought that. If you are outside in the snow, freezing your ass off - 10 degree centigrade well water might well seem warm - but if it's in the height of Greek summer, then you're at 25 degrees C and 20 degree well water seems cool. Then they saw well water "smoking" in the depths of winter (which is because cold air tends to be very dry - so evaporation of warmer water is easier than in hot weather where the humidity is higher). They put two and two together and made five. The Greeks were notorious for never doing actual experiments to back up their theories - so this should come as no surprise.
Certainly humans perceive most things as differences rather than absolute values - so this phenomenon would seem to them to be pretty commonplace. This perception by 'difference' rather than 'absolute' appears in many (if not all) of our senses - Color constancy is the visual version of it. But you can also "get used to the smell" of your pet dog or whatever in your home - and cease to notice it. Visitors to your house may be all too aware of it. It's true for the sense of touch too - when you put on an item of clothing over bare skin, you initially feel it - but after you've been wearing it a while, you stop noticing. The underlying basis being that it's useless to have your senses continually sending your conscious mind the same data over and over again - it's more efficient to only tell you when something changes.
SteveBaker (talk) 00:53, 19 August 2008 (UTC)
This book calls it sensory adaptation. It seems to have been a significant concept in the history of philosophy, when the idealists were battling it out with the realists. --Heron (talk) 18:41, 19 August 2008 (UTC)

Does human perception of time speed up with age?[edit]

Just wondering if an adult perceives time as going faster than a child does. Three hours might seem like the longest, biggest night in the world to a child, but to an adult it's just a normal event. That kind of thing. Also, is it true that in those terms half your life is over at age 20?--Quadrilateral Tertiary (talk) 23:00, 18 August 2008 (UTC)

I've always personally believed that time perception varies depending on how quickly you wish the time to pass, with quicker = slower. --Kurt Shaped Box (talk) 01:00, 19 August 2008 (UTC)
Time perception is pretty complicated. There are a lot of factors involved. Is age part of it? I wouldn't doubt it at all that children get impatient much quicker than adults (I recall when I was 6 or so thinking that an hour was an unbelievably long amount of time—that if I had to wait an hour for something I might as well just give up; now hours fly by, I hardly notice). But I doubt very much that it is a linear function, even one that changes much once you become "an adult" of some form. I don't think old people perceive time as flying by on a day to day level (even if the years have "flown by"). -- (talk) 03:17, 19 August 2008 (UTC)
My grandmother, a generally trustworthy observer, insisted that time passed faster as one aged, and that a summer seemed like a long time when you were 6, but went by in a twinkle when you were 76; that babies grew to adults very fast when you were old. I have found no basis for disagreement with her observation. As one ages, each year seems to slip away faster. This is difficult to test in a meaningful formal experiment. Edison (talk) 05:07, 19 August 2008 (UTC)
There are plenty of people on the Internet asking why time seems to pass faster with age, so I would assume adults do perceive time intervals as being shorter when looking back. The reason, on the other hand, is not so clear.
This New Scientist article suggests "some elderly people feel that the days seem to drag, but that the years flash by" and continues that this may be because few events take place in an elderly person's life. Thus, elders believe a given period of time is long until they examine it retrospectively and note that not much happened. The article also says that other factors such as memory and IQ may affect time perception.
The same logic should extend to adults. Tasks that one has long been accustomed to seem like an inconsequential part of life, not memorable moments, and are hence not well-remembered. A 7-year-old remembers only a few previous winters and might find the first snowstorm of the year interesting, but a 40-year-old would likely not take much note.
The top answer given at Answerbag is also convincing: as a person ages, every unit of time that passes is a smaller portion of his life. When looking back, younger people find that a greater portion of their experiences are from that unit of time and conclude it must have lasted a long time. --Bowlhover (talk) 07:46, 19 August 2008 (UTC)
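That proportional idea has a neat closed form under one purely illustrative assumption: if each moment feels shorter in proportion to the age already lived, perceived duration integrates to a logarithm, and the subjective midpoint of life is the geometric mean of its endpoints. A minimal sketch (the starting age of 5 and lifespan of 80 are arbitrary choices, not claims):

```python
import math

# "Proportional" model of time perception: weight each instant by 1/age,
# so the perceived length of the interval [a, b] is the integral of
# 1/t dt from a to b, which equals ln(b/a).

def subjective_length(start_age, end_age):
    """Perceived duration between two ages under the 1/age weighting."""
    return math.log(end_age / start_age)

def subjective_midpoint(first_age, last_age):
    """Age at which half the subjective lifespan has passed: the geometric mean."""
    return math.sqrt(first_age * last_age)

# If continuous memory starts around age 5 and life lasts 80 years, the
# subjective halfway point lands at age 20 - the figure in the question:
print(subjective_midpoint(5, 80))  # 20.0
```

Under this model the years 5-20 and 20-80 feel equally long (both span a factor of 4 in age), which is one way to make the "half your life is over at 20" claim precise, though of course the model itself is conjecture.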
Well, my own experience is definitely the same as your grandmother's -- not that I'm that old, but the summers of my childhood and this summer have passed at a completely different rate, it seems, and things only seem to be speeding up. I've got a birthday coming up, and it seems like I just had one. At this rate, I've subjectively lived through the vast majority of my life already -- and that's a cheerful thought, isn't it? This is speculation, of course, but I think a lot of it has to do with the fact that when you encounter something new, it kind of stops you in your tracks and forces you to take account of it, you know? And when you're a kid, everything is new. When you get to be in your thirties, though, most of it is old hat; it's not really making much of an impression. It's probably as much a function of memory as it is one of time perception. -- Captain Disdain (talk) 07:13, 19 August 2008 (UTC)
I don't think you can tell by your own experience. Is it that your perception of time has changed as you got older - or is it that your memory of your perception of time has changed as you got older? Our brains don't have enough storage capacity to remember every detail of everything - and we selectively forget unimportant detail and compress old memories into shorter and blurrier incidents. It's not unreasonable to assume that this deletion of old data would be accompanied by a mental note relating to how busy you were.
Suppose (to pick a trivial example) that your memory were to operate like a computer and maintained a count of the number of times when you were busy doing something that interests you in every month of your life. As memory deletes details of precisely what you were doing, all you have left is this "busy counter". Kids play all the time - so your early memories would indicate that you were REALLY busy - although (of course) you wouldn't remember every tree you climbed, every rope you jumped, etc. Old people do less with their time - so more recent memories would have much smaller "busy counter" values. This might consciously translate to: "A lot of things happened each summer when I was a kid" - which could easily be perceived as "Back then, summers went on forever" - simply because more things happened.
I'm not saying that's the exact mechanism (we don't know) - but it's perfectly possible that AT THE TIME you perceived time identically when you were young and now - and it's only the MEMORY of that which has changed over the years as your old memories were compacted, sieved and slotted into place to make room for new ones.
SteveBaker (talk) 12:26, 19 August 2008 (UTC)
Well, sure. Though I should probably point out that the difference between the summers when I was in the first grade and when I was in the sixth grade felt pretty major even when I was in the sixth grade, and the summers when I was in the sixth grade seemed much longer than, say, this past summer. But of course I couldn't say whether that has more to do with my memory or my perception of time at the time these events took place. On a hunch, I'd say it's probably a combination of the two. It's an interesting phenomenon, to be sure. -- Captain Disdain (talk) 12:34, 19 August 2008 (UTC)
I think one way to test time perception would be to ask movie-goers of different ages: "how long is it since you saw (say) Star Wars - Revenge of the Sith?". (and also ask the same question for several other movies.)
In fact, Revenge of the Sith came out just over three years ago. My theory is that older people are more likely than younger ones to underestimate how long ago they saw a film. Perhaps, in the case of the Sith film, they might answer "it was a couple of years".
There are complications to the experiment. One is that some people will know some answers very precisely from memory landmarks. "I saw that on my 12th birthday so it was 3 years ago." This type of answer does not rely on a general sense of "how long ago it seemed".
I wonder if this experiment or a similar one has been done. Wanderer57 (talk) 16:31, 19 August 2008 (UTC)