Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 92.224.205.128 (talk) at 19:25, 9 December 2009 (→ Zombie Plan). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the science section of the Wikipedia reference desk.

Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

December 5

causes of internal ache and choking along the throat

Removed request for medical advice. The only advice Wikipedia can give is to call a doctor and have a face-to-face meeting with him/her. Only a medical professional can give responsible medical advice.

Pyruvic Acid vs. Pyruvate as end product of Glycolysis

Most sources I've seen (including Wikipedia) say that pyruvate is the end product of glycolysis, but I was reviewing some biology in the Schaum's Outlines and it said pyruvic acid. According to Wikipedia the formula for pyruvic acid is C(3)H(4)O(3) (I don't know how to do subscripts) and pyruvate is C(3)H(3)O(3), which makes sense given that pyruvate is the ionized form. The glycolysis article says pyruvate is the end product, but if you look at the picture (glycolysis overview) the end product has 4 hydrogens, which would make it pyruvic acid, not pyruvate. This makes more sense because if fermentation occurs after glycolysis, the end product, supposedly pyruvate, is reduced twice by the two NADH to make a 6-hydrogen compound, which doesn't make sense for pyruvate: pyruvate reduced twice would only have five hydrogens. So my question is: is the end product of glycolysis pyruvate or pyruvic acid? Thanks, 76.95.117.123 (talk) 02:19, 5 December 2009 (UTC)[reply]

They are the same thing. Pyruvic acid is C3H4O3, and pyruvate is the anion C3H3O3-. If you read the article on pyruvic acid, the second sentence of the lead tells you just that. Pyruvate is the form used by the Citric Acid Cycle. ~ Amory (utc) 02:50, 5 December 2009 (UTC)[reply]
They are the same thing, and which form prevails basically depends on the pH in the cell. In this case it's most likely pyruvate - any pyruvic acid generated would have dissociated into pyruvate and a proton anyway. Tim Song (talk) 02:59, 5 December 2009 (UTC)[reply]
So I could say either as the answer to the question? But if the pyruvic acid dissociated into pyruvate then fermentation wouldn't produce a 6 H compound. The only reason I'm curious is that I do Science Bowl and the question sometimes comes up. Which answer would be more correct? I kinda said that pyruvate is the ionized form of pyruvic acid in my question btw... 66.133.196.152 (talk) 03:09, 5 December 2009 (UTC)[reply]
Say pyruvate because that is the form it will be in given the conditions. Also, pyruvate and H+ are among the reactants in anaerobic respiration. I saw you said that about the ions, apologies if you felt slighted. I just wanted to set up the proper subtext and background. ~ Amory (utc) 03:25, 5 December 2009 (UTC)[reply]
If they say it's wrong on those grounds you can always appeal. Pyruvate may be temporarily protonated in an enzyme at the active site, but usually what happens is that the COOH group has to be deprotonated. This gives the COO- system the electron it needs to expel the weak carbonyl-carbonyl bond and cleave as carbon dioxide. It can't cleave if it's protonated. ;-) The two-carbon molecule remaining (acetaldehyde) is further oxidised and attacked by the sulfur thiol of CoA to become acetyl CoA. John Riemann Soong (talk) 03:28, 5 December 2009 (UTC)[reply]


You can think of it like this: The proton of pyruvic acid helps supply protons to the proton pump in the electron transport chain. Note that NADH (reduced form of NAD+) carries 2 electrons but only one proton. The other "lost" proton has to come from deprotonating pyruvic acid. ;-) (As you might know, carboxylate is a weak base so it's not very good at taking back the lost proton.)

Decarboxylation (loss of CO2) donates a pair of energetic electrons (to NAD+) that will be used for the electron transport chain. The thermodynamic stability of CO2 helps drive the donation.

Acetyl-CoA is a useful anabolic building block (if you want to build sugars or fatty acids), but if you want to oxidise it all the way (use all its energetic electrons), it's kinda hard to oxidise and pull electrons (via evolving CO2) out of a molecule down to nothingness (converting acetaldehyde to formaldehyde and then formic acid would be a pretty bad idea), so it goes through the citric acid cycle instead. John Riemann Soong (talk) 03:57, 5 December 2009 (UTC)[reply]

Ok thanks John Riemann Soong! The first explanation you gave helped me a lot. And if I challenge, I'll say the wiki ref desk told me :-)

66.133.196.152 (talk) 04:11, 5 December 2009 (UTC)[reply]
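(A numeric footnote on the pH point above, as a back-of-envelope sketch assuming a pKa of about 2.5 for pyruvic acid and a cytosolic pH of about 7.2. By the Henderson–Hasselbalch equation:

    [pyruvate]/[pyruvic acid] = 10^(pH − pKa) = 10^(7.2 − 2.5) ≈ 5 × 10^4

so only about one molecule in fifty thousand is in the protonated, pyruvic-acid form under cellular conditions.)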

heat modelling

This code is the outcome of modelling a spot-welding process; I have arrived at equation (1). We take the initial heat (due to atmospheric conditions) at each point as unity; this is coded in the initialisation section. Now dq is sent to ode45 for solving over a prescribed time domain, with initial condition y0 = 0.

function dq = heat(t,q)
p = 5;   % number of variables
%---------------------------
% generation of constant matrices
%---------------------------
A = [5 4 3 2 1]';   % arbitrarily chosen constants A, B, C, D
B = [5 4 3 2 1]';
C = [5 4 3 2 1]';
D = [5 4 3 2 1]';
%---------------------------
dq = zeros(p,1);
%----------- initialisation -----------
for i = 1:p
    q(i) = 1;
end
%---------------------------
dq(1) = A(1)*q(2) + B(1)*q(1) + D(1);
for i = 2:p-1
    dq(i) = A(i)*q(i+1) + B(i)*q(i) + C(i)*q(i-1) + D(i);   % equation (1)
end

Here 'i' represents the weld number. The code considers the contribution from the point before and the point after point 'i', and the contribution of heat added at the next point. Now my problem is that I want to optimise this process, i.e. minimize dq, since I need the welding to cool quickly. So what parameter should I consider for optimisation, and what method should I adopt? SCI-hunter (talk) —Preceding undated comment added 03:01, 5 December 2009 (UTC).[reply]

See this duplicate inquiry at WP:RD/Math. Take your pick, but not both. You are in a little maze of twisty passages. hydnjo (talk) 03:49, 5 December 2009 (UTC)[reply]
I have formatted the code for readability. Nimur (talk) 04:45, 5 December 2009 (UTC) [reply]
The code seems rather nonsensical - the first 'for' loop sets all members of q(1..p) to 1. So surely the second loop sets every element of dq(n) to A(n)+B(n)+C(n)+D(n) ? Why so much complication? You don't say what language this is written in - but what C-like programming language has arrays that start from index 1? This suggests that whatever this code is intended to do...it's not doing it. SteveBaker (talk) 16:16, 5 December 2009 (UTC)[reply]
It was subtly implied that it was Matlab code, between the syntax and the reference to ode45. The OP might want to read our guide on asking for help with code. Nimur (talk) 19:11, 5 December 2009 (UTC)[reply]

It's written in Matlab and it's approximately functioning correctly. Please help now. 220.225.98.251 (talk) —Preceding undated comment added 16:28, 5 December 2009 (UTC).[reply]

What do you mean by "minimize dq"? dq is a function (vector in your code).
More broadly, try the following steps:
  1. Formulate a clear mathematical statement of the physical problem you are trying to solve.
  2. Derive (or pick) a mathematical solution/algorithm for the problem (or its discretized/approximate version).
  3. Write Matlab code for the algorithm. Test and debug it.
Right now, you seem to be at step 3, and it is not clear (at least to us) if you have followed the previous steps. As such, your code does what it does, but we cannot determine if it actually implements the algorithm derived in step 2, and if the algorithm solves the problem in step 1 (remember GIGO).
PS: You should consult fellow students for tips on better Matlab coding; your current code is pretty poor. For example, the function takes in inputs t and q, and then doesn't use either. Instead it simply defines q. Also your first loop can be replaced by q = ones(p,1). Note that this review is intended to guide, not criticize. Hope it helps. Abecedare (talk) 16:52, 5 December 2009 (UTC)[reply]
Abecedare, dq is not a function. That is the syntax for declaring a return value. The function, heat(t,q), returns a vector whose local name is dq. This is standard MATLAB code style. What is unclear is why the code overwrites q, which is an input; and why it does that overwrite in such an inefficient and convoluted way. I suspect the OP used "pseudocode" or dummy assignments instead of writing a comment or actually implementing the correct physics. If the OP reviews Abecedare's and others' suggestions, and our software help guidelines, it will greatly help us answer the problem. I'm also going to posit that the simulated annealing article may be conceptually helpful, as well as the heat equation article. Nimur (talk) 19:40, 5 December 2009 (UTC)[reply]
I meant to indicate that dq is not scalar valued, so it doesn't make sense to try and minimize it. My language was ambiguous though; thanks for pointing it out. Abecedare (talk) 19:54, 5 December 2009 (UTC)[reply]
I guess if not otherwise specified, minimizing a vector implies minimizing its L2 norm. Nimur (talk) 22:30, 5 December 2009 (UTC)[reply]
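(For later readers: below is a minimal sketch of what a cleaned-up version of this function might look like, assuming the initialisation loop was only dummy code and that q is really meant to be the state vector supplied by ode45. The tridiagonal structure of equation (1) is kept; the treatment of the last row dq(p) is an assumption, since the original never assigns it.)

function dq = heat(t, q)       % t is unused, but ode45 requires this signature
p = numel(q);                  % number of variables
% Arbitrarily chosen constants, as in the original
A = [5 4 3 2 1]';              % coefficients of q(i+1)
B = [5 4 3 2 1]';              % coefficients of q(i)
C = [5 4 3 2 1]';              % coefficients of q(i-1)
D = [5 4 3 2 1]';              % constant heat input
% Equation (1) written as one tridiagonal linear system: dq = M*q + D
M = diag(B) + diag(A(1:p-1), 1) + diag(C(2:p), -1);
dq = M*q + D;

It could then be called with, e.g., [t, y] = ode45(@heat, [0 10], ones(5,1)); where ones(5,1) encodes the unit initial heat described above.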

Bohr Magneton Number for Copper Sulphate.

Resolved

I'm currently trying to calculate the dimensionless Bohr magneton number p_eff for CuSO4·5H2O. The formulae I have are:

    χ = N μ0 μ² / (3 k_B T)

and

    μ = p_eff μ_B

where all the symbols have their usual meanings and values. From this, p_eff should be:

    p_eff = (1/μ_B) √( 3 k_B T χ / (N μ0) )

However, the formula I have been given for the dimensionless Bohr magneton number is:

    p_eff = √( 3 k_B T χ / (N μ0 μ_B²) )

where the fundamental constant of magnetism of an electron is squared in the denominator. How can this be? Thanks for any help 188.221.55.165 (talk) 13:32, 5 December 2009 (UTC)[reply]

When you substituted, you forgot about that square root. Outside the square root you use μ_B, but if you move it inside the parentheses you have to square it. Graeme Bartlett (talk) 21:30, 5 December 2009 (UTC)[reply]
Oh, yeah....simple....thanks Alaphent (talk) 08:26, 6 December 2009 (UTC)[reply]
And I thought I would have to understand Bohr Magnet(r)on number, but actually only algebra was needed! Graeme Bartlett (talk) 05:58, 7 December 2009 (UTC)[reply]
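(For the record, the step described above is pure algebra: for any positive X,

    (1/μ_B) √X = √(X/μ_B²)

since bringing a factor inside a square root squares it, so the two expressions for p_eff are identical.)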

Does rice water have a chemical reaction with a mineral-water plastic bottle?

I collected some rice water after washing rice, to use for watering plants.
Because it kept raining these past few days, I stored the rice water in plastic bottles so I could water the plants later.
But I found that after about two weeks, the plastic bottles had hardened and bloated.
The base of the bottle also bloated, until it could hardly stand on a flat surface.
I'm wondering: is there any chemical reaction between rice water and a mineral-water plastic bottle?
I'm curious and wish to know more about this condition, and also the reason why the bottle became like this.
Can anyone help to find out the reason?


These are the problem statements I wish to know about:
1. What are the factors causing the bottle to bloat and harden?
2. What are the effects (positive and negative)?
3. Does the chemical reaction bring harm to humans?
4. Does it harm the plants if I water them with the rice water?


These are the conditions under which I kept the rice water for about 2 weeks:
1. Dates I kept the rice water in the plastic bottles: 21/11/2009 to 5/12/2009 (I discovered the condition on 5/12/2009)
2. Temperature: about 27 to 33 degrees Celsius (sometimes in air conditioning at 24 degrees Celsius)
3. Place I kept it: in a cupboard in my room
4. Not exposed to sunlight.


And these are a few pictures of the bottle's condition:


--perfection is not intact.. (talk) 19:26, 5 December 2009 (UTC)[reply]



We can't look at the pictures unless you put them somewhere that we all have access to - upload to Commons or somewhere equally accessible. Mikenorton (talk) 17:14, 5 December 2009 (UTC)[reply]
Your links are inaccessible as well. bibliomaniac15 18:00, 5 December 2009 (UTC)[reply]
Thanks, that's much better. Mikenorton (talk) 18:51, 5 December 2009 (UTC)[reply]
I'm sorry about the previous condition. I'm still a newbie on Wikipedia, which is why I kept searching for the instructions and ways to fix those problems. And thank you for your help in guiding me. Just now, I was still finding out how to reply.--perfection is not intact.. (talk) 19:26, 5 December 2009 (UTC)[reply]
Perhaps you are inadvertently making rice wine? 75.41.110.200 (talk) 18:06, 5 December 2009 (UTC)[reply]
I have to agree that fermentation of rice starch in the water, creating carbon dioxide, is a likely cause of this. Mikenorton (talk) 18:58, 5 December 2009 (UTC)[reply]
But I just kept the rice water after washing the rice. At first my motive was just to water the plants later, because the past few days were rainy. Only yesterday did I find that the shape of the bottle had changed and that it had hardened. Err.. does it mean that I accidentally made rice wine, which produced carbon dioxide, and the carbon dioxide hardened the bottle?--perfection is not intact.. (talk) 19:26, 5 December 2009 (UTC)[reply]
Two weeks at that sort of temperature is certainly enough to ferment the starch. The bottle should have a bit of pressure inside; when you open the top it will outgas. There will probably be a nasty smell associated with it, and your result is probably not drinkable. But plants may be able to tolerate it. Graeme Bartlett (talk) 21:25, 5 December 2009 (UTC)[reply]
Yes!! I just tried opening the cap and it released gas.. so is it carbon dioxide being released? I don't have lime water at home, so I'm unable to test it. And there is a nasty smell too! But after I opened the cap and the gas was released, the bottle went back to its softness. So, can I conclude that rice water under those conditions of pressure and temperature will ferment? Is it a kind of anaerobic respiration reaction? But there is no yeast inside. Besides that, can bacteria inside the bottles replace the yeast? --perfection is not intact.. 06:28, 6 December 2009 (UTC)
I doubt that the bottle itself has changed its hardness. It's just that the contents are under such high pressure that they're pushing out against the walls of the bottle really hard.
Incidentally, it's only a matter of time before those bottles burst and spray that rice water all over the place. It might not be a good idea to store them so close to a bunch of books. APL (talk) 21:52, 5 December 2009 (UTC)[reply]
Thanks for reminding me. =) I have placed the bottle in a bucket. I had a crazy idea.. I wonder, if I keep the rice water in that bottle, how long it would take to burst, or whether it will burst at all. It may test the "toughness" of the bottle too.. haha xD --perfection is not intact.. 06:28, 6 December 2009 (UTC)
The yeast naturally arrives from the air. 75.41.110.200 (talk) 18:15, 6 December 2009 (UTC)[reply]
Indeed it does: in Belgium, Lambic beers are fermented by allowing wild yeasts (and some bacteria) to drift in and 'infect' the wort, rather than by adding cultivated yeasts as in more conventional brewing. On a similar note, I find that if I partly consume a carton of pure orange juice, but then leave it in my refrigerator for a couple of weeks, it begins to ferment, adding a not-unpleasant tang to the taste. 87.81.230.195 (talk) 00:50, 7 December 2009 (UTC)[reply]
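(For reference, the overall reaction behind the gas build-up, assuming the ethanol fermentation of rice-starch sugars by wild yeast suggested above, is:

    C6H12O6 → 2 C2H5OH + 2 CO2

so every glucose molecule fermented releases two molecules of carbon dioxide, which has nowhere to go in a sealed bottle.)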

Eye water

What is the substance composed of that wets and lubricates the human eye? Mac Davis (talk) 16:26, 5 December 2009 (UTC)[reply]

See tears. The standard "wetness" is referred to as "basal tears" and, according to the article, it contains water, mucin, lipids, lysozyme, lactoferrin, lipocalin, lacritin, immunoglobulins, glucose, urea, sodium, and potassium. Matt Deres (talk) 16:38, 5 December 2009 (UTC)[reply]
It is also called lachrymal fluid. Googlemeister (talk) 15:54, 7 December 2009 (UTC)[reply]

classical music and emotions

I find it very strange which songs trigger strong emotions in myself — e.g., I get flushing waves of "tingles" whenever I hear Pachelbel's Canon, even though I can't recall having any strong memories associated with the song. Bits of Wagner hit me similarly. I would generalize that it is probably only classical music that affects me in this particular way (the waves of "tingles," whatever that is), but I'm not a particularly big fan of classical at all (and haven't spent long amounts of time listening to it or playing it or anything along those lines), and generally do not think of myself as a terribly sentimental person (nor someone who is unusually appreciative of or interested in music). What causes this? Is it just some sort of long-lost association to music playing in stores around Christmastime when I was a child? Some property of this type of music itself—mathematical "problems" being proposed and solved? Just a sign of how complicated and weird the human brain is? I know there has been a lot written and researched on music and the brain, but I'd love a summary, if someone out there has thought about it much. --Mr.98 (talk) 16:45, 5 December 2009 (UTC)[reply]

Anything in the music psychology article give you any clues? Between cultural conditioning and a biological predisposition to perceive rhythm, tonal scales, and harmonics, music can inspire a strong psychological response. It's pretty much impossible to pinpoint what exactly triggers this response for you, but a lot of research has been done on music and psychology. Nimur (talk) 19:15, 5 December 2009 (UTC)[reply]
I have heard the term aural orgasm used to describe this, although I can't find any particularly reliable sources that define it. Mitch Ames (talk) 02:57, 6 December 2009 (UTC)[reply]
Isn't this sensation the basic meaning of the word "thrill"? --Anonymous, 04:55 UTC, December 6, 2009.
The experimenting (or torturing) physician in A Clockwork Orange was surprised at the strong reaction the young thug Alex had to classical music. Stanley Kubrick seemed to be making the same point: Alex epitomised unsentimentality and was not particularly well educated, but he responded to Beethoven, not pop, with bliss. BrainyBabe (talk) 19:37, 6 December 2009 (UTC)[reply]

Showering with contact lenses

Why do most manufacturers of soft contact lenses warn against showering with them in or using tap water to rinse out the lens case? What negative effects could showering with them in have on the lenses? Thanks! --98.108.36.186 (talk) 20:27, 5 December 2009 (UTC)[reply]

It's to do with contaminating the lenses. Normal tap water carries bacteria that, in the normal way of things, aren't a problem for most people. However, if they get onto your lenses the bacteria will be in contact with your eyes for hours at a time, and your tears can't wash them away properly. If these are lenses you wear for more than one day, the bacteria will continue to breed and grow, feeding on bits you haven't properly washed off the lens. And they'll still be there, more plentiful than ever, when you next put the lenses in. It can potentially blind the lens wearer. Here [1]. 86.166.148.95 (talk) 21:51, 5 December 2009 (UTC)[reply]
Thank you! —Preceding unsigned comment added by 98.108.32.19 (talk) 01:45, 7 December 2009 (UTC)[reply]

Raccoon

I was just walking back across campus and I saw a small animal on the path ahead. I assumed it was a cat, but the nose was the wrong shape, so I assumed it was an opossum. No biggie; opossums are vicious, but they aren't likely to have rabies. Then, when I was close enough to clearly make out the raccoon's markings (it was at night), I noticed that it was so intent on drinking the contents of the puddle that it didn't notice me. I must have passed within five feet of it. It is a college campus, so perhaps it's just abnormally tame, but isn't an early sign of rabies an intense thirst? I did look at the article, but I can't tell whether the thirst comes before or after the animal is unable to drink. Just as a note, I did report it to campus police. Falconusp t c 23:31, 5 December 2009 (UTC)[reply]

You seem to have been checking it out quite intently -- perhaps you're into raccoon drinking-voyeurism? :) I'm just saying that it's very easy to jump from "it was drinking and didn't see me" to "it must have a rabid thirst for it to not have seen me." DRosenbach (Talk | Contribs) 00:50, 6 December 2009 (UTC)[reply]
Well, I thought it odd, as most wild animals will at least look at you when you walk within a few feet. Falconusp t c 00:56, 6 December 2009 (UTC)[reply]
Have you ever spent some time around raccoons, even wild ones? Most that live close to people generally behave exactly as you describe, in my experience. When I was growing up, it was not uncommon to have raccoons in my yard picking through the trash. They frequently didn't pay me any attention, even if I yelled, threw rocks, whatever. I had to get close enough to grab them before they would move far enough away for me to pick up all the trash. Then they went back to picking through it as soon as I walked away. They simply don't seem to pay humans much mind, and they certainly weren't much afraid of me. --Jayron32 01:08, 6 December 2009 (UTC)[reply]
My experience with urban raccoons is that they don't care very much about humans and regularly ignore them. (Dogs are another matter.) They are tough animals that nobody hunts near cities. --Mr.98 (talk) 01:34, 6 December 2009 (UTC)[reply]
Well, I guess I made a big deal out of nothing. I have just never seen an animal do that. Falconusp t c 01:52, 6 December 2009 (UTC)[reply]
A little known fact about raccoons is that they wear raccoon coats not for fashion reasons. Bus stop (talk) 02:02, 6 December 2009 (UTC)[reply]
I often see raccoons around the house, and they are pretty intelligent creatures. They aren't likely to regard you with anything more than peripheral vision unless you do something problematic. Vranak (talk) 04:15, 6 December 2009 (UTC)[reply]
I don't know, I frequently see raccoons peering in through the bottom panel of my glass door at night. They're pretty curious. Looie496 (talk) 17:45, 6 December 2009 (UTC)[reply]
That's because you've got stuff they'd like to eat and rummage through, not because they care about you. --Mr.98 (talk) 20:10, 6 December 2009 (UTC)[reply]


December 6

Vacuoles, vacuolation, vacuolisation and vacuolization

Is vacuolation the same as vacuolization? As it stands, the former currently redirects to a section in the article for vacuoles in which it states that this is a process in which vacuoles form pathologically, while the latter is its own article that might seem to indicate pathosis but doesn't necessarily spell it out nicely. Vacuolisation, which appears to me to be nothing more than a (perhaps British) spelling variant of vacuolization, redirects to the main article on vacuoles. This is what I think -- correct me if I'm wrong:

  1. Vacuolisation and vacuolization are spelling variants of the same thing
  2. Vacuolation and the aforementioned spelling variants are variant words for the same thing -- sort of like dilation and dilatation.
  3. The mini-section on this concept within the article on vacuoles should make a statement or two about it and include a link to the article that will delve into it deeper.

Let me know if there's any disagreement on the definitions, etc. before I go ahead and do it. Thanx! DRosenbach (Talk | Contribs) 00:47, 6 December 2009 (UTC)[reply]

It may be best if this discussion happened on the talk page of the articles in question (pick one to have the discussion, and leave notices on the other talk pages). Since this involves a question which stands to have a material impact on the content of the article space, the discussion should probably happen on those talk pages, since editors who edit and patrol those articles would likely be interested in it. --Jayron32 01:04, 6 December 2009 (UTC)[reply]
I didn't imagine that the talk pages of any of these articles were nearly as high-volume as this page. Additionally, the editors of the aforementioned articles obviously have left this crucial point unmanaged for some time. DRosenbach (Talk | Contribs) 02:12, 6 December 2009 (UTC)[reply]
OK -- I placed notes on both the article talks to see here. Now we can discuss it here. DRosenbach (Talk | Contribs) 02:15, 6 December 2009 (UTC)[reply]
Wouldn't WikiProject Biology be the place to discuss such article issues? Fences&Windows 16:13, 6 December 2009 (UTC)[reply]
I agree with Fences - discussion of article content belongs in article Talk space, where it will be archived along with the article, or in a linked wikiproject created for such a purpose. -- Scray (talk) 18:08, 6 December 2009 (UTC)[reply]

Why does increasing CO2 concentration matter?

Doesn't 350 ppm CO2 absorb the same amount of infrared from the miles of Earth's atmosphere as 400 ppm? I can see how the difference between 350 and 400 ppm CO2 in air would change how much infrared could be absorbed by a test tube's width, but for the vast depth of Earth's atmosphere, I just can't understand how it could change the total absorption. Are there any articles or sources that discuss this? I've looked at greenhouse gas, global warming, radiative forcing, and their talk pages, but maybe I overlooked something? 99.56.139.149 (talk) 01:13, 6 December 2009 (UTC)[reply]

350 to 400 ppm represents an increase of 14.2%, so all other things being equal, this will increase the "greenhouse effect" contributed by the CO2 by 14.2%, which is a significant and measurable amount. Regardless of the size of the sample, a 14.2% increase is a 14.2% increase. --Jayron32 01:25, 6 December 2009 (UTC)[reply]
Unfortunately not, Jayron - changing the concentration at the bottom of the atmosphere by 14% does not increase the total absorption by 14%. See optical depth for the physical mechanism of increased gas concentration on total atmospheric absorption. For gas of uniform density, optical depth is exponentially related to concentration. Compound this by the fact that the atmospheric profile is also roughly exponentially decaying with altitude. Nimur (talk) 02:23, 6 December 2009 (UTC) [reply]
Also keep in mind that greenhouse effect is only one effect. Atmospheric chemistry and climate are extremely complicated subjects. It is probably a great misrepresentation to say that CO2 is harmful primarily because of its contribution to a greenhouse effect. As you correctly point out, the albedo change and the difference in optical depth between 350 ppm and 400 ppm are very small. I would go so far as to call them negligible, and I can find a lot of planetary science references to back me up on that. However, and this is critical - the greenhouse effect is only one of many ways that a changing atmospheric composition affects climate. You may want to read climate change, which discusses some of the mechanics, and atmospheric chemistry, which will broaden the view of how carbon and other atmospheric constituents affect conditions on Earth. Nimur (talk) 01:57, 6 December 2009 (UTC)[reply]
You can use a web-based radiative transfer model to help you see that the amount of radiation absorbed at 350ppm is not the same as at 400ppm. It's actually easier to see the difference if you use the pre-industrial value for the concentration of CO2 of ~280ppm and a value we're likely headed towards ~450ppm. -Atmoz (talk) 01:59, 6 December 2009 (UTC)[reply]
Spaceborne measurements of atmospheric CO2 by high resolution NIR spectrometry of reflected sunlight, published in GRL in 1997, is a good quantitative overview of the Carbon Dioxide near-infrared spectrum in an experimental, in-situ, atmospheric context. Nimur (talk) 02:01, 6 December 2009 (UTC)[reply]
The other paper I like to point out in discussions of "global warming" and atmospheric chemistry is this 2003 Eos publication: Are Noctilucent Clouds Truly a "Miner's Canary" for Global Change? This paper points out some very interesting atmospheric effects - notably, it provides the novice atmospheric scientist with a reminder about conservation of energy. Unless the net power from the sun is changing (which is experimentally not the case), then for any "global warming," there must be some "global cooling" somewhere else - in this case, the mesosphere [2]. Observations of mesospheric weather therefore would be a good indicator of climate change - probably a better indicator than (say) average temperature measurements or atmospheric chemical content. "Of the infrared radiatively most important gases (CO2, O3, O, and perhaps H2O), none can currently be measured with sufficient accuracy at mesopause altitudes to establish its abundance there within anything like percent accuracy, not to speak of any significant long-term change." Therefore, these numbers about atmospheric carbon content are sort of useless - remember, all the quoted numbers are for the troposphere, and almost all the data comes from surface measurements. The actual total carbon content of the atmosphere, per the opinions of the scientists of these papers, is actually very poorly known. On top of this, our only method to probe it is via NIR optical density measurements - and the first paper I linked will give you some idea of the quantitative measurement accuracy for that. Unfortunately, these statements and this line of reasoning sparked huge controversy back in 2003, because they do not toe the simplistic "more carbon ppm = evil" line. But in reality, it's simply establishing an actual scientific context for evaluating the meaning of one particular surface measurement - atmospheric carbon concentration at the surface level. Changing the tropospheric carbon content will certainly result in a different chemistry mechanism in the upper atmosphere, and again, we have extremely complex, non-greenhouse-effect climate-change consequences. Nimur (talk) 02:07, 6 December 2009 (UTC)[reply]
I'm not following the conservation of energy argument. In order for global warming to demand global cooling, the earth would have to be treated as a "closed" system (with a constant and equal input and output). The net input from the sun is assumed constant, but isn't the fundamental argument of the greenhouse effect that the amount of energy radiated from the Earth is decreasing? If energy in is constant and energy out is decreasing, net energy in the system is increasing. Compartmentalizing the system might change the amount of energy locally (at surface or at mesosphere) but the system as a whole can still experience a net increase. Open systems need not obey conservation of energy, and the earth is not a closed system. SDY (talk) 02:31, 6 December 2009 (UTC)[reply]
If the planet temperature increases, its blackbody spectrum will change and it will radiate power faster according to the Stefan-Boltzmann law. Surface temperature may change as a result of greenhouse effect, but planet effective temperature cannot. Nimur (talk) 02:34, 6 December 2009 (UTC)[reply]
According to our articles, Stefan's Law relies on emissivity, and the effective temperature is also a function of albedo. Again, emissivity is arguably what is changing, and changes in albedo (ice has very high albedo, melted ice has less) are also a concern. Why must effective temperature be constant? SDY (talk) 03:25, 6 December 2009 (UTC)[reply]
Nimur, you are complicating things much more than necessary. It is true that the effective temperature does not change. But that does not require cooling of the mesosphere (though some cooling is possible). None of that is required to understand the basic idea behind global warming which is what the question is about. Dauto (talk) 03:17, 6 December 2009 (UTC)[reply]
Ok. What I said above is not entirely accurate. The effective temperature CAN change if earth's albedo changes. But it will not change as a (direct) consequence of the increase in atmospheric CO2. Dauto (talk) 04:31, 6 December 2009 (UTC)[reply]
The original questioner noted that a change of 350ppm to 400 ppm does not significantly change the transparency of the entire atmosphere (integrated over the full height) to infrared wavelengths. This is a scientific fact, set forth in the articles I linked. The complexity comes in because climate change can still occur even though the additional CO2 is not adding to the cumulative greenhouse effect. So, the logical question is - "if climate change is not strictly the result of greenhouse effect, then what is it an effect of?" And, again, the answer is "very complex atmospheric chemistry changes which may result in a different energy distribution in the troposphere." Sorry that this is not a simple answer - but "more carbon = more greenhouse effect" is an overly simplistic and scientifically incomplete picture. Let me succinctly rephrase: adding CO2 may still cause climate changing effects, even if the total change in atmospheric IR absorption is negligible, because other effects come into play. Nimur (talk) 05:27, 6 December 2009 (UTC)[reply]
That's a bunch of nonsense. The additional CO2 IS adding to the greenhouse effect, and that's why the earth's mean temperature is increasing. Dauto (talk) 14:06, 6 December 2009 (UTC)[reply]
I'm sure that is what you read about in high school science textbooks and the newspaper, but I would suggest moving to a geophysics or planetary science journal to get a more accurate scientific picture. Here is a nice, albeit old, piece from Science: Cloud-Radiative Forcing and Climate: Results from the Earth Radiation Budget Experiment (1989). Again, experimental and quantitative results suggest that carbon dioxide induced "greenhouse effect" is not the most relevant effect. It may play a role, and anthropogenic carbon may be a root cause of some other changes, but the climate change is not due only to greenhouse effect: "Quantitative estimates of the global distributions of cloud-radiative forcing have been obtained from the spaceborne Earth Radiation Budget Experiment (ERBE) launched in 1984.... The size of the observed net cloud forcing is about four times as large as the expected value of radiative forcing from a doubling of CO2. The shortwave and longwave components of cloud forcing are about ten times as large as those for a CO2 doubling. Hence, small changes in the cloud-radiative forcing fields can play a significant role as a climate feedback mechanism." Do you really intend to stick to your simplistic model of greenhouse insulation, when experimental observation has repeatedly shown it to be 10 times smaller than other atmospheric physics effects? [3][4] Even these are small compared to massive climate-scale energy redistributions, e.g. Does the Trigger for Abrupt Climate Change Reside in the Ocean or in the Atmosphere? (2003). To reiterate: the carbon dioxide in the atmosphere is present; it is probably anthropogenic; and its biggest impact on climate is probably not actually related to greenhouse warming, but to other effects that CO2 can induce. Nimur (talk) 15:11, 6 December 2009 (UTC)[reply]
Nimur, I do not deny that there are many complex positive and negative feedback effects that must be taken into account in order to reach a precise quantitative description of climate change. But none of that is necessary to give the OP an answer that makes sense. You said "the additional CO2 is not adding to the cumulative greenhouse effect". And that's just not true. Dauto (talk) 15:45, 6 December 2009 (UTC)[reply]
It seems that my efforts to link to scientific papers are not getting my point across. Let me make an analogy, which is actually very analogous to the situation (except that CO2 is blocking "upgoing" photons which are re-radiated from the earth... I'm only concerned with the opacity of the atmospheric window, though, so direction doesn't matter). Imagine that you are building a roof, and for some reason you are trying to block light from the sun, and you use thick steel plates to block the sunlight. Each steel plate is 1 inch thick, and blocks most of the photons. For your purposes, you want to really block the sunlight, so you build a giant structure and you put 350 steel plates between you and the sunlight. Now, along comes an upstart engineer, who says he has 50 more steel plates in the scrap-yard, and he's going to add them to your structure, whether you want them or not. Two questions: (1)how much more sunlight are those extra 50 steel plates going to block? Probably none. (2) Are there other problems that those extra steel plates will produce? Absolutely. Your structure wasn't designed for 400 steel plates on its roof.
How does this correspond to the atmospheric carbon situation? Well, the carbon dioxide atoms are narrow-band absorbers of photons. They really only affect a small part of the total solar energy spectrum. And by the time we have 350 ppm, they are pretty much blocking all the sunlight in that particular part of the infrared spectrum. Dauto, you are absolutely correct, in that adding more carbon will increase the absorption - in the same way that adding more steel plates to a roof will block more photons. Because of the way that exponential functions work, this change is negligible. So, if we really have a problem with adding excess carbon, it isn't because of the greenhouse effect or because those carbon molecules will be blocking any extra solar energy. Other effects are the real potential problem - and we need to understand those effects to make sure our roof doesn't collapse under the weight of 50 extra steel plates. Nimur (talk) 16:08, 6 December 2009 (UTC)[reply]
No, Nimur, that is not a good analogy. There is a very good article by Spencer Weart on RealClimate here. Yes, the direct opacity of the atmosphere is not significantly changing when adding more CO2. But what does happen is that the "final" emission layer moves further up the atmosphere, providing more chances for re-emission towards the ground. This is a physical effect, not a chemical process. And while this also is an exponential decay, it still is quite significant - that's why doubling CO2 without feedbacks gives us a ≈1℃ increase in temperature. --Stephan Schulz (talk) 16:30, 6 December 2009 (UTC)[reply]
That is a good read, Stephan. And, as you say, the absorption profile is very relevant as well. Changing the concentration will change the relevant scale height for the near infrared spectral effects. I still disagree with your unsourced assertion that doubling carbon would yield a 1 degree celsius increase in surface temperature. I stand by the references I linked earlier - most importantly, the quantitative analyses of total radiative effects and energy balance experiments - but at this point I think it's moot to argue. Nimur (talk) 16:40, 6 December 2009 (UTC)[reply]
Thanks Schulz. That's finally putting us on the path towards giving the OP a sensible answer to the question asked. The point of the greenhouse effect is not how much of the earth's radiation gets absorbed by the atmosphere. How much of that energy finds its way back to the surface IS the relevant question. Dauto (talk) 16:39, 6 December 2009 (UTC)[reply]
I think you somewhat misrepresented that Eos review: it is about Noctilucent clouds and whether they are signals of climate change, and does not mention anything about conservation of energy. Indeed, the article states that "The temperature will be affected by any anthropogenic changes of the CO2 and/or O3 abundances". Your comment that "Unless the net power from the sun is changing (which is experimentally not the case), then for any "global warming," there must be some "global cooling" somewhere else" is wrong as Earth is not a closed system; the cooling occurs in space. Of course atmospheric content affects surface temperature, which is why Mars is freezing, Earth is warm and Venus is toasty. Why are you quoting from 10-20 year old papers about how CO2 affects climate when there are newer articles on the topic? e.g. [5][6][7] Fences&Windows 16:54, 6 December 2009 (UTC)[reply]
I usually quote papers I've read - sometimes I read them 10 years ago. There's no shortage of new material. But, given that everybody is trying to establish a long-term-change, don't you think it may be worth checking primary source data from previous decades before making bold claims about massive changes in recent years? In any case, ERBE was a great experiment on a great spacecraft, and a hallmark of empirical data collection for global climate studies. It should be cited more often. Nimur (talk) 16:56, 6 December 2009 (UTC)[reply]
That picture above should help make clear why adding more CO2 to the atmosphere increases the surface temperature even after saturation is achieved. The amount of energy cycling between the earth's surface and the atmosphere can be (and is) much larger than the amount of energy coming from the sun. More CO2 increases the energy being fed back to the surface warming it up. Dauto (talk) 22:12, 6 December 2009 (UTC)[reply]
[Image caption (diagram at right): Is the difference in heat absorption between the green (300 ppm) and blue (600 ppm) curves, over 20 km of atmosphere, anywhere near 14%?]

Original questioner here. I understand what's been said, but nobody has addressed my actual question: why does such a slight concentration change, in bands where the atmosphere is almost completely opaque, make any substantial difference in the total amount of absorption? Or to put it another way, the diagram on the right shows the transmission spectra at 300 ppm and 600 ppm. The total amount of energy difference represented by the gap between the blue and green lines isn't anywhere near 14%, is it? What is the actual amount of energy forcing between the actual Earth's atmosphere at 350 ppm and at 400 ppm?

Update: I take it back! Stephan Schulz addressed my question correctly at 16:30 above. Thanks Stephan! 99.62.185.148 (talk) 00:53, 7 December 2009 (UTC)[reply]
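(A numeric footnote for later readers, using the widely cited simplified forcing expression from Myhre et al. (1998), not a figure taken from the discussion above:

    ΔF ≈ 5.35 × ln(C/C0) W/m²

For 350 → 400 ppm this gives 5.35 × ln(400/350) ≈ 0.7 W/m², compared with about 3.7 W/m² for a full doubling of CO2. Both are small next to the roughly 240 W/m² of sunlight the planet absorbs, but not zero, which is the logarithmic point made in the RealClimate piece linked above.)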

Size of average Caucasian female head

What is the average size of a Caucasian female head? --I dream of horses (T) @ 02:25, 6 December 2009 (UTC)[reply]

14.4 centimeters. Nimur (talk) 02:29, 6 December 2009 (UTC)[reply]

Slightly in theme, the circumference of the head of a cat is equal to the length of his tail. So, when a cat goes to a hat shop, he only has to let the clerk measure his tail. --pma (talk) 09:40, 6 December 2009 (UTC)[reply]
That sounds problematic for a tailless cat. moink (talk) 11:58, 6 December 2009 (UTC)[reply]
Seems problematic for a headless cat as well. Dauto (talk) 14:01, 6 December 2009 (UTC)[reply]
The problem is moot for a headless cat. --Tango (talk) 15:03, 6 December 2009 (UTC)[reply]
Note however that neither tailless nor headless cats wear hats. --pma (talk) 16:21, 6 December 2009 (UTC)[reply]
Are you sure? I don't think that is entirely true. SpinningSpark 16:44, 6 December 2009 (UTC)[reply]
You mis-read the part about headless and tailless. Nimur (talk) 16:52, 6 December 2009 (UTC)[reply]
I think you're right about Spinningspark, but I believe this disproves the first part of the statement, although it's difficult to be certain (the tag says it's a Manx and there's no tail visible, but some Manx cats do have a bit of a tail). Nil Einne (talk) 13:40, 9 December 2009 (UTC)[reply]
If I decapitate a hat-wearing cat, the hat might stay on the head, but you (Tango and pma) are saying that "the cat is not wearing his hat" (...because he is not wearing his head). If we assume the essence of being of a cat is based on his brain, we have just proven that a cat's brain is below his neck rather than being in its head. Wanna co-author the paper? DMacks (talk) 18:01, 6 December 2009 (UTC)[reply]
From an answer to a question a few months ago, I have to sadly report that this paper already exists. Essence or no, decerebrate cats are quite alive and do have most of their normal respiratory and gastric functionality intact, because these functions are controlled by the spinal cord and brain stem. Nimur (talk) 18:06, 6 December 2009 (UTC)[reply]
I haven't looked at the paper, but there is a difference between decerebrated and decapitated. The cerebrum is just one part of the brain. --Tango (talk) 23:07, 6 December 2009 (UTC)[reply]
I would define a cat as the combination of a cat's head and a cat's body. If you cut the head of a cat, you no longer have a cat. (The definition is chosen primarily so that you will be wrong, but it is a justifiable definition!). --Tango (talk) 23:07, 6 December 2009 (UTC)[reply]
We're really drifting off topic from the OP's question. Sorry for my contribution to that effect. Per the guidelines, let's stay on topic for the OP. Nimur (talk) 18:11, 6 December 2009 (UTC)[reply]
How do Manx cats get measured for a hat? 78.149.206.42 (talk) 18:00, 8 December 2009 (UTC)[reply]

Human nervous system latency

Resolved

I am looking for the human nervous system latency time. I couldn't find it in the article nervous system. Basically, I am looking for the number of milliseconds or microseconds between 2 events.

Situation: the human is driving a car and a child crosses the road 20 meters ahead. The driver has to turn the wheel to avoid the child.

Event 1: the light reflected by the child (visual signal) enters the eye of the driver.

Event 2: the hand of the driver starts moving to turn the wheel.

Assume that the human is normal, awake, has not been drinking, and attempts to react as fast as possible. I am basically looking for the total latency necessary for these operations: the visual signal reaching the brain, the brain processing the signal and recognising danger, the brain making the decision to turn the wheel, the brain instructing the hand to turn the wheel, the message travelling from brain to hand and arm muscles, and finally, the muscles beginning to contract. I don't need the split between the operations, just the total number of milliseconds between event 1 and event 2. Could you please help? This is not a homework question :-) --Lgriot (talk) 03:27, 6 December 2009 (UTC)[reply]

Reaction time is probably the best we have; it cites 180-200 milliseconds to detect a simple boolean visual stimulus; your instance is a much more involved problem and so you can, I think, expect the reaction time to be greater. --Tagishsimon (talk) 03:37, 6 December 2009 (UTC)[reply]
Thanks, that is exactly what I was after. --Lgriot (talk) 04:48, 6 December 2009 (UTC)[reply]
In the Highway Code in the UK there are a set of stopping distances for a variety of speeds. In that publication, stopping distance is given as the sum of thinking distance and braking distance. Thinking distance is invariably given as the distance in feet being equal to the speed in mph. So the estimated time to see something happen and shift a foot to the brake pedal (similar to seeing something and steering) is estimated by the UK driving authorities as about 0.7 seconds. --Phil Holmes (talk) 11:25, 6 December 2009 (UTC)[reply]
That's not exactly the same, though, even though I doubt it would matter much in practice. You hopefully always drive with your hands on the steering wheel, but I'm guessing the same can't be said for having a foot on the brake all the time. -- Aeluwas (talk) 14:18, 6 December 2009 (UTC)[reply]
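(For the curious, the 0.7 second figure falls straight out of the Highway Code rule of thumb. A speed of v mph is v × 5280/3600 ≈ 1.47v feet per second, so covering a thinking distance of v feet takes:

    t = v / (1.47v) ≈ 0.68 s

independent of speed, which is why the Code can use a single reaction time at every speed.)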

Interferomics possible editorial problems

I recently came across this article and, after making some corrections, noted that the term was said to have been coined by researcher Gaurav Rana. When I reviewed the history, the article was created by User:Gauravsjbrana. A subsequent Google search revealed this article [jmd.amjpathol.org/cgi/reprint/9/4/431.pdf], which mentioned the emerging field of interferomics in 2005 but makes no attribution to Gaurav Rana. I see a number of issues here: firstly, it is a specialized field, so inaccurate editing may remain undetected; secondly, it appears to be self-promotion; thirdly, it could be false representation. I am hoping someone with more experience in these matters can take a look. Matt (talk) 03:57, 6 December 2009 (UTC)[reply]

  • edit - oops, it appears I may have asked this question in the wrong place
  • please ignore this; I have posted a welcome to this user and a short note regarding the possible issue with the article Matt (talk) 04:21, 6 December 2009 (UTC)[reply]

my son (10 years old)

Removed request for medical advice. Wikipedia cannot give medical advice. Only a medical professional can give responsible medical advice.

William Thompson

What substantial contribution did William Thompson make in the field of physics? Kittybrewster 12:34, 6 December 2009 (UTC)[reply]

I suspect you're thinking of Lord Kelvin. - Nunh-huh 12:45, 6 December 2009 (UTC)[reply]
Indeed. Thank you. Kittybrewster 13:13, 6 December 2009 (UTC)[reply]
Pub quiz question? Fences&Windows 13:30, 6 December 2009 (UTC)[reply]
Science test paper. "Homework for Grown-ups". Kittybrewster 13:59, 6 December 2009 (UTC)[reply]
Maybe we should put the disambig link to the Thomson page a bit higher on that page? It seems like a pretty easy mistake to make. --Mr.98 (talk) 14:36, 6 December 2009 (UTC)[reply]
Thought the same thing... --Stephan Schulz (talk) 15:34, 6 December 2009 (UTC)[reply]
I couldn't see him anywhere on that page! I'll go and add him. Dmcq (talk) 16:20, 6 December 2009 (UTC)[reply]
Well, the issue is that he doesn't have a p in his name, so ol' Lord Kelvin himself doesn't belong on that page... I've done something like what I think might be useful (putting the non-P see-also at the top, rather than at the bottom). --Mr.98 (talk) 16:43, 6 December 2009 (UTC)[reply]
If we want to increase the usefulness of disambiguation pages, we shouldn't insist on exact spelling. Similar spelling or similar pronunciations should be enough; the pages are so that articles that could be confused with each other can be found and distinguished. There should be one Thomson/Thompson disambiguation page for each Thomson/Thompson, with appropriate redirects pointing to it. - Nunh-huh 23:10, 6 December 2009 (UTC)[reply]
Not to mention Tomson or even Tompson (although there are only two of those). Mikenorton (talk) 23:21, 6 December 2009 (UTC)[reply]

How long are viruses active?

When you are suffering from the Common cold you are spewing cold viruses all around your home or workplace when you cough and sneeze. How long can these viruses stay active and possibly infect someone else? And what eventually happens to them -- do their molecules eventually disintegrate, or do they just spread out so much that they can no longer cause an infection? —Preceding unsigned comment added by Fletcher (talkcontribs) 10:37, 6 December 2009

A quick search of the reference desk archive (box at the top of this page) for 'virus "outside the body"' yields a link to a relevant discussion. Also, our Common cold article has some relevant info, though I would agree that those resources don't answer your question directly (I did not search the RefDesk exhaustively, so others may find a really good answer to what seems like it would be a frequently asked question). Virus survival in the environment varies widely based on environmental conditions. How those conditions affect viral infectivity depends on viral characteristics. For example, the most common cause of a cold is one of the many serotypes of Rhinovirus. Rhinoviruses are picornaviruses, which have a RNA genome, making them more susceptible (than DNA viruses) to genetic damage (which would render them noninfectious); making them much more resilient, though, is their lack of lipid coat such that they survive complete drying. Additional issues include the amount of virus shed, since a heavily-shed virus (relative to its infectious dose) will remain infectious longer. It seems clear that cold viruses can remain infectious for days (PMID 6261689, full text here), at least under some conditions (keep in mind that virus from your nose would never be in "buffered water", and drying in the presence of albumin, as they did in some experiments that showed more prolonged infectivity, is closer to the normal situation). This article references earlier studies on environmental persistence of infectious rhinoviruses, and the efficacy of various disinfection measures. There's also an interesting study of flu virus viability relative to environmental conditions. -- Scray (talk) 17:11, 6 December 2009 (UTC)[reply]
Thanks, very helpful! Fletcher (talk) 17:58, 6 December 2009 (UTC)[reply]

Burns and clothing vs. bare skin

Are burns more or less severe when the burn area of the victim is covered by clothing? That is, does clothing, as opposed to bare skin, alleviate or exacerbate the severity of burns? I assume that it probably depends on the type of burn, so could the question be answered for the various types of burns (chemical burns, electrical burns, hot oil burns (that is, cooking oil), open flames, radiation burns, and steam burns)? —Lowellian (reply) 17:07, 6 December 2009 (UTC)[reply]

First Aid for Soldiers[8] distinguishes between natural and synthetic materials (roughly). "Caution - Synthetic materials, such as nylon, may melt and cause further injury." They also distinguish between the cases whether the fire and flames are still burning on the clothing, or if the flames are extinguished. After the situation is safe, the general instructions are to expose the burn by cutting and gently lifting clothing away, but leaving in place any cloth or material which is stuck to the burn area. They also have special caveats for cases of chemical burns and blisters. Following treatment, the entire area is re-covered in sterile field dressing, to protect the burn area. Nimur (talk) 17:38, 6 December 2009 (UTC)[reply]
Thanks, but I wasn't asking how to treat clothing burns; I was asking whether burns are less or more severe against clothing or against bare skin. That is, does clothing have protective value against burns, or do they only exacerbate burns? —Lowellian (reply) 17:45, 6 December 2009 (UTC)[reply]
Of course, it depends on the clothing. As noted above, nylon will melt and exacerbate the burn. Conversely, an asbestos apron or flame-retardant PPE will not burn and will also insulate the victim from heat. Materials like (real, non-synthetic) leather will probably serve a pretty good protective role. The more uncertain cases are fibers like cotton or wool, which will burn. These will probably exacerbate a burn and may increase the contact time with the flame/heat source, but it depends on conditions. In some cases, the flame may actually carry more heat away than it produces, but in general I think direct exposure is a bad thing. Nimur (talk) 17:51, 6 December 2009 (UTC)[reply]
(ec) Clothing is a blessing and a curse. It acts as a physical block, preventing as much "burning agent" from getting to the skin. For example, fabric insulates or slows heat transfer, absorbs small hot-oil drops so they cool before soaking through (if they soak all the way through at all), keeps as much concentrated sulfuric acid from reaching one spot, etc. And certain fabrics are well-designed to block penetration specific burning agents. But once the cause is removed, the clothing keeps the burning agent (what's left of it) close to the skin, leading to prolonged burning. For example, the soaked fabric is still transferring heat or sulfuric acid to the skin. And the results of the fabric exposure to the burning agent can have additional effects beyond "whatever the burn itself is" (see Nimur's comment about synthetic fabrics melting). So the "cause of the burn" isn't removed until the fabric is. DMacks (talk) 17:54, 6 December 2009 (UTC)[reply]

Greenhouse Gases

How is it that CO2 rises in the atmosphere when it is heavier than air?Taskery (talk) 18:10, 6 December 2009 (UTC)[reply]

Convection, or mixing of gases, is the dominant descriptor of the troposphere. This means that because of uneven heating and turbulent fluid motion, things like wind and updrafting occur, resulting in a "well-mixed" gas distribution. At higher altitudes (notably, first the stratosphere, stratified on temperature; and above, the mesosphere, layered based on chemical content), gases separate out based on their velocity or molecular mass, but this is not the case in the lowest regions of the atmosphere. Note that there are some exotic mechanisms which can carry CO2 even higher than its equilibrium altitude. Middle Atmosphere Dynamics is a good book if you are interested in some other ways CO2 can "float" its way up. Nimur (talk) 18:16, 6 December 2009 (UTC)[reply]
The region of atmosphere that is well-mixed is called the turbosphere or homosphere, and is separated from the heterosphere by the turbopause, which is usually well above the stratosphere. --Stephan Schulz (talk) 18:31, 6 December 2009 (UTC)[reply]
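To put a rough number on how strongly gravity "wants" to separate these gases: each gas's scale height H = RT/(Mg) is the altitude over which its partial pressure would fall by a factor of e if the atmosphere were not mixed. A minimal Python sketch (standard textbook constants, not figures from this thread):

    # Scale height H = R*T/(M*g) for an unmixed, isothermal atmosphere
    R = 8.314   # J/(mol*K), gas constant
    T = 288.0   # K, rough mean surface temperature
    g = 9.81    # m/s^2, gravitational acceleration

    for name, M in [("N2", 0.028), ("O2", 0.032), ("CO2", 0.044)]:
        H = R * T / (M * g)   # M in kg/mol, so H comes out in metres
        print(f"{name}: scale height ~ {H / 1000:.1f} km")

    # N2 ~ 8.7 km, O2 ~ 7.6 km, CO2 ~ 5.5 km -- yet the measured CO2
    # fraction is nearly constant up to the turbopause, because turbulent
    # mixing in the homosphere overwhelms this gravitational separation.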

Knives and electrical sockets

I have heard it said that it is dangerous to stick the tip of a knife into an electrical socket. But as long as the knife is non-metal (e.g. plastic knife) or has a non-metal handle, I don't see why this would be any more dangerous than sticking an ordinary electric plug into the socket. I don't mean digging deep into and actually cutting up the socket; I mean just sticking the tip in as far as it will go without forcing. A plastic knife wouldn't even conduct, right? And wouldn't the wooden or plastic handle on a metal knife serve as insulation from the metal blade the same way that the plastic or rubber base on an electric plug serves as insulation from the metal prongs? —Lowellian (reply) 18:34, 6 December 2009 (UTC)[reply]

An electric appliance provides an electrical path from the live wire to the return wire. If you stick things in the outlet, you are the easiest electrical path to ground - so even if you are insulated, it's still less safe than plugging in an appliance cord. Nimur (talk) 18:42, 6 December 2009 (UTC)[reply]
A plastic or wooden knife won't conduct unless it is wet. Most knives are metal, though, and even for ones that have a plastic or wooden handle, you will usually see, if you look at them closely, that they aren't easy to grasp in a way that avoids touching any metal. Looie496 (talk) 18:49, 6 December 2009 (UTC)[reply]
The pins of the electrical plug are of a standardized length - it's possible that a longer blade might short out the wires behind the socket too. But I agree that 110v (or even 240v) isn't going to arc through the plastic handle of a knife any more than placing your finger against the plastic housing of the electrical socket is going to result in electricity jumping into your body. But these kinds of advice are put out there for the average knucklehead who hasn't noticed that the plastic side-plates of his knife are held on with a couple of brass rivets - and touching one of them might result in a shock. I've stuck electrical screwdrivers and volt-meter probes into electrical sockets dozens of times - but it always has to be a matter of thinking out each move carefully before you do it: where are you going to be putting your fingers, where will the current flow. The trouble is that 99% of people don't do that - so "Don't stick knives into electrical outlets!" is very good advice. SteveBaker (talk) 20:57, 6 December 2009 (UTC)[reply]
Actually I'd hope that the plastic housing of the electrical socket would be explicitly designed as a good insulator, whereas the plastic knife probably would not be. Thus it might be easier for the electricity to arc through the knife than the socket. Mitch Ames (talk) 11:59, 7 December 2009 (UTC)[reply]
Why would you even want to do that? Do you stick beans up your nose? —Preceding unsigned comment added by 79.75.87.13 (talkcontribs) 16:46, 6 December 2009

More climate questions

One of the basic arguments of the Kyoto Protocol is that more carbon = bad. What is the relationship between increased CO2 and increased adverse effects? I'd think average ocean pH, average global temperature, and average temperature in the polar regions have been talked about enough that some sort of estimate could be given. Are these generally linear, exponential, logarithmic, sigmoid, or other mathematical relationships? Do these relationships work differently for other known bad actors (e.g. methane)? SDY (talk) 19:11, 6 December 2009 (UTC)[reply]

I don't think there is the simple relationship you are looking for. For temperature, we have a fairly good idea that doubling CO2 implies approximately 3 ℃ of warming in the limit, i.e. when equilibrium has been reached. See climate sensitivity. The basic relationship (logarithmic warming) is the same for all greenhouse gases, AFAIK. However, the practical effect differs, since methane is oxidized fairly quickly to CO2 and water (with the water raining out), while CO2 keeps accumulating. "Average ocean pH" will take a long time to equalize - ocean overturn times are on the order of millennia. Surface water acidifies a lot faster. Ocean acidification has some estimates for surface acidification. --Stephan Schulz (talk) 19:31, 6 December 2009 (UTC)[reply]
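The logarithmic relationship can be written ΔT ≈ S · log2(C/C0), where S is the equilibrium climate sensitivity per doubling. A minimal sketch (the 3 ℃ sensitivity is the figure quoted above; the concentrations are illustrative round numbers):

    import math

    S = 3.0     # degC per doubling of CO2 (sensitivity quoted above)
    C0 = 280.0  # ppm, illustrative pre-industrial concentration

    for C in (390.0, 560.0, 1120.0):  # illustrative concentrations, ppm
        dT = S * math.log2(C / C0)
        print(f"{C:6.0f} ppm -> equilibrium warming ~ {dT:.1f} degC")

    # 560 ppm (one doubling) gives exactly 3.0 degC; each further
    # doubling adds the same increment - that is what "logarithmic
    # warming" means.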
Have there been any estimates of how close the system is to that limit currently? What physical condition is that limit consistent with (saturation of CO2 in the upper atmosphere, perhaps)? SDY (talk) 19:45, 6 December 2009 (UTC)[reply]
This paper sets forth the required measurement accuracy needed to estimate how close we are to a particular equivalent CO2 concentration in the atmospheric column. Nimur (talk) 19:59, 6 December 2009 (UTC)[reply]
The limit is reached when the net radiative imbalance is zero, i.e. when the imbalance caused by extra CO2 is balanced by the greater emission from a warmer planet. A simple analogy is a pot of water on a stove. As long as the stove is off, the temperatures of the stove and the pot will tend to be equal. Put the stove on at a low setting, and the stovetop will heat up quickly, while the temperature of the water lags. Steady state is reached when the water does not heat up any more. --Stephan Schulz (talk) 20:14, 6 December 2009 (UTC)[reply]
(edit conflict) The limit is associated with the Earth system reaching thermal equilibrium and is limited primarily by the thermal inertia of the oceans. Given an energy imbalance of a few W/m2 created by an enhanced greenhouse effect, the oceans will continue to gradually warm for a century or two. The surface warms fastest, but that heat gets mixed downward over time, and it takes a long time to reach a practical equilibrium given the sheer mass of the ocean. This is generally referred to as "warming in the pipeline" or "already committed warming". In rough numbers, after the thermal inertia is overcome, the ultimate change in average surface air temperature may be roughly double what has been observed in the short term so far. Dragons flight (talk) 20:16, 6 December 2009 (UTC)[reply]
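The thermal-inertia idea can be caricatured with a one-box energy balance: a single heat reservoir relaxing toward equilibrium. A toy sketch (the forcing, feedback parameter and heat capacity are illustrative round numbers, not measured values):

    # One-box energy balance: C * dT/dt = F - lam * T
    # T = temperature anomaly, F = radiative forcing, lam = feedback
    # parameter, C = effective heat capacity of the ocean mixed layer.
    F = 3.7       # W/m^2, illustrative forcing for doubled CO2
    lam = 1.23    # W/m^2/K, chosen so F/lam = 3 K at equilibrium
    C = 3.3e8     # J/m^2/K, ~80 m of ocean mixed layer (illustrative)

    dt = 86400.0 * 30  # one-month time step, in seconds
    T = 0.0
    for month in range(12 * 200):            # run for 200 years
        T += dt * (F - lam * T) / C          # explicit Euler step
        if month % (12 * 50) == 0:
            print(f"year {month // 12:3d}: T = {T:.2f} K "
                  f"(equilibrium {F / lam:.2f} K)")

    # The box approaches F/lam = 3 K with an e-folding time C/lam of
    # roughly 8 years; the real ocean's deep layers stretch this to a
    # century or more - the "already committed" warming.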
It really depends on who you ask, and what you consider "consensus". Here are a few good articles in Science: Climate Impact of Increasing Atmospheric Carbon Dioxide (1981), Anthropogenic Influence on the Autocorrelation Structure of Hemispheric-Mean Temperatures (1998), Detecting Climate Change due to Increasing Carbon Dioxide (1981), Where Has All the Carbon Gone? (2003), and (to pre-refute any claims that I am linking old science), here's The Climate in Copenhagen, (December 4, 2009). As you can see, even the qualitative patterns are hard to establish, let alone quantitative estimates. What is generally agreed is that excess carbon does yield negative results. But few quantitative predictions seem to agree. Nimur (talk) 19:34, 6 December 2009 (UTC)[reply]
The problem with coming up with a simple mathematical model is that there are a lot of effects - some with positive feedback terms - adding together. So (for example) the increased greenhouse effect causes a temperature rise that (presumably) has a simple relationship to the amount of CO2 in the upper atmosphere...Great! (you might say)...but it doesn't end there. That temperature rise causes melting of sea ice - which results in a change in albedo from bright white snow to dark ocean. Ocean doesn't reflect the sun's heat away as well as snow. That causes yet more heat to be absorbed than would be predicted by the greenhouse effect alone. So our simple relationship is not quite right - we have to correct for the albedo change. But the relationship between global temperature rise and local temperature rise at (for example) the North pole is complicated. The weather systems that cause the temperature to change at the pole on a day-to-day basis are chaotic (mathematically chaotic) - the "butterfly effect" and all that. So a tiny error in our measurement of global temperature rise can cause a much larger error in the assessment of polar ice temperatures.
But the trouble with that is that if the temperature remains 0.1 degree below freezing - then the ice stays frozen. If it's 0.1 degree above freezing then the ice starts melting...that's such a 'knife-edge' effect that an error in our temperature math of even a tiny fraction of a degree makes the difference between ice - and no ice. When you consider that the resulting temperature is dependent on how much ice melted - you have something that's unpredictable on a year-by-year basis. We know about general trends - more CO2 means more heat, no question about that - more heat means less ice, no question about that - and less ice means more heat absorbed, that's for 100% sure. But as for the precise shape of the CO2-to-final-temperature curve - all we can say is that, generally, more CO2 means more heat...putting a simple mathematical curve to that is tough. As bad as that is, it's only one of maybe a hundred other interacting effects. More heat melts glaciers too - but the meltwater flows down under the glacier, lubricating its contact with the rock and soil beneath - causing it to slide downhill faster and enter the warmer ocean prematurely. As ocean levels rise, light colored land gets covered by darker water - and the albedo changes. As the oceans warm up, they expand (water expands as it warms) - so ocean levels get yet deeper. But then, warmer oceans MIGHT promote algal growth which would absorb more CO2, helping things a little...but then as CO2 dissolves into the oceans, it makes the water more acidic - and that might kill off more algae.
This whole mess is a tangled maelstrom of interactions covering many, MANY subsystems. We can say a lot about the trends - but putting any kind of mathematical function to the effect is very tough. One worrying aspect of this is that many of the effects (like the potential for deep-ocean Methane Clathrates to melt, dumping ungodly amounts of another greenhouse gas, methane, into the air) are extremely poorly understood. Another worrying thing is that we keep finding new and subtle effects that are making matters worse.
But the trend is inexorably up - that much we know for sure. We also know that CO2 persists in the upper atmosphere for thousands of years - the amount we have put up there already isn't going away anytime soon. Current arguments are mostly about limiting the rate of increase in the amount we're adding every year! Only a few countries are talking about reducing the amount we produce - and none are talking about not producing any more CO2 at all. So the upward trend is there and we're not able to stop that. The best that we can do is to buy ourselves more time until we can figure out what (if anything) we can do about this mess. SteveBaker (talk) 20:43, 6 December 2009 (UTC)[reply]
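The ice-albedo feedback SteveBaker describes can be shown with a zero-dimensional energy-balance toy in which albedo falls as temperature rises. Everything below is an illustrative caricature with a deliberately mild albedo ramp (not a calibrated climate model):

    # Zero-dimensional energy balance:
    #   (1 - albedo(T)) * S0/4 + F  =  EPS * SIGMA * T^4
    SIGMA = 5.67e-8  # W/m^2/K^4, Stefan-Boltzmann constant
    S0 = 1361.0      # W/m^2, solar constant
    EPS = 0.611      # effective emissivity, tuned so baseline T ~ 288 K

    def albedo(T, feedback=True):
        if not feedback:
            return 0.30
        # Mild illustrative ramp: warmer planet -> less ice -> darker.
        return min(0.35, max(0.25, 0.30 - 0.001 * (T - 288.0)))

    def equilibrium_T(F=0.0, feedback=True, T=288.0):
        for _ in range(500):  # fixed-point iteration; converges here
            T = (((1 - albedo(T, feedback)) * S0 / 4 + F)
                 / (EPS * SIGMA)) ** 0.25
        return T

    F = 3.7  # W/m^2, illustrative forcing for doubled CO2
    for fb in (False, True):
        dT = equilibrium_T(F, fb) - equilibrium_T(0.0, fb)
        print(f"albedo feedback {fb}: warming ~ {dT:.2f} K")

    # The same forcing warms the planet more with the feedback switched
    # on; make the ramp steeper (more realistic near an ice edge) and
    # the equilibrium can become unstable - the "knife edge" above.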
The knife-edge thing doesn't really work for me. There is X amount of energy in the system, and some of that energy is used for the phase change, and the 0.1 degree temperature change at a phase change is not a minor investment of energy (i.e. 0.1 C to -0.1 C is nothing like 0.3 C to 0.1 C), and I would assume that any decent model takes that into account and that an estimate for the amount of melted ice is not absurd to attempt. SDY (talk) 21:07, 6 December 2009 (UTC)[reply]
I think everyone is trying to tell you it is more complicated than this. Initially warmer poles may mean more airborne moisture, hence more snow and faster ice accumulation, for example. People talk of thickening ice in the centre but melting at the edges. Given that accurate weather forecasts more than about 10 days ahead seem to elude mankind, you are asking for a precision or simplicity which just isn't there. --BozMo talk 21:14, 6 December 2009 (UTC)[reply]
SDY, if the temperature of the air in contact with the ice is above freezing the ice will melt. It may take some time because, as you said, the energy investment can be high. But it will melt. Dauto (talk) 22:19, 6 December 2009 (UTC)[reply]
I'm not expecting exact numbers, but there's more to the relationship than a knife edge. A 0.1 degree change in atmosphere temperature will not immediately melt all of the ice in the world, since there's an interaction between the ice and the atmosphere (and the ocean). I'm expecting that it's a question of "how much ice will melt and how rapidly will it melt" instead of "all ice immediately melts when this point is crossed." That's what I mean by the "knife-edge thing not working for me." Am I totally off base when I expect that climate change will cause the world to end "not with a bang but a whimper?" SDY (talk) 23:21, 6 December 2009 (UTC)[reply]
OK - try not to focus on that one specific thing (although it's enough that we can debate the answer at all!). The point I'm trying to make is that there are easily a hundred things like that that we know can cause temperature change either as a feedback effect or directly because of CO2 concentrations. If even a few of those are not well understood - or are hard to calculate accurately from imprecise data (sensitive-dependence on initial conditions...chaos theory) - then we cannot make accurate predictions. When there are sharply non-linear effects - and many of them - and each affects all of the others - then you have no way to provide a simple formulation of the consequences.
I don't think anyone thinks this will cause a literal end to the world. It could wipe out a lot of important species - humans may face starvation in alarming numbers if crops fail and invasive species run amok. The consequences for the health, well-being and lifestyle of everyone on the planet will be significant. But the planet will definitely survive - there will be life - and mankind will survive and probably still be on top. But the consequences are potentially severe. Whether it's a "Bang" or a "Whimper" depends on your definition. If this fully unfolds over a couple of hundred years then in terms of the history of life on the planet - it's a very brief "bang" - but from a human perspective, it'll be a long, drawn-out decline stretching over many lifetimes...I guess you could call that a 'whimper'. Was the Permian–Triassic extinction event a bang...or a whimper? At its worst, this could be kinda similar in extent (80% of vertebrates going extinct) and recovery time (30 million years). SteveBaker (talk) 00:23, 7 December 2009 (UTC)[reply]

(undent) The original answer wasn't very clear, probably because my original statement wasn't very clear. I clarified, and did not get a response that really helps. My conclusion, based on the response, is that it is a very poorly understood network of systems and drawing any sort of conclusion at this point about the interaction between ice melting and air temperatures is preliminary. Is that an honest approximation of the current state of the art?

Fundamentally, I guess the stance I'm coming from is that of a rational skeptic. Climate science makes extraordinary claims (i.e. predictions of mass extinction events), but is there extraordinary evidence to support them? My impression given the inability to answer what seem like pretty basic questions is that the extraordinary evidence does not exist. Is this also an honest approximation of the current state of the art, or am I simply reading the wrong sources? SDY (talk) 07:20, 7 December 2009 (UTC)[reply]

I would call that a reasonable assessment of the state of the art. Quantitative models vary widely in their predictions, and no global atmospheric model that I am aware of has accurately predicted numerical values for either CO2 concentration or ice melt rates over the long-range timescales. Other quantitative models do exist, but there is huge disagreement about values of parameters, etc., because the ways that these complex networks of interrelated systems actually connect together (via thermal physics, optics, chemistry, etc.) are still uncertain. Simplified models that are not global atmospheric simulations also exist, and in the limited scope of estimating a specific parameter or a specific local region, these models can be very accurate. But again, I am not aware of any global climate simulation which accurately models the entire atmosphere/oceans/etc., and also predicts ice melt rates. Nimur (talk) 07:41, 7 December 2009 (UTC)[reply]
I would also go out on a limb and agree with you that in general, many bold claims are made about global climate change. Often, for no particular reason, these bold claims are deemed "part of the great scientific consensus and backed by overwhelming evidence." That is silly. Certain specific claims about climate change are scientific consensus. Certain specific claims about climate change do have overwhelming evidence. Those particular claims are easy to find reputable publications and quantitative data for. But I notice a very disappointing trend to attribute such overwhelming certainty to every claim about climate science. Even ludicrous claims about catastrophic consequences are sometimes asserted to be "consensus" viewpoints, which is counter to reality. Runaway global warming, for example, seemed to be the reported opinion of a nonexistent "consensus" for a long time in many pop-science magazines. Real claims should be backed by specific references and specific experimental or modeling data. Asserting "consensus" is moot - scientific fact is not subject to a majority vote. Data is either valid or invalid; conclusions are either logical deductions from valid data, or not. Nimur (talk) 07:50, 7 December 2009 (UTC)[reply]
I would suggest the OP reads the actual IPCC reports, especially the IPCC Fourth Assessment Report SPM, which contains both projections and certainties for many parameters and claims. The popular press likes dramatizing, and the right-wing blogosphere is completely useless. The climate sensitivity in the range of 2-4.5℃/doubling is fairly solid. But that is still a large range. Regional predictions are still very uncertain, and in the end they are what matters for many effects. From the global predictions we know some regions will be hit hard, and others a lot less, but we cannot yet reliably predict which regions are hit how. Science also cannot predict how much greenhouse gas we will release, as that is a political/economical question. Mass extinction, however, is not an extraordinary claim at all. There is no doubt that we are already in a mass extinction event, and by the rate at which species are disappearing, one of the worst in history. We do that even without climate change, simply by taking over nearly all ecosystems, and doing things like shipping rats and dogs to New Zealand. --Stephan Schulz (talk) 08:49, 7 December 2009 (UTC)[reply]

Science games for kids

Christmas is rolling up and I will be spending some of it in the company of little ones, say toddler-ish to ten years old or so. I would like to have ideas of what to do with them, to test their scientific knowledge and cognitive development in (as the kids say) the funnest way possible. These need to be simple things to do, without fancy equipment. I am thinking of things like pouring liquid from a tall thin glass to a short wide glass, asking them which glass holds more, and seeing at what age the kid "gets" that the volume is the same. Any ideas for how I can approach this? Needless to say, my young relatives and friends' kids are very brainy babes. BrainyBabe (talk) 19:26, 6 December 2009 (UTC)[reply]

Quantum Physics: Explained through Interpretive Dance. Nimur (talk) 20:04, 6 December 2009 (UTC)[reply]
This may be a bit tough for 10 year-olds - but kids vary a lot: [9] (from my personal Wiki). SteveBaker (talk) 20:06, 6 December 2009 (UTC)[reply]
I was unable to get your human hair width measurement experiment to work at all. I made an honest effort, but it was very hard to get diffraction fringes. I was able to get diffraction fringes by shining the laser through two razor blades closely spaced, but not around a human hair. Nimur (talk) 20:11, 6 December 2009 (UTC)[reply]
Ah, Christmas: standard keep-them-quiet ones for brainies of that age are 142857 (get them to multiply it by 2, 3, 4, 5, 6 and guess what 7x is), which day of the week has an exact anagram, lateral thinking games (hanged man and puddle, dwarf and lift, anthony and cleopatra, surgeon "that's my son" etc.), which two digit prime has a square which looks the same upside down and in a mirror, wire through an ice block... I am sure other people know squillions... --BozMo talk 20:10, 6 December 2009 (UTC)[reply]
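The 142857 trick is quick to verify; a one-liner's worth of arithmetic (nothing assumed beyond the number itself):

    n = 142857
    for k in range(1, 8):
        print(f"{n} x {k} = {n * k}")

    # 1 through 6 give cyclic permutations of the digits 1-4-2-8-5-7;
    # 7 gives 999999, because 142857 = (10**6 - 1) / 7.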
Balancing two forks and a cocktail stick on the edge of a wine glass... what's the algorithm to find the dud ball out of ten with a balance and only three weighings... fox, chicken and grain with a river boat... the various logic puzzles with black and white hats Prisoners_and_hats_puzzle Hat_puzzle... --BozMo talk 20:13, 6 December 2009 (UTC)[reply]
Surely 9 balls and two weighings would be more challenging? - Jarry1250 [Humorous? Discuss.] 21:38, 6 December 2009 (UTC)[reply]
In the version I know, you are not told whether the dud is too heavy or too light. That makes nine balls in two weighings impossible and ten in three hard (especially for people who start off by putting five on each side). But I am sure there are loads of variants. --BozMo talk 21:45, 6 December 2009 (UTC)[reply]
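The impossibility falls out of simple outcome counting: each weighing has three results (left heavy, right heavy, balance), so w weighings distinguish at most 3^w cases, while n balls with an unknown-direction dud give 2n possibilities. A sketch of the bound (a necessary condition only; it does not by itself produce a strategy):

    # Counting bound for balance puzzles with an unknown-direction dud.
    def possibly_solvable(n_balls, weighings):
        return 2 * n_balls <= 3 ** weighings

    print(possibly_solvable(9, 2))   # False: 18 cases > 9 outcomes
    print(possibly_solvable(10, 3))  # True: 20 <= 27, so not ruled out
                                     # (a strategy does in fact exist,
                                     # but finding it is the hard part)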
There's a book called "Physics for Entertainment" ("Zanimatel'naya fizika" in the Russian original) by Yakov Perelman. I remember enjoying it immensely when I was a kid. It was written in the 1930s, so it does not rely on modern technology; but that does not make it any less fun. I know it has been translated into English, although I am only familiar with the Russian version. You can try finding the English version in the library. --Dr Dima (talk) 21:09, 6 December 2009 (UTC)[reply]
The trick using slaked cornflour never ceases to amaze... Make up a goo using cornflour (cornstarch for our American cousins) and water to the consistency of runny double cream. If you bang the container on the table and invert it over the head of the nearest brat, it won't spill - if you do it properly! --TammyMoet (talk) 10:24, 7 December 2009 (UTC)[reply]
These all sound great for the older kids, but I said brainy, not genius! I think several-digit multiplication is beyond the toddler set. There are some tempting phrases to google here: I'd never heard of slaked cornflour, so thank you all, and keep 'em coming! BrainyBabe (talk) 22:54, 7 December 2009 (UTC)[reply]
Slaked cornflour (more colorfully known as "oobleck") is just regular cornflour and water mixed at just the right consistency. It is a classic non-Newtonian fluid - it behaves very strangely. If you do violent things to it, it behaves like a rubbery solid - if you prod it gently, it's a pretty runny liquid. I'm sure you could come up with some neat things to do with it - but you're going to need to experiment a bit to get the right consistency. SteveBaker (talk) 04:14, 8 December 2009 (UTC)[reply]
Google "red cabbage juice indicator". Red cabbage juice is a good acid base indicator and basically will go through the entire rainbow of colors depending on pH. Kids can have fun slowly adding acid or base to red cabbage juice (basically the water that red cabbage has been cooked in) and watching the colors change. Not sure what kind of game this would make, but little kids like all of the pretty colors. --Jayron32 23:20, 7 December 2009 (UTC)[reply]

Diesel automobiles without diesel particulate filters

I want a list of the currently produced diesel automobiles which aren't available with a diesel particulate filter. --84.62.213.156 (talk) 20:34, 6 December 2009 (UTC)[reply]

Cartography - spherical map - globe section

Encyclopaedia Britannica (CD, 2006) - Student library - Maps and Globes: "A useful compromise between a map and a globe, provided that not too much of the Earth has to be shown, is the spherical map, or globe section. This is a cutaway disk having the same curvature as a large globe. It is usually large enough to show an entire continent. A spherical map shows the shape of the Earth accurately but is much cheaper to produce and much easier to carry and store than a globe." Are there really such things? I asked a German cartographer; he has never heard of this. I could not find a single link on Google regarding these globe sections. What is the exact scientific term for such a map? I am working on the German article Globe (de:Globus). --Politikaner (talk) 21:31, 6 December 2009 (UTC)[reply]

I found one old reference here [10] (there are a few of similar vintage [11]) to the use of spherical maps that were segments of globes, presumably that is what the Britannica is talking about, but no modern references yet. Mikenorton (talk) 21:44, 6 December 2009 (UTC)[reply]
Slightly more recent reference, (1956), [12] (look at the bottom of page 317) which refers to the "design and production of spherical map sections displaying a portion of the globe at a scale of 1:1,000,000". Mikenorton (talk) 21:53, 6 December 2009 (UTC)[reply]
Have you tried Map projection as a starting point? --BozMo talk 23:22, 6 December 2009 (UTC)[reply]
If I'm interpreting it correctly, we're talking about maps shown on segments of spheres, so no projection is necessary. --Tango (talk) 23:32, 6 December 2009 (UTC)[reply]
Or do they mean like this: [13]? --BozMo talk 23:29, 6 December 2009 (UTC)[reply]

As far as I understood from your answers, these spherical maps were only marginal in cartography and are now history. I will integrate the scarce information and the links into my article. Thank you for your help. --Politikaner (talk) 21:59, 8 December 2009 (UTC)[reply]

Cholesterol and sodium

Does exercise get rid of cholesterol and sodium in addition to fat? --75.33.216.153 (talk) 21:38, 6 December 2009 (UTC)[reply]

Sodium levels are controlled by osmotic systems. Being ionic and very water-soluble, it is easy to get rid of any excess (simply drink more water and urinate more). Cholesterol, though, is largely endogenous: ingested cholesterol only accounts for a small fraction of the cholesterol supply; the rest is produced by a pathway related to fatty acid synthesis. Thus the only practical solution to controlling cholesterol is via medication that can inhibit that pathway - see HMG-CoA reductase and statin. John Riemann Soong (talk) 01:03, 7 December 2009 (UTC)[reply]

Vigorous exercise will generally cause you to sweat, and sweat contains sodium - so yes, your sodium levels will decrease (sometimes to pathologic levels, see hyponatremia). Cholesterol is a cell-membrane component; I don't think it features in any primary catabolic (energy-producing) pathways, though if you exercise to the point of muscle wasting you'll probably be burning cholesterol along with everything else. However, reading the cholesterol article, total fat intake plays a role in serum cholesterol levels, which suggests to me that if exercise helps to burn off circulating lipids in the blood serum before the liver can synthesize cholesterol, less cholesterol will be produced - but I'm not positive on the timeframes involved. Franamax (talk) 01:41, 7 December 2009 (UTC)[reply]
I'll defer here to the two responses below, which while not directly sourced seem to me entirely reasonable. Absolute levels of sodium will decrease with exercise, but relative concentration apparently may not, and I don't know enough about ion transporters in the cell membrane to have an opinion on how (primarily) nervous system function is affected. And if HDL itself is catabolized (rather than being recycled as a marker molecule), that too I am unfamiliar with. So best to just ignore my whole post maybe. :) Franamax (talk) 04:02, 7 December 2009 (UTC)[reply]
Sweating doesn't reduce your sodium levels: sweat is a hypotonic fluid, and its sodium concentration is lower than that of the rest of your extracellular fluid. When you sweat, you're losing more water than sodium. It's replacing sweat with a fluid of even lower sodium concentration (water), instead of an isotonic or hypertonic replacement (e.g. Gatorade), that would lower your sodium levels. - Nunh-huh 01:57, 7 December 2009 (UTC)[reply]
Exercise can be beneficial to blood cholesterol levels, which is all anyone cares about. Your body is full of cholesterol, but it's only the stuff that clogs your arteries that makes any difference. Cholesterol ends up in the blood in two forms, HDL cholesterol and LDL cholesterol; these are respectively "good" cholesterol and "bad" cholesterol. People often miss why we test for blood cholesterol. It's not the cholesterol per se which is always bad; it's that the cholesterol is a marker for things that are going on in your body. HDL and LDL are used as chemical tags attached to molecules in your body that tell them where to go. HDL is associated with catabolic processes; that is, molecules tagged with HDL are basically heading off to be broken down. LDL is associated with anabolic processes; that is, those molecules are heading somewhere to be added to your body. In general, having an excess of LDL means your body is growing, so LDL can indicate an excess of caloric intake, high blood sugar levels, and a general trend of increasing fat storage. Higher HDL levels are generally associated with lower blood sugar levels, breaking down fat, and lower overall caloric intake. There are lots of other factors involved, but exercise in itself can improve cholesterol ratings because exercise increases catabolic processes in your body by using up stuff for energy, especially fat stores. --Jayron32 03:16, 7 December 2009 (UTC)[reply]
Well, LDL is also "inherently" bad in that it promotes (i.e. it's more than a marker) lipid accumulation in blood vessels, right? Otherwise why would people use statins? John Riemann Soong (talk) 05:31, 7 December 2009 (UTC)[reply]
Because LDL-tagged lipid molecules tend to drift around the blood until they glom onto each other. HDL-tagged molecules are heading to the liver to be "eaten up", so they don't hang around a long time; they essentially get filtered out. LDL-tagged molecules are basically saying "We're ready to be used to build new cells", and if there isn't anywhere in the body that needs lots of new cells, these molecules just hang around until they accumulate in vessels and cause a mess. So yes, LDL cholesterol can, of itself, cause problems, but the underlying concern can still be addressed, in many people, by some amount of behavioral modification; i.e. to exert control over those processes which decrease anabolism (lower blood sugar) and increase catabolism (more exercise). There's also been some interesting research out that people who use cholesterol-lowering drugs like statins may not have significantly positive health outcomes; that is, while their cholesterol numbers may be significantly lower, they don't have significantly lower incidence of cardiovascular disease. It seems somewhat like a case of covering up a problem rather than fixing it. It doesn't mean a whole lot to lower your cholesterol if doing so doesn't have an effect on the quality or length of your life. The only people for whom statins actually show positive outcomes are those who actually have active heart disease. For people with no known heart disease symptoms, while statins do lower cholesterol numbers, they don't actually seem to reduce the risk of emerging heart disease. See this article from CBS news which explains some of the controversy, and this article (subscription required) from a peer-reviewed journal. The CBS article actually makes some good points about the advertising used in the case of Lipitor; the ad claims percentage improvements based on some pretty shoddy statistics. In the case cited, those taking Lipitor saw an incidence of 2 heart attacks per 100 people, and those taking placebo saw an incidence of 3 heart attacks per 100 people. The question then becomes whether widespread prescription of Lipitor results in net positive health outcomes for the most people, given that it isn't preventing that many heart attacks among the general, healthy population and that it does have documented side effects which need to be taken into account. --Jayron32 18:15, 7 December 2009 (UTC)[reply]

Why aren't rugby players overwhelmed with injuries?

In gridiron football, the players wear helmets and padding. Even with that protective equipment, concussions, broken bones, torn tendons, and other serious injuries occur frequently. So how is it possible that rugby players, who also play a full-contact sport and don't wear any protective equipment at all, aren't overwhelmed with more frequent and severe injuries? —Lowellian (reply) 21:52, 6 December 2009 (UTC)[reply]

Rugby players do wear some protective equipment (Rugby union#Equipment). It is against the rules to tackle above the shoulders, and I think there are rules about how many people can tackle a player at once. I don't know if similar rules exist in other forms of football. --Tango (talk) 22:29, 6 December 2009 (UTC)[reply]
Yep. Also see Rugby_union_equipment#Body_protection. There is an injury rate; I think it's about 300 injuries per 100,000 hours played, but I could easily be a factor of ten out - anyway, about ten times higher than most other sports, I seem to recall (my mum was a school doctor and it was in one of her books). But also a large number of rules evolved over a long time to minimise injury: not just no neck tackle, but no tackling in the air, no collapsing scrums, no playing when down, etc. etc. I am not sure that many players wear anything other than a mouthguard as protection, despite some light padding being allowed (sometimes there is a bit in a Scrum cap to protect Cauliflower_ears). And of course it is not a complaining culture: even at school level, players with broken bones (fingers for example) sometimes carry on to the end of a match, which is allowed as long as there is no blood flowing, but obviously a bit daft. --BozMo talk 22:43, 6 December 2009 (UTC)[reply]
Come on! We all know the real reason is that Americans are weak sissies who wouldn't survive half an inning of a real Englishman's game like rugby or contract bridge! --Stephan Schulz (talk) 23:31, 6 December 2009 (UTC)[reply]
I know they slam you around in bridge, especially when you're most vulnerable ... but innings? Clarityfiend (talk) 23:46, 6 December 2009 (UTC)[reply]
Well, yes. Aside from their padded version of Rugby (which I played in school and is unbelievably dangerous!), American sports are mostly British 'girly' games that have been adapted into tamer forms for American men to play. Baseball is really just "rounders" - which is played almost exclusively by girls in the UK (and has been since Tudor times) - and Basketball is really just "netball" (since 1890 at least). Hockey is really hockey with padding and a nice slippery surface so you can't get a decent grip before you whack your opponent with a bloody great stick - but, again, it's mostly a girly game in the UK. That leaves golf...'nuff said? (And you can get some wicked paper-cuts in contract bridge when the play gets rough!) :-) SteveBaker (talk) 23:50, 6 December 2009 (UTC)[reply]
Baseball, basketball, hockey (what you 'pedians call "Ice hockey" for some reason) and gridiron football are all Canadian inventions. We came up with them to distract attention from lacrosse, which is basically just hitting each other with sticks for 60 minutes. :) Franamax (talk) 01:49, 7 December 2009 (UTC)[reply]
That's an odd statement. Baseball (aka Rounders) has been around in the UK since 1745...that makes the sport 123 years older than Canada (est. 1867). Basketball (aka Netball) was first played in the UK in the 1890s - no mention of Canada there. Our article on Gridiron football says the gridiron started out at Syracuse University (New York, USA). I guess we're going to have to demand some references here. Lacrosse...yeah - that's a pretty serious sport. Hockey - but without the rule about the stick not being higher than shoulder-height - allowing some pretty decent head-shots. Yeah. The only trouble is that I can't think about lacrosse sticks without thinking about the delicate young ladies of St. Trinians. SteveBaker (talk) 03:26, 7 December 2009 (UTC)[reply]
Sources could be a problem for sure; that's why I confined myself to small print. Abner Doubleday codified the current rules of baseball (more or less) - I read somewhere he came from Ontario (could be wrong); James Naismith set up modern basketball, an expat Canadian; football, again my reading has been that it was codified as a Canadian university sport, then watered down to give the wimpy Americans one extra try with the fourth down :) ; hockey - oh yeah, we still get the head-shots, the euphemisms are "combing his hair" and "laying on the lumber a bit", although the incidence has decreased drastically. Lacrosse, well, all due respect to the ladies, but when the Toronto Rock take the field, wearing shorts and short-sleeved shirts, i.e. no body protection whatsoever in those areas - yeah, it's a scene man... :) (But I will retract anything I can't source immediately, which is mostly everything I just said) Franamax (talk) 03:45, 7 December 2009 (UTC)[reply]
Canada has a perfectly legitimate claim to both Basketball and Gridiron Football. Basketball was invented in the U.S. (we luckily have documentation on that one) by James Naismith in about 1891 (Netball came a few years later, and is specifically a derivative of it). Naismith was in the U.S. when he invented the sport, but he was Canadian by birth. Gridiron Football was essentially invented by Walter Camp, but his changes were basically modifying the existing forms of football in the U.S., which were predominately recognizable as Rugby. Since rugby was introduced to U.S. colleges by McGill University, one can claim that McGill had a large role to play in introducing modern Gridiron codes of football. However, there is not one inventor of either of these games, just that Canadians played a prominent role in both of them. --Jayron32 05:21, 7 December 2009 (UTC)[reply]
(Without evidence) There are several possible reasons for this. One is that if you have all of that padding & protection, you can tackle harder without risk of hurting yourself - therefore it may be that well-padded players are simply hitting with harder surfaces (helmets, for example) and with more power than an unprotected player could. Secondly, there is a phenomenon where people are prepared to accept a certain level of risk in what they do - and improving the protection simply makes them take larger risks. When seatbelts were first mandated for cars, the number of injuries in car accidents decreased sharply - but it has gradually crept back up as people now feel safer and are therefore increasing the risk back to where it was before the seatbelt laws. Perhaps, with less risk to self, the guy with the odd-shaped ball is less careful about avoiding situations where he might be tackled. SteveBaker (talk) 23:50, 6 December 2009 (UTC)[reply]
I'm sure I heard somewhere that, because of this, rugby has more injuries but American football has more fatalities. Vimescarrot (talk) 01:32, 7 December 2009 (UTC)[reply]
This article [14] lists a bunch of fatalities amongst Fijian rugby players - so rugby players clearly do die during professional play. The only recent references I could find to fatalities in American football were heat-related fatalities amongst young players - not due to collisions during play. But PubMed [15] says that close to 500 people have died from brain-related injuries and that was 70% of all fatalities - so we should guess around 710 fatalities since the 1940s, which works out to maybe 10 or so fatalities per year. The Canadian public health people [16] found zero rugby fatalities between 1990 and 1994. But it's really hard to gather fair statistics because rugby is played in a LOT of countries - and American football in just a handful - but the number of people involved seems higher in the US and Canada than in other countries. SteveBaker (talk) 03:43, 7 December 2009 (UTC)[reply]
I don't have any hard statistics (although I did come across this, which hints at it [17], in relation to discussion of Max Brito, who suffered a spinal cord injury at the 1995 Rugby World Cup and became tetraplegic), but it's my understanding that most serious rugby injuries occur at the lower levels of play, i.e. not involving the top professionals, and that this isn't just because of the obviously significantly greater numbers, but because of the greater experience and knowledge of how to avoid serious injuries and how to avoid causing serious injuries, and also because top players tend to be fitter. At a guess I would presume it's the same for American football/gridiron, but I don't really know. There are a variety of articles here [18] [19] [20] [21] [22] [23] which I came across dealing with rugby injuries, particularly spinal cord ones, which may have some useful statistics. Nil Einne (talk) 11:35, 7 December 2009 (UTC)[reply]
As a single data point, I can offer a distant uncle who died tragically young from a broken neck. Rugby at university, after winning a scholarship. The first of his family to go. So, at that level, people certainly have died. 86.166.148.95 (talk) 20:03, 7 December 2009 (UTC)[reply]
A long time later, but I came across this while searching for something else. One point I forgot to mention is that there's likely better medical attention and care, particularly initially (which includes people not doing completely daft things like moving someone who may have a spinal injury without an appropriate brace when it isn't absolutely essential). Nil Einne (talk) 04:13, 18 June 2011 (UTC)[reply]
I think some good points have already been made, but a key point which hasn't been mentioned yet that I noticed is that in gridiron/American football, players are tackled regardless of whether they have the ball. While I don't play nor have any interest in the sport, it's my understanding that one of the key expectations is that every player tries to tackle a rival player. This compares to rugby, where you only tackle a player with the ball. Also, just repeating what was said above, a number of rules in rugby have evolved to try to reduce injury. I came across [24] which may be of interest, with comments from two people who've played both sports. Nil Einne (talk) 11:27, 7 December 2009 (UTC)[reply]
Several sites seem to indicate that a lot of rugby injuries happen in the scrum...not so much during tackles. SteveBaker (talk) 14:35, 7 December 2009 (UTC)[reply]
Precisely. ~ Amory (utc) 16:35, 7 December 2009 (UTC)[reply]
This is also mentioned in our Scrum (rugby union)#Safety of course. I didn't mention this earlier, but it's probably the key area of focus in the attempts to reduce the risk of injury (the 2007 rule change mentioned in Scrum (rugby union)#Safety for example). Some of course suggest an end to contested scrums à la Scrum (rugby)#Rugby league, but as the earlier article says, there's little support for that. There is, I think, a simple code of ethics that if anyone yells 'neck' during a scrum, you stop pushing (but probably don't pull out of the scrum, since if the guy's neck is injured that's not going to help), which is described here [25]. (Such codes of ethics aren't of course that uncommon in sport when they're called for.) That site also mentions a number of injuries, primarily in American football players. BTW I also came across this story [26] on Gareth Jones (rugby player), a professional Welsh rugby player who died last year, just to emphasise it isn't only in countries like Fiji that they happen. Nil Einne (talk) 17:54, 7 December 2009 (UTC)[reply]
I'll also add (I don't think it was mentioned above) that since you cannot pass forward in rugby, there is a much larger emphasis on speed, agility, and overall athleticism. In (American) football, there is an emphasis on either throwing far or being really, really big. You want players who can run like the devil, but the emphasis still comes from the throws, not avoiding getting hit. ~ Amory (utc) 16:35, 7 December 2009 (UTC)[reply]

Are there any other types of radiation besides electromagnetic radiation?

Hi! I would like to ask if there are any other types of radiation besides electromagnetic radiation? By electromagnetic radiation I mean all of its subcategories from gamma rays to infrasound. Thank you for your help!JTimbboy (talk) 23:19, 6 December 2009 (UTC) —Preceding unsigned comment added by JTimbboy (talkcontribs) 23:18, 6 December 2009 (UTC)[reply]

There's gravitational waves. Of the four fundamental forces, only electromagnetism and gravity work over long distances, so only they really radiate in a noticeable way. There are also cosmic rays, but they are actually particles. --Tango (talk) 23:27, 6 December 2009 (UTC)[reply]
What definition of radiation are we using here? The standard one contains lots of non-EM radiation (e.g., alpha particle, beta particles, neutrons, etc.). Sound is "acoustic radiation". --Mr.98 (talk) 00:24, 7 December 2009 (UTC)[reply]
Two points. First: Infrasound is not a kind of electromagnetic radiation. Second: Being made of particles is not a reason to exclude cosmic rays from the list of radiations since light is also made of particles. Dauto (talk) 04:22, 7 December 2009 (UTC)[reply]
Out of curiosity, if light is ambiguously photon/energy, do the same particles exist for different wavelengths? I.e. are there "photons" in X-rays and in the infrared? SDY (talk) 17:01, 7 December 2009 (UTC)[reply]
Yes. All types of electromagnetic radiation are essentially the same; there's nothing special about the stuff our eyes happen to be able to see. Shorter-wavelength radiation corresponds to higher-energy photons. Algebraist 17:05, 7 December 2009 (UTC)[reply]
The relationship between quantization energy and wavelength is defined by the Planck constant. SpinningSpark 17:08, 7 December 2009 (UTC)[reply]
(ec) All wavelengths of light are made up of photons. Different wavelengths are made up of photons of different energies (E=hf - the energy per photon is the Planck constant times the frequency). --Tango (talk) 17:08, 7 December 2009 (UTC)[reply]
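Plugging numbers into E = hf = hc/λ shows just how wide the per-photon energy range is. A quick sketch (the wavelengths are typical order-of-magnitude values):

    h = 6.626e-34   # J*s, Planck constant
    c = 3.0e8       # m/s, speed of light
    eV = 1.602e-19  # J per electronvolt

    for name, lam in [("X-ray", 1e-10),
                      ("green light", 5.5e-7),
                      ("infrared", 1e-5)]:
        E = h * c / lam   # photon energy, joules
        print(f"{name:12s} lambda = {lam:.0e} m -> {E / eV:.3g} eV/photon")

    # X-ray ~ 12 keV, green light ~ 2.3 eV, infrared ~ 0.12 eV:
    # the same kind of particle, wildly different energies.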

Thank you for your advice! Sorry, I mixed up infrasound with extremely low frequency radio, because their frequency ranges partly coincide. Could anyone give me examples of particle-only radiation (I hope that radiation is the right word to describe it)? And examples of radiations that work over short distances too?JTimbboy (talk) 15:24, 7 December 2009 (UTC)[reply]

Alpha radiation, beta radiation and cosmic rays are all forms of particle radiation (strictly, radiation made of massive particles, since, as Dauto points out, light can also be thought of as being made of particles). When I mentioned short range forces I was talking about the strong nuclear force and the weak nuclear force. They only work on the scale of atoms and smaller and the particles involved are almost always virtual particles, so can't really be thought of as radiation. --Tango (talk) 15:39, 7 December 2009 (UTC)[reply]
There is also neutrino radiation which is of some importance to astronomy. See Cosmic neutrino background and Neutrino astronomy. There is a large burst of neutrino radiation from supernova explosions. SpinningSpark 16:51, 7 December 2009 (UTC)[reply]
Muons are an important component of the secondary radiation created by cosmic rays when they strike the atmosphere. 169.139.217.77 (talk) 00:42, 8 December 2009 (UTC)[reply]

Total acres in California?

How many acres are in the entire state of California? —Preceding unsigned comment added by 98.248.194.191 (talk) 23:32, 6 December 2009 (UTC)[reply]

104,765,440. --Tango (talk) 23:33, 6 December 2009 (UTC)[reply]
104,765,165, from US Census Bureau figures rather than the rounded off ones in the Wikipedia article. SpinningSpark 02:07, 7 December 2009 (UTC)[reply]
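The two figures differ only in where the rounding happens. A quick check (640 acres per square mile is exact; 163,695.57 sq mi is the two-decimal-place square-mile figure consistent with the acreage above):

    SQ_MI = 163695.57        # sq mi, USCB land-area figure (as above)
    ACRES_PER_SQ_MI = 640    # exact: 1 sq mi = 640 acres

    print(round(SQ_MI * ACRES_PER_SQ_MI))   # 104765165
    print(163696 * ACRES_PER_SQ_MI)         # 104765440: the same area
                                            # rounded to whole sq mi
                                            # first gives the figure
                                            # quoted by Wikipedia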
Any place with a coastline will have a somewhat indeterminate exact area because of the fractal nature of the coastline - so going to that degree of precision is probably too much anyway. SteveBaker (talk) 02:59, 7 December 2009 (UTC)[reply]
The fractal nature of the coastline causes a problem for determining the length of the coast but not so much for the area enclosed. The USCB quotes the figure in square miles to two decimal places, so they must think they have it nailed to within a handful of acres. The precision I have used roughly corresponds to the precision the source has used and I am sure we can take the USCB as being expert in these matters. SpinningSpark 03:47, 7 December 2009 (UTC)[reply]
The difference between the two answers is 0.0002%. I would imagine that is well within the difference of the size of the state between high tide and low tide, so SteveBaker's answer is sort of correct; it probably isn't the fractal nature of the coastlines, it's that the actual size of the state is fluctuating, and the amount of that fluctuation is larger than the difference between these two measurements. That wouldn't necessarily apply to a state like Colorado; however, differences in measuring the land area may also arise. For example, the land area may be calculated from a map projection in one or both cases, and EVERY single map projection will introduce errors, just differing amounts. Plus, does one or both calculations assume a perfectly flat topography, or are they taking changes in elevation into account? --Jayron32 03:57, 7 December 2009 (UTC)[reply]
Out of curiosity, I did the math. The coast of California is at least 850 miles long (much greater including small bays and inlets), and the difference between the two figures given is 275 acres. Along an 850-mile coast, we can lose 275 acres of land surface by submerging a strip a bit less than three feet wide. Neat. TenOfAllTrades(talk) 04:28, 7 December 2009 (UTC)[reply]
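That estimate is easy to verify with exact unit conversions (the 850 miles and 275 acres are the figures from the comment above):

    ACRE_SQ_FT = 43560           # exact: 1 acre = 43,560 sq ft
    MILE_FT = 5280               # exact: 1 mile = 5,280 ft

    strip_area = 275 * ACRE_SQ_FT    # the 275-acre discrepancy, in sq ft
    coast_len = 850 * MILE_FT        # the 850-mile coast, in ft
    print(strip_area / coast_len)    # ~2.67 ft: a strip under 3 ft wide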
Which would be a very small difference in tides, so QED. Of course, that still does not answer if we replace California with Colorado, which being bordered by straight lines and not having a coastline, should have no variability in its area. But, as I said, different methodologies will result in different measurements of land area. However, with Colorado, there is likely to be a really correct answer due to the nature of its boundaries, which is different from California. --Jayron32 04:38, 7 December 2009 (UTC)[reply]
These calculated areas assume that all hills have been squashed flat to sea level (on some projection). In reality, the land area will be much greater (if you had to spray it, for example) when slopes are taken into account. Are there any calculations of the actual 3-D surface area? It shouldn't be too difficult to get an approximation with modern GPS technology. Obviously the answer would depend on the resolution, but a reasonable compromise might be to take a height measurement every few yards to avoid counting areas of small rocks and molehills. Perhaps the calculation would make a bigger difference where I live (northern UK) where most of the cultivated land is on a slope. Dbfirs 08:59, 7 December 2009 (UTC)[reply]
The USCB says of the accuracy of its data:
The accuracy of any area measurement data is limited by the accuracy inherent in (1) the location and shape of the various boundary information in the TIGER® database, (2) the location and shapes of the shorelines of water bodies in that database, and (3) rounding affecting the last digit in all operations that compute and/or sum the area measurements.
Ultimately, this database takes its geographic information from US Geological Survey data. The USGS defines the coastline as the mean high water line, so no, California is not changing its area on a daily basis with the tides. It does, of course, change over time due to such things as erosion, deposition and changing sea levels. SpinningSpark 13:43, 7 December 2009 (UTC)[reply]
Argh! Where do I start?
The reason I invoked "fractals" was precisely because I assumed that the USGS would use the mean high water mark or some other well-defined standard for tidal extent. The area of a fractal may be just as unknowable as the length. The length is not just unknowable - but also unbounded (in a true mathematical fractal, the length is typically infinite). The error in calculating the area can be bounded (e.g. by measuring the convex hull of the water and of the land and calculating the exact area between them) - but the actual area is only known for some mathematical fractals. For example, we know the exact area of a Koch snowflake - but we don't know what the true area of the Mandelbrot set is [27] - and we certainly don't have an answer for "natural" fractals. If you have to measure the coastline accurate to within (according to TenOfAllTrades) three feet - then the fractal nature of having to carefully measure around the edge of every large rock of every crinkly little tidal inlet (to a precision of three feet!?!) ensures that your error will easily be a few hundred acres.
This is why SpinningSpark and Jayron32 are both incorrect and TenOfAllTrades calculation (while interesting) isn't what matters here.
To Dbfirs - GPS technology doesn't help you at all unless you are prepared to clamber over the entire state logging GPS numbers every few feet...and that's not gonna happen. GPS is irrelevant here. (If you wanted reasonably accurate height data for California, you'd probably use the NASA radar study they did with the Space Shuttle a few years ago.)
As for measuring the 3D area of the land instead of a projection - please note that mountains are also fractal. The area of a true 3D fractal is typically infinite just as the length of a 2D fractal is infinite. (See: How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension). A mountain is a 3D fractal - a coastline is only a 2D fractal - but the problem is exactly the same. Do you include the surface area of every boulder? Every little rock and pebble? The area of every grain of sand on the beach? Do you measure area down to the atomic scale? Where do you draw the line?
So - you have to use the projected area - that's your only rational choice - and the fractal nature of the coastline prevents you from getting an accurate measure (certainly not down to a precision of a couple of hundred acres). Since we're told the USGS uses the TIGER database - I'm not at all surprised that there are errors, because that data is a pretty crude vector outline of coastlines. That dataset was primarily drawn up for census purposes - it was never intended to be an accurate description of the shape of coastlines. SteveBaker (talk) 14:30, 7 December 2009 (UTC)[reply]
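The Koch snowflake mentioned above makes the area-versus-perimeter point concrete: each refinement multiplies the number of sides by 4 and divides their length by 3, so the perimeter diverges while the enclosed area converges. A short sketch (unit initial side, the standard construction):

    import math

    sides, length = 3, 1.0
    area = math.sqrt(3) / 4          # area of the unit starting triangle
    for i in range(12):
        print(f"iter {i:2d}: perimeter = {sides * length:8.2f}, "
              f"area = {area:.6f}")
        area += sides * (math.sqrt(3) / 4) * (length / 3) ** 2  # new bumps
        sides, length = sides * 4, length / 3

    # Perimeter grows by a factor of 4/3 per step, without bound; the
    # area converges to (8/5)*sqrt(3)/4 ~ 0.6928 - finite area, infinite
    # boundary, which is exactly the coastline situation described above.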
The bigger question is: does it really matter? The purpose of measuring area and boundaries to extreme accuracy is basically property rights: i.e. this land is mine, delineated from your land by this line. The state charges property taxes based on so-and-so dollars per acre, etc. So the real calculation of the real area of an entire state, while intellectually interesting, is of little practical importance. Even for defining things like borders within bodies of water, and water rights, these are established by treaty and law, and those treaties and laws often include "fudge factors" for erosion and the like. The rules which define U.S. state borders around the Mississippi River, for example, include stipulations for slow erosion of river banks - in this case, the border drifts with the river, so states along the Mississippi have differing areas every day - and for when the river "abandons" an old channel and forges a new one (i.e. oxbow lake formation); in that case the border does NOT change with the river, which explains why there are some locations in states along the Mississippi which are on the "wrong" side of the river, c.f. Kaskaskia, Illinois. Yes, any calculations are going to be approximations, but civil authorities cannot live with the "mathematician's" solution which says "it's all fractals, and unboundable, so there is no definite answer". There always exists a set of error bars which is both workable in civil society and which eliminates the fractalness of the problem. --Jayron32 16:19, 7 December 2009 (UTC)[reply]
You're right of course - this 0.0002% error is entirely unimportant. If anything, both the Wikipedia number and the US Census Bureau number should be rounded off to the nearest thousand acres or so - and both groups should be chastised for using an unreasonable amount of precision! The only interesting point here is what the size and cause of the error truly are - and hence how much rounding should be applied. SteveBaker (talk) 16:43, 7 December 2009 (UTC)[reply]
Wikipedia cannot be faulted for quoting data to the same precision used by our sources. To round off numbers because of some supposed fractal "smearing" would be original research on our part unless the sources themselves also say this is happening. SpinningSpark 18:14, 7 December 2009 (UTC)[reply]
The discrepancy is suspiciously close to the difference between the international foot and the U.S. survey foot, or international acres and U.S. survey acres.—eric 18:59, 7 December 2009 (UTC)[reply]
You might be suspicious, but I think you should show some good faith. The discrepancy is due to what I said it is in my first post, that is, a rounding error; both figures are ultimately from the same source in the same units. As far as I know, there is no such thing as an international acre, but if there were, it would be 1/640th of an international mile squared and the discrepancy would be 4ppm, not 2ppm. SpinningSpark 00:45, 8 December 2009 (UTC)[reply]
... and since the area measured the old way with a surveyor's chain would be significantly greater (because of slopes), the slight inaccuracy in a theoretical projected area doesn't really matter. Thanks SteveBaker, I should have said satellite survey, since GPS satellites only transmit (except for correction data). I was thinking of Google Earth where heights are given (or estimated?) every few yards. In the UK, we could use Ordnance Survey data which is surprisingly detailed on height. I agree that there is no one correct answer for 3-D area, which is why I said average every few yards, but there is no "correct" way to do this, so it will probably never be done. Projected area depends on the projection applied to the calculation. Steve suggests rounding to the nearest thousand acres, and that sounds reasonable in view of the inherent inaccuracies. Do we have any experts on projections who could estimate the differences in the area estimated using different projections? Dbfirs 20:04, 7 December 2009 (UTC)[reply]
Well, I've spent a large fraction of my career worrying about map projections (I used to work in flight simulation). Really, the only reasonable projection in the era of computers is to project onto the mean sea level WGS-84 spheroid. That's the internationally recognized standard for the shape of the planet - it's what GPS uses. I don't know what other projections would make sense - certainly not the flat-earth versions. If the survey pre-dated WGS-84 (which was established in 1984) then probably it would have used an earlier definition for the shape of the planet - maybe the Clarke spheroid of 1866. Calculating the error between the two is do-able - but it's way more work than I'm signing up for! SteveBaker (talk) 04:03, 8 December 2009 (UTC)[reply]
You are confused about a few things: the World Geodetic System does not specify a map projection. See also: Datum (geodesy)#Reference datums, Datum (geodesy)#Vertical datums, reference ellipsoid, geoid, mean sea level, etc.—eric 18:55, 8 December 2009 (UTC)[reply]
Thank you to both of you for the links. They've given me lots to read and think about. I've been puzzled about the difference between map areas and real areas for nearly fifty years! ( If mean sea level rises with global warming, will all calculated areas have to be increased? Yes, I know that one metre in 4 million is only 0.000025%, and that's only about a 0.00005% increase in area. ) ;-)Dbfirs 10:05, 9 December 2009 (UTC)[reply]

beef slaughter house

A friend touring a beef slaughterhouse said he saw cows walking into an open horizontal barrel one at a time, and when the gate was closed the cow inside the barrel started shaking violently until a man put a hammer-handle-like device between the cow's ears to kill it. He said the violent shaking was because the cow knew it was going to be killed. Is this true or is there some other explanation? 71.100.160.161 (talk) 23:42, 6 December 2009 (UTC) [reply]

There are procedures used in some slaughterhouses where cattle, once they enter the stunning box, are electrically stunned using tongs or other devices before the use of a captive bolt pistol, and are then ejected for exsanguination. Nanonic (talk) 00:06, 7 December 2009 (UTC)[reply]
The cattle have been under severe stress since they were loaded onto the truck back at the farm, not much attention is paid to comforting cows on the way to slaughter. The natural response of a cow to any sort of confinement or other stress is to run away from it (but not too far, they're herd animals). When the cow goes into the killing box, it's being confined to very close to zero movement, this is essential to get a clean kill. For the cow though, it's now gone from severe stress to ultra-maximum stress and its body will react accordingly. However to say the cow "knows" it is going to be killed, to me imparts rather more self-awareness than a cow actually possesses. They are sensing a dangerous situation and responding instinctively. Franamax (talk) 02:00, 7 December 2009 (UTC)[reply]
I wonder whether the cow was shaking because of the electrical stunning? Electric shock can cause muscle tremors. If so, it wouldn't have been conscious at the time. SteveBaker (talk) 02:50, 7 December 2009 (UTC)[reply]
No, he says it walked into the barrel and did not start shaking until the gate was closed. The front part of the barrel obscured everything but the head, and it was not until 5 or 10 seconds later that the operator walked up on a platform and administered the (I presume) shock. 71.100.160.161 (talk) 03:37, 7 December 2009 (UTC) [reply]
Nope, the cow doesn't know that closing a gate behind it means it is now going to be killed. What it does know is that it can no longer back up ergo it can no longer move at all - this is my maximum stress scenario. If you're going to confine a cow, you have to be sure the pen has higher sidebars than the cow, else it might try to jump over. Think of this from the POV of the cow, everything that has happened to it today has been bad and now it's getting worse. You can throw down some nice alfalfa hay in front of a cow in a pasture field and very calmly shoot it in the head (believe me). The cow is just panicking from the general situation. Franamax (talk) 04:16, 7 December 2009 (UTC)[reply]
The problem with that scenario, which is why the first is used, is transporting hundreds of pounds of meat from the field to the slaughterhouse, which is expensive. If you killed every cow in an open field, then paid somebody to drag it onto a truck, then transported the dead carcass to the abattoir, it would be a) very expensive and b) less sanitary due to the amount of time the dead meat isn't being processed appropriately. If you actually kill the cow at the slaughterhouse, you get to prep the meat almost instantly after death (which is more sanitary) and the cow basically walks itself to its own death, saving LOTS of money and labor. If you find this method of death to be immoral in some way, then you may not want to eat cow. If this doesn't really bother you, then feel free to eat the cow. --Jayron32 04:56, 7 December 2009 (UTC)[reply]
You may be interested in the work of Temple Grandin to reduce the anxiety of cattle in the slaughterhouse. -- Coneslayer (talk) 13:48, 7 December 2009 (UTC)[reply]

Since we can't ask the cow's opinion the question is moot. Cuddlyable3 (talk) 11:21, 7 December 2009 (UTC)[reply]

Sure we can. The answer is moot. --Stephan Schulz (talk) 12:02, 7 December 2009 (UTC)[reply]
It's a moo point. -- Coneslayer (talk) 13:48, 7 December 2009 (UTC)[reply]
Stephan just made that joke. --Tango (talk) 17:20, 7 December 2009 (UTC)[reply]
MOOray COWristmas. Cuddlyable3 (talk) 21:49, 7 December 2009 (UTC)[reply]

December 7

Change in kinetic energy

It's pretty easy to show, classically, that the observed change in kinetic energy doesn't depend on the frame of reference of the observer: it is a direct result of the equation relating the kinetic energy in a certain reference frame to that in the center of mass reference frame. How can it be shown that the change in kinetic energy (or energy), relativistically, doesn't depend on the reference frame? Do you have to go into the mathematical details to derive this result, or is there an a priori way of coming to the same conclusion?

A second, related question: Does an object's potential energy change between reference frames? I would think it would, because an object's potential energy depends on the relative distance between two objects, which, by Lorentz contraction, changes with reference frame. —Preceding unsigned comment added by 173.179.59.66 (talk) 00:18, 7 December 2009 (UTC)[reply]

What you said is not even true classically, let alone relativistically. Dauto (talk) 03:12, 7 December 2009 (UTC)[reply]
It was (is) difficult to understand your question. That's probably why you got no answers. Also I don't think this field is well studied. Dauto: what is not true classically? Ariel. (talk) 09:16, 7 December 2009 (UTC)[reply]
The OP said "It's pretty easy to show, classically, that the observed change in kinetic energy doesn't depend on the frame of reference". Well, that's not true. The change in kinetic energy DOES depend on the frame of reference. Dauto (talk) 14:46, 7 December 2009 (UTC)[reply]
Why not, assuming that the system is closed? If the total kinetic energy in a certain reference frame is K, and the total kinetic energy in the center of mass reference frame is K_0, and the velocity of the center of mass relative to the reference frame in question is V, then K = K_0 + MV^2/2 (where M is the total mass). So ΔK = ΔK_0 (V won't change if it's a closed system). So the change in total kinetic energy will always be the same as the change in total kinetic energy of the center of mass, and thus the change in kinetic energy will always be the same. —Preceding unsigned comment added by 173.179.59.66 (talk) 17:45, 7 December 2009 (UTC)[reply]
Why should we assume the system is closed? Dauto (talk) 18:37, 7 December 2009 (UTC)[reply]
You aren't making sense. The kinetic energy in the centre of mass frame is zero. Or are you talking about the centre of mass of an n-body (n>1) system? It is easier to consider one object: If in my frame a 2kg object is moving at 1 m/s and speeds up to 2 m/s its KE increases from 1J to 4J. If your frame is moving at 1 m/s relative to mine in the same direction as the object then in your frame it starts off at rest (0J) and speeds up to 1 m/s (1J). In my frame the increase in energy was 3J, in yours it was 1J. As you can see, the change in kinetic energy is dependent on the frame of reference. (That's the classical view, the relativistic view is similar and gets the same overall conclusion.) --Tango (talk) 19:11, 7 December 2009 (UTC)[reply]
I see what he's saying. It's true if momentum is conserved. That said, kinetic energy isn't conserved, so it's not a closed system in that sense, and it would seem pointless to assume momentum is conserved. I guess it's useful when you're talking about energy changing between kinetic and other forms. For example, if there are two balls each with a mass of 1 kg moving towards each other at 1 m/s that stick when they hit, the amount of energy lost in the collision is 1 J regardless of reference frame. 67.182.169.172 (talk) 01:46, 8 December 2009 (UTC)[reply]
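A quick numerical check of that example (a sketch; the observer velocities below are arbitrary): as long as momentum is conserved, the kinetic energy lost in the sticking collision comes out the same in every inertial frame.

 # Two 1 kg balls at +/-1 m/s stick together; compare KE lost across frames.
 def ke(masses, velocities):
     return sum(0.5 * m * v ** 2 for m, v in zip(masses, velocities))

 masses = [1.0, 1.0]
 for u in (0.0, 1.0, -3.7):               # velocity of the observer's frame (m/s)
     before = [1.0 - u, -1.0 - u]         # ball velocities seen in that frame
     after = [0.0 - u, 0.0 - u]           # stuck pair moves at the CM velocity
     print(u, ke(masses, before) - ke(masses, after))  # always 1.0 J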
Yes, thank you...but how do you show this?
Potential energy certainly changes. Consider two identical springs, one held highly compressed by a (massless) band. Now zip past them at relativistic speed; the total energy of each must scale by γ, and you must see some of that increase in the compressed spring as additional potential energy, because the other one has the same rest mass, thermal energy, and (rest-mass-derived) kinetic energy, and the difference in energy is larger than the compression energy in the springs' rest frame. --Tardis (talk) 15:38, 7 December 2009 (UTC)[reply]

Okay, it appears that I'm bad at making sense. 1) I wanted to say at the beginning that the system was closed, like in a collision, I just forgot to mention it. 2) It is a many-body system (i.e., as in a collision)...so basically, I'm asking if the change in kinetic energy in a collision (elastic or inelastic) is the same in all reference frames. —Preceding unsigned comment added by 173.179.59.66 (talk) 01:08, 8 December 2009 (UTC)[reply]

Let's say you have a system of particles with masses m_1, m_2, ..., m_N. The kinetic energy is given by K = E - (m_1 + m_2 + ... + m_N)c^2, where E is the total energy. Now suppose that those particles suffer a series of collisions (which may be elastic or not) and after the collisions there are particles with masses m'_1, m'_2, ..., m'_N'. The kinetic energy is given by K' = E' - (m'_1 + m'_2 + ... + m'_N')c^2, where E' is the total energy after the collisions. The change in kinetic energy is ΔK = K' - K = (m_1 + ... + m_N)c^2 - (m'_1 + ... + m'_N')c^2, where the E' - E term cancels since the system is isolated and energy is conserved. Note how the final result depends only on the rest masses, which are independent of the reference frame used. I hope that helps a bit. Dauto (talk) 22:22, 8 December 2009 (UTC)[reply]
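A numerical sketch of that argument (assuming, for illustration, two equal masses that collide head-on and stick, in units where c = 1): boosting the energy-momentum pairs to different frames leaves the change in kinetic energy untouched.

 import math

 def boost(E, p, beta):
     # Lorentz boost of an (energy, momentum) pair along the collision axis
     g = 1 / math.sqrt(1 - beta ** 2)
     return g * (E - beta * p), g * (p - beta * E)

 m, v = 1.0, 0.6                          # illustrative mass and speed
 g = 1 / math.sqrt(1 - v ** 2)
 before = [(g * m, g * m * v), (g * m, -g * m * v)]   # CM-frame energy-momentum pairs
 M = 2 * g * m                            # merged particle is at rest in the CM frame
 after = [(M, 0.0)]

 for beta in (0.0, 0.5, -0.9):            # arbitrary observer frames
     K_before = sum(boost(E, p, beta)[0] for E, p in before) - 2 * m
     K_after = sum(boost(E, p, beta)[0] for E, p in after) - M
     print(beta, K_before - K_after)      # same in every frame: 2(g-1)m = 0.5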

Redox rxn. How can I tell if a reaction is redox or not?

How can I tell if a reaction is a redox reaction just by looking at the chemical equation? Can someone show me an example? Thank you.161.165.196.84 (talk) 04:31, 7 December 2009 (UTC)[reply]

A redox reaction is one where the oxidation numbers of some of the elements change in the reaction. All you do is assign oxidation numbers to every element in the chemical reaction. If the oxidation number for some element is different on the left than on the right, then it is a redox reaction. If all of the oxidation numbers stay the same on both sides, then it is not a redox reaction. But you need to actually know how to assign oxidation numbers before you can do anything else here. Do you need help with that as well? --Jayron32 04:34, 7 December 2009 (UTC)[reply]

Yes, that would be great. My understanding is this: Hydrogen is usually +1, Oxygen is usually -2. In binary ionic compounds the charges are based on the cation's (metal) and the anion's (non-metal) group in the Periodic Table. Polyatomic ions keep their charge (Ex// Phosphate is -3, Nitrate is -1).

Now, my textbook says the following and I am not sure what this means: "In binary molecular compounds (non-metal to non-metal), the more "metallic" element tends to lose, and the less "metallic" tends to gain electrons. The sum of the oxidation numbers of all atoms in a compound is zero." I'm not quite sure what the first part affects, but the second part is simply saying that once all oxidation numbers have been assigned, the sum of those numbers should be zero. Is this correct? — Preceding unsigned comment added by 161.165.196.84 (talkcontribs)

Yeah, that's it. You should assign oxidation numbers per element not just for the polyatomics as a whole. Let me give you a few examples of how this works.
  • Consider CO2. Oxygen is usually -2, and there are two of them, so that lone carbon must be +4, to sum up to 0, the overall charge on the molecule. C=+4, O=-2
  • Consider P2O5. Oxygen is usually -2, and there are 5 of them, so the TWO phosphorus have to equal +10, so EACH phosphorus has an oxidation number of +5. P=+5, O=-2
  • Consider H2SO4. Oxygen is usually -2, and hydrogen is almost always +1. That means that we have -8 for the oxygen and +2 for the hydrogen. That gives -6 total, meaning that the sulfur must be +6 to make the whole thing neutral. H=+1, S=+6, O=-2
  • Consider the Cr2O7-2 ion. In this case, our target number is the charge on the ion, which is -2, not 0. So, in this case we get Oxygen usually -2, and there are 7 of them, so that's a total of -14. Since the whole thing must equal -2, that means the two Chromiums TOGETHER must equal +12, so EACH chromium has to equal +6. Cr=+6, O=-2.
There are a few places where you may slip up. H is almost always +1, except in the case of metallic hydrides; in those cases (always of the formula MHx, where M is a metal) H=-1. Also, there are a few exceptions to the O=-2 rule. If oxygen is bonded to fluorine, such as in OF2, fluorine being more electronegative will make the oxygen positive, so O=+2 in that case. Also, there are a few types of compounds like peroxides (O=-1) and superoxides (O=-1/2) where oxygen does not have a -2 oxidation number. These will be fairly rare, and you should only consider them where using O=-2 doesn't make sense, for example in H2O2: if O=-2 then H=+2, which makes no sense since H has only 1 proton. So in that case, O=-1 is the only way it works. However, these are rare exceptions, and I would expect almost ALL of the problems you will face in a first-year chemistry class to be more "standard" types as I describe above.--Jayron32 06:09, 7 December 2009 (UTC)[reply]
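The bookkeeping above is mechanical enough to express as a short sketch (the function name and inputs are just illustrative): fix the usual oxidation numbers, then solve for the one unknown element so that everything sums to the overall charge.

 def unknown_oxidation(known, n_unknown, total_charge=0):
     # known: list of (oxidation number, atom count) pairs for the assigned elements
     return (total_charge - sum(ox * n for ox, n in known)) / n_unknown

 print(unknown_oxidation([(-2, 2)], 1))                    # CO2       -> C = +4
 print(unknown_oxidation([(-2, 4), (+1, 2)], 1))           # H2SO4     -> S = +6
 print(unknown_oxidation([(-2, 7)], 2, total_charge=-2))   # Cr2O7^2-  -> Cr = +6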
Note that O being in an oxidation state of (-1) is the reason why peroxides are such strong oxidants. Oxygen is more stable at oxidation state (-2), and so peroxides are susceptible to nucleophilic attack where one oxygen atom accepts electrons and pushes out hydroxide or alkoxide as the leaving group (cuz of the weakness of the oxygen-oxygen bond). John Riemann Soong (talk) 06:29, 7 December 2009 (UTC)[reply]
An important note is that "oxidation states sum to zero" ONLY in neutral compounds. If your compound is an ion (for example, perchlorate, phosphate, or NAD+), then the oxidation states will sum up to the charge of that ion. E.g. the oxidation states in hydronium sum to +1. (It makes sense, right?) John Riemann Soong (talk) 06:33, 7 December 2009 (UTC)[reply]

This is helpful, thank you very much to all. Chrisbystereo (talk) 08:09, 7 December 2009 (UTC)[reply]

Value of a microchip

If you were to take all the metals and so on out of a chip (your choice, the newest Pentium, a digicam's image sensor, etc) and price them according to whatever tantalum/aluminum/titanium/cobalt/etc is going for, what would a chip's value be? I'm just curious what the difference is between the cost of the components and the cost of the labor and such put into making it all work together. Has anyone ever even figured this out before? Dismas|(talk) 04:46, 7 December 2009 (UTC)[reply]

The chip is basically a few grams of silicon, plastic, and maybe copper and iron. I can't imagine that the materials would be more than a few U.S. cents, if that much. The lion's share (99.99%) of the cost of the chip itself is labor. --Jayron32 04:51, 7 December 2009 (UTC)[reply]
I'm very aware that the value would be small but I was just wondering how small. My job is to make them and I've been spending the last few weeks looking at them under a scope and this question popped into my head. Dismas|(talk) 05:10, 7 December 2009 (UTC)[reply]
Well, you probably have more accurate measures on the amounts of metal deposited in your process; and if you discount everything that gets wasted when it's etched away, you probably end up with a chip that contains a few nanograms of aluminum, a few picograms of boron, and a couple milligrams of silicon. Other trace metals depend on your process. Perhaps a better way to price everything is to count the number of bottles of each chemical solution or metal ingots that you consume in a given day/week/whatever, and divide by the number of chips produced. Again, this doesn't account for waste material, so you have to do some estimation. Nimur (talk) 08:06, 7 December 2009 (UTC)[reply]
Pure silicon costs a lot more than impure. Does that difference count as labor to you? Metal ore in the ground is free for the taking. Making the base metal is all labor. Pretty much the cost of everything is just labor and energy. There is no "component price" for things; the only question is where you draw the line and say "this is labor cost" and "this is component cost". I suppose - to you - it depends on whether you buy it or make it. But globally there is no such line. To answer the question you are probably actually asking: I would suggest adding up the estimated total salary for everyone in your company, and subtracting that from the gross income, and subtracting profit. (If it's a public company you should be able to get those numbers.) Then you'll have things like overhead, and energy to include or not, as you choose. Ariel. (talk) 09:12, 7 December 2009 (UTC)[reply]
(either I was still sleepy, or I had an unnoticed ec - Ariel says essentially the same above) Also, the question is not very well-defined. For a microprocessor, you need very pure materials. A shovel of beach sand probably has most of the ingredients needed, but single-crystal silicon wafers are a lot more dear than that. If you pay bulk commodity price for standard-quality ingredients, the price of the material for a single chip is essentially zero. But in that case you will also need a lot of time and effort to purify them to the necessary level. --Stephan Schulz (talk) 09:13, 7 December 2009 (UTC)[reply]
Wouldn't a vast chunk of the cost be R&D? I remember someone quoting The West Wing on this desk about pharmaceuticals, which would be relevant: "The second pill costs 5 cents; it's that first pill that costs 100 million dollars." Livewireo (talk) 18:18, 7 December 2009 (UTC)[reply]
Indeed. The cost is almost entirely R&D, I would think. That is a labour cost, though. --Tango (talk) 20:56, 7 December 2009 (UTC)[reply]
Nevermind. I said I make them, I didn't say I owned the company and had access to all the costs associated with making them. I just wanted to know how much it would be if I melted it down and sold the constituent metals and such. I didn't think I was being that unclear. I'll just assume it's vanishingly small. Dismas|(talk) 20:19, 7 December 2009 (UTC)[reply]
It would cost far more to separate the components than the components would be worth. Your question is easy to understand, it just doesn't have an answer - not all questions do. --Tango (talk) 20:55, 7 December 2009 (UTC)[reply]
The fun of the question is how close to zero it is. Bus stop (talk) 21:10, 7 December 2009 (UTC)[reply]
Thank you, Bus stop. I think you get my question most of all. I didn't mention labor at all. Or R&D. Nor did I ever say anything about the cost of separating the components. Again, nevermind. Dismas|(talk) 22:01, 7 December 2009 (UTC)[reply]
Ok, but you can't get round the purity issue mentioned above. There isn't a single value for silicon, say, it depends on the purity. How pure the silicon would be depends on how much labour you put into separating the components. --Tango (talk) 22:12, 7 December 2009 (UTC)[reply]
And the quantity of metals depends wildly on the actual die, photo masks, etc. As I mentioned above, you can estimate the masses of these constituent ingredients better than we can. Different mask patterns can leave as much as 100% or as little as 0% of a particular deposited layer - so there is no "in general" answer. You just have to estimate layer thickness and layer area for each stage of the process. Some typical numbers for areas and thicknesses might come out of articles like Self-aligned gate#Manufacturing process. Nimur (talk) 22:17, 7 December 2009 (UTC)[reply]
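For what it's worth, a back-of-envelope sketch along the lines Nimur suggests - every mass, area, and price below is a made-up placeholder, not real process data - comes out at a tiny fraction of a cent:

 # Rough raw-material value of one die: thickness * area * density * price/kg per layer.
 layers = [
     # (material, thickness in m, area in m^2, density in kg/m^3, price in USD/kg)
     ("silicon die", 500e-6, 1e-4, 2330, 2.0),       # placeholder values
     ("aluminum wiring", 1e-6, 1e-5, 2700, 2.5),     # placeholder values
     ("copper wiring", 1e-6, 1e-5, 8960, 8.0),       # placeholder values
 ]
 total = sum(t * a * rho * price for _, t, a, rho, price in layers)
 print(f"raw-material value: ${total:.6f}")          # fractions of a US cent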

mesomeric versus inductive effects for the pka of catechol (ortho-diphenol)

I actually thought that o- and p-benzenediols should have higher pKas than phenol because of the destabilising mesomeric effect, but it seems that catechol (ortho-diol) has a pKa of 9.5 (according to wikipedia). Google seems to say resorcinol (meta-diol) has a pKa of 9.32 while para-diphenol is 9.8. This source seems to give a different set of values.

My hypothesis is that the inductive effect is also at play, where having a carbanion resonance structure next to a (protonated) oxygen atom will stabilise it somewhat. And of course, the further away the two groups are from each other, the weaker the inductive effect, so that's why the para-diphenol would have the highest pKa of all the diphenols, while the meta-diol would barely see any mesomeric effect and mostly see the inductive effect. Is this reasonable? Is it supported by literature? John Riemann Soong (talk) 05:44, 7 December 2009 (UTC)[reply]

phenols as enols

I'm looking at this synthesis where a phenol is converted into a phenoxide and then used to perform a nucleophilic attack (in enol form) on an alkyl halide. My question is: why use lithium(0)? It seems a lot of trouble when you could just simply deprotonate phenol with a non-nucleophilic base like t-butoxide. Is it because a phenolate enolate is more nucleophilic at the oxygen? If so, why not use something like lithium t-butoxide to bind the phenolate more tightly? John Riemann Soong (talk) 06:24, 7 December 2009 (UTC)[reply]

I disagree with "a lot of trouble". Weigh a piece (or measure a wire length) of metal, drop it in, and you're done. Seems no worse than measuring your strong base (often harder to handle and/or harder to measure accurately). And where does that base come from? Do you think it more likely to be a benefit or a problem to have an equivalent of t-butanol byproduct (the conjugate acid of your strong base) in the reaction mixture (note that the chosen solvent is non-Lewis-basic) and during product separation/purification? The answer to every one of your "why do they do it that way" questions is "it was found empirically to work well enough and provide a good trade-off for results vs cost." Really. Again, nothing "in reality" works as cleanly as on paper, so you really have to try lots of "seems like it should work" and you find that every reaction is different and it's very hard to predict or explain why a certain set of conditions or reactants is "best" (for whatever "best" means). It's interesting to discuss these, but I think you're going to get increasingly frustrated if you expect a clear "why this way?" answer for specific reactions. On paper, any non-nucleophilic base will always work exactly as a non-nucleophilic base, and that's the fact. In the lab, one always tries small-scale reactions with several routes before scaling up whatever looks most promising. DMacks (talk) 07:05, 7 December 2009 (UTC)[reply]
Along those lines, the best source for a certain reaction is the literature about that reaction. The ref you saw states "The use of lithium in toluene for the preparation of alkali metal phenoxides appears to be the most convenient and least expensive procedure. The procedure also has the merit of giving the salt as a finely divided powder." DMacks (talk) 07:12, 7 December 2009 (UTC)[reply]
Sorry I guess my experience with oxidation-state 0 group I and II metals so far has been with Grignard and organolithium reagents. From an undergrad POV, they are such an awful pain to work with (compared to titrating a base and acid-base extraction)! Also -- deprotonated phenols can act like enols? Why aren't aldol side reactions a problem to worry about during the synthesis of aspirin from salicylic acid? And why aren't enol ether side reactions a worry here? John Riemann Soong (talk) 07:25, 7 December 2009 (UTC)[reply]
The cited ref notes that the enol-ether side product is a huge problem (3:1 of that anisole product vs the "enolate α-alkylation" product they are primarily writing about). If the goal is "a difficult target", it doesn't matter if the reaction that gives it actually only gives it as a minor product compared to some other more likely reaction. The standard result of "phenoxide + SN2 alkylating agent" is O-alkylation, with other isomers being the byproduct. However, in general for enolates, the preference for O-alkylation vs C-alkylation is affected by solvent (especially its coordinating ability), electrophile, and metal counterion. It's unexpected to me that they get so much of it, but if there's any there and you want it badly enough, you go fishing through all the other stuff to get it. That's what makes this reaction worthy of publication...it does give significant amounts of this product and allows it to be purified easily from the rest. DMacks (talk) 09:16, 7 December 2009 (UTC)[reply]

phenol-type quinoline

What do you call a phenol-type quinoline with a hydroxyl group substituted in the 8-position? I'm trying to find out its pKa (in neutral, nonprotonated form), but it's hard to look up without knowing its name.

(Also, it is an amphoteric molecule, right?) These two pKas appear to interact via resonance, making for some weird effects on a problem set ... (I'm considering comparative pH-dependent hydrolysis rates (intramolecular versus intermolecular) for an ester derivative of this molecule...) John Riemann Soong (talk) 07:30, 7 December 2009 (UTC)[reply]

Standard IUPAC nomenclature works pretty well for any known core structure: just add prefixes describing the location and identity of substituents. So quinoline with hydroxy on position 8 is 8-hydroxyquinoline (a term that gives about 132,000 google hits). Adding "pka" to the google search would help find that info. The protonation of these types of compounds is really interesting (both as structural interest and in the methods to study it)! All sorts of Lewis-base/chelation effects. DMacks (talk) 09:03, 7 December 2009 (UTC)[reply]

acidic proton question (Prilosec & Tagamet!)

Okay, sorry for posting the 4th chem question in a row! I'm trying to figure out the acidic proton in two molecules, Tagamet and Prilosec. I'm given a pKa of 7.1 for the former and 4.0 and 8.8 for the latter. I don't know which sites in Prilosec the two pKas correspond to. (Possibly I feel there are more basic and acidic sites, but they are either not detailed or are outside the range of discussion?)

With Tagamet, imidazole is the most obvious candidate for being a base with a pKb near 7, but I'm wondering: why not the guanidine-type residue? It has a nitrile group on it -- but how many pKa units would it raise? The pKa of guanidine is 1.5, so plausibly a CN group could raise it to 7?

Oh yeah, and Prilosec. I'm ruling out the imidazole proton, but I feel that the alpha-carbon next to the sulfoxide group is fairly acidic, cuz it has EWGs on both sides PLUS the carbanion could be sp2-hybridised if the lone pair helps "join" two conjugated systems. But the imidazole and pyridine lone pairs also look good for accounting for some of those pKas. Why aren't there 3 pKas? I think the imidazole-type motif in Prilosec is responsible for the pKa (pKa of conjugate acid) of 8.8 -- but why the elevated pKa compared to normal imidazole? And why would the pKa of pyridine fall that low? (It has an electron-donating oxygen substituted in the para-position!) But assigning the pKas the other way round doesn't make sense either. I'm slightly disconcerted, as I know these lone pairs are basic. John Riemann Soong (talk) 09:50, 7 December 2009 (UTC)[reply]

science

At first there were just two societies, i.e. hunting and gathering societies, but there was still life; the people were still living, no inequality was present, everyone was equal, and there was peace all over. But nowadays, because of science, there is no peace, no equality, no respect; everyone is absorbed in earning money. So what would happen if science and its inventions were removed from our society? Should we return to a hunting and gathering society just for the sake of peace and equality? —Preceding unsigned comment added by Umair.buitms (talkcontribs) 13:31, 7 December 2009 (UTC)[reply]

What makes you think hunter-gatherer societies were peaceful? They generally had greater equality since there wouldn't be enough food to go around if there was an elite that didn't hunt or gather, but they certainly fought neighbouring tribes. Do you really want equality, though? Surely everyone having a low standard of living is worse than some people having a high standard of living and others having a higher standard, which is the case in the modern developed world. --Tango (talk) 13:38, 7 December 2009 (UTC)[reply]
I agree with Tango - there was unlikely to have been "equality" in the early days of humanity - and certainly no "peace". In modern times, there are still a few hunter-gatherer societies out there in places like the Amazon rainforest that science has not touched. For them, there is still warfare between tribes - women are still given one set of jobs and the men others - and there are still tribal leaders who rule the lower classes. The one place where equality is present is in "racial equality" - but that's only because they don't routinely meet other races because of the geography.
As for removing science and invention - our society literally could not exist that way. The idea that (say) 300 million Americans could just put on loincloths and start hunting and gathering is nuts! There would be nowhere near enough food out there for that to happen - without modern agriculture, we're completely incapable of feeding ourselves. We would need for perhaps 299 million people to die before the one million survivors could possibly have enough to eat.
I think your idyllic view of hunting & gathering is severely misplaced. It's a cruel, brutal existence compared to the relative peace and tranquility that is modern life.
SteveBaker (talk) 13:51, 7 December 2009 (UTC)[reply]
It's a very common if very confused view that all human problems are a product of modernity and so forth. It's true we have some new problems... but the problems of civilization are all there in part because living outside of civilization is so brutal. It is similar to the point of view that animals want to be "free"—most appear to want a stable food source more than anything else, because being "free" means starving half of the time. That's no real "freedom". --Mr.98 (talk) 14:44, 7 December 2009 (UTC)[reply]
At least Sabre Toothed Tigers are extinct this time around. APL (talk) 15:16, 7 December 2009 (UTC)[reply]
Do you have any references to support your utopian view of the hunter-gatherer societies? All evidence I've seen points to a society in which tribal warfare is common. Women are possessions. Children are expendable. And attempts to advance society are only accepted if they allow the tribe to attack the neighbors and steal more women and children. I feel that modern society is a bit more peaceful than that. -- kainaw 13:57, 7 December 2009 (UTC)[reply]
To be fair, women as possessions is more what you get after the arrival of basic agriculture (herding), when the link between sex and children is more clearly understood and the concept of 'owning' and inheriting is established. Societies everywhere have cared about their own children: they were not viewed as expendable, except in as much as 'people who are not my family/tribe' are viewed so. If children were really viewed as expendable, there wouldn't be any concern about continuation of the family and providing inheritance, and hence there would be no possessiveness of women: the whole 'women as possessions' thing is about ensuring the children they bear are verifiably the children of the man who thinks they're his: without that, there's no reason for the man to care if the woman has sex with other men. The OP may have a hopelessly utopian view, but I'm not convinced yours is any more accurate. If nothing else, the Old Testament gives us accessible sources written up to three thousand years ago: the overwhelming feeling I get from it is how little the way people think has changed in the most basic ways. It is full of people caring very much about their children, way back. 86.166.148.95 (talk) 18:50, 7 December 2009 (UTC)[reply]

I'm going to be all cheesy and link you to Billy Joel's We Didn't Start the Fire. 194.221.133.226 (talk) 14:06, 7 December 2009 (UTC)[reply]

I think even One Million Years B.C. was closer to the truth than the OP's utopian vision. Though their makeup probably wasn't as good :) Dmcq (talk) 14:27, 7 December 2009 (UTC)[reply]

OP, the viewpoint you expressed is known as anarcho-primitivism. You can read our wikipedia article, which includes views of both proponents and critics. See also Luddite, Neo-Luddism etc for less extreme versions of anti-modernism movements. Abecedare (talk) 15:39, 7 December 2009 (UTC)[reply]

Also, for what it's worth, the development of science and so-called modernity was really just the logical outcome of a successful hunter-gatherer society. It's a lot easier, more efficient, and more survivable to build a house and a farm instead of wandering around hoping you find food, water, and shelter. Agriculture leads to spare time, spare time leads to advancements, which lead to greater agriculture, which eventually leads to Twitter. You can't go back; nobody would choose death over relaxation and creativity. ~ Amory (utc) 16:26, 7 December 2009 (UTC)[reply]
It's not that inevitable - plenty of societies didn't develop agriculture until they were introduced to it by other societies, some in modern times (eg. Australian Aborigines). Things wouldn't have needed to be too different for agriculture to have never been developed anywhere (or, at least, not developed until millennia later than it was). --Tango (talk) 16:34, 7 December 2009 (UTC)[reply]
These are basically value judgements we are all making. These are subjective answers we are giving. Not surprisingly we favor what we have. Bus stop (talk) 16:41, 7 December 2009 (UTC)[reply]
Re: not inevitable: I seem to recall [citation needed] that one of the ways archaeologists identify remains as early-domesticated goats rather than wild goats, is to look for signs of malnutrition. Captive goats were less well fed than wild goats. 86.166.148.95 (talk) 18:54, 7 December 2009 (UTC)[reply]
I've never heard that, but it makes some sense. Goats were often raised for milk, rather than meat, and they don't need to be particularly well nourished to produce milk (actual malnourishment would stop lactation - animals usually don't use scarce resources on their children if they are at risk themselves). --Tango (talk) 18:57, 7 December 2009 (UTC)[reply]
The actual experience of prehistoric hunter-gatherers is a serious bone of contention among anthropologists, made all the more difficult by various wild claims made by armchair researchers of the 18th and 19th centuries. (See state of nature and Nasty, brutish, and short). Among the complications are these: HGs lived in wildly diverse ecologies, meaning they had wildly diverse lifestyles, with wildly diverse advantages and disadvantages - how can you evaluate the lifestyle of an averaged-out Inuit/San person meaningfully? Also, the few remaining HGs live at the very edges of the habitable earth, which makes it difficult to extrapolate what life was like in more normalized areas. In very, very generic terms you can say this: people who lived the HG lifestyle worked a lot less per day than the farmers their descendants eventually became, they had few diseases compared to farmers, and they probably had more well-rounded diets than farmers. While there was surely enough sexual discrimination to go around, it was probably not nearly as bad as in the farming communities, and the whole "slavery to acquisition" we in the modern world play to was pretty much non-existent; you can't build up wealth if you've got to lug everything on your back. On the other hand, they had relatively slow population expansion, so when disasters did hit, it might spell the end of the band or tribe. Inter-band warfare was a real hit-and-miss kind of thing too - there were neighbours to be trusted and others that weren't, but with no central authority, there was really nobody "watching your back" if relations got out of hand. On the whole, it probably was quite a nice existence if you happened to be living in a reasonable area and didn't mind living your life within an animistic/mystical framework where you have enormous understanding of the surface of the world around you, but virtually no grasp of the real reason why anything happens. No books, no school, no apprenticeship, very little craft specialization beyond perhaps a "women gather, men hunt" kind of thing. Matt Deres (talk) 21:43, 7 December 2009 (UTC)[reply]
Of course you can build wealth if you need to haul it around. Your form of wealth would most likely be draft animals so that you can carry more stuff around. Googlemeister (talk) 22:21, 7 December 2009 (UTC)[reply]
Domestication of animals is part of the road to civilisation. If we're talking about early hunter-gatherer societies (which I think we are - if we're talking about later h-g societies then you may be right), then they wouldn't have domestic animals. They wouldn't have had much to carry around. Simple clothes, stone tools, ceremonial items. Their economy was 99.9% food and I don't think they had the means to preserve it for long. --Tango (talk) 22:47, 7 December 2009 (UTC)[reply]
It does not need to be animals, slavery has been around for some time as well. Googlemeister (talk) 16:54, 8 December 2009 (UTC)[reply]
The basics of smoking meat have been known for a very long time, but I think it really came down to not wanting to carry the result around. In order to not exhaust an area, most HGs had to keep on moving at regular intervals and the use of pack animals was, while not completely unknown, not something widely employed. Dogs were probably in use for help with the hunt, but once you start semi-domesticating your prey animals you're not really hunting-gathering anymore - now you're herders, and eventually pastoralists, perhaps practising transhumance. Anthropologists use the term "pastoralist" in a more narrow sense than our article does, basically only using it for those groups that have only minimal physical goods and a high reliance on the herd animal. The classic example there are the Nuer. Matt Deres (talk) 01:33, 8 December 2009 (UTC)[reply]
Smoking meat might help you get through a bad winter, but it isn't going to allow you to build up a retirement fund. I think a HG would have two possible wealth levels - enough food for their tribe to survive and not enough food for their tribe to survive. I can't see any way they could have significantly more wealth than having enough food to eat. --Tango (talk) 11:51, 8 December 2009 (UTC)[reply]

1. This subject would have been more appropriately placed on the Humanities Desk.
2. The OP is delusional.

Life, in the state of nature, is "solitary, poor, nasty, brutish, and short" leading to "the war of all against all."
  — Thomas Hobbes, Leviathan (1651)

B00P (talk) 23:24, 7 December 2009 (UTC)[reply]

Getting back to the OP: you may have been misled by the name into thinking there were two societies originally, one that hunted and one that gathered. In fact, hunter-gatherer is a generic name for pre-agricultural groups, almost all of which ate both types of foods: those (mostly animals) which some members (mostly male) had hunted, and those (mostly plants) which others (mostly female) had gathered. (Another name for these societies, should you wish to research further, is foragers.) It is true that most of these groups had to be mobile, to follow the food, and as such could carry little with them, so they did not accumulate wealth in the sense in which we understand it. However, there are always exceptions. One well-studied example is the Indigenous peoples of the Pacific Northwest Coast, who lived in a rich and fertile ecosystem, and particularly the Haida, who developed an impressive material culture -- so much so that they had to invent the potlatch in order to get rid of (or share around) the surplus. And that relates to another sort of wealth, a social and cultural wealth as opposed to a material one. Much harder to demonstrate than grave goods! BrainyBabe (talk) 23:29, 7 December 2009 (UTC)[reply]
@B00P- Please don't wp:BITE the newbies or post nonsense. Hobbes was a philosopher who never did any fieldwork, never studied the topic and just made shit up as he went along, pretending a state of nature had once existed so he could have an excuse to make up even more shit without any basis in reality. Explaining why you think such-and-such a policy is good is perfectly fine, inventing lies and making crap up out of whole cloth is the worst kind of academic fraud. You don't do yourself any favours by quoting him as if it was worth anything. Matt Deres (talk) 01:43, 8 December 2009 (UTC)[reply]
I think that in good times they had peace of mind beyond our wildest imagination. Bus stop (talk) 23:32, 7 December 2009 (UTC)[reply]
Indeed. So pervasive is the "solitary, poor, nasty, brutish, and short" lie that most people don't know that H-Gs in fact lived in familial groups, had no poverty, weren't particularly nasty or brutish, and lived longer, healthier lives than their farming cousins. I think that, if most people could see what kind of life they'd reasonably expect to lead, they'd choose hunting and gathering over farming any day of the week - less work, less disease, less worries, no boss - who wouldn't want that? Matt Deres (talk) 01:52, 8 December 2009 (UTC)[reply]

(edit conflict)I highly doubt the world would be able to support 6+ billion hunter gatherers. I would think that alone would answer the OP's question - if we, as a species, reverted back to hunting and gathering, it would require the deaths of billions. I would say that suggests that, "4 the sake of peace and equality", we definitely should not do this. TastyCakes (talk) 23:34, 7 December 2009 (UTC)[reply]

Well, death is the great equalizer ... so in some sense, "4 the sake of peace and equality", we should do this unless we do not value equality.... "I'll rant as well as thou." Nimur (talk) 18:09, 8 December 2009 (UTC)[reply]
Agriculture and settlements didn't necessarily make life better, but they made life more productive. Settled farmers can produce more food than can hunter-gatherers, and more food = more children.[28] Thinkquest estimates the Earth's carrying capacity for hunter-gatherers to be 100 million,[29] and the carrying capacity assuming the farming of all arable land to be 30 billion people. As soon as human societies struck on the strategy of settling and farming, this inevitably expanded, either by cultural exchange or by the farmers progressively expanding their territory to the detriment of hunter-gatherers. We have pre-Western-contact Australian Aboriginals and Bushmen/San as remaining examples of hunter-gatherers; Amazonian tribes aren't a great example as they derive from pre-Columbian civilizations who left behind the Terra preta, so they have traditions and social structures that persist from those times. Here's a study of hunter-gatherer societies at the end of the last ice age, which looks at societal transitions.[30] As for violence and peace, the death rate of adults of the Hiwi is about 2% per year, and the death rate in Cro-Magnons and our sibling species the Neanderthals is estimated to have been 6% per year.[31][32] Over half of deaths in the Aché pre-contact with Westerners were due to violence, so the idea that it is modern society, technology, science or civilization that causes violence is plain wrong, but it does make our killing more efficient and able to be done on a larger scale. Fences&Windows 18:42, 8 December 2009 (UTC)[reply]

Photon

Does the energy of a photon depend on the reference frame? I would think so, because observers in different reference frames measuring the frequency of a photon will measure different values (because their clocks run at different rates), and E=hf. But then a paradox seems to arise: If observer A measures the energy of a photon to be E, then an observer B moving relative to A should measure a different energy, say γE. But in B's reference frame, it is A who should measure γ times B's value. So who measures what energy? —Preceding unsigned comment added by 173.179.59.66 (talk) 17:54, 7 December 2009 (UTC)[reply]

See redshift. Redshift is precisely photon energies being different in different frames. Redshift is determined by the relative velocity between the source and observer. The difference in velocity between two observers would mean each sees a different redshift - the one receding from the source faster (or approaching the source slower) will see the energy as lower. --Tango (talk) 18:04, 7 December 2009 (UTC)[reply]
There are other factors that influence the observed frequency besides the γ-factors. If everything is taken into account, there is no paradox. See doppler effect. Dauto (talk) 18:08, 7 December 2009 (UTC)[reply]
Also, see Relativistic Doppler effect, which extends the mathematics to apply to a wider range of relative velocities of reference frames, accounting for additional effects of relativity. Nimur (talk) 19:08, 7 December 2009 (UTC)[reply]
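As a small illustration (a sketch assuming pure line-of-sight recession at speed β·c, with no transverse component), the relativistic Doppler factor gives the observed photon energy directly:

 import math

 def observed_energy(E_emitted, beta):
     # relativistic Doppler shift for recession along the line of sight
     return E_emitted * math.sqrt((1 - beta) / (1 + beta))

 E = 2.0                                    # emitted photon energy in eV (illustrative)
 for beta in (0.0, 0.1, 0.5):
     print(beta, observed_energy(E, beta))  # energy drops as recession speeds up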

Okay, so basically you're saying that the equation relating the two observed frequencies would be the Doppler equation, regardless of whether the emitter is actually seen? —Preceding unsigned comment added by 173.179.59.66 (talk) 01:02, 8 December 2009 (UTC)[reply]

What do you mean by "regardless of whether the emitter is actually seen"? If you detect a photon then (by definition) the emitter is being seen. Dauto (talk) 09:00, 8 December 2009 (UTC)[reply]
What I meant was, you don't know the velocity of the emitter. But I realise now that it shouldn't matter. —Preceding unsigned comment added by 173.179.59.66 (talk) 11:29, 8 December 2009 (UTC)[reply]

And a related question (actually, this was the motivator for the first question): suppose that a photon strikes a proton (at rest in the lab frame) and produces a proton and a pion or something. The first question (this was an exam question) was to find the threshold energy, which I did without problem. The second question asked to find the momentum of the pion if the photon has the threshold energy. So my strategy was to find the velocity of the center of mass and then make that the velocity of the pion...how would you do this though? —Preceding unsigned comment added by 173.179.59.66 (talk) 01:17, 8 December 2009 (UTC)[reply]

If the photon has the threshold energy, and it actually has that interaction you describe, then all the energy is used up in creating the pion. There is no energy left to move anything, so pion and proton are stationary. But the photon had momentum, so there will need to be some movement to carry that away. I guess you will need simultaneous equations, one to conserve energy and one to conserve momentum: momentum of photon = momentum of proton + momentum of pion (as vectors); energy of photon = kinetic energy of pion + kinetic energy of proton. Graeme Bartlett (talk) 03:16, 8 December 2009 (UTC)[reply]
Indeed, you need to solve the equations simultaneously. If you're clever, you'll be able to work out what directions the proton and pion fly off in. --Tango (talk) 15:59, 8 December 2009 (UTC)[reply]
It probably goes without saying, but the above equation for energy conservation should be:
energy of photon=kinetic energy of pion + energy equivalent of the pion's mass + kinetic energy of proton
Otherwise we're not accounting for the mass of the pion being created. TenOfAllTrades(talk) 16:22, 8 December 2009 (UTC)[reply]
Oops, missed that out. For the direction, I am guessing that the minimal energy situation will not have any side to side or up and down movement, so that only a scalar needs to be considered. Photon hits proton, and pion and proton head off in the same direction as the original photon. Your idea about changing the coordinates sounds wise. If you pick coordinates in which total momentum is zero, afterwards you will still have momentum zero. The minimal energy situation in this frame will be pion and proton sitting stationary at the same spot. Graeme Bartlett (talk) 20:57, 8 December 2009 (UTC)[reply]
The centre of mass frame does make things easier to calculate, but your final answer isn't very useful - yes, everything ends up at rest, but it's at rest in a frame that is moving very quickly compared to your lab. --Tango (talk) 22:54, 8 December 2009 (UTC)[reply]
It is useful insofar as it establishes that the proton and the pion will be moving together in the lab reference frame. Dauto (talk) 02:33, 9 December 2009 (UTC)[reply]
So at the minimum: energy of photon = hF = ½(mass of pion + mass of proton)v^2 + energy equivalent of the pion's mass
Momentum of photon = hF/c = (mass of pion + mass of proton)v
Divide the energy formula by the momentum formula:
c = ½v + (energy equivalent of the pion's mass)/((mass of pion + mass of proton)v), which you can solve for v perhaps (c = speed of light, h = Planck's constant, F = frequency of photon). Graeme Bartlett (talk) 11:12, 9 December 2009 (UTC)
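For concreteness, here is a fully relativistic sketch of the calculation (assuming the reaction is γ + p → p + π⁰ with standard particle masses; the problem may intend a charged pion, which would change the numbers slightly). It uses Dauto's observation that, at threshold, proton and pion move together at the centre-of-mass velocity.

 import math

 m_p = 938.272    # proton mass in MeV/c^2 (assumed)
 m_pi = 134.977   # neutral pion mass in MeV/c^2 (assumed)

 # threshold photon energy: invariant mass of the final state equals m_p + m_pi
 E_gamma = m_pi * (1 + m_pi / (2 * m_p))      # ~144.7 MeV

 # at threshold both products are at rest in the CM frame, so in the lab
 # they share the CM velocity (the photon carries momentum E_gamma/c):
 beta = E_gamma / (E_gamma + m_p)
 gamma = 1 / math.sqrt(1 - beta ** 2)
 p_pi = gamma * m_pi * beta                   # pion momentum in MeV/c

 print(f"threshold photon energy: {E_gamma:.2f} MeV")
 print(f"common velocity: {beta:.4f} c")      # close to the estimate above
 print(f"pion momentum: {p_pi:.2f} MeV/c")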

Area of Hong Kong

I was reading the question above about the size of California and I was wondering - has anyone ever gone and added up the total floor space in a dense city like Hong Kong, including all the floors in all those skyscrapers, as well as area on the ground, and compared that to its geographical area (1,104 square km, according to the article)? How much larger would Hong Kong, for instance, be? When viewed in that light, would the List of cities proper by population density change dramatically (i.e. would cities with people living in big skyscrapers come out looking better, i.e. less dense, than cities with lots of "1 story slums")? TastyCakes (talk) 19:23, 7 December 2009 (UTC)[reply]

I vaguely recall that such statistics (total habitable area) are commonly collected by governments, tax administration authorities, electric/water utilities, fire-departments, etc. I can't recall if "Total habitable area" is the correct name. I'm pretty sure that the statistic of habitable- or developed area (including multi-story buildings) as a ratio to total land area is commonly used for urban planning. Nimur (talk) 19:54, 7 December 2009 (UTC)[reply]
Floor Area Ratio. Sorry, the technical term had eluded me earlier. This article should point you toward more explanations of the usage of this statistic. Nimur (talk) 21:05, 7 December 2009 (UTC)[reply]
Ah ok, thanks for that. Have you ever heard of it being calculated for an entire city? TastyCakes (talk) 23:41, 7 December 2009 (UTC)[reply]
That's really the point - it's sort of a zoning ordinance metric for urban areas that's supposed to be more analytic than just capping the maximum number of stories per building. Apparently its use is widespread in urban Japanese zoning codes. It has also seen limited use in American urban planning, e.g. Boulder, Colorado. This source, Studio Basel (apparently a private architecture and urban studies institute), claims that Hong Kong's floor area ratio is 10-12: "the highest urban density in the world". Nimur (talk) 00:32, 8 December 2009 (UTC)[reply]
(I thought I had posted this about six hours ago but I have just come back to find an edit conflict screen) Well, the office space is 48 million square feet. I'll leave it to Steve Baker to add on something for the fractal cracks between the floorboards. This book has some information on residential space but it's well out of date. SpinningSpark 00:23, 8 December 2009 (UTC)[reply]
Actually this is the argument for using floor area ratio as a density metric - it compares developed land area to actual habitable space by dividing the useful square footage by the area of its allocated plot - instead of dividing by some unknown estimate of the total city land area (which would include things like undeveloped hills, trees, spaces between buildings). FAR is more like an integral - it weights each building's floor space by the differential land-area unit that is allocated for it, and then accumulates and averages for the entire city. Cracks between floorboards aren't at issue - but unzoned land and undevelopable terrain are specifically not included in the total statistic. Nimur (talk) 00:37, 8 December 2009 (UTC)[reply]

Ah ok, I get that many square feet as being almost 4.5 square km, less than half a percent of Hong Kong's area. I can't imagine residential areas being hugely larger than that, but maybe I'm wrong? It seems that, even if residential space is several times office space, Hong Kong's usable surface area is only increased a few percent by all those sky scrapers and other buildings. Is that a fair assessment? TastyCakes (talk) 15:27, 8 December 2009 (UTC)[reply]

Only if you count "all land in the borders" as "usable." Another definition of "usable land" might be any land area that is zoned or districted for development. A floor area ratio of 10 means that you are multiplying the effective usable area by a factor of 10. Nimur (talk) 17:13, 8 December 2009 (UTC)[reply]
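A minimal sketch of the arithmetic (all numbers made up): floor area ratio is just total built floor space divided by the plot area it sits on, so a FAR of 10 multiplies the plot's usable area tenfold.

 def floor_area_ratio(floor_areas_m2, plot_area_m2):
     return sum(floor_areas_m2) / plot_area_m2

 # a 40-storey tower of 1000 m^2 floors on a 4000 m^2 plot (illustrative values)
 print(floor_area_ratio([1000] * 40, 4000))   # FAR = 10.0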

Converting from degrees K to F

Could someone please answer this question? Thanks, Kingturtle (talk) 19:58, 7 December 2009 (UTC)[reply]

The formula for converting K to F is F = 1.8K - 459.7 Googlemeister (talk) 20:02, 7 December 2009 (UTC)[reply]
That's rounded; F = 1.8K - 459.67 is the exact formula. "Degrees Kelvin" is obsolete terminology, by the way; they've been just called "kelvins" (symbol K, not °K) since 1968. For example, 273.15 K (kelvins) = 32°F (degrees Fahrenheit). --Anonymous, 21:04 UTC, December 7, 2009.
Google can actually answer these types of questions. What is 1.416785 × 10^32 kelvin in Fahrenheit? -Atmoz (talk) 21:54, 7 December 2009 (UTC)[reply]
WolframAlpha does this too, and gives other (scientific) information about the conversion for comparison. TastyCakes (talk) 23:37, 7 December 2009 (UTC)[reply]
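To make the arithmetic concrete, a one-liner in Mathematica using the exact formula given above:

fahrenheit[k_] := 1.8 k - 459.67
fahrenheit[273.15]  (* 32.: the freezing point of water *)
fahrenheit[0]       (* -459.67: absolute zero *)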

Compact fluorescent bulbs

What is the acceptable temperature range at which you can use these lights? I ask because I want to know if I can use it outside when it is -50 deg, or if it will not work at that temperature. Googlemeister (talk) 19:59, 7 December 2009 (UTC)[reply]

From our Compact fluorescent lamp article: CFLs not designed for outdoor use will not start in cold weather. CFLs are available with cold-weather ballasts, which may be rated to as low as -23°C (-10°F). (...) Cold cathode CFLs will start and perform in a wide range of temperatures due to their different design. Comet Tuttle (talk) 20:16, 7 December 2009 (UTC)[reply]
The packaging will indicate the acceptable range for the bulb. They are universally dimmer when cold, so this may be a persistent issue considering -50 (C or F) is 'pretty darn cold' in the realm of consumer products. —Preceding unsigned comment added by 66.195.232.121 (talk) 21:27, 7 December 2009 (UTC)[reply]

Inductive electricity through glass

With Christmas season here, I had an idea... Many wireless chargers use inductors to "transmit" electricity from a base unit to a device. Does anyone make that sort of thing that transmits electricity from inside the house to outside? I'm not considering a high-powered device. I'm considering the transmit/receive devices to be within an inch of each other on opposite sides of a window. -- kainaw 21:46, 7 December 2009 (UTC)[reply]

I think normal wireless rechargers should be able to transmit through glass. 74.105.223.182 (talk) 23:55, 7 December 2009 (UTC)[reply]
I figure it would, but I don't want to recharge my watch or toothbrush through a window. I'm looking for something to send electricity through a window. I would like to have two parts. The indoor part will have one end plug into an outlet and the other end stick to the inside of the glass. The other part will stick to the outside of the glass and have a socket to plug something into (like Christmas lights - which is what gave me the idea). -- kainaw 01:34, 8 December 2009 (UTC)[reply]
You'll need some conversion electronics to higher frequency. Inductive power transfer at 60 Hz requires very large coils. Most commercial devices I've seen operate at hundreds of megahertz and usually convert back to DC on the receiving-end. See Inductive charging, if you haven't already found that article. Nimur (talk) 07:14, 8 December 2009 (UTC)[reply]
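As a rough sketch of why the frequency conversion matters: the open-circuit EMF induced in the receiving coil is V = 2*Pi*f*M*I, so raising f lets physically small coils transfer useful power. The 1 microhenry mutual inductance and 1 A drive current below are assumptions for illustration, not the specs of any real charger:

emf[f_, mutual_, current_] := 2 Pi f mutual current
emf[60, 1.*^-6, 1.]      (* ~0.0004 V at 60 Hz mains frequency *)
emf[100000, 1.*^-6, 1.]  (* ~0.63 V at 100 kHz with the same small coils *)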
It's a really good idea. If it's not expensive, you should try to bring it to market. (Although unless you can patent it, expect to be copied.) If you need more power, you can increase the area of the device. A suction cup will probably not work - they rarely can stay on for long. Ariel. (talk) 03:42, 8 December 2009 (UTC)[reply]

December 8

reaction of lead sulfate and sodium bicarbonate

While getting a new car battery, I watched them clean a lot of white, lead sulfate corrosion from the terminals by pouring some baking soda mixed with warm water on it. It foamed up quite a bit.

What did that reaction create? Elemental lead?

After, he just wiped it up with a shop towel. How badly contaminated is that towel? I wonder what they do with it. Ariel. (talk) 03:39, 8 December 2009 (UTC)[reply]

The lead sulfate probably did nothing and was probably not there, but the sulfuric acid would have reacted with the bicarb, to release carbon dioxide. After this neutralization the product would be safe to handle. Just sodium sulfate. Graeme Bartlett (talk) 05:46, 8 December 2009 (UTC)[reply]
No, it couldn't have been sulfuric acid - that's a liquid. These were soft white crystals. Maybe they were soaked with sulfuric acid? They did seem kind of wet looking. So they just stayed as lead sulfate? Ariel. (talk) 07:59, 8 December 2009 (UTC)[reply]
There would be some sulfuric acid mixed in with the crystals, that is what makes the fizz. But there could also be copper sulfate from dissolving the copper wires, and calcium sulfate, from dissolving dirt. Any lead sulfate is pretty insoluble and stable.

Rendering a planet uninhabitable

How large would a single explosion need to be to render an Earth-like planet temporarily uninhabitable? Would 5.55×10^20 joules do it? Horselover Frost (talk) 04:42, 8 December 2009 (UTC)[reply]

That would depend on a lot of things, and uninhabitable to whom? Chicxulub crater#Impact specifics says "estimated to have released 4×10^23 joules of energy". Many species survived that. PrimeHunter (talk) 05:01, 8 December 2009 (UTC)[reply]
You would probably enjoy this website: how to destroy the earth. Ariel. (talk) 05:08, 8 December 2009 (UTC)[reply]
Weird. One of the most outrageous methods included in that website is attributed to myself. I can think of a situation where I would have contributed it, but have absolutely no memory of doing so (though I do remember visiting the site in the past). Feels weird to see your name where you don't expect it, perhaps especially when you've got a fairly uncommon name -- I ain't no Bob Smith. Weird. --203.202.43.54 (talk) 09:14, 9 December 2009 (UTC)[reply]
That's a humorous website - but some of his claims are a bit unscientific: "it may be possible to find or scrape together an approximately Earth-sized chunk of rock and simply to "flip" it all through a fourth spacial dimension, turning it all to antimatter at once."[dubious – discuss]. Nimur (talk) 06:08, 8 December 2009 (UTC)[reply]
He knows. Read a little lower: "But since the proposed matter-to-antimatter flipping machine is probably complete science fiction....." Ariel. (talk) 08:10, 8 December 2009 (UTC)[reply]
(1) there is no point in providing 3 (three) significant digits of your energy output when neither the notion of an "Earth-like planet" is well defined, nor the time-span, means, and area of the said 5.55×10^20 joule delivery are specified. (2) To be fair: the Sun churns out about 1400 joules of light per second per square meter of the Earth's cross-section. The Earth's radius being about 6400 km, the said cross-section is approximately pi*(6.4*10^6)^2 ~= 1.3*10^14 m^2. That is, Earth receives about 2×10^17 joules of sunlight every second. That's 6×10^20 J in about an hour. If you deliver your energy over a large area over a long time (much longer than an hour), not much bad is gonna happen. (3) If you deliver it to the Earth's core, no-one would even notice. The heat capacity of iron is about 500 J/kg/K, so 5.55×10^20 joules will increase the temperature of 1×10^18 kg of iron by 1 K. The Earth's core weighs many orders of magnitude more than 1×10^18 kg, so forget it. (4) However, if you deliver your energy at a relatively shallow depth, it's going to produce a pretty nasty earthquake locally. The energy released in a magnitude 8 earthquake is about 4×10^18 J; magnitude 9 is about 30 times more energy. Even if I assume energy conversion close to 100% (I don't think that's possible), 5.55×10^20 J is between magnitude 9 and 10. No global extinction. Sorry :) --Dr Dima (talk) 06:10, 8 December 2009 (UTC)[reply]
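That arithmetic is easy to reproduce; a quick check in Mathematica, using the same round numbers as the post above:

e = 5.55*^20;             (* the proposed blast, in joules *)
sun = 1400*Pi*(6.4*^6)^2  (* ~1.8*10^17 J of sunlight hitting Earth per second *)
e/sun/3600.               (* ~0.9: the blast is roughly an hour of sunshine *)
e/500                     (* ~1.1*10^18 kg of iron warmed by 1 K at 500 J/kg/K *)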
The exactness of 5.55 indicates that there is some context to this. Would you care to give us the source? SpinningSpark 09:28, 8 December 2009 (UTC)[reply]
After some calculation I estimate that 5.55*10^20 joules is roughly equal to one 130-gigaton nuclear explosion, or about 130,000 1-megaton explosions (taking 1 megaton of TNT as about 4.2*10^15 J). Strategically placed, that could probably kill all humans, but it would be a stretch for a random event. Googlemeister (talk) 14:57, 8 December 2009 (UTC)[reply]
It depends ENTIRELY on how this energy is dispersed. For example, the sun dumps something like 1.3×10^17 joules onto the surface of the earth every second...the same amount as your bomb about every hour of every day. A fairly small bomb (on this scale) could wipe out most land-dwelling life by producing a large enough tsunami to scour the continents clean - or by putting enough dust into the atmosphere to cause a 'nuclear winter'. It's hard to say - but knowing the raw energy alone isn't sufficient to allow for a meaningful answer. SteveBaker (talk) 18:32, 8 December 2009 (UTC)[reply]
Right, to take it another way, according to my handy physics text, the energy needed to stop a typical pistol round is about the same as the energy needed to stop a baseball thrown at 90 mph. Because the baseball has an impact surface area that is something like 2 orders of magnitude larger, a baseball team will not regularly kill their catchers. Googlemeister (talk) 21:37, 8 December 2009 (UTC)[reply]

Ok, context. I'm writing a military speculative fiction novel and trying to keep it fairly hard. The scenario I have in mind is a 1016 kg ceramic rod hitting a planet's surface at just short of the speed of light, with the goal of destroying the planet's ecosystem. Unless I dropped a zero somewhere that should have roughly 5.55×10^20 joules of kinetic energy on impact. Horselover Frost (talk · edits) 22:41, 8 December 2009 (UTC)[reply]
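For what it's worth, those figures are mutually consistent; a quick Mathematica sanity check (taking the rod at exactly 1016 kg, i.e. about one ton, as stated):

c = 2.998*^8; m = 1016;
gamma = 1 + 5.55*^20/(m c^2)  (* Lorentz factor ~7.08 for that kinetic energy *)
Sqrt[1 - 1/gamma^2]           (* ~0.99: the rod is moving at about 0.99c *)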

We have an article on that concept: Kinetic bombardment. If you haven't already read it, I suggest you do. --Tango (talk) 23:01, 8 December 2009 (UTC)[reply]
Yes, I've read it. The idea I'm using is closer to a relativistic kill vehicle. What I'm asking is if it's big enough to render a planet temporarily uninhabitable, and if it isn't how much bigger should it be. Horselover Frost (talk · edits) 23:11, 8 December 2009 (UTC)[reply]
I do not know (and I do not know of anyone who has done research on) what the energy deposition curve looks like for a macroscopic relativistic projectile in the Earth's magnetosphere, atmosphere, and crust. Does the energy deposition curve of such a projectile in the Earth's crust have a Bragg peak? What is the characteristic energy deposition depth scale? 100m? 1km? 10km? 100km? What is the primary stopping mechanism? What is the efficiency of projectile energy conversion into gamma radiation, into seismic waves, into kinetic and thermal energy of the debris? What kind of radioactive fallout to expect? How much dust will be kicked up into the stratosphere? Your guesses are as good as mine. Let me say this again: the problem is not just energy conversion. You need to know what the projectile kinetic energy is converted to, into what volume it is deposited, what kind of atmospheric and surface contamination it produces, and how that contamination spreads and decays. BTW, that should also depend on whether it hits ice/water, sand, or rock, and at what angle. --Dr Dima (talk) 00:08, 9 December 2009 (UTC)[reply]
For one, you can up the energy in the projectile as much as you want by making it go a tiny bit faster. Given a single, compact one-ton rest mass rod, I suspect the object will not have much of a chance of depositing energy into the ecosphere. It should go through the atmosphere so fast that there will not be much time for interaction, and the damage should be fairly localized. --Stephan Schulz (talk) 01:05, 9 December 2009 (UTC)[reply]
Yeah - you could imagine something with incredible local intensity - but by the time it hits the ground it'll just keep going - shedding most of its energy FAR underground with little global consequence. I think a good place for our OP to start is with this handy gadget, The Earth Impact Effects Program. You can play with parameters for earth-impactors and see how much damage they do. I'm not sure that the tool will be robust up to these crazy speeds though - the equations and assumptions might easily break down. But maybe you can come up with an Earth-destroyer with a more reasonable set of parameters. The PDF that they reference at the bottom of that page is a great tutorial on impact effects. SteveBaker (talk) 02:44, 9 December 2009 (UTC)[reply]
Thanks for the link. It accepted the numbers, but after playing with it for a while I think I'm going to have to scrap the idea. While trying various values the effects went from strictly local to obliterating the planet with very little in between. I guess I'll go back to my solar shade idea instead. Horselover Frost (talk · edits) 04:02, 9 December 2009 (UTC)[reply]
It would be all but impossible to destroy the ecosystem of the entire planet by hitting just one side of it. You could try to kick up some dust in the air, but that wouldn't destroy everything - many plants would manage with just a little light. You could never get full opacity - most dust will settle very fast, and lots of plants will survive. Also if you make your impactor very fast (and small) it would bury itself underground. You would need something large, but a little slower, for maximum impact.
If your goal is to kill the life, but not the planet itself, I would suggest a gamma ray burst, in particular Gamma ray burst#Rates_and_impacts_on_life. To make a gamma ray burst, a matter/anti-matter bomb would do the trick. If you don't want to deal with the nitric oxide theory, just have a number of bombs carefully spaced some hours apart, and at different latitudes. (Ask on the math desk for where to place the bombs for maximum coverage of a sphere.) I think 3, 8 hours apart ± 22.5 degrees from the equator would do it. Or maybe 5, 3 of them 8 hours apart at the equator, and one each at the poles. If you are shooting from far away, and above the ecliptic, you can overshoot the planet off to the side to get the opposite pole. Ariel. (talk) 10:41, 9 December 2009 (UTC)[reply]
BTW, a matter/anti-matter bomb is pretty hard to build - as soon as the edges come in contact it will explode, and very little of the mass of the bomb will react. Instead make a spray cannon, and just spray some anti-hydrogen at the earth. Not too much. You'll make a beautiful fireball at the edge of the atmosphere, and lots of gamma rays and other radiation that would do a great job of sterilizing the earth. Let me know if you like the idea. Ariel. (talk) 10:49, 9 December 2009 (UTC)[reply]
Yes - but remember that half of the stuff that 'explodes' is still antimatter which immediately hits more matter (the air, the ground, etc), explodes some more - and so on. Given an essentially infinite supply of matter to combine with, unless your bomb goes off in a hard vacuum, all of the antimatter WILL get turned into energy in very little time. Your design can be just a ball of antimatter shot into the atmosphere at moderate speed. There was a while when that was one of the theories for the Tunguska event. SteveBaker (talk) 14:29, 9 December 2009 (UTC)[reply]
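As a scale check on the antimatter idea: annihilation releases E = 2mc^2 (the antimatter plus an equal mass of ordinary matter it meets), so the quantities involved are tiny. A minimal Mathematica sketch:

c = 2.998*^8;
5.55*^20/(2 c^2)  (* ~3.1 kg of antihydrogen yields the 5.55*10^20 J above *)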

Methods of Capital Punishment

So this guy in Ohio is set to get a single drug administered IV rather than the standard triple shot. [33] And the basis of his appeal is basically the bozos at the facility can't reliably find veins.

Which leads me to wonder why we even bother with lethal injection at all? American law insists on painless (or nearly so) executions, so what's wrong with putting a dude in a LazyBoy and slowly pumping all the oxygen out of the room over the course of an hour. Wouldn't the condemned just fall asleep/unconscious, and then eventually painlessly expire? 218.25.32.210 (talk) 05:02, 8 December 2009 (UTC)[reply]

That would be a humane way to execute a dog or cat, but I'm not sure that it would be as humane for a human that was aware of his fate. I think I'd prefer something quicker and more irrevocable. APL (talk) 06:00, 8 December 2009 (UTC)[reply]
I've always wondered why they don't use euthanasia. It makes much more sense than a series of shots that may or may not be humane. Falconusp t c 06:07, 8 December 2009 (UTC)[reply]
Euthanasia is a series of shots. For a variety of reasons, execution by lethal injection previously used multiple shots - a sedative and a muscle-relaxant, and finally potassium chloride. The new Ohio method uses a single barbiturate, without the sedatives or muscle relaxants. Ultimately, the argument boils down to legal definitions about "ethical" and "humane." Personally, I think these legal arguments are very different from a "common-sense based argument", for the same reason that legal claims always deviate from normal, rational, logical thought. Ethics tend to be subject to personal interpretation - so when the State makes an ethical claim, it's always subject to legalese bickering. My personal belief is that the execution would be more humane by certain other methods, such as firing squad or gas chamber, which are both still used in some countries. Lethal injection "pretends" to have a certain sterility and cleanness which I feel is counter to the act of executing a criminal. If the guy deserves to die for his crimes, then he probably deserves to be shot for his crimes; otherwise, we have alternative corrections methods. Nimur (talk) 06:12, 8 December 2009 (UTC)[reply]
I doubt it has anything to do with that—considering it was still used well into the early 1970s (and occasionally since), and has a lot of differences from the Nazi gas chambers. Lethal injection is now practically the standard but I don't think the Nazis have anything to do with that. --Mr.98 (talk) 13:50, 8 December 2009 (UTC)[reply]
My understanding is that there are no attending physicians, which is why you get people who can't find a vein in fifteen tries... 218.25.32.210 (talk) 09:09, 8 December 2009 (UTC)[reply]
Well, according to the linked JAMA article, in 2007 17 of the 38 states with death penalty required a physician, while 18 more allowed a physician's participation. I suspect the problem of medical incompetency primarily arises in the states that do not require a physician and fail to find one willing and competent to do the deed. --Stephan Schulz (talk) 10:24, 8 December 2009 (UTC)[reply]
Is it actually something doctors are usually good at? At least from TV shows (hardly a good source, I know) my impression is that even though it's a skill doctors are supposed to be good at, in reality it tends to be the nurses that do the job most of the time, and so they are the ones who are usually good at it, not doctors. Nil Einne (talk) 18:56, 8 December 2009 (UTC)[reply]
I think this depends. If I remember right (in this case, that's a large if!), in Germany nurses are allowed to administer intramuscular and subcutaneous injections, but intravenous ones are restricted to licensed physicians. This used to be different in the GDR, and one of the smaller problems of reunification has been that the job profile of nurses has changed - we have a number of nurses qualified, trained, and experienced in procedures they are not allowed to do anymore. --Stephan Schulz (talk) 09:42, 9 December 2009 (UTC)[reply]
Surely the most humane way of putting someone to death is nitrogen asphyxiation. Death occurs in about 15 minutes, but during that time you enter a state of euphoria. Michael Portillo did a documentary for the BBC about the death penalty in which he entered a partial state of nitrogen-induced hypoxia, but was pulled out before he died. No doctor needed, no straps, no wounds. --TammyMoet (talk) 10:46, 8 December 2009 (UTC)[reply]
I have heard that proposed. One reason for rejecting it is that some people feel dying in a state of euphoria isn't appropriate for a criminal. --Tango (talk) 14:02, 8 December 2009 (UTC)[reply]
...and some people probably think beating them to death with a truncheon would be inappropriately gentle too. Portillo concluded that there was no method that fulfilled the conflicting criteria. --Dweller (talk) 16:36, 8 December 2009 (UTC)[reply]
It would seem to me that the two contradictory goals of an execution (killing the prisoner without pain, but making the victims' relatives think the prisoner suffered) could be accomplished by destroying the prisoner's brain completely and rapidly with a pneumatic hammer or explosives. Horselover Frost (talk · edits) 00:45, 9 December 2009 (UTC)[reply]
So the question of what kinds of technologies are legally permissible is a tough one, and hard to change, because if you end up on the wrong side of the Eighth Amendment, then you have a legal fiasco on your hands. See also Capital_punishment_in_the_United_States#Methods. For a really interesting film on a related topic, see Mr. Death: The Rise and Fall of Fred A. Leuchter, Jr.. --Mr.98 (talk) 13:50, 8 December 2009 (UTC)[reply]
The sudden imposition of a "short sharp shock" in the French style would seem to satisfy the mostly painless requirement but I don't believe has ever been used in the U.S. But would the victims' families still fill the galleries to watch that execution? Perhaps the choice of method is not entirely dictated by the rights of the condemned. 75.41.110.200 (talk) 16:23, 8 December 2009 (UTC)[reply]
There is significant evidence that the heads remain conscious for up to 30 seconds after being separated from the bodies. I don't know whether they are actually able to feel pain during those seconds, but I would need convincing that it didn't cause significant suffering. --Tango (talk) 16:33, 8 December 2009 (UTC)[reply]
[citation needed]. Come on, this is the Reference Desk. Comet Tuttle (talk) 17:53, 8 December 2009 (UTC)[reply]
Guillotine#Living_heads is a good overview of the fact and fiction surrounding this. Nimur (talk) 18:14, 8 December 2009 (UTC)[reply]
Actually I watched a documentary where a guy did a pretty good study of execution methods, and the one he concluded was most humane was similar to what the OP suggested: Hypoxia (medical). It's extremely cheap, doesn't take very long, quite impossible to stuff up and very humane; in fact, it even gives the person a little bit of a high before they die. The presenter actually submitted himself to an experiment to experience hypoxia and was taken pretty close to passing out - it was extremely interesting. In the experiment, there is a big red button right in front of him, and at ANY time he feels himself in any danger he can press it to stop the experiment; but for the entire time he thinks he is completely fine and never presses the button - he had to be rescued. When he watches the footage back, he's quite shocked to see how delirious and how close to passing out he was; he thought he was doing fine the whole time. Would you believe, when he got his data and petitioned some people - politicians and prison wardens and such - regarding this method of execution, guess what the reaction was? Everyone vehemently opposed his idea, on the grounds that people who are executed should feel a bit of fear and remorse when they die; it's not enough of a punishment if they go out on a high. I don't remember the name of the doco. Vespine (talk) 21:29, 8 December 2009 (UTC)[reply]
I just realised that everything I said is already covered in a couple of posts above.. I only skimmed them the 1st time and missed it.. sorry.. Vespine (talk) 23:40, 8 December 2009 (UTC)[reply]
There are usually four reasons for using punishment of any kind - and capital punishment only relates to three of them:
  1. It removes the criminal from society - thereby preventing them from reoffending.
  2. It is a kind of revenge for the victims - perhaps easing their mental state.
  3. It serves as a deterrent to others.
  4. It provides a means to try to rehabilitate the criminal (in non-capital punishment situations - obviously).
So of the first three of those things: which gain benefit from a more brutal approach? Clearly, so long as the criminal dies - you've scored on the first criterion.
For the second reason - I'm not sure it really helps the victims to have the criminal die painfully - although they may claim it does - at the very least, if we truly believe that this is the reason - we should be asking the victims whether a more or less humane death is required to make them feel better. I'd worry that perhaps the horrible death of the criminal might weigh on their conscience later. But I don't think it helps much.
Perhaps in the third case it makes a difference - but since whatever method is used is generally proclaimed as "humane" (whether it actually is or not), it probably doesn't matter here either. But I doubt that criminals really consider too carefully the precise details of the punishment when they commit crimes - because clearly, if they were thinking coherently, they wouldn't do such a serious thing anyway. I might imagine a criminal weighing the balance between stealing some money and a few years in jail - but I can't imagine anything worth the risk of dying over. So it can only be that they don't believe they'll get caught...hence changing small details of the execution method probably won't make the slightest difference to the rate that these super-serious crimes are committed.
There is of course another option - don't execute the prisoner at all. I think it's worth knowing (and I wish I could come up with a reference - but I don't recall where I read it) that keeping someone in prison for their entire life is actually cheaper in raw dollar terms than administering the death penalty. The extra mandatory appeals and the cost of the actual execution is higher than the cost of jail time...on average. I find that surprising - but I understand it to be true. It's also worth pointing out that sometimes people are later found not to be guilty after all - and the death penalty is really a bit too final. Most of all - I think it's actually easier on the criminal to get a quick death than to languish in the prison system for 30 years. I don't think we're actually dissuading anyone from committing crimes this way - and perhaps the 30-year long grinding horror of life imprisonment without any hope of parole is an even less humane solution. The idea that you'll never have anything nice to look forward to - never have any freedom ever again - that's way more depressing than a fairly quick, relatively painless death...IMHO.
SteveBaker (talk) 00:14, 9 December 2009 (UTC)[reply]
Steve, how old are you expecting these criminals to be when they're caught given you expect the remainder of "their entire life" to be 30 years? 50? --203.202.43.54 (talk) 08:11, 9 December 2009 (UTC)[reply]
I was trying to express how bad it could be. 30 years is on the low end of how bad it could be...the exact number doesn't matter. SteveBaker (talk) 14:23, 9 December 2009 (UTC)[reply]

Air versus Marine Propeller Design

Why do airplane propellers use a twisted airfoil shape for their blades while marine propellers use more of a screw shape? Can the force of lift produced by a rotating airfoil be considered analogous to thrust? Thanks in advance —Preceding unsigned comment added by 141.213.50.137 (talk) 06:02, 8 December 2009 (UTC)[reply]

Please allow me to add a rider sub-question to your question - would it be reasonable to say that air propellers pull while marine propellers push? Or is there really no distinction? 218.25.32.210 (talk) 06:06, 8 December 2009 (UTC)[reply]
Air propellers do not pull. Only push. Newton's second law. Plus some fun stuff about change in air pressure before and after. Hmm, I suppose that given that water is incompressible, and air is compressible, some effects might be different. But I bet they are minor. Don't know for sure though. And actually, with water you have to avoid cavitation if you go too fast, which is a result of water not being compressible. With air you don't have to worry about that. But I would definitely not simplify it to pull/push. Ariel. (talk) 08:06, 8 December 2009 (UTC)[reply]
Water and air have different viscosity and different density. Primarily for these reasons, the optimal shape of a propeller for most efficient thrust generation is different. The distinction between "pulling" and "pushing" is fairly artificial in this context - I wouldn't use that as a part of any explanation for the different shapes. Our propeller article is really a great overview of the qualitative differences between air and marine propellers. It also provides some equations and numeric parameters, so you can calculate the efficiency and other parameters for standard shapes yourself. Nimur (talk) 07:03, 8 December 2009 (UTC)[reply]

Although most propeller airplanes have the props in front, there have been plenty of designs with them at the back, where they are known as pusher propellers. See that page for discussion of why one or the other design is used. There have even been a few planes with propellers in both places, like the NC-4 and the Rutan Voyager.

If you think of the size of typical boat propellers in comparison with the size of the boat, you will see that if the propeller was in front, the entire stream of water pushed backward by it would hit the front of the boat and tend to slow it down. This is not such an issue with airplanes because the props are larger and the stream of air can flow around the airplane easily, especially in the case of props mounted on the wings. --Anonymous, 08:38 UTC, December 8, 2009.

Fixed your link. --Sean 14:06, 8 December 2009 (UTC)[reply]
Oops, thanks. I meant to check whether it'd work the way I had it before before saving, but forgot to. --Anon, 23:32 UTC, Dec. 8.
I don't think there is an actual qualitative difference between water propellors and air propellors. They can both be 'pushers' or 'pullers' - they can each have different numbers of blades - and they both work by 'screwing' through the air (an old-fashioned name for aircraft propellors is "airscrew"). Think of them like a wood screw being driven into a plank. The differences are quantitative - the angle of the blade to the 'fluid', the amount of curvature (they are like little airplane wings in cross-section), the amount of pitch and the length-to-chord ratio. In that sense, they are like little wings - they have an angle of attack, a length and a chord-width - and those numbers are determined by the rate of rotation and the density of the medium through which they are travelling. All of the ideas behind airfoils and wings apply here. Increasing the pitch makes for more thrust - but also more drag. If you make the pitch too steep, an airplane wing will be said to "stall" - where a propellor might be said to "cavitate" - a similar breakdown of smooth flow over the blade. Notice how different kinds of plane have different wing shapes - things like gliders have long, thin wings - supersonic jets have very deep, triangular wings (think "Concorde") - these design differences come about in exactly the same way that propellors come out differently when you optimise them for a dense fluid like water or a thin one like air. Detailed differences appear because some are designed for speed where others are designed for fuel efficiency or some other design criterion. SteveBaker (talk) 18:18, 8 December 2009 (UTC)[reply]
Going to have to disagree a bit with you Steve. While a propeller is very much like a screw for boats, on an airplane the propeller is far more akin to a wing spinning in a circle very quickly. The same aerodynamic forces on the wings are on the propeller, only in this case the lift is in the forward direction. If you were to have the same size propeller on the airplane and it was flat in cross section as a boat's propeller is, it would be far less efficient, even to the point of not providing enough thrust to get the plane off of the ground. The forces from the lift are far greater than those of the corkscrewing motion. In a non-compressible (or rather barely compressible) fluid like water, the aerodynamic lift forces are much smaller and the majority of your thrust will come from the actual corkscrew motion. Googlemeister (talk) 21:28, 8 December 2009 (UTC)[reply]
Wings get almost all of their lift from the angle of attack they have to the airflow. The nonsense put about regarding the Bernoulli principle creating the majority of the lift is easily dispelled by making a model plane with a rectangular cross-section wing and demonstrating that it flies just fine (my father once did that to win a bet - and he was right - the plane flew pretty well considering the drag it had from the vertical leading edge!)...so the angle of attack is key - and whether you consider that as a rotating wing or an 'air screw' is entirely a matter of which words you want to use, because the angle of attack is what makes (for example) a wood screw go into wood. You can get screws for screwing into metal with a choice of finer and steeper pitches - and the amount of torque you need to screw them in - and the speed at which they go in - changes just like changing the pitch on a variable-pitch propellor on an aircraft changes the amount of thrust you get as a function of speed. It's exactly the same thing. SteveBaker (talk) 23:52, 8 December 2009 (UTC)[reply]
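One way to see the "quantitative, not qualitative" point is simple momentum (actuator-disc) theory, where thrust is T = rho*A*v*dv in both media and only the density rho changes. A Mathematica sketch - the numbers here are made up for illustration, not taken from any real propeller:

thrust[rho_, area_, v_, dv_] := rho area v dv
thrust[1.2, 3., 50., 10.]  (* air: a big disc and fast flow give ~1800 N *)
thrust[1000., 0.2, 5., 2.] (* water: a far smaller, slower disc gives ~2000 N *)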

Dreams

I've been having bad dreams for the past couple of weeks. What's a way of combatting bad dreams? jc iindyysgvxc (my contributions) 11:14, 8 December 2009 (UTC)[reply]

Not sleeping? No cheese before bedtime? Sorry, I'm not aware of anything you can do to change your dreams - my understanding is they're not something you can influence. 194.221.133.226 (talk) 11:53, 8 December 2009 (UTC)[reply]
We can influence our dreams. See tetris effect. --Mark PEA (talk) 18:19, 8 December 2009 (UTC)[reply]
We can't give medical advice, I'm afraid. If you feel the need for help with them then you should get professional help from a doctor or therapist. --Tango (talk) 11:55, 8 December 2009 (UTC)[reply]
We can however point you to Nightmare. Dmcq (talk) 12:54, 8 December 2009 (UTC)[reply]
And Lucid dreaming. Fences&Windows 17:42, 8 December 2009 (UTC)[reply]
The thing about dreams is that you are quite unaware of them unless you happen to wake up during one of them. It seems very likely that they are merely the brain's way of "defragging its hard drive", to put it in computer terms. Memories are shuffled around and reorganized - and while that's happening, things get a bit crazy. So the trick here is to not wake up during them. You can't stop them from happening - nor would that be particularly desirable. So try to get as comfortable as possible - make sure the room is quiet - try to sleep for longer without alarm clocks forcing you to wake up before the "defragging" is complete. Anything that lets you get your "REM" sleep done without interruption is going to prevent you from being consciously aware of the bad dream. Better sleep means fewer dreams - bad or good. SteveBaker (talk) 18:03, 8 December 2009 (UTC)[reply]
And if you do wake up during a bad dream, I find it is often helpful to get up, turn the light on, empty your bladder, have a small drink of water, and generally interrupt the thought process, blow the dream out of your mind (they're usually quite 'fragile'), before going back to bed. This dream dispersed, you'll likely have a completely different dream in the next go around. And, unhelpful as it might sound, try not to worry too much about it! Dreams seem to bring up the things you've been thinking and worrying about: the less you worry, the less you'll dream about the worrying thing in a bad way.
Sometimes, of course, a dream is a helpful message that you are worried about something. When I have a specific sort of bad dream about my family, that generally tells me it's time to visit again: I've drifted out of touch. 86.166.148.95 (talk) 19:55, 8 December 2009 (UTC)[reply]
Before going to bed pick a topic you want to dream about, and think, and imagine about it extensively. Not just a fleeting thought, but really think about it. You'll probably dream about it. Another thing to do is if you do get a bad dream, modify it. If you are being chased by a monster, imagine a bite proof suit and a weapon for yourself. It's OK to do that after you wake up. If you imagine it hard enough (write a whole script and story in your mind after), you can change your memory of the event. You will also, over time, train yourself to do it while sleeping. Ariel. (talk) 20:29, 8 December 2009 (UTC)[reply]
I don't think there is ANY evidence for that - there is an enormous amount of nonsense said about dreams and very little of it is true. Show me some citations please. The evidence we do have is that they happen during REM sleep - and if you don't wake up during REM - you don't remember them at all. So undisturbed sleep is the key here. Many people claim that the horrible dream woke them up - but the evidence is that the reverse is the case - you woke up for some other reason - and therefore remember the dream. Dreaming is clearly something the brain needs to do - and by far the most reasonable explanation is that it's reshuffling memories around to improve organisation and recall. Even if you could influence what happens - it would probably be injurious to your mental state because you'd be preventing the optimal rearrangement of memories. SteveBaker (talk) 23:43, 8 December 2009 (UTC)[reply]
Speaking of nonsense, you often mention "defragging the hard drive", but I can't imagine what it really means in terms of human memories. I read in hypnagogia that "suppression of REM sleep due to antidepressants and lesions to the brainstem has not been found to produce detrimental effects on cognition". So REM sleep might not in fact do anything much. The REM sleep article states that it helps with creativity, but without the involvement of memory. (Though memory is a nebulous concept, admittedly.) 213.122.50.56 (talk) 14:52, 9 December 2009 (UTC)[reply]

Body, spinal cord and partial brain transplant?

Head transplants suffer from the problem that there is no means of re-attaching the spinal cord, leaving the subject with paralysis. However, supposing the donor body, spinal cord and its associated brain structures were transplanted into a recipient brain along with nerve growth factors - would the recipient brain be able to wire itself into the donor? P.S. I have no intention of carrying out this experiment. Trevor Loughlin (talk) 11:42, 8 December 2009 (UTC)[reply]

I have a sneaking suspicion that if I answer your question, you will actually perform this -- nonetheless, I will let the next editor mark this as a violation of appropriateness. :)
Brain transplants are problematic because the brain is not only a physical organ as is the kidney, but also the source of a patient's sense of self. Consider it analogous to a situation in which your monitor snaps off your laptop while still under warranty and you send it back to the company. The tech transfers all your data to a new laptop so that you now possess a "new computer" in the sense of a body but the "same computer" in the sense of all of your previous data (files, uploads, etc.) -- in a sense, your organ recipient here will likely take on the identity of the donor, rather than resume his or her previous status. That being said, it's been a classic "rule" in physiology that the central nervous system either cannot, does not, or is completely inadequate/unpredictable in its ability to regenerate. DRosenbach (Talk | Contribs) 14:14, 8 December 2009 (UTC)[reply]
See head transplant and brain transplant.--Shantavira|feed me 14:37, 8 December 2009 (UTC)[reply]
There is the question as to how to keep the body alive while the nerves regrow, if that is even possible. Googlemeister (talk) 14:42, 8 December 2009 (UTC)[reply]
"While the nerves regrow" This is the nub of the whole problem. If there was an easy way - or any way - to get the spinal cord to regrow and reconnect then millions of paraplegic patients would be jumping up and down, and I mean that literally. Of course it would be a big downer for the Paralympic Games. Caesar's Daddy (talk) 14:56, 8 December 2009 (UTC)[reply]
The brain and spinal cord cannot regenerate, but peripheral nerves can. It takes a long time, though. If fibers connecting the spine to the hand are destroyed, it takes months for them to regrow, and there are various issues that may cause the regrowth to fail. Also, this scenario of introducing foreign tissue into the body and requiring it to extend projections through every part creates rejection issues that are just about as nasty as it is possible to imagine. But if you could somehow keep the body alive for months in the absence of any neural control over the lungs, digestive system, etc, I don't see anything in principle that would absolutely prohibit the operation. Looie496 (talk) 16:55, 8 December 2009 (UTC)[reply]
You don't just need to keep the body alive, you need to keep the brain alive. For most organs you have a few hours to carry out the transplant. For the brain you would have a few minutes before irreparable brain damage was caused. I don't think you could remove the brain from one body and get it into the other and connected up to blood vessels fast enough. --Tango (talk) 17:02, 8 December 2009 (UTC)[reply]
You could hook the donor brain to a machine that would be responsible for perfusion prior to completely severing its connection to the donor body and allow that vascular supply to remain until after the recipient vasculature is connected. DRosenbach (Talk | Contribs) 17:56, 8 December 2009 (UTC)[reply]
Neuroregeneration is relevant to this. Fences&Windows 17:41, 8 December 2009 (UTC)[reply]
I don't think any of you doubters bothered to read the head transplant article. Supposedly a monkey survived this dubious experiment for a while. There are inline citations but I still don't believe it. Comet Tuttle (talk) 17:51, 8 December 2009 (UTC)[reply]
All the cases described in that article seem to be about transplanting a head onto a body that still has its original head - that makes it much easier. The original brain controls all the bodies systems so the transplanted head can be severely brain damaged without killing it. --Tango (talk) 18:57, 8 December 2009 (UTC)[reply]

Valence electron counts

Carbon prefers a total valence electron count of 8, whereas many transition metal complexes prefer a total valence electron count of 18. Why is this? Alaphent (talk) 12:18, 8 December 2009 (UTC)[reply]

Carbon isn't a transition metal, first off. Second, Carbon is in period 2, so its electronic configuration only involves 1s, 2s, and 2p. To get 18 you need the third energy level. ~ Amory (utc) 13:49, 8 December 2009 (UTC)[reply]
So to answer your question more directly yet with a spin (pun intended), it's not that carbon has an affinity towards having 8 electrons in its valence for any mystical reason other than conforming to the general phenomenon that all atoms have an affinity for having their valence shell full. Carbon, as mentioned above, has the potential for 8 electrons in its valence but possesses only 6 electrons -- 2 in its first shell and 4 in its second (with room for 4 more to make 8, which is why carbon generally bonds with four other atoms in the form of single bonds, two other atoms with double bonds, or a double and two singles). Because organic compounds and creatures involve lots of reactions between the COHNS atoms (carbon, oxygen, hydrogen, nitrogen and sulfur), people tend to focus on valences of 8, but really, atoms in higher periods will fill their respective shells. The concept is the same, though, and halogens will need one electron to complete their valence shells, regardless of the total electron count. DRosenbach (Talk | Contribs) 14:28, 8 December 2009 (UTC)[reply]
See also detailed discussion under 18-electron rule. –Henning Makholm (talk) 00:45, 9 December 2009 (UTC)[reply]

Anesthesia

How does it work?Accdude92 (talk to me!) (sign) 14:17, 8 December 2009 (UTC)[reply]

Have you read our article on anesthesia?--Shantavira|feed me 14:39, 8 December 2009 (UTC)[reply]
From memory (and without reading the anaesthesia article), the answer (for general anaesthetics) is we really don't know. --203.202.43.54 (talk) 08:31, 9 December 2009 (UTC)[reply]

ref:HEAT MODELLING

Actually that code came as a derivation of a Fourier heat equation with a heat generation term and the transient part kept alive. When solving the code for 5 points I am getting 4 solution plots and one y = 0 solution. All solutions are homogeneous (they pass through the origin). What can be the physical significance of the y = 0 solution? Is the solution already optimized? SCI-hunter 220.225.98.251 (talk) —Preceding unsigned comment added by SCI-hunter (talkcontribs) 17:27, 8 December 2009 (UTC)[reply]

Wikipedia does not have an article called Heat modelling. Can you point out which code you mean? Cuddlyable3 (talk) 18:23, 8 December 2009 (UTC)[reply]

I am referring to the code discussed on 5th Dec, article 2.3 on this page. [35] —Preceding unsigned comment added by SCI-hunter (talkcontribs) 01:27, 9 December 2009 (UTC)[reply]

Stop. You have posted the same problem at the Mathematics desk and Science desk. I suggest the latter discussion is the place for you to add any follow-on comments rather than spreading your problem over 3 sections. Please read carefully the responses you have received. Please sign your posts by typing four tildes at the end. Cuddlyable3 (talk) 17:17, 9 December 2009 (UTC)[reply]

How quickly does caffeine evaporate or decompose?

I hate the taste and smell of all coffee, so I make it infrequently in large pots and keep it in the fridge, mixing it with warm milk and chocolate later. Old coffee tastes the same (equally bad) to me as freshly brewed. My only goal here is to dose myself with the stimulant caffeine.

Should I be stoppering the coffeepot and/or make it more frequently? In particular: How long does it take for half of the caffeine to evaporate away? Does it decompose in solution? Thank you for your kind attention to my artificial mental alertness. 99.56.137.179 (talk) 18:40, 8 December 2009 (UTC)[reply]

I don't think caffeine in coffee evaporates or decomposes to a significant degree. If it did, decaffeination wouldn't be such hard work. However, there are plenty of other sources of caffeine you could try. Tea, coke, energy drinks, caffeine pills, etc., etc.. --Tango (talk) 19:00, 8 December 2009 (UTC)[reply]
The only bad thing that happens when you keep coffee sitting around, in my experience, is that eventually mold grows on it, even in the refrigerator. Stoppering it will probably prevent that. Looie496 (talk) 19:04, 8 December 2009 (UTC)[reply]
Skip the coffee - if all you want is caffeine, go buy a bottle of "Nodoz" caffeine pills - or go to www.thinkgeek.com and buy some penguin-brand caffeinated mints. One of those has about the same caffeine as three cups of coffee or about a dozen cans of Coke. SteveBaker (talk) 19:30, 8 December 2009 (UTC)[reply]
Thank you, but I am not sure of the economics of that. More importantly, I like the ability to titrate on an as-needed basis by sipping from a cup. Pills wouldn't allow that kind of control. 99.56.137.179 (talk) 19:50, 8 December 2009 (UTC)[reply]
Try tea. Do not make the mistake of brewing it for too long - remove the tea/bag within one or two minutes of pouring hot water into the teapot. Another and better alternative is to give up caffeine completely: after suffering withdrawal for one or two weeks, you will feel alert all the time as if you had just drunk a cup of coffee, and sleep much better and wake up feeling alert and refreshed. 78.149.206.42 (talk) 20:15, 8 December 2009 (UTC)[reply]
There is certainly a build-up of tolerance to caffeine if you use it all the time. The trick is to not have high doses of the stuff every single day - because it basically stops working after a while. It's most effective when you use it for a couple of days - then stop for at least a week or so. You don't get withdrawal symptoms and you don't build up that tolerance that forces you to have to take more and more of it to produce the desired effect. Caffeine is an exceedingly well-studied drug. There are LOTS of details about effective doses and tolerance issues in our article - you can easily take advantage of what it says and get the benefits of an occasional boost without the issues of a build-up of tolerance. SteveBaker (talk) 23:37, 8 December 2009 (UTC)[reply]
Oddly, I don't find any of this to be true from personal experience. I've been drinking tea constantly for 20 years or so and it hasn't stopped working. Recently I gave up for a month out of curiosity, and the only effect was that I spent the month feeling as if I hadn't just had a cup of tea. I neither had withdrawal symptoms nor a special holy buzz of natural purity, just the absence of the buzz of a cup of tea. Oh, and 1 to 2 minutes is pathetic (the packets generally recommend 5 minutes, but sometimes I leave the bag in, doesn't seem to make much difference). Maybe 1 to 2 minutes for a total tea n00b who's still acquiring the taste, though. Have it with milk, obviously, or it's nasty. 213.122.50.56 (talk) 15:26, 9 December 2009 (UTC)[reply]
  • If you want to have caffeine, why not drink cola instead? It adds lots of sugar, but you get rid of the bitter coffee taste. I also discovered flat, paper-like energy products (they have similar chewing gum) filled with caffeine. - Mgm|(talk) 12:13, 9 December 2009 (UTC)[reply]
Obviously it's up to you what you do, but I would recommend thinking it through before you start using lots of caffeine, and possibly develop a dependence on it. Sorry, I know that's not what you were asking. Falconusp t c 12:21, 9 December 2009 (UTC)[reply]
This "dependance" thing is very overrated. I drink coffee at work - and regularly give it up for a week or two when I'm on vacation and over holidays. The withdrawal symptoms are WAY overrated by the anti-caffeine fanatics. A mild headache once in a while for maybe a day - easily fixed with an asperin. Drinking a can of Coke or Pepsi specifically to get a shot of caffeine is also kinda silly - there is only about a quarter of the amount of caffeine in a can of coke compared to a cup of regular filter coffee (Coke: 34mg, Filter coffee 115 to 175mg)...or to put it another way, a couple of cups of DECAFFEINATED coffee have the same amount of caffeine as a can of Coke! (Yes, you holier-than-thou folks who swig back the decaff...it's not zero caffeine...you could be getting as much as 15mg per cup...that's a US standard cup - not an industry-standard-mug of the stuff!) SteveBaker (talk) 14:17, 9 December 2009 (UTC)[reply]

Modelling sound in a breeze

I'm wondering how to mathematically model how the intensity of sound diminishes from its source, with the added complication of a steady breeze. The breeze would make the sound travel further in some directions. The breeze is gentle enough not to add any further noise. Does anyone have any idea how to do this please? 78.149.206.42 (talk) 20:09, 8 December 2009 (UTC)[reply]

Such a breeze is equivalent to the source (and any stationary observers) moving with the opposite velocity in still air. Does that help? --Tardis (talk) 20:29, 8 December 2009 (UTC)[reply]

Got a formula for that please? I'm perplexed by what happens upwind of the noise source, since you can still hear a noise-source when standing upwind of it. 78.149.206.42 (talk) 21:48, 8 December 2009 (UTC)[reply]

Unless the breeze were blowing faster than the speed of sound, you will always hear the noise source. --Jayron32 22:11, 8 December 2009 (UTC)[reply]
The article Doppler effect explains what happens. Looie496 (talk) 23:03, 8 December 2009 (UTC)[reply]
In terms of distance - imagine that the air is standing still and the world is moving past it - because from the point of view of the sound wave, that's exactly what's happening. You're going to get some Doppler shift because of the way the air moves past the sound source and destination - but that won't affect the distance much. So in still air, it is roughly true to say that the intensity of the sound falls off as the inverse square of the range - which (for constant speed of sound) means that it falls off as the inverse square of the time it takes to get somewhere. When there is a wind blowing, that doesn't change - so the intensity of the sound at a given distance is greater down-wind than it is up-wind because the downwind sound is moving at the speed of sound in still air plus the wind-speed - and on the upwind side, it's the speed of sound in still air MINUS the wind speed. Given that the speed of sound is somewhere around 700mph (depending on a lot of variables), a gentle 7mph wind will alter the time it takes to get somewhere by plus or minus 1% - and the effect that has on the intensity (which is the inverse square of that time) will therefore vary depending on how far away you are. There is a second order effect - the inverse-range-squared thing is only an approximation because frictional forces in the air are absorbing some of the sound...that wouldn't matter much except that - because of the Doppler effect - the frequency of the sound will also change slightly because of the wind. The attenuation of sound in air varies with frequency - the higher frequencies being more strongly attenuated than the lower frequencies. This makes the quality of the sound change as the higher frequencies become inaudible faster than the lower frequencies. The effect of this on overall "intensity" gets complicated to estimate because it depends on the frequency components of the sound in the first place. SteveBaker (talk) 23:24, 8 December 2009 (UTC)[reply]
The steady breeze model may be unrealistic: real wind moves faster with height, and real air will probably have turbulence. Graeme Bartlett (talk) 02:33, 9 December 2009 (UTC)[reply]

How do you think I should modify the standard one over distance squared formula to include the breeze? 84.13.190.195 (talk) 11:09, 9 December 2009 (UTC)[reply]

Perhaps like this:

 Intensity =             K
             -------------------------
             (d ( 1 - v cos(ø) / V ) )²

where K = constant
      d = distance source to receiver
      v = breeze speed
      ø = angle between sound path and breeze
      V = speed of sound

Cuddlyable3 (talk) 17:53, 9 December 2009 (UTC)[reply]
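Plugging illustrative numbers into that formula (assuming K = 1, a 3 m/s breeze, V = 340 m/s, and d = 100 m - none of these come from the question), in Mathematica:

intensity[d_, phi_, v_, vsound_] := 1/(d (1 - v Cos[phi]/vsound))^2
intensity[100., 0., 3., 340.]  (* downwind (ø = 0): ~1.8% louder than still air *)
intensity[100., Pi, 3., 340.]  (* upwind (ø = Pi): ~1.7% quieter *)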

Vision acuity measurements

Could vision be measured as any other numbers besides 20/x or 6/x? I thought 6/20 or 6/12 is metric and 20/70 and 20/40 are customary. Do some people calculate by 4/15 or 4/8?--209.129.85.4 (talk) 20:24, 8 December 2009 (UTC)[reply]

The reason for the x/20 or x/6 is that the measurements are based on the ability to see features on an eye chart at a distance of 20 feet (in imperial units) or 6 meters (about the same distance in metric units). For a lot more detail, you'll want to have a look at our article: Visual acuity#Visual acuity expression. Apparently in some countries it is an accepted practice to reduce the value to a decimal (that is, 10/20 vision can be written as 0.50). TenOfAllTrades(talk) 20:58, 8 December 2009 (UTC)[reply]
Careful - the 20 (or 6) goes first. 20/10 is vision twice as good as "normal", and would be 2.00 in decimal. 0.50 would be 20/40. --Tango (talk) 21:23, 8 December 2009 (UTC)[reply]
And you could say 20/40, for example, as "this person can see at 20' what a 'normal' person can see at 40'" —Preceding unsigned comment added by 203.202.43.54 (talk) 08:39, 9 December 2009 (UTC)[reply]
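The decimal notation mentioned above is just that fraction evaluated - a trivial Mathematica sketch (the function name is mine, not a standard one):

decimalAcuity[testDist_, normalDist_] := testDist/normalDist
decimalAcuity[20., 40.]  (* 0.5: sees at 20 ft what a normal eye sees at 40 ft *)
decimalAcuity[6., 12.]   (* 0.5: the same acuity in the 6 m notation *)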

December 9

mathematica model of diffusion

I don't know if my problem is how I implemented the code. I am simulating diffusion of nitrogen into a metal with a constant surface concentration to contrast with an analytic solution to diffusion. The system is 20 micrometres deep (to evaluate how the concentration at 10 micrometres changes over time) -- there is no mass transfer through the end node.

 
(* Calculating the Constants *) 
Dlt = 0.144816767
Dif = 1.381 * 10^-9 
Dlx = 2 * 10^-5 
Const = Dlt*Dif /((Dlx)^2 ) 

(* Initialising the Array *) 
s = Array[0, {24859, 101} ] 
s[[1]] = Table[0, {i, 101}] 

(* Setting up constant surface concentration for All t *) 
s[[All, 1]]  = 0.0002

(* setting up general concentration-calculating algorithm for each \
position in a row t*)

c[t_, n_]  := 
 s[[t - 1, n]] + 
  Const *( s[[t - 1, n + 1]] - 2*s[[t - 1, n]] + s[[t - 1, n - 1]])  


(* Assembling a data row of iteratively - 
  calculated positions for each array row t) 
f[t_] := Table[c[t, i], {i, 2, 100}]

(* calculating the end node at the end of each row t *) 
g[t_] := s[[t - 1, 101]] - 
  2* Const * (s[[t - 1, 101]] - s[[t - 1, 100]])

For[i = 2, i < 24859, i = i + 1, s[[i, 2 ;; 100]] = f[i]; 
 s[[i, 101]] = g[i]]

(This gives me an array that I can then evaluate and present through various Manipulate[] and ListLinePlot[] functions.)

The problem is that I know from my analytical solution that the concentration at 10 micrometres is supposed to go to 1.5 * 10^-4 g/cm^3 in about an hour, but my simulation has it reach that in around a quarter of an hour. I don't think it's my constants. The activation energy per atom is 0.879 eV, and the temperature is 500 K. The temperature-independent diffusion constant (D_0) is 1 cm^2/s (hence D = 1 cm^2/s * e^(-0.879 eV / (500 K * k_B)) = 1.381 * 10^-9 cm^2/s). I'm sure I've satisfied the Von Neumann stability criterion -- I'm trying to do this in about 100 steps, so dx = 20 micrometres / 100 = 2 * 10^-5 cm, and based on the stability criterion the largest possible time interval to prevent residual error buildup is approx 0.14482 seconds per "step". (Hence 24859 time nodes to make roughly an hour.)

My attack so far is to define each new cell's concentration (at a particular time t) from the known cells' concentrations at time t-1, based on the concentrations at that time at the node before, at and after. (This is function c[t,n].) Then I assemble an entire row for that time t to feed into the array (function f[t]), as well as calculating the end node (function g[t]). Then I have an iterative loop to calculate each new row from the row calculated before it. I define my initial conditions (surface concentration = 2 * 10^-4 g/cm^3, and no nitrogen in the metal initially) and let it run. What's my problem? John Riemann Soong (talk) 01:42, 9 December 2009 (UTC)[reply]

Help, anyone? This is basically like Fick's laws of diffusion and stuff, but used discretely. John Riemann Soong (talk) 15:22, 9 December 2009 (UTC)[reply]
I don't see anything immediately wrong with your algorithm, although the code doesn't look very idiomatic to me (I would write
up[l_]:=Take[l,{2,-2}]+k*(Drop[l,2]+Drop[l,-2]-2*Take[l,{2,-2}]) (* one explicit time step on the interior nodes *)
up2[l_]:=Prepend[Append[l,l[[-2]]],c0] (* add fixed left value and mirror-symmetric right value *)
up3[l_]:=up2[up[l]]
k=0.144816767*1.381*^-9/2*^-5^2; c0=0.0002; s0=up3[Table[0,{102}]] (* k = D dt/dx^2 ≈ 0.5 *)
s=Nest[up3,s0,24859] (* or: *)
i=0; s=NestWhile[up3,s0,(++i;#[[50]]<1.5*^-4)&] (* stop when the monitored node reaches 1.5*10^-4 *)
where the Nest[] chooses a fixed number of steps and the NestWhile[] waits instead for the 1.5×10^-4 to be reached). From the latter I get i=10258 (24.4 minutes). That's probably not what you get: it's not "around a quarter of an hour". Maybe you should post your analytical solution here for more detailed comparison? Perhaps your actual code too; what you've written doesn't work (surely you want Table[] instead of Array[], one comment is unterminated, and s[[-1]] is unused). I tried to fix it, and got the same result of 10258 steps. --Tardis (talk) 17:28, 9 December 2009 (UTC)[reply]
Does using Table[] make it run faster/cleaner? I also don't know where I refer to s[[-1]]. (The commenting error is a residual thing from copy/paste issues whoops.) Hold on about to post my analytic solution. John Riemann Soong (talk) 18:33, 9 December 2009 (UTC)[reply]
The problem I solved analytically was here. I know I did it correctly, because I got 10/10 for the analytic part. Basically, we know D_0 = 1 cm^2/s (a given value), T = 500 K, and surface concentration = 0.0002 g/cm^3 (like above). I used an analytic solution to solve for the activation energy, knowing that at a depth of 10 micrometres the concentration is 0.00015 g/cm^3 after 1 hour:
C(x,t) = C_s - (C_s - C_0) * erf(x / (2*sqrt(D*t))), with C_0 = 0 and D = D_0 * exp(-E_a / (k_B * T)). At x = 0.001 cm (the 10 micron depth) and t = 3600 s:
0.00015 g/cm^3 = 0.0002 g/cm^3 * (1 - erf(0.001 cm / (2*sqrt(D * 3600 s))))
erf(0.001 cm / (2*sqrt(D * 3600 s))) = 0.00005 g/cm^3 / 0.0002 g/cm^3 = 0.25
0.001 cm / (2*erfinv(0.25)) = sqrt(1 cm^2/s * exp(-E_a / (k_B * 500 K)) * 3600 s)
exp(-E_a / (k_B * 500 K)) = (0.001 cm / (2*erfinv(0.25)))^2 / (1 cm^2/s * 3600 s)
E_a / (k_B * 500 K) = 2*ln(2*erfinv(0.25) / 0.001 cm) + ln 3600 ≈ 20.41
E_a = 500 K * k_B * 20.41 = 1.41 * 10^-19 J = 0.879 eV John Riemann Soong (talk) 18:54, 9 December 2009 (UTC)[reply]
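For what it's worth, that algebra collapses to a one-liner that can be cross-checked directly (a sketch replaying the derivation above; InverseErf is Mathematica's inverse error function, and the Log[D_0] term vanishes because D_0 = 1 cm^2/s):

 kB = 1.380649*^-23;                               (* J/K *)
 T = 500; x = 0.001; t = 3600.;                    (* K, cm, s *)
 Ea = kB T (2 Log[2 InverseErf[0.25]/x] + Log[t])  (* ≈ 1.41*10^-19 J *)
 Ea/1.602*^-19                                     (* ≈ 0.88 eV *)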

Green lightning?

I'm in Milwaukee, WI, and we're getting quite a bit of snow here. I looked out of the window as I was doing my homework on my computer and I saw two bright bluish-green flashes outside (coming from the sky) within 5 secs of each other. They were accompanied by quiet vibrating sounds. I'm not in the city, so I don't think it's light pollution or anything. Any idea what this might be? 76.230.148.207 (talk) 02:35, 9 December 2009 (UTC)[reply]

It was probably a Navigation light on an airplane. Ariel. (talk) 03:19, 9 December 2009 (UTC)[reply]
No way; it was way too close and big! It was like it was coming right from the eaves of my roof! 76.230.148.207 (talk) 03:39, 9 December 2009 (UTC)[reply]
thundersnow? 75.41.110.200 (talk) 03:29, 9 December 2009 (UTC)[reply]
A power transformer shorting out is usually accompanied with a brilliant bluish-green flash. I guess green is from copper in the wires or terminals. Could be a transformer on a utility pole close by. --Dr Dima (talk) 05:26, 9 December 2009 (UTC)[reply]
Aren't transformer explosions usually accompanied by more than quiet vibrations (if you are close enough to see one in a snowstorm)? Unless the snow damped the effect, which is very possible. Falconusp t c 12:17, 9 December 2009 (UTC)[reply]
Not an explosion, a short. A short makes a bright (usually blue/white - but so bright it's hard to see, plus dangerous to look at - full of UV) flash, with a loud humming sound, then a bang or a crackle. Was it very windy that day? Ariel. (talk) 12:55, 9 December 2009 (UTC)[reply]
It could have been a meteor burning up. Some meteors burn with a green light (I've seen one myself over Barnsley, Yorkshire about 10 years ago), and in a few days the Geminids will be in full flow. The one I saw also "sang" as it went overhead. --TammyMoet (talk) 14:59, 9 December 2009 (UTC)[reply]
Sound from meteors is widely reported but it is not well understood. The "meteorgenic radio-wave induced vibrating eyeglasses" theory is the most plausible of many implausible explanations. Nimur (talk) 15:26, 9 December 2009 (UTC)[reply]

Re: Science Question

When you move or crumple a sheet of paper you cause a change in a. state b. mass or weight c. position or texture or d. size or position? —Preceding unsigned comment added by 75.136.12.225 (talk) 02:36, 9 December 2009 (UTC)[reply]

I bet you do.
Please do your own homework.
Welcome to the Wikipedia Reference Desk. Your question appears to be a homework question. I apologize if this is a misinterpretation, but it is our aim here not to do people's homework for them, but to merely aid them in doing it themselves. Letting someone else do your homework does not help you learn nearly as much as doing it yourself. Please attempt to solve the problem or answer the question yourself first. If you need help with a specific part of your homework, feel free to tell us where you are stuck and ask for help. If you need help grasping the concept of a problem, by all means let us know. DMacks (talk) 02:47, 9 December 2009 (UTC)[reply]
If I were you, I'd look up all of the aforementioned Wiki articles and see for yourself (I linked them for your convenience). DRosenbach (Talk | Contribs) 03:27, 9 December 2009 (UTC)[reply]
Friendly reminder: Don't edit other editors' posts, even if it's just to add wikilinks - the RefDesk guidelines are quite clear on this. -- Scray (talk) 19:24, 9 December 2009 (UTC)[reply]

Veterinary anesthesia

Does anyone know what is used for induction in veterinary anesthesia? DRosenbach (Talk | Contribs) 03:27, 9 December 2009 (UTC) Forget I even asked. DRosenbach (Talk | Contribs) 03:30, 9 December 2009 (UTC)[reply]

This is the reference desk. We can't forget you even asked. Here's your obligatory reference. Induction of Anesthesia with Diazepam-Ketamine and Midazolam-Ketamine in Greyhounds (2008). Different animals and different medical needs will require different chemicals. If you need veterinary care, see the usual reference desk medical/veterinary disclaimer. Nimur (talk) 15:11, 9 December 2009 (UTC)[reply]

Freezing rain affected by a lake?

As I watched the television news about the major winter storms in the Great Lakes region of the USA, I noticed that most of Lake Michigan was receiving freezing rain. Although the northern and eastern boundaries of the freezing rain area (past which it was snow) were in the middle of the lake, the southern and western boundaries (past which it was rain) followed the lake's shoreline almost exactly. Can the lake really affect the type of precipitation, or is this more likely an error with the Doppler radar? Nyttend (talk) 04:07, 9 December 2009 (UTC)[reply]

Maybe these articles will answer your question: Lake effect snow, Great Salt Lake effect. Ariel. (talk) 07:24, 9 December 2009 (UTC)[reply]
Freezing rain is critically dependent on temperature, and large lakes certainly affect the temperature noticeably. That said, it doesn't make sense to say that the boundary of the freezing-rain area was in the middle of Lake Michigan. Freezing rain is possible only when the rain falls onto ground that is below the freezing point, not onto liquid water in a lake! (It would be different if the lake were frozen over, of course.) --Anonymous, 09:35 UTC, December 8, 2009.
Freezing rain is possible on a boat of course, and can cause a lot of problems. Looie496 (talk) 16:02, 9 December 2009 (UTC)[reply]

Rhodonite oxidation

Rhodonite, the pink/red coloured gem material, will oxidise on the surface. This may take a couple of days to a couple of years. Not sure why there is a big time difference, but that is another topic. Polished rhodonite does not oxidise. I wish to find out the best way to prevent oxidation of unpolished rhodonite. I am using a piece (15 kg) as a memorial stone and do not want a pink rock turning black in the future. I don't want to use epoxy coatings, or polish it. At this time I'm considering an oil coating, such as vegetable oil or new mineral oil, to prevent the air contact that causes oxidation. Yarraford (talk) 04:19, 9 December 2009 (UTC)[reply]

Are you sure? I don't think Rhodonite can oxidize - it's already fully oxidized. The different colors are from different minerals in it. Maybe it turns black for some other reason? (If in fact it does turn black - you should double check.) Ariel. (talk) 07:27, 9 December 2009 (UTC)[reply]
de-WP states that black streaks are from MnO2. --Ayacop (talk) 15:00, 9 December 2009 (UTC)[reply]

Wind power from tightly stretched band

The other night on TV (Canada, West coast) I saw a company that was using a principle of vibration (flapping, sort of) from a tightly stretched band with magnets and coils, with air from a desk fan blowing across it. They didn't, as far as I could tell, have a "production"-level product. I'm trying to figure out what scientific/physical principle this was using, and if possible who this was. Help? --Kickstart70TC 05:11, 9 December 2009 (UTC)[reply]

Oops...found it: Windbelt, which is a horrendous article, FWIW. --Kickstart70TC 05:25, 9 December 2009 (UTC)[reply]
The third link to the YouTube video, assuming it's the same one (an interview with the developer) I saw last year when researching this for a lecture, will tell you all you really need to know. 218.25.32.210 (talk) 06:31, 9 December 2009 (UTC)[reply]

DNA data bases

Are DNA databases good enough to allow someone to state in their will that they want to leave their estate to the person(s) whose DNA is the closest match to their own, as opposed to leaving their estate to the person(s) with the greatest legal status? 71.100.160.161 (talk) 06:02, 9 December 2009 (UTC) [reply]

That's more a question of what (local) law allows than a question about quality of databases. Clearly regardless of the quality (or really, size) of the database, one could always define "closest" in such a way that there's an heir; the question is will the law allow such a capricious distribution of an estate. If there are heirs otherwise entitled to inherit, such a provision would certainly result in prolonged legal battles and make many lawyers and few heirs rich. - Nunh-huh 06:12, 9 December 2009 (UTC)[reply]
The person whose DNA was most similar to theirs would undoubtedly be a close family member, so a huge database is really not required - just sequence/genotype the siblings and children. BTW, only a very few people have had their whole genome sequenced. The commercial companies only sequence small, variable regions or look for SNPs on a chip. A few now offer full genomes, but even those generally cover only about 90% of a genome. Aaadddaaammm (talk) 08:30, 9 December 2009 (UTC)[reply]
The OP's idea was possibly to find unknown relatives outside the known family -- which wouldn't be allowed if one did it just for the sake of knowledge, but could be in the case of an inheritance. --Ayacop (talk) 14:51, 9 December 2009 (UTC)[reply]

Could the Hubble Space Telescope have imaged damage to the Space Shuttle Columbia?

The Columbia Accident Investigation Board report discusses multiple requests for DoD imagery (both ground-based and space-based) submitted by engineers who were concerned about possible damage from the foam strike during Space Shuttle Columbia's final launch. (The requests were quashed by NASA management who erroneously believed both that the strike was unlikely to have caused significant damage, and that there was nothing that could be done to help if significant damage had occurred.) Could the Hubble Space Telescope have imaged the orbiter? (Potential problems could include orbital alignment, focus, exposure times, and tracking ability.) Has the HST ever imaged an artifact in earth orbit? -- 58.147.52.66 (talk) 08:25, 9 December 2009 (UTC)[reply]

Ignoring everything else, focus would be a problem. An astronomical telescope is not constructed to focus on objects closer than "infinity"; therefore it cannot resolve details smaller than its main mirror (2.4 m for Hubble) at any distance. Even apart from this, the best-resolving camera aboard Hubble has a resolution of 40 pixels per arcsecond. Imaging the orbiter from a distance of 1000 km (which would be a rather lucky break, and would probably demand faster tracking than the on-board software is written to provide), this translates to some 12 cm per pixel. A hole of the size estimated by the CAIB would not have been visible on so fuzzy an image. –Henning Makholm (talk) 09:13, 9 December 2009 (UTC)[reply]
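The pixel-scale arithmetic above is easy to reproduce - a rough sketch, taking the 40 pixels/arcsecond and the 1000 km range as given:

 arcsec = Pi/(180.*3600);   (* radians per arcsecond *)
 range = 1.*^6;             (* 1000 km, in metres *)
 range arcsec/40            (* ≈ 0.12 m per pixel *)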
Here is some data on the HST's tracking capabilities, in the context of observing the Moon. It is a few orders of magnitude too slow to follow any object at or below its own altitude: such an object would either be moving at orbital speed at most several thousand kilometers away, or be behind the horizon. –Henning Makholm (talk) 17:34, 9 December 2009 (UTC)[reply]
All very true. There were, however, other cameras in orbit (some military satellites) that could have photographed the shuttle and resolved the damage - and those devices have indeed been used for this purpose subsequently. We were not provided with details because these are secret spy satellites - but they could do the job because they are designed to focus at distances comparable to their orbital height and resolve down to centimeters. SteveBaker (talk) 13:42, 9 December 2009 (UTC)[reply]
This LIDAR image from the Air Force Starfire Optical Range imaged Columbia at a range of probably under 100 km. I doubt a spacecraft could have done better, or could plausibly have been at closer range. Nimur (talk) 15:37, 9 December 2009 (UTC)[reply]

IPCC models

The CO2 in the atmosphere constantly interacts with the earth/plants and the sea, so an increase in atmospheric CO2 leads to increased CO2 in the oceans. How do the IPCC climate models allow for this effect? --Samweller1 (talk) 12:51, 9 December 2009 (UTC)[reply]

As I understand it (and I don't do so very well), normal Global climate models do not model the carbon cycle; i.e., the change in atmospheric CO2 is provided as an input, based on assumptions about human emissions and estimates of other carbon sources and sinks. CO2 in the ocean has essentially no direct influence on the climate. The main effect is that atmospheric concentrations are lower than they would otherwise be. Understanding more indirect effects (e.g. the limits of the ocean's ability to act as a sink, or the influence on oceanic food chains) is ongoing work, and those effects are modeled independently. Our article on transient climate simulation might also be of interest. --Stephan Schulz (talk) 13:03, 9 December 2009 (UTC)[reply]

if global warming is a problem why don't we put up a thermostat

why don't we just put a sliver of something reflective in orbit around the sun in lockstep with Earth, but closer to the sun (that way you don't need a lot of this thing) and then adjust it to block as much/little light as we need for optimum temperature/to counteract any global warming occurring? note: this is not a request for medical advice. And saying that saying this is not a request for medical advice does not not make it a request for medical advice does not make it a request for medical advice. 92.230.65.75 (talk) 13:49, 9 December 2009 (UTC)[reply]

We have an article on this: Space sunshade. - Fribbler (talk) 13:54, 9 December 2009 (UTC)[reply]
The idea has been proposed, but you can't put something in "lockstep with Earth, but closer to the sun" - orbital period is determined by the size of the orbit. The only real option is L1, which isn't that much closer to the Sun than the Earth. That means it has to be very big, which makes it very difficult and expensive to make. --Tango (talk) 13:57, 9 December 2009 (UTC)[reply]
Sounds to me like space elevator inventor Jerome Pearson's suggestion of forming a ring around the Earth.[36] Nanonic (talk) 14:00, 9 December 2009 (UTC)[reply]
A ring around the Earth, rather than just at L1, is an option, but probably not a good one. It would probably have to be bigger overall and it would get in the way of near-Earth space travel. --Tango (talk) 14:04, 9 December 2009 (UTC)[reply]
There are several problems: technical, political, economic and ecological. Technically, we don't know how to build such a thing right now. Mathematically, since the sun is larger than the Earth, the closer you move the shade to the sun, the less sunlight it would block. So the best you can do is indeed to put it into orbit or at L1 (which is unstable). Politically, whom would you trust to control it? Assuming it's me, having Austin, Texas, in perpetual darkness might be a nice idea, but what if I get bored with that and shadow (or, better, light) something else? Economically, it's likely to be much more expensive to build and maintain than it would be to fix our CO2 habit here on Earth. And ecologically, we would receive not only less energy, but less light. Nobody knows what effect that would have. And it would still cause significant local climate change, as greenhouse gases affect not only the total energy budget but also the local distribution of energy. We do not know enough to predict the overall effects of such a thing, even assuming it technically works flawlessly. --Stephan Schulz (talk) 14:15, 9 December 2009 (UTC)[reply]

couldn't it remain in lockstep with Earth, but closer to the sun, by expending energy (as opposed to just passively orbiting on its inertial momentum)? If it were much closer to the sun it could be much, much smaller... Also: couldn't it get some of the energy just mentioned directly from the sun? Is there a way to turn solar energy into thrust in space? Thanks. Still not asking for medical advice, by the way. 92.230.65.75 (talk) 14:20, 9 December 2009 (UTC)[reply]


Oh. I just read the second comment, mentioning that as you get closer to the sun you block less and less light from Earth. Like some bad math joke, my logic went: assume the sun is a point-source... 92.230.65.75 (talk) 14:22, 9 December 2009 (UTC)[reply]

Assume a spherical cow... Fences&Windows 14:24, 9 December 2009 (UTC)[reply]
Wikilinked, just because we have an article on everything (EC x 4!!) -- Coneslayer (talk) 14:30, 9 December 2009 (UTC)[reply]
...and here is my ec'ed comment, about half of which is still relevant ;-): As pointed out above, since the sun is larger than the Earth, the farther you move something towards the sun, the less light it will block. Note that solar shadows (as opposed to shadows cast by an approximate point source) do not get larger as the distance between object and screen increases. They just get more diffuse until they vanish. Apart from that, you can use solar panels and an ion drive for station keeping (but you still need reaction mass), or possibly use solar sails, although this will be far from trivial to figure out and will certainly need active control. --Stephan Schulz (talk) 14:28, 9 December 2009 (UTC)[reply]
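To put a number on how diffuse such a shadow is: by similar triangles, a disc at L1 casts no true umbra on the Earth at all unless it is thousands of kilometres across - anything smaller only dims the whole solar disc slightly. A rough sketch of the geometry (assumed round figures: Sun-L1 ≈ 1.481×10^8 km, L1-Earth ≈ 1.5×10^6 km):

 rSun = 6.96*^5; dSunL1 = 1.481*^8; dL1Earth = 1.5*^6;  (* km *)
 (* the umbra of a disc of radius r ends r*dSunL1/(rSun - r) behind it *)
 r /. First@Solve[r dSunL1/(rSun - r) == dL1Earth, r]   (* ≈ 7000 km *)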
Scientists now believe that the principle whose name is derived from the Italian para- "defense against" (from the verb parare "to ward off") + sole "sun" can be implemented to create human-deployable collapsible sources of shade. Wikipedia has an article on parasol technology. Cuddlyable3 (talk) 18:34, 9 December 2009 (UTC)[reply]

Health clinic in Sevilla, Spain

where can I find a health clinic in Sevilla, Spain that deals in STD's? Thanks —Preceding unsigned comment added by 80.58.205.49 (talk) 15:46, 9 December 2009 (UTC)[reply]

It appears there may once have been an STD clinic/diagnostic centre at the University of Seville School of Medicine, but I don't know if it still exists. If you speak Spanish, perhaps you can work it out from their website [37]. I can't offer much more help - perhaps someone else could - except to say you should be able to go to any Sevilla general practitioner (according to our article, in Spain probably based at a primary care centre) and they'll be able to direct you to an appropriate clinic if it's not something they can deal with themselves, while protecting your confidentiality & privacy as they should always do. Nil Einne (talk) 17:07, 9 December 2009 (UTC)[reply]

Is nonhuman skin color a result of melanin levels or something different?

I understand that melanin is the primary determinant of the variance in skin color among humans, but I was wondering if it is also what makes elephants, rhinoceroses, and hippopotamuses gray and gorillas black, or if these are differences of a fundamentally different type. 20.137.18.50 (talk) 17:17, 9 December 2009 (UTC)[reply]

Interestingly, skin color redirects to human skin color. From that article, there is a link to biological pigment, which discusses coloration in animals. That article has a list of the common biological pigment chemicals; exotic animals also use other biochemicals - see for example bioluminescence. Chromatophore also has lots of good information about coloration in animals like fish, amphibians, and reptiles. Mammals and birds do not have chromatophores, only melanocytes. Nimur (talk) 17:48, 9 December 2009 (UTC)[reply]

Care of surfaces while removing snow from them

The Internet has information about how to remove snow while caring for one's own health, that is, the health of whoever is doing that work. However, I am seeking information about how to remove snow while caring for the durability of artificial surfaces, such as asphalt and concrete. I am thinking of the possibility of cracks in the surface being started or enlarged by expansion and contraction caused by changes in temperature. With this in mind, is it better to clear an entire surface at one time, avoiding borderlines between cleared and uncleared parts of a surface? Is it better (when practical) to postpone snow removal until new snow has stopped falling? Where is it best to put snow which has been removed? Are grassy areas suitable? Are ditches suitable? I would like someone with expertise in the appropriate field(s) to answer these questions and any closely related ones which come to mind. (A related article is frost heaving.) -- Wavelength (talk) 17:33, 9 December 2009 (UTC)[reply]

Looking for molecules with large Huang-Rhys factor

I am looking for molecules with large Huang-Rhys factors that also absorb in the visible part of the spectrum. The Huang-Rhys factor is a measure of the displacement of the nuclear potential minimum upon electronic excitation, as described here. The result of a large factor is that in the absorption spectrum, the first overtone for a particular vibrational mode is a larger peak than the fundamental (the 0-0 pure electronic transition). I know this question is pretty obscure, but I am unsure about how to proceed with this search. mislih 17:44, 9 December 2009 (UTC)[reply]

Have you tried searching Google Scholar for huang rhys factor? The Huang-Rhys factor S(a1g) for transition-metal impurities: a microscopic insight (1992) discusses transition metal ligands and compares specific molecules. Nimur (talk) 17:55, 9 December 2009 (UTC)[reply]
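One way to make the search criterion quantitative: in the usual displaced-harmonic-oscillator model, the 0→n Franck-Condon intensities follow a Poisson distribution in the Huang-Rhys factor S, so the first overtone outgrows the 0-0 line exactly when S > 1. A sketch of that textbook relation (not specific to any particular molecule):

 fc[s_, n_] := Exp[-s] s^n/n!    (* relative 0->n line strength *)
 Table[fc[2.0, n], {n, 0, 4}]    (* S = 2: maximum away from n = 0 *)
 fc[2.0, 1] > fc[2.0, 0]         (* True - overtone beats the 0-0 peak *)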

Echoes

If I am standing in a large room and I yell, how many times does my voice echo? It typically sounds like 3 or 4 times but I imagine that's just the threshold of what I can hear. Does my voice actually echo forever? TheFutureAwaits (talk) 17:49, 9 December 2009 (UTC)[reply]

An "echo" as you are apparently interpreting it is a distinct, undistorted return of the original sound of your voice. In reality, what happens is that as the wavefront reverberates, many echoes "combine" and distort, eventually decaying in amplitude until you can not hear them (and the wavefront settles down below the ambient noise level]. See reverberation for a more thorough explanation of this effect. Depending on the size, shape, and material of the room walls, the number of "distinct" echoes can vary from zero to "too many to count." Also see Multipath interference for information about echos that bounce off of different walls and recombine. Nimur (talk) 17:58, 9 December 2009 (UTC)[reply]

I uniformly prefer white

I've noticed that nurses' uniforms are no longer white. I thought being white was important for preventing infection, for a couple of reasons:

1) Any stains are easy to spot, which hopefully means a clean uniform will be put on. Patterns are perhaps the worst in this respect, as they can disguise a soiled uniform.

2) Bleach can be used liberally when washing whites, without fear of them fading. Not so with coloreds. More bleach means fewer surviving microbes.

So, with this in mind, why have they gone away from white uniforms ? StuRat (talk) 18:23, 9 December 2009 (UTC)[reply]

Scrubs (clothing) is somewhat informative... apparently white induces eyestrain, and the colors are used to differentiate departments and to keep people from stealing them. I am sure that they are able to sterilize the clothing regardless of the color. I'm not sure any uniforms are patterned. --Mr.98 (talk) 18:34, 9 December 2009 (UTC)[reply]
I understand that the actors' nurse uniforms in the early British black-and-white TV series Emergency - Ward 10 were yellow because this appeared better on camera. Nostalgia trip starts here. Cuddlyable3 (talk) 18:54, 9 December 2009 (UTC)[reply]

moment of Big Bang

Can the moment of the Big Bang be characterized as the moment of the greatest unrest? 71.100.160.161 (talk) 18:43, 9 December 2009 (UTC) [reply]

Zombie Plan

I was reading about mad cow disease and how, if there were a stronger form of it - like a super mad cow or madder cow disease - that was transferred by blood or saliva, it would be almost like a zombie outbreak. This made me wonder... what are the chances, if any, of a virus or infection of any kind that would cause a "zombie-like" outbreak? Just a thought. —Preceding unsigned comment added by DanielTrox (talkcontribs) 18:46, 9 December 2009 (UTC)[reply]

Plan????? Title gives you away you evil mastermind Daniel Trox! 92.224.205.128 (talk) 19:25, 9 December 2009 (UTC)[reply]