Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia


Welcome to the science section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


December 9

Mathematica model of diffusion

I don't know if my problem is how I implemented the code. I am simulating diffusion of nitrogen into a metal with a constant surface concentration to contrast with an analytic solution to diffusion. The system is 20 micrometres deep (to evaluate how the concentration at 10 micrometres changes over time) -- there is no mass transfer through the end node.

 
(* Calculating the Constants *) 
Dlt = 0.144816767
Dif = 1.381 * 10^-9 
Dlx = 2 * 10^-5 
Const = Dlt*Dif /((Dlx)^2 ) 

(* Initialising the Array *) 
s = Array[0, {24859, 101} ] 
s[[1]] = Table[0, {i, 101}] 

(* Setting up constant surface concentration for All t *) 
s[[All, 1]]  = 0.0002

(* setting up general concentration-calculating algorithm for each \
position in a row t*)

c[t_, n_]  := 
 s[[t - 1, n]] + 
  Const *( s[[t - 1, n + 1]] - 2*s[[t - 1, n]] + s[[t - 1, n - 1]])  


(* Assembling a data row of iteratively - 
  calculated positions for each array row t) 
f[t_] := Table[c[t, i], {i, 2, 100}]

(* calculating the end node at the end of each row t *) 
g[t_] := s[[t - 1, 101]] - 
  2* Const * (s[[t - 1, 101]] - s[[t - 1, 100]])

For[i = 2, i < 24859, i = i + 1, s[[i, 2 ;; 100]] = f[i]; 
 s[[i, 101]] = g[i]]

(This gives me an array that I can then evaluate and present through various Manipulate[] and ListLinePlot[] functions.)

The problem is that I know from my analytical solution that the concentration at 10 micrometres is supposed to go to 1.5 * 10^-4 g/cm^3 in about an hour, but my simulation has it reach that in around a quarter of an hour. I don't think it's my constants. The activation energy per atom is 0.879 eV, and the temperature is 500 K. The temperature-independent diffusion constant (D_0) is 1 cm^2/s (hence D = 1 cm^2/s * e^(-0.879 eV / (500 K * k_B)) = 1.381 * 10^-9 cm^2/s). I'm sure I've satisfied the von Neumann stability criterion -- I'm trying to do this in about 100 steps, so dx = 20 micrometres / 100 = 2 * 10^-5 cm, and based on the stability criterion the largest possible time interval to prevent residual error buildup is approx 0.14482 seconds per "step". (Hence 24859 time nodes to make roughly an hour.)

My attack so far is to define each new cell's concentration (at a particular time t) from known cells' concentrations at time t-1, based on the concentrations at that time at the node before, at and after. (This is function c[t,n].) Then I find an entire row for that time t to feed data into the array (function f[t]), as well as calculating the end node (function g[t]). Then I have an iterative loop to calculate new rows based off of the row calculated before. I define my initial conditions (surface concentration = 2 * 10^-4 g/cm^3 + no nitrogen in the metal initially) and let it run. What's my problem? John Riemann Soong (talk) 01:42, 9 December 2009 (UTC)[reply]

Help, anyone? This is basically like Fick's laws of diffusion and stuff, but used discretely. John Riemann Soong (talk) 15:22, 9 December 2009 (UTC)[reply]
I don't see anything immediately wrong with your algorithm, although the code doesn't look very idiomatic to me (I would write
up[l_]:=Take[l,{2,-2}]+k*(Drop[l,2]+Drop[l,-2]-2*Take[l,{2,-2}])
up2[l_]:=Prepend[Append[l,l[[-2]]],c0] (* add fixed left value and mirror-symmetric right value *)
up3[l_]:=up2[up[l]]
k=0.144816767*1.381*^-9/2*^-5^2; c0=0.0002; s0=up3[Table[0,{102}]]
s=Nest[up3,s0,24859] (* or: *)
i=0; s=NestWhile[up3,s0,(++i;#[[50]]<1.5*^-4)&]
where the Nest[] chooses a fixed number of steps and the NestWhile[] waits instead for the 1.5×10^-4 to be reached). From the latter I get i=10258 (24.4 minutes). That's probably not what you get: it's not "around a quarter of an hour". Maybe you should post your analytical solution here for more detailed comparison? Perhaps your actual code too; what you've written doesn't work (surely you want Table[] instead of Array[], one comment is unterminated, and s[[-1]] is unused). I tried to fix it, and got the same result of 10258 steps. --Tardis (talk) 17:28, 9 December 2009 (UTC)[reply]
Does using Table[] make it run faster/cleaner? I also don't know where I refer to s[[-1]]. (The commenting error is a residual thing from copy/paste issues whoops.) Hold on about to post my analytic solution. John Riemann Soong (talk) 18:33, 9 December 2009 (UTC)[reply]
The problem I solved analytically was here. I know I did it correctly, because I got 10/10 for the analytic part. Basically, we know D_0 = 1 cm^2/s (a given value), T=500K, surface concentration = 0.0002 g/cm^3 (like above). I used an analytic solution to solve for activation energy, knowing that at a depth of 10 micrometres the concentration is 0.00015 g/cm^3 after 1 hour.
C(x, t) = C_s - (C_s - C_0) * erf(x / (2*sqrt(D*t))), with C_0 = 0, C_s = 0.0002 g/cm^3, x = 0.001 cm (the depth of 10 micrometres), t = 3600 s and D = 1 cm^2/s * exp(-E_a / (Boltzmann constant * 500 K)):
0.00015 g/cm^3 = 0.0002 g/cm^3 * (1 - erf(0.001 cm / (2 * sqrt(1 cm^2/s * exp(-E_a / (Boltzmann constant * 500 K)) * 3600 s))))
erf(0.001 cm / (2 * sqrt(D * 3600 s))) = 0.25
0.001 cm / (2 * sqrt(D * 3600 s)) = erfinv(0.25)
exp(-E_a / (Boltzmann constant * 500 K)) = 1 / (1 cm^2/s * (2 * erfinv(0.25) / 0.001 cm)^2 * 3600 s)
-E_a / (500 K * Boltzmann constant) = ln 1 - 2 ln(2 * erfinv(0.25) / 0.001 cm) - ln 3600 ≈ -20.41
E_a = 500 K * Boltzmann constant * 20.41 ≈ 1.41 * 10^-19 J ≈ 0.879 eV John Riemann Soong (talk) 18:54, 9 December 2009 (UTC)[reply]
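A quick numerical check of the algebra above (an editorial sketch in Python using SciPy's erfinv, not part of the original posts; the variable names are purely illustrative):
import math
from scipy.special import erfinv

C_s = 2e-4          # surface concentration, g/cm^3
C_target = 1.5e-4   # concentration reached at depth x after time t, g/cm^3
x = 1e-3            # 10 micrometres, in cm
t = 3600.0          # one hour, in seconds
D0 = 1.0            # temperature-independent prefactor, cm^2/s
kB = 8.617333e-5    # Boltzmann constant, eV/K
T = 500.0           # temperature, K

u = erfinv(1 - C_target / C_s)   # x / (2*sqrt(D*t)) = erfinv(0.25)
D = (x / (2 * u)) ** 2 / t       # D(500 K) in cm^2/s; comes out near 1.37e-9
Ea = -kB * T * math.log(D / D0)  # activation energy in eV; comes out near 0.88

print(D, Ea)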
Table[] is certainly cleaner: just evaluate Array[0,4] to see what I mean. s[[-1]] is the last element of s; I just meant that your loop stopped one short of filling your array. As far as verifying the analytical solution goes, we don't need E_a itself; D(500 K) will do, and the value I get for it differs from your reconstituted value by 0.95%, so it affects the answer just noticeably.
What is important is that you simulate to only twice the depth at which you want to test the concentration, and you have a boundary condition on the inside that significantly affects (increases) the concentration of gas in the simulation. Realize that by symmetry you are effectively simulating a very thin (40 micron) film of metal exposed to the gas on both sides, which obviously will take up gas better than a thick slab exposed only on one side (as in your analytical solution).
I don't have Mathematica available at this instant, and my reimplementation in Python apparently rounds things differently (it gets 10386 steps (instead of 10258) with your D and 10485 with mine), but if I increase the simulated depth to 100 microns (501 sample points) and use my D I get 24858 steps, which should look familiar. (With your D I get 24624 steps, which is 34 seconds too fast.) --Tardis (talk) 22:44, 10 December 2009 (UTC)[reply]
Thanks so much! My current limiting factor is computer memory 0_o. Who knew that modelling such a simple system could be so memory-consuming. How did they even do this with the first supercomputers? John Riemann Soong (talk) 21:58, 13 December 2009 (UTC)[reply]
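(Tardis's Python reimplementation is not reproduced in the thread. Purely as an illustration of the explicit finite-difference scheme being discussed, and not anyone's actual code, a sketch with the same constants and boundary conditions might look like this:)
import numpy as np

D = 1.381e-9           # diffusion coefficient, cm^2/s
dx = 2e-5              # node spacing, cm (20 micrometre slab, 101 nodes)
dt = dx**2 / (2 * D)   # ~0.1448 s, the stability limit dt <= dx^2/(2*D)
k = D * dt / dx**2     # Courant number, 0.5 here
c_surface = 2e-4       # fixed surface concentration at node 0, g/cm^3
c_target = 1.5e-4      # concentration watched at node 50 (10 micrometres)

c = np.zeros(101)
c[0] = c_surface
steps = 0
while c[50] < c_target:
    new = c.copy()
    new[1:-1] = c[1:-1] + k * (c[2:] - 2 * c[1:-1] + c[:-2])
    new[-1] = c[-1] + 2 * k * (c[-2] - c[-1])   # zero-flux (mirror) end node
    c = new
    steps += 1

print(steps, steps * dt / 60.0)   # steps taken and elapsed minutes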

Green lightning?

I'm in Milwaukee, WI, and we're getting quite a bit of snow here. I looked out of the window as I was doing my homework on my computer and I saw two bright bluish-green flashes outside (coming from the sky) within 5 secs of each other. They were accompanied by quiet vibrating sounds. I'm not in the city, so I don't think it's light pollution or anything. Any idea what this might be? 76.230.148.207 (talk) 02:35, 9 December 2009 (UTC)[reply]

It was probably a Navigation light on an airplane. Ariel. (talk) 03:19, 9 December 2009 (UTC)[reply]
No way; it was way too close and big! It was like it was coming right from the eaves of my roof! 76.230.148.207 (talk) 03:39, 9 December 2009 (UTC)[reply]
St Elmo's fire? —Preceding unsigned comment added by 75.41.110.200 (talk) 04:22, 10 December 2009 (UTC)[reply]
thundersnow? 75.41.110.200 (talk) 03:29, 9 December 2009 (UTC)[reply]
A power transformer shorting out is usually accompanied by a brilliant bluish-green flash. I guess green is from copper in the wires or terminals. Could be a transformer on a utility pole close by. --Dr Dima (talk) 05:26, 9 December 2009 (UTC)[reply]
Aren't transformer explosions usually accompanied by more than quiet vibrations (if you are close enough to see it in a snowstorm)? Unless the snow dampened the effect, which is very possible. Falconusp t c 12:17, 9 December 2009 (UTC)[reply]
Not an explosion, a short. A short makes a bright (usually blue/white - but so bright it's hard to see, plus dangerous to look at - full of UV) flash, with a loud humming sound, then a bang or a crackle. Was it very windy that day? Ariel. (talk) 12:55, 9 December 2009 (UTC)[reply]
It could have been a meteor burning up. Some meteors burn with a green light (I've seen one myself over Barnsley, Yorkshire about 10 years ago), and in a few days the Geminids will be in full flow. The one I saw also "sang" as it went overhead. --TammyMoet (talk) 14:59, 9 December 2009 (UTC)[reply]
Sound from meteors is widely reported but it is not well understood. The "meteorgenic radio-wave induced vibrating eyeglasses" theory is the most plausible of many implausible explanations. Nimur (talk) 15:26, 9 December 2009 (UTC)[reply]

Re: Science Question

When you move or crumple a sheet of paper you cause a change in a. state b. mass or weight c. position or texture or d. size or position? —Preceding unsigned comment added by 75.136.12.225 (talk) 02:36, 9 December 2009 (UTC)[reply]

I bet you do.
Please do your own homework.
Welcome to the Wikipedia Reference Desk. Your question appears to be a homework question. I apologize if this is a misinterpretation, but it is our aim here not to do people's homework for them, but to merely aid them in doing it themselves. Letting someone else do your homework does not help you learn nearly as much as doing it yourself. Please attempt to solve the problem or answer the question yourself first. If you need help with a specific part of your homework, feel free to tell us where you are stuck and ask for help. If you need help grasping the concept of a problem, by all means let us know. DMacks (talk) 02:47, 9 December 2009 (UTC)[reply]
If I were you, I'd look up all of the aforementioned Wiki articles and see for yourself (I linked them for your convenience). DRosenbach (Talk | Contribs) 03:27, 9 December 2009 (UTC)[reply]
Links added to original post by second editor removed. They included: state, mass, weight, position, texture, and size -- Scray (talk) 21:44, 9 December 2009 (UTC)[reply]
Friendly reminder: Don't edit other editors' posts, even if it's just to add wikilinks - the RefDesk guidelines are quite clear on this. -- Scray (talk) 19:24, 9 December 2009 (UTC)[reply]

Veterinary anesthesia

Does anyone know what is used for induction in veterinary anesthesia? DRosenbach (Talk | Contribs) 03:27, 9 December 2009 (UTC) Forget I even asked. DRosenbach (Talk | Contribs) 03:30, 9 December 2009 (UTC)[reply]

This is the reference desk. We can't forget you even asked. Here's your obligatory reference. Induction of Anesthesia with Diazepam-Ketamine and Midazolam-Ketamine in Greyhounds (2008). Different animals and different medical needs will require different chemicals. If you need veterinary care, see the usual reference desk medical/veterinary disclaimer. Nimur (talk) 15:11, 9 December 2009 (UTC)[reply]
If DRosenbach needs veterinary care, then our disclaimer will be one of his smaller problems! SteveBaker (talk) 22:33, 9 December 2009 (UTC)[reply]
I can only speak for myself, but I have completely forgotten that he asked. Bus stop (talk) 22:49, 9 December 2009 (UTC)[reply]
Asked what? Cuddlyable3 (talk) 21:24, 11 December 2009 (UTC)[reply]

Freezing rain affected by a lake?

As I watched the television news about the major winter storms in the Great Lakes region of the USA, I noticed that most of Lake Michigan was receiving freezing rain. Although the northern and eastern boundaries of the freezing rain area (past which it was snow) were in the middle of the lake, the southern and western boundaries (past which it was rain) followed the lake's shoreline almost exactly. Can the lake really affect the type of precipitation, or is this more likely an error with the Doppler radar? Nyttend (talk) 04:07, 9 December 2009 (UTC)[reply]

Maybe these articles will answer your question: Lake effect snow, Great Salt Lake effect. Ariel. (talk) 07:24, 9 December 2009 (UTC)[reply]
Freezing rain is critically dependent on temperature, and large lakes certainly affect the temperature noticeably. That said, it doesn't make sense to say that the boundary of the freezing-rain area was in the middle of Lake Michigan. Freezing rain is possible only when the rain falls onto ground that is below the freezing point, not onto liquid water in a lake! (It would be different if the lake was frozen over, of course.) --Anonymous, 09:35 UTC, December 8, 2009.
Freezing rain is possible on a boat of course, and can cause a lot of problems. Looie496 (talk) 16:02, 9 December 2009 (UTC)[reply]
Ah, good point! --Anon, 21:22 UTC, December 9, 2009.

Rhodonite oxidation

Rhodonite, the pink/red coloured gem material, will oxidise on the surface. This may take a couple of days to a couple of years. Not sure why there is a big time difference, but that is another topic. Polished rhodonite does not oxidise. I wish to find out the best way to prevent oxidation of unpolished rhodonite. I am using some (a 15 kg piece) as a memorial stone and do not want a pink rock turning black in the future. I don't want to use epoxy coatings, or polish it. At this time I'm considering an oil coating, such as vegetable oil or new mineral oil, to prevent the air contact that causes oxidation. Yarraford (talk) 04:19, 9 December 2009 (UTC)[reply]

Are you sure? I don't think Rhodonite can oxidize - it's already fully oxidized. The different colors are from different minerals in it. Maybe it turns black for some other reason? (If in fact it does turn black - you should double check.) Ariel. (talk) 07:27, 9 December 2009 (UTC)[reply]
de-WP states that black streaks are from MnO2. --Ayacop (talk) 15:00, 9 December 2009 (UTC)[reply]

The info re oxidation came directly from the miner, while I was at his mine. Tamworth, NSW Australia. He took me to a spot and the rocks were black. No rhodonite in sight. He said, this is the best rhodonite, fine grained and dark coloured. On breaking these rocks with a hammer, what was revealed was pure pink rhodonite without any of the typical black banding often seen in rhodonite. He said this must be polished immediately to stop colour change to black which will happen in days. Maybe some other process is happening, I don't know. Coarser-grained rhodonite intersected with black banding about 40 metres distant in the same seam was also evidently turning black at a much slower rate as different stages of the change could be seen on the rock. —Preceding unsigned comment added by Yarraford (talkcontribs) 00:34, 11 December 2009 (UTC)[reply]

It would be some sort of weathering process. Some materials are more soluble and wash away with water. Over the long term silica will dissolve, and so will ions like sodium and potassium, leaving behind MnO2 or iron oxides, which are dark in colour. Graeme Bartlett (talk) 00:56, 11 December 2009 (UTC)[reply]

Wind power from tightly stretched band

The other night on TV (Canada, West coast) I saw a company who was using a principle of vibration (flapping, sorta) from a tightly stretched band with magnets and coils and air from a desk fan blowing across it. They didn't, as far as I could tell, have a "production" level product. I'm trying to figure out what scientific/physical principle this was using, and if possible who this was. Help? --Kickstart70TC 05:11, 9 December 2009 (UTC)[reply]

Oops...found it: Windbelt, which is a horrendous article, FWIW. --Kickstart70TC 05:25, 9 December 2009 (UTC)[reply]
The third link to the YouTube video, assuming it's the same one (an interview with the developer) I saw last year when researching this for a lecture, will tell you all you really need to know. 218.25.32.210 (talk) 06:31, 9 December 2009 (UTC)[reply]

DNA data bases

Are DNA databases good enough to allow someone to state in their will that they want to leave their estate to the person(s) whose DNA is the closest match to their own as opposed to leaving their estate to the person(s) with the greatest legal status? 71.100.160.161 (talk) 06:02, 9 December 2009 (UTC) [reply]

That's more a question of what (local) law allows than a question about quality of databases. Clearly regardless of the quality (or really, size) of the database, one could always define "closest" in such a way that there's an heir; the question is will the law allow such a capricious distribution of an estate. If there are heirs otherwise entitled to inherit, such a provision would certainly result in prolonged legal battles and make many lawyers and few heirs rich. - Nunh-huh 06:12, 9 December 2009 (UTC)[reply]
The person whose DNA was most similar to theirs would undoubtedly be a close family member, so a huge database is really not required; just sequence/genotype siblings and children. BTW, only very few people have had their DNA sequenced for a genome. The commercial companies only sequence small, variable regions or look for SNPs on a chip. A few now offer full genomes, but still generally that's really only about 90% of a genome. Aaadddaaammm (talk) 08:30, 9 December 2009 (UTC)[reply]
The idea of the OP was possibly to find unknown relatives that are not part of the family -- which wouldn't be allowed if one did it just for the sake of knowledge, but could be in case of heritage. --Ayacop (talk) 14:51, 9 December 2009 (UTC)[reply]
Well, possibly. If so, he/she should have a look at 23andMe, which can identify relatives and classify them with regard to closeness of relationship (e.g., 4th cousin). This is because the genetic testing at that database includes autosomal markers as well as mtDNA and Y-DNA markers. - Nunh-huh 00:03, 10 December 2009 (UTC)[reply]

Could the Hubble Space Telescope have imaged damage to the Space Shuttle Columbia?

The Columbia Accident Investigation Board report discusses multiple requests for DoD imagery (both ground-based and space-based) submitted by engineers who were concerned about possible damage from the foam strike during Space Shuttle Columbia's final launch. (The requests were quashed by NASA management who erroneously believed both that the strike was unlikely to have caused significant damage, and that there was nothing that could be done to help if significant damage had occurred.) Could the Hubble Space Telescope have imaged the orbiter? (Potential problems could include orbital alignment, focus, exposure times, and tracking ability.) Has the HST ever imaged an artifact in earth orbit? -- 58.147.52.66 (talk) 08:25, 9 December 2009 (UTC)[reply]

Ignoring everything else, focus would be a problem. An astronomical telescope is not constructed to focus on objects closer than "infinity"; therefore it cannot resolve details smaller than its main mirror (2.4 m for Hubble) at any distance. Even apart from this, the best-resolving camera aboard Hubble has a resolution of 40 pixels per arcsecond. Imaging the orbiter from a distance of 1000 km (which would be a rather lucky break, and probably demand faster tracking than the on-board software is written to provide), this translates to some 12 cm per pixel. A hole of the size estimated by the CAIB would not have been visible on so fuzzy an image. –Henning Makholm (talk) 09:13, 9 December 2009 (UTC)[reply]
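(As a rough check of those per-pixel numbers, not part of the original post: 1/40 arcsecond is (1/40) * (pi / 648000) rad ≈ 1.2 * 10^-7 rad, and 1.2 * 10^-7 rad * 1000 km = 1.2 * 10^-7 * 10^6 m ≈ 0.12 m, i.e. roughly 12 cm per pixel.)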
Here is some data on the HST's tracking capabilities in the context of observing the Moon. It is a few orders of magnitude too slow to follow any object at or below its own height, which would be moving at orbital speed and be at most several thousand kilometers away, or would be behind the horizon. –Henning Makholm (talk) 17:34, 9 December 2009 (UTC)[reply]
All very true. There were, however, other cameras in orbit (some military satellites) that could have photographed the shuttle and resolved the damage - and those devices have indeed been used for this purpose subsequently. We were not provided with details because these are secret spy satellites - but they could do the job because they are designed to focus at distances comparable to their orbital height and resolve down to centimeters. SteveBaker (talk) 13:42, 9 December 2009 (UTC)[reply]
This LIDAR image from the Air Force Starfire Optical Range shows Columbia at a range of probably under 100 km. I doubt a spacecraft could have done better, or could plausibly have been at closer range. Nimur (talk) 15:37, 9 December 2009 (UTC)[reply]

An astronomical telescope is not constructed to focus on objects closer than "infinity"; therefore it cannot resolve details smaller than its main mirror (2.4 m for Hubble) at any distance.

I cannot understand the reasoning here. Focussing to infinity means that the light coming from one direction is focussed at one point on the sensor and light from another direction is directed to another specific point. Assuming we can make arbitrarily small pixels, I cannot see where the size of the mirror enters in the ray optics approximation. In wave optics, diffraction on the aperture (effectively the mirror) really puts a limit quantified somewhat by the Rayleigh criterion. However, the resolution limits are never rigid (even quantum indeterminacy is correctly expressed by variances, which are natural, but still arbitrary measures of the widths of statistical distributions) and can often be improved by deconvolution, which is really frequently used for Hubble.  Pt (T) 21:18, 9 December 2009 (UTC)[reply]
Focusing on infinity means that a set of parallel light rays that hit the mirror will end up on the same point (pixel) on the detector plate. Follow those rays back to the object being imaged -- they'll have been emitted everywhere on a section of the object that has the same shape and size as the mirror. Conversely any single point of the object emits rays towards every part of the mirror; but those rays must have slightly different directions (if not they'd all hit the same spot on the mirror), and therefore they're going to end up at different points on the detector. This is independent of how fine the detector's pixels are.
Usually this is not a relevant effect in astronomy, because the things one wants to observe (stars, planets) are much, much larger than the aperture size anyway. –Henning Makholm (talk) 23:09, 9 December 2009 (UTC)[reply]
Thank you, this clarifies matters for me! Nevertheless, while the point spread function of a point at distance d would thus be (approximately) a circle with such a diameter that it exactly covers the image of an object at distance d with the same diameter as the mirror (am I correct here?), (most of) the information about the original object would still be encoded in the blurred image. If the shape of the PSF didn't depend on the observation angle w.r.t. the mirror, we could simply deconvolute the blurred image with the PSF. And even if it does change with angle, but in a way we know or have calculated in advance, the "software refocussing" is still doable by solving a linear Fredholm integral equation of the first kind, which can be easily done using a Fourier transform or some numerical linear algebra (after discretizing, you'll have a system of linear equations). Ah, if the world were actually so ideal... In reality, the PSF is sensitive to every little detail in the optics and the sensor and we either don't know those contributions exactly enough or we make errors in image acquisition, so that the resolution is still limited. But that limit is more a matter of engineering, not fundamental physics!  Pt (T) 01:03, 10 December 2009 (UTC) and 01:16, 10 December 2009 (UTC)[reply]
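(As an aside, not from the original discussion: the Fourier-domain deconvolution described above can be sketched in a few lines of Python. The signal, PSF and regularization constant below are arbitrary illustrations, not anything measured.)
import numpy as np

n = 256
obj = np.zeros(n)
obj[100:110] = 1.0                     # a simple 1-D "object"
psf = np.zeros(n)
psf[:9] = 1.0 / 9.0                    # a 9-sample box point spread function

blurred = np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(psf)))
blurred += 1e-3 * np.random.randn(n)   # a little measurement noise

H = np.fft.fft(psf)
eps = 1e-2                             # regularization; tune against the noise level
estimate = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(H) / (np.abs(H)**2 + eps)))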

IPCC models

The CO2 in the atmosphere constantly interacts with the earth/plants and the sea. So an increase in atmospheric CO2 leads to increase CO2 in the oceans. How do the IPCC climate models allow for this effect?--Samweller1 (talk) 12:51, 9 December 2009 (UTC)[reply]

As I understand it (and I don't do so very well), normal global climate models do not model the carbon cycle, i.e. the change in atmospheric CO2 is provided as an input, based on assumptions about human emissions and estimated other carbon sources and sinks. CO2 in the ocean has essentially no direct influence on the climate. The main effect is that atmospheric concentrations are lower than they would otherwise be. Understanding more indirect effects (e.g. the limits of the ocean's ability to act as a sink, or the influence on oceanic food chains) is ongoing work, and effects are modeled independently. Our article on transient climate simulation might also be of interest. --Stephan Schulz (talk) 13:03, 9 December 2009 (UTC)[reply]

if global warming is a problem why don't we put up a thermostat

why don't we just put a sliver of something reflective in orbit around the sun in lockstep with Earth, but closer to the sun (that way you don't need a lot of this thing) and then adjust it to block as much/little light as we need for optimum temperature/to counteract any global warming occurring? note: this is not a request for medical advice. And saying that saying this is not a request for medical advice does not not make it a request for medical advice does not make it a request for medical advice. 92.230.65.75 (talk) 13:49, 9 December 2009 (UTC)[reply]

We have an article on this: Space sunshade- Fribbler (talk) 13:54, 9 December 2009 (UTC)[reply]
The idea has been proposed, but you can't put something in "lockstep with Earth, but closer to the sun" - orbital period is determined by the size of the orbit. The only real option is L1, which isn't that much closer to the Sun than the Earth. That means it has to be very big, which makes it very difficult and expensive to make. --Tango (talk) 13:57, 9 December 2009 (UTC)[reply]
Sounds to me like space elevator inventor Jerome Pearson's suggestion of forming a ring around the Earth.[1] Nanonic (talk) 14:00, 9 December 2009 (UTC)[reply]
A ring around the Earth, rather than just at L1, is an option, but probably not a good one. It would probably have to be bigger overall and it would get in the way of near-Earth space travel. --Tango (talk) 14:04, 9 December 2009 (UTC)[reply]
There are several problems: technical, political, economic and ecological. Technically, we don't know how to build such a thing right now. Mathematically, since the sun is larger than the Earth, the closer you move it to the sun, the less sunlight it would block. So the best you can do is indeed to put it into orbit or at L1 (which is unstable). Politically, whom would you trust to control it? Assuming it's me, having Austin, Texas, in perpetual darkness might be a nice idea, but what if I get bored with that and shadow (or, better, light) something else? Economically, it's likely to be much more expensive to build and maintain than it would be to fix our CO2 habit here on Earth. And ecologically, we would receive not only less energy, but less light. Nobody knows what effect that would have. And it would still cause significant local climate change, as not only the total energy budget, but also local distribution of energy is affected by greenhouse gases. We do not know enough to predict the overall effects of such a thing, even assuming it technically works flawlessly. --Stephan Schulz (talk) 14:15, 9 December 2009 (UTC)[reply]

couldn't it remain in lockstep with Earth, but closer to the sun by expending energy (as opposed to just passively orbiting on its inertial momentum) -- if it were much closer to the sun it could be much, much smaller... Also: couldn't it get some of the energy just mentioned directly from the sun? Is there a way to turn solar energy into thrust in space? Thanks. Still not asking for medical advice, by the way. 92.230.65.75 (talk) 14:20, 9 December 2009 (UTC)[reply]


Oh. I just read the second comment, mentioning that as you get closer to the sun you block less and less light from Earth. Like some bad math joke, my logic went: assume the sun is a point-source... 92.230.65.75 (talk) 14:22, 9 December 2009 (UTC)[reply]

Assume a spherical cow... Fences&Windows 14:24, 9 December 2009 (UTC)[reply]
Wikilinked, just because we have an article on everything (EC x 4!!) -- Coneslayer (talk) 14:30, 9 December 2009 (UTC)[reply]
...and here is my ec'ed comment, about half of which is still relevant ;-): As pointed out above, since the sun is larger than the Earth, the farther you move something towards the sun, the less light it will block. Note that solar shadows (as opposed to shadows cast by an approximate point source) do not get larger as the distance between object and screen increases. They just get more diffuse until they vanish. Apart from that, you can use solar panels and an ion drive for station keeping (but you still need reaction mass), or possibly use solar sails, although this will be far from trivial to figure out and will certainly need active control. --Stephan Schulz (talk) 14:28, 9 December 2009 (UTC)[reply]
Scientists now believe that the principle whose name is derived from the Latin para- "defense against" (from verb parere "to ward off") + sole "sun" can be implemented to create human-deployable collapsible sources of shade. Wikipedia has an article on parasol technology. Cuddlyable3 (talk) 18:34, 9 December 2009 (UTC)[reply]
@Stephan Schulz: I think your distance argument is wrong. The shadow cast by an object would indeed become smaller when it is moved closer to the Sun, but the radiation it receives does increase: The closer an object of a given size moves to the Sun, the more radiation it will get (since radiation density decreases with the square of the distance from the Sun). An object at 1/109 AU from Earth (or less, e.g. the Moon's distance) will thus get more radiation than the same object closer to the Earth, and if placed directly on the line connecting Sun's and Earth's centres all of the radiation it gets would otherwise reach the Earth (because 109 = diameter of Sun / diameter of Earth; use the Intercept theorem). There wouldn't be any point on Earth experiencing a Solar eclipse by it, but the whole Earth would get a higher reduction of radiation. An interesting question would be what happens when one increases the distance above 1/109 AU - what is the optimal distance for a Sun shade?--Roentgenium111 (talk) 13:05, 10 December 2009 (UTC)[reply]
If we make the assumptions that the Sun and Earth are two flat disks with radii R and r separated by a much larger distance D (and that the Sun's disk is equally luminous everywhere), and place a third disk with radius ρ at a distance d from Earth, I calculate that the light denied Earth is proportional to an integral of A(s), where A(s) is the overlap between two circles whose centers are separated by s. (The formula at MathWorld assumes that the circles do intersect and that neither circle contains the other, but it seems that its error otherwise is purely imaginary.) The integrand is 0 once the circles no longer overlap, which may be useful for numerical integration.
Doing that integration for small ρ (I used 1 km) suggests that your argument about similar triangles is the right idea: the radiation reduction increases until the shade reaches the point where the Sun's light focused onto each point of it would illuminate the whole Earth if the shade were absent, and then falls off rapidly as the shade continues toward the sun. This makes sense: once all of the Earth sees the disk as a subset of the Sun's disk, you're blocking as much light as you can. Moving the shade further from Earth reduces the amount of the Sun it blocks at each point, and moving it closer to Earth causes the areas in twilight to see part of the shade uselessly occluding the sky beside the Sun. The only non-trivial thing to discover is that bringing it towards Earth loses in the twilight regions more than it gains by occluding more of the Sun in the center. The weird thing is that I see a local minimum around d=179 Mm; I don't know if I trust the numerics for d that small, though. The limit for small d should be about 0.5% smaller than that local minimum.
With larger shades, the optimum is closer to Earth: 1.154 Gm (instead of 1.355 Gm) for ρ=1 Mm, 945 Mm for twice that, and 738 Mm for twice it again. This also makes sense: as the disk approaches Earth's size, the optimal strategy is to set it on Earth like a lampshade. --Tardis (talk) 20:10, 10 December 2009 (UTC)[reply]
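(As an aside, not part of the original posts: one way a calculation of this general kind could be set up is sketched below in Python. It uses a small-angle approximation rather than Tardis's exact integral, so its details and numbers need not match the figures quoted above.)
import numpy as np

def overlap(s, r1, r2):
    # Area of intersection of two disks with radii r1 and r2 whose centres are s apart.
    if s >= r1 + r2:
        return 0.0
    if s <= abs(r1 - r2):
        return np.pi * min(r1, r2) ** 2
    a1 = r1**2 * np.arccos((s**2 + r1**2 - r2**2) / (2 * s * r1))
    a2 = r2**2 * np.arccos((s**2 + r2**2 - r1**2) / (2 * s * r2))
    a3 = 0.5 * np.sqrt((-s + r1 + r2) * (s + r1 - r2) * (s - r1 + r2) * (s + r1 + r2))
    return a1 + a2 - a3

def blocked(d, rho, R=6.96e8, r=6.371e6, D=1.496e11):
    # Small-angle sketch: for a point at radial offset y on Earth's disk, the shade's
    # centre appears offset roughly y*(1/d - 1/D) from the Sun's centre; integrate the
    # angular overlap of the Sun (radius R/D) and the shade (radius rho/d) over the disk.
    ys = np.linspace(0.0, r, 400)
    frac = np.array([overlap(y * (1.0/d - 1.0/D), rho/d, R/D) for y in ys])
    return np.trapz(frac * 2 * np.pi * ys, ys)

Scanning blocked(d, rho) over d for a small shade should show the qualitative behaviour described above (a maximum somewhere around a gigametre), though the exact optimum depends on the approximations used.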
Because it currently costs on the order of $5,000 per lb to get things into low earth orbit. Probably a lot cheaper to convert all our current power plants to solar and wind power plants for the amount it would cost to put up a lasting shade of a size large enough to accomplish the required amount of solar energy deflection, not to mention that building the shade would be a monumental engineering project that would make building the pyramids seem trivial. Googlemeister (talk) 20:34, 10 December 2009 (UTC)[reply]
Worth noting is that our article on a space sunshade describes the cost as 'in excess of' 5 trillion USD — and that assumes the successful development of a suitable rail- or coil-gun technology to carry out the launches. (And heaven only knows how much in excess the 'in excess' actually would turn out to be....) TenOfAllTrades(talk) 20:49, 10 December 2009 (UTC)[reply]

Health clinic in Sevilla, Spain

where can I find a health clinic in Sevilla, Spain that deals in STD's? Thanks —Preceding unsigned comment added by 80.58.205.49 (talk) 15:46, 9 December 2009 (UTC)[reply]

It appears there may have once been an STD clinic/diagnostic centre at the University of Seville School of Medicine but I don't know if it still exists. If you speak Spanish perhaps you can work it out from their website [2]. I can't offer much more help, perhaps someone else could, except to say you should be able to go to any Sevilla general practitioner (according to our article, in Spain probably based at a primary care centre) and they'll be able to direct you to an appropriate clinic if it's not something they can deal with themselves, while protecting your confidentiality & privacy as they should always do. Nil Einne (talk) 17:07, 9 December 2009 (UTC)[reply]

Is nonhuman skin color a result of melanin levels or something different?

I understand that melanin is the primary determinant of the variance in skin color among humans, but I was wondering if it is also what makes elephants, rhinoceroses, and hippopotamuses gray and gorillas black, or if these are differences of a fundamentally different type. 20.137.18.50 (talk) 17:17, 9 December 2009 (UTC)[reply]

Interestingly, skin color redirects to human skin color. From that article, there is a link to biological pigment which discusses coloration in animals. The article has a list of biological chemicals that are common; exotic animals also have other biochemicals, see for example bioluminescence. Chromatophore also has lots of good information about coloration in animals like fish, amphibians, and reptiles. Mammals and birds do not have chromatophores, only melanocytes. Nimur (talk) 17:48, 9 December 2009 (UTC)[reply]
There may be additional development but the pathway for melanin synthesis exists in all living things. For example, the browning of an apple when cut uses some of the same enzymes. --Ayacop (talk) 19:47, 9 December 2009 (UTC)[reply]
PS: No, stop, I was wrong: the dopachrome tautomerase (DCT) enzyme only developed with the chordata, so the forking of the pathway is an animal thing, i.e., DCT is only one way to get melanin in animals, while plants absolutely need tyrosinase/polyphenol oxidase for that. --Ayacop (talk) 19:58, 9 December 2009 (UTC)[reply]

Caring for surfaces while removing snow from them

The Internet has information about how to remove snow while caring for one's own health, that is, the health of whoever is doing that work. However, I am seeking information about how to remove snow while caring for the durability of artificial surfaces, such as asphalt and concrete. I am thinking of the possibility of cracks in the surface being started or enlarged by expansion and contraction caused by changes in temperature. With this in mind, is it better to clear an entire surface at one time, avoiding borderlines between cleared and uncleared parts of a surface? Is it better (when practical) to postpone snow removal until new snow has stopped falling? Where is it best to put snow which has been removed? Are grassy areas suitable? Are ditches suitable? I would like someone with expertise in the appropriate field(s) to answer these questions and any closely related ones which come to mind. (A related article is frost heaving.) -- Wavelength (talk) 17:33, 9 December 2009 (UTC)[reply]

When or in what way you remove snow shouldn't affect whether cracking appears in the surface. Cracks appear in asphalt and concrete primarily because of thermal expansion (or contraction), but removing the snow should not have a significant effect on the temperature of the surface. While snow is a good insulator (that's why igloos work), the fact that there is snow accumulated on the surface means that it is already cold enough to not melt snow that falls on it. So, removing the snow will only expose the surface to air that is approximately the same temperature as the snow. Removing the snow all at once when it's done snowing is mainly a practical matter--who wants to go out and shovel twice for the same snowstorm? It's perfectly fine to pile snow on grassy areas, as long as you're OK with the pile being there longer than the rest of the snow that fell naturally. Mildly MadTC 20:40, 9 December 2009 (UTC)[reply]
Thank you for your answer. I am correcting the grammar of the heading. -- Wavelength (talk) 21:14, 11 December 2009 (UTC)[reply]

Looking for molecules with large Huang-Rhys factor

I am looking for molecules with large Huang-Rhys factors that also absorb in the visible part of the spectrum. The Huang-Rhys factor is a measure of the displacement of the nuclear potential minimum upon electronic excitation, as described here. The result of this would be that in the absorption spectrum, the first overtone for a particular vibrational mode is a larger peak than the fundamental (the 0-0 pure electronic transition). I know this question is pretty obscure, but I am unsure about how to proceed with this search. mislih 17:44, 9 December 2009 (UTC)[reply]

Have you tried searching Google Scholar for huang rhys factor? The Huang-Rhys factor S(a1g) for transition-metal impurities: a microscopic insight (1992), discusses transition metal ligands and compares specific molecules. Nimur (talk) 17:55, 9 December 2009 (UTC)[reply]

Echoes

If I am standing in a large room and I yell, how many times does my voice echo? It typically sounds like 3 or 4 times but I imagine that's just the threshold of what I can hear. Does my voice actually echo forever? TheFutureAwaits (talk) 17:49, 9 December 2009 (UTC)[reply]

An "echo" as you are apparently interpreting it is a distinct, undistorted return of the original sound of your voice. In reality, what happens is that as the wavefront reverberates, many echoes "combine" and distort, eventually decaying in amplitude until you can not hear them (and the wavefront settles down below the ambient noise level]. See reverberation for a more thorough explanation of this effect. Depending on the size, shape, and material of the room walls, the number of "distinct" echoes can vary from zero to "too many to count." Also see Multipath interference for information about echos that bounce off of different walls and recombine. Nimur (talk) 17:58, 9 December 2009 (UTC)[reply]

I uniformly prefer white

I've noticed that nurses' uniforms are no longer white. I thought being white was important for preventing infection for a couple of reasons:

1) Any stains are easy to spot, which hopefully means a clean uniform will be put on. Patterns are perhaps the worst, in this respect, as they can disguise a soiled uniform.

2) Bleach can be used liberally when washing whites, without fear of them fading. Not so with coloreds. More bleach means fewer surviving microbes.

So, with this in mind, why have they gone away from white uniforms ? StuRat (talk) 18:23, 9 December 2009 (UTC)[reply]

Scrubs (clothing) is somewhat informative... apparently white induces eyestrain, and the colors are used to differentiate departments and to keep people from stealing them. I am sure that they are able to sterilize the clothing regardless of the color. I'm not sure any uniforms are patterned. --Mr.98 (talk) 18:34, 9 December 2009 (UTC)[reply]
I understand that the actors' nurse uniforms in the early British black-and-white TV series Emergency - Ward 10 were yellow because this appeared better on camera. Nostalgiatrip starts here. Cuddlyable3 (talk) 18:54, 9 December 2009 (UTC)[reply]
The old nursing auxiliary uniforms in the UK used to be a sort of beige check. Yuk!--TammyMoet (talk) 19:30, 9 December 2009 (UTC)[reply]
Some scrubs are patterned. I think those sometimes worn in paediatrics, in particular. --Tango (talk) 19:51, 9 December 2009 (UTC)[reply]
Nurses still wear white tunics in the UK, with some wards wearing blue or green scrubs instead. One of the drawbacks with white is that whilst it will show coloured stains such as blood, it won't show clear fluid stains which are easily observed on blue or green clothing. Nanonic (talk) 19:50, 9 December 2009 (UTC)[reply]
And there's always color-safe bleach. DRosenbach (Talk | Contribs) 00:56, 10 December 2009 (UTC)[reply]
I think it is fashion more than anything else. Fashion in this case is not individual but generally held concepts by medical institutions and apparel suppliers. White is consistent with outmoded concepts, largely concerned with how the individual is perceived in society. The colors and patterns are probably an expression of the pluralistic society that is now embraced by most establishments. I think it is a good question. I think it goes to the heart of fashion megatrends. Bus stop (talk) 01:15, 10 December 2009 (UTC)[reply]
Different roles are shown in the UK by different colours. These colours are decided by the individual hospitals rather than the NHS, so I can't give a definitive answer as to what colour means which role. --TammyMoet (talk) 18:38, 10 December 2009 (UTC)[reply]
Where I'm from most doctors in the OR wear green scrubs, making blood appear close to black, though I don't know if this is the intention. 219.102.221.182 (talk) 05:17, 11 December 2009 (UTC)[reply]

moment of Big Bang

Can the moment of the Big Bang be characterized as the moment of the greatest unrest? 71.100.160.161 (talk) 18:43, 9 December 2009 (UTC) [reply]

"Unrest" doesn't have a well-defined scientific meaning. Do you interpret entropy to mean unrest? In that case, the answer is no, the universe had less entropy during its early stages than it will in its later stages, because of the second law of thermodynamics. Nimur (talk) 19:30, 9 December 2009 (UTC)[reply]
Except that at the moment of the BB, the laws of physics all had to be different. Otherwise the universe would have just collapsed into the grand-daddy of all black holes, and that would have been that. StuRat (talk) 19:35, 9 December 2009 (UTC)[reply]
I defer to one of the more expert physicists on the reference desk to clarify current scientific thought on the validity of thermodynamic laws during the early big bang. My understanding was that these were always valid. Nimur (talk) 19:43, 9 December 2009 (UTC)[reply]
The laws of physics are valid at any positive time after the Big Bang. We don't have any laws of physics to describe the Big Bang itself. Naive extrapolation says the universe was infinitely dense at the moment of the Big Bang, which most likely means we can't be that naive. --Tango (talk) 20:11, 9 December 2009 (UTC)[reply]
Right, but as I understand it, even changing parameters of fundamental forces, or unifying them, or changing symmetry relationships, do not change fundamental thermodynamic properties in a quantum mechanics treatment. Nimur (talk) 20:36, 9 December 2009 (UTC)[reply]
There is only one parameter which affects the 2nd law, as far as I know - initial entropy. The mathematical derivation of the 2nd law is time reversal symmetric, so entropy ought to increase both towards the future and the past (which is rather difficult, since it would seem to make the present special). It is the very low entropy at the beginning that causes it to increase towards the future. So if you change the initial entropy, you change the 2nd law (and all the arrows of time that follow from it). The other parameters shouldn't make any difference, the 2nd law is a pretty elementary mathematical theorem. --Tango (talk) 22:10, 9 December 2009 (UTC)[reply]

Zombie Plan

I was reading about mad cow disease and how, if there was a stronger form of it, like a super mad cow or madder cow disease, that was transferred by blood or saliva, it would be almost like a zombie outbreak. This made me wander.... what are the chances, if any, of a virus or infection of any kind that would cause a "zombie-like" outbreak? Just a thought. —Preceding unsigned comment added by DanielTrox (talkcontribs) 18:46, 9 December 2009 (UTC)[reply]

Plan????? Your title gives you away you evil mastermind Daniel Trox! 92.224.205.128 (talk) 19:25, 9 December 2009 (UTC)[reply]
Venereal disease? Sufferers may not like to be called zombies. Nimur (talk) 19:25, 9 December 2009 (UTC)[reply]
Would Kuru (disease) fit the bill here? --TammyMoet (talk) 19:29, 9 December 2009 (UTC)[reply]
I wonder where you wandered, to "psychotic cow disease", perhaps ? But anyway, I believe rabies can be spread directly from human to human, if you can just convince them to bite each other. StuRat (talk) 19:32, 9 December 2009 (UTC)[reply]
That would have been my vote. Rabies is usually mentioned as among the most zombie-like diseases - it affects the brain, often causing mania and increased agitation, and can increase saliva production while eliminating the ability to speak. ~ Amory (utc) 19:50, 9 December 2009 (UTC)[reply]
And to answer the specific question, the chances would be pretty low. Anything zombie-like would kill the infected too quickly while being too obvious, allowing the uninfected to take necessary precautions. The only effective spread (for rabies anyway) seems to be through the various animal reservoirs which, aside from bats, is usually pretty obvious. ~ Amory (utc) 19:57, 9 December 2009 (UTC)[reply]
For me the defining characteristic of a zombie is something that is tenacious (maybe to the point of being manic) AND can only be killed by dismemberment or other severe injury, making them formidable foes. If you are afraid of salivating, nonsensical humans that are agitated and manic then your worst nightmare might be something called Ozzfest... --66.195.232.121 (talk) 21:50, 9 December 2009 (UTC)[reply]
There is a human analog of Mad Cow - and a bunch of people in the UK caught it by eating infected meat products. It's called Creutzfeldt–Jakob disease (CJD for short). Our article says "The first symptom of CJD is rapidly progressive dementia, leading to memory loss, personality changes and hallucinations. This is accompanied by physical problems such as speech impairment, jerky movements (myoclonus), balance and coordination dysfunction (ataxia), changes in gait, rigid posture, and seizures. The duration of the disease varies greatly, but sporadic (non-inherited) CJD can be fatal within months or even weeks (Johnson, 1998). In some people, the symptoms can continue for years." - so not really Zombieism per-se. It doesn't spread human-to-human very well - unless there are cannibals around - but eating brains certainly would be a reasonable cause. SteveBaker (talk) 22:24, 9 December 2009 (UTC)[reply]

Well, see, that's what I was getting at: that mad cow disease would make someone seem almost zombie-like, and if it altered to make people extremely aggressive and transfer through blood or saliva, I think it could make an "infected"-like epidemic --Talk Shugoːː 18:39, 10 December 2009 (UTC)[reply]

Smallpox eradication

In 1979 WHO declared the complete eradication of smallpox, but I caught it while in kindergarten (late 1980s) and infected my sister in early 1990s, who had blister traces for several years. How it could be? 85.132.99.18 (talk) 19:52, 9 December 2009 (UTC)[reply]

You did not catch smallpox while in kindergarten. Perhaps you had some other disease such as chickenpox. Algebraist 19:55, 9 December 2009 (UTC)[reply]
If you had smallpox and your doctor ever saw or treated you for it, then it would have been a major international incident. Nimur (talk) 20:37, 9 December 2009 (UTC)[reply]
And listed in our article Smallpox#Post-eradication, which currently says the last known cases were among researchers in 1978. Nil Einne (talk) 20:40, 9 December 2009 (UTC)[reply]
You're almost certainly thinking of chickenpox. If you had gotten Smallpox in 1989 it would have been in newspapers worldwide. APL (talk) 21:25, 9 December 2009 (UTC)[reply]

What are wastes of different industries and what are their usages?

What are wastes of different industries and what are their usages?

Examples are:

From rice mills we get rice husk as waste. We can use that husk to produce energy in a biomass plant or we can use that husk to feed animals. —Preceding unsigned comment added by Anirbannaskar (talkcontribs) 20:06, 9 December 2009 (UTC)[reply]

This sounds like a homework question, which we won't help with. You need to do your own work if you are going to learn anything from it. --Tango (talk) 20:20, 9 December 2009 (UTC)[reply]

Compare energy released by automobiles vs. nuclear warheads

Most of the information used here comes from WP articles. Is the conclusion correct?

One W89 nuclear warhead has a yield of approximately 475 kilotons of TNT.
475 kt converts to approximately 1987 terajoules of energy.
One gallon of gasoline contains approximately 132 Megajoules of energy.
So, 15,053,030 gallons of gasoline contain the energy released by a single W89 nuclear warhead.
Americans alone drive 2,208 billion miles per year (per Dept of Transportation).
At 20 MPG, that is 110.4 billion gallons of gasoline converted to energy.

Thus, American driving alone releases the energy equivalent of 7,334 modern nuclear warheads annually. —Preceding unsigned comment added by Alfrodull (talkcontribs) 20:47, 9 December 2009 (UTC)[reply]
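A quick arithmetic check of those figures (an editorial aside in Python, using 4.184 TJ per kiloton of TNT and the inputs above):
kt_TNT = 4.184e12                  # joules per kiloton of TNT
warhead_J = 475 * kt_TNT           # ~1.99e15 J, i.e. ~1987 TJ
gallon_J = 132e6                   # joules per gallon of gasoline
gallons_per_warhead = warhead_J / gallon_J      # ~1.5e7 gallons
gallons_per_year = 2208e9 / 20.0                # 2,208 billion miles at 20 mpg
print(gallons_per_year / gallons_per_warhead)   # ~7.3e3 warhead-equivalents per year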

I haven't checked your numbers or arithmetic (you can double check that yourself), but the conclusion is certainly plausible. There is a very big difference between energy released over a lot of time and space and energy released in an instant in one place. --Tango (talk) 20:51, 9 December 2009 (UTC)[reply]
Also, the energy released in a nuclear explosion is largely thermal energy (heating the air and the solid objects in the target area), and kinetic (moving huge quantities of air, debris) and potential energy, in the form of deforming and destroying the target; and nuclear, in the form of irradiating energy both in the form of a quick blast ("pulse", commonly called an EMP as the liberated nuclear energy takes electromagnetic form through a variety of processes), and in the form of long-lasting decaying nuclear particles. The energy released in an automobile is about 60% thermal and 40% kinetic, which is converted to the controlled vehicle motions that the engine is connected to. As such, an equivalent amount of energy released is much safer in the controlled, normal operation of motor vehicles. So if you want to carry this thought-experiment farther, you'll need to brush up the numbers and check the details of those figures more carefully. Nimur (talk) 21:01, 9 December 2009 (UTC)[reply]
Also, check your "20 mpg" figure. I would not be surprised if the average fuel efficiency over 2.2 trillion miles (which probably includes freight and trucking) is actually much worse. Trucks make a huge percentage of the total vehicle-miles travelled in the U.S., and when loaded, they do not usually get 20 mpg (and they do not run on gasoline). Nimur (talk) 21:07, 9 December 2009 (UTC)[reply]
The energy released in an automobile is almost 100% thermal. There is only kinetic energy temporarily. Likewise, most of the energy from a nuclear weapon gets converted to heat pretty quickly. --Tango (talk) 21:10, 9 December 2009 (UTC)[reply]
You're counting braking (and friction), which is a can of worms. But, you are right, technically. My point was that the energy in a car flows through controllable pathways, rather than uncontrolled destructive release. Nimur (talk) 21:11, 9 December 2009 (UTC)[reply]
The problem with comparing energy like this is that raw energy is not that interesting. Compared to, say, the energy that the sun imparts to the earth, the amount of energy released by nuclear warheads is trivial. Ditto things like earthquakes (the other place where they love to use kiloton/megaton/gigaton measurements). The trick is that nuclear warheads release that energy quickly and in a very limited space. If you release a megaton of energy in tiny, diffuse intervals, it's not that impressive. If you release it all at once, over a city, that's impressive. (Additionally, as has been noticed, the effects of nuclear weapons are more diverse than just energy release. You do not get the same results at all from automobile emissions.) --Mr.98 (talk) 21:29, 9 December 2009 (UTC)[reply]
Factual thing—I think you mean the W88, not the W89. --Mr.98 (talk) 21:37, 9 December 2009 (UTC)[reply]

absolute zero

Would an area of space at a temperature of absolute zero have a greater permittivity than an area of space characterized only as a perfect vacuum? 71.100.160.161 (talk) 22:54, 9 December 2009 (UTC) [reply]

Space doesn't really have a temperature. Temperature is a measure of the kinetic energy of particles - no particles - no temperature. You could perhaps measure the speed of the very few stray molecules zipping around in space and come up with a number for temperature - but I'm not sure it really means much! SteveBaker (talk) 00:01, 10 December 2009 (UTC)[reply]
Temperature is not really a measurement of kinetic energy. For example a single particle has kinetic energy, but no temperature. Temperature needs to be understood in the context of statistical mechanics, in terms of the relationship between entropy and internal energy. --Trovatore (talk) 00:34, 10 December 2009 (UTC)[reply]
Even in the absence of 'matter' (which is what is really meant by vacuum) there is still the presence of electromagnetic radiation that allows for a definition of the temperature of an 'empty' region of space. As far as I know that has no effect on the vacuum permittivity. Dauto (talk) 01:04, 10 December 2009 (UTC)[reply]
True; I thought about saying something about that (references, as in reference desk: black-body law, Boltzmann distribution) but decided to make just one point. It seems to me that this bit about temperature v kinetic energy is widely misunderstood even among editors with good general science backgrounds. A definition in terms of kinetic energy per particle works for monatomic ideal gases and that's about it. (Even for them, you have to be talking about chunks of gas whose center of mass is at rest, not rotating, etc). Once you have any interaction among the particles beyond elastic collision, there is no simple relationship between temperature and energy per particle. --Trovatore (talk) 01:29, 10 December 2009 (UTC)[reply]
That is also very true, but pedagogically speaking, you cannot start from that principle. When people have questions here about these concepts, they often lack the background to start from the real definition of these things. It is more helpful to start from the simpler models and build up to the more accurate definitions later. For example, you can't take someone who has never taken a chemistry class and drop the Schrödinger equation on them and say "this is how electrons work". It's the same thing here. We start with the basic, oversimplified model (temperature is the average kinetic energy of a large number of particles) and then, if their level of understanding needs to be deeper, we provide it. But starting at the level of understanding someone with an advanced degree in physics would have, well, that isn't exactly helpful for the average layperson. --Jayron32 02:16, 10 December 2009 (UTC)[reply]
I've never been a big fan of lies to children. Temperature is hard to understand; that fact should not be concealed. Once that's established, yes, you can go on to explain approximations to the concept.
But really my point was another — I've observed that people editing articles like absolute zero often really don't get the idea that temperature is not about kinetic energy per se. And I've seen such comments from people I'd expect to know better. --Trovatore (talk) 02:26, 10 December 2009 (UTC)[reply]
Well, if you don't want to teach the simplified model, how many months are you going to spend teaching someone the finer details of statistical mechanics so they can "get" what temperature really means? The average person has no use for that level of detail in their day-to-day lives. Of course, the most "elegant" definition of temperature is the Zeroth law of thermodynamics, which merely states that if two systems are in thermal equilibrium (i.e. no exchange of heat between them) their temperatures must be identical; in other words, temperature is that property which is shared between two arbitrary systems in thermal equilibrium. The zeroth-law definition is elegant also in the sense that it does not care about the type of organization in the systems, and it even allows for a meaningful definition of the temperature of a vacuum: the temperature of a vacuum is the same as the temperature of whatever non-vacuum system it is in thermal equilibrium with, and in practice that is not absolute zero. Even a region of space empty of matter still contains radiation (quite apart from its zero-point energy), and far from any local sources that radiation is the cosmic background radiation, at about 2.7 K. I've always liked the zeroth-law definition of temperature for precisely the reasons you describe temperature as being "hard". It's not temperature that's hard to understand, it's molecular motion which is hard to understand. --Jayron32 04:49, 10 December 2009 (UTC)[reply]
Molecular motion is not the hard concept here. Oh, it's hard enough, certainly, but it's a red herring when discussing conceptual approaches to temperature. The hard concepts are the statistical ones, such as, precisely, "thermodynamic equilibrium". What does thermodynamic equilibrium mean, really? I don't think it's even well-defined, in the final analysis. What it means depends on the system you're examining, and what particular things you want to know about that system. --Trovatore (talk) 04:56, 10 December 2009 (UTC)[reply]
If you like, you can define it by the ability to do work; two systems in thermodynamic equilibrium cannot do work on a third system. Two systems which are not in thermodynamic equilibrium will be able to do work on a third system until such time as they reach thermodynamic equilibrium. That provides one with the "free energy" definition of temperature (the second law of thermodynamics, if you prefer). --Jayron32 05:04, 10 December 2009 (UTC)[reply]
What if the first two systems are moving with respect to the third system, and they do work on it just by crashing into it? What if they have coherent pressure waves running through them? Are you going to say we can't define temperature in these cases? I have never seen a satisfactory demarcation of what parts of the motion/energy of the system are "thermal" and which are not. I don't believe a truly philosophically adequate one exists. I suspect that the demarcation really belongs to pragmatics and not physics. Not that there's anything wrong with that, provided it's acknowledged. --Trovatore (talk) 06:40, 10 December 2009 (UTC)[reply]
HOWEVER, even you admit that the statistical approach to understanding temperature is unreachable for the average lay person, and yet the average lay person still needs to have some understanding of what temperature is and how it works. Again, do we spend time teaching concepts to someone who has no use for them simply so they "get" the higher implications of temperature? Or do we teach them a simpler model of temperature, if working within the simpler model serves them in their day-to-day lives? --Jayron32 05:07, 10 December 2009 (UTC)[reply]
Just don't lie. There's nothing wrong with providing simplified accounts, provided they're labeled as what they are. --Trovatore (talk) 06:40, 10 December 2009 (UTC)[reply]
Wouldn't this have been more suitable if taken to the talk page? Vimescarrot (talk) 09:29, 10 December 2009 (UTC)[reply]

Thank you very much for this discussion, whether it is more appropriate for the talk page or not. To clarify, I do have a bit better an understanding of temperature and entropy than the average person, since I repaired AC and refrigeration units and got curious about latent versus sensible heat. As a side note, it is fascinating to see two Wikipedia "librarians" home in on the best way to respond.

Now here is what I am going for... was the environment the Big Bang happened in at absolute zero, which the cosmic background radiation hovers above (won't ask whether that temperature has ever changed, yet)? 71.100.160.161 (talk) 18:03, 10 December 2009 (UTC) [reply]

At the earliest moments for which known physics is believed to work, the universe was fantastically hot. Greater than 10^28 K. See Timeline of the Big Bang for some details. Dragons flight (talk) 18:13, 10 December 2009 (UTC)[reply]
If, by "the environment the Big Bang happened in", you mean the "something" (though I prefer to say "nothing") that the early universe expanded "into", then "it" could not have had a temperature defined because it was not matter or even space-time. The temperature of the Cosmic microwave background radiation is gradually cooling towards absolute zero, but extremely slowly. Dbfirs 09:53, 11 December 2009 (UTC)[reply]

How much warning do we get before a supernova becomes detectable to the naked eye?

Let us assume light from a supernova reaches us in the next few months or years. Betelgeuse is one of the best candidates, even if the chances that it would happen exactly in this timeframe are very slim (but still realistically above zero). As it will outshine the full Moon and be visible even in daylight, it could lead to serious problems; thousands or even millions could die if panic strikes and people start fleeing the big cities or start looting and plundering, thinking the world will soon end. Especially if it happened in December 2012. So it seems reasonably important to inform political leaders and leaders of mainstream religions, so they can prepare their people and explain what exactly is bound to happen. So, how much time will we have between astronomers detecting and reliably predicting the event and it becoming obviously visible? A few hours? Days? Weeks? Months? --131.188.3.20 (talk) 23:49, 9 December 2009 (UTC)[reply]

The Supernova Early Warning System could perhaps detect neutrinos a few hours before the main explosion...but it's not really certain that this is true. Aside from that - nothing goes faster than the speed of light - so the light gets here years to centuries before the particulate material. SteveBaker (talk) 23:56, 9 December 2009 (UTC)[reply]
Nitpick: the neutrinos are the main explosion. Everything else (such as electromagnetic radiation and the kinetic energy of the expanding gases) comprises less than 1% of the energy released. Algebraist 00:06, 10 December 2009 (UTC)[reply]
Wow, thanks, I didn't know we had an article about that, or even that neutrino detectors are constructed and maintained especially for this purpose. However, these 3 hours seem frighteningly short, a lot less than would be required to inform a significant percentage of the population, especially those who would be more prone to panic. Are there no other ways to detect a supernova from a star fairly big and close enough? Measuring extreme size fluctuations, the spectrum of emitted light, or other symptoms of an impending supernova? --131.188.3.21 (talk) 00:13, 10 December 2009 (UTC)[reply]
I take it the name is a bit tongue-in-cheek. I gather that its purpose is not so much to provide a warning as a notice, so that the astronomers can get their telescopes pointed in the right direction. --Trovatore (talk) 00:15, 10 December 2009 (UTC)[reply]
Why would anybody panic? Dauto (talk) 00:58, 10 December 2009 (UTC)[reply]
(EC) and my question as well. You seem awfully certain that everything would go to hell - why? 218.25.32.210 (talk) 00:59, 10 December 2009 (UTC)[reply]
Well, not everyone. Pretty sure if you walk down the street in the evening and see a big flash of light in the sky, growing bigger and bigger until it's brighter than the Moon, you will know in an instant that "wow, that's a supernova, cool" and go on. I'm not sure everyone will be like this. Look at people committing suicide because of some freaking comets. And comets are seen frequently enough that people should be accustomed to them. --131.188.3.20 (talk) 01:43, 10 December 2009 (UTC) [reply]
And of course it's only a 50/50 chance that you'll see it first hand - you might be on the opposite side of the planet at the time and only hear about it on the news. That'll give you plenty of time to update supernova and get a head start on Supernova mass panic of 2009 and List of supernovea in 2009. SteveBaker (talk) 01:57, 10 December 2009 (UTC)[reply]
Be sure to re-direct List of supernovea in 2009 to the correct spelling, List of supernovae in 2009. Nimur (talk) 02:03, 10 December 2009 (UTC) [reply]
Come on, if you're creating the thing you don't want to miss the chance of List of supernovæ in 2009. Algebraist 02:15, 10 December 2009 (UTC)[reply]
I think you overestimate the panic effect. Some people will panic, because some people will panic at anything, but most will turn on the news, or go on to Google, or whatever, and find out what it is. Those who cannot do any of these things will probably just wonder. Maybe fear. But full-blown, mobs-and-suicide panic? Show me the precedent for it in modern times. --Mr.98 (talk) 02:10, 10 December 2009 (UTC)[reply]
I'm surprised there isn't more panic over this. Originally thought to be a hoax, it has been circulating in many "well-reputed" newspapers, Daily Mail and Dagens Nyheter, and Fox News for example. Frankly, whether it is natural or the result of a rocket misfire, it's fairly frightening. And if it turns out to be a hoax, that is also frightening - one would hope that a giant apparition in the sky would be easily verifiable or refuted by major regional news outlets. ... So, do we have 2009 Norwegian sky apparition yet? Nimur (talk) 02:20, 10 December 2009 (UTC)[reply]
Spaceweather.com has a lot of info about this -- apparently it was a Russian ICBM test that went awry. Looie496 (talk) 02:48, 10 December 2009 (UTC)[reply]
Well, that makes me feel much better.... --Trovatore (talk) 02:49, 10 December 2009 (UTC)[reply]
We have 2009 Norwegian spiral anomaly. FWIW. Acroterion (talk) 03:49, 10 December 2009 (UTC)[reply]
Not everyone is as well educated as you guys, and not everyone is even literate on this planet. However, I don't want to push this further, because the point of the question was not how big or small the panic would be, but how soon can we reliably predict the event, which still has no meaningful answer except for the 3 hours given by neutrino detection. --131.188.3.20 (talk) 10:48, 10 December 2009 (UTC)[reply]
I don't think anyone thinks it is about being educated and literate. The question is whether people react with panic to such things to any significant degree. I think the vast majority of people, anywhere, are more likely to just hunker down and wait (or assimilate it into their world-view, which people do pretty well), rather than panic. I am no expert on mob psychology but strange things in the sky don't seem like serious triggers to me, compared to, say, accusations of rape by people of another race and things that really push human psychological buttons. --Mr.98 (talk) 15:02, 10 December 2009 (UTC)[reply]
I will note that SN 1054, the supernova which created the Crab Nebula, was widely observed. In 1054 AD, it remained visible during daylight for more than three weeks, and yet was not linked to mass suicides, rioting, or other chaos. TenOfAllTrades(talk) 19:02, 10 December 2009 (UTC)[reply]
I like the OP's belief that "leaders of mainstream religions" when given astronomical facts will explain to "their people" exactly what is bound to happen. Think of Pope Urban VIII getting facts from Galileo or Marshall Applewhite explaining Comet Hale-Bopp. Cuddlyable3 (talk) 22:20, 10 December 2009 (UTC)[reply]
Indeed - there is a long history of political, military and religious leaders gaining the capability to calculate solar eclipses and using that knowledge to scare the populace into bending to their will. Far from carefully explaining and calming the populace - they have often used this knowledge to make some dire proclamation at the moment of the eclipse and scare the bejeezus out of the poor, math/astronomy-deprived masses. There have been a few cases (Thales of Miletus for example) where this knowledge has been used for good...but it's not typical! SteveBaker (talk) 13:59, 11 December 2009 (UTC)[reply]
We've heard yet another rant about how absolutely evil and primitive every society was except a libertarian one. Thanks :P But this still does not answer the question: how early can we reliably detect a supernova of a similar scale to what we can expect from Betelgeuse, for example? --131.188.3.21 (talk) 20:26, 11 December 2009 (UTC)[reply]
Currently, we don't know how to do this, other than by detecting the neutrino flux, which, as said above, arrives at best a couple of hours or so before the visible photons ramp up (not because neutrinos are faster, but because they're the first thing to be produced when the star actually blows) - consider SN 1987A, where the gap was about 3 hours, and that for a star not actually in our own Galaxy, though close to it in a nearby satellite galaxy.
There are two problems. One is that nearby (actually in or very near our own Galaxy) and observable supernovae, bright enough to be easily visible to the naked eye, are very infrequent - SN 1987A was the first in several hundred years (the previous being SN 1604); because of this we haven't had much chance to work out any 'warning signs', bearing in mind that there is more than one type, and cause, of supernova. The other is that what warning signs there might be may well be detectable only with considerably better telescopes (or other instruments) than we currently have. 87.81.230.195 (talk) 23:27, 11 December 2009 (UTC)[reply]


December 10

sulfoxide functional groups in drugs

Is the primary fate of sulfoxides to interact with cysteine residues in enzymes to form sulfide-sulfide bonds? John Riemann Soong (talk) 01:28, 10 December 2009 (UTC)[reply]

L1 Lagrange point

The Lagrangian point article states, "The Earth–Moon L1 allows easy access to lunar and earth orbits with minimal change in velocity and would be ideal for a half-way manned space station intended to help transport cargo and personnel to the Moon and back."

As far as I can tell from some of the external links at the article, L1 is past the orbit of the Moon. For some reason this example point isn't spelled out in the article though others are. I had to go looking elsewhere to see where L1 is in relation to the moon's orbit. So, could someone explain, more completely and in fairly basic terms (i.e. I'm not an amateur astronomer), why a point which is past the Moon's orbit would be a good half way point for going to the moon? Thanks, Dismas|(talk) 03:51, 10 December 2009 (UTC)[reply]

This is the Earth-Moon L1, which is of course between the Earth and the Moon. You're thinking of the Earth-Sun L1. Algebraist 03:57, 10 December 2009 (UTC)[reply]
Ah! Right. Got it. Sorry for that... I must have read it too quickly. Dismas|(talk) 03:59, 10 December 2009 (UTC)[reply]

I'm suspicious of the claim, though. The Earth-Moon L1 point is about 5/6 of the way to the moon, and the amount of energy required to get there from here is pretty close to the amount to get all the way to the Moon. In what way would it be logistically useful to stop there? --Anonymous, 04:31 UTC, December 10, 2009.
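As a sanity check on the "pretty close" claim, here is a minimal sketch in Python comparing only the climb in Earth's gravitational potential from a low Earth orbit out to the L1 distance versus out to the Moon's distance. It deliberately ignores the Moon's gravity, orbital kinetic energy and any real trajectory design, and the distances used are rounded assumed values.

 # Potential-well climb from low Earth orbit out to the Earth-Moon L1 distance vs. out to the Moon.
 GM_earth = 3.986e14   # m^3/s^2, Earth's gravitational parameter
 r_leo    = 6.578e6    # m, orbit radius at about 200 km altitude
 r_L1     = 3.26e8     # m, Earth-Moon L1 sits roughly 84-85% of the way to the Moon
 r_moon   = 3.844e8    # m, mean Earth-Moon distance
 def climb(r):
     """Specific energy (J/kg) needed to climb from LEO radius out to radius r in Earth's field alone."""
     return GM_earth * (1.0 / r_leo - 1.0 / r)
 print(climb(r_L1) / climb(r_moon))   # ~0.997, i.e. almost the same climb either way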

It may be 5/6 of the way by distance, but distance isn't really relevant in the context of space travel. Since there is minimal drag in space, you can get pretty far without having to exert any force (spend any energy), which is what costs fuel/money. From the first line of this question, quoted from the article, you can see that by definition, the Lagrangian point requires minimal change in velocity to switch from an Earth orbit to a lunar orbit. Change in velocity (delta-v) means thrust, which means propellant, which is what costs money. moink (talk) 09:44, 10 December 2009 (UTC)[reply]
First, the relevance of the distance from Earth is that it determines the energy requirement to get there from here, i.e. how high you have to rise in the Earth's gravity well. Second, the cost of switching from "an" Earth orbit to "a" Moon orbit isn't important; what matters is the cost of switching between useful orbits. --Anonymous, 10:37 UTC, December 10, 2009.
The tight linkage between distance traveled and energy expended (e.g. on Earth's surface) is due to the need to overcome friction, wind resistance, etc. Moving with constant momentum in a vacuum where gravity is negligible would require nearly zero energy. Add gravity, and the energy required is related to work done against gravity, with distance playing a role only as it relates to the force of gravity. Thus, when talking about the energy required to move in space, distance is not as relevant as intuition might suggest. Of course, distance will affect the time required to make a trip, in a velocity-dependent way. -- Scray (talk) 11:49, 10 December 2009 (UTC)[reply]
For the third time, I'm talking about the distance only in relation to the force and energy required to overcome (the Earth's) gravity. --Anonymous, 21:33 UTC, December 10, 2009.
By definition, L1 lies on the path where you climb the least out of Earth's gravity well before falling into the Moon's gravity well. Dragons flight (talk) 12:23, 10 December 2009 (UTC)[reply]
Correct, but velocity is relevant as well as position. To make a stop at L1 you must expend energy to enter an orbit matching L1, then more energy to get moving toward the Moon again. --Anonymous, 21:40 UTC, December 10, 2009.
By its nature, both Earth and the Moon are "downhill" from L1. Yes, if you're stopped at L1, you need to expend energy to get moving again, but since an orbit at L1 is an unstable equilibrium, any expenditure of energy, no matter how small, is sufficient. --Carnildo (talk) 00:40, 11 December 2009 (UTC)[reply]
Very true. However, the point about the difference between "an orbit" and "a useful orbit" is a very good one - a small expenditure of energy would get you into a very high orbit around either the Earth or the Moon, neither of which is very useful. --Tango (talk) 17:10, 11 December 2009 (UTC)[reply]

Assume an initial geocentric elliptical orbit with its apogee close to L1 and its perigee close to Earth's surface. The perigee could be raised by several flybys of the Moon (or should I say the Earth-Moon L1 point) with a carefully selected trajectory... Then the relative velocity would be small and the spaceship could enter a lunar orbit using only little rocket fuel (sometimes such slow transfers are called the "Interplanetary Transport Network"). In order to be even more fuel-efficient, we would have space elevators both at Earth (with its top above the geostationary orbit) and at the Moon (with its top above the Lagrange point). Icek (talk) 07:34, 13 December 2009 (UTC)[reply]

Known changes: mental function: adulthood

What is known about changes of a physiological sort and also perhaps a behavioral sort that occur in the brain and in the minds of people in the years between the beginning of adulthood and the beginning of old age? These points may be poorly defined, especially "old age." But I seem to recall seeing a lot written on how these things change through childhood and perhaps into early adulthood. And it is known that age significantly correlates with the mental decline seen in some older people. But is anything known about any changes that transpire in the forty or fifty years in between these two points? Bus stop (talk) 04:17, 10 December 2009 (UTC)[reply]

Gerontology is the study of ageing. Developmental psychology has something to say about psychological changes associated with adulthood. --TammyMoet (talk) 10:08, 10 December 2009 (UTC)[reply]

Although I am old there is nothing wrong with my short term memory nor is there anything wrong with my short term memory.Cuddlyable3 (talk) 21:23, 10 December 2009 (UTC)[reply]

Life Expectancy in 2050

What will the life expectancy of the world be in 2050?

What will the life expectancy of America be in 2050?

What will the life expectancy of Australia be in 2050?

Bowei Huang (talk) 05:06, 10 December 2009 (UTC)[reply]

Wikipedia is not a crystal ball. That being said, I would guess that it wouldn't be much higher than today because the life expectancy of those countries seems to be close to the maximum life span. Jkasd 08:35, 10 December 2009 (UTC)[reply]
Wikipedia might not be a crystal ball, but others have made estimates.[3][4][5] If you want to see more examples, look on Google News and Google Scholar. Fences&Windows 15:19, 10 December 2009 (UTC)[reply]
I answered a very similar question at the Miscellaneous desk and the U.S. census source I give there also does future projections. So the answer there will also answer this question. --Jayron32 16:25, 10 December 2009 (UTC)[reply]

But what if we don't talk about the life expectancy of America or Australia, we just talk about the life expectancy of the world? What are the projections for the world's life expectancy in 2050?

Bowei Huang (talk) 23:49, 10 December 2009 (UTC)[reply]

Bowei Huang: Go to the miscellaneous desk. Find the very similar question you asked there. Click the link I gave you for the U.S. Census Bureau's International Database. Follow the instructions I gave there to find the data you are looking for. It has data for the USA, for Australia, and for the whole world, and for every year going back a long time, and for projections for many years into the future. It's all there. Trust me. You don't have to keep asking. It's all there for you to find. --Jayron32 05:43, 11 December 2009 (UTC)[reply]

E.V.S. project

Which topic is good for e.v.s project? —Preceding unsigned comment added by 117.200.178.181 (talk) 06:55, 10 December 2009 (UTC)[reply]

What is an EVS project? Dismas|(talk) 06:59, 10 December 2009 (UTC)[reply]
Perhaps European Voluntary Service? AndrewWTaylor (talk) 09:22, 10 December 2009 (UTC)[reply]

Why is potassium chloride caustic if its pH is 7?

do you guys know? —Preceding unsigned comment added by 74.65.3.30 (talk) 10:21, 10 December 2009 (UTC)[reply]

It isn't. The burning sensation you feel if it gets into an open wound is a consequence of potassium triggering the exposed free nerve endings (somebody correct me if I'm wrong). Both potassium and chloride potentials are important components of the electrical balance in nerve cells. — Yerpo Eh? 10:28, 10 December 2009 (UTC)[reply]
In addition to Yerpo's point above, any water-soluble salt or concentrated salt solution (including regular old sodium chloride: table salt) will cause discomfort in an open wound. The high salt concentration outside the body's tissues will draw out water, causing a localized osmotic stress and triggering pain. TenOfAllTrades(talk) 14:42, 10 December 2009 (UTC)[reply]
Also note that Potassium Chloride, unlike Sodium Chloride, is very damaging to skin and tissue and does not promote healing but rather the opposite. A wound will not heal if kept exposed to Potassium Chloride, and exposure of the intestinal tract to Potassium Chloride will cause ulcers. The reason may be linked to the fact that the primary extracellular ion is Sodium, not Potassium; correct me if I am wrong. 71.100.160.161 (talk) 17:42, 10 December 2009 (UTC) [reply]
Potassium chloride is a common ingredient in salt substitutes, so in quantities similar to sodium chloride it would not be likely to cause adverse effects in human consumption. Googlemeister (talk) 20:29, 10 December 2009 (UTC)[reply]
Next time you get a cut or abrasion don't do a reality check by using potassium chloride as an antiseptic or to cover and protect the wound even if you believe your opinion is fact. 71.100.160.161 (talk) 22:08, 10 December 2009 (UTC) [reply]
Don't credit me for things I did not write. I say nothing about skin application, only that your statement that potassium chloride causes ulcers when eaten is demonstrably false when consumption is of the same magnitude that one would eat sodium chloride. Googlemeister (talk) 22:20, 10 December 2009 (UTC)[reply]
People use salt to cleanse wounds all of the time. The Potassium in foods like bananas is relatively safe if you do not eat too many. Your taste buds, however, can tolerate a great deal more than your intestines. There are two types of salt substitute. One is an approximate 50/50 mix of Sodium and Potassium. The other is all Potassium. What is needed are warning labels. 0.7 grams per liter of water is the limit for the 50/50 mix; 0.7 grams of the 100% and you will begin to have pain in your gut. 71.100.160.161 (talk) 22:50, 10 December 2009 (UTC) [reply]

Why is potassium chloride damaging to the skin if its pH is 7? Isn't that neutral?

Osmolarity. Take a look at the top picture (left most panel) in the hypertonic article and imagine that's what's happening to your skin cells. -- 128.104.113.17 (talk) 17:28, 11 December 2009 (UTC)[reply]


There's also the fact that potassium depolarises cell membranes. (Extracellular potassium levels are supposed to be low whereas intracellular potassium levels are supposed to be high.) Which is why, you know, intravenous potassium chloride is a method for executions. John Riemann Soong (talk) 22:27, 11 December 2009 (UTC)[reply]

Sense of touch

How fast does it travel? And how does it do so so fast that when you touch something, you immediately feel it? Accdude92 (talk to me!) (sign) 14:47, 10 December 2009 (UTC)[reply]

It's not instantaneous—it's as fast as your nerves can transmit the signal and your brain can make sense of it (though some types of sensations—like extreme pain—can be processed without your brain fully understanding them, and responded to with a reflex, if I recall). Reaction time is probably a good place to start. --Mr.98 (talk) 14:55, 10 December 2009 (UTC)[reply]
See Axon#Sensory for signal travel speed. It depends on the fiber myelination. --Mark PEA (talk) 15:09, 10 December 2009 (UTC)[reply]

Measuring magnetic susceptibility

Is a Gouy balance the same thing as a Faraday balance? They are both used for measuring magnetic properties. Alaphent (talk) 16:35, 10 December 2009 (UTC)[reply]

We have an article on Gouy balance. The Faraday balance method is very similar, with the difference being in the size of the sample. In the Faraday method, a small sample (essentially a point) is balanced in a graded magnetic field. In the Gouy method, a long rod of sample is suspended with one end in the strong field between the pole pieces and the other end in a region of negligible field, so the rod itself spans the field gradient. This book shows the difference diagrammatically. SpinningSpark 19:08, 10 December 2009 (UTC)[reply]
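As an illustration of how a Gouy-type measurement is turned into a number, here is a minimal sketch using one common textbook form of the Gouy force, F = χ_v·A·B²/(2μ0). It neglects the susceptibility of the displaced air, assumes the field at the top of the sample rod is negligible, and every numerical input is a made-up illustrative value rather than data from the book linked above.

 # Turning a Gouy-balance weight change into a volume magnetic susceptibility.
 import math
 mu0     = 4 * math.pi * 1e-7   # T*m/A, vacuum permeability
 g       = 9.81                 # m/s^2
 delta_m = 2.0e-6               # kg, apparent mass gain with the field on (assumed)
 A       = 1.0e-5               # m^2, cross-section of the sample tube (assumed)
 B       = 0.5                  # T, field at the lower end of the sample (assumed)
 F     = delta_m * g                 # extra downward force registered by the balance
 chi_v = 2 * mu0 * F / (A * B**2)    # dimensionless volume susceptibility
 print(f"chi_v = {chi_v:.2e}")       # about 2e-5, a typical paramagnetic order of magnitude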

ammonia sanitizer

I understand that the meat processing industry uses anhydrous ammonia to kill E. coli and other meat product contaminants by sealed exposure to the gas. Is this really done, does it work, and is it harmful to the meat products or to consumers? 71.100.160.161 (talk) 17:33, 10 December 2009 (UTC) [reply]

It seemingly is done, and works, according to Section 7.4.4 of the article you yourself linked to, which also says that the US Department of Agriculture says it's safe. Now, how far do you trust them? 87.81.230.195 (talk) 01:57, 11 December 2009 (UTC)[reply]

Sodium bicarbonate pH?

Why is the pH listed on this site as 10 when it is really 8? —Preceding unsigned comment added by 74.65.3.30 (talk) 17:56, 10 December 2009 (UTC)[reply]

Solid sodium bicarbonate has no pH. pH is a property of a substance when it dissolves in water. pH is a measure of the concentration of something called hydronium ions in water. pH is based on a negative logarithm scale, which means that small numbers indicate a higher concentration of hydronium, and larger numbers indicate a smaller concentration. It also means that each step of 1 on the pH scale is a factor of 10 in concentration, so a pH of 1 has 10 times the concentration of hydronium as pH 2, and 100 times the concentration as pH 3. Now, how much sodium bicarbonate you add to water will affect how much hydronium it makes, which will then affect the pH. A 1.00 molar solution of sodium bicarbonate has a pH of 10.3, but if you had a more dilute solution, the pH would be closer to that of water (pH = 7), while a more concentrated solution would result in a pH farther from water. --Jayron32 20:08, 10 December 2009 (UTC)[reply]
Please tell us which site lists Sodium Bicarbonate as pH 10 or 8. It does not seem to be the Wikipedia article.Cuddlyable3 (talk) 21:03, 10 December 2009 (UTC)[reply]
The Wikipedia article lists the pKa as 10.3... Presumably, the OP confused the terms pKa and pH. I may have too, now that I look at my reasoning. The half-equivalence point of a solution of sodium bicarb will have a pH of 10.3, not a 1 molar solution. Regardless, the OP seems to have a general misunderstanding of how pH works. --Jayron32 21:19, 10 December 2009 (UTC)[reply]
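To put a number on why a baking-soda solution sits near pH 8 even though the infobox quotes 10.3, note that for an amphiprotic salt such as sodium bicarbonate a standard approximation (reasonable at moderate concentrations) is that the pH is about the average of the two pKa values of the parent acid, here carbonic acid. A minimal sketch, using commonly quoted pKa values:

 # Rough pH of a sodium bicarbonate solution from the two pKa values of carbonic acid.
 import math
 pKa1 = 6.35    # H2CO3 / HCO3-  (approximate literature value)
 pKa2 = 10.33   # HCO3- / CO3^2- (the 10.3 quoted from the infobox)
 print(f"estimated pH of a bicarbonate solution: {(pKa1 + pKa2) / 2:.1f}")   # ~8.3
 # And the definition discussed above: pH is -log10 of the hydronium concentration.
 hydronium = 1.0e-8   # mol/L, an assumed value purely for illustration
 print(f"pH = {-math.log10(hydronium):.1f}")   # 8.0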


So why doesn't the wiki article just list the real pH? In fact, most chemicals here on wiki don't have a pH listed. Why?

Did you actually read anything I wrote, or even click the links and read the articles? pH refers to a very specific property of a very specific type of thing. It is specifically the amount of hydronium ions created when you dissolve something in water. That amount of hydronium ions is going to be dependent on how much you dump into the water. Nothing has an inherent pH. It's not a property of a substance, it's a property of a mixture of a substance and water. If you dump two scoops of sodium bicarbonate in water, the mixture will have a different pH than if you dump one scoop of sodium bicarbonate in water. --Jayron32 05:40, 11 December 2009 (UTC)[reply]

yer a fucking idiot if i take baking soda POWDER to a lab and ask them the ph they will tell me. quick lime ph is like 11 and ITS A FUCKING POWDER U IDIOT —Preceding unsigned comment added by 74.65.3.30 (talkcontribs)

No, they don't tell you the pH of a powder because powders do not have a pH. Read the article titled pH. The first line of that article is "pH is a measure of the acidity or basicity of a solution". You should also probably read what a solution is, if that doesn't make sense to you. And typing in all caps and calling people names doesn't make you right. It just makes you look rude. --Jayron32 06:37, 11 December 2009 (UTC)[reply]
No need for name-calling. You say "if i take baking soda POWDER to a lab and ask them the ph they will tell me. quick lime ph is like 11".[original research?] Please actually do this. Let us know what lab and what they tell you. DMacks (talk) 06:58, 11 December 2009 (UTC)[reply]

Since this was an issue which I noticed when searching on this, can someone look into Talk:Sodium_bicarbonate#pKa Nil Einne (talk) 10:31, 11 December 2009 (UTC)[reply]

To be fair, before I'd really got a handle on what pKa was, the WP articles on acids etc confused me too. I know that it's obviously not useful to put up pHs of various solutions, but I think the OP has fallen into a fairly common hole that lurks within the chemistry articles. Brammers (talk) 10:35, 11 December 2009 (UTC)[reply]

The chembox entry link points to Acid dissociation constant, so (assuming people actually click a link before assuming what a term means) that is the page that needs to be very clear, very early, about the difference between the acidity of a "chemical in solution" and the acidity of a "solution of a chemical". DMacks (talk) 10:47, 11 December 2009 (UTC)[reply]
The deal is, with acid-base chemistry, it is pretty complicated. I think I did my best to explain what pH is above, but really, consider that we have three theories of acid-base chemistry, and they ALL serve their purpose (Arrhenius theory, Brønsted–Lowry theory, and Lewis theory). If people arrive at Wikipedia with a misunderstanding of what pH is, all we can do is attempt to correct the misunderstanding in terms they are likely to understand. If the articles that exist at Wikipedia need fixing in order to make them clearer, let's do that too. This was a case of someone just being rude for its own sake. I patiently explained in two different ways how pH works, and got called a "fucking idiot". I don't know that the person who asked the question is in the proper frame of mind to be educated, given his response. --Jayron32 17:04, 11 December 2009 (UTC)[reply]

should a shit blanket go over an awesome one or vice versa?

if I have a shit excuse for a blanket and also an awesome Taj Mahal of blankets, would I obtain optimum warmth by putting the shit excuse for a blanket over me and the Taj Mahal over that, or vice versa? What is your reasoning? 92.230.69.195 (talk) 19:20, 10 December 2009 (UTC)[reply]

It makes little difference in warmth but it may make a difference in comfort. Which one makes you feel itchy? Dauto (talk) 19:49, 10 December 2009 (UTC)[reply]
It should make little difference if they are both intact. If the shitty blanket has holes in it, allowing for convection, you would probably be warmer if that is on the inside. Dragons flight (talk) 19:55, 10 December 2009 (UTC)[reply]
(ec)It depends on what makes it shitty. If it just has holes in it, I think that could conceivably improve the usefulness of the under-blanket, as it wouldn't detract from the layer of warm air accumulating under the covers and might actually allow some additional circulation of air to lessen the sweats. On the other hand, if it's shitty because it's starchy or plastic-y, it might be better on top, because the starchiness might interfere with circulation and/or feel scratchy. Matt Deres (talk) 19:58, 10 December 2009 (UTC)[reply]
"what is your reasoning" With a title like "should a shit blanket go over an awesome one or vice versa" thank *god* there is a reasoning requirement! But seriously, there are many factors to measure blanket effectiveness, you need to be more specific. As an avid camper that spends nights in a thin tent at sub 0F temperatures, I would say that the wind/water impermeability is paramount for the outermost layer (meaning it stops cold air/water from getting in), whereas the inner layers are measured by their fluffiness (meaning they better insulate the heat inside). Hope this helps! --66.195.232.121 (talk) 21:03, 10 December 2009 (UTC)[reply]
When in the past I slept in a cold room under blankets, I noticed that I would feel warmer if I put something over the blankets that stopped the warm air from rising up and escaping. Similarly, a sleeping bag inside a large plastic bag is warmer (beware suffocating, and you get lots of condensation). So put the most wind-proof one on top. 89.242.147.237 (talk) 23:04, 10 December 2009 (UTC)[reply]
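As a minimal steady-state conduction sketch of the point made in the replies above: if the two blankets are modelled purely as thermal resistances in series (no holes, no draughts, no convection), the heat loss is the same whichever one is on top. The R-values and temperatures below are made-up illustrative numbers.

 # Two blankets as thermal resistances in series: the order does not change the heat loss.
 def heat_flow(t_skin, t_room, r_layers):
     """Heat lost per unit area (W/m^2) through thermal resistances stacked in series."""
     return (t_skin - t_room) / sum(r_layers)
 r_good, r_shoddy = 1.5, 0.3   # m^2*K/W, assumed R-values of the two blankets
 print(heat_flow(33.0, 15.0, [r_shoddy, r_good]))   # shoddy blanket on top
 print(heat_flow(33.0, 15.0, [r_good, r_shoddy]))   # good blanket on top: same 10.0 W/m^2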

Rubbing salt into the wound

I'd always assumed that when salt was rubbed into the wounds of chimney-sweeps, for example, they were causing them pain (an osmosis-based pain, WP says) but also doing them a favour of some kind, perhaps antiseptic or similar. Is this correct, or was it just sadism? - Jarry1250 [Humorous? Discuss.] 20:19, 10 December 2009 (UTC)[reply]

The hyperosmolarity is the cause of the antiseptic effect. Wisdom89 (T / C) 21:30, 10 December 2009 (UTC)[reply]

Is the supercontinent in The Future Is Wild formed like Amasia, where the remaining ocean is essentially all Atlantic, or is it more like Pangaea Ultima? Will The Future Is Wild show what happens when the Pacific Ocean closes completely?--209.129.85.4 (talk) 20:24, 10 December 2009 (UTC)[reply]

atom "melting" temp

What temperature would it take to literally rip an atom apart, that is, the nucleus flies apart and the electrons are lost, leaving only random bits of subatomic particles? I mean, sure, temperature is something like how fast atoms or molecules are moving and vibrating and such, but surely some temperature would be large enough that atoms are moving fast enough that the forces holding the atom together are no longer sufficient? Googlemeister (talk) 21:26, 10 December 2009 (UTC)[reply]

A wild guess: look at the binding energies. 1–9 MeV corresponds to temperatures of 12–104 GK. --Tardis (talk) 21:43, 10 December 2009 (UTC)[reply]
(ec) That's one of those questions whose answer will depend heavily on how it is interpreted. Some radioisotopes will fission spontaneously at room temperature; does that meet the minimum standard? At the other extreme, the binding energy of an atomic nucleus is the (hypothetical) amount of energy required to pull all of its nucleons apart into separate particles. Figure it's 7 or 8 MeV per nucleon (neutron or proton), and the energy of a nucleon at temperature T will be (very roughly) on the order of k_B·T. In that case, the whole thing drops apart at around 10^11 kelvin (that's 8 MeV divided by k_B). The thermal energy of each particle will be roughly equal to the energy with which it would be bound to the nucleus, which is probably closer to the sort of answer you're looking for. Note that I'm back-of-the-enveloping things here, so if anyone has a better answer, go to it. TenOfAllTrades(talk) 21:48, 10 December 2009 (UTC)[reply]
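A quick numeric version of the back-of-the-envelope estimates above, asking only at what temperature the typical thermal energy k_B·T matches a given binding energy; the energies are round representative values rather than data for any particular element.

 # Temperature at which typical thermal energy k_B*T matches a given binding energy.
 k_B = 8.617e-5   # Boltzmann constant in eV per kelvin
 for label, energy_eV in [("outer electron (~10 eV)", 10.0),
                          ("inner electron of a heavy atom (~100 keV)", 1.0e5),
                          ("per-nucleon nuclear binding (~8 MeV)", 8.0e6)]:
     print(f"{label}: roughly {energy_eV / k_B:.1e} K")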
(also ec)
Even in extreme conditions, subatomic particles don't group themselves at "random"; they have a strong tendency to collect in specific groups that form the nuclei of the stable elements and their isotopes. If you start with the nucleus of a radioactive isotope, it'll be unstable no matter what the temperature; for example, a nucleus of radium 226 will sooner or later rip itself apart, all by itself, into a helium 4 nucleus (otherwise called an alpha particle) and a radon 222 nucleus. This will continue with the radon nucleus emitting another alpha particle, and so on until all of the products are stable nuclei.
If you raise the temperature, stable nuclei will start colliding and break up in other ways, or fusing together. But each specific reaction requires a different amount of energy (because of the tendency of particles to collect in specific groups) and therefore a different temperature. Thus for example inside the Sun the core temperature is about 15,700,000 K or °C (say 28,000,000°F). At this temperature colliding nuclei will produce certain fusion reactions with the overall effect that four hydrogen 1 nuclei (i.e. protons) end up forming one helium 4 nucleus. But no other important reactions occur at that temperature. On the other hand, the core of a star that's about to become a Type II supernova contains iron 56 at 2,500,000,000 K or °C (4,500,000,000°F), and it's when this iron begins reacting that the star explodes (because the reactions absorb energy and the core collapses).
So the answer is basically "from the tens of millions of degrees up to the billions, depending on what element you're talking about".
That's for the disruption of nuclei. Electrons are lost at much, much lower temperatures, I think in the tens of thousands of degrees. See plasma (physics). --Anonymous, 22:12 UTC, December 10, 2009.
It depends on which electrons. The outermost electron only costs you between 5 and 25 eV (thousands or tens of thousands of degrees), but the binding energies of core electrons get into the hundreds of eV very, very fast. (Each time you pull another electron off, there are fewer electrons remaining to screen the positive charge of the nucleus, and electrons in core orbitals are 'deeper' down to begin with.) TenOfAllTrades(talk) 22:50, 10 December 2009 (UTC)[reply]
But the limit is still about 136 keV (Z squared times the Rydberg constant, with Z around 100), which is still much smaller than even deuterium's per-nucleon binding energy. --Tardis (talk) 23:12, 10 December 2009 (UTC)[reply]
Oh, absolutely! My point was more that you're not going to get fully-stripped nuclei until you hit millions of degrees, not that the core electron binding energies were comparable to nucleon binding energies. (Indeed, if they were, we'd have some very interesting transmutation chemistry accessible to us....) TenOfAllTrades(talk) 23:39, 10 December 2009 (UTC)[reply]
This is one of the reasons, incidentally, that nuclear fission was so unintuitive to nuclear physicists. If you calculate based on binding energies alone, it should be VERY hard to make a large nucleus break apart—it should take very high energies. But, in fact, it takes low energy neutrons to do it (in U-235, anyway)... because it's not just about the binding energy alone. One physicist described it as throwing a softball at a house and watching the whole structure split into pieces. --Mr.98 (talk) 18:20, 11 December 2009 (UTC)[reply]

Free energy interpretation

I don't know why the Arrhenius equation is being used, since that is a kinetic thing, not a thermodynamic thing...? For the melting temperature you'd have to find the T where the free energy of the free nucleons equals the free energy of the bound nucleus. Thus, if an atom is unstable, you can see that even at, say, 300 K, a tiny amount of the reactive species will exist at any one time. John Riemann Soong (talk) 22:13, 11 December 2009 (UTC)[reply]
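In equation form, the condition described in this post (treating nuclear "melting" as an ordinary equilibrium problem, which is itself a loose analogy) is that the free-energy change of dissociation vanishes:

<math>\Delta G(T) = \Delta H - T\,\Delta S = 0 \quad\Rightarrow\quad T = \frac{\Delta H}{\Delta S},</math>

and below that temperature the dissociated fraction is only suppressed, by a Boltzmann factor of order <math>e^{-\Delta G/(k_B T)}</math>, which is why a tiny amount of the "reactive" species is present even at 300 K.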

Highest temperature

Was the moment of the Big Bang the moment of the highest temperature and, if so, along what curve has the universe cooled? 71.100.160.161 (talk) 22:13, 10 December 2009 (UTC) [reply]

Read Planck temperature, Planck epoch and Absolute hot. The general relativity model predicts infinite temperature, but this formulation fails at that time period anyway. Graeme Bartlett (talk) 23:42, 11 December 2009 (UTC)[reply]
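For reference, the Planck temperature mentioned above follows directly from fundamental constants; here is a minimal sketch of the calculation.

 # Planck temperature from fundamental constants: T_P = sqrt(hbar * c^5 / (G * k_B^2)).
 import math
 hbar = 1.054571817e-34   # J*s
 c    = 2.99792458e8      # m/s
 G    = 6.67430e-11       # m^3 kg^-1 s^-2
 k_B  = 1.380649e-23      # J/K
 T_planck = math.sqrt(hbar * c**5 / (G * k_B**2))
 print(f"Planck temperature: {T_planck:.2e} K")   # about 1.4e32 K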

Miller–Urey-type experiments

Miller–Urey-type experiments with more realistic predictions about early Earth's atmospheric composition produce many amino acids, but they also produce deadly toxins such as formaldehyde and cyanide. Even if, by some freak chance, amino acids assembled themselves in the correct order to form usable proteins and then life, why wasn't early life killed off by these toxins that were formed at the same time? --76.194.202.247 (talk) 23:28, 10 December 2009 (UTC)[reply]

Firstly, the concentration of these toxic substances in the sea would not have been that great. Secondly, how do you know that these chemicals were toxic to early lifeforms? Pseudomonas aeruginosa, for instance, is known to be tolerant to hydrogen cyanide, indeed, it will synthesise this chemical in low oxygen conditions. Escherichia coli is tolerant to formaldehyde. On the other hand, most early life was poisoned by oxygen and would not last 5 minutes if it was released now. Life evolves to cope with the environment it finds itself in. SpinningSpark 00:21, 11 December 2009 (UTC)[reply]
That. It's also worth mentioning, perhaps, that the earliest products wouldn't have been necessarily life as we know it, but rather just organic structures that could perpetuate themselves somehow. That's something a lot closer to viruses, for example, than to a living multi-celled organism. ~ Amory (utc) 02:07, 11 December 2009 (UTC)[reply]
You might want to have a look at our article on abiogenesis; it touches on some of the theories for how life came about. TenOfAllTrades(talk) 04:43, 11 December 2009 (UTC)[reply]

December 11

Sewing machines

I can't find anything on how to use a sewing machine to sew a patch in the middle of a large sheet of fabric (50' x 50').

The problem I'm running into is that there isn't enough room between the needle and the base of the sewing machine for that much fabric.

But I've seen patches done in large tents, tarps, sails, blankets, etc.

What's the secret?

(Is there another type of machine that is used for this?)

Simple Simon Ate the Pieman (talk) 03:52, 11 December 2009 (UTC)[reply]

There are special "long arm" machines for that kind of thing. Like this one, for example. SteveBaker (talk) 05:56, 11 December 2009 (UTC)[reply]
The example machine is said ominously to have "All fear driven hook mechanism". Cuddlyable3 (talk) 21:02, 11 December 2009 (UTC)[reply]

On the number of exoplanets that transit their stars in the Milky Way

I'm a pretty avid observer of the amateur transit watch community and I was just wondering something. Obviously, astronomers looking for transits are hoping that the orbital plane of the observed planetary system will coincide with our line of sight, thus having planets cross the light of the star every once in a while, giving us much more direct evidence for their existence. To measure the radial velocity of stars we watch how quickly the star moves towards and away from us, and thus a similar edge-on orbital orientation works best for radial velocity measurements as well.

Now my question is, is there a tendency for planetary ecliptic planes to orient themselves in a certain way relative to the orbital plane of the galaxy? Obviously this comes down to the rotation of the star in its early stages, so I could then ask: do the rotations of stars in any way reflect their orbit around the galaxy? I couldn't find out the angle of the solar system's plane relative to the Milky Way, but from visual memory it doesn't seem to be very close, which means this won't be of much help to astronomers looking for transiting planets, but I guess it's also possible that local planetary systems will have similar planes, due to similar environment, age, etc.

Does anyone have any insight on this? Thanks! 219.102.221.182 (talk) 05:02, 11 December 2009 (UTC)[reply]

I doubt there is any special tendency for the plane of the stellar ecliptic to line up with the galactic ecliptic. So the probability of an exoplanet eclipsing its parent star basically comes down to its orbital radius and the diameter of the star (and to a much, much lesser extent, the diameter of the planet). The angle through which the orbit will occlude the star is arctan(starRadius/planetaryOrbitRadius)x2.0 - if the angle of the star's ecliptic is random then you can easily figure the probability that it'll happen to line up with the position of the earth. So to answer your question - we'd need to know the distribution of diameters of the stars in our galaxy - but we'd also need to know the statistics of the planetary orbital radius...and there's the problem. We haven't got that information until we've already found the exo-planets! We could probably make a guesstimate.
So if we said that our sun was typical - and we were looking for planets out as far as (say) Saturn - then that eclipse is only visible over an angle of about 0.06 degrees...out of 180 degrees. That's about a one in 3,200 chance. There are a lot of stars out there - so there are a lot of chances - and lots of planets are a lot closer to their stars than Saturn - so the odds are probably a lot better than that. This is a worst-case - where the plane of the stellar ecliptic is completely random. If there is some tendency for the stellar ecliptic to line up with the galactic ecliptic - then for more distant stars - that greatly increases the probability of eclipses - but for closer ones - it decreases the probability. SteveBaker (talk) 05:36, 11 December 2009 (UTC)[reply]
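Here is a minimal numeric check of the estimate above, for a Sun-sized star with a planet at roughly Saturn's distance, together with the standard geometric transit probability (about R_star/a for a randomly oriented circular orbit), which gives the same order of magnitude. The radii and orbital distances are the usual rounded Solar System values.

 # Chance that a randomly oriented planetary orbit happens to transit as seen from Earth.
 import math
 R_sun    = 6.96e8     # m, solar radius
 a_saturn = 1.43e12    # m, roughly Saturn's orbital radius
 a_earth  = 1.496e11   # m, roughly Earth's orbital radius
 # The estimate above: occultation angle as a fraction of 180 degrees.
 angle_deg = 2 * math.degrees(math.atan(R_sun / a_saturn))
 print(f"about {angle_deg:.3f} deg of 180, i.e. roughly 1 in {180 / angle_deg:.0f}")
 # The standard geometric transit probability, roughly R_star / a for a circular orbit.
 for label, a in [("Saturn-like orbit", a_saturn), ("Earth-like orbit", a_earth)]:
     print(f"{label}: about 1 in {a / R_sun:.0f}")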
Thanks! A 1 in 3200 chance, that would actually be quite a bit better than I had expected (for random planes). Is there any particular reason you feel that the planes should be random? The fact that the solar system, earth, and most satellites occur on the same plane would seem to hint at a trend, and I have no reason to assume otherwise. Though I have to say I totally didn't realize the consequences this would have for near star systems... I guess I was thinking too 2-dimensionally. Also it's worth noting that if the Solar System isn't nearly planar with the galaxy, but there indeed is a general trend for other planetary systems to be, that could also lower the odds of catching a transiting planet considerably. 219.102.221.182 (talk) 06:34, 11 December 2009 (UTC)[reply]
It is also worth mentioning that gravitational microlensing techniques such as the proposed The Galactic Exoplanet Survey Telescope (GEST) are expected to be good for small planets some distance from the star. Both observed transits and radial velocity techniques require a large planet close to the star (which microlensing is not very good for), hence microlensing is expected to find many more planets and of a smaller size than previous methods. SpinningSpark 10:46, 11 December 2009 (UTC)[reply]
I agree that one in 3200 seems like unexpectedly good odds - I tried to make it come out worse - but the numbers insisted and we must obey! It's not so much that I have a reason to believe that the ecliptic planes of stars should be random with respect to the plane of the galactic ecliptic - so much as that I can't think of any reason why they shouldn't be random. (Maybe that's the same thing?!) The plane of the Milky Way galaxy is at 60 degrees to the Sun's ecliptic - so unless we're rather special - I'm pretty sure there is no correlation. However, if there is a preferred direction then that's a problem. The deal is that the galactic spiral is quite thick - maybe 1000 light-years. So all of the stars within about 1000 light years of us are much more likely to be above or below us as than they are to lie in the same plane. If there is some tendency for planetary disks to lie in the galactic plane (meaning that our sun is weird in that regard) - then we'd be unable to see any eclipses for almost all of the stars within about 1000 ly. That would be bad news for astronomers because the really nice, easy-to-measure stars are going to be the closest ones...and those would be the problematic kind. However - as I've said - there doesn't seem to be a good reason for them all to line up like that (and the Sun certainly doesn't) - so I think the 1:3200 number is about right...at least for stars the size of the sun with planets at the distance of Saturn. Of course, exo-planets that are closer to the parent star will be seen to eclipse their star over a wider angle - and planets further from their star - less so...so the 1:3200 number is just a ballpark figure based on our Solar System. Also - some of those eclipses will be briefer in duration than others if the planet only just eclipses the very edge of the star - so nice long-duration eclipses would be rarer. SteveBaker (talk) 13:48, 11 December 2009 (UTC)[reply]
Check out Methods of detecting extrasolar planets#Transit method - it has some of the probabilities. For an Earth-like planet in an Earth-like orbit (which are the most interesting), it's about 1 in 200. Pretty good odds. --Tango (talk) 14:00, 11 December 2009 (UTC)[reply]

Binoculars in Games & Movies.

When game makers and movie makers want to depict the idea that we're seeing the world through a pair of binoculars - they often use the trick of masking off the edges of the screen with a pair of overlapping circles kinda like this:

  #################################
  #######      #######       ######
  ####            #            ####
  ###                           ###
  ###                           ###
  ####            #            ####
  ######       #######       ######
  #################################

I'm pretty sure that out here in the real world, binoculars don't look like that when they are properly set up (sadly, I don't own a pair to try) - it's really just a single circle. Is this a true statement? ...and (because I need to convince some people about it today) is there a cogent explanation as to why that is the case.

Finally - is there a name for this kind of mask - maybe some kind of movie jargon?

TIA SteveBaker (talk) 13:32, 11 December 2009 (UTC)[reply]

You are correct that only a single circle is seen in the real world, however that would waste a lot of screen real estate (cf a "gun scope / crosshairs" view as seen in any typical James Bond film opening.) As far as terminology, in Final Cut Pro, they just use the term "binoculars filter". --LarryMac | Talk 13:50, 11 December 2009 (UTC)[reply]
Yeah - and the 'real-estate' issue is under debate too. I'm building a computer graphics simulation where realism is very important - so loss of screen real-estate is taking second place to "getting it right". What I may do is to use a single circle that's only a little bit narrower than the screen - and let it cut off at the top and bottom. A compromise solution.
So what is a convincing explanation (to my Producer - who also doesn't have a set of binoculars at hand right now - and who wants a double-circle) of why we only see one circle?
SteveBaker (talk) 14:09, 11 December 2009 (UTC)[reply]
Steve, if this is for work, this is the perfect occasion to buy a pair of binoculars on company expense ;-). --Stephan Schulz (talk) 14:13, 11 December 2009 (UTC)[reply]
It's all about frame of reference... The point of looking through binoculars is so that both of your eyes can experience the visual of the down-field magnification. Your brain, doing what it always does with information from your eyes, stitches the two circles together to form one stereoscopic view. A looking glass, spotting scope, monocular, etc. are all single optic versions of the same thing, if you aren't interested in the stereoscopic view. The answer that I am getting to is that there is no good computer screen approximation for the function of binoculars, unless you have a stereoscopic display of some sort, so "getting it right" with binoculars is kind of out of the question. If you want my opinion, either go for the movie cheeze double bubble view, change the device in the game to a monocular and make it a circle, or make it a computerized spotting device of some sort that doesn't use discrete optics but still has the wide form factor of binoculars, so that the onscreen representation can be 4:3 or 16:9 or whatever it is that you are going for. --66.195.232.121 (talk) 14:29, 11 December 2009 (UTC)[reply]
Actually, I can't change the device. What we do is "Serious Games" - using games technology and game development techniques to produce simulations for training real people for the jobs they actually do. They could be firefighters, county sheriffs, black-ops guys, campus security people...you name it. But if what they carry is binoculars - we have to simulate binoculars - the actual binoculars they'd really have with the right field of view, depth of field and magnification. We don't have the luxury of being able to make it into some kind of futuristic gadget. Whatever that person would be issued with in reality is what we'll give them...to the degree of fidelity possible with the computer we're running on. SteveBaker (talk) 22:44, 11 December 2009 (UTC)[reply]
...and therefore no good computer screen approximation for the function of eyes. 81.131.32.17 (talk) 17:21, 11 December 2009 (UTC)[reply]
(ec) Try this one on for size — even without special equipment, you already have binocular vision. You're looking through two eyes, but you only see one image. The brain is very good at fusing two images into one, and the same merging process happens when you use a pair of binoculars.
If you don't have a pair of binoculars handy, then you can do a quick and dirty demo with a matched pair of hollow tubes. (Note - this is original research. I just tested this with a toilet paper tube cut in half, as it was the only tubular object I had handy.) Hold the tubes side-by-side, directly in front of your eyes. You want the center of the tubes to be as close as possible to being in line with the pupils of your eyes.
Now, focus your eyes on an object in the distance. Notice how the inner surfaces of the tubes (as seen from each eye) get merged together, and you have an apparently circular field of view? Presto! You may have to wiggle the tubes a bit to get the positioning right, but without a pair of binoculars it's probably the most convincing demo you're going to get. That said, if your boss wants the double-bubble silhouette, just give it to him/her. Or try to persuade him to equip the in-game character with a spotting scope, telescope, or other single-eyepiece device. TenOfAllTrades(talk) 14:37, 11 December 2009 (UTC)[reply]
I think the important point is not so much that your eyes can fuse the two images, since that doesn't preclude the sort of "double-bubble" effect, but that the fields of vision provided by the binoculars to each eye are nearly identical; they almost completely overlap. In contrast, your normal vision without binoculars is much closer to this double-bubble thing, since the left side of your field of vision is only seen by the left eye and the same with the right. Only the center area is in the overlap seen by both eyes. With binoculars you adjust the position of the two telescopes specifically so that they provide each eye the same view in order to get binocular vision of what you're looking at. Rckrone (talk) 15:28, 11 December 2009 (UTC)[reply]
Surely the why-for is so that it is instantly clear to (most) people that the vision we are seeing on screen is (as if) 'through binoculars'. Binoculars definitely show just a single circle when used - though if you adjust the 'width' you can make it look more like a sideways number 8 (infinity sign?) too. I'd imagine it's a simple short-cut by film-crews to make it clear what we're supposed to be seeing, and as others note it takes away less of the view on screen than a circle would. 194.221.133.226 (talk) 15:32, 11 December 2009 (UTC)[reply]
To answer the question of what this is called in cinematography, if it is done with "soft" edges, as it invariably is, it is called a "vignette" (and the technique is called "vignetting"). If the edges are hard it is simply called mask/masking. SpinningSpark 16:06, 11 December 2009 (UTC)[reply]
The problem with the game/movie thing is that with real binoculars you can actually see depth, and so know the difference between a binocular image and a spyglass image. In a game/movie you cannot see depth, so you can't tell the difference anyway. So you're already going to have to sacrifice the biggest "reality" aspect of binoculars for your game (seeing in 3-D) just by the nature of it (unless you are working on something a bit more cool than I am guessing). At some level, the amount of "reality" is somewhat arbitrary, given how much you are already abstracting. --Mr.98 (talk) 16:15, 11 December 2009 (UTC)[reply]
You might try showing a movie clip of an accurate depiction, the only one i can recall is from The Missouri Breaks.—eric 17:08, 11 December 2009 (UTC)[reply]
This problem is as challenging as asking somebody to describe the "shape" of their field of view without binoculars. Nimur (talk) 17:21, 11 December 2009 (UTC)[reply]
Well, I learned from this pseudo-educational host segment from MST3k that they're called "Gobos" or "Camera Masks". WP's articles don't seem to fully back this up, but Google shows me that "gobo" is at least sometimes used in this context. Google also seems to suggest "binocular masks". APL (talk) 17:54, 11 December 2009 (UTC)[reply]
As a compromise, when you switch to the binocular view, you could start with two separate circular images (arranged like a MasterCard logo, with the two images identical) that converge to a single circular field. The action should be sort of irregular, with some jerkiness and overshoot. This would mimic someone picking up binoculars and adjusting the interpupillary distance to their eyes, and I think it would clearly depict "binoculars" to the user. -- Coneslayer (talk) 18:57, 11 December 2009 (UTC)[reply]
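For what it's worth, here is a minimal sketch of that convergence idea (the decay and oscillation constants are just guesses chosen to give a little overshoot, not anything measured):

 (* normalised separation between the two mask centres: 1 at t = 0, decaying to 0 with a little overshoot *)
 sep[t_] := Exp[-4 t] Cos[8 t]
 (* centres of the two circular masks of radius r at time t *)
 centres[t_, r_] := {{-r sep[t]/2, 0}, {r sep[t]/2, 0}}
 (* quick preview of the two circles settling into one *)
 Manipulate[Graphics[Disk[#, 1] & /@ centres[t, 1], PlotRange -> 2], {t, 0, 1}]

Feeding something like sep[t] into whatever masking is already in place should give the settle-with-overshoot feel without having to hand-animate it.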

What a touching lament "sadly, I don't own a pair to try" so close to Christmas.... Cuddlyable3 (talk) 20:58, 11 December 2009 (UTC)[reply]

(No,no,no! Do NOT confuse Santa. I carefully wrote my Xmas list (in my best handwriting) and shoved it up the chimney already - what I want is a Kindle - I don't own binoculars because I neither want nor need binoculars! I have not been naughty this year - at all - ever.  :-) SteveBaker (talk) 22:44, 11 December 2009 (UTC)[reply]
What never? If you ask for a pair of binoculars Santa won't think of an unauthorised verse sung to the melody of the Eton Boating Song
It was Christmas eve at the harem
All of the eunuchs were there
watching the beautiful houris
combing their pubic hair
Just then from the regal apartment
the sexy old sultan calls
What do you want for Christmas boys?
The eunuchs as one shouted Balls.
Cuddlyable3 (talk) 00:02, 13 December 2009 (UTC)[reply]
Coneslayer's idea sounds good. I could also suggest that at the edges of the circles you blur and darken things a bit. You could even have the whole picture go out of and back into focus a couple of times, increasing the chance of missing something in the view. And don't forget to ray trace the internal reflections off the lenses if you are looking near something very bright! And there would be a little bit of unsteady shaking. Graeme Bartlett (talk) 21:54, 11 December 2009 (UTC)[reply]
I like the idea of doing a quick bit of faked "adjustment" - but maybe only the first time you use the binoculars in the game...it wouldn't be cool to do that every time they picked them up though! There is a fine line between "cool effect" and "bloody annoying"! SteveBaker (talk) 22:44, 11 December 2009 (UTC)[reply]
Although if you are going for realism, "bloody annoying" may be more realistic than the "cool effect". Ideally (i.e. for maximum realism) any time the binoculars are used straight out of their case (i.e. from folded), the faked adjustment should happen. Also there should be a little bit of focusing delay when shifting views to objects at different distances - similar to what Graeme suggests. Reading about what you do (cool job, by the way), simulating this usage delay could be fairly critical for some stuff. For example, if a police officer gets in the habit of keeping multiple people in view just by swinging their binoculars around, they are going to be unprepared for the need to constantly refocus when doing this in the real world. I speak from birdwatching experience: keeping multiple subjects at different distances "under surveillance" is quite challenging. Putting focus delay into the simulation would train people to consider carefully how many subjects they can watch at once, an annoying but important real world limitation. Apologies if you are way ahead of me on this one or I am getting too pedantic. 131.111.8.99 (talk) 01:12, 12 December 2009 (UTC)[reply]
We have 'subject matter experts' who we consult about small details like this. It's always interesting to discover what features matter and what don't. For example - play one of the very latest combat games: Call of Duty: Modern Warfare 2 - as you're walking around in "first person" mode - look for your shadow. You'll be surprised to find that while every other character in the game casts a shadow - you don't! It may not be immediately obvious why this is important - but if you are hiding around the corner of a building - hoping that the bad guy doesn't know you're there - in the real world, your shadow can give you away...if you have a shadow! You'd like someone who is trying to do that to pay attention to where their shadow is falling - so CoD isn't a great training aid (for lots of reasons - not just this). It's actually surprisingly difficult to produce a good shadow in a first person game because the animation of your avatar is hard to get right when you're doing joystick/mouse-driven actions while the pre-scripted "run", "walk", "shoot" animations are playing...so they just don't bother! We don't have that luxury to just ignore things when they are difficult. SteveBaker (talk) 04:10, 12 December 2009 (UTC)[reply]

Need a reliable value for the sun's illumination under variable Earth conditions

Firstly, I am after approximate ranges for the sun's illumination on earth and in orbit under various conditions (clear day, overcast, etc.) for the purpose of calibrating CG lights in a 3d computer application. The confusion has arisen as a result of trying to research a good value for an overcast day; I tried using Google for that and encountered wildly varying values with some surprising implications. I have already looked at the Wiki article on Sunlight for these values. The opening section of the article gives an (uncited) value for the sun's illumination of 100,000 lux at the Earth's surface. Now, this sounds all well and good and would match the default ranges for sun objects in the software package I am using. However, I then ran into, for example, these values on this page; the value given for 'direct sunlight' is indeed 100,000 lux, but it then describes 'full daylight' as 10,000 lux, and an overcast day as 1000 lux. This page gives the same values without specifying the 10,000 lux value for 'full daylight'. The implications of the first page are that, if direct sunlight is 100,000 lux and 'full daylight' 10,000 lux, then the Earth's atmosphere absorbs or reflects 9/10ths of the sun's light on a clear day (!), that an astronaut in orbit would be receiving 10 times the amount of light as someone on the Earth's surface, and that a cloudy overcast day provides around 1/10th or 1/100th the illumination of a sunny day (depending on which figures you look at), and 1/100th the illumination in orbit. All this surely sounds wrong to me. Firstly, I was under the impression that, as the Wiki article suggests, the 100,000 figure is roughly correct for Earth's surface on a clear day, although I am a bit puzzled by the figure of 1/100th that for an overcast day - that sounds a bit too dark. Similarly, I would have assumed that the amount of visible light absorbed or reflected by the atmosphere is comparatively slight (given, for example, Earth's average albedo of around 0.31, i.e. 3/10ths), and that lighting conditions in orbit aren't too far off those at Earth's surface on a clear day - brighter, yes, but not that bright. Please help if you can; I seriously need some reliable values for these conditions, and with pages contradicting each other I don't know which values I'm supposed to follow. Thanks in advance for any helpful answers. LSmok3 Talk 17:14, 11 December 2009 (UTC)[reply]

Insolation is usually measured in watts per square meter. The solar constant is pretty clearly measured at about 1370 watts per square meter at the top of the atmosphere (reliable sources are cited in our article); depending on conditions, anywhere from roughly 100 to 1000 watts per square meter reach the Earth's surface. But since you need an illuminance and not an incident power (irradiance), you might want to look at lux and see the conversion procedure. Nimur (talk) 17:26, 11 December 2009 (UTC)[reply]
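As a very rough sketch of that conversion (the luminous efficacy figure below is an assumed ballpark of about 100 lm/W for daylight, not a value from the sources above):

 (* crude irradiance-to-illuminance conversion, assuming ~100 lm/W luminous efficacy for daylight *)
 efficacy = 100;              (* lm/W, assumed ballpark for the solar spectrum at the surface *)
 luxClear = efficacy*1000     (* ~100,000 lux for ~1000 W/m^2 reaching the ground on a clear day *)
 luxOvercast = efficacy*100   (* ~10,000 lux for ~100 W/m^2 under heavy overcast *)

That at least shows how the commonly quoted ~100,000 lux and ~10,000 lux figures can both fall out of the watts-per-square-metre numbers.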
The albedo of cloud is between 0.5 and 0.8 (see albedo). So we'd expect the total amount of sunlight reflected back out on a cloudy day to be 50% to 80% of the total sunlight. All that isn't reflected is either absorbed by the cloud - or scattered around so that it ends up down here on the ground as ambient sky-light - so, discounting absorption by the cloud, you'd expect the ratio of light on a clear day to an overcast day to be between 2:1 and 5:1. With multiple layers of cloud and some idea of what is being absorbed, I could easily believe the 1/10th-of-a-sunny-day illumination. The 1/100th value seems more dubious...perhaps they are talking about the light coming from the direction of the sun itself - neglecting the scattered light...that could easily explain the 1/100th number. For patchy cloud conditions between those two extremes, it all depends on whether the sun happens to be behind a cloud or not from wherever you happen to be standing.
Some of the confusion may arise from the fact that the sky isn't black. If you measure the light coming from the direction of the sun alone - you get a much lower value than if you average it over the entire sky. — Preceding unsigned comment added by SteveBaker (talkcontribs)
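Spelling that back-of-the-envelope reasoning out (still ignoring absorption by the cloud):

 (* fraction of sunlight transmitted through cloud, ignoring absorption, for an albedo of 0.5 to 0.8 *)
 transmitted = 1 - {0.5, 0.8}       (* {0.5, 0.2} *)
 clearToOvercast = 1/transmitted    (* {2., 5.}: a clear-day to overcast ratio of roughly 2:1 to 5:1 *)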
Okay, thanks for the quick answers, but I'm afraid I'm still a bit lost. Firstly, the watts to lux conversion is a bit beyond me - I'm an artist, not a mathematician: the lux page directs me to the page for the luminosity function, which is required to make any such calculation, and that specifies variables I don't have, like the actual wavelength. Secondly, all I'm after are guideline ranges and averages for a few conditions - Earth orbit, clear day at surface (noon - sunrise/set), and fully overcast (noon - sunrise/set) - from which I can approximate all I need. The point about ignoring cloud absorption seems a bit odd, as that is a determining factor in surface illumination based on cloud cover, i.e. exactly what makes an overcast day (and remember, I was only quoting the average albedo of the Earth). I also seem to have run into the same trouble here as I encountered on other pages, namely a lack of any clear definition of terms and contradictory figures. Helpfully, the lux page gives various values for different conditions, but makes a distinction between 'full daylight' and 'direct sunlight' (10-25k and 32-130k respectively) and gives the value of 1000 lux for overcast: without any actual definition, I would assume that 'full daylight' refers to the brightest Earth-surface conditions, and 'direct sunlight' to unfiltered, non-atmospheric, i.e. orbital conditions. This matches the pages I had found which caused me the confusion in the first place, and suggests that the sun lights in the application I'm using default to outer-space conditions (which is a bit silly; and to quote from the documentation: "Intensity: The intensity of the sunlight. The color swatch to the right of the spinner opens the Color Selector to set the color of the light. Typical intensities in a clear sky are around 90,000 lux."). However, the daylight value links to the Wiki page on Daylight, which also provides guideline values; it doesn't use the term 'direct sunlight', but gives similar values for 'bright sunlight' and 'brightest sunlight', and gives the value of 10,000 - 25,000 lux for an overcast day: 10-25 times higher than the lux article. So, what am I supposed to follow? LSmok3 Talk 18:46, 11 December 2009 (UTC)[reply]
Among the problems in converting solar radiant flux into an equivalent "candela" is the difference between specular and diffuse lighting. "Candela" really only applies to an approximately point-source light - but sunlight, whether the day is cloudy or clear, is illuminating the ground diffusely - in other words, the entire sky "glows" and lights the object. So, trying to model the lighting as a single point-source (the sun) is obviously flawed - there is no equivalent brightness or luminosity for the sun which would result in equivalent lighting conditions. In computer graphics, this is usually dealt with by applying an ambient lighting condition - a "uniform" illumination from a particular direction. Alternately, you can model the sun as a point-source at a great distance, and then model the atmospheric diffusion - but that will be very computationally intense, and will yield about the same visual effect as an ambient lighting source. Nimur (talk) 19:58, 11 December 2009 (UTC)[reply]
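A minimal sketch of that kind of two-term setup (the lux numbers below are placeholders for illustration, not calibrated values):

 (* toy daylight model: a flat ambient "sky" term plus a directional "sun" term *)
 skyLux = 15000; sunLux = 90000;               (* placeholder illuminances *)
 illum[normal_, sunDir_] := skyLux + sunLux*Max[0, normal.sunDir]
 illum[{0, 0, 1}, Normalize[{0.3, 0, 1}]]      (* horizontal surface, sun a little off the zenith *)

The directional term falls to zero for surfaces facing away from the sun, while the ambient term is applied everywhere - the usual cheap stand-in for diffuse sky light.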
Yes, I am well aware of that issue. And no, I'm not looking for the candle power of the Sun. Indeed, in modeling a daylight system (including the IES Sun and Sky system I quoted the documentation for above), two lights are generally used, one to represent the contribution from the sun as a point source (ideally Photometric) light (although in recent years that's tended to be an area light purely to allow for the modelling of realistic shadows due to scale and diffusion), and another to represent diffuse scattered 'fill' from the sky. But that doesn't mean I don't need realistic values, especially given that the systems may mix photometric with non-physical lighting, and with global photometric controls the relationships between all lights in a scene are affected (for example, street lighting at dusk would feature a sun and sky system in combination with artificial lighting, so the relationships must be correct). (And in fact, the reason I was originally looking for realistic lux values for an overcast day - the whole thing - was to calibrate just such a fill to realistic proportions, bearing in mind that it may also be photometric; again, if the relationships aren't accurate, not only is it bad practice, but problems would also occur if, say, I was animating a shift from clear to overcast conditions.) Also, don't assume I am only interested in Earth surface figures; as I already said, I also need realistic values for space scenes. So are there no reliable values that actually agree for sunlight in lux? That seems a bit strange. It should be possible for just about anyone to get such figures by going outside under the given conditions armed with a light meter set to give values in lux and measuring at exposed ground - I'd do it myself but I don't own one, and can't help assuming those values must surely be well-established; for some reason my documentation suggests values that Wikipedia claims are burn-your-retina space lighting and the reason why astronauts wear mirror-visors, and no one seems able to agree on overcast values. LSmok3 Talk 20:50, 11 December 2009 (UTC)[reply]
Okay, how about Reference luminous solar constant and solar luminance for illuminance calculations (J. Solar Energy, 2005), or Determination of vertical daylight illuminance under non-overcast sky conditions (Building & Environment, 2009)? Table 2 in the first article specifies a lot of parameters, including a solar luminance of about 1.9×10^9 cd/m^2. For the very detailed cases you are considering, you may want to read the entire articles and check whether their assumptions differ from yours. There is an entire section devoted to the assumptions made and the impact these have. Nimur (talk) 22:20, 11 December 2009 (UTC)[reply]
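As a rough sanity check on that number (assuming the sun subtends about 0.53°, i.e. a solid angle of roughly 6.8×10^-5 sr): the illuminance from a small source is approximately its luminance times the solid angle it subtends, which lands close to the ~100,000 lux figure being discussed.

 (* direct-sun illuminance ≈ solar luminance × solid angle subtended by the sun *)
 lSun = 1.9*10^9;                   (* cd/m^2, the Table 2 value mentioned above *)
 omegaSun = Pi*(0.266 Degree)^2;    (* ~6.8*10^-5 sr for a disc about 0.53 degrees across *)
 lSun*omegaSun                      (* ~1.3*10^5 lux on a surface facing the sun *)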

Well, I'm afraid the problem with that is the principle of having to pay around $60 for the information, which as I say I would have thought would be pretty well-established. The whole point of Wiki is that information is free, and as it is, I don't even own a credit card or have a paypal account to buy it, especially without being sure it actually contains what I need. Having said that, I did find this link through Google to the same site, which suggests that the values quoted in my software are correct and the ones on Wiki are wrong. So never mind. LSmok3 Talk 07:43, 12 December 2009 (UTC)[reply]

If you don't have access to the online journals (which are not free), then you can try requesting hard copies of those journals through a library. Unfortunately, not all information is published under a free license, so immediate access to download over the internet is not always possible. Alternately, you can ask your school or library to purchase a web subscription to those journals so you can have immediate access in the future. Nimur (talk) 20:32, 12 December 2009 (UTC)[reply]

trialkyloxonium salts and beta elimination

Most of these are unstable, right? My question is, what happens if you have no beta proton on the oxonium salt? It can't possibly eliminate via the Hofmann mechanism? Will it decompose into ether + a carbocation anyway?

Also, can you undergo transalkylation with ethers and an alkyl halide via SN2?

Are carbamates prone to beta elimination?John Riemann Soong (talk) 22:52, 11 December 2009 (UTC)[reply]

Triethyloxonium tetrafluoroborate is pretty stable. Graeme Bartlett (talk) 23:47, 11 December 2009 (UTC)[reply]
How does it decompose in water? Does water act like a base? John Riemann Soong (talk) 02:01, 12 December 2009 (UTC)[reply]
It would appear that water is acting as a Lewis base. If you do the electron pushing for this reaction:
  • [(CH3CH2)3O]+ BF4− + H2O → (CH3CH2)2O + CH3CH2OH + HBF4
You can see pretty easily how an electron pair from water "grabs" an ethyl group from the trialkyloxonium and then loses a proton to the tetrafluoroborate anion. --Jayron32 05:28, 12 December 2009 (UTC)[reply]

Can tetraalkyl ammonium salts undergo SN2 substitution with a nucleophile instead of beta-elimination? An alkene product seems to be favoured for ammonium salts, but an alkane-alcohol product for oxonium salts? John Riemann Soong (talk) 06:04, 12 December 2009 (UTC)[reply]

Both types of reactions are possible. Depends (as usual) on sterics and electronics of the tetraalkylammonium structure and sterics and electronics of the "other" reactant, solvent effects, etc. All the usual concerns for deciding between these two competing reaction modes. DMacks (talk) 07:50, 13 December 2009 (UTC)[reply]

December 12

Hedgehog exercise

This BBC News article discusses an obese hedgehog.

To exercise him, the veterinarians have put the hedgehog in a bathtub to swim around as part of his weight-loss regimen. Is this standard procedure for exercising a hedgehog? I would have expected a "hamster-wheel" would be more common. Anyway, I know there's a few hedgehog experts on the desk, so I figured I'd solicit their input on this unusual exercise regimen. Nimur (talk) 00:39, 12 December 2009 (UTC)[reply]

Domesticated hedgehog may have some useful information. There are also external links to follow, you may be able to find more by poking around some of those. --Jayron32 01:19, 12 December 2009 (UTC)[reply]
A standard hamster/mouse wheel is unacceptable for a hedgehog. They often step into the gaps between the spokes and suffer injury (including broken toes). Instead, a hedgehog wheel is commonly referred to as a "bucket wheel" because it is like a shallow bucket placed on its side. Many people actually make their own from buckets. On average, a hedgehog will run about 3 miles each night. The bathtub regimen is not extremely common. Many hedgehogs are afraid of water. Those that like water will spend hours swimming around. Those that are scared of it do nothing except fight to get out of the water (I've got some pretty good claw marks from scared hedgehogs clawing right up my arm to my shoulder). Some hedgehogs are scared of water and won't run in a wheel. For them, they need exploring activities. It is common to hide their food in a lot of hard-to-reach places. Then, they have to run around and try to figure out how to get their food. All in all - there isn't such a thing as a single hedgehog personality. Like all other animals, each has its own personality. -- kainaw 01:36, 12 December 2009 (UTC)[reply]
Ah. The hazard of getting feet trapped in wheels seems like a motive for finding alternative exercise ideas. From the news story, there are several other news articles linked about obese hedgehogs, [6], [7]. Is this a common problem, or is it just getting disproportionate media attention because of the "weird" factor? Nimur (talk) 01:47, 12 December 2009 (UTC)[reply]
In captivity, hedgehogs regularly become obese. Hypertension, diabetes, and hypercholesterolemia are common. This leads to heart attacks, stroke, liver failure, and high rates of cancer. The issue isn't simply captivity. It is that the hedgehogs get plenty of food (most of it rather unhealthy food) and very little exercise. In the wild, hedgehogs roam many miles searching for food every night. By comparison, imagine if your main purpose each day was to run a marathon to get a good meal. Sure - you have all day to run the marathon, but the meal won't do much to offset the calories burned getting to it. There is another issue - winter. Hedgehogs are said to hibernate. They don't truly hibernate, but they do spend most of winter holed up and waiting for the weather to get warmer. To last a long time, they need to fatten up. So, this is the time of year that European hedgehogs tend to get rather plump. -- kainaw 02:24, 12 December 2009 (UTC)[reply]

nitromethane

Is the pKa given for this substance the value of nitromethane as an acid, or of its conjugate acid (i.e. nitromethane acting as a base)? I really can't see an acidic proton in nitromethane ... unless the nitromethide carbanion is really that stable? John Riemann Soong (talk) 02:19, 12 December 2009 (UTC)[reply]

It is the pKa of losing a C-H hydrogen. If you read the article on nitromethane, the acidity of the compound and its uses specifically because of this acidity are discussed in the Uses section. The Nitroaldol reaction makes use of this acidity of nitromethane. --Jayron32 05:23, 12 December 2009 (UTC)[reply]

How was the international prototype for the metre made?

How was the 1889 version of international prototype for the metre made? —Preceding unsigned comment added by 173.49.9.184 (talk) 04:06, 12 December 2009 (UTC)[reply]

The French version of the article explains this in detail. In short, they kept making copies of the 1799 prototype until they got one whose length could not be distinguished from that of the original. --Heron (talk) 15:06, 12 December 2009 (UTC)[reply]
Thanks. --173.49.9.184 (talk) 15:53, 12 December 2009 (UTC)[reply]

"Bleeding like a stuck pig"

Do hogs bleed more easily than other animals? Or is this phrase a vague allusion to something literary or historic? 24.93.116.128 (talk) 05:14, 12 December 2009 (UTC)[reply]

It refers to the practice of Exsanguination in the slaughter of an animal. When you want to kill a pig, it is common practice to drain the blood as fast as possible from the meat, the article on Slaughterhouse describes the process in some detail. So when you "bleed like a stuck pig", it means you are bleeding like you are being drained of your blood in a slaughterhouse. --Jayron32 05:20, 12 December 2009 (UTC)[reply]
There is a passage in Jude the Obscure in which a character (Jude's unsympathetic wife) instructs Jude that the swine's blood vessel should be lightly nicked, so that the blood would drain slowly, resulting in less blood in the meat. Jude cannot stand to see the animal suffer and kills it quickly, enraging his wife. --Trovatore (talk) 23:22, 12 December 2009 (UTC)[reply]
There's some weird sex stuff going on in that book. There's the scene towards the beginning where the chicks who are butchering the pig pelt him with the pig's penis. Thomas Hardy was not a stable dude. --Jayron32 05:02, 13 December 2009 (UTC)[reply]
Due to the sizes of the animals, it was more common to send cattle to a slaughterhouse for butchering, while you could do smaller animals like chickens, sheep and even pigs yourself at home. So the "average joe" may be more familiar with seeing stuck pigs. It may also have to do with the spiked club once used to kill pigs. The only people I know who have home-butchered cows used bullets for the first step. Rmhermen (talk) 14:26, 12 December 2009 (UTC)[reply]
According to Jewish law and to Muslim law, animals must be slaughtered by a single cut to the throat while the animal is still conscious; see this article. Cuddlyable3 (talk) 23:14, 12 December 2009 (UTC)[reply]

Enzymes

I was under the impression that enzymes simply increase the rate of a reaction (my college bio textbook says this). However, denaturing certain enzymes can result in certain products not being produced. But if enzymes only increase the rate, shouldn't the products still be formed without the enzyme, just at a slower rate? So I'm a bit confused as to the nature of enzymes. ScienceApe (talk) 08:32, 12 December 2009 (UTC)[reply]

Yes, sometimes the uncatalysed reaction is just REALLY slow, so slow that in a biological system it basically doesn't occur. And also, some enzymes couple two reactions - one thermodynamically favourable with one unfavourable - in essence driving the second reaction against the direction it would proceed without an enzyme, and using the first reaction to provide the "fuel" to do this. Aaadddaaammm (talk) 09:33, 12 December 2009 (UTC)[reply]
To expand on what Aaadddaaammm is saying, the uncatalyzed reaction may occur so slowly as to take thousands or millions of years. So essentially, enzymes do "make" reactions occur which would otherwise be impossible (or, at least, far too slow to be useful in a living system). --Jayron32 15:16, 12 December 2009 (UTC)[reply]
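To put a very rough number on "so slowly as to take thousands or millions of years" (the figures here are made up purely for illustration), the Arrhenius relation says that lowering the activation energy by 60 kJ/mol at body temperature multiplies the rate by roughly ten billion:

 (* Arrhenius-style illustration with made-up numbers: rate enhancement from lowering the activation energy *)
 rGas = 8.314;            (* J/(mol K), gas constant *)
 temp = 310;              (* K, roughly body temperature *)
 dEa = 60*10^3;           (* J/mol, assumed reduction in activation energy *)
 Exp[dEa/(rGas*temp)]     (* ~1.3*10^10-fold faster - seconds instead of centuries *)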

I see, thank you. But what about something like RNA polymerase for example. From what I understand it's the only way to make RNA. Can RNA be produced without RNA polymerase, just very slowly? ScienceApe (talk) 17:13, 12 December 2009 (UTC)[reply]

RNA polymerase is special stuff in that it takes a template DNA strand, and uses it to specifically reduce the activation energy barrier to sequentially adding an appropriate RNA base (A, G, C, or U) to the growing RNA strand. (That's a bit of a simplification, because the addition is a multistep process which includes a host of other enzymes, too.) For just random concatenation of RNA bases, you can drive the process using nothing more complicated than ultraviolet light. (See, for example, RNA world hypothesis.) Of course, once you start to have lots of complicated RNA floating around, it can do some simply marvellous stuff — including catalyze the formation of more RNA polymers. Modern RNA still forms a number of important enzymes, collectively called ribozymes. TenOfAllTrades(talk) 17:46, 12 December 2009 (UTC) (expanded, TenOfAllTrades(talk) 00:26, 13 December 2009 (UTC))[reply]
So you can transcribe RNA using just ultraviolet light? ScienceApe (talk) 01:21, 13 December 2009 (UTC)[reply]
"Transcribe" implies that you can control the order of bases in the created RNA, and that's not what TenOfAllTrades said. With UV light you just get some random RNA (think monkeys with typewriters). If you're really lucky it will turn out to be a ribozyme that does something interesting, and if you're really really lucky, "something interesting" will involve catalyzing transcription of RNA. Once you get there, you're golden. –Henning Makholm (talk) 02:04, 13 December 2009 (UTC)[reply]
So then for all intents and purposes, RNA polymerase, an enzyme, makes messenger RNA. ScienceApe (talk) 07:06, 13 December 2009 (UTC)[reply]
Yes. Both this example and the fact provided by Adam above hit on extremely important concepts in biochemistry. We always say that enzymes accelerate a particular reaction, but when you are trying to consider what happens in the absence of the enzyme, you have to think about all the other reactions that can happen. In the case of RNA polymerization, the mRNA product of transcription is one possible product of the polymerization of N ribonucleotide triphosphates. There are, in fact, 4^N conceivable products. In the case of an ATP hydrolysis driven reaction, a conceivable set of products is ADP plus inorganic phosphate, with the enzyme's substrate left unaltered. An enzyme "makes a reaction happen" by lowering the activation energy for one particular reaction amongst the many reactions possible with a given set of reactants. Someguy1221 (talk) 07:19, 13 December 2009 (UTC)[reply]

Russian rocket failure

Is this for real? http://www.nzherald.co.nz/world/news/video.cfm?c_id=2&gal_cid=2&gallery_id=108562 Aaadddaaammm (talk) 09:38, 12 December 2009 (UTC)[reply]

Yes. The spiral was seen by hundreds of people in northern Europe. The most likely explanation is a Russian ICBM test where the rocket went out of control. spaceweather.com has a write up on it, and the rocket plume of the boost phase of the rocket is visible and normal in at least one photo of the spiral. --121.127.200.51 (talk) 10:10, 12 December 2009 (UTC)[reply]
There was a discussion on this earlier. We have an article 2009 Norwegian spiral anomaly Nil Einne (talk) 13:03, 12 December 2009 (UTC)[reply]
And just as a note, a "spiral in the sky" seems odd if you haven't seen something like that before, but many failed missile tests look like that and produce distinctive spiral smoke shapes. E.g. [8] and [9]—when you have one part of a missile spewing out heat and it gets off-center, you easily get spiral-like activity. What's interesting in the Norway case is how high it was and how perfect the period of the spiral looks, but you'll notice in the videos that most are showing a clip of just a second or two in length—it disperses immediately afterwards. It is fairly close to what you'd expect it to look like if, say, the third stage of an SLBM failed to detach from the main body and spun it around. --Mr.98 (talk) 14:18, 12 December 2009 (UTC)[reply]
And just another note... another possibility that has not been discussed seriously, but doesn't quite fall into "conspiracy theory" territory, is that this is a test or failure of a different type of weapon. In their book Nuclear Express, Thomas C. Reed and Danny B. Stillman (the latter of whom was a Los Alamos scientist), report on a mysterious, not-fully-understood weapon that the USSR and China had both been developing, that they term "domes of light". Some of the description from their book, and the photos they reproduced, are visible here. The spiral on the Norwegian thing says "missile test gone awry" to me, but I remember reading about the "domes of light" sometime back and finding them pretty odd as well. Just putting that out there! --Mr.98 (talk) 14:30, 12 December 2009 (UTC)[reply]
That sounds vaguely like the conspiracy theories surrounding HAARP and its Russian counterpart SURA (neither of which is at all near Norway or the White Sea). But then what use is a sci-fi death ray if you can't blow up far away things (like test missiles over the White Sea). 87.115.47.74 (talk) 00:08, 13 December 2009 (UTC)[reply]

Cavity Preparations-Dentistry

What are the recent modifications made in the outline form of class 1 cavities (with faciolingual extensions) and class 2 cavities for amalgam restorations? <ref>Fundamentals in cavity preparations</ref>—Preceding unsigned comment added by DOC PANU (talkcontribs)

Is this homework? Fences&Windows 14:41, 12 December 2009 (UTC)[reply]
You may find the information here. Cuddlyable3 (talk) 23:03, 12 December 2009 (UTC)[reply]
Your question is very vague -- could you be more specific, please? DRosenbach (Talk | Contribs) 14:12, 13 December 2009 (UTC)[reply]

no transmission for a car?

Why does a car need a transmission and an airplane doesn't? I'm not sure, but it looks like propeller aircraft have propellers somehow attached to the engines' crankshaft; would it be possible to do the same with a car so engine speed would be directly proportional to car speed? 70.144.137.239 (talk) 14:36, 12 December 2009 (UTC)[reply]

A modern propeller-driven plane will generally have a variable-pitch propeller - and that is (in effect) a continuously variable transmission - when the propeller blades are parallel to the circle in which they travel, the blades generate almost zero thrust - as you increase the pitch of the blades, the 'gear ratio' gets higher - suitable for flying at higher speeds. Prop planes that don't have that are essentially "stuck in one gear" - which is a compromise between performance and idle. But aircraft don't have much of a problem with stop-and-go traffic - once they are moving, their power needs span a fairly small band - and the range of RPMs they need isn't that great. Also, you can't stall a plane's engine in the way that you can stall a car engine. If you bring your car to a dead stop using the brakes in a high gear - the engine RPM will drop so low that the engine will be unable to keep running. In an airplane, on the ground, the propeller can continue to rotate when the brakes are on and the plane is stationary - so you don't need a clutch or a neutral gear either.
Having said that - I recall that at least one design of Russian or maybe Czech fighter from the WWII era did actually have a gearbox. Sadly, I can't recall the model number - so no link.
SteveBaker (talk) 15:21, 12 December 2009 (UTC)[reply]
As far as I know, cars with electric motors (e.g. the Tesla Roadster) can actually go all the way down to zero RPM; they don't have the stalling or motor-starting issues that regular engines have, which would otherwise necessitate a clutch, torque converter or gearbox. In principle they could couple the motor directly onto the driveshaft (especially in the case of the Tesla, which has a 1-speed gearbox). However there are practical considerations and advantages to having at least a neutral gear (free-wheeling downhill without excessive engine braking is one) as well as several gears. As Steve points out, a plane or boat operates in a VERY limited RPM range. If you had to stick with only 1 gear in an (electric) car it would simultaneously need to be short enough to pull away from standstill (while loaded to its maximum allowable weight) and at the same time long enough to cruise at top speed within the motor's operating range (typically 12,000 RPM for an electric motor). The Tesla can do this because it is a light sports car with no boot space and only 2 seats. The max. weight is not excessive so the gearing isn't TOO compromised on the "short" side. Zunaid 16:04, 12 December 2009 (UTC)[reply]
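To illustrate the single-ratio compromise with a made-up figure: with one fixed ratio, motor (or engine) rpm scales directly with road speed, so covering both crawling speed and motorway speed means spanning a very wide rpm range.

 (* wheel rpm vs road speed for an assumed 0.6 m tyre diameter (made-up figure) *)
 wheelRPM[kmh_] := (kmh*1000/60)/(Pi*0.6)
 N[{wheelRPM[10], wheelRPM[120]}]    (* ~{88, 1061} rpm: a 12x spread that one fixed ratio must cover *)

A petrol engine whose useful band runs from roughly 1,000 to 6,000 rpm can't cover that 12:1 spread with a single ratio and still pull away from rest, whereas an electric motor that is happy anywhere from 0 to 12,000 rpm can.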
Our turbofan article has a very brief section discussing gearboxes for aircraft engines. The gearing is very different from a car transmission. Nimur (talk) 17:31, 12 December 2009 (UTC)[reply]
Yeah - some planes do have gearing - but it's not a shiftable gearbox - it's just a way to get the engine RPM down to a reasonable fan/propeller speed. Helicopters have much fancier gearboxes - but again, they aren't shiftable - they are a constant ratio. SteveBaker (talk) 22:16, 12 December 2009 (UTC)[reply]

I'll digress into trains and talk about diesel ones. Most of these are actually diesel-electric: the diesel engine drives a generator, at a narrow range of speeds, and electric motors drive the wheels. No gearbox needed since the electric motors can handle the full range of speeds needed. But there have been some diesel-hydraulic trains, based on a torque converter like in an automatic-transmission car; the Budd Rail Diesel Cars once common on secondary passenger services in Canada and the US are one example. And there have even been "diesel-mechanical" trains with a gearbox; these used to be common on secondary passenger routes in Britain. --Anonymous, 20:47 UTC, December 12, 2009.

Yeah - electric motors have a very flat torque-versus-RPM curve - gasoline and diesel engines typically don't. Hence the need for a gearbox to get the gasoline engine to run at the optimum RPM to get the best torque (or the lowest fuel consumption or whatever) over a range of vehicle speeds. With an electric motor, it's not really necessary to shift gears because the torque output is pretty much the same no matter the RPM. Having said that - some electric cars do use a simple two-speed gearbox with a 'starting gear' and a 'running gear'. Hybrid cars (and diesel-electric railroad locomotives) take advantage of that by running the internal-combustion engine at its absolute best RPM - using the engine to generate electricity - and using the electricity to drive electric motors. That's a neat trick - and it's how hybrid cars get such good gas mileage. But you just don't need that range of RPMs in a low-cost light aircraft. High-end prop planes use that variable-pitch prop trick - so they can alter the torque demands at a wide range of speeds without altering the engine RPM by more than it can cope with. SteveBaker (talk) 22:16, 12 December 2009 (UTC)[reply]

What happens when a cold-blooded animal gets an infection.

When warm-blooded animals (well, humans, dogs, etc at least) get an infection - their temperature goes up in a fever as their bodies try to kill off the bacteria/virus by making things too hot for them.

How do cold-blooded animals cope under these circumstances? Is there evidence that they'll bask in the sun for longer in an effort to create a hotter situation?

SteveBaker (talk) 15:26, 12 December 2009 (UTC)[reply]

Apparently that is the case.[10] There is something called 'behavioral fever' in reptiles, birds, fish and amphibians, and it seems to be effective in increasing survival rates following infection. Mikenorton (talk) 17:09, 12 December 2009 (UTC)[reply]
It should be noted that the idea that fever is a useful survival technique has been questioned by some researchers. See Fever#Usefulness_of_fever, which notes that the notion that fever is beneficial to the organism, as described by SteveBaker, is not universally accepted. --Jayron32 17:12, 12 December 2009 (UTC)[reply]
As a further pedantic note, the main function of fever isn't to make things too hot for the infectious agent, it is to ramp up the activity of the immune system. Looie496 (talk) 17:59, 12 December 2009 (UTC)[reply]
Wow! Thanks for the link Mike. So they got snakes to want to be in a warmer place by injecting them with DEAD bacteria?! How did the snakes know they needed to do that if they didn't get sick? (Presuming they didn't get sick from dead bacteria.) That's an interesting result! I'm glad I asked. SteveBaker (talk) 22:05, 12 December 2009 (UTC)[reply]
Well the dead bacteria would still contain the antigens which would bind to immunoglobulins. --Mark PEA (talk) 22:12, 12 December 2009 (UTC)[reply]
I have not read that study, but I doubt it was due to immunoglobulins, which are generally produced by the adaptive immune system and therefore would only be generated in large amounts if the animal had had that infection before. More likely it's an innate response mediated by LPS and other PAMPs from the bacterial cell, binding to PRRs like TLR4. According to this (free) publication the sensing of bacterial PAMPs by PRRs is conserved throughout vertebrate evolution. -- Scray (talk) 00:36, 13 December 2009 (UTC)[reply]
On a more general level than which antigens bind to what, "getting sick" is generally not something the invading bacteria actively do. It is simply the body's reaction to something foreign being there. Feeling sick is how we perceive the immune system being activated to flush out the invaders before they become numerous enough to do damage of their own. It seems to work well for modifying our behavior to support the immune reaction; being miserable tends to make the patient lie down, bury himself in blankets and generally devote as much energy as possible to fighting the infection (as opposed to things that can wait, such as locomotion, gathering food, or preparing for winter). Snakes probably feel it similarly. –Henning Makholm (talk) 01:10, 13 December 2009 (UTC)[reply]

M&M

Why are there two "m"s in Mmgy? SpinningSpark 16:48, 12 December 2009 (UTC)[reply]

According to our article on the barrel, where a million barrels is sometimes rendered as 1MMbbls, this arises from the use of M to represent a thousand (derived from 'mille'), so MM stands for a thousand thousand, i.e. a million. Mikenorton (talk) 16:56, 12 December 2009 (UTC)[reply]
It's worth noting that neither Mmgy nor MMbbls is an SI unit, so they do not abide by the SI standard prefixes. Nimur (talk) 17:33, 12 December 2009 (UTC)[reply]
Thanks Mike, and thanks also to Nimur, although I did not think for one minute that this was an SI unit. SpinningSpark 18:57, 12 December 2009 (UTC)[reply]

Bleach Reactions

I've had a mouse (or mice) crawling around my room lately, and I didn't really notice until I saw one of the buggers the other night. There are mouse excretions all over the carpet, especially concentrated in the places where it could hide.

Is it safe to spray the carpets with bleach or will this have some kind of reaction with the (presumably small?) amounts of ammonia in the mouse waste and ultimately kill me?

Cheers, kp —Preceding unsigned comment added by 121.220.22.118 (talk) 06:28, 13 December 2009 (UTC)[reply]

Urine + bleach --> chloramines, which are toxic. Probably it will be a small amount if you're cleaning a small spot, but it's still not the approach you should take. Go for an enzymatic cleaner (you can find it in pet stores, as it's used to clean up litter boxes and dog and cat urine or poop stains.) It is more likely to remove the smell, and won't turn your carpet white. - Nunh-huh 07:29, 13 December 2009 (UTC)[reply]

Nitrogen Trichloride IED?

Would it be possible to build an improvised explosive device using urine and chlorine based cleaning products? —Preceding unsigned comment added by Trevor Loughlin (talkcontribs) 13:28, 13 December 2009 (UTC)[reply]

Autophagy in bacteria

In eukaryotes, specialized organelles - lysosomes - are used to mediate autophagy. Since bacteria lack organelles, they obviously cannot take that approach. Even so, are there processes comparable to autophagy that occur in bacteria? Specifically, can a bacterium that finds itself in a low-nutrient environment break down its own proteins and structures to provide a temporary emergency supply of energy? Dragons flight (talk) 13:45, 13 December 2009 (UTC)[reply]

As you said, prokaryotes do not possess membrane-bound organelles. It's a great question. DRosenbach (Talk | Contribs) 14:21, 13 December 2009 (UTC)[reply]

on average how many pounds of fish are under a square meter of ocean?

On average, how many pounds of fish are under a square meter of ocean? 85.181.144.117 (talk) 14:30, 13 December 2009 (UTC)[reply]

Ocean biomass has some aggregate total mass of marine fish for the entire ocean; you can divide that by an estimate of total ocean surface area. However, fish are not uniformly distributed, so the merit of this average value, at least constructed in such a simple way, is dubious. Nimur (talk) 16:18, 13 December 2009 (UTC)[reply]

Europe and Asia

Where does the boundary between Asia and Europe run in the gap between the Ural Mountains and the Ural River - where do the two meet each other? —Preceding unsigned comment added by 113.199.185.146 (talk) 16:06, 13 December 2009 (UTC)[reply]

There is no fixed boundary. See Europe-Asia border and Borders of the continents#Europe and Asia. PrimeHunter (talk) 16:16, 13 December 2009 (UTC)[reply]

Radon

How do I test for radon in water? I would prefer something I can find in a basic laboratory, if not something I could find in a store. I'm doing a science fair project on it. THX --Richard