Wikipedia:Reference desk/Science: Difference between revisions
Is this for real? http://www.nzherald.co.nz/world/news/video.cfm?c_id=2&gal_cid=2&gallery_id=108562 Aaadddaaammm (talk) 09:38, 12 December 2009 (UTC)
- Yes. The spiral was seen by hundreds of people in northern Europe. The most likely explanation is a Russian ICBM test where the rocket went out of control. spaceweather.com (http://spaceweather.com/archive.php?month=12&day=10&year=2009&view=view) has a write-up on it, and the rocket plume from the boost phase is visible, and looks normal, in at least one photo of the spiral. --121.127.200.51 (talk) 10:10, 12 December 2009 (UTC)
Amerigo and the year 1497
Revision as of 10:10, 12 December 2009, of the Science section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
December 7
Change in kinetic energy
It's pretty easy to show, classically, that the observed change in kinetic energy doesn't depend on the frame of reference of the observer: it follows directly from the equation relating the kinetic energy in a given reference frame to that in the center-of-mass reference frame. How can it be shown that the change in kinetic energy (or energy), relativistically, doesn't depend on the reference frame? Do you have to go into the mathematical details to derive this result, or is there an a priori way of coming to the same conclusion?
A second, related question: Does an object's potential energy change between reference frames? I would think it would, because an object's potential energy depends on the relative distance between two objects, which, by Lorentz contraction, changes with reference frame. —Preceding unsigned comment added by 173.179.59.66 (talk) 00:18, 7 December 2009 (UTC)
- What you said is not even true classically, let alone relativistically. Dauto (talk) 03:12, 7 December 2009 (UTC)
- It was (is) difficult to understand your question. That's probably why you got no answers. Also I don't think this field is well studied. Dauto: what is not true classically? Ariel. (talk) 09:16, 7 December 2009 (UTC)
- The OP said "It's pretty easy to show, classically, that the observed change in kinetic energy doesn't depend on the frame of reference". Well, that's not true. The change in kinetic energy DOES depend on the frame of reference. Dauto (talk) 14:46, 7 December 2009 (UTC)
- Why not, assuming that the system is closed? If the total kinetic energy in a certain reference frame is K, and the total kinetic energy in the center of mass reference frame is K_0, and the velocity of the center of mass relative to the reference frame in question is V, then K = K_0 + MV^2/2 (where M is the total mass). So ΔK = ΔK_0 (V won't change if it's a closed system). So the change in total kinetic energy will always be the same as the change in total kinetic energy of the center of mass, and thus the change in kinetic energy will always be the same. —Preceding unsigned comment added by 173.179.59.66 (talk) 17:45, 7 December 2009 (UTC)
- Why should we assume the system is closed? Dauto (talk) 18:37, 7 December 2009 (UTC)
- You aren't making sense. The kinetic energy in the centre of mass frame is zero. Or are you talking about the centre of mass of an n-body (n>1) system? It is easier to consider one object: If in my frame a 2kg object is moving at 1 m/s and speeds up to 2 m/s, its KE increases from 1J to 4J. If your frame is moving at 1 m/s relative to mine in the same direction as the object, then in your frame it starts off at rest (0J) and speeds up to 1 m/s (1J). In my frame the increase in energy was 3J; in yours it was 1J. As you can see, the change in kinetic energy is dependent on the frame of reference. (That's the classical view; the relativistic view is similar and reaches the same overall conclusion.) --Tango (talk) 19:11, 7 December 2009 (UTC)
- I see what he's saying. It's true if momentum is conserved. That said, kinetic energy isn't conserved, so it's not a closed system, and it would seem pointless to assume momentum is conserved. I guess it's useful when you're talking about energy changing between kinetic and other forms. For example, if there are two balls each with a mass of 1 kg moving towards each other at 1 m/s that stick when they hit, the amount of energy lost in the collision is 1J regardless of reference frame. 67.182.169.172 (talk) 01:46, 8 December 2009 (UTC)
- Yes, thank you...but how do you show this?
- Potential energy certainly changes. Consider two identical springs, one held highly compressed by a (massless) band. Now zip past them at relativistic speed; the total energy of each must scale by γ, and you must see some of that increase in the compressed spring as additional potential energy, because the other one has the same rest mass, thermal energy, and (rest-mass-derived) kinetic energy, and the difference in energy is larger than the compression energy in the springs' rest frame. --Tardis (talk) 15:38, 7 December 2009 (UTC)
Okay, it appears that I'm bad at making sense. 1) I wanted to say at the beginning that the system was closed, as in a collision; I just forgot to mention it. 2) It is a many-body system (i.e., as in a collision)... so basically, I'm asking if the change in kinetic energy in a collision (elastic or inelastic) is the same in all reference frames. —Preceding unsigned comment added by 173.179.59.66 (talk) 01:08, 8 December 2009 (UTC)
- Let's say you have a system of particles with masses m_1, ..., m_N. The kinetic energy is given by K = E - Σ_i m_i c^2, where E is the total energy. Now suppose that those particles suffer a series of collisions (which may be elastic or not), and after the collisions there are particles with masses m'_1, ..., m'_N'. The kinetic energy is given by K' = E' - Σ_j m'_j c^2, where E' is the total energy after the collisions. The change in kinetic energy is ΔK = K' - K = Σ_i m_i c^2 - Σ_j m'_j c^2, where the E' - E term cancels since the system is isolated and energy is conserved. Note how the final result depends only on the rest masses, which are independent of the reference frame used. I hope that helps a bit. Dauto (talk) 22:22, 8 December 2009 (UTC)
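A quick numerical check of the classical version of this claim (a sketch added for illustration; the masses, velocities, and frame speed are arbitrary made-up values, not from the thread):

```python
# Sketch: kinetic energy lost in a perfectly inelastic collision,
# computed in two different reference frames. As long as momentum is
# conserved, the loss comes out the same in every frame.

def ke(m, v):
    """Classical kinetic energy of a point mass."""
    return 0.5 * m * v**2

def ke_loss(masses, velocities, frame_velocity):
    """KE lost when all bodies stick together, as seen from a frame
    moving at frame_velocity relative to the lab."""
    vs = [v - frame_velocity for v in velocities]  # velocities in the new frame
    total_m = sum(masses)
    v_final = sum(m * v for m, v in zip(masses, vs)) / total_m  # momentum conservation
    before = sum(ke(m, v) for m, v in zip(masses, vs))
    after = ke(total_m, v_final)
    return before - after

masses = [1.0, 1.0]        # kg: the two balls from the example above (illustrative)
velocities = [1.0, -1.0]   # m/s, head-on

print(ke_loss(masses, velocities, 0.0))  # lab frame:    1.0 J
print(ke_loss(masses, velocities, 5.0))  # moving frame: 1.0 J
```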
Redox rxn. How can I tell if a reaction is redox or not?
How can I tell if a reaction is a redox reaction just by looking at the chemical equation? Can someone show me an example? Thank you. 161.165.196.84 (talk) 04:31, 7 December 2009 (UTC)
- A redox reaction is one where the oxidation numbers of some of the elements change in the reaction. All you do is assign oxidation numbers to every element in the chemical reaction. If the oxidation number of some element is different on the left than on the right, then it is a redox reaction. If all of the oxidation numbers stay the same on both sides, then it is not a redox reaction. But you need to actually know how to assign oxidation numbers before you can do anything else here. Do you need help with that as well? --Jayron32 04:34, 7 December 2009 (UTC)
Yes, that would be great. My understanding is this: Hydrogen is usually +1, Oxygen is usually -2. In binary ionic compounds the charges are based on the cation's (metal) and the anion's (non-metal) group in the Periodic Table. Polyatomic ions keep their charge (Ex// Phosphate is -3, Nitrate is -1).
Now, my textbook says the following and I am not sure what this means: "In binary molecular compounds (non-metal to non-metal), the more "metallic" element tends to lose, and the less "metallic" tends to gain electrons. The sum of the oxidation numbers of all atoms in a compound is zero." I'm not quite sure what the first part affects, but the second part is simply saying that once all oxidation numbers have been assigned, the sum of those numbers should be zero. Is this correct? —Preceding unsigned comment added by 161.165.196.84 (talk • contribs)
- Yeah, that's it. You should assign oxidation numbers per element not just for the polyatomics as a whole. Let me give you a few examples of how this works.
- Consider CO2. Oxygen is usually -2, and there are two of them, so that lone carbon must be +4, to sum up to 0, the overall charge on the molecule. C=+4, O=-2
- Consider P2O5. Oxygen is usually -2, and there are 5 of them, so the TWO phosphorus have to equal +10, so EACH phosphorus has an oxidation number of +5. P=+5, O=-2
- Consider H2SO4. Oxygen is usually -2, and hydrogen is almost always +1. That means that we have -8 for the oxygen and +2 for the hydrogen. That gives -6 total, meaning that the sulfur must be +6 to make the whole thing neutral. H=+1, S=+6, O=-2
- Consider the Cr2O7-2 ion. In this case, our target number is the charge on the ion, which is -2, not 0. So, in this case we get Oxygen usually -2, and there are 7 of them, so that's a total of -14. Since the whole thing must equal -2, that means the two Chromiums TOGETHER must equal +12, so EACH chromium has to equal +6. Cr=+6, O=-2.
- There are a few places where you may slip up. H is almost always +1, except in the case of metallic hydrides; in those cases (always of the formula MHx, where M is a metal) H=-1. Also, there are a few exceptions to the O=-2 rule. If oxygen is bonded to fluorine, such as in OF2, fluorine being more electronegative will make the oxygen positive, so O=+2 in that case. Also, there are a few types of compounds like peroxides (O=-1) and superoxides (O=-1/2) where oxygen does not have a -2 oxidation number. These will be fairly rare, and you should only consider them where using O=-2 doesn't make sense; for example in H2O2, if O=-2 then H=+2, which makes no sense since H has only 1 proton. So in that case, O=-1 is the only way it works. However, these are rare exceptions, and I would expect almost ALL of the problems you will face in a first-year chemistry class to be the more "standard" types I describe above.--Jayron32 06:09, 7 December 2009 (UTC)
- Note that O being in the (-1) oxidation state is the reason why peroxides are such strong oxidants. Oxygen is more stable in the (-2) oxidation state, and so peroxides are susceptible to nucleophilic attack, where one oxygen atom accepts electrons and pushes out hydroxide or alkoxide as the leaving group (because of the weakness of the oxygen-oxygen bond). John Riemann Soong (talk) 06:29, 7 December 2009 (UTC)
- An important note is that "oxidation states sum to zero" ONLY in neutral compounds. If your compound is an ion (for example, perchlorate, phosphate, or NAD+), then the oxidation states will sum to the charge of that ion. E.g. the oxidation states in hydronium sum to +1. (It makes sense, right?) John Riemann Soong (talk) 06:33, 7 December 2009 (UTC)
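To tie the procedure and the ion rule together, here is a small illustrative script (added for this write-up, not part of the original replies; the default values are just the usual first-approximation rules quoted above):

```python
# Sketch: solve for the one unknown oxidation number so that the
# oxidation numbers, weighted by atom counts, sum to the overall charge.
DEFAULTS = {"O": -2, "H": +1, "F": -1}  # usual first-approximation rules

def unknown_oxidation_number(formula, charge=0):
    """formula maps element symbol -> atom count; exactly one element
    must be absent from DEFAULTS. Returns (element, oxidation number)."""
    known = sum(DEFAULTS[el] * n for el, n in formula.items() if el in DEFAULTS)
    (el, n), = [(el, n) for el, n in formula.items() if el not in DEFAULTS]
    return el, (charge - known) / n

print(unknown_oxidation_number({"C": 1, "O": 2}))              # ('C', 4.0)
print(unknown_oxidation_number({"P": 2, "O": 5}))              # ('P', 5.0)
print(unknown_oxidation_number({"H": 2, "S": 1, "O": 4}))      # ('S', 6.0)
print(unknown_oxidation_number({"Cr": 2, "O": 7}, charge=-2))  # ('Cr', 6.0)
```

The exceptions mentioned above (metallic hydrides, peroxides, OF2) are exactly the cases where these defaults must be overridden by hand.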
This is helpful, thank you very much to all. Chrisbystereo (talk) 08:09, 7 December 2009 (UTC)
Value of a microchip
If you were to take all the metals and so on out of a chip (your choice: the newest Pentium, a digicam's image sensor, etc.) and price them according to whatever tantalum/aluminum/titanium/cobalt/etc. is going for, what would a chip's value be? I'm just curious what the difference is between the cost of the components and the cost of the labor and such put into making it all work together. Has anyone ever even figured this out before? Dismas|(talk) 04:46, 7 December 2009 (UTC)
- The chip is basically a few grams of silicon, plastic, and maybe copper and iron. I can't imagine that the materials would be more than a few U.S. cents, if that much. The lion's share (99.99%) of cost of the chip itself is labor. --Jayron32 04:51, 7 December 2009 (UTC)
- I'm very aware that the value would be small but I was just wondering how small. My job is to make them and I've been spending the last few weeks looking at them under a scope and this question popped into my head. Dismas|(talk) 05:10, 7 December 2009 (UTC)
- Well, you probably have more accurate measures on the amounts of metal deposited in your process; and if you discount everything that gets wasted when it's etched away, you probably end up with a chip that contains a few nanograms of aluminum, a few picograms of boron, and a couple milligrams of silicon. Other trace metals depend on your process. Perhaps a better way to price everything is to count the number of bottles of each chemical solution or metal ingots that you consume in a given day/week/whatever, and divide by the number of chips produced. Again, this doesn't account for waste material, so you have to do some estimation. Nimur (talk) 08:06, 7 December 2009 (UTC)
- Pure silicon costs a lot more than impure. Does that difference count as labor to you? Metal ore in the ground is free for the taking. Making the base metal is all labor. Pretty much the cost of everything is just labor and energy. There is no "component price" for things, the only question is where do you draw the line and say "this is labor cost", and this is "component cost". I suppose - to you - it depends on if you buy it or make it. But globally there is no such line. To answer the question you are probably actually asking: I would suggest adding up the estimated total salary for everyone in your company, and subtract that from the gross income, and subtract profit. (If it's a public company you should be able to get those numbers.) Then you'll have things like overhead, and energy to include or not, as you choose. Ariel. (talk) 09:12, 7 December 2009 (UTC)
- (either I was still sleepy, or I had an unnoticed ec - Ariel says essentially the same above) Also, the question is not very well-defined. For a microprocessor, you need very pure materials. A shovel of beach sand probably has most of the ingredients needed, but single-crystal silicon wafers are a lot more dear than that. If you pay bulk commodity price for standard-quality ingredients, the price of the material for a single chip is essentially zero. But in that case you will also need a lot of time and effort to purify them to the necessary level. --Stephan Schulz (talk) 09:13, 7 December 2009 (UTC)
- Wouldn't a vast inclusion of cost be R&D? I remember someone quoting The West Wing on this desk about pharmaceuticals that would be relevant: "The second pill costs 5 cents; it's that first pill that costs 100 million dollars." Livewireo (talk) 18:18, 7 December 2009 (UTC)
- Indeed. The cost is almost entirely R&D, I would think. That is a labour cost, though. --Tango (talk) 20:56, 7 December 2009 (UTC)
- Nevermind. I said I make them, I didn't say I owned the company and had access to all the costs associated with making them. I just wanted to know how much it would be if I melted it down and sold the constituent metals and such. I didn't think I was being that unclear. I'll just assume it's vanishingly small. Dismas|(talk) 20:19, 7 December 2009 (UTC)
- It would cost far more to separate the components than the components would be worth. Your question is easy to understand, it just doesn't have an answer - not all questions do. --Tango (talk) 20:55, 7 December 2009 (UTC)
- The fun of the question is how close to zero it is. Bus stop (talk) 21:10, 7 December 2009 (UTC)
- Thank you, Bus stop. I think you get my question most of all. I didn't mention labor at all. Or R&D. Nor did I ever say anything about the cost of separating the components. Again, nevermind. Dismas|(talk) 22:01, 7 December 2009 (UTC)
- Ok, but you can't get round the purity issue mentioned above. There isn't a single value for silicon, say, it depends on the purity. How pure the silicon would be depends on how much labour you put into separating the components. --Tango (talk) 22:12, 7 December 2009 (UTC)
- And the quantity of metals depends wildly on the actual die, photo masks, etc. As I mentioned above, you can estimate the masses of these constituent ingredients better than we can. Different mask patterns can leave as much as 100% or as little as 0% of a particular deposited layer - so there is no "in general" answer. You just have to estimate layer thickness and layer area for each stage of the process. Some typical numbers for areas and thicknesses might come out of articles like Self-aligned gate#Manufacturing process. Nimur (talk) 22:17, 7 December 2009 (UTC)
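Following the estimate-and-multiply approach suggested above, a rough order-of-magnitude sketch (every mass and commodity price below is an illustrative assumption, not a measured value):

```python
# Sketch: raw-material value of one packaged chip, estimated as
# (assumed mass of each material) x (assumed bulk commodity price).
materials = {
    # material: (grams per chip, USD per kg) -- made-up ballpark figures
    "silicon (bulk, not wafer-grade)": (0.5, 2.0),
    "copper (leadframe/heat spreader)": (0.1, 7.0),
    "aluminum (interconnect layers)": (1e-6, 2.5),
    "gold (bond wires, if present)": (1e-3, 40000.0),
}

total = sum(g / 1000.0 * usd_per_kg for g, usd_per_kg in materials.values())
print(f"raw-material value: ${total:.4f} per chip")  # on the order of cents
```

Even with generous assumptions the answer stays in the cents range, which supports the "vanishingly small" conclusion; note that wafer-grade silicon would be worth far more than the bulk price used here, which is the purity point made above.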
mesomeric versus inductive effects for the pKa of catechol (ortho-diphenol)
I actually thought that o- and p-benzenediols should have higher pKas than phenol because of the destabilising mesomeric effect, but it seems that catechol (the ortho-diol) has a pKa of 9.5 (according to Wikipedia). Google seems to say resorcinol (the meta-diol) has a pKa of 9.32, while the para-diphenol is 9.8. This source seems to give a different set of values.
My hypothesis is that the inductive effect is also at play: having a carbanion resonance structure next to a (protonated) oxygen atom will stabilise it somewhat. And of course, the further apart the two groups are, the weaker the inductive effect, which is why the para-diphenol would have the highest pKa of all the diphenols, while the meta-diol would barely see any mesomeric effect and mostly see the inductive effect. Is this reasonable? Is it supported by literature? John Riemann Soong (talk) 05:44, 7 December 2009 (UTC)
phenols as enols
I'm looking at this synthesis where a phenol is converted into a phenoxide and then used to perform a nucleophilic attack (in enol form) on an alkyl halide. My question is: why use lithium(0)? It seems a lot of trouble when you could just deprotonate phenol with a non-nucleophilic base like t-butoxide. Is it because a phenolate enolate is more nucleophilic at the oxygen? If so, why not use something like lithium t-butoxide to bind the phenolate more tightly? John Riemann Soong (talk) 06:24, 7 December 2009 (UTC)
- I disagree with "a lot of trouble". Weigh a piece (or measure a wire length) of metal, drop it in, and you're done. Seems no worse than measuring your strong base (often harder to handle and/or harder to measure accurately). And where does that base come from? Do you think it more likely to be a benefit or a problem to have an equivalent of t-butanol byproduct (the conjugate acid of your strong base) in the reaction mixture (note that the chosen solvent is non-Lewis-basic) and during product separation/purification? The answer to every one of your "why do they do it that way?" questions is "it was found empirically to work well enough and provide a good trade-off for results vs cost." Really. Again, nothing "in reality" works as cleanly as on paper, so you really have to try lots of "seems like it should work" routes, and you find that every reaction is different and it's very hard to predict or explain why a certain set of conditions or reactants is "best" (for whatever "best" means). It's interesting to discuss these, but I think you're going to get increasingly frustrated if you expect clear "why this way?" answers for specific reactions. On paper, any non-nucleophilic base will always work exactly as a non-nucleophilic base, and that's the fact. In the lab, one always tries small-scale reactions with several routes before scaling up whatever looks most promising. DMacks (talk) 07:05, 7 December 2009 (UTC)
- Along those lines, the best source for a certain reaction is the literature about that reaction. The ref you saw states "The use of lithium in toluene for the preparation of alkali metal phenoxides appears to be the most convenient and least expensive procedure. The procedure also has the merit of giving the salt as a finely divided powder." DMacks (talk) 07:12, 7 December 2009 (UTC)
- Sorry, I guess my experience with oxidation-state-0 group I and II metals so far has been with Grignard and organolithium reagents. From an undergrad POV, they are such an awful pain to work with (compared to titrating a base and acid-base extraction)! Also -- deprotonated phenols can act like enols? Why aren't aldol side reactions a problem to worry about during the synthesis of aspirin from salicylic acid? And why aren't enol ether side reactions a worry here? John Riemann Soong (talk) 07:25, 7 December 2009 (UTC)
- The cited ref notes that the enol-ether side product is a huge problem (3:1 of that anisole product vs the "enolate α-alkylation" product they are primarily writing about). If the goal is "a difficult target", it doesn't matter if the reaction that gives it actually only gives it as a minor product compared to some other more likely reaction. The standard result of "phenoxide + SN2 alkylating agent" is O-alkylation, with other isomers being the byproduct. However, in general for enolates, the preference for O-alkylation vs C-alkylation is affected by solvent (especially its coordinating ability), electrophile, and metal counterion. It's unexpected to me that they get so much of it, but if there's any there and you want it badly enough, you go fishing through all the other stuff to get it. That's what makes this reaction worthy of publication...it does give significant amounts of this product and allows it to be purified easily from the rest. DMacks (talk) 09:16, 7 December 2009 (UTC)
phenol-type quinoline
What do you call a phenol-type quinoline with a hydroxyl group substituted in the 8-position? I'm trying to find out its pKa (in neutral, nonprotonated form), but it's hard to look up without knowing its name.
(Also, it is an amphoteric molecule, right?) These two pKas appear to interact via resonance, making for some weird effects on a problem set... (I'm considering comparative pH-dependent hydrolysis rates (intramolecular versus intermolecular) for an ester derivative of this molecule...) John Riemann Soong (talk) 07:30, 7 December 2009 (UTC)
- Standard IUPAC nomenclature works pretty well for any known core structure: just add prefixes describing the location and identity of substituents. So quinoline with hydroxy on position 8 is 8-hydroxyquinoline (a term that gives about 132,000 google hits). Adding "pka" to the google search would help find that info. The protonation of these types of compounds is really interesting (both as structural interest and in the methods to study it)! All sorts of Lewis-base/chelation effects. DMacks (talk) 09:03, 7 December 2009 (UTC)
acidic proton question (Prilosec & Tagamet!)
Okay, sorry for posting the 4th chem question in a row! I'm trying to figure out the acidic proton in two molecules, Tagamet and Prilosec. I'm given a pKa of 7.1 for the former, and 4.0 and 8.8 for the latter. I don't know which sites in Prilosec the two pKas correspond to. (Possibly there are more basic and acidic sites, but they are either not detailed or outside the range of discussion?)
With Tagamet, imidazole is the most obvious candidate for being a base with a pKb near 7, but I'm wondering: why not the guanidine-type residue? It has a nitrile group on it -- but how many pKa units would that shift it? The pKa of guanidine is 1.5, so plausibly a CN group could raise it to 7?
Oh yeah, and Prilosec. I'm ruling out the imidazole proton, but I feel that the alpha-carbon next to the sulfoxide group is fairly acidic, because it has EWGs on both sides PLUS the carbanion could be sp2-hybridised if the lone pair helps "join" two conjugated systems. But the imidazole and pyridine lone pairs also look good for accounting for some of those pKas. Why aren't there 3 pKas? I think the imidazole-type motif in Prilosec is responsible for the pKa (of the conjugate acid) of 8.8 -- but why the elevated pKa compared to normal imidazole? And why would the pKa of pyridine fall that low? (It has an electron-donating oxygen substituted in the para position!) But assigning the pKas the other way round doesn't make sense either. I'm slightly disconcerted, as I know these lone pairs are basic. John Riemann Soong (talk) 09:50, 7 December 2009 (UTC)
science
At first there were just two societies, i.e. the hunting and the gathering society, but there was still life; the people were still living, no inequality was present, everyone was equal, and there was peace all over. But nowadays, because of science, there is no peace, no equality, no respect; everyone is indulged in earning money. So what would happen if science and its inventions were removed from our society? Should we again start a hunting and gathering society just for the sake of peace and equality? —Preceding unsigned comment added by Umair.buitms (talk • contribs) 13:31, 7 December 2009 (UTC)
- What makes you think hunter-gatherer societies were peaceful? They generally had greater equality since there wouldn't be enough food to go around if there was an elite that didn't hunt or gather, but they certainly fought neighbouring tribes. Do you really want equality, though? Surely everyone having a low standard of living is worse than some people having a high standard of living and others having a higher standard, which is the case in the modern developed world. --Tango (talk) 13:38, 7 December 2009 (UTC)
- I agree with Tango - there was unlikely to have been "equality" in the early days of humanity - and certainly no "peace". In modern times, there are still a few hunter-gatherer societies out there in places like the Amazon rainforest that science has not touched. For them, there is still warfare between tribes - women are still given one set of jobs and the men others - and there are still tribal leaders who rule the lower classes. The one place where equality is present is in "racial equality" - but that's only because they don't routinely meet other races because of the geography.
- As for removing science and invention - our society literally could not exist that way. The idea that (say) 300 million Americans could just put on loincloths and start hunting and gathering is nuts! There would be nowhere near enough food out there for that to happen - without modern agriculture, we're completely incapable of feeding ourselves. We would need for perhaps 299 million people to die before the one million survivors could possibly have enough to eat.
- I think your idyllic view of hunting & gathering is severely misplaced. It's a cruel, brutal existence compared to the relative peace and tranquility that is modern life.
- SteveBaker (talk) 13:51, 7 December 2009 (UTC)
- It's a very common if very confused view that all human problems are a product of modernity and so forth. It's true we have some new problems... but the problems of civilization are all there in part because living outside of civilization is so brutal. It is similar to the point of view that animals want to be "free"—most appear to want a stable food source more than anything else, because being "free" means starving half of the time. That's no real "freedom". --Mr.98 (talk) 14:44, 7 December 2009 (UTC)
- At least Sabre Toothed Tigers are extinct this time around. APL (talk) 15:16, 7 December 2009 (UTC)
- Do you have any references to support your utopian view of the hunter-gatherer societies? All evidence I've seen points to a society in which tribal warfare is common. Women are possessions. Children are expendable. And attempts to advance society are only accepted if they allow the tribe to attack the neighbors and steal more women and children. I feel that modern society is a bit more peaceful than that. -- kainaw™ 13:57, 7 December 2009 (UTC)
- To be fair, women as possessions is more what you get after the arrival of basic agriculture (herding), when the link between sex and children is more clearly understood and the concept of 'owning' and inheriting is established. Societies everywhere have cared about their own children: they were not viewed as expendable, except in as much as 'people who are not my family/tribe' are viewed so. If children were really viewed as expendable, there wouldn't be any concern about continuation of the family and providing inheritance, and hence there would be no possessiveness of women: the whole 'women as possessions' thing is about ensuring the children they bear are verifiably the children of the man who thinks they're his: without that, there's no reason for the man to care if the woman has sex with other men. The OP may have a hopelessly utopian view, but I'm not convinced yours is any more accurate. If nothing else, the Old Testament gives us accessible sources written up to three thousand years ago: the overwhelming feeling I get from it is how little the way people think has changed in the most basic ways. It is full of people caring very much about their children, way back. 86.166.148.95 (talk) 18:50, 7 December 2009 (UTC)
I'm going to be all cheesy and link you to Billy Joel's "We Didn't Start the Fire". 194.221.133.226 (talk) 14:06, 7 December 2009 (UTC)
- I think even One Million Years B.C. was closer to the truth than the OP's utopian vision. Though their makeup probably wasn't as good :) Dmcq (talk) 14:27, 7 December 2009 (UTC)
OP, the viewpoint you expressed is known as anarcho-primitivism. You can read our wikipedia article, which includes views of both proponents and critics. See also Luddite, Neo-Luddism etc for less extreme versions of anti-modernism movements. Abecedare (talk) 15:39, 7 December 2009 (UTC)
- Also, for what it's worth, the development of science and so-called modernity was really just the logical outcome of a successful hunter-gatherer society. It's a lot easier, more efficient, and more survivable to build a house and a farm instead of wandering around hoping you find food, water, and shelter. Agriculture leads to spare time, spare time leads to advancements, which lead to greater agriculture, which eventually leads to Twitter. You can't go back; nobody would choose death over relaxation and creativity. ~ Amory (u • t • c) 16:26, 7 December 2009 (UTC)
- It's not that inevitable - plenty of societies didn't develop agriculture until they were introduced to it by other societies, some in modern times (eg. Australian Aborigines). Things wouldn't have needed to be too different for agriculture to have never been developed anywhere (or, at least, not developed until millennia later than it was). --Tango (talk) 16:34, 7 December 2009 (UTC)
- These are basically value judgements we are all making. These are subjective answers we are giving. Not surprisingly we favor what we have. Bus stop (talk) 16:41, 7 December 2009 (UTC)
- Re: not inevitable: I seem to recall [citation needed] that one of the ways archaeologists identify remains as early-domesticated goats rather than wild goats, is to look for signs of malnutrition. Captive goats were less well fed than wild goats. 86.166.148.95 (talk) 18:54, 7 December 2009 (UTC)
- I've never heard that, but it makes some sense. Goats were often raised for milk, rather than meat, and they don't need to be particularly well nourished to produce milk (actual malnourishment would stop lactation - animals usually don't use scarce resources on their children if they are at risk themselves). --Tango (talk) 18:57, 7 December 2009 (UTC)
- The actual experience of prehistoric hunter-gatherers is a serious bone of contention among anthropologists, made all the more difficult by various wild claims made by armchair researchers of the 18th and 19th centuries. (See state of nature and Nasty, brutish, and short). Among the complications are these: HGs lived in wildly diverse ecologies, meaning they had wildly diverse lifestyles, with wildly diverse advantages and disadvantages - how can you evaluate the lifestyle of an averaged-out Inuit/San person meaningfully? Also, the few remaining HGs live at the very edges of the habitable earth, which makes it difficult to extrapolate what life was like in more normalized areas. In very, very generic terms you can say this: people who lived the HG lifestyle worked a lot less per day than the farmers their descendants eventually became, they had few diseases compared to farmers, and they probably had more well-rounded diets than farmers. While there was surely enough sexual discrimination to go around, it was probably not nearly as bad as in the farming communities, and the whole "slavery to acquisition" we in the modern world play to was pretty much non-existent; you can't build up wealth if you've got to lug everything on your back. On the other hand, they had relatively slow population expansion, so when disasters did hit, it might spell the end of the band or tribe. Inter-band warfare was a real hit-and-miss kind of thing too - there were neighbours to be trusted and others that weren't, but with no central authority, there was really nobody "watching your back" if relations got out of hand. On the whole, it probably was quite a nice existence if you happened to be living in a reasonable area and didn't mind living your life within an animistic/mystical framework where you have enormous understanding of the surface of the world around you, but virtually no grasp of the real reason why anything happens. No books, no school, no apprenticeship, very little craft specialization beyond perhaps the "women gather, men hunt" kind of thing. Matt Deres (talk) 21:43, 7 December 2009 (UTC)
- Of course you can build wealth if you need to haul it around. Your form of wealth would most likely be draft animals so that you can carry more stuff around. Googlemeister (talk) 22:21, 7 December 2009 (UTC)
- Domestication of animals is part of the road to civilisation. If we're talking about early hunter-gatherer societies (which I think we are - if we're talking about later h-g societies then you may be right), then they wouldn't have domestic animals. They wouldn't have had much to carry around. Simple clothes, stone tools, ceremonial items. Their economy was 99.9% food and I don't think they had the means to preserve it for long. --Tango (talk) 22:47, 7 December 2009 (UTC)
- It does not need to be animals, slavery has been around for some time as well. Googlemeister (talk) 16:54, 8 December 2009 (UTC)
- The basics of smoking meat have been known for a very long time, but I think it really came down to not wanting to carry the result around. In order to not exhaust an area, most HGs had to keep on moving at regular intervals and the use of pack animals was, while not completely unknown, not something widely employed. Dogs were probably in use for help with the hunt, but once you start semi-domesticating your prey animals you're not really hunting-gathering anymore - now you're herders, and eventually pastoralists, perhaps practising transhumance. Anthropologists use the term "pastoralist" in a more narrow sense than our article does, basically only using it for those groups that have only minimal physical goods and a high reliance on the herd animal. The classic example there are the Nuer. Matt Deres (talk) 01:33, 8 December 2009 (UTC)
- Smoking meat might help you get through a bad winter, but it isn't going to allow you to build up a retirement fund. I think a HG would have two possible wealth levels - enough food for their tribe to survive and not enough food for their tribe to survive. I can't see any way they could have significantly more wealth than having enough food to eat. --Tango (talk) 11:51, 8 December 2009 (UTC)
1. This subject would have been more appropriately placed on the Humanities Desk.
2. The OP is delusional.
Life, in the state of nature, is "solitary, poor, nasty, brutish, and short" leading to "the war of all against all."
— Thomas Hobbes, Leviathan (1651)
B00P (talk) 23:24, 7 December 2009 (UTC)
- Getting back to the OP: you may have been misled by the name into thinking there were two societies originally, one that hunted and one that gathered. In fact, hunter-gatherer is a generic name for pre-agricultural groups, almost all of which ate both types of foods: those (mostly animals) which some members (mostly male) had hunted, and those (mostly plants) which others (mostly female) had gathered. (Another name for these societies, should you wish to research further, is foragers.) It is true that most of these groups had to be mobile, to follow the food, and as such could carry little with them, so they did not accumulate wealth in the sense in which we understand it. However, there are always exceptions. One well-studied example are the Indigenous peoples of the Pacific Northwest Coast, who lived in a rich and fertile ecosystem, and particularly the Haida, who developed an impressive material culture -- so much so that they had to invent the potlatch in order to get rid of (or share around) the surplus. And that relates to another sort of wealth, a social and cultural wealth as opposed to a material one. Much harder to demonstrate than grave goods! BrainyBabe (talk) 23:29, 7 December 2009 (UTC)
- @B00P- Please don't wp:BITE the newbies or post nonsense. Hobbes was a philosopher who never did any fieldwork, never studied the topic and just made shit up as he went along, pretending a state of nature had once existed so he could have an excuse to make up even more shit without any basis in reality. Explaining why you think such-and-such a policy is good is perfectly fine, inventing lies and making crap up out of whole cloth is the worst kind of academic fraud. You don't do yourself any favours by quoting him as if it was worth anything. Matt Deres (talk) 01:43, 8 December 2009 (UTC)
- I think that in good times they had peace of mind beyond our wildest imagination. Bus stop (talk) 23:32, 7 December 2009 (UTC)
- Indeed. So pervasive is the "solitary, poor, nasty, brutish, and short" lie that most people don't know that H-Gs in fact lived in familial groups, had no poverty, weren't particularly nasty or brutish, and lived longer, healthier lives than their farming cousins. I think that, if most people could see what kind of life they'd reasonably expect to lead, they'd choose hunting and gathering over farming any day of the week - less work, less disease, less worries, no boss - who wouldn't want that? Matt Deres (talk) 01:52, 8 December 2009 (UTC)
(edit conflict) I highly doubt the world would be able to support 6+ billion hunter-gatherers. I would think that alone would answer the OP's question - if we, as a species, reverted back to hunting and gathering, it would require the deaths of billions. I would say that suggests that, "for the sake of peace and equality", we definitely should not do this. TastyCakes (talk) 23:34, 7 December 2009 (UTC)
- Well, death is the great equalizer ... so in some sense, "for the sake of peace and equality", we should do this, unless we do not value equality.... "I'll rant as well as thou." Nimur (talk) 18:09, 8 December 2009 (UTC)
- Agriculture and settlements didn't necessarily make life better, but they made life more productive. Settled farmers can produce more food than hunter-gatherers can, and more food = more children.[1] Thinkquest estimates the Earth's carrying capacity for hunter-gatherers to be 100 million,[2] and the carrying capacity assuming the farming of all arable land to be 30 billion people. As soon as human societies struck on the strategy of settling and farming, this inevitably expanded, either by cultural exchange or by the farmers progressively expanding their territory to the detriment of hunter-gatherers. We have pre-Western-contact Australian Aboriginals and Bushmen/San as remaining examples of hunter-gatherers; Amazonian tribes aren't a great example, as they derive from pre-Columbian civilizations who left behind the terra preta, so they have traditions and social structures that persist from those times. Here's a study of hunter-gatherer societies at the end of the last ice age, which looks at societal transitions.[3] As for violence and peace, the death rate of adults of the Hiwi is about 2% per year, and the death rate in Cro-Magnons and our sibling species the Neanderthals is estimated to have been 6% per year.[4][5] Over half of deaths in the Aché pre-contact with Westerners were due to violence, so the idea that it is modern society, technology, science or civilization that causes violence is plain wrong, but they do make our killing more efficient and able to be done on a larger scale. Fences&Windows 18:42, 8 December 2009 (UTC)
Photon
Does the energy of a photon depend on the reference frame? I would think so, because observers in different reference frames measuring the frequency of a photon will measure different values (because their clocks run at different rates), and E = hf. But then a paradox seems to arise: if observer A measures the energy of a photon to be E, then an observer B moving relative to A should measure a different energy, say γE. But by the same reasoning, in B's reference frame, A should measure γ times B's value. So who measures what energy? —Preceding unsigned comment added by 173.179.59.66 (talk) 17:54, 7 December 2009 (UTC)
- See redshift. Redshift is precisely photon energies being different in different frames. Redshift is determined by the relative velocity between the source and observer. The difference in velocity between two observers would mean each sees a different redshift - the one receding from the source faster (or approaching the source slower) will see the energy as lower. --Tango (talk) 18:04, 7 December 2009 (UTC)
- There are other factors that influence the observed frequency besides the γ-factors. If everything is taken into account, there is no paradox. See doppler effect. Dauto (talk) 18:08, 7 December 2009 (UTC)
- Also, see Relativistic Doppler effect, which extends the mathematics to apply to a wider range of relative velocities of reference frames, accounting for additional effects of relativity. Nimur (talk) 19:08, 7 December 2009 (UTC)
Okay, so basically you're saying that the equation relating the two observed frequencies would be the Doppler equation, regardless of whether the emitter is actually seen? —Preceding unsigned comment added by 173.179.59.66 (talk) 01:02, 8 December 2009 (UTC)
- What do you mean by "regardless of whether the emitter is actually seen"? If you detect a photon, then (by definition) the emitter is being seen. Dauto (talk) 09:00, 8 December 2009 (UTC)
- What I meant was, you don't know the velocity of the emitter. But I realise now that it shouldn't matter. —Preceding unsigned comment added by 173.179.59.66 (talk) 11:29, 8 December 2009 (UTC)
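For concreteness, here is the longitudinal relativistic Doppler relation from the articles linked above, applied to photon energy (a sketch; the photon frequency and speeds are made-up illustrative numbers):

```python
import math

H = 6.626e-34  # Planck constant, J*s

def doppler_energy(e_photon, beta):
    """Photon energy measured by an observer receding from the source at
    speed beta = v/c along the line of sight (longitudinal relativistic
    Doppler). Use a negative beta for an approaching observer."""
    return e_photon * math.sqrt((1 - beta) / (1 + beta))

e = H * 5e14                    # ~green visible-light photon, in joules (illustrative)
print(doppler_energy(e, 0.5))   # receding at 0.5c: energy shifted down
print(doppler_energy(e, -0.5))  # approaching at 0.5c: energy shifted up
```

The apparent asymmetry between the two observers is resolved exactly as noted above: each comparison involves the full Doppler factor, not just the γ of time dilation.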
And a related question (actually, this was the motivator for the first question): suppose that a photon strikes a proton (at rest in the lab frame) and produces a proton and a pion or something. The first question (this was an exam question) was to find the threshold energy, which I did without problem. The second question asked to find the momentum of the pion if the photon has the threshold energy. So my strategy was to find the velocity of the center of mass and then make that the velocity of the pion...how would you do this though? —Preceding unsigned comment added by 173.179.59.66 (talk) 01:17, 8 December 2009 (UTC)
- If the photon has the threshold energy, and it actually has that interaction you describe, then all the energy is used up in creating the pion. There is no energy left to move anything, so pion and proton are stationary. But the photon had momentum, so there will need to be some movement to carry that away. I guess you will need simultaneous equations, one to conserve energy and one to conserve momentum: momentum of photon = momentum of proton + momentum of pion (as vectors); energy of photon = kinetic energy of pion + kinetic energy of proton. Graeme Bartlett (talk) 03:16, 8 December 2009 (UTC)
- Indeed, you need to solve the equations simultaneously. If you're clever, you'll be able to work out what directions the proton and pion fly off in. --Tango (talk) 15:59, 8 December 2009 (UTC)
- It probably goes without saying, but the above equation for energy conservation should be:
- energy of photon=kinetic energy of pion + energy equivalent of the pion's mass + kinetic energy of proton
- Otherwise we're not accounting for the mass of the pion being created. TenOfAllTrades(talk) 16:22, 8 December 2009 (UTC)
- Oops, missed that out. For the direction, I am guessing that the minimal-energy situation will not have any side-to-side or up-and-down movement, so that only a scalar needs to be considered. The photon hits the proton, and the pion and proton head off in the same direction as the original photon. Your idea about changing the coordinates sounds wise. If you pick coordinates in which total momentum is zero, afterwards you will still have momentum zero. The minimal-energy situation in this frame will be the pion and proton sitting stationary at the same spot. Graeme Bartlett (talk) 20:57, 8 December 2009 (UTC)
- The centre of mass frame does make things easier to calculate, but your final answer isn't very useful - yes, everything ends up at rest, but it's at rest in a frame that is moving very quickly compared to your lab. --Tango (talk) 22:54, 8 December 2009 (UTC)
- It is useful insofar as it establishes that the proton and the pion will be moving together in the lab reference frame. Dauto (talk) 02:33, 9 December 2009 (UTC)
- So at the minimum: energy of photon = hF = ½v²(mass of pion + mass of proton) + energy equivalent of the pion's mass
- Momentum of photon = hF/c = v(mass of pion + mass of proton)
- Divide the energy formula by the momentum:
- c = ½v + (energy equivalent of the pion's mass)/[v(mass of pion + mass of proton)], which you can perhaps solve for v (c = speed of light, h = Planck's constant, F = frequency of photon). Graeme Bartlett (talk) 11:12, 9 December 2009 (UTC)
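For comparison with the non-relativistic estimate above, here is a fully relativistic version of the calculation (a sketch added for illustration; it assumes the reaction is γ + p → p + π⁰ and uses standard rest energies, which the original question did not specify):

```python
import math

M_P = 938.272   # proton rest energy, MeV
M_PI = 134.977  # neutral pion rest energy, MeV (assuming a pi0)

# Threshold photon energy in the lab (proton initially at rest): the
# invariant mass squared s = (E + M_P)^2 - E^2 must reach (M_P + M_PI)^2.
e_gamma = ((M_P + M_PI)**2 - M_P**2) / (2 * M_P)

# At threshold the products are at rest in the CM frame, so in the lab
# they move together at the CM velocity, as noted above.
beta = e_gamma / (e_gamma + M_P)   # CM-frame speed: total p*c / total E
gamma = 1.0 / math.sqrt(1.0 - beta**2)
p_pion = gamma * beta * M_PI       # pion momentum, MeV/c

print(f"threshold photon energy: {e_gamma:.1f} MeV")  # ~144.7 MeV
print(f"pion momentum: {p_pion:.1f} MeV/c")           # ~18.2 MeV/c
```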
Area of Hong Kong
I was reading the question above about the size of California and I was wondering - has anyone ever gone and added up the total floor space in a dense city like Hong Kong, including all the floors in all those skyscrapers as well as area on the ground, and compared that to its geographical area (1,104 square km, according to the article)? How much larger would Hong Kong, for instance, be? When viewed in that light, would the List of cities proper by population density change dramatically (i.e., would cities with people living in big skyscrapers come out looking better, i.e. less dense, than cities with lots of one-story slums)? TastyCakes (talk) 19:23, 7 December 2009 (UTC)
- I vaguely recall that such statistics (total habitable area) are commonly collected by governments, tax administration authorities, electric/water utilities, fire-departments, etc. I can't recall if "Total habitable area" is the correct name. I'm pretty sure that the statistic of habitable- or developed area (including multi-story buildings) as a ratio to total land area is commonly used for urban planning. Nimur (talk) 19:54, 7 December 2009 (UTC)
- Floor Area Ratio. Sorry, the technical term had eluded me earlier. This article should point you toward more explanations of the usage of this statistic. Nimur (talk) 21:05, 7 December 2009 (UTC)
- Ah ok, thanks for that. Have you ever heard of it being calculated for an entire city? TastyCakes (talk) 23:41, 7 December 2009 (UTC)
- That's really the point - it's sort of a zoning ordinance metric for urban areas that's supposed to be more analytic than just capping the maximum number of stories per building. Apparently its use is widespread in urban Japanese zoning codes. It has also seen limited use in American urban planning, e.g. Boulder, Colorado. This source, Studio Basel (apparently a private architecture and urban studies institute), claims that Hong Kong's floor area ratio is 10-12: "the highest urban density in the world". Nimur (talk) 00:32, 8 December 2009 (UTC)
- (I thought I had posted this about six hours ago but I have just come back to find an edit conflict screen) Well, the office space is 48 million square feet. I'll leave it to Steve Baker to add on something for the fractal cracks between the floorboards. This book has some information on residential space, but it's well out of date. SpinningSpark 00:23, 8 December 2009 (UTC)
- Actually this is the argument for using floor area ratio as a density metric - it compares developed land area to actual habitable space by dividing the useful square footage by the area of its allocated plot - instead of dividing by some unknown estimate of the total city land area (which would include things like undeveloped hills, trees, spaces between buildings). FAR is more like an integral - it weights each building's floor space by the differential land-area unit that is allocated for it, and then accumulates and averages for the entire city. Cracks between floorboards aren't at issue - but unzoned land and undevelopable terrain are specifically not included in the total statistic. Nimur (talk) 00:37, 8 December 2009 (UTC)
Ah ok, I get that many square feet as being almost 4.5 square km, less than half a percent of Hong Kong's area. I can't imagine residential areas being hugely larger than that, but maybe I'm wrong? It seems that, even if residential space is several times office space, Hong Kong's usable surface area is only increased a few percent by all those sky scrapers and other buildings. Is that a fair assessment? TastyCakes (talk) 15:27, 8 December 2009 (UTC)
- Only if you count "all land in the borders" as "usable." Another definition of "usable land" might be any land area that is zoned or districted for development. A floor area ratio of 10 means that you are multiplying the effective usable area by a factor of 10. Nimur (talk) 17:13, 8 December 2009 (UTC)
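Putting rough numbers on the two readings (an illustrative back-of-envelope calculation; the 48 million square feet and 1,104 km² figures are the ones quoted earlier in this thread):

```python
SQFT_TO_SQKM = 0.09290304 / 1e6  # 1 sq ft = 0.09290304 m^2

office_floor_sqkm = 48e6 * SQFT_TO_SQKM  # 48 million sq ft of office space
land_sqkm = 1104.0                       # Hong Kong land area, per the article

print(office_floor_sqkm)                    # ~4.46 sq km
print(office_floor_sqkm / land_sqkm * 100)  # ~0.40% of all land

# With a floor area ratio of ~10, a developed plot carries roughly ten
# times its own area in floor space -- but only developed plots count,
# not the total land area used in the division above.
```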
Converting from degrees K to F
Could someone please answer this question? Thanks, Kingturtle (talk) 19:58, 7 December 2009 (UTC)
- The formula for converting K to F is F = 1.8K - 459.7 Googlemeister (talk) 20:02, 7 December 2009 (UTC)
- That's rounded; F = 1.8K - 459.67 is the exact formula. "Degrees Kelvin" is obsolete terminology, by the way; they've been just called "kelvins" (symbol K, not °K) since 1968. For example, 273.15 K (kelvins) = 32°F (degrees Fahrenheit). --Anonymous, 21:04 UTC, December 7, 2009.
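The conversion as a one-liner (trivial, but included for completeness, using the exact formula just given):

```python
def kelvin_to_fahrenheit(k):
    """Exact conversion: F = 1.8 * K - 459.67."""
    return 1.8 * k - 459.67

print(kelvin_to_fahrenheit(273.15))  # 32.0 (freezing point of water)
print(kelvin_to_fahrenheit(0.0))     # -459.67 (absolute zero)
```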
- Google can actually answer these types of questions. What is 1.416785 × 10^32 kelvin in Fahrenheit? -Atmoz (talk) 21:54, 7 December 2009 (UTC)
- WolframAlpha does this too, and gives other (scientific) information about the conversion for comparison. TastyCakes (talk) 23:37, 7 December 2009 (UTC)
Compact florescent bulbs
What is the acceptable temperature range at which you can use these lights? I ask because I want to know if I can use it outside when it is -50 deg, or if it will not work at that temperature. Googlemeister (talk) 19:59, 7 December 2009 (UTC)
- From our Compact fluorescent lamp article: CFLs not designed for outdoor use will not start in cold weather. CFLs are available with cold-weather ballasts, which may be rated to as low as -23°C (-10°F). (...) Cold cathode CFLs will start and perform in a wide range of temperatures due to their different design. Comet Tuttle (talk) 20:16, 7 December 2009 (UTC)
- The packaging will indicate the acceptable range for the bulb. They are universally dimmer when cold, so this may be a persistent issue considering -50 (C or F) is 'pretty darn cold' in the realm of consumer products. —Preceding unsigned comment added by 66.195.232.121 (talk) 21:27, 7 December 2009 (UTC)
Inductive electricity through glass
With Christmas season here, I had an idea... Many wireless chargers use inductors to "transmit" electricity from a base unit to a device. Does anyone make that sort of thing that transmits electricity from inside the house to outside? I'm not considering a high-powered device. I'm considering the transmit/receive devices to be within an inch of each other on opposite sides of a window. -- kainaw™ 21:46, 7 December 2009 (UTC)
- I think normal wireless rechargers should be able to transmit through glass. 74.105.223.182 (talk) 23:55, 7 December 2009 (UTC)
- I figure it would, but I don't want to recharge my watch or toothbrush through a window. I'm looking for something to send electricity through a window. I would like to have two parts. The indoor part will have one end plug into an outlet and the other end stick to the inside of the glass. The other part will stick to the outside of the glass and have a socket to plug something into (like Christmas lights - which is what gave me the idea). -- kainaw™ 01:34, 8 December 2009 (UTC)
- You'll need some conversion electronics to step up to a higher frequency. Inductive power transfer at 60 Hz requires very large coils. Most commercial devices I've seen operate at hundreds of megahertz and usually convert back to DC on the receiving end. See Inductive charging, if you haven't already found that article. Nimur (talk) 07:14, 8 December 2009 (UTC)
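- To illustrate why frequency matters so much here, a back-of-envelope Faraday's-law sketch (all coil and field values below are assumed purely for illustration, not taken from any real charger):

```python
import math

def peak_emf(turns, area_m2, b_peak_t, freq_hz):
    # Peak EMF induced in a coil by a sinusoidal field: N * A * B * 2*pi*f
    return turns * area_m2 * b_peak_t * 2 * math.pi * freq_hz

coil = dict(turns=50, area_m2=1e-3, b_peak_t=1e-4)  # hypothetical small coil

for f in (60.0, 1e5, 1e8):
    print(f"{f:>11,.0f} Hz -> {peak_emf(freq_hz=f, **coil):10.3f} V peak")
# The same small coil that is useless at 60 Hz produces useful voltage once
# the transmitter up-converts to a high frequency.
```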
- It's a really good idea. If it's not expensive, you should try to bring it to market. (Although unless you can patent it, expect to be copied.) If you need more power, you can increase the area of the device. A suction cup will probably not work - they rarely can stay on for long. Ariel. (talk) 03:42, 8 December 2009 (UTC)
December 8
reaction of lead sulfate and sodium bicarbonate
While getting a new car battery, I watched them clean a lot of white, lead sulfate corrosion from the terminals by pouring some baking soda mixed with warm water on it. It foamed up quite a bit.
What did that reaction create? Elemental lead?
After, he just wiped it up with a shop towel. How badly contaminated is that towel? I wonder what they do with it. Ariel. (talk) 03:39, 8 December 2009 (UTC)
- The lead sulfate probably did nothing and was probably not there, but the sulfuric acid would have reacted with the bicarb to release carbon dioxide. After this neutralization the product would be safe to handle. Just sodium sulfate. Graeme Bartlett (talk) 05:46, 8 December 2009 (UTC)
- No, it couldn't have been sulfuric acid - that's a liquid. These were soft white crystals. Maybe they were soaked with sulfuric acid? They did seem kind of wet looking. So they just stayed as lead sulfate? Ariel. (talk) 07:59, 8 December 2009 (UTC)
- There would be some sulfuric acid mixed in with the crystals; that is what makes the fizz. But there could also be copper sulfate from dissolving the copper wires, and calcium sulfate, from dissolving dirt. Any lead sulfate is pretty insoluble and stable.
Rendering a planet uninhabitable
How large would a single explosion need to be to render an Earth-like planet temporarily uninhabitable? Would 5.55×10^20 joules do it? Horselover Frost (talk) 04:42, 8 December 2009 (UTC)
- That would depend on a lot of things, and uninhabitable to whom? Chicxulub crater#Impact specifics says "estimated to have released 4×10^23 joules of energy". Many species survived that. PrimeHunter (talk) 05:01, 8 December 2009 (UTC)
- You would probably enjoy this website: how to destroy the earth. Ariel. (talk) 05:08, 8 December 2009 (UTC)
- Weird. One of the most outrageous methods included in that website is attributed to myself. I can think of a situation where I would have contributed it, but have absolutely no memory of doing so (though I do remember visiting the site in the past). Feels weird to see your name where you don't expect it, perhaps especially when you've got a fairly uncommon name -- I ain't no Bob Smith. Weird. --203.202.43.54 (talk) 09:14, 9 December 2009 (UTC)
- That's a humorous website - but some of his claims are a bit unscientific: "it may be possible to find or scrape together an approximately Earth-sized chunk of rock and simply to "flip" it all through a fourth spacial dimension, turning it all to antimatter at once."[dubious – discuss]. Nimur (talk) 06:08, 8 December 2009 (UTC)
- He knows. Read a little lower: "But since the proposed matter-to-antimatter flipping machine is probably complete science fiction....." Ariel. (talk) 08:10, 8 December 2009 (UTC)
- (1) There is no point in providing 3 (three) significant digits of your energy output when neither the notion of an "Earth-like planet" is well defined, nor the time-span, means, and area of delivery of the said 5.55×10^20 joules are specified. (2) To be fair: the Sun delivers about 1400 joules of light per second per square meter of the Earth cross-section. The Earth radius being about 6400 km, the said cross-section is approximately pi*6.4e6^2 ~= 1.3e14 m^2. That is, Earth receives about 2×10^17 joules of sunlight every second. That's 6×10^20 J in about an hour. If you deliver your energy over a large area over a long time (much longer than an hour), not much bad is gonna happen. (3) If you deliver it to the Earth's core, no-one would even notice. The heat capacity of iron is about 500 J/kg/K, so 5.55×10^20 joules will increase the temperature of 1×10^18 kg of iron by 1 K. The Earth's core weighs many orders of magnitude more than 1×10^18 kg, so forget it. (4) However, if you deliver your energy at a relatively shallow depth, it's going to produce a pretty nasty earthquake locally. The energy released in a magnitude 8 earthquake is about 4×10^18 J; magnitude 9 is about 30 times more energy. Even if I assume energy conversion close to 100% (I don't think that's possible), 5.55×10^20 J is between magnitude 9 and 10. No global extinction. Sorry :) --Dr Dima (talk) 06:10, 8 December 2009 (UTC)
- The exactness of 5.55 indicates that there is some context to this. Would to you care to give us the source? SpinningSpark 09:28, 8 December 2009 (UTC)
- After some calculation I estimate that 5.55×10^20 joules is equivalent to roughly a single 130-gigaton nuclear explosion, or about 130,000 one-megaton explosions. Strategically placed, that could probably kill all humans, but it would be a stretch for a random event. Googlemeister (talk) 14:57, 8 December 2009 (UTC)
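- A sanity check on that figure, using the standard definition 1 megaton of TNT = 4.184×10^15 J:

```python
E = 5.55e20                # J
J_PER_MEGATON = 4.184e15   # definitional

megatons = E / J_PER_MEGATON
print(f"{megatons:,.0f} Mt, i.e. about {megatons / 1000:.0f} Gt")  # ~132,700 Mt ~ 133 Gt
```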
- It depends ENTIRELY on how this energy is dispersed. For example, the sun dumps something like 1.3×10^17 joules onto the surface of the earth every second...an amount equal to your bomb roughly every hour of every day. A fairly small bomb (on this scale) could wipe out most land-dwelling life by producing a large enough tsunami to scour the continents clean - or by putting enough dust into the atmosphere to cause a 'nuclear winter'. It's hard to say - but knowing the raw energy alone isn't sufficient to allow for a meaningful answer. SteveBaker (talk) 18:32, 8 December 2009 (UTC)
- Right, to take it another way: according to my handy physics text, the energy needed to stop a typical pistol round is about the same as the energy needed to stop a baseball thrown at 90 mph. Because the baseball has an impact surface area something like 2 orders of magnitude larger, a baseball team will not regularly kill its catchers. Googlemeister (talk) 21:37, 8 December 2009 (UTC)
Ok, context. I'm writing a military speculative fiction novel and trying to keep it fairly hard. The scenario I have in mind is a 1016 kg ceramic rod hitting a planet's surface at just short of the speed of light, with the goal of destroying the planet's ecosystem. Unless I dropped a zero somewhere that should have roughly 5.55×10^20 joules of kinetic energy on impact. Horselover Frost (talk · edits) 22:41, 8 December 2009 (UTC)
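- For what it's worth, the stated energy is self-consistent: a sketch using the relativistic kinetic energy formula KE = (γ − 1)mc², reading the 1016 kg rest mass literally (as a reply below also does):

```python
import math

C = 2.99792458e8    # m/s, speed of light
m = 1016.0          # kg, rest mass of the rod as given
KE = 5.55e20        # J, target kinetic energy

gamma = 1 + KE / (m * C**2)          # from KE = (gamma - 1) * m * c^2
beta = math.sqrt(1 - 1 / gamma**2)   # v/c
print(f"gamma = {gamma:.2f}, v = {beta:.4f} c")  # gamma ~ 7.08, v ~ 0.99 c
```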
- We have an article on that concept: Kinetic bombardment. If you haven't already read it, I suggest you do. --Tango (talk) 23:01, 8 December 2009 (UTC)
- Yes, I've read it. The idea I'm using is closer to a relativistic kill vehicle. What I'm asking is if it's big enough to render a planet temporarily uninhabitable, and if it isn't how much bigger should it be. Horselover Frost (talk · edits) 23:11, 8 December 2009 (UTC)
- I do not know (and I do not know of anyone who has done research on) what the energy deposition curve looks like for a macroscopic relativistic projectile in the Earth magnetosphere, atmosphere, and crust. Does the energy deposition curve of such projectile in the Earth crust have a Bragg peak? What is the characteristic energy deposition depth scale? 100m? 1km? 10km? 100km? What is the primary stopping mechanism? What is the efficiency of projectile energy conversion into gamma radiation, into seismic waves, into kinetic and thermal energy of the debris? What kind of radioactive fallout to expect? How much dust will be kicked up into the stratosphere? Your guesses are as good as mine. Let me say this again: the problem is not just energy conversion. You need to know what the projectile kinetic energy is converted to, into what volume it is deposited, what kind of atmospheric and surface contamination it produces, and how that contamination spreads and decays. BTW, that should also depend on whether it hits ice/water, sand, or rock, and at what angle. --Dr Dima (talk) 00:08, 9 December 2009 (UTC)
- For one, you can up the energy in the projectile as much as you want by making it go a tiny bit faster. Given a single, compact one ton rest mass rod, I suspect the object will not have much of a chance of depositing energy into the ecosphere. It should go through the atmosphere so fast that there will not be much time for interaction, and the damage should be fairly localized. --Stephan Schulz (talk) 01:05, 9 December 2009 (UTC)
- Yeah - you could imagine something with incredible local intensity - but by the time it hits the ground it'll just keep going - shedding most of its energy FAR underground with little global consequence. I think a good place for our OP to start is with this handy gadget The Earth Impact Effects Program. You can play with parameters for earth-impactors and see how much damage they do. I'm not sure that the tool will be robust up to these crazy speeds though - the equations and assumptions might easily break down. But maybe you can come up with an Earth-destroyer with a more reasonable set of parameters. The PDF that they reference at the bottom of that page is a great tutorial on impact effects. SteveBaker (talk) 02:44, 9 December 2009 (UTC)
- Thanks for the link. It accepted the numbers, but after playing with it for a while I think I'm going to have to scrap the idea. While trying various values the effects went from strictly local to obliterating the planet with very little in between. I guess I'll go back to my solar shade idea instead. Horselover Frost (talk · edits) 04:02, 9 December 2009 (UTC)
- It would be all but impossible to destroy the ecosystem of the entire planet by hitting just one side of it. You could try to kick up some dust in the air, but that wouldn't destroy everything - many plants would manage with just a little light. You could never get full opacity - most dust will settle very fast, and lots of plants will survive. Also if you make your impactor very fast (and small) it would bury itself underground. You would need something large, but a little slower, for maximum impact.
- If your goal is to kill the life, but not the planet itself? I would suggest a gamma ray burst, in particular Gamma ray burst#Rates_and_impacts_on_life. To make a gamma ray burst a matter/anti-matter bomb would do the trick. If you don't want to deal with the nitric oxide theory, just have a number of bombs carefully spaced some hours apart, and at different latitudes. (Ask on the math desk for where to place the bombs for maximum coverage of a sphere.) I think 3, 8 hours apart ± 22.5 degrees from the equator would do it. Or maybe 5, 3 of them 8 hours apart at the equator, and one each at the poles. If you are shooting from far away, and above the ecliptic, you can overshoot the planet off to the side to get the opposite pole. Ariel. (talk) 10:41, 9 December 2009 (UTC)
- BTW, a matter/anti-matter bomb is pretty hard to build - as soon as the edges come in contact it will explode, and very little of the mass of the bomb will react. Instead make a spray cannon, and just spray some anti-hydrogen at the earth. Not too much. You'll make a beautiful fireball at the edge of the atmosphere, and lots of gamma rays and other radiation that would do a great job of sterilizing the earth. Let me know if you like the idea. Ariel. (talk) 10:49, 9 December 2009 (UTC)
- Yes - but remember that half of the stuff that 'explodes' is still antimatter which immediately hits more matter (the air, the ground, etc), explodes some more - and so on. Given an essentially infinite supply of matter to combine with, unless your bomb goes off in a hard vacuum, all of the antimatter WILL get turned into energy in very little time. Your design can be just a ball of antimatter shot into the atmosphere at moderate speed. There was a while when that was one of the theories for the Tunguska event. SteveBaker (talk) 14:29, 9 December 2009 (UTC)
- I don't want to shoot the anti-matter bomb at the earth. That will make a big explosion - but what I want is a gamma ray burst, not an explosion. I want the matter/anti-matter to annihilate in space, leaving just gamma rays, with no explosion, and no residual radiation. Which is tough to do, so I suggested very very dilute anti-matter at the edge of the atmosphere. Ariel. (talk) 21:35, 9 December 2009 (UTC)
Methods of Capital Punishment
So this guy in Ohio is set to get a single drug administered IV rather than the standard triple shot. [6] And the basis of his appeal is basically the bozos at the facility can't reliably find veins.
Which leads me to wonder why we even bother with lethal injection at all? American law insists on painless (or nearly so) executions, so what's wrong with putting a dude in a LazyBoy and slowly pumping all the oxygen out of the room over the course of an hour. Wouldn't the condemned just fall asleep/unconscious, and then eventually painlessly expire? 218.25.32.210 (talk) 05:02, 8 December 2009 (UTC)
- That would be a humane way to execute a dog or cat, but I'm not sure that it would be as humane for a human that was aware of his fate. I think I'd prefer something quicker and more irrevocable. APL (talk) 06:00, 8 December 2009 (UTC)
- I've always wondered why they don't use Euthanasia. It makes much more sense than a series of shots that may or may not be humane. Falconusp t c 06:07, 8 December 2009 (UTC)
- Euthanasia is a series of shots. For a variety of reasons, previous execution by lethal injection used multiple shots - a sedative and a muscle-relaxant, and finally potassium chloride. The new Ohio method uses a single barbiturate, without the sedatives or muscle relaxants. Ultimately, the argument boils down to legal definitions about "ethical" and "humane." Personally, I think these legal arguments are very different from a "common-sense based argument", for the same reason that legal claims always deviate from normal, rational, logical thought. Ethics tend to be subject to personal interpretation - so when the State makes an ethical claim, it's always subject to legalese bickering. My personal belief is that the execution would be more humane by certain other methods, such as firing squad or gas chamber, which are both still used in some countries. Lethal injection "pretends" to have a certain sterility and clean-ness which I feel is counter to the act of executing a criminal. If the guy deserves to die for his crimes, then he probably deserves to be shot for his crimes; otherwise, we have alternative corrections methods. Nimur (talk) 06:12, 8 December 2009 (UTC)
- I always thought they did not use gas because of the reminder of the nazi gas chambers. Gassing prisoners to death just sounds bad. Ariel. (talk) 08:08, 8 December 2009 (UTC)
- I doubt it has anything to do with that—considering it was still used well into the early 1970s (and was still used a bit afterwards), and has a lot of differences from the Nazi gas chambers. Lethal injection is now practically the standard but I don't think the Nazis have anything to do with that. --Mr.98 (talk) 13:50, 8 December 2009 (UTC)
- Another problem with death by injection (apart from the general dumbassyness of the death penalty) is that it requires the attending physicians to violate their professional code, e.g. the Hippocratic Oath or modern equivalents. See e.g. [7]. --Stephan Schulz (talk) 08:32, 8 December 2009 (UTC)
- My understanding is that there are no attending physicians, which is why you get people who can't find a vein in fifteen tries... 218.25.32.210 (talk) 09:09, 8 December 2009 (UTC)
- Well, according to the linked JAMA article, in 2007 17 of the 38 states with death penalty required a physician, while 18 more allowed a physician's participation. I suspect the problem of medical incompetency primarily arises in the states that do not require a physician and fail to find one willing and competent to do the deed. --Stephan Schulz (talk) 10:24, 8 December 2009 (UTC)
- Is it actually something doctors are usually good at? At least from TV shows (hardly a good source, I know), my impression is that even though it's a skill doctors are supposed to be good at, in reality it tends to be the nurses who do the job most of the time, and so they are the ones who are usually good at it, not doctors. Nil Einne (talk) 18:56, 8 December 2009 (UTC)
- I think this depends. If I remember right (in this case, that's a large if!), in Germany nurses are allowed to administer intramuscular and subcutaneous injections, but intravenous ones are restricted to licensed physicians. This used to be different in the GDR, and one of the smaller problems of reunification has been that the job profile of nurses has changed - we have a number of nurses qualified, trained, and experienced in procedures they are not allowed to do anymore. --Stephan Schulz (talk) 09:42, 9 December 2009 (UTC)
- Surely the most humane way of putting someone to death is nitrogen asphyxiation (nitrogen itself is not toxic; it simply displaces oxygen). Death occurs in about 15 minutes, but during that time you enter a state of euphoria. Michael Portillo did a documentary for the BBC about the death penalty in which he entered a partial state of nitrogen-induced hypoxia, but was pulled out before he died. No doctor needed, no straps, no wounds. --TammyMoet (talk) 10:46, 8 December 2009 (UTC)
- I have heard that proposed. One reason for rejecting it is that some people feel dying in a state of euphoria isn't appropriate for a criminal. --Tango (talk) 14:02, 8 December 2009 (UTC)
- ...and some people probably think beating them to death with a truncheon would be inappropriately gentle too. Portillo concluded that there was no method that fulfilled the conflicting criteria. --Dweller (talk) 16:36, 8 December 2009 (UTC)
- It would seem to me that the two contradictory goals of an execution (killing the prisoner without pain, but making the victim's relatives think the prisoner suffered) could be accomplished by destroying the prisoner's brain completely and rapidly with a pneumatic hammer or explosives. Horselover Frost (talk · edits) 00:45, 9 December 2009 (UTC)
- So the question of what kinds of technologies are legally permissible is a tough one, and hard to change, because if you end up on the wrong side of the Eighth Amendment, then you have a legal fiasco on your hands. See also Capital_punishment_in_the_United_States#Methods. For a really interesting film on a related topic, see Mr. Death: The Rise and Fall of Fred A. Leuchter, Jr.. --Mr.98 (talk) 13:50, 8 December 2009 (UTC)
- The sudden imposition of a "short sharp shock" in the French style would seem to satisfy the mostly painless requirement but I don't believe has ever been used in the U.S. But would the victims' families still fill the galleries to watch that execution? Perhaps the choice of method is not entirely dictated by the rights of the condemned. 75.41.110.200 (talk) 16:23, 8 December 2009 (UTC)
- There is significant evidence that the heads remain conscious for up to 30 seconds after being separated from the bodies. I don't know whether they are actually able to feel pain during those seconds, but I would need convincing that it didn't cause significant suffering. --Tango (talk) 16:33, 8 December 2009 (UTC)
- [citation needed]. Come on, this is the Reference Desk. Comet Tuttle (talk) 17:53, 8 December 2009 (UTC)
- Guillotine#Living_heads is a good overview of the fact and fiction surrounding this. Nimur (talk) 18:14, 8 December 2009 (UTC)
- Actually I watched a documentary where a guy did a pretty good study of execution methods, and the one he came up with as most humane was similar to what the OP suggested: Hypoxia_(medical). It's extremely cheap, doesn't take very long, is quite impossible to stuff up and is very humane - in fact, it even gives the person a little bit of a high before they die. The presenter actually submitted himself to an experiment to experience hypoxia and was taken pretty close to passing out; it was extremely interesting. In the experiment there is a big red button right in front of him, and at ANY time when he feels himself in any danger he can press the red button to stop the experiment - but for the entire time he thinks he is completely fine and does not press the button; he had to be rescued. When he watches the footage back, he's quite shocked to see how delirious and close to passing out he was - he thought he was doing fine the whole time. Would you believe, when he got his data and petitioned some people - politicians and prison wardens and such - regarding this method of execution, guess what the reaction was? Everyone vehemently opposed his idea, on the grounds that people who are executed should feel a bit of fear and remorse when they die; it's not enough of a punishment if they go out on a high. I don't remember the name of the doco. Vespine (talk) 21:29, 8 December 2009 (UTC)
- I just realised that everything I said is already covered in a couple of posts above.. I only skimmed them the 1st time and missed it.. sorry.. Vespine (talk) 23:40, 8 December 2009 (UTC)
- There are usually four reasons for using punishment of any kind - and capital punishment only relates to three of them:
- It removes the criminal from society - thereby preventing them from reoffending.
- It is a kind of revenge for the victims - perhaps easing their mental state.
- It serves as a deterrent to others.
- It provides a means to try to rehabilitate the criminal (in non-capital punishment situations - obviously).
- So of the first three of those things: which gain benefit from a more brutal approach? Clearly, so long as the criminal dies - you've scored on the first criterion.
- For the second reason - I'm not sure it really helps the victims to have the criminal die painfully - although they may claim it does - at the very least, if we truly believe that this is the reason - we should be asking the victims whether a more or less humane death is required to make them feel better. I'd worry that perhaps the horrible death of the criminal might weigh on their conscience later. But I don't think it helps much.
- Perhaps in the third case it makes a difference - but since whatever method is used is generally proclaimed as "humane" (whether it actually is or not), it probably doesn't matter here either. But I doubt that criminals really consider too carefully the precise details of the punishment when they commit crimes - because clearly, if they were thinking coherently, they wouldn't do such a serious thing anyway. I might imagine a criminal weighing the balance between stealing some money and a few years in jail - but I can't imagine anything worth the risk of dying over. So it can only be that they don't believe they'll get caught...hence changing small details of the execution method probably won't make the slightest difference to the rate that these super-serious crimes are committed.
- There is of course another option - don't execute the prisoner at all. I think it's worth knowing (and I wish I could come up with a reference - but I don't recall where I read it) that keeping someone in prison for their entire life is actually cheaper in raw dollar terms than administering the death penalty. The extra mandatory appeals and the cost of the actual execution is higher than the cost of jail time...on average. I find that surprising - but I understand it to be true. It's also worth pointing out that sometimes people are later found not to be guilty after all - and the death penalty is really a bit too final. Most of all - I think it's actually easier on the criminal to get a quick death than to languish in the prison system for 30 years. I don't think we're actually dissuading anyone from committing crimes this way - and perhaps the 30-year long grinding horror of life imprisonment without any hope of parole is an even less humane solution. The idea that you'll never have anything nice to look forward to - never have any freedom ever again - that's way more depressing than a fairly quick, relatively painless death...IMHO.
- Steve, how old are you expecting these criminals to be when they're caught given you expect the remainder of "their entire life" to be 30 years? 50? --203.202.43.54 (talk) 08:11, 9 December 2009 (UTC)
- I was trying to express how bad it could be. 30 years is on the low end of how bad it could be...the exact number doesn't matter. SteveBaker (talk) 14:23, 9 December 2009 (UTC)
- Of course, the prisoners for life can have hope. They might escape, or they might somehow get a pardon. In the US I have heard costs to incarcerate prisoners quoted at $25,000-$35,000 a year. Googlemeister (talk) 16:50, 10 December 2009 (UTC)
Air versus Marine Propeller Design
Why do airplane propellers use a twisted airfoil shape for their blades while marine propellers use more of a screw shape? Can the force of lift produced by a rotating airfoil be considered analogous to thrust? Thanks in advance —Preceding unsigned comment added by 141.213.50.137 (talk) 06:02, 8 December 2009 (UTC)
- Please allow me to add a rider sub-question to your question - would it be reasonable to say that air propellers pull while marine propellers push? Or is there really no distinction? 218.25.32.210 (talk) 06:06, 8 December 2009 (UTC)
- Air propellers do not pull. Only push. Newton's second law. Plus some fun stuff about change in air pressure before and after. Hmm, I suppose that given that water is incompressible, and air is compressible, some effects might be different. But I bet they are minor. Don't know for sure though. And actually, with water you have to avoid cavitation if you go too fast, which is a result of water not being compressible. With air you don't have to worry about that. But I would definitely not simplify it to pull/push. Ariel. (talk) 08:06, 8 December 2009 (UTC)
- Water and air have different viscosity and different density. Primarily for these reasons, the optimal shape of a propeller for most efficient thrust generation is different. The distinction between "pulling" and "pushing" is fairly artificial in this context - I wouldn't use it as part of any explanation for the different shapes. Our propeller article is really a great overview of the qualitative differences between air and marine propellers. It also provides some equations and numeric parameters, so you can calculate the efficiency and other parameters for standard shapes yourself. Nimur (talk) 07:03, 8 December 2009 (UTC)
Although most propeller airplanes have the props in front, there have been plenty of designs with them at the back, where they are known as pusher propellers. See that page for discussion of why one or the other design is used. There have even been a few planes with propellers in both places, like the NC-4 and the Rutan Voyager.
If you think of the size of typical boat propellers in comparison with the size of the boat, you will see that if the propeller was in front, the entire stream of water pushed backward by it would hit the front of the boat and tend to slow it down. This is not such an issue with airplanes because the props are larger and the stream of air can flow around the airplane easily, especially in the case of props mounted on the wings. --Anonymous, 08:38 UTC, December 8, 2009.
- Fixed your link. --Sean 14:06, 8 December 2009 (UTC)
- Oops, thanks. I meant to check whether it'd work the way I had it before, before saving, but forgot to. --Anon, 23:32 UTC, Dec. 8.
- I don't think there is an actual qualitative difference between water propellers and air propellers. They can both be 'pushers' or 'pullers' - they can each have different numbers of blades - and they both work by 'screwing' through the air (an old-fashioned name for aircraft propellers is "airscrew"). Think of them like a wood screw being driven into a plank. The differences are quantitative - the angle of the blade to the 'fluid', the amount of curvature (they are like little airplane wings in cross-section), the amount of pitch and the length-to-chord ratio. In that sense, they are like little wings - they have an angle of attack, a length and a chord-width - and those numbers are determined by the rate of rotation and the density of the medium through which they are travelling. All of the ideas behind airfoils and wings apply here. Increasing the pitch makes for more thrust - but also more drag. If you make the pitch too steep, an airplane wing will be said to "stall" - where a propeller might be said to "cavitate" - it's the same thing. Notice how different kinds of plane have different wing shapes - things like gliders have long, thin wings - supersonic jets have very deep, triangular wings (think "Concorde") - these design differences come about in exactly the same way that propellers come out differently when you optimise them for a dense fluid like water or a thin one like air. Detailed differences appear because some are designed for speed where others are designed for fuel efficiency or some other design criterion. SteveBaker (talk) 18:18, 8 December 2009 (UTC)
- Going to have to disagree a bit with you Steve. While a propeller is very much like a screw for boats, on an airplane the propeller is far more akin to a wing spinning in a circle very quickly. The same aerodynamic forces on the wings are on the propeller, only in this case the lift is in the forward direction. If you were to have the same size propeller on the airplane and it was flat in cross section as a boat's propeller is, it would be far less efficient, even to the point of not providing enough thrust to get the plane off of the ground. The forces from the lift are far greater than those of the corkscrewing motion. In a non-compressible (or rather barely compressible) fluid like water, the aerodynamic lift forces are much smaller and the majority of your thrust will come from the actual corkscrew motion. Googlemeister (talk) 21:28, 8 December 2009 (UTC)
- Wings get almost all of their lift from the angle of attack they have to the airflow. The nonsense put about regarding the Bernoulli principle creating the majority of the lift is easily dispelled by making a model plane with a rectangular cross-section wing and demonstrating that it flies just fine (my father once did that to win a bet - and he was right - the plane flew pretty well considering the drag it had from the vertical leading edge!)...so the angle of attack is key - and whether you consider that as a rotating wing or an 'air screw' is entirely a matter of which words you want to use, because the angle of attack is what makes (for example) a wood screw go into wood. You can get screws for screwing into metal with a choice of finer and steeper pitches - and the amount of torque you need to screw them in - and the speed at which they go in - changes just like changing the pitch on a variable-pitch propeller on an aircraft changes the amount of thrust you get as a function of speed. It's exactly the same thing. SteveBaker (talk) 23:52, 8 December 2009 (UTC)
- Not really. The Bernoulli principle is placed into the calculation of lift by being mathematically described as the lift coefficient. A sheet of plywood (as well as a symmetrical airfoil) will have a lift coefficient of 0 at 0 degrees angle of attack and thus will not generate any lift. A cambered airfoil, on the other hand (say an SM701-shaped airfoil), will have a lift coefficient of something like 0.5 or 0.6 at 0 angle of attack. Now, without knowing the wing area or the air density or velocity it is not possible to give the actual amount of lift this generates, but it is blatantly obvious that the Bernoulli contribution to lift is not something that you can just ignore. For an A380 in steady level flight, this lift would be on the order of several hundred thousand pounds, at 0 angle of attack. Now I will not say you cannot build an airplane with a symmetric airfoil, but such a craft will be more difficult to control, and will have a worse lift-to-drag ratio than a cambered airfoil, as your flat-winged craft would need to maintain a positive angle of attack to maintain steady level flight. Googlemeister (talk) 22:09, 10 December 2009 (UTC)
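- To put rough numbers on the scale being argued about, a sketch of the standard lift equation L = ½ρv²SC_L (the altitude, speed and wing-area values below are assumed round figures for an A380-class aircraft, chosen only for illustration):

```python
def lift_newtons(rho, v, wing_area, c_l):
    # Standard lift equation: L = 0.5 * rho * v^2 * S * C_L
    return 0.5 * rho * v**2 * wing_area * c_l

rho = 0.38     # kg/m^3 at ~11 km cruise altitude (assumed)
v = 250.0      # m/s cruise speed (assumed)
S = 845.0      # m^2, approximate A380 wing reference area
C_L = 0.5      # the zero-alpha lift coefficient quoted above

L = lift_newtons(rho, v, S, C_L)
print(f"L ~ {L:.2e} N ~ {L / 4.448:,.0f} lbf")  # on the order of 10^6 lbf
```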
Dreams
I've been having bad dreams for the past couple of weeks. What's a way of combatting bad dreams? jc iindyysgvxc (my contributions) 11:14, 8 December 2009 (UTC)
- Not sleeping? No cheese before bedtime? Sorry, I'm not aware of anything you can do to change your dreams - my understanding is they're not something you can influence. 194.221.133.226 (talk) 11:53, 8 December 2009 (UTC)
- We can influence our dreams. See tetris effect. --Mark PEA (talk) 18:19, 8 December 2009 (UTC)
- We can't give medical advice, I'm afraid. If you feel the need for help with them then you should get professional help from a doctor or therapist. --Tango (talk) 11:55, 8 December 2009 (UTC)
- We can however point you to Nightmare. Dmcq (talk) 12:54, 8 December 2009 (UTC)
- The thing about dreams is that you are quite unaware of them unless you happen to wake up during one of them. It seems very likely that they are merely the brain's way of "defragging its hard drive", to put it in computer terms. Memories are shuffled around and reorganized - and while that's happening, things get a bit crazy. So the trick here is to not wake up during them. You can't stop them from happening - nor would that be particularly desirable. So try to get as comfortable as possible - make sure the room is quiet - try to sleep for longer without alarm clocks forcing you to wake up before the "defragging" is complete. Anything that lets you get your "REM" sleep done without interruption is going to prevent you from being consciously aware of the bad dream. Better sleep means fewer dreams - bad or good. SteveBaker (talk) 18:03, 8 December 2009 (UTC)
- And if you do wake up during a bad dream, I find it is often helpful to get up, turn the light on, empty your bladder, have a small drink of water, and generally interrupt the thought process, blow the dream out of your mind (they're usually quite 'fragile'), before going back to bed. This dream dispersed, you'll likely have a completely different dream in the next go around. And, unhelpful as it might sound, try not to worry too much about it! Dreams seem to bring up the things you've been thinking and worrying about: the less you worry, the less you'll dream about the worrying thing in a bad way.
- Sometimes, of course, a dream is a helpful message that you are worried about something. When I have a specific sort of bad dream about my family, that generally tells me it's time to visit again: I've drifted out of touch. 86.166.148.95 (talk) 19:55, 8 December 2009 (UTC)
- Before going to bed pick a topic you want to dream about, and think, and imagine about it extensively. Not just a fleeting thought, but really think about it. You'll probably dream about it. Another thing to do is if you do get a bad dream, modify it. If you are being chased by a monster, imagine a bite proof suit and a weapon for yourself. It's OK to do that after you wake up. If you imagine it hard enough (write a whole script and story in your mind after), you can change your memory of the event. You will also, over time, train yourself to do it while sleeping. Ariel. (talk) 20:29, 8 December 2009 (UTC)
- I don't think there is ANY evidence for that - there is an enormous amount of nonsense said about dreams and very little of it is true. Show me some citations please. The evidence we do have is that they happen during REM sleep - and if you don't wake up during REM - you don't remember them at all. So undisturbed sleep is the key here. Many people claim that the horrible dream woke them up - but the evidence is that the reverse is the case - you woke up for some other reason - and therefore remember the dream. Dreaming is clearly something the brain needs to do - and by far the most reasonable explanation is that it's reshuffling memories around to improve organisation and recall. Even if you could influence what happens - it would probably be injurious to your mental state because you'd be preventing the optimal rearrangement of memories. SteveBaker (talk) 23:43, 8 December 2009 (UTC)
- Speaking of nonsense, you often mention "defragging the hard drive", but I can't imagine what it really means in terms of human memories. I read in hypnagogia that "suppression of REM sleep due to antidepressants and lesions to the brainstem has not been found to produce detrimental effects on cognition". So REM sleep might not in fact do anything much. The REM sleep article states that it helps with creativity, but without the involvement of memory. (Though memory is a nebulous concept, admittedly.) 213.122.50.56 (talk) 14:52, 9 December 2009 (UTC)
- Steve, maybe your dreams are not influenced by what you were thinking about or imagining. Perhaps you never have dreams you can influence (in as much as you have any free will over anything you think or do). Perhaps you never fight your way out of a bad dream, and never wake at the exact same point in a recurring dream many many times. Perhaps your dreams only feature things you've actually experienced (and hence have memories of), shuffled up (how boring that would be). But in that case your dreams are entirely unlike my dreams, or the dreams of my siblings, or the dreams of my friends. Certainly there is a lot of nonsense said about dreams, and much of it would be dispelled by simply comparing the theory to people's experience. Your description of dreams doesn't match the things I have been calling dreams all my life. 86.166.148.95 (talk) 20:52, 9 December 2009 (UTC)
Body, spinal cord and partial brain transplant?
Head transplants suffer from the problem that there is no means of re-attaching the spinal cord, leaving the subject with paralysis. However, supposing the donor body, spinal cord and its associated brain structures were transplanted into a recipient brain along with nerve growth factors-would the recipient brain be able to wire itself in to the donor? P.S. I have no intention of carrying out this experiment.Trevor Loughlin (talk) 11:42, 8 December 2009 (UTC)
- I have a sneaking suspicion that if I answer your question, you will actually perform this -- nonetheless, I will let the next editor mark this as a violation of appropriateness. :)
- Brain transplants are problematic because the brain is not only a physical organ as is the kidney, but also the source of a patient's sense of self. Consider it analogous to a situation in which your monitor snaps off your laptop while still under warranty and you send it back to the company. The tech transfers all your data to a new laptop so that you now possess a "new computer" in the sense of a body but the "same computer" in the sense of all of your previous data (files, uploads, etc.) -- in a sense, your organ recipient here will likely take on the identity of the donor, rather than resume his or her previous status. That being said, it's been a classic "rule" in physiology that the central nervous system either cannot, does not, or is completely inadequate/unpredictable in its ability to regenerate. DRosenbach (Talk | Contribs) 14:14, 8 December 2009 (UTC)
- There is the question as to how to keep the body alive while the nerves regrow, if that is even possible. Googlemeister (talk) 14:42, 8 December 2009 (UTC)
- "While the nerves regrow" This is the nub of the whole problem. If there was an easy way - or any way - to get the spinal cord to regrow and reconnect then millions of paraplegic patients would be jumping up and down, and I mean that literally. Of course it would be a big downer for the Paralympic Games. Caesar's Daddy (talk) 14:56, 8 December 2009 (UTC)
- The brain and spinal cord cannot regenerate, but peripheral nerves can. It takes a long time, though. If fibers connecting the spine to the hand are destroyed, it takes months for them to regrow, and there are various issues that may cause the regrowth to fail. Also, this scenario of introducing foreign tissue into the body and requiring it to extend projections through every part creates rejection issues that are just about as nasty as it is possible to imagine. But if you could somehow keep the body alive for months in the absence of any neural control over the lungs, digestive system, etc, I don't see anything in principle that would absolutely prohibit the operation. Looie496 (talk) 16:55, 8 December 2009 (UTC)
- You don't just need to keep the body alive, you need to keep the brain alive. For most organs you have a few hours to carry out the transplant. For the brain you would have a few minutes before irreparable brain damage was caused. I don't think you could remove the brain from one body and get it into the other and connected up to blood vessels fast enough. --Tango (talk) 17:02, 8 December 2009 (UTC)
- You could hook the donor brain to a machine that would be responsible for perfusion prior to completely severing its connection to the donor body and allow that vascular supply to remain until after the recipient vasculature is connected. DRosenbach (Talk | Contribs) 17:56, 8 December 2009 (UTC)
- Neuroregeneration is relevant to this. Fences&Windows 17:41, 8 December 2009 (UTC)
- I don't think any of you doubters bothered to read the head transplant article. Supposedly a monkey survived this dubious experiment for a while. There are inline citations but I still don't believe it. Comet Tuttle (talk) 17:51, 8 December 2009 (UTC)
- All the cases described in that article seem to be about transplanting a head onto a body that still has its original head - that makes it much easier. The original brain controls all the bodies systems so the transplanted head can be severely brain damaged without killing it. --Tango (talk) 18:57, 8 December 2009 (UTC)
Valence electron counts
Carbon prefers a total valence electron count of 8, whereas many transition metal complexes prefer a total valence electron count of 18. Why is this? Alaphent (talk) 12:18, 8 December 2009 (UTC)
- Carbon isn't a transition metal, first off. Second, Carbon is in period 2, so its electronic configuration only involves 1s, 2s, and 2p. To get 18 you need the third energy level. ~ Amory (u • t • c) 13:49, 8 December 2009 (UTC)
- So to answer your question more directly yet with a spin (pun intended), it's not that carbon has an affinity towards having 8 electrons in its valence for any mystical reason other than conforming to the general phenomenon that all atoms have an affinity for having their valence shell full. Carbon, as mentioned above, has the potential for 8 electrons in its valence but possesses only 6 electrons -- 2 in its first shell and 4 in its second (with room for 4 more to make 8, which is why carbon generally bonds with four other atoms in the form of single bonds, two other atoms with double bonds, or a double and two singles). Because organic compounds and creatures involve lots of reactions between the COHNS atoms (carbon, oxygen, hydrogen, nitrogen and sulfur), people tend to focus on valences of 8, but really, atoms in higher periods will fill their respective shells. The concept is the same, though, and halides will need one electron to complete their valence shells, regardless of the total electron count. DRosenbach (Talk | Contribs) 14:28, 8 December 2009 (UTC)
- See also detailed discussion under 18-electron rule. –Henning Makholm (talk) 00:45, 9 December 2009 (UTC)
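- The counting itself is simple bookkeeping; a sketch using the neutral (covalent) counting convention, with standard textbook donation numbers:

```python
# Electrons donated by each ligand under the neutral counting convention
LIGAND_DONATION = {"H": 1, "CO": 2, "Cp": 5}

def electron_count(central_atom_valence, ligands):
    # Central atom's valence electrons plus each ligand's donation
    return central_atom_valence + sum(LIGAND_DONATION[l] for l in ligands)

print(electron_count(4, ["H"] * 4))    # CH4: 4 + 4*1 = 8, carbon's octet
print(electron_count(8, ["CO"] * 5))   # Fe(CO)5: 8 + 5*2 = 18
print(electron_count(8, ["Cp"] * 2))   # ferrocene: 8 + 2*5 = 18
```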
Anesthesia
How does it work? Accdude92 (talk to me!) (sign) 14:17, 8 December 2009 (UTC)
- Have you read our article on anesthesia?--Shantavira|feed me 14:39, 8 December 2009 (UTC)
- From memory (and without reading the anaesthesia article), the answer (for general anaesthetics) is: we really don't know. --203.202.43.54 (talk) 08:31, 9 December 2009 (UTC)
ref:HEAT MODELLING
Actually that code came from a derivation of a Fourier heat equation with a heat-generation term and the transient part kept alive. When solving the code for 5 points, I am getting 4 solution plots and one y=0 solution. All solutions are homogeneous (they pass through the origin). What can be the physical significance of the y=0 solution? Is the solution already optimized? SCI-hunter 220.225.98.251 (talk) —Preceding unsigned comment added by SCI-hunter (talk • contribs) 17:27, 8 December 2009 (UTC)
- Wikipedia does not have an article called Heat modelling. Can you point out which code you mean? Cuddlyable3 (talk) 18:23, 8 December 2009 (UTC)
I am referring to the code discussed on 5th Dec, article 2.3 on this page only. [8] —Preceding unsigned comment added by SCI-hunter (talk • contribs) 01:27, 9 December 2009 (UTC)
- Stop. You have posted the same problem at the Mathematics desk and Science desk. I suggest the latter discussion is the place for you to add any follow-on comments rather than spreading your problem over 3 sections. Please read carefully the responses you have received. Please sign your posts by typing four tildes at the end. Cuddlyable3 (talk) 17:17, 9 December 2009 (UTC)
How quickly does caffeine evaporate or decompose?
I hate the taste and smell of all coffee, so I make it infrequently in large pots and keep it in the fridge, mixing it with warm milk and chocolate later. Old coffee tastes the same (equally bad) to me as freshly brewed. My only goal here is to dose myself with the stimulant caffeine.
Should I be stoppering the coffeepot and/or making it more frequently? In particular: How long does it take for half of the caffeine to evaporate away? Does it decompose in solution? Thank you for your kind attention to my artificial mental alertness. 99.56.137.179 (talk) 18:40, 8 December 2009 (UTC)
- I don't think caffeine in coffee evaporates or decomposes to a significant degree. If it did, decaffeination wouldn't be such hard work. However, there are plenty of other sources of caffeine you could try. Tea, coke, energy drinks, caffeine pills, etc., etc. --Tango (talk) 19:00, 8 December 2009 (UTC)
- The only bad thing that happens when you keep coffee sitting around, in my experience, is that eventually mold grows on it, even in the refrigerator. Stoppering it will probably prevent that. Looie496 (talk) 19:04, 8 December 2009 (UTC)
- Skip the coffee - if all you want is caffeine, go buy a bottle of "Nodoz" caffeine pills - or go to www.thinkgeek.com and buy some penguin-brand caffeinated mints. One of those has about the same caffeine as three cups of coffee or about a dozen cans of Coke. SteveBaker (talk) 19:30, 8 December 2009 (UTC)
- Thank you, but I am not sure of the economics of that. More importantly, I like the ability to titrate on an as-needed basis by sipping from a cup. Pills wouldn't allow that kind of control. 99.56.137.179 (talk) 19:50, 8 December 2009 (UTC)
- Try tea. Do not make the mistake of brewing it for too long - remove the tea/bag within one or two minutes of pouring hot water into the teapot. Another and better alternative is to give up caffeine completely: after suffering withdrawal for one or two weeks, you will feel alert all the time as if you had just drunk a cup of coffee, and sleep much better and wake up feeling alert and refreshed. 78.149.206.42 (talk) 20:15, 8 December 2009 (UTC)
- There is certainly a build-up of tolerance to caffeine if you use it all the time. The trick is to not have high doses of the stuff every single day - because it basically stops working after a while. It's most effective when you use it for a couple of days - then stop for at least a week or so. You don't get withdrawal symptoms and you don't build up that tolerance that forces you to have to take more and more of it to produce the desired effect. Caffeine is an exceedingly well-studied drug. There are LOTS of details about effective doses and tolerance issues in our article - you can easily take advantage of what it says and get the benefits of an occasional boost without the issues of a build-up of tolerance. SteveBaker (talk) 23:37, 8 December 2009 (UTC)
- Oddly, I don't find any of this to be true from personal experience. I've been drinking tea constantly for 20 years or so and it hasn't stopped working. Recently I gave up for a month out of curiosity, and the only effect was that I spent the month feeling as if I hadn't just had a cup of tea. I neither had withdrawal symptoms nor a special holy buzz of natural purity, just the absence of the buzz of a cup of tea. Oh, and 1 to 2 minutes is pathetic (the packets generally recommend 5 minutes, but sometimes I leave the bag in, doesn't seem to make much difference). Maybe 1 to 2 minutes for a total tea n00b who's still acquiring the taste, though. Have it with milk, obviously, or it's nasty. 213.122.50.56 (talk) 15:26, 9 December 2009 (UTC)
- Ew! Stewed tea! Unless you're using some type of tea with a slower brew time, 5 minutes is pushing it. 'Everyday' teabags like PG tips only take a couple of minutes. Leaving it in = stewed and bitter. A nice cup of tea isn't bitter: it tastes tasty and almost meaty. Builder's tea has its place, but isn't some sort of 'grownup' 'ultimate' version. Leaving the teabag in past proper brewing time doesn't lead to a stronger version of the nice tea taste: it leads to stewed tea with a completely different flavour profile. Just like removing a cake from the oven once it's cooked doesn't make you a cake n00b: burnt cake tastes burnt and bitter. I can only assume that people who leave the bag in don't really like the taste of tea, so don't notice the difference when it's stewed. Sugar and loads of milk will cover the taste anyway. If that's what you want. 86.166.148.95 (talk) 20:33, 9 December 2009 (UTC)
- If you want to have caffeine, why not drink cola instead? It adds lots of sugar, but you get rid of the bitter coffee taste. I also discovered flat, paper-like energy products (there is also similar chewing gum) filled with caffeine. - Mgm|(talk) 12:13, 9 December 2009 (UTC)
- Obviously it's up to you what you do, but I would recommend thinking it through before you start using lots of caffeine, and possibly develop a dependence on it. Sorry, I know that's not what you were asking. Falconusp t c 12:21, 9 December 2009 (UTC)
- This "dependance" thing is very overrated. I drink coffee at work - and regularly give it up for a week or two when I'm on vacation and over holidays. The withdrawal symptoms are WAY overrated by the anti-caffeine fanatics. A mild headache once in a while for maybe a day - easily fixed with an asperin. Drinking a can of Coke or Pepsi specifically to get a shot of caffeine is also kinda silly - there is only about a quarter of the amount of caffeine in a can of coke compared to a cup of regular filter coffee (Coke: 34mg, Filter coffee 115 to 175mg)...or to put it another way, a couple of cups of DECAFFEINATED coffee have the same amount of caffeine as a can of Coke! (Yes, you holier-than-thou folks who swig back the decaff...it's not zero caffeine...you could be getting as much as 15mg per cup...that's a US standard cup - not an industry-standard-mug of the stuff!) SteveBaker (talk) 14:17, 9 December 2009 (UTC)
- One or two people have said that they feel little different when giving up caffeine, which may be because they do not drink much tea or coffee during the day. But the OP seems like a serious addict taking in large doses of something they say they do not like. I think the OP's caffeine consumption has got out of hand and would urge them, for their psychological health and probably their physiological health, to at least reduce it. There is, I believe, less caffeine in tea than coffee, so a switch to tea would be a good idea. As stated above, do not let the tea stew as it becomes bitter. Darjeeling is said by some to be the best tea, although myself I favour loose-leaf China tea. 89.243.39.175 (talk) 10:35, 11 December 2009 (UTC)
Medical advice is inappropriate. SinglePurpose393 (talk) 19:58, 11 December 2009 (UTC)
Modelling sound in a breeze
I'm wondering how to mathematically model how the intensity of sound diminishes from its source, with the added complication of a steady breeze. The breeze would make the sound travel further in some directions. The breeze is gentle enough not to add any further noise. Does anyone have any idea how to do this please? 78.149.206.42 (talk) 20:09, 8 December 2009 (UTC)
- Such a breeze is equivalent to the source (and any stationary observers) moving with the opposite velocity in still air. Does that help? --Tardis (talk) 20:29, 8 December 2009 (UTC)
Got a formula for that please? I'm perplexed by what happens upwind of the noise source, since you can still hear a noise-source when standing upwind of it. 78.149.206.42 (talk) 21:48, 8 December 2009 (UTC)
- Unless the breeze is blowing faster than the speed of sound, you will always hear the noise source. --Jayron32 22:11, 8 December 2009 (UTC)
- The article Doppler effect explains what happens. Looie496 (talk) 23:03, 8 December 2009 (UTC)
- In terms of distance - imagine that the air is standing still and the world is moving past it - because from the point of view of the sound wave, that's exactly what's happening. You're going to get some doppler shift because of the way the air moves past the sound source and destination - but that won't affect the distance much. So in still air, it is roughly true to say that the intensity of the sound decreases as the square of the range - which (for constant speed-of-sound) means that it decreases as the square of the time it takes to get somewhere.
- When there is a wind blowing, that doesn't change - so the intensity of the sound at a given distance is greater down-wind than it is up-wind because the downwind sound is moving at the speed of sound in still air plus the wind-speed - and on the upwind side, it's the speed of sound in still air MINUS the wind speed. Given that the speed of sound is somewhere around 700mph (depending on a lot of variables) then a gentle 7mph wind will alter the time it takes to get somewhere by plus or minus 1% - and the effect that has on the intensity (which is the inverse square of that time) will therefore vary depending on how far away you are.
- There is a second order effect - the inverse-range-squared thing is only an approximation because frictional forces in the air are absorbing some of the sound...that wouldn't matter much except that - because of the doppler effect - the frequency of the sound will also change slightly because of the wind. The attenuation of sound in air varies with frequency - the higher frequencies being more strongly attenuated than the lower frequencies. This makes the quality of the sound change as the higher frequencies become inaudible faster than the lower frequencies. The effect of this on overall "intensity" gets complicated to estimate because it depends on the frequency components of the sound in the first place. SteveBaker (talk) 23:24, 8 December 2009 (UTC)
- The steady breeze model may be unrealistic. Wind will move faster with height, and real air will probably have turbulence. Graeme Bartlett (talk) 02:33, 9 December 2009 (UTC)
How do you think I should modify the standard one over distance squared formula to include the breeze? 84.13.190.195 (talk) 11:09, 9 December 2009 (UTC)
Perhaps like this:
                 K
Intensity = ---------------------------
            ( d ( 1 - v cos(ø) / V ) )²

where K = constant
      d = distance source to receiver
      v = breeze speed
      ø = angle between sound path and breeze
      V = speed of sound
Cuddlyable3 (talk) 17:53, 9 December 2009 (UTC)
Thanks, I suppose I ought to kick my lazy brain to get it to work out my own version of the formula and compare it with yours. What would the K be please? 89.242.147.237 (talk) 22:54, 10 December 2009 (UTC)
- K depends on the source, and on the frequency involved (if it's a contrabassoon that's out in a breeze, its intensity at high pitches will be very small even if it's played loudly and you're nearby and downwind). It has units of watts, if that helps. --Tardis (talk) 23:04, 10 December 2009 (UTC)
Wouldn't K have to be the sound intensity at the source of the sound? 92.29.113.54 (talk) 20:15, 11 December 2009 (UTC)
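For anyone wanting to experiment with the numbers, here is a minimal Python sketch that evaluates the formula suggested above (K is left arbitrary, and the wind correction is just the first-order one proposed, not a validated acoustics model):

import math

# I = K / (d * (1 - (v/V) * cos(phi)))^2, per the formula above.
# K is arbitrary here; v = breeze speed, V = speed of sound (m/s).
def intensity(d, phi_deg, v=3.0, V=343.0, K=1.0):
    phi = math.radians(phi_deg)
    return K / (d * (1.0 - (v / V) * math.cos(phi)))**2

for phi in (0, 90, 180):                # downwind, crosswind, upwind
    print(phi, intensity(100.0, phi))
# Downwind (phi = 0) gives a slightly larger intensity at the same distance.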
Visual acuity measurements
Could vision be measured with other numbers besides 20/x or 6/x? I thought 6/20 or 6/12 is metric and 20/70 and 20/40 is customary. Do some people calculate by 4/15 or 4/8?--209.129.85.4 (talk) 20:24, 8 December 2009 (UTC)
- The reason for the x/20 or x/6 is that the measurements are based on the ability to see features on an eye chart at a distance of 20 feet (in imperial units) or 6 meters (about the same distance in metric units). For a lot more detail, you'll want to have a look at our article: Visual acuity#Visual acuity expression. Apparently in some countries it is an accepted practice to reduce the value to a decimal (that is, 10/20 vision can be written as 0.50). TenOfAllTrades(talk) 20:58, 8 December 2009 (UTC)
- Careful - the 20 (or 6) goes first. 20/10 is vision twice as good as "normal", and would be 2.00 in decimal. 0.50 would be 20/40. --Tango (talk) 21:23, 8 December 2009 (UTC)
- And you could say 20/40, for example, as "this person can see at 20' what a 'normal' person can see at 40'" —Preceding unsigned comment added by 203.202.43.54 (talk) 08:39, 9 December 2009 (UTC)
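(The conversion to decimal is simple enough to script - a trivial sketch, with an illustrative function name rather than any standard API:)

# Decimal acuity = test distance / distance at which a "normal" eye
# reads the same line. Function name is illustrative only.
def snellen_to_decimal(test_distance, line_distance):
    return test_distance / line_distance

print(snellen_to_decimal(20, 40))   # 20/40 -> 0.5
print(snellen_to_decimal(20, 10))   # 20/10 -> 2.0
print(snellen_to_decimal(6, 12))    # 6/12  -> 0.5 on a metric chart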
December 9
mathematica model of diffusion
I don't know if my problem is how I implemented the code. I am simulating diffusion of nitrogen into a metal with a constant surface concentration to contrast with an analytic solution to diffusion. The system is 20 micrometres deep (to evaluate how the concentration at 10 micrometres changes over time) -- there is no mass transfer through the end node.
(* Calculating the Constants *)
Dlt = 0.144816767
Dif = 1.381 * 10^-9
Dlx = 2 * 10^-5
Const = Dlt*Dif/((Dlx)^2)

(* Initialising the Array *)
s = Array[0, {24859, 101}]
s[[1]] = Table[0, {i, 101}]

(* Setting up constant surface concentration for All t *)
s[[All, 1]] = 0.0002

(* Setting up general concentration-calculating algorithm for each position in a row t *)
c[t_, n_] := s[[t - 1, n]] + Const*(s[[t - 1, n + 1]] - 2*s[[t - 1, n]] + s[[t - 1, n - 1]])

(* Assembling a data row of iteratively-calculated positions for each array row t)
f[t_] := Table[c[t, i], {i, 2, 100}]

(* Calculating the end node at the end of each row t *)
g[t_] := s[[t - 1, 101]] - 2*Const*(s[[t - 1, 101]] - s[[t - 1, 100]])

For[i = 2, i < 24859, i = i + 1, s[[i, 2 ;; 100]] = f[i]; s[[i, 101]] = g[i]]
(This gives me an array that I can then evaluate and present through various Manipulate[] and ListLinePlot[] functions.)
The problem is that I know from my analytical solution that the concentration at 10 micrometres is supposed to go to 1.5 * 10^-4 g/cm^3 in about an hour, but my simulation has it reach that in around a quarter of an hour. I don't think it's my constants. The activation energy per atom is 0.879 eV, and the temperature is 500K. The temperature-independent diffusion constant (D_0) is 1 cm^2 / s (hence D = 1 cm^2/s * e^(-0.879 eV / (500 K * k_B) = 1.381 * 10^-9 cm^2/s). I'm sure I've satisfied the Von Neumann stability criterion -- I'm trying to do this in about 100 steps, so dx = 20 micrometres / 100 = 2 * 10^-5 cm, and based on the stability criterion the largest possible time interval to prevent residual error buildup is approx 0.14482 seconds per "step". (Hence 24859 time nodes to make roughly an hour.)
My attack so far is to define each new cell's concentration (at a particular time t) from known cells' concentrations at time t-1, based on the concentrations at that time at the node before, at and after. (This is function c[t,n].) Then I find an entire row for that time t to feed data into the array (function f[t]), as well as calculating the end node (function g[t]). Then I have an iterative loop to calculate new rows based off of the row before already calculated. I define my initial conditions (surface concentration = 2 * 10^-4 g/cm^3 + no nitrogen in the metal initially) and let it run. What's my problem? John Riemann Soong (talk) 01:42, 9 December 2009 (UTC)
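As a quick sanity check on the time step, the explicit-scheme stability bound dt <= dx^2/(2D) quoted in the question can be evaluated directly (a trivial sketch, using only the numbers given above):

# Von Neumann stability bound for the explicit scheme: dt <= dx^2 / (2 D)
D = 1.381e-9                    # cm^2/s
dx = 2e-5                       # cm
dt_max = dx**2 / (2 * D)
print(dt_max)                   # ~0.14482 s, matching Dlt in the code above
print(round(3600 / dt_max))     # ~24859 time nodes for one hour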
- Help, anyone? This is basically like Fick's laws of diffusion and stuff, but used discretely. John Riemann Soong (talk) 15:22, 9 December 2009 (UTC)
- I don't see anything immediately wrong with your algorithm, although the code doesn't look very idiomatic to me (I would write
up[l_]:=Take[l,{2,-2}]+k*(Drop[l,2]+Drop[l,-2]-2*Take[l,{2,-2}])
up2[l_]:=Prepend[Append[l,l[[-2]]],c0] (* add fixed left value and mirror-symmetric right value *)
up3[l_]:=up2[up[l]]
k=0.144816767*1.381*^-9/2*^-5^2; c0=0.0002; s0=up3[Table[0,{102}]]
s=Nest[up3,s0,24859] (* or: *)
i=0; s=NestWhile[up3,s0,(++i;#[[50]]<1.5*^-4)&]
- where the Nest[] chooses a fixed number of steps and the NestWhile[] waits instead for the 1.5×10^-4 to be reached). From the latter I get i=10258 (24.4 minutes). That's probably not what you get: it's not "around a quarter of an hour". Maybe you should post your analytical solution here for more detailed comparison? Perhaps your actual code too; what you've written doesn't work (surely you want Table[] instead of Array[], one comment is unterminated, and s[[-1]] is unused). I tried to fix it, and got the same result of 10258 steps. --Tardis (talk) 17:28, 9 December 2009 (UTC)
- Does using Table[] make it run faster/cleaner? I also don't know where I refer to s[[-1]]. (The commenting error is a residual thing from copy/paste issues, whoops.) Hold on, about to post my analytic solution. John Riemann Soong (talk) 18:33, 9 December 2009 (UTC)
- The problem I solved analytically was here. I know I did it correctly, because I got 10/10 for the analytic part. Basically, we know D_0 = 1 cm^2/s (a given value), T=500K, surface concentration = 0.0002 g/cm^3 (like above). I used an analytic solution to solve for activation energy, knowing that at a depth of 10 micrometres the concentration is 0.00015 g/cm^3 after 1 hour.
- C = C_s - (C_s - C_0) * erf(x / (2*sqrt(D*t))) = 0.0002 g/cm^3 * (1 - erf(x / (2*sqrt(D_0 * exp(-E_a / (Boltzmann constant * T)) * 3600 s))))
- = 0.00015 g/cm^3
- = 0.0002 g/cm^3 * (1 - erf(0.001 cm### / (2*sqrt(1 cm^2/s * exp(-E_a / (Boltzmann constant * 500K)) * 3600 s))))
- 0.00005 g/cm^3 = 0.0002 g/cm^3 * erf(0.001 cm / (2*sqrt(1 cm^2/s * exp(-E_a / (Boltzmann constant * 500K)) * 3600 s)))
- 2 * erfinv(0.00005 g/cm^3 / 0.0002 g/cm^3) / 0.001 cm = 1 / sqrt(1 cm^2/s * exp(-E_a / (Boltzmann constant * 500K)) * 3600 s)
- 1 / (1 cm^2/s * (2 * erfinv(0.25) / 0.001 cm)^2 * 3600 s) = exp(-E_a / (500K * Boltzmann constant))
- ln 1 - 2 ln(2 * erfinv(0.25) / 0.001 cm) - ln 3600 = -E_a / (500K * Boltzmann constant) = -20.41
- 500K * Boltzmann constant * 20.41 = E_a = 1.41 * 10^-19 J = 0.879 eV John Riemann Soong (talk) 18:54, 9 December 2009 (UTC)
### 0.001 cm is for the depth of 10 microns
- Table[] is certainly cleaner: just evaluate Array[0,4] to see what I mean. s[[-1]] is the last element of s; I just meant that your loop stopped one short of filling your array. As far as verifying the analytical solution goes, we don't need E_a; D(500 K) will do, and I get D(500 K) ≈ 1.368 × 10^-9 cm^2/s. That differs from your reconstituted value by 0.95%, so it affects the answer just noticeably.
- What is important is that you simulate to only twice the depth at which you want to test the concentration, and you have a boundary condition on the inside that significantly affects (increases) the concentration of gas in the simulation. Realize that by symmetry you are effectively simulating a very thin (40 micron) film of metal exposed to the gas on both sides, which obviously will take up gas better than a thick slab exposed only on one side (as in your analytical solution).
- I don't have Mathematica available at this instant, and my reimplementation in Python apparently rounds things differently (it gets 10386 steps (instead of 10258) with your D and 10485 with mine), but if I increase the simulated depth to 100 microns (501 sample points) and use my D I get 24858 steps, which should look familiar. (With your D I get 24624 steps, which is 34 seconds too fast.) --Tardis (talk) 22:44, 10 December 2009 (UTC)
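For anyone who wants to replicate the numbers, here is a minimal sketch of such a Python reimplementation (a reconstruction of the scheme discussed above, not Tardis's actual script; the constants are the ones quoted in the question):

import numpy as np

# Explicit finite-difference diffusion: fixed surface concentration at one
# end, zero-flux mirror node at the other. Units are cm and seconds.
D = 1.381e-9              # diffusion coefficient, cm^2/s
dx = 2e-5                 # grid spacing, cm (20 microns / 100)
dt = 0.144816767          # time step, s (the von Neumann limit dx^2/(2D))
k = dt * D / dx**2        # ~0.5
c0 = 0.0002               # surface concentration, g/cm^3

c = np.zeros(101)         # nodes 0..100 span depths 0..20 microns
c[0] = c0
steps = 0
while c[50] < 1.5e-4:     # node 50 sits at a depth of 10 microns
    nxt = c.copy()
    nxt[1:-1] = c[1:-1] + k * (c[2:] - 2 * c[1:-1] + c[:-2])
    nxt[-1] = c[-1] - 2 * k * (c[-1] - c[-2])   # mirror (no-flux) end node
    nxt[0] = c0
    c = nxt
    steps += 1
print(steps, "steps =", steps * dt / 60, "minutes")  # roughly 10^4 steps, ~25 min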
Green lightning?
I'm in Milwaukee, WI, and we're getting quite a bit of snow here. I looked out of the window as I was doing my homework on my computer and I saw two bright bluish-green flashes outside (coming from the sky) within 5 secs of each other. They were accompanied by quiet vibrating sounds. I'm not in the city, so I don't think it's light pollution or anything. Any idea what this might be? 76.230.148.207 (talk) 02:35, 9 December 2009 (UTC)
- It was probably a Navigation light on an airplane. Ariel. (talk) 03:19, 9 December 2009 (UTC)
- No way; it was way too close and big! It was like it was coming right from the eaves of my roof! 76.230.148.207 (talk) 03:39, 9 December 2009 (UTC)
- St Elmo's fire? —Preceding unsigned comment added by 75.41.110.200 (talk) 04:22, 10 December 2009 (UTC)
- thundersnow? 75.41.110.200 (talk) 03:29, 9 December 2009 (UTC)
- A power transformer shorting out is usually accompanied by a brilliant bluish-green flash. I guess the green is from copper in the wires or terminals. Could be a transformer on a utility pole close by. --Dr Dima (talk) 05:26, 9 December 2009 (UTC)
- Aren't transformer explosions usually accompanied by more than quiet vibrations (if you are close enough to see one in a snowstorm)? Unless the snow dampened the effect, which is very possible. Falconusp t c 12:17, 9 December 2009 (UTC)
- Not an explosion, a short. A short makes a bright (usually blue/white - but so bright it's hard to see, plus dangerous to look at - full of UV) flash, with a loud humming sound, then a bang or a crackle. Was it very windy that day? Ariel. (talk) 12:55, 9 December 2009 (UTC)
- It could have been a meteor burning up. Some meteors burn with a green light (I've seen one myself over Barnsley, Yorkshire about 10 years ago), and in a few days the Geminids will be in full flow. The one I saw also "sang" as it went overhead. --TammyMoet (talk) 14:59, 9 December 2009 (UTC)
- Sound from meteors is widely reported but it is not well understood. The "meteorgenic radio-wave induced vibrating eyeglasses" theory is the most plausible of many implausible explanations. Nimur (talk) 15:26, 9 December 2009 (UTC)
Re: Science Question
When you move or crumple a sheet of paper you cause a change in a. state b. mass or weight c. position or texture or d. size or position? —Preceding unsigned comment added by 75.136.12.225 (talk) 02:36, 9 December 2009 (UTC)
- I bet you do.
- Please do your own homework.
- Welcome to the Wikipedia Reference Desk. Your question appears to be a homework question. I apologize if this is a misinterpretation, but it is our aim here not to do people's homework for them, but to merely aid them in doing it themselves. Letting someone else do your homework does not help you learn nearly as much as doing it yourself. Please attempt to solve the problem or answer the question yourself first. If you need help with a specific part of your homework, feel free to tell us where you are stuck and ask for help. If you need help grasping the concept of a problem, by all means let us know. DMacks (talk) 02:47, 9 December 2009 (UTC)
- If I were you, I'd look up all of the aforementioned Wiki articles and see for yourself (I linked them for your convenience). DRosenbach (Talk | Contribs) 03:27, 9 December 2009 (UTC)
- Links added to original post by second editor removed. They included: state, mass, weight, position, texture, and size -- Scray (talk) 21:44, 9 December 2009 (UTC)
- Friendly reminder: Don't edit other editors' posts, even if it's just to add wikilinks - the RefDesk guidelines are quite clear on this. -- Scray (talk) 19:24, 9 December 2009 (UTC)
Veterinary anesthesia
Does anyone know what is used for induction in veterinary anesthesia? DRosenbach (Talk | Contribs) 03:27, 9 December 2009 (UTC) Forget I even asked. DRosenbach (Talk | Contribs) 03:30, 9 December 2009 (UTC)
- This is the reference desk. We can't forget you even asked. Here's your obligatory reference. Induction of Anesthesia with Diazepam-Ketamine and Midazolam-Ketamine in Greyhounds (2008). Different animals and different medical needs will require different chemicals. If you need veterinary care, see the usual reference desk medical/veterinary disclaimer. Nimur (talk) 15:11, 9 December 2009 (UTC)
- If DRosenbach needs veterinary care, then our disclaimer will be one of his smaller problems! SteveBaker (talk) 22:33, 9 December 2009 (UTC)
- I can only speak for myself, but I have completely forgotten that he asked. Bus stop (talk) 22:49, 9 December 2009 (UTC)
- Asked what? Cuddlyable3 (talk) 21:24, 11 December 2009 (UTC)
Freezing rain affected by a lake?
As I watched the television news about the major winter storms in the Great Lakes region of the USA, I noticed that most of Lake Michigan was receiving freezing rain. Although the northern and eastern boundaries of the freezing rain area (past which it was snow) were in the middle of the lake, the southern and western boundaries (past which it was rain) followed the lake's shoreline almost exactly. Can the lake really affect the type of precipitation, or is this more likely an error with the Doppler radar? Nyttend (talk) 04:07, 9 December 2009 (UTC)
- Maybe these articles will answer your question: Lake effect snow, Great Salt Lake effect. Ariel. (talk) 07:24, 9 December 2009 (UTC)
- Freezing rain is critically dependent on temperature, and large lakes certainly affect the temperature noticeably. That said, it doesn't make sense to say that the boundary of the freezing-rain area was in the middle of Lake Michigan. Freezing rain is possibly only when the rain falls onto ground that is below the freezing point, not onto liquid water in a lake! (It would be different if the lake was frozen over, of course.) --Anonymous, 09:35 UTC, December 8, 2009.
- Freezing rain is possible on a boat of course, and can cause a lot of problems. Looie496 (talk) 16:02, 9 December 2009 (UTC)
- Ah, good point! --Anon, 21:22 UTC, December 9, 2009.
Rhodonite oxidation
Rhodonite, the pink/red coloured gem material, will oxidise on the surface. This may take a couple of days to a couple of years. Not sure why there is a big time difference, but that is another topic. Polished rhodonite does not oxidise. I wish to find out the best way to prevent oxidation of unpolished rhodonite. I am using some (a 15 kg piece) as a memorial stone and do not want a pink rock turning black in the future. I don't want to use epoxy coatings, or polish it. At this time I'm considering an oil coating, such as vegetable oil or new mineral oil, to prevent air contact causing oxidation. Yarraford (talk) 04:19, 9 December 2009 (UTC)
- Are you sure? I don't think Rhodonite can oxidize - it's already fully oxidized. The different colors are from different minerals in it. Maybe it turns black for some other reason? (If in fact it does turn black - you should double check.) Ariel. (talk) 07:27, 9 December 2009 (UTC)
- de-WP states that black streaks are from MnO2. --Ayacop (talk) 15:00, 9 December 2009 (UTC)
The info re oxidation came direct from the miner, while I was at his mine in Tamworth, NSW, Australia. He took me to a spot and the rocks were black. No rhodonite in sight. He said, this is the best rhodonite, fine grained and dark coloured. On breaking these rocks with a hammer, what was revealed was pure pink rhodonite without any of the typical black banding often seen in rhodonite. He said this must be polished immediately to stop the colour change to black, which will happen in days. Maybe some other process is happening, I don't know. Coarser grained rhodonite intersected with black banding about 40 metres distant in the same seam was also evidently turning black at a much slower rate, as different stages of the change could be seen on the rock. —Preceding unsigned comment added by Yarraford (talk • contribs) 00:34, 11 December 2009 (UTC)
- It would be some sort of weathering process. Some materials are more soluble and wash away with water. Over the long term silica will dissolve, and so will ions like sodium and potassium, leaving behind MnO2 or iron oxides, which are dark in colour. Graeme Bartlett (talk) 00:56, 11 December 2009 (UTC)
Wind power from tightly stretched band
The other night on TV (Canada, West coast) I saw a company that was using a principle of vibration (flapping, sorta) from a tightly stretched band with magnets and coils and air from a desk fan blowing across it. They didn't, as far as I could tell, have a "production" level product. I'm trying to figure out what scientific/physical principle this was using, and if possible who this was. Help? --Kickstart70TC 05:11, 9 December 2009 (UTC)
- Oops...found it: Windbelt, which is a horrendous article, FWIW. --Kickstart70TC 05:25, 9 December 2009 (UTC)
- The third link to the YouTube video, assuming it's the same one (an interview with the developer) I saw last year when researching this for a lecture, will tell you all you really need to know. 218.25.32.210 (talk) 06:31, 9 December 2009 (UTC)
DNA data bases
Are DNA databases good enough to allow someone to state in their will that they want to leave their estate to the person(s) whose DNA is the closest match to their own, as opposed to leaving their estate to the person(s) with the greatest legal status? 71.100.160.161 (talk) 06:02, 9 December 2009 (UTC)
- That's more a question of what (local) law allows than a question about quality of databases. Clearly regardless of the quality (or really, size) of the database, one could always define "closest" in such a way that there's an heir; the question is will the law allow such a capricious distribution of an estate. If there are heirs otherwise entitled to inherit, such a provision would certainly result in prolonged legal battles and make many lawyers and few heirs rich. - Nunh-huh 06:12, 9 December 2009 (UTC)
- The person whose DNA was most similar to theirs would undoubtedly be a close family member, so a huge database is really not required; just sequence/genotype siblings and children. BTW, only a very few people have had their DNA sequenced for a genome. The commercial companies only sequence small, variable regions or look for SNPs on a chip. A few now offer full genomes, but still generally that's really only about 90% of a genome. Aaadddaaammm (talk) 08:30, 9 December 2009 (UTC)
- The idea of the OP was possibly to find unknown relatives that are not part of the family -- which wouldn't be allowed if one did it just for the sake of knowledge, but could be in the case of an inheritance. --Ayacop (talk) 14:51, 9 December 2009 (UTC)
- Well, possibly. If so, he/she should have a look at 23andMe, which can identify relatives and classify them with regard to closeness of relationship (e.g., 4th cousin). This is because the genetic testing at that database includes autosomal markers as well as mtDNA and Y-DNA markers. - Nunh-huh 00:03, 10 December 2009 (UTC)
Could the Hubble Space Telescope have imaged damage to the Space Shuttle Columbia?
The Columbia Accident Investigation Board report discusses multiple requests for DoD imagery (both ground-based and space-based) submitted by engineers who were concerned about possible damage from the foam strike during Space Shuttle Columbia's final launch. (The requests were quashed by NASA management who erroneously believed both that the strike was unlikely to have caused significant damage, and that there was nothing that could be done to help if significant damage had occurred.) Could the Hubble Space Telescope have imaged the orbiter? (Potential problems could include orbital alignment, focus, exposure times, and tracking ability.) Has the HST ever imaged an artifact in earth orbit? -- 58.147.52.66 (talk) 08:25, 9 December 2009 (UTC)
- Ignoring everything else, focus would be a problem. An astronomical telescope is not constructed to focus on objects closer than "infinity"; therefore it cannot resolve details smaller than its main mirror (2.4 m for Hubble) at any distance. Even apart from this, the best-resolving camera aboard Hubble has a resolution of 40 pixels per arcsecond. Imaging the orbiter from a distance of 1000 km (which would be a rather lucky break, and probably demand faster tracking than the on-board software is written to provide), this translates to some 12 cm per pixel. A hole of the size estimated by the CAIB would not have been visible on so fuzzy an image. –Henning Makholm (talk) 09:13, 9 December 2009 (UTC)
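(The pixel-scale arithmetic above is easy to verify - a back-of-envelope sketch:)

import math

# One Hubble pixel at 40 pixels/arcsecond, projected to a 1000 km range.
arcsec = math.pi / (180 * 3600)   # radians per arcsecond
pixel = arcsec / 40               # radians per pixel
print(pixel * 1000e3)             # ~0.12 m, i.e. about 12 cm per pixel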
- Here is some data on the HST's tracking capabilities in the context of observing the Moon. It is a few orders of magnitude too slow to follow any object at or below its own height, which would be moving at orbital speed and be at most several thousand kilometers away, or would be behind the horizon. –Henning Makholm (talk) 17:34, 9 December 2009 (UTC)
- All very true. There were, however, other cameras in orbit (some military satellites) that could have photographed the shuttle and resolved the damage - and those devices have indeed been used for this purpose subsequently. We were not provided with details because these are secret spy satellites - but they could do the job because they are designed to focus at distances comparable to their orbital height and resolve down to centimeters. SteveBaker (talk) 13:42, 9 December 2009 (UTC)
- This LIDAR image from the Air Force Starfire Optical Range imaged Columbia at a range of probably under 100 km. I doubt a spacecraft could have done better, or could plausibly have been at closer range. Nimur (talk) 15:37, 9 December 2009 (UTC)
An astronomical telescope is not constructed to focus on objects closer than "infinity"; therefore it cannot resolve details smaller than its main mirror (2.4 m for Hubble) at any distance.
- I cannot understand the reasoning here. Focussing to infinity means that the light coming from one direction is focussed at one point on the sensor and light from another direction is directed to another specific point. Assuming we can make arbitrarily small pixels, I cannot see where the size of the mirror enters in the ray optics approximation. In wave optics, diffraction on the aperture (effectively the mirror) really puts a limit quantified somewhat by the Rayleigh criterion. However, the resolution limits are never rigid (even quantum indeterminacy is correctly expressed by variances, which are natural, but still arbitrary measures of the widths of statistical distributions) and can often be improved by deconvolution, which is really frequently used for Hubble. — Pt (T) 21:18, 9 December 2009 (UTC)
- Focusing on infinity means that a set of parallel light rays that hit the mirror will end up on the same point (pixel) on the detector plate. Follow those rays back to the object being imaged -- they'll have been emitted everywhere on a section of the object that has the same shape and size as the mirror. Conversely any single point of the object emits rays towards every part of the mirror; but those rays must have slightly different directions (if not they'd all hit the same spot on the mirror), and therefore they're going to end up at different points on the detector. This is independently of how fine pixels the detector is made of.
- Usually this is not a relevant effect in astronomy, because the things one wants to observe (stars, planets) are much, much larger than the aperture size anyway. –Henning Makholm (talk) 23:09, 9 December 2009 (UTC)
- Thank you, this clarifies matters for me! Nevertheless, while the point spread function of a point at distance d would thus be (approximately) a circle with such a diameter that it exactly covers the image of an object at distance d with the same diameter as the mirror (am I correct here?), (most of) the information about the original object would still be encoded in the blurred image. If the shape of the PSF didn't depend on the observation angle w.r.t. the mirror, we could simply deconvolute the blurred image with the PSF. And even if it does change with angle, but in a way we know or have calculated in advance, the "software refocussing" is still doable by solving a linear Fredholm integral equation of the first kind, which can be easily done using some numerical linear algebra (after discretizing, you'll have a system of linear equations). Ah, if the world were actually so ideal... In reality, the PSF is sensitive to every little detail in the optics and the sensor and we either don't know those contributions exactly enough or we make errors in image acquisition, so that the resolution is still limited. But that limit is more a matter of engineering, not fundamental physics! — Pt (T) 01:03, 10 December 2009 (UTC) and 01:16, 10 December 2009 (UTC)
IPCC models
The CO2 in the atmosphere constantly interacts with the earth/plants and the sea. So an increase in atmospheric CO2 leads to increase CO2 in the oceans. How do the IPCC climate models allow for this effect?--Samweller1 (talk) 12:51, 9 December 2009 (UTC)
- As I understand it (and I don't understand it very well), normal global climate models do not model the carbon cycle, i.e. the change in atmospheric CO2 is provided as an input, based on assumptions about human emissions and estimated other carbon sources and sinks. CO2 in the ocean has essentially no direct influence on the climate. The main effect is that atmospheric concentrations are lower than they would otherwise be. Understanding more indirect effects (e.g. the limits of the ocean's ability to act as a sink, or the influence on oceanic food chains) is ongoing work, and effects are modeled independently. Our article on transient climate simulation might also be of interest. --Stephan Schulz (talk) 13:03, 9 December 2009 (UTC)
if global warming is a problem why don't we put up a thermostat
why don't we just put a sliver of something reflective in orbit around the sun in lockstep with Earth, but closer to the sun (that way you don't need a lot of this thing) and then adjust it to block as much/little light as we need for optimum temperature/to counteract any global warming occurring? note: this is not a request for medical advice. And saying that saying this is not a request for medical advice does not not make it a request for medical advice does not make it a request for medical advice. 92.230.65.75 (talk) 13:49, 9 December 2009 (UTC)
- We have an article on this: Space sunshade- Fribbler (talk) 13:54, 9 December 2009 (UTC)
- The idea has been proposed, but you can't put something in "lockstep with Earth, but closer to the sun" - orbital period is determined by the size of the orbit. The only real option is L1, which isn't that much closer to the Sun than the Earth. That means it has to be very big, which makes it very difficult and expensive to make. --Tango (talk) 13:57, 9 December 2009 (UTC)
- Sounds to me like space elevator inventor Jerome Pearson's suggestion of forming a ring around the Earth.[9] Nanonic (talk) 14:00, 9 December 2009 (UTC)
- A ring around the Earth, rather than just at L1, is an option, but probably not a good one. It would probably have to be bigger overall and it would get in the way of near-Earth space travel. --Tango (talk) 14:04, 9 December 2009 (UTC)
- There are a number of problems: technical, political, economic and ecological. Technically, we don't know how to build such a thing right now. Mathematically, since the sun is larger than the Earth, the closer you move the shade to the sun, the less sunlight it would block. So the best you can do is indeed putting it into orbit or at L1 (which is unstable). Politically, whom would you trust to control it? Assuming it's me, having Austin, Texas, in perpetual darkness might be a nice idea, but what if I get bored with that and shadow (or, better, light) something else? Economically, it's likely to be much more expensive to build and maintain than it would be to fix our CO2 habit here on Earth. And ecologically, we would receive not only less energy, but less light. Nobody knows what effect that would have. And it would still cause significant local climate change, as not only the total energy budget, but also the local distribution of energy is affected by greenhouse gases. We do not know enough to predict the overall effects of such a thing, even assuming it technically works flawlessly. --Stephan Schulz (talk) 14:15, 9 December 2009 (UTC)
couldn't it remain in lockstep with Earth, but closer to the sun, by expending energy (as opposed to just passively orbiting on its inertial momentum) -- if it were much closer to the sun it could be much, much smaller... Also: couldn't it get some of the energy just mentioned directly from the sun? Is there a way to turn solar energy into thrust in space? Thanks. Still not asking for medical advice, by the way. 92.230.65.75 (talk) 14:20, 9 December 2009 (UTC)
Oh. I just read the second comment, mentioning that as you get closer to the sun you block less and less light from Earth. Like some bad math joke, my logic went: assume the sun is a point-source... 92.230.65.75 (talk) 14:22, 9 December 2009 (UTC)
- Assume a spherical cow... Fences&Windows 14:24, 9 December 2009 (UTC)
- Wikilinked, just because we have an article on everything (EC x 4!!) -- Coneslayer (talk) 14:30, 9 December 2009 (UTC)
- ...and here is my ec'ed comment, about half of which is still relevant ;-): As pointed out above, since the sun is larger than the Earth, the farther you move something towards the sun, the less light it will block. Note that solar shadows (as opposed to shadows cast by an approximate point source) do not get larger as the distance between object and screen increases. They just get more diffuse until they vanish. Apart from that, you can use solar panels and an ion drive for station keeping (but you still need reaction mass), or possibly use solar sails, although this will be far from trivial to figure out and will certainly need active control. --Stephan Schulz (talk) 14:28, 9 December 2009 (UTC)
- Scientists now believe that the principle whose name is derived from the Latin para- "defense against" (from the verb parare "to ward off") + sole "sun" can be implemented to create human-deployable collapsible sources of shade. Wikipedia has an article on parasol technology. Cuddlyable3 (talk) 18:34, 9 December 2009 (UTC)
- @Stephan Schulz: I think your distance argument is wrong. The shadow cast by an object would indeed become smaller when it is moved closer to the Sun, but the radiation it receives does increase: the closer an object of a given size moves to the Sun, the more radiation it will get (since radiation density decreases with the square of the distance from the Sun). An object at 1/109 AU from Earth (or less, e.g. the Moon's distance) will thus get more radiation than the same object closer to the Earth, and if placed directly on the line connecting the Sun's and Earth's centres, all of the radiation it gets would otherwise reach the Earth (because 109 = diameter of Sun / diameter of Earth - use the Intercept theorem). There wouldn't be any point on Earth experiencing a Solar eclipse by it, but the whole Earth would get a higher reduction of radiation. An interesting question would be what happens when one increases the distance above 1/109 AU - what is the optimal distance for a Sun shade?--Roentgenium111 (talk) 13:05, 10 December 2009 (UTC)
- If we make the assumptions that the Sun and Earth are two flat disks with radii R and r separated by a much larger distance D (and that the Sun's disk is equally luminous everywhere), and place a third disk with radius ρ at a distance d from Earth, I calculate that the light denied Earth is (proportional to) ∫_0^r 2πx A(R/D, ρ/d, x(1/d − 1/D)) dx, where A(r1, r2, s) is the overlap between two circles of radii r1 and r2 whose centers are separated by s. (The formula at MathWorld assumes that the circles do intersect and that neither circle contains the other, but it seems that its error otherwise is purely imaginary.) The integrand is 0 for x ≥ (R/D + ρ/d)/(1/d − 1/D), which may be useful for numerical integration.
- Doing that integration for small ρ (I used 1 km) suggests that your argument about similar triangles is the right idea: the radiation reduction increases until the shade reaches the point where the Sun's light focused onto each point of it would illuminate the whole Earth if the shade were absent, and then falls off rapidly as the shade continues toward the sun. This makes sense: once all of the Earth sees the disk as a subset of the Sun's disk, you're blocking as much light as you can. Moving the shade further from Earth reduces the amount of the Sun it blocks at each point, and moving it closer to Earth causes the areas in twilight to see part of the shade uselessly occluding the sky beside the Sun. The only non-trivial thing to discover is that bringing it towards Earth loses in the twilight regions more than it gains by occluding more of the Sun in the center. The weird thing is that I see a local minimum around d=179 Mm; I don't know if I trust the numerics for d that small, though. The limit for small d should be π²ρ²(R/D)², which is 0.5% smaller than that local minimum.
- With larger shades, the optimum is closer to Earth: 1.154 Gm (instead of 1.355 Gm) for ρ=1 Mm, 945 Mm for twice that, and 738 Mm for twice it again. This also makes sense: as the disk approaches Earth's size, the optimal strategy is to set it on Earth like a lampshade. --Tardis (talk) 20:10, 10 December 2009 (UTC)
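For the curious, here is a rough numerical sketch of that kind of integral (a reconstruction under the stated flat-disk assumptions, not Tardis's actual code, so the numbers will differ slightly from the figures above):

import numpy as np

# Circle-circle overlap area for circles of radii r1, r2 with centers s apart.
def lens_area(r1, r2, s):
    if s >= r1 + r2:             # disjoint
        return 0.0
    if s <= abs(r1 - r2):        # one circle inside the other
        return np.pi * min(r1, r2)**2
    d1 = (s*s + r1*r1 - r2*r2) / (2*s)
    d2 = s - d1
    return (r1*r1*np.arccos(np.clip(d1/r1, -1, 1)) - d1*np.sqrt(max(r1*r1 - d1*d1, 0.0))
          + r2*r2*np.arccos(np.clip(d2/r2, -1, 1)) - d2*np.sqrt(max(r2*r2 - d2*d2, 0.0)))

R, r, D = 6.96e8, 6.37e6, 1.496e11   # Sun radius, Earth radius, Sun-Earth distance (m)
rho = 1e6                            # shade radius: 1 Mm (illustrative)

# Light denied Earth (arbitrary units) for a shade at distance d from Earth,
# treating Earth as a flat disk facing the Sun, as in the post above.
def blocked(d, n=2000):
    x = (np.arange(n) + 0.5) * (r / n)        # midpoint rule over 0..r
    sep = x * (1.0/d - 1.0/D)                 # angular separation of the centers
    a = np.array([lens_area(R/D, rho/d, s) for s in sep])
    return np.sum(2*np.pi*x*a) * (r/n)

ds = np.linspace(2e8, 3e9, 60)
print(ds[np.argmax([blocked(d) for d in ds])])   # optimum of order 1e9 m for rho = 1 Mm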
- Because it currently costs on the order of $5,000 per lb to get things into low earth orbit. It would probably be a lot cheaper to convert all our current power plants to solar and wind than to put up a lasting shade big enough to deflect the required amount of solar energy, not to mention that building the shade would be a monumental engineering project that would make building the pyramids seem trivial. Googlemeister (talk) 20:34, 10 December 2009 (UTC)
- Worth noting is that our article on a space sunshade describes the cost as 'in excess of' 5 trillion USD — and that assumes the successful development of a suitable rail- or coil-gun technology to carry out the launches. (And heaven only knows how much in excess the 'in excess' actually would turn out to be....) TenOfAllTrades(talk) 20:49, 10 December 2009 (UTC)
Health clinic in Sevilla, Spain
where can I find a health clinic in Sevilla, Spain that deals in STD's? Thanks —Preceding unsigned comment added by 80.58.205.49 (talk) 15:46, 9 December 2009 (UTC)
- It appears there may have once been an STD clinic/diagnostic centre at the University of Seville School of Medicine but I don't know if it still exists. If you speak Spanish perhaps you can work it out from their website [10]. I can't offer much more help, perhaps someone else can, except to say you should be able to go to any Sevilla general practitioner (according to our article, in Spain probably based at a primary care centre) and they'll be able to direct you to an appropriate clinic if it's not something they can deal with themselves, while protecting your confidentiality & privacy as they should always do. Nil Einne (talk) 17:07, 9 December 2009 (UTC)
Is nonhuman skin color a result of melanin levels or something different?
I understand that melanin is the primary determinant of the variance in skin color among humans, but I was wondering if it is also what makes elephants, rhinoceroses, and hippopotamuses gray and gorillas black, or if these are differences of a fundamentally different type. 20.137.18.50 (talk) 17:17, 9 December 2009 (UTC)
- Interestingly, skin color redirects to human skin color. From that article, there is a link to biological pigment which discusses coloration in animals. The article has a list of biological chemicals that are common; exotic animals also have other biochemicals, see for example bioluminescence. Chromatophore also has lots of good information about coloration in animals like fish, amphibians, and reptiles. Mammals and birds do not have chromatophores, only melanocytes. Nimur (talk) 17:48, 9 December 2009 (UTC)
- There may be additional development but the pathway for melanin synthesis exists in all living things. For example, the browning of an apple when cut uses some of the same enzymes. --Ayacop (talk) 19:47, 9 December 2009 (UTC)
- PS: No, stop, I was wrong: the dopachrome tautomerase (DCT) enzyme only developed with the chordata, so the forking of the pathway is an animal thing, i.e., DCT is only one way to get melanin in animals, while plants absolutely need tyrosinase/polyphenol oxidase for that. --Ayacop (talk) 19:58, 9 December 2009 (UTC)
Caring for surfaces while removing snow from them
The Internet has information about how to remove snow while caring for one's own health, that is, the health of whoever is doing that work. However, I am seeking information about how to remove snow while caring for the durability of artificial surfaces, such as asphalt and concrete. I am thinking of the possibility of cracks in the surface being started or enlarged by expansion and contraction caused by changes in temperature. With this in mind, is it better to clear an entire surface at one time, avoiding borderlines between cleared and uncleared parts of a surface? Is it better (when practical) to postpone snow removal until new snow has stopped falling? Where is it best to put snow which has been removed? Are grassy areas suitable? Are ditches suitable? I would like someone with expertise in the appropriate field(s) to answer these questions and any closely related ones which come to mind. (A related article is frost heaving.) -- Wavelength (talk) 17:33, 9 December 2009 (UTC)
- When or in what way you remove snow shouldn't affect whether cracking appears on it. Cracks appear in asphalt and concrete primarily because of thermal expansion (or contraction), but removing the snow should not have a significant effect on the temperature of the surface. While snow is a good insulator (that's why igloos work), the fact that there is snow accumulated on the surface means that it is already cold enough to not melt snow that falls on it. So, removing the snow will only expose the surface to air that is approximately the same temperature as the snow. Removing the snow all at once when it's done snowing is mainly a practical matter--who wants to go out and shovel twice for the same snowstorm? It's perfectly fine to pile snow on grassy areas, as long as you're OK with the pile being there longer than the rest of the snow that fell naturally. Mildly MadTC 20:40, 9 December 2009 (UTC)
- Thank you for your answer. I am correcting the grammar of the heading. -- Wavelength (talk) 21:14, 11 December 2009 (UTC)
Looking for molecules with large huang rhys factor
I am looking for molecules with large Huang-Rhys factors that also absorb in the visible part of the spectrum. The Huang-Rhys factor is a measure of the displacement of the nuclear potential minimum upon electronic excitation, as described here. The result of this would be that in the absorption spectrum, the first overtone for a particular vibrational mode is a larger peak than the fundamental (the 0-0 pure electronic transition). I know this question is pretty obscure, but I am unsure about how to proceed with this search. mislih 17:44, 9 December 2009 (UTC)
- Have you tried searching Google Scholar for huang rhys factor? The Huang-Rhys factor S(a1g) for transition-metal impurities: a microscopic insight (1992), discusses transition metal ligands and compares specific molecules. Nimur (talk) 17:55, 9 December 2009 (UTC)
Echoes
If I am standing in a large room and I yell, how many times does my voice echo? It typically sounds like 3 or 4 times but I imagine that's just the threshold of what I can hear. Does my voice actually echo forever? TheFutureAwaits (talk) 17:49, 9 December 2009 (UTC)
- An "echo" as you are apparently interpreting it is a distinct, undistorted return of the original sound of your voice. In reality, what happens is that as the wavefront reverberates, many echoes "combine" and distort, eventually decaying in amplitude until you can not hear them (and the wavefront settles down below the ambient noise level]. See reverberation for a more thorough explanation of this effect. Depending on the size, shape, and material of the room walls, the number of "distinct" echoes can vary from zero to "too many to count." Also see Multipath interference for information about echos that bounce off of different walls and recombine. Nimur (talk) 17:58, 9 December 2009 (UTC)
I uniformly prefer white
I've noticed that nurses uniforms are no longer white. I thought being white was important for preventing infection for a couple of reasons:
1) Any stains are easy to spot, which hopefully means a clean uniform will be put on. Patterns are perhaps the worst, in this respect, as they can disguise a soiled uniform.
2) Bleach can be used liberally when washing whites, without fear of them fading. Not so with coloreds. More bleach means fewer surviving microbes.
So, with this in mind, why have they gone away from white uniforms ? StuRat (talk) 18:23, 9 December 2009 (UTC)
- Scrubs (clothing) is somewhat informative... apparently white induces eyestrain, and the colors are used to differentiate departments and to keep people from stealing them. I am sure that they are able to sterilize the clothing regardless of the color. I'm not sure any uniforms are patterned. --Mr.98 (talk) 18:34, 9 December 2009 (UTC)
- I understand that the actors' nurses' uniforms in the early British black-and-white TV series Emergency - Ward 10 were yellow because this appeared better on camera. Nostalgia trip starts here. Cuddlyable3 (talk) 18:54, 9 December 2009 (UTC)
- The old nursing auxiliary uniforms in the UK used to be a sort of beige check. Yuk!--TammyMoet (talk) 19:30, 9 December 2009 (UTC)
- Some scrubs are patterned. I think those sometimes worn in paediatrics, in particular. --Tango (talk) 19:51, 9 December 2009 (UTC)
- Nurses still wear white tunics in the UK, with some wards wearing blue or green scrubs instead. One of the drawbacks with white is that whilst it will show coloured stains such as blood, it won't show clear fluid stains which are easily observed on blue or green clothing. Nanonic (talk) 19:50, 9 December 2009 (UTC)
- And there's always color-safe bleach. DRosenbach (Talk | Contribs) 00:56, 10 December 2009 (UTC)
- I think it is fashion more than anything else. Fashion in this case is not individual but generally held concepts by medical institutions and apparel suppliers. White is consistent with outmoded concepts, largely concerned with how the individual is perceived in society. The colors and patterns are probably an expression of the pluralistic society that is now embraced by most establishments. I think it is a good question. I think it goes to the heart of fashion megatrends. Bus stop (talk) 01:15, 10 December 2009 (UTC)
- Different roles are shown in the UK by different colours. These colours are decided by the individual hospitals rather than the NHS, so I can't give a definitive answer as to what colour means which role. --TammyMoet (talk) 18:38, 10 December 2009 (UTC)
- Where I'm from, most doctors in the OR wear green scrubs, making blood appear close to black, though I don't know if this is the intention. 219.102.221.182 (talk) 05:17, 11 December 2009 (UTC)
moment of Big Bang
Can the moment of the Big Bang be characterized as the moment of the greatest unrest? 71.100.160.161 (talk) 18:43, 9 December 2009 (UTC)
- "Unrest" doesn't have a well-defined scientific meaning. Do you interpret entropy to mean unrest? In that case, the answer is no, the universe had less entropy during its early stages than it will in its later stages, because of the second law of thermodynamics. Nimur (talk) 19:30, 9 December 2009 (UTC)
- Except that at the moment of the BB, the laws of physics all had to be different. Otherwise the universe would have just collapsed into the grand-daddy of all black holes, and that would have been that. StuRat (talk) 19:35, 9 December 2009 (UTC)
- I defer to one of the more expert physicists on the reference desk to clarify current scientific thought on the validity of thermodynamic laws during the early big bang. My understanding was that these were always valid. Nimur (talk) 19:43, 9 December 2009 (UTC)
- The laws of physics are valid at any positive time after the Big Bang. We don't have any laws of physics to describe the Big Bang itself. Naive extrapolation says the universe was infinitely dense at the moment of the Big Bang, which most likely means we can't be that naive. --Tango (talk) 20:11, 9 December 2009 (UTC)
- Right, but as I understand it, even changing parameters of fundamental forces, or unifying them, or changing symmetry relationships, do not change fundamental thermodynamic properties in a quantum mechanics treatment. Nimur (talk) 20:36, 9 December 2009 (UTC)
- There is only one parameter which affects the 2nd law, as far as I know - initial entropy. The mathematical derivation of the 2nd law is time reversal symmetric, so entropy ought to increase both towards the future and the past (which is rather difficult, since it would seem to make the present special). It is the very low entropy at the beginning that causes it to increase towards the future. So if you change the initial entropy, you change the 2nd law (and all the arrows of time that follow from it). The other parameters shouldn't make any difference, the 2nd law is a pretty elementary mathematical theorem. --Tango (talk) 22:10, 9 December 2009 (UTC)
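To make the "elementary mathematical theorem" flavour of this concrete, here is a toy Ehrenfest-urn simulation in Python (a minimal sketch, not a cosmological model: it just shows a system prepared in a low-entropy state drifting toward equilibrium, the statistical content of the 2nd law):

```python
import random

# Ehrenfest urn: N particles in two boxes; each step, one randomly
# chosen particle hops to the other box. Starting far from equilibrium
# (all particles on the left, i.e. low entropy), the occupancy drifts
# toward the 50/50 equilibrium and then just fluctuates around it.
N, steps = 100, 2000
left = N                      # all particles start in the left box
for step in range(steps):
    if random.randrange(N) < left:
        left -= 1             # the chosen particle was in the left box
    else:
        left += 1
    if step % 500 == 499:
        print(f"step {step + 1}: {left} particles on the left")
```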
Zombie Plan
I was reading about mad cow disease and how, if there were a stronger form of it, like a super mad cow or madder cow disease, that was transferred by blood or saliva, it would be almost like a zombie outbreak. This made me wander.... What are the chances of a virus, or infection of any kind, that would cause a "zombie-like" outbreak, if any? Just a thought. —Preceding unsigned comment added by DanielTrox (talk • contribs) 18:46, 9 December 2009 (UTC)
- Plan????? Your title gives you away you evil mastermind Daniel Trox! 92.224.205.128 (talk) 19:25, 9 December 2009 (UTC)
- Venereal disease? Sufferers may not like to be called zombies. Nimur (talk) 19:25, 9 December 2009 (UTC)
- Would Kuru (disease) fit the bill here? --TammyMoet (talk) 19:29, 9 December 2009 (UTC)
- I wonder where you wandered, to "psychotic cow disease", perhaps ? But anyway, I believe rabies can be spread directly from human to human, if you can just convince them to bite each other. StuRat (talk) 19:32, 9 December 2009 (UTC)
- That would have been my vote. Rabies is usually mentioned as among the most zombie-like diseases - it affects the brain, often causing mania and increased agitation, and can increase saliva production while eliminating the ability to speak. ~ Amory (u • t • c) 19:50, 9 December 2009 (UTC)
- And to answer the specific question, the chances would be pretty low. Anything zombie-like would kill the infected too quickly while being too obvious, allowing the uninfected to take necessary precautions. The only effective spread (for rabies anyway) seems to be through the various animal reservoirs which, aside from bats, is usually pretty obvious. ~ Amory (u • t • c) 19:57, 9 December 2009 (UTC)
- For me the defining characteristic of a zombie is something that is tenacious (maybe to the point of being manic) AND can only be killed by dismemberment or other severe injury, making them formidable foes. If you are afraid of salivating, nonsensical humans that are agitated and manic then your worst nightmare might be something called Ozzfest... --66.195.232.121 (talk) 21:50, 9 December 2009 (UTC)
- There is a human analog of Mad Cow - and a bunch of people in the UK caught it by eating infected meat products. It's called Creutzfeldt–Jakob disease (CJD for short). Our article says "The first symptom of CJD is rapidly progressive dementia, leading to memory loss, personality changes and hallucinations. This is accompanied by physical problems such as speech impairment, jerky movements (myoclonus), balance and coordination dysfunction (ataxia), changes in gait, rigid posture, and seizures. The duration of the disease varies greatly, but sporadic (non-inherited) CJD can be fatal within months or even weeks (Johnson, 1998). In some people, the symptoms can continue for years." - so not really Zombieism per se. It doesn't spread human-to-human very well - unless there are cannibals around - but eating brains certainly would be a reasonable cause. SteveBaker (talk) 22:24, 9 December 2009 (UTC)
Well, see, that's what I was getting at: mad cow disease would make someone seem almost zombie-like, and if it altered to make people extremely aggressive and transfer through blood or saliva, I think it could cause an "infected"-like epidemic. --Talk Shugoːː 18:39, 10 December 2009 (UTC)
Smallpox eradication
In 1979 the WHO declared the complete eradication of smallpox, but I caught it while in kindergarten (late 1980s) and infected my sister in the early 1990s; she had blister traces for several years. How could this be? 85.132.99.18 (talk) 19:52, 9 December 2009 (UTC)
- You did not catch smallpox while in kindergarten. Perhaps you had some other disease such as chickenpox. Algebraist 19:55, 9 December 2009 (UTC)
- If you had smallpox and your doctor ever saw or treated you for it, then it would have been a major international incident. Nimur (talk) 20:37, 9 December 2009 (UTC)
- And listed in our article Smallpox#Post-eradication which currently says the last known cases were among researchers in 1978 Nil Einne (talk) 20:40, 9 December 2009 (UTC)
- You're almost certainly thinking of chickenpox. If you had gotten smallpox in 1989, it would have been in newspapers worldwide. APL (talk) 21:25, 9 December 2009 (UTC)
What are wastes of different industries and what are their usages?
What are wastes of different industries and what are their usages?
Examples are:
From rice mills we get rice husk as waste. We can use that husk to produce energy in a biomass plant, or we can use it to feed animals. —Preceding unsigned comment added by Anirbannaskar (talk • contribs) 20:06, 9 December 2009 (UTC)
- This sounds like a homework question, which we won't help with. You need to do your own work if you are going to learn anything from it. --Tango (talk) 20:20, 9 December 2009 (UTC)
Compare energy released by automobiles vs. nuclear warheads
Most of the information used here comes from WP articles. Is the conclusion correct?
One W89 nuclear warhead has a yield of approximately 475 kilotons of TNT.
475 kt converts to approximately 1,987 terajoules of energy.
One gallon of gasoline contains approximately 132 megajoules of energy.
So, 15,053,030 gallons of gasoline contain the energy released by a single W89 nuclear warhead.
Americans alone drive 2,208 billion miles per year (per Dept of Transportation).
At 20 MPG, that is 110.4 billion gallons of gasoline converted to energy.
Thus, American driving alone releases the energy equivalent of 7,334 modern nuclear warheads annually. —Preceding unsigned comment added by Alfrodull (talk • contribs) 20:47, 9 December 2009 (UTC)
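For anyone who wants to check the arithmetic, here is the same calculation as a short Python sketch (the conversion factors are the standard values quoted above; the 20 mpg fleet average is the OP's assumption):

```python
KT_TNT_J = 4.184e12      # joules per kiloton of TNT (standard definition)
GALLON_J = 1.32e8        # joules per US gallon of gasoline (~132 MJ)

warhead_j = 475 * KT_TNT_J                 # ~1.99e15 J, i.e. ~1,987 TJ
gallons_per_warhead = warhead_j / GALLON_J
print(f"{gallons_per_warhead:,.0f} gallons per warhead")   # ~15.1 million

annual_gallons = 2.208e12 / 20             # 2,208 billion miles at 20 mpg
warhead_equivalents = annual_gallons * GALLON_J / warhead_j
print(f"{warhead_equivalents:,.0f} warheads per year")     # ~7,330
```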
- I haven't checked your numbers or arithmetic (you can double check that yourself), but the conclusion is certainly plausible. There is a very big difference between energy released over a lot of time and space and energy released in an instant in one place. --Tango (talk) 20:51, 9 December 2009 (UTC)
- Also, the energy released in a nuclear explosion is largely thermal energy (heating the air and the solid objects in the target area), and kinetic (moving huge quantities of air, debris) and potential energy, in the form of deforming and destroying the target; and nuclear, in the form of irradiating energy both in the form of a quick blast ("pulse", commonly called an EMP as the liberated nuclear energy takes electromagnetic form through a variety of processes), and in the form of long-lasting decaying nuclear particles. The energy released in an automobile is about 60% thermal and 40% kinetic, which is converted to the controlled vehicle motions that the engine is connected to. As such, an equivalent amount of energy released is much safer in the controlled, normal operation of motor vehicles. So if you want to carry this thought-experiment farther, you'll need to brush up the numbers and check the details of those figures more carefully. Nimur (talk) 21:01, 9 December 2009 (UTC)
- Also, check your "20 mpg" figure. I would not be surprised if the average fuel efficiency over 2.2 trillion miles (which probably includes freight and trucking) is actually much worse. Trucks make up a huge percentage of the total vehicle-miles travelled in the U.S., and when loaded, they do not usually get 20 mpg (and they do not run on gasoline). Nimur (talk) 21:07, 9 December 2009 (UTC)
- The energy released in an automobile is almost 100% thermal. There is only kinetic energy temporarily. Likewise, most of the energy from a nuclear weapon gets converted to heat pretty quickly. --Tango (talk) 21:10, 9 December 2009 (UTC)
- You're counting braking (and friction), which is a can of worms. But, you are right, technically. My point was that the energy in a car flows through controllable pathways, rather than uncontrolled destructive release. Nimur (talk) 21:11, 9 December 2009 (UTC)
- The problem with comparing energy like this is that raw energy is not that interesting. Compared to, say, the energy that the sun imparts to the earth, the amount of energy released by nuclear warheads is trivial. Ditto things like earthquakes (the other place where they love to use kiloton/megaton/gigaton measurements). The trick is that nuclear warheads release that energy quickly and in a very limited space. If you release a megaton of energy in tiny, diffuse intervals, it's not that impressive. If you release it all at once, over a city, that's impressive. (Additionally, as has been noticed, the effects of nuclear weapons are more diverse than just energy release. You do not get the same results at all from automobile emissions.) --Mr.98 (talk) 21:29, 9 December 2009 (UTC)
- Factual thing—I think you mean the W88, not the W89. --Mr.98 (talk) 21:37, 9 December 2009 (UTC)
absolute zero
Would an area of space at a temperature of absolute zero have a greater permittivity than an area of space characterized only as a perfect vacuum? 71.100.160.161 (talk) 22:54, 9 December 2009 (UTC)
- Space doesn't really have a temperature. Temperature is a measure of the kinetic energy of particles - no particles - no temperature. You could perhaps measure the speed of the very few stray molecules zipping around in space and come up with a number for temperature - but I'm not sure it really means much! SteveBaker (talk) 00:01, 10 December 2009 (UTC)
- Temperature is not really a measurement of kinetic energy. For example a single particle has kinetic energy, but no temperature. Temperature needs to be understood in the context of statistical mechanics, in terms of the relationship between entropy and internal energy. --Trovatore (talk) 00:34, 10 December 2009 (UTC)
- Even in the absence of 'matter' (which is what is really meant by vacuum) there is still the presence of electromagnetic radiation that allows for a definition of the temperature of an 'empty' region of space. As far as I know that has no effect on the vacuum permittivity. Dauto (talk) 01:04, 10 December 2009 (UTC)
- True; I thought about saying something about that (references, as in reference desk: black-body law, Boltzmann distribution) but decided to make just one point. It seems to me that this bit about temperature v kinetic energy is widely misunderstood even among editors with good general science backgrounds. A definition in terms of kinetic energy per particle works for monatomic ideal gases and that's about it. (Even for them, you have to be talking about chunks of gas whose center of mass is at rest, not rotating, etc). Once you have any interaction among the particles beyond elastic collision, there is no simple relationship between temperature and energy per particle. --Trovatore (talk) 01:29, 10 December 2009 (UTC)
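For scale, the monatomic-ideal-gas shorthand mentioned above is just mean translational kinetic energy per atom = (3/2)·kB·T; a one-liner makes the magnitude concrete (room temperature assumed for illustration):

```python
KB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # K, roughly room temperature

ke = 1.5 * KB * T   # mean translational KE per atom, monatomic ideal gas
print(f"{ke:.2e} J per atom (~{ke / 1.602e-19:.3f} eV)")  # ~6.2e-21 J, ~0.039 eV
```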
- That is also very true, but pedagogically speaking, you cannot start from that principle. When people have questions here about these concepts, they often lack the background to start from the real definition of these things. It is more helpful to start from the simpler models and build up to the more accurate definitions later. For example, you can't take someone who has never taken a chemistry class, and drop the Schroedinger equations on them and say "this is how electrons work". It's the same thing here. We start with the basic, oversimplified model (temperature is the average kinetic energy of a large number of particles) and then if their level of understanding needs to be deeper, we provide it. But starting at the sort of understanding someone with an advanced degree in physics would understand, well, that isn't exactly helpful for the average layperson. --Jayron32 02:16, 10 December 2009 (UTC)
- I've never been a big fan of lies to children. Temperature is hard to understand; that fact should not be concealed. Once that's established, yes, you can go on to explain approximations to the concept.
- But really my point was another — I've observed that people editing articles like absolute zero often really don't get the idea that temperature is not about kinetic energy per se. And I've seen such comments from people I'd expect to know better. --Trovatore (talk) 02:26, 10 December 2009 (UTC)
- Well, if you don't want to teach the simplified model, how many months are you going to spend teaching someone the finer details of statistical mechanics so they can "get" what temperature really means? The average person has no use for that level of detail in their day-to-day lives. Of course, the most "elegant" definition of temperature is the Zeroth law of thermodynamics, which merely states that if two systems are in thermal equilibrium (i.e. no exchange of heat between them) their temperature must be identical, in other words temperature is that property which is shared between two arbitrary systems in thermal equilibrium. The zeroth law definition is elegant also in the sense that it does not care about the type of organization in the systems, and even allows for a meaningful definition of temperature of a vacuum; the temperature of a vacuum is the same as the temperature of a non-vacuum whereby the vacuum system and the non-vacuum system are in thermal equilibrium, and this temperature is not absolute zero, rather it is the temperature of Zero-point energy, which, even in a perfect vacuum isolated from all radiation sources, would be the temperature of the cosmic background radiation, which is about 2.7 K. I've always liked the zeroth-law definition of temperature for precisely the reasons you describe temperature as being "hard". It's not temperature that's hard to understand, it's molecular motion which is hard to understand. --Jayron32 04:49, 10 December 2009 (UTC)
- Molecular motion is not the hard concept here. Oh, it's hard enough, certainly, but it's a red herring when discussing conceptual approaches to temperature. The hard concepts are the statistical ones, such as, precisely, "thermodynamic equilibrium". What does thermodynamic equilibrium mean, really? I don't think it's even well-defined, in the final analysis. What it means depends on the system you're examining, and what particular things you want to know about that system. --Trovatore (talk) 04:56, 10 December 2009 (UTC)
- If you like, you can define it by the ability to do work; two systems in thermodynamic equilibrium cannot do work on a third system. Two systems which are not in thermodynamic equilibrium will be able to do work on a third system until such time as they reach thermodynamic equilibrium. That provides one with the "free energy" definition of temperature (the second law of thermodynamics, if you prefer). --Jayron32 05:04, 10 December 2009 (UTC)
- What if the first two systems are moving with respect to the third system, and they do work on it just by crashing into it? What if they have coherent pressure waves running through them? Are you going to say we can't define temperature in these cases? I have never seen a satisfactory demarcation of what parts of the motion/energy of the system are "thermal" and which are not. I don't believe a truly philosophically adequate one exists. I suspect that the demarcation really belongs to pragmatics and not physics. Not that there's anything wrong with that, provided it's acknowledged. --Trovatore (talk) 06:40, 10 December 2009 (UTC)
- HOWEVER, even you admit that the statistical approach to understanding temperature is unreachable to the average lay person, and yet the average lay person still needs to have some understanding of what temperature is and how it works. Again, do we spend time teaching concepts to someone who has no use for them simply so they "get" the higher implications of temperature? Or do we teach them a simpler model of temperature, if it works in their day-to-day lives to work within the simpler model? --Jayron32 05:07, 10 December 2009 (UTC)
- Just don't lie. There's nothing wrong with providing simplified accounts, provided they're labeled as what they are. --Trovatore (talk) 06:40, 10 December 2009 (UTC)
- Wouldn't this have been more suitable if taken to the talk page? Vimescarrot (talk) 09:29, 10 December 2009 (UTC)
Thank you very much for this discussion, whether more appropriate for the talk page or not. To clarify, I do have a bit better understanding of temperature and entropy than the average person, since I repaired AC and refrigeration units and got curious about latent versus sensible heat. As a side note, it is fascinating to see two Wikipedia "librarians" home in on the best way to respond.
Now here is what I am going for... was the environment the Big Bang happened in at absolute zero, which the cosmic background radiation hovers above (won't ask if that temperature has ever changed, yet)? 71.100.160.161 (talk) 18:03, 10 December 2009 (UTC)
- At the earliest moments for which known physics is believed to work, the universe was fantastically hot. Greater than 10^28 K. See Timeline of the Big Bang for some details. Dragons flight (talk) 18:13, 10 December 2009 (UTC)
- If, by "the environment the Big Bang happened in", you mean the "something" (though I prefer to say "nothing") that the early universe expanded "into", then "it" could not have had a temperature defined because it was not matter or even space-time. The temperature of the Cosmic microwave background radiation is gradually cooling towards absolute zero, but extremely slowly. Dbfirs 09:53, 11 December 2009 (UTC)
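To put a number on "extremely slowly": since the CMB temperature scales as 1/a, its present-day cooling rate is dT/dt = -H0·T. A minimal sketch, assuming H0 of about 70 km/s/Mpc:

```python
T0 = 2.725                   # K, present CMB temperature
H0 = 70 * 1000 / 3.086e22    # Hubble constant in s^-1 (70 km/s/Mpc in SI)
H0_per_year = H0 * 3.156e7   # convert using seconds per year

print(f"cooling rate ~ {T0 * H0_per_year:.1e} K per year")   # ~2e-10 K/yr
```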
How much warning do we get before a supernova will become detectable by the naked eye?
Let us assume light from a supernova reaches us in the next few months or years. Betelgeuse is one of the best candidates, even if the chances that it would happen exactly in this timeframe are very slim (but still realistically above zero). As it will outshine the full Moon and be visible even in daylight, it could lead to serious problems; thousands or even millions could die if panic strikes and people start fleeing the big cities or start looting and plundering, thinking the world will soon end. Especially if it happened in December 2012. So it seems important to inform political leaders and leaders of mainstream religions so they can prepare their people and explain what exactly is bound to happen. So, how much time will we have between astronomers detecting and reliably predicting it and the event becoming obviously visible? A few hours? Days? Weeks? Months? --131.188.3.20 (talk) 23:49, 9 December 2009 (UTC)
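(For scale, the expected brightness can be ballparked with the distance modulus m = M + 5·log10(d/10 pc); both inputs below are rough assumed values, and published estimates for a Betelgeuse supernova vary.)

```python
import math

M_ABS = -17.0   # assumed peak absolute magnitude of a core-collapse SN
D_PC = 200.0    # assumed distance to Betelgeuse, parsecs

m = M_ABS + 5 * math.log10(D_PC / 10)
# ~ -10.5: far brighter than Venus (-4.6) and in the neighbourhood of
# the full Moon (-12.7), depending on the assumptions.
print(f"peak apparent magnitude ~ {m:.1f}")
```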
- The Supernova Early Warning System could perhaps detect neutrinos a few hours before the main explosion...but it's not really certain that this is true. Aside from that - nothing goes faster than the speed of light - so the light gets here years to centuries before the particulate material. SteveBaker (talk) 23:56, 9 December 2009 (UTC)
- Nitpick: the neutrinos are the main explosion. Everything else (such as electromagnetic radiation and the kinetic energy of the expanding gases) comprises less than 1% of the energy released. Algebraist 00:06, 10 December 2009 (UTC)
- Wow, thanks, I didn't know we had an article about that. Or even that neutrino detectors are constructed and maintained especially for this purpose. However, these 3 hours seem frighteningly short, far less than required to inform a significant percentage of the population. Especially those who would be more prone to panic. Are there no other ways to detect a supernova from a star fairly big and close enough? Measure extreme size fluctuation, spectrum of emitted light, or other symptoms of an impending supernova? --131.188.3.21 (talk) 00:13, 10 December 2009 (UTC)
- I take it the name is a bit tongue-in-cheek. I gather that its purpose is not so much to provide a warning as a notice, so that the astronomers can get their telescopes pointed in the right direction. --Trovatore (talk) 00:15, 10 December 2009 (UTC)
- Why would anybody panic? Dauto (talk) 00:58, 10 December 2009 (UTC)
- (EC) and my question as well. You seem awfully certain that everything would go to hell - why? 218.25.32.210 (talk) 00:59, 10 December 2009 (UTC)
- Well, not everyone. Pretty sure if you walk down the street in the evening and see a big flash of light in the sky, growing bigger and bigger until it's brighter than the Moon, you will know in an instant that "wow, that's a supernova, cool" and go on. I'm not sure everyone will be like this. Look at people committing suicide because of some freaking comets. And comets are seen frequently enough that people should be accustomed to them. --131.188.3.20 (talk) 01:43, 10 December 2009 (UTC)
- And of course it's only a 50/50 chance that you'll see it first hand - you might be on the opposite side of the planet at the time and only hear about it on the news. That'll give you plenty of time to update supernova and get a head start on Supernova mass panic of 2009 and List of supernovea in 2009. SteveBaker (talk) 01:57, 10 December 2009 (UTC)
- Be sure to re-direct List of supernovea in 2009 to the correct spelling, List of supernovae in 2009. Nimur (talk) 02:03, 10 December 2009 (UTC)
- Come on, if you're creating the thing you don't want to miss the chance of List of supernovæ in 2009. Algebraist 02:15, 10 December 2009 (UTC)
- I think you overestimate the panic effect. Some people will panic, because some people will panic at anything, but most will turn on the news, or go on to Google, or whatever, and find out what it is. Those who cannot do any of these things will probably just wonder. Maybe fear. But full-blown, mobs-and-suicide panic? Show me the precedent for it in modern times. --Mr.98 (talk) 02:10, 10 December 2009 (UTC)
- I'm surprised there isn't more panic over this. Originally thought to be a hoax, it has been circulating in many "well-reputed" newspapers, Daily Mail and Dagens Nyheter, and Fox News for example. Frankly, whether it is natural or the result of a rocket misfire, it's fairly frightening. And if it turns out to be a hoax, that is also frightening - one would hope that a giant apparition in the sky would be easily verifiable or refuted by major regional news outlets. ... So, do we have 2009 Norwegian sky apparition yet? Nimur (talk) 02:20, 10 December 2009 (UTC)
- Spaceweather.com has a lot of info about this -- apparently it was a Russian ICBM test that went awry. Looie496 (talk) 02:48, 10 December 2009 (UTC)
- Well, that makes me feel much better.... --Trovatore (talk) 02:49, 10 December 2009 (UTC)
- Not everyone is as well educated as you guys, and not everyone is even literate on this planet. However, I don't want to push this further, because the point of the question was not how big or small the panic would be, but how soon can we reliably predict the event, which still has no meaningful answer except for the 3 hours given by neutrino detection. --131.188.3.20 (talk) 10:48, 10 December 2009 (UTC)
- I don't think anyone thinks it is about being educated and literate. The question is whether people react with panic to such things to any significant degree. I think the vast majority of people, anywhere, are more likely to just hunker down and wait (or assimilate it into their world-view, which people do pretty well), rather than panic. I am no expert on mob psychology but strange things in the sky don't seem like serious triggers to me, compared to, say, accusations of rape by people of another race and things that really push human psychological buttons. --Mr.98 (talk) 15:02, 10 December 2009 (UTC)
- I will note that SN 1054, the supernova which created the Crab Nebula, was widely observed. In 1054 AD, it remained visible during daylight for more than three weeks, and yet was not linked to mass suicides, rioting, or other chaos. TenOfAllTrades(talk) 19:02, 10 December 2009 (UTC)
- I like the OP's belief that "leaders of mainstream religions" when given astronomical facts will explain to "their people" exactly what is bound to happen. Think of Pope Urban VII getting facts from Galileo or Marshall Applewhite explaining Comet Hale-Bopp. Cuddlyable3 (talk) 22:20, 10 December 2009 (UTC)
- Indeed - there is a long history of political, military and religious leaders gaining the capability to calculate solar eclipses and using that knowledge to scare the populace into bending to their will. Far from carefully explaining and calming the populace - they have often used this knowledge to make some dire proclamation at the moment of the eclipse and scare the bejeezus out of the poor, math/astronomy-deprived masses. There have been a few cases (Thales of Miletus for example) where this knowledge has been used for good...but it's not typical! SteveBaker (talk) 13:59, 11 December 2009 (UTC)
- We've heard yet another ranting about how absolutely evil and primitive every society was except a libertarian one. Thanks :P But this still does not answer the question: How early can we reliably detect a supernova of a similar scale to what we can expect from Betelgeuse, for example? --131.188.3.21 (talk) 20:26, 11 December 2009 (UTC)
- Currently, we don't know how to do this, other than by detecting the neutrino flux, which as previously said arrives at best a couple or so hours before the visible photons ramp up (not because neutrinos are faster, but because they're the first thing to be produced when the star actually blows) - consider SN 1987A, where the gap was about 3 hours, and this for a star not actually in our own Galaxy, though close to it in a nearby satellite galaxy.
- There are two problems. One is that nearby (actually in or very near our own Galaxy) and observable supernovae, bright enough to be easily visible to the naked eye, are very infrequent - SN 1987A was the first in several hundred years (the previous being SN 1604); because of this we haven't had much chance to work out any 'warning signs', bearing in mind that there is more than one type, and cause, of supernova. The other is that what warning signs there might be may well be detectable only with considerably better telescopes (or other instruments) than we currently have. 87.81.230.195 (talk) 23:27, 11 December 2009 (UTC)
December 10
sulfoxide functional groups in drugs
Is the primary fate of sulfoxides to interact with cysteine residues in enzymes to form sulfide-sulfide bonds? John Riemann Soong (talk) 01:28, 10 December 2009 (UTC)
L1 Lagrange point
The Lagrangian point article states, "The Earth–Moon L1 allows easy access to lunar and earth orbits with minimal change in velocity and would be ideal for a half-way manned space station intended to help transport cargo and personnel to the Moon and back."
As far as I can tell from some of the external links at the article, L1 is past the orbit of the Moon. For some reason this example point isn't spelled out in the article though others are. I had to go looking elsewhere to see where L1 is in relation to the moon's orbit. So, could someone explain, more completely and in fairly basic terms (i.e. I'm not an amateur astronomer), why a point which is past the Moon's orbit would be a good half way point for going to the moon? Thanks, Dismas|(talk) 03:51, 10 December 2009 (UTC)
- This is the Earth-Moon L1, which is of course between the Earth and the Moon. You're thinking of the Earth-Sun L1. Algebraist 03:57, 10 December 2009 (UTC)
- Ah! Right. Got it. Sorry for that... I must have read it too quickly. Dismas|(talk) 03:59, 10 December 2009 (UTC)
I'm suspicious of the claim, though. The Earth-Moon L1 point is about 5/6 of the way to the moon, and the amount of energy required to get there from here is pretty close to the amount to get all the way to the Moon. In what way would it be logistically useful to stop there? --Anonymous, 04:31 UTC, December 10, 2009.
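The "5/6" figure is easy to reproduce with the usual approximation that L1 sits about one Hill radius from the Moon, d ≈ R·(m/3M)^(1/3). A minimal sketch using standard Earth and Moon values:

```python
M_EARTH = 5.972e24   # kg
M_MOON = 7.342e22    # kg
R = 384_400          # km, mean Earth-Moon distance

d_from_moon = R * (M_MOON / (3 * M_EARTH)) ** (1 / 3)
print(f"L1 is ~{d_from_moon:,.0f} km from the Moon")          # ~61,500 km
print(f"~{(R - d_from_moon) / R:.2f} of the way from Earth")  # ~0.84, about 5/6
```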
- It may be 5/6 of the way by distance, but distance isn't really relevant in the context of space travel. Since there is minimal drag in space, you can get pretty far without having to exert any force (spend any energy), which is what costs fuel/money. From the first line of this question, quoted from the article, you can see that by definition, the Lagrangian point has minimal change in velocity to switch from an Earth orbit to a Moon orbit. Change in velocity = acceleration = force ~ cost. moink (talk) 09:44, 10 December 2009 (UTC)
- First, the relevance of the distance from Earth is that it determines the energy requirement to get there from here, i.e. how high you have to rise in the Earth's gravity well. Second, the cost of switching from "an" Earth orbit to "a" Moon orbit isn't important; what matters is the cost of switching between useful orbits. --Anonymous, 10:37 UTC, December 10, 2009.
- The tight linkage between distance traveled and energy expended (e.g. on Earth's surface) is due to the need to overcome friction, wind resistance, etc. Moving with constant momentum in a vacuum where gravity is negligible would require nearly zero energy. Add gravity, and the energy required is related to work done against gravity, with distance playing a role only as it relates to the force of gravity. Thus, when talking about the energy required to move in space, distance is not as relevant as intuition might suggest. Of course, distance will affect the time required to make a trip, in a velocity-dependent way. -- Scray (talk) 11:49, 10 December 2009 (UTC)
- For the third time, I'm talking about the distance only in relation to the force and energy required to overcome (the Earth's) gravity. --Anonymous, 21:33 UTC, December 10, 2009.
- By definition, L1 lies on the path where you climb the least out of Earth's gravity well before falling into the Moon's gravity well. Dragons flight (talk) 12:23, 10 December 2009 (UTC)
- Correct, but velocity is relevant as well as position. To make a stop at L1 you must expend energy to enter an orbit matching L1, then more energy to get moving toward the Moon again. --Anonymous, 21:40 UTC, December 10, 2009.
- By its nature, both Earth and the Moon are "downhill" from L1. Yes, if you're stopped at L1, you need to expend energy to get moving again, but since an orbit at L1 is an unstable equilibrium, any expenditure of energy, no matter how small, is sufficient. --Carnildo (talk) 00:40, 11 December 2009 (UTC)
- Very true. However, the point about the difference between "an orbit" and "a useful orbit" is a very good one - a small expenditure of energy would get you into a very high orbit around either the Earth or the Moon, neither of which is very useful. --Tango (talk) 17:10, 11 December 2009 (UTC)
Known changes: mental function: adulthood
What is known about changes of a physiological sort and also perhaps a behavioral sort that occur in the brain and in the minds of people in the years between the beginning of adulthood and the beginning of old age? These points may be poorly defined, especially "old age." But I seem to recall seeing a lot written on how these things change through childhood and perhaps into early adulthood. And it is known that age significantly correlates with the mental decline seen in some older people. But is anything known about any changes that transpire in the forty or fifty years in between these two points? Bus stop (talk) 04:17, 10 December 2009 (UTC)
- Gerontology is the study of ageing. Developmental psychology has something to say about psychological changes associated with adulthood. --TammyMoet (talk) 10:08, 10 December 2009 (UTC)
Although I am old there is nothing wrong with my short term memory nor is there anything wrong with my short term memory.Cuddlyable3 (talk) 21:23, 10 December 2009 (UTC)
Life Expectancy in 2050
What will the life expectancy of the world be in 2050?
What will the life expectancy of America be in 2050?
What will the life expectancy of Australia be in 2050?
Bowei Huang (talk) 05:06, 10 December 2009 (UTC)
- Wikipedia is not a crystal ball. That being said, I would guess that it wouldn't be much higher than today because the life expectancy of those countries seems to be close to the maximum life span. Jkasd 08:35, 10 December 2009 (UTC)
- Wikipedia might not be a crystal ball, but others have made estimates.[11][12][13] If you want to see more examples, look on Google News and Google Scholar. Fences&Windows 15:19, 10 December 2009 (UTC)
- I answered a very similar question at the Miscellaneous desk and the U.S. census source I give there also does future projections. SO the answer there will also answer this question. --Jayron32 16:25, 10 December 2009 (UTC)
But what if we don't talk about the life expectancy of America or Australia, we just talk about the life expectancy of the world? What are the projections for the world's life expectancy in 2050?
Bowei Huang (talk) 23:49, 10 December 2009 (UTC)
Bowei Huang: Go to the miscellaneous desk. Find the very similar question you asked there. Click the link I gave you for the U.S. Census Bureau's International Database. Follow the instructions I gave there to find the data you are looking for. It has data for the USA, for Australia, and for the whole world, and for every year going back a long time, and for projections for many years into the future. It's all there. Trust me. You don't have to keep asking. It's all there for you to find. --Jayron32 05:43, 11 December 2009 (UTC)
E.V.S. project
Which topic is good for an E.V.S. project? —Preceding unsigned comment added by 117.200.178.181 (talk) 06:55, 10 December 2009 (UTC)
- What is an EVS project? Dismas|(talk) 06:59, 10 December 2009 (UTC)
Why is potassium chloride caustic if its pH is 7?
do you guys know? —Preceding unsigned comment added by 74.65.3.30 (talk) 10:21, 10 December 2009 (UTC)
- It isn't. The burning sensation you feel if it gets into an open wound is a consequence of potassium triggering the exposed free nerve ends (somebody correct me if I'm wrong). Both potassium and chloride potentials are important components of the electrical balance in nerve cells. — Yerpo Eh? 10:28, 10 December 2009 (UTC)
- In addition to Yerpo's point above, any water-soluble salt or concentrated salt solution (including regular old sodium chloride: table salt) will cause discomfort in an open wound. The high salt concentration outside the body's tissues will draw out water, causing a localized osmotic stress and triggering pain. TenOfAllTrades(talk) 14:42, 10 December 2009 (UTC)
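The size of that osmotic stress can be estimated with the van 't Hoff relation Π = i·M·R·T (a minimal sketch; i ≈ 2 is the assumption for a fully dissociated 1:1 salt such as NaCl or KCl):

```python
R_GAS = 0.083145                   # gas constant, L*bar/(mol*K)
i, molarity, temp = 2, 1.0, 310.0  # dissociation factor, mol/L, body temp (K)

pressure = i * molarity * R_GAS * temp
print(f"~{pressure:.0f} bar")      # ~52 bar of osmotic pressure for 1 M salt
```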
- Also note that potassium chloride, unlike sodium chloride, is very damaging to skin and tissue and does not promote healing but rather the opposite. A wound will not heal if kept exposed to potassium chloride, and exposure of the intestinal tract to potassium chloride will cause ulcers. The reason may be linked to the fact that the primary extracellular ion is sodium, not potassium; correct me if I am wrong. 71.100.160.161 (talk) 17:42, 10 December 2009 (UTC)
- Potassium chloride is a common ingredient in salt substitutes, so in quantities similar to sodium chloride it would not be likely to cause adverse effects in human consumption. Googlemeister (talk) 20:29, 10 December 2009 (UTC)
- Next time you get a cut or abrasion don't do a reality check by using potassium chloride as an antiseptic or to cover and protect the wound even if you believe your opinion is fact. 71.100.160.161 (talk) 22:08, 10 December 2009 (UTC)
- Don't credit me for things I did not write. I said nothing about skin application, only that your statement that potassium chloride causes ulcers when eaten is demonstrably false when consumption is of the same magnitude as one would eat sodium chloride. Googlemeister (talk) 22:20, 10 December 2009 (UTC)
- People use salt to cleanse wounds all of the time. The potassium in foods like bananas is relatively safe if you do not eat too many. Your taste buds, however, can tolerate a great deal more than your intestines. There are two types of salt substitute. One is an approximate 50/50 mix of sodium and potassium. The other is all potassium. What is needed are warning labels. 0.7 grams per liter of water is the limit for the 50/50 mix. 0.7 grams of the 100% and you will begin to have pain in your gut. 71.100.160.161 (talk) 22:50, 10 December 2009 (UTC)
Why is potassium chloride damaging to the skin if its pH is 7? Isn't that neutral?
- Osmolarity. Take a look at the top picture (left most panel) in the hypertonic article and imagine that's what's happening to your skin cells. -- 128.104.113.17 (talk) 17:28, 11 December 2009 (UTC)
There's also the fact that potassium depolarises cell membranes. (Extracellular potassium levels are supposed to be low, whereas intracellular potassium levels are supposed to be high.) That's why intravenous potassium chloride is a method for executions. John Riemann Soong (talk) 22:27, 11 December 2009 (UTC)
Sense of touch
How fast does it travel? And how does it do so so fast that when you touch something, you immediately feel it? Accdude92 (talk to me!) (sign) 14:47, 10 December 2009 (UTC)
- It's not instantaneous—it's as fast as your nerves can transmit the signal and your brain can make sense of it (though some types of sensations—like extreme pain—can be processed without your brain fully understanding them, and responded to with a reflex, if I recall). Reaction time is probably a good place to start. --Mr.98 (talk) 14:55, 10 December 2009 (UTC)
- See Axon#Sensory for signal travel speed. It depends on the fiber myelination. --Mark PEA (talk) 15:09, 10 December 2009 (UTC)
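Putting those conduction speeds together with a body-scale distance gives the latency directly (a minimal sketch; the ~1.5 m fingertip-to-brain path and ~60 m/s speed for myelinated touch fibres are illustrative assumptions):

```python
distance_m = 1.5    # assumed fingertip-to-brain path length, metres
speed_m_s = 60.0    # assumed conduction speed of myelinated touch fibres

latency_ms = 1000 * distance_m / speed_m_s
print(f"~{latency_ms:.0f} ms")   # ~25 ms: fast enough to feel instant
```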
Measuring magnetic susceptibility
Is a Gouy balance the same thing as a Faraday balance? They are both used for measuring magnetic properties. Alaphent (talk) 16:35, 10 December 2009 (UTC)
- We have an article on Gouy balance. The Faraday balance method is very similar with the difference being in the size of the sample. In the Faraday method, a small sample (essentially a point) is balanced in a graded magnetic field. In the Gouy method, the magnetic field is constant but the length of a sample rod in the field is varied. This book shows the difference diagramatically. SpinningSpark 19:08, 10 December 2009 (UTC)
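For the Gouy geometry, the measured force follows F = χ·A·B²/(2·μ0) for a rod of cross-section A with one end in field B and the other in negligible field. A minimal sketch with an illustrative weakly paramagnetic sample (the numbers are assumptions, not from the source above):

```python
import math

MU0 = 4e-7 * math.pi    # vacuum permeability, T*m/A

def gouy_force(chi, area_m2, b_tesla):
    """Net force on a Gouy sample rod (volume susceptibility chi, SI units)."""
    return chi * area_m2 * b_tesla ** 2 / (2 * MU0)

# illustrative values: chi = 1e-4, 1 cm^2 cross-section, 1 T field
print(f"{gouy_force(1e-4, 1e-4, 1.0):.1e} N")   # ~4e-3 N, easily weighable
```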
ammonia sanitizer
I understand that the meat processing industry uses anhydrous ammonia to kill E. coli and other meat product contaminants by sealed exposure to the gas. Is this really done, does it work, and is it harmful to the meat products or to the consumer? 71.100.160.161 (talk) 17:33, 10 December 2009 (UTC)
- It seemingly is done, and works, according to Section 7.4.4 of the article you yourself linked to, which also says that the US Department of Agriculture says it's safe. Now, how far do you trust them? 87.81.230.195 (talk) 01:57, 11 December 2009 (UTC)
Sodium bicarbonate pH?
Why is the pH listed on this site as a 10 when it is really an 8? —Preceding unsigned comment added by 74.65.3.30 (talk) 17:56, 10 December 2009 (UTC)
- Solid sodium bicarbonate has no pH. pH is the property of a substance when it dissolves in water. pH is a measure of the concentration of something called hydronium ions in water. pH is based on a negative logarithm scale, which means that small numbers indicate a higher concentration of hydronium, and larger numbers indicate a smaller concentration. It also means that each increase of 1 on the pH scale means a factor of 10 in concentration, so a pH of 1 has 10 times the concentration of hydronium as does pH 2, and 100 times the concentration as pH 3. Now, how much sodium bicarbonate you add to the water will affect how much hydronium remains, which in turn determines the pH.
A 1.00 molar solution of sodium bicarbonate has a pH of 10.3, but if you had a more dilute solution, the pH would be closer to that of water (pH = 7), while a more concentrated solution would result in a pH farther from water. --Jayron32 20:08, 10 December 2009 (UTC)
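The factor-of-ten point is literally the definition pH = -log10[H3O+], which a three-line sketch makes explicit:

```python
import math

for conc in (1e-1, 1e-2, 1e-3):   # hydronium concentration, mol/L
    print(f"{conc} M -> pH {-math.log10(conc):.0f}")   # pH 1, 2, 3
```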
- Please tell us which site lists Sodium Bicarbonate as pH 10 or 8. It does not seem to be the Wikipedia article.Cuddlyable3 (talk) 21:03, 10 December 2009 (UTC)
- The Wikipedia article lists the pKa as 10.3... Presumably, the OP confused the terms pKa and pH. I may have too, now that I look at my reasoning. The half-equivalence point of a solution of sodium bicarb will have a pH of 10.3, not a 1 molar solution. Regardless, the OP seems to have a general misunderstanding of how pH works. --Jayron32 21:19, 10 December 2009 (UTC)
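For an amphiprotic salt like sodium bicarbonate, the solution pH is in fact roughly concentration-independent, pH ≈ (pKa1 + pKa2)/2, which reconciles the "really an 8" observation with the 10.3 in the article. A minimal sketch; the pKa values are standard literature figures for carbonic acid:

```python
PKA1 = 6.35    # H2CO3 <-> HCO3-
PKA2 = 10.33   # HCO3- <-> CO3^2-  (the "10.3" listed in the article)

print(f"pH of a NaHCO3 solution ~ {(PKA1 + PKA2) / 2:.1f}")   # ~8.3
```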
So why doesn't the wiki article just list the real pH? In fact, most chemicals on wiki don't have a pH listed. Why?
- Did you actually read anything I wrote, or even click the links and read the articles? pH refers to a very specific property of a very specific type of thing. It is specifically the amount of hydronium ions created when you dissolve something in water. That amount of hydronium ions is going to be dependent on how much you dump into the water. Nothing has an inherent pH. It's not a property of a substance; it's a property of a mixture between a substance and water. If you dump two scoops of sodium bicarbonate in water, the mixture will have a different pH than if you dump one scoop of sodium bicarbonate in water. --Jayron32 05:40, 11 December 2009 (UTC)
yer a fucking idiot if i take baking soda POWDER to a lab and ask them the ph they will tell me. quick lime ph is like 11 and ITS A FUCKING POWDER U IDIOT — Preceding unsigned comment added by 74.65.3.30 (talk • contribs)
- No, they don't tell you the pH of a powder because powders do not have a pH. Read the article titled pH. The first line of that article is "pH is a measure of the acidity or basicity of a solution". You should also probably read what a solution is, if that doesn't make sense to you. And typing in all caps and calling people names doesn't make you right. It just makes you look rude. --Jayron32 06:37, 11 December 2009 (UTC)
- No need for name-calling. You say "if i take baking soda POWDER to a lab and ask them the ph they will tell me. quick lime ph is like 11".[original research?] Please actually do this. Let us know what lab and what they tell you. DMacks (talk) 06:58, 11 December 2009 (UTC)
Since this was an issue which I noticed when searching on this, can someone look into Talk:Sodium_bicarbonate#pKa Nil Einne (talk) 10:31, 11 December 2009 (UTC)
To be fair, before I'd really got a handle on what pKa was, the WP articles on acids etc confused me too. I know that it's obviously not useful to put up pHs of various solutions, but I think the OP has fallen into a fairly common hole that lurks within the chemistry articles. Brammers (talk) 10:35, 11 December 2009 (UTC)
- The chembox entry link points to Acid dissociation constant, so (assuming people actually click a link before assuming what a term means) that is the page that needs to be very clear very early what the difference between the acidicity of a "chemical in solution" vs the acidity of a "solution of a chemical". DMacks (talk) 10:47, 11 December 2009 (UTC)
- The deal is, with acid-base chemistry, it is pretty complicated. I think I did my best to explain what pH is above, but really, consider that we have three theories of acid-base chemistry, and they ALL serve their purpose (Arrhenius theory, Brønsted–Lowry theory, and Lewis theory). If people arrive at Wikipedia with a misunderstanding of what pH is, all we can do is attempt to correct the misunderstanding in terms they are likely to understand. If the articles that exist at Wikipedia need fixing in order to make them clearer lets do that too. This was a case of someone just being rude for its own sake. I patiently explained in two different ways how pH worked, and got called a "fucking idiot". I don't know that the person who asked the question is in the proper frame of mind to be educated, given his response. --Jayron32 17:04, 11 December 2009 (UTC)
should a shit blanket go over an awesome one or vice versa?
If I have a shit excuse for a blanket and also an awesome Taj Mahal of blankets, would I obtain optimum warmth by putting the shit excuse for a blanket over me and the Taj Mahal over that, or vice versa? What is your reasoning? 92.230.69.195 (talk) 19:20, 10 December 2009 (UTC)
- It makes little difference in warmth, but it may make a difference in comfort. Which one makes you feel itchy? Dauto (talk) 19:49, 10 December 2009 (UTC)
- It should make little difference if they are both intact. If the shitty blanket has holes in it, allowing for convection, you would probably be warmer if that is on the inside. Dragons flight (talk) 19:55, 10 December 2009 (UTC)
- (ec) It depends on what makes it shitty. If it just had holes in it, I think that could conceivably improve the usefulness of the under-blanket as it wouldn't detract from the layer of warm air accumulating under the covers and might actually allow some additional circulation of air to lessen the sweats. On the other hand, if it's shitty because it's starchy or plastic-y, it might be better on top because the starchiness might interfere with circulation and/or feel scratchy. Matt Deres (talk) 19:58, 10 December 2009 (UTC)
- "what is your reasoning" With a title like "should a shit blanket go over an awesome one or vice versa" thank *god* there is a reasoning requirement! But seriously, there are many factors to measure blanket effectiveness, you need to be more specific. As an avid camper that spends nights in a thin tent at sub 0F temperatures, I would say that the wind/water impermeability is paramount for the outermost layer (meaning it stops cold air/water from getting in), whereas the inner layers are measured by their fluffiness (meaning they better insulate the heat inside). Hope this helps! --66.195.232.121 (talk) 21:03, 10 December 2009 (UTC)
- When in the past I slept in a cold room under blankets, I noticed that I would feel warmer if I put something over the blankets that stopped the warm air from rising up and escaping. Similarly, a sleeping bag inside a large plastic bag is warmer (beware suffocating, and you get lots of condensation). So put the most wind-proof one on top. 89.242.147.237 (talk) 23:04, 10 December 2009 (UTC)
Rubbing salt into the wound
I'd always assumed that when salt was rubbed into the wounds of chimney-sweeps, for example, they were causing them pain (an osmosis-based pain, WP says) but also doing them a favour of some kind, perhaps antiseptic or similar. Is this correct, or was it just sadism? - Jarry1250 [Humorous? Discuss.] 20:19, 10 December 2009 (UTC)
- The hyperosmolarity is the cause of the antiseptic effect. Wisdom89 (T / C) 21:30, 10 December 2009 (UTC)
In The Future Is Wild, is the supercontinent Amasia (formed when the Pacific closes, leaving only the Atlantic Ocean), or is it like Pangaea Ultima? Does The Future Is Wild show the Pacific Ocean closing? --209.129.85.4 (talk) 20:24, 10 December 2009 (UTC)
atom "melting" temp
What temperature would it take to literally rip an atom apart, that is, the nucleus flies apart and the electrons are lost, leaving only random bits of subatomic particles? I mean, sure, temperature is something like how fast atoms or molecules are moving and vibrating and such, but surely some temperature would be large enough that atoms are moving fast enough that the forces holding the atom together are no longer sufficient? Googlemeister (talk) 21:26, 10 December 2009 (UTC)
- A wild guess: look at the binding energies. 1–9 MeV corresponds to temperatures of 12–104 GK. --Tardis (talk) 21:43, 10 December 2009 (UTC)
- (ec) That's one of those questions whose answer will depend heavily on how it is interpreted. Some radioisotopes will fission spontaneously at room temperature; does that meet the minimum standard? At the other extreme, the binding energy of an atomic nucleus is the (hypothetical) amount of energy required to pull all of its nucleons apart into separate particles. Figure it's 7 or 8 MeV per nucleon (neutron or proton), and the energy of a nucleon at temperature T will be (very roughly) on the order of k_B·T. In that case, the whole thing drops apart at around 10^11 kelvin (that's 8 MeV divided by k_B). The thermal energy of each particle will be roughly equal to the energy with which it would be bound to the nucleus — which is probably closer to the sort of answer you're looking for. Note that I'm back-of-the-enveloping things here, so if anyone has a better answer, go to it. TenOfAllTrades(talk) 21:48, 10 December 2009 (UTC)
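- For anyone who wants to reproduce that arithmetic, here is a minimal Python sketch of the E ≈ k_B·T estimate above (order-of-magnitude only; the 8 MeV and 13.6 eV inputs are just the figures quoted in this thread):
 KB_EV_PER_K = 8.617e-5  # Boltzmann constant in eV per kelvin
 def temperature_for_energy(energy_ev):
     """Temperature (K) at which thermal energy k_B*T matches the given energy (eV)."""
     return energy_ev / KB_EV_PER_K
 print(temperature_for_energy(8e6))   # ~8 MeV per nucleon -> roughly 9e10 K
 print(temperature_for_energy(13.6))  # hydrogen's outermost electron -> roughly 1.6e5 K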
- (also ec)
- Even in extreme conditions, subatomic particles don't group themselves at "random"; they have a strong tendency to collect in specific groups that form the nuclei of the stable elements and their isotopes. If you start with the nucleus of a radioactive isotope, it'll be unstable no matter what the temperature; for example, a nucleus of radium 226 will sooner or later rip itself apart, all by itself, into a helium 4 nucleus (otherwise called an alpha particle) and a radon 222 nucleus. This will continue with the radon nucleus emitting another alpha particle, and so on until all of the products are stable nuclei.
- If you raise the temperature, stable nuclei will start colliding and breaking up in other ways, or fusing together. But each specific reaction requires a different amount of energy (because of the tendency of particles to collect in specific groups) and therefore a different temperature. Thus, for example, inside the Sun the core temperature is about 15,700,000 K or °C (say 28,000,000°F). At this temperature colliding nuclei will produce certain fusion reactions with the overall effect that four hydrogen 1 nuclei (i.e. protons) end up forming one helium 4 nucleus. But no other important reactions occur at that temperature. On the other hand, the core of a star that's about to become a Type II supernova contains iron 56 at 2,500,000,000 K or °C (4,500,000,000°F), and it's when this iron begins reacting that the star explodes (because the reactions absorb energy and the core collapses).
- So the answer is basically "from the tens of millions of degrees up to the billions, depending on what element you're talking about".
- That's for the disruption of nuclei. Electrons are lost at much, much lower temperatures, I think in the tens of thousands of degrees. See plasma (physics). --Anonymous, 22:12 UTC, December 10, 2009.
- It depends on which electrons. The outermost electron only costs you between 5 and 25 eV (thousands or tens of thousands of degrees), but the binding energies of core electrons get into the hundreds of eV very, very fast. (Each time you pull another electron off, there are fewer electrons remaining to screen the positive charge of the nucleus, and electrons in core orbitals are 'deeper' down to begin with.) TenOfAllTrades(talk) 22:50, 10 December 2009 (UTC)
- But the limit is still about 136 keV (Z squared times the Rydberg energy, with Z ≈ 100 for the heaviest nuclei), which is still much smaller than even deuterium's per-nucleon binding energy. --Tardis (talk) 23:12, 10 December 2009 (UTC)
- Oh, absolutely! My point was more that you're not going to get fully-stripped nuclei until you hit millions of degrees, not that the core electron binding energies were comparable to nucleon binding energies. (Indeed, if they were, we'd have some very interesting transmutation chemistry accessible to us....) TenOfAllTrades(talk) 23:39, 10 December 2009 (UTC)
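- In case it helps to see where a number like 136 keV comes from: it is just the hydrogen-like (Bohr) formula with Z ≈ 100. A quick Python sketch — screening is ignored, so these overestimate real core-electron ionization energies:
 RYDBERG_EV = 13.6  # hydrogen ground-state binding energy, eV
 def hydrogenlike_binding_ev(z, n=1):
     """Binding energy (eV) of one electron around nuclear charge Z, screening ignored."""
     return RYDBERG_EV * z ** 2 / n ** 2
 print(hydrogenlike_binding_ev(100))  # ~136 keV for a 1s electron at Z = 100
 print(hydrogenlike_binding_ev(26))   # ~9.2 keV for iron's innermost electron (ignoring screening)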
- This is one of the reasons, incidentally, that nuclear fission was so unintuitive to nuclear physicists. If you calculate based on binding energies alone, it should be VERY hard to make a large nucleus break apart—it should take very high energies. But, in fact, it takes low-energy neutrons to do it (in U-235, anyway)... because it's not just about the binding energy alone. One physicist described it as throwing a softball at a house and watching the whole structure split into pieces. --Mr.98 (talk) 18:20, 11 December 2009 (UTC)
Free energy interpretation
I don't know why the Arrhenius equation is being used, since that is a kinetic thing, not a thermodynamic thing...? For the melting temperature you'd have to find the T where the free energy of free nucleons equals the free energy of the bound nucleus. If an atom is unstable, you can thus see that even at, say, 300 K, a tiny amount of the reactive species will exist at any one time. John Riemann Soong (talk) 22:13, 11 December 2009 (UTC)
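- To put a number on that "tiny amount": at equilibrium, the population of a species sitting an energy ΔE above the ground state scales with the Boltzmann factor. A small Python sketch (the 1 eV input is a made-up illustrative barrier, not a measured value):
 import math
 KB_EV_PER_K = 8.617e-5  # Boltzmann constant, eV/K
 def boltzmann_fraction(delta_e_ev, temp_k):
     """Equilibrium fraction ~ exp(-dE / (k_B*T)) for a species dE above the ground state."""
     return math.exp(-delta_e_ev / (KB_EV_PER_K * temp_k))
 print(boltzmann_fraction(1.0, 300))  # ~1.6e-17: rare, but not zero
 print(boltzmann_fraction(8e6, 300))  # underflows to 0.0: no nuclear "melting" at 300 K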
Highest temperature
Was the moment of the Big Bang the moment of the highest temperature and if so by what curvature has the universe cooled? 71.100.160.161 (talk) 22:13, 10 December 2009 (UTC)
- Read Planck temperature, Planck epoch and Absolute hot. The general relativity model predicts infinite temperature, but the formulation breaks down at that time period anyway. Graeme Bartlett (talk) 23:42, 11 December 2009 (UTC)
Miller–Urey-type experiments
Miller–Urey-type experiments with more realistic predictions about early Earth's atmospheric composition produce many amino acids, but they also produce deadly toxins such as formaldehyde and cyanide. Even if, by some freak chance, amino acids assembled themselves in the correct order to form usable proteins and then life, why wasn't early life killed off by these toxins that were formed at the same time? --76.194.202.247 (talk) 23:28, 10 December 2009 (UTC)
- Firstly, the concentration of these toxic substances in the sea would not have been that great. Secondly, how do you know that these chemicals were toxic to early lifeforms? Pseudomonas aeruginosa, for instance, is known to be tolerant to hydrogen cyanide, indeed, it will synthesise this chemical in low oxygen conditions. Escherichia coli is tolerant to formaldehyde. On the other hand, most early life was poisoned by oxygen and would not last 5 minutes if it was released now. Life evolves to cope with the environment it finds itself in. SpinningSpark 00:21, 11 December 2009 (UTC)
- That. It's also worth mentioning, perhaps, that the earliest products wouldn't have been necessarily life as we know it, but rather just organic structures that could perpetuate themselves somehow. That's something a lot closer to viruses, for example, than to a living multi-celled organism. ~ Amory (u • t • c) 02:07, 11 December 2009 (UTC)
- You might want to have a look at our article on abiogenesis; it touches on some of the theories for how life came about. TenOfAllTrades(talk) 04:43, 11 December 2009 (UTC)
December 11
Sewing machines
I can't find anything on how to use a sewing machine to sew a patch in the middle of a large sheet of fabric (50' x 50').
The problem I'm running into is that there isn't enough room between the needle and the base of the sewing machine for that much fabric.
But I've seen patches done in large tents, tarps, sails, blankets, etc.
What's the secret?
(Is there another type of machine that is used for this?)
Simple Simon Ate the Pieman (talk) 03:52, 11 December 2009 (UTC)
- There are special "long arm" machines for that kind of thing. Like this one, for example. SteveBaker (talk) 05:56, 11 December 2009 (UTC)
- The example machine is said ominously to have "All fear driven hook mechanism". Cuddlyable3 (talk) 21:02, 11 December 2009 (UTC)
On the number of exoplanets that transit their stars in the Milky Way
I'm a pretty avid observer of the amateur transit watch community and I was just wondering something. Obviously, astronomers looking for transits are hoping that the rotational plane of the observed planetary system will coincide with our line of sight, thus having planets crossing the light of the star every once in a while, giving us much more direct evidence for their existence. To measure the radial velocity of stars we watch how quickly the star moves towards and away from us, and thus a similar transiting orbital orientation works best for radial velocity measurements as well.
Now my question is, is there a tendency for planetary ecliptic planes to orient themselves in a certain way relative to the orbital plane of the galaxy? Obviously this comes down to the rotation of the star in its early stages, so I could ask instead: do the rotations of stars in any way reflect their orbits around the galaxy? I couldn't find the angle of the solar system's plane relative to the Milky Way's, but from visual memory it doesn't seem to be very close, which means such a tendency won't be of much help to astronomers looking for transits; but I guess it's also possible that local planetary systems will have similar planes, due to similar environment, age, etc.
Does anyone have any insight on this? Thanks! 219.102.221.182 (talk) 05:02, 11 December 2009 (UTC)
- I doubt there is any special tendency for the plane of the stellar ecliptic to line up with the galactic ecliptic. So the probability of an exoplanet eclipsing its parent star basically comes down to its orbital radius and the diameter of the star (and to a much, much lesser extent, the diameter of the planet). The angle through which the orbit will occlude the star is arctan(starRadius/planetaryOrbitRadius)x2.0 - if the angle of the star's ecliptic is random then you can easily figure the probability that it'll happen to line up with the position of the earth. So to answer your question - we'd need to know the distribution of diameters of the stars in our galaxy - but we'd also need to know the statistics of the planetary orbital radius...and there's the problem. We haven't got that information until we've already found the exo-planets! We could probably make a guesstimate.
- So if we said that our sun was typical - and we were looking for planets out as far as (say) Saturn - then that eclipse is only visible over an angle of about 0.06 degrees...out of 180 degrees. That's about a one in 3,200 chance. There are a lot of stars out there - so there are a lot of chances - and lots of planets are a lot closer to their stars than Saturn - so the odds are probably a lot better than that. This is a worst-case - where the plane of the stellar ecliptic is completely random. If there is some tendency for the stellar ecliptic to line up with the galactic ecliptic - then for more distant stars - that greatly increases the probability of eclipses - but for closer ones - it decreases the probability. SteveBaker (talk) 05:36, 11 December 2009 (UTC)
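- To make that reproducible, here is a short Python sketch of the same geometry (the radii are rounded standard values; note that the figure more commonly quoted in the literature is simply starRadius/orbitRadius, which comes out a factor of π/2 more optimistic than this wedge construction):
 import math
 def transit_probability(star_radius_km, orbit_radius_km):
     """Chance that a randomly oriented orbital plane yields a transit,
     using the 2*arctan(R/a) wedge out of 180 degrees described above."""
     wedge_deg = math.degrees(2 * math.atan(star_radius_km / orbit_radius_km))
     return wedge_deg / 180.0
 R_SUN = 6.96e5         # km
 SATURN_ORBIT = 1.43e9  # km
 EARTH_ORBIT = 1.496e8  # km
 print(1 / transit_probability(R_SUN, SATURN_ORBIT))  # ~3200, the figure above
 print(1 / transit_probability(R_SUN, EARTH_ORBIT))   # ~340 (R/a instead gives ~215, cf. the 1-in-200 figure below)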
- Thanks! 1 in 3200 chance, that would actually be quite a bit better than I had expected (for random planes). Is there any particular reason you feel that the planes should be random? The fact that the solar system, earth, and most satellites occur on the same plane would seem to hint at a trend, and I have no reason to assume in the other direction. Though I have to say I totally didn't realize the consequences this would have for near star systems... I guess I was thinking too 2-dimensionally. Also it's worth noting that if the sol system isn't nearly planar with the galaxy, but there indeed is a general trend for other planetary systems to be, that could also lower the odds of catching a transiting planet considerably. 219.102.221.182 (talk) 06:34, 11 December 2009 (UTC)
- It is also worth mentioning that gravitational microlensing techniques such as the proposed Galactic Exoplanet Survey Telescope (GEST) are expected to be good for small planets some distance from the star. Both observed transits and radial velocity techniques require a large planet close to the star (which microlensing is not very good for), hence microlensing is expected to find many more planets, and of a smaller size, than previous methods. SpinningSpark 10:46, 11 December 2009 (UTC)
- I agree that one in 3200 seems like unexpectedly good odds - I tried to make it come out worse - but the numbers insisted and we must obey! It's not so much that I have a reason to believe that the ecliptic planes of stars should be random with respect to the plane of the galactic ecliptic - so much as that I can't think of any reason why they shouldn't be random. (Maybe that's the same thing?!) The plane of the Milky Way galaxy is at 60 degrees to the Sun's ecliptic - so unless we're rather special - I'm pretty sure there is no correlation. However, if there is a preferred direction then that's a problem. The deal is that the galactic spiral is quite thick - maybe 1000 light-years. So all of the stars within about 1000 light years of us are much more likely to be above or below us than they are to lie in the same plane. If there is some tendency for planetary disks to lie in the galactic plane (meaning that our sun is weird in that regard) - then we'd be unable to see any eclipses for almost all of the stars within about 1000 ly. That would be bad news for astronomers because the really nice, easy-to-measure stars are going to be the closest ones...and those would be the problematic kind. However - as I've said - there doesn't seem to be a good reason for them all to line up like that (and the Sun certainly doesn't) - so I think the 1:3200 number is about right...at least for stars the size of the sun with planets at the distance of Saturn. Of course, exo-planets that are closer to the parent star will be seen to eclipse their star over a wider angle - and planets further from their star - less so...so the 1:3200 number is just a ballpark figure based on our Solar System. Also - some of those eclipses will be briefer in duration than others if the planet only just eclipses the very edge of the star - so nice long-duration eclipses would be rarer. SteveBaker (talk) 13:48, 11 December 2009 (UTC)
- Check out Methods of detecting extrasolar planets#Transit method - it has some of the probabilities. For an Earth-like planet in an Earth-like orbit (which are the most interesting), it's about 1 in 200. Pretty good odds. --Tango (talk) 14:00, 11 December 2009 (UTC)
Binoculars in Games & Movies.
When game makers and movie makers want to depict the idea that we're seeing the world through a pair of binoculars - they often use the trick of masking off the edges of the screen with a pair of overlapping circles kinda like this:
 #################################
 #######       #######      ######
 ####            #            ####
 ###                           ###
 ###                           ###
 ####            #            ####
 ######       #######       ######
 #################################
I'm pretty sure that out here in the real world, binoculars don't look like that when they are properly set up (sadly, I don't own a pair to try) - it's really just a single circle. Is this a true statement? ...and (because I need to convince some people about it today) is there a cogent explanation as to why that is the case.
Finally - is there a name for this kind of mask - maybe some kind of movie jargon?
TIA SteveBaker (talk) 13:32, 11 December 2009 (UTC)
- You are correct that only a single circle is seen in the real world, however that would waste a lot of screen real estate (cf a "gun scope / crosshairs" view as seen in any typical James Bond film opening.) As far as terminology, in Final Cut Pro, they just use the term "binoculars filter". --LarryMac | Talk 13:50, 11 December 2009 (UTC)
- Yeah - and the 'real-estate' issue is under debate too. I'm building a computer graphics simulation where realism is very important - so loss of screen real-estate is taking second place to "getting it right". What I may do is to use a single circle that's only a little bit narrower than the screen - and let it cut off at the top and bottom. A compromise solution.
- So what is a convincing explanation (to my Producer - who also doesn't have a set of binoculars at hand right now - and who wants a double-circle) of why we only see one circle?
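- The single-circle compromise boils down to one distance test per pixel. A hypothetical Python sketch (binocular_mask and the 0.48 width fraction are made up for illustration; a real renderer would do this in a fragment shader):
 def binocular_mask(width, height, radius_frac=0.48):
     """Single-circle mask: the circle is slightly narrower than the screen
     width, so it clips at the top and bottom of a widescreen frame.
     Returns a row-major list of 0.0 (masked) / 1.0 (visible) values."""
     cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
     radius = radius_frac * width
     mask = []
     for y in range(height):
         for x in range(width):
             inside = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
             mask.append(1.0 if inside else 0.0)
     return mask
 mask = binocular_mask(1280, 720)  # 16:9 frame: the circle spans ~96% of the width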
- Steve, if this is for work, this is the perfect occasion to buy a pair of binoculars on company expense ;-). --Stephan Schulz (talk) 14:13, 11 December 2009 (UTC)
- It's all about frame of reference... The point of looking through binoculars is so that both of your eyes can experience the visual of the down-field magnification. Your brain, doing what it always does with information from your eyes, stitches the two circles together to form one stereoscopic view. A looking glass, spotting scope, monocular, etc. are all single optic versions of the same thing, if you aren't interested in the stereoscopic view. The answer that I am getting to is that there is no good computer screen approximation for the function of binoculars, unless you have a stereoscopic display of some sort, so "getting it right" with binoculars is kind of out of the question. If you want my opinion, either go for the movie cheeze double bubble view, change the device in the game to a monocular and make it a circle, or make it a computerized spotting device of some sort that doesn't use discrete optics but still has the wide form factor of binoculars, so that the onscreen representation can be 4:3 or 16:9 or whatever it is that you are going for. --66.195.232.121 (talk) 14:29, 11 December 2009 (UTC)
- Actually, I can't change the device. What we do is "Serious Games" - using games technology and game development techniques to produce simulations for training real people for the jobs they actually do. They could be firefighters, county sheriffs, black-ops guys, campus security people...you name it. But if what they carry is binoculars - we have to simulate binoculars - the actual binoculars they'd really have with the right field of view, depth of field and magnification. We don't have the luxury of being able to make it into some kind of futuristic gadget. Whatever that person would be issued with in reality is what we'll give them...to the degree of fidelity possible with the computer we're running on. SteveBaker (talk) 22:44, 11 December 2009 (UTC)
- ...and therefore no good computer screen approximation for the function of eyes. 81.131.32.17 (talk) 17:21, 11 December 2009 (UTC)
- (ec) Try this one on for size — even without special equipment, you already have binocular vision. You're looking through two eyes, but you only see one image. The brain is very good at fusing two images into one, and the same merging process happens when you use a pair of binoculars.
- If you don't have a pair of binoculars handy, then you can do a quick and dirty demo with a matched pair of hollow tubes. (Note - this is original research. I just tested this with a toilet paper tube cut in half, as it was the only tubular object I had handy.) Hold the tubes side-by-side, directly in front of your eyes. You want the center of the tubes to be as close as possible to being in line with the pupils of your eyes.
- Now, focus your eyes on an object in the distance. Notice how the inner surfaces of the tubes (as seen from each eye) get merged together, and you have an apparently circular field of view? Presto! You may have to wiggle the tubes a bit to get the positioning right, but without a pair of binoculars it's probably the most convincing demo you're going to get. That said, if your boss wants the double-bubble silhouette, just give it to him/her. Or try to persuade him to equip the in-game character with a spotting scope, telescope, or other single-eyepiece device. TenOfAllTrades(talk) 14:37, 11 December 2009 (UTC)
- I think the important point is not so much that your eyes can fuse the two images, since that doesn't preclude the sort of "double-bubble" effect, but that the fields of vision provided by the binoculars to each eye are nearly identical; they almost completely overlap. In contrast, your normal vision without binoculars is much closer to this double-bubble thing, since the left side of your field of vision is only seen by the left eye and the same with the right. Only the center area is in the overlap seen by both. With binoculars you adjust the position of the two telescopes specifically so that they provide each eye the same view in order to get binocular vision of what you're looking at. Rckrone (talk) 15:28, 11 December 2009 (UTC)
- Surely the why-for is so that it is instantly clear to (most) people that the vision we are seeing on screen is (as if) 'through binoculars'. Binoculars definitely show just a single circle when used - though if you adjust the 'width' you can make it look more like a sideways-laid number 8 (infinity sign?) too. I'd imagine it's a simple short-cut by film-crews to make it clear what we're supposed to be seeing, and as others note, it takes away less of the view on screen than a circle would. 194.221.133.226 (talk) 15:32, 11 December 2009 (UTC)
- To answer the question of what this is called in cinematography, if it is done with "soft" edges, as it invariably is, it is called a "vignette" (and the technique is called "vignetting"). If the edges are hard it is simply called mask/masking. SpinningSpark 16:06, 11 December 2009 (UTC)
- The problem with the game/movie thing is that with real binoculars you can actually see depth, and know the difference between a binocular image and a spyglass image. In a game/movie, you cannot, and cannot know the difference otherwise. So you're already going to have to sacrifice the biggest "reality" aspect of binoculars for your game (seeing in 3-D) just by the nature of it (unless you are working on something a bit more cool than I am guessing). At some level, the amount of "reality" is somewhat arbitrary, given how much you are already abstracting. --Mr.98 (talk) 16:15, 11 December 2009 (UTC)
- You might try showing a movie clip of an accurate depiction; the only one I can recall is from The Missouri Breaks.—eric 17:08, 11 December 2009 (UTC)
- This problem is just as challenging as asking somebody to describe the "shape" of their field of view without binoculars. Nimur (talk) 17:21, 11 December 2009 (UTC)
- Well, I learned from this pseudo-educational host segment from MST3k that they're called "Gobos" or "Camera Masks". WP's articles don't seem to fully back this up. But Google shows me that "gobo" is at least sometimes used in this context. Google also seems to suggest "Binocular masks". APL (talk) 17:54, 11 December 2009 (UTC)
- As a compromise, when you switch to the binocular view, you could start with two separate circular images (arranged like a MasterCard logo, with the two images identical) that converge to a single circular field. The action should be sort of irregular, with some jerkiness and overshoot. This would mimic someone picking up binoculars and adjusting the interpupillary distance to their eyes, and I think it would clearly depict "binoculars" to the user. -- Coneslayer (talk) 18:57, 11 December 2009 (UTC)
What a touching lament "sadly, I don't own a pair to try" so close to Christmas.... Cuddlyable3 (talk) 20:58, 11 December 2009 (UTC)
- (No, no, no! Do NOT confuse Santa. I carefully wrote my Xmas list (in my best handwriting) and shoved it up the chimney already - what I want is a Kindle - I don't own binoculars because I neither want nor need binoculars! I have not been naughty this year - at all - ever. :-) SteveBaker (talk) 22:44, 11 December 2009 (UTC)
- Coneslayer's idea sounds good. I could also suggest that at the edges of the circle you blur and darken it a bit. You could even have the whole picture go out of and back into focus a couple of times, increasing the chance of missing something in the view. And don't forget to ray-trace the internal reflections off the lenses if you are looking near something very bright! And there would be a little bit of unsteady shaking. Graeme Bartlett (talk) 21:54, 11 December 2009 (UTC)
- I like the idea of doing a quick bit of faked "adjustment" - but maybe only the first time you use the binoculars in the game...it wouldn't be cool to do that every time they picked them up though! There is a fine line between "cool effect" and "bloody annoying"! SteveBaker (talk) 22:44, 11 December 2009 (UTC)
- Although if you are going for realism "bloody annoying" may be more realistic than the "cool effect". Ideally (i.e. for maximum realism) anytime the binoculars are used straight out of their case (i.e. from folded) the faked adjustment should happen. Also there should be a little bit of focusing delay when shifting views to objects at different distances - similar to what Graeme suggests. Reading about what you do (cool job by the way) simulating this usage delay could be fairly critical for some stuff. For example, if a police officer gets in the habit of keeping multiple people in view just by swinging their binoculars around, they are going to be unprepared for the need to constantly refocus when doing this in the real world. I speak from birdwatching experience, keeping multiple subjects at different distances "under surveillance" is quite challenging. Putting focus delay into the simulation would train people to consider carefully how many subjects they can watch at once, an annoying but important real world limitation. Apologies if you are way ahead of me on this one or I am getting too pedantic. 131.111.8.99 (talk) 01:12, 12 December 2009 (UTC)
- We have 'subject matter experts' who we consult about small details like this. It's always interesting to discover what features matter and what don't. For example - play one of the very latest combat games: Call of Duty: Modern Warfare 2 - as you're walking around in "first person" mode - look for your shadow. You'll be surprised to find that while every other character in the game casts a shadow - you don't! It may not be immediately obvious why this is important - but if you are hiding around the corner of a building - hoping that the bad guy doesn't know you're there - in the real world, your shadow can give you away...if you have a shadow! You'd like someone who is trying to do that to pay attention to where their shadow is falling - so CoD isn't a great training aid (for lots of reasons - not just this). It's actually surprisingly difficult to produce a good shadow in a first person game because the animation of your avatar is hard to get right when you're doing joystick/mouse-driven actions while the pre-scripted "run", "walk", "shoot" animations are playing...so they just don't bother! We don't have that luxury to just ignore things when they are difficult. SteveBaker (talk) 04:10, 12 December 2009 (UTC)
Need a reliable value for the sun's illumination under variable Earth conditions
Firstly, I am after approximate ranges for the sun's illumination on Earth and in orbit under various conditions (clear day, overcast, etc.) for the purpose of calibrating CG lights in a 3d computer application. The confusion has arisen as a result of trying to research a good value for an overcast day; I tried using Google for that and encountered wildly varying values with some surprising implications. I have already looked at the Wiki article on Sunlight for these values. The opening section of the article gives an (uncited) value for the sun's illumination of 100,000 lux at the Earth's surface. Now, this sounds all well and good and would match the default ranges for sun objects in the software package I am using. However, I then ran into, for example, these values on this page; the value given for 'direct sunlight' is indeed 100,000 lux, but it then describes 'full daylight' as 10,000 lux, and an overcast day as 1000 lux. This page gives the same values without specifying the 10,000 lux value for 'full daylight'. The implications of the first page are that, if direct sunlight is 100,000 lux, and 'full daylight' 10,000 lux, then the Earth's atmosphere absorbs or reflects 9/10ths of the sun's light on a clear day (!), and that an astronaut in orbit would be receiving 10 times the amount of light as someone on the Earth's surface; and that a cloudy overcast day provides around 1/10th or 1/100th the illumination of a sunny day (depending on which figures you look at), and 1/100th the illumination in orbit... All this surely sounds wrong to me. Firstly, I was under the impression that, as the Wiki article suggests, the 100,000 figure is roughly correct for Earth's surface on a clear day, although I am a bit puzzled by the figure of 1/100th that for an overcast day - that sounds a bit too dark. Similarly, I would have assumed that the amount of visible light absorbed or reflected by the atmosphere is comparatively slight (given, for example, Earth's average albedo of around 0.31, ie. 3/10ths), and that lighting conditions in orbit aren't too far off those at Earth's surface on a clear day - brighter, yes, but not that bright. Please help if you can; I seriously need some reliable values for these conditions, and with pages contradicting each other I don't know which values I'm supposed to follow. Thanks in advance for any helpful answers. LSmok3 Talk 17:14, 11 December 2009 (UTC)
- Insolation is usually measured in Watts per Square Meter. The solar constant is pretty clearly measured at about 1370 watts per square meter at the top of the atmosphere (reliable sources are cited in our article); depending on conditions, anywhere from ~ 100 to 1000 watts per square meter reach the Earth's surface; but since you need a luminosity and not an incident power (irradiance), you might want to look at lux and see the conversion procedure. Nimur (talk) 17:26, 11 December 2009 (UTC)
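- As a rough bridge between those units: multiplying irradiance by a luminous efficacy for daylight of about 100 lumens per watt (an assumed ballpark; published values for sunlight mostly fall in the ~90-120 lm/W range) reproduces the lux figures under discussion. A minimal Python sketch:
 LUMINOUS_EFFICACY_LM_PER_W = 100.0  # assumed ballpark for daylight
 def illuminance_lux(irradiance_w_m2):
     """Convert irradiance (W/m^2) to illuminance (lux = lm/m^2)."""
     return irradiance_w_m2 * LUMINOUS_EFFICACY_LM_PER_W
 print(illuminance_lux(1000))  # clear-day surface maximum -> ~100,000 lux
 print(illuminance_lux(100))   # heavily overcast day      -> ~10,000 lux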
- The albedo of cloud is between 0.5 and 0.8 (see albedo). So we'd expect the total amount of sunlight reflected back out on a cloudy day to be 50% to 80% of the total sunlight. All that isn't reflected is either absorbed by the cloud - or scattered around so that it ends up down here on the ground as ambient sky-light - so, discounting absorption by the cloud, you'd expect the ratio of light on a clear day to an overcast day to be between 2:1 and 4:1 - with multiple layers of cloud and some idea of what is being absorbed, I could easily believe the 1/10th of sunny day illumination. The 1/100th value seems more dubious...perhaps they are talking about the light coming from the direction of the sun itself - neglecting the scattered light...that could easily explain the 1/100th number. For patchy cloud conditions between those two extremes, it all depends on whether the sun happens to be behind a cloud or not from wherever you happen to be standing.
- Some of the confusion may arise from the fact that the sky isn't black. If you measure the light coming from the direction of the sun alone - you get a much lower value than if you average it over the entire sky. — Preceding unsigned comment added by SteveBaker (talk • contribs)
- Okay, thanks for the quick answers, but I'm afraid I'm still a bit lost. Firstly, the watts to lux conversion is a bit beyond me - I'm an artist, not a mathematician: the lux page directs me to the page for the luminosity function, which is required to make any such calculation, and that specifies variables I don't have, like the actual wavelength. Secondly, all I'm after are guideline ranges and averages for a few conditions - Earth orbit, clear day at surface (noon - sunrise/set), and fully overcast (noon - sunrise/set), from which I can approximate all I need. The point about ignoring cloud absorption seems a bit odd, as that is a determining factor in surface illumination based on cloud cover, ie. exactly what makes an overcast day, (and remember, I was only quoting the average albedo of the Earth). I also seem to have run into the same trouble here as I encountered on other pages, namely a lack of any clear definition of terms and contradictory figures. Helpfully, the lux page gives various values for different conditions, but makes a distinction between 'full daylight' and 'direct sunlight' (10-25k and 32-130k respectively) and gives the value of 1000 lux for overcast: without any actual definition, I would assume that 'full daylight' refers to the brightest Earth-surface conditions, and 'direct sunlight' to unfiltered, non-atmospheric, ie. orbital conditions. This matches the pages I had found which caused me the confusion in the first place, and suggests that the sun lights in the application I'm using default to outer-space conditions (which is a bit silly; and to quote from the documentation: "Intensity: The intensity of the sunlight. The color swatch to the right of the spinner opens the Color Selector to set the color of the light. Typical intensities in a clear sky are around 90,000 lux."). However, the daylight value links to the Wiki page on Daylight, which also provides guideline values; it doesn't use the term 'Direct sunlight', but gives similar values for 'bright sunlight' and 'brightest sunlight', and gives the value of 10,000 - 25,000 lux for an overcast day: 10-25 times higher than the lux article. So, what am I supposed to follow? LSmok3 Talk 18:46, 11 December 2009 (UTC)
- Among the problems in converting solar radiant flux into an equivalent "candela" is the difference between specular and diffuse lighting. "Candela" really only applies to an approximately point-source light - but sunlight, whether the day is cloudy or clear, is illuminating the ground diffusely - in other words, the entire sky "glows" and lights the object. So, trying to model the lighting as a single point-source (the sun) is obviously flawed - there is no equivalent brightness or luminosity for the sun which would result in equivalent lighting conditions. In computer graphics, this is usually dealt with by applying an ambient lighting condition - a "uniform" illumination from a particular direction. Alternately, you can model the sun as a point-source at a great distance, and then model the atmospheric diffusion - but that will be very computationally intense, and will yield about the same visual effect as an ambient lighting source. Nimur (talk) 19:58, 11 December 2009 (UTC)
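- To make that split concrete: the usual approximation is exactly two terms, a directional sun contribution that switches off when the disc is occluded plus a diffuse sky term that is always present. A toy Python sketch (the lux numbers are placeholders, not calibrated values):
 def scene_illuminance(direct_sun_lux, diffuse_sky_lux, sun_visible=True):
     """Two-light daylight model: point-source sun plus ambient sky dome."""
     return (direct_sun_lux if sun_visible else 0.0) + diffuse_sky_lux
 clear_noon = scene_illuminance(90000, 15000)                   # ~105,000 lux
 overcast = scene_illuminance(90000, 10000, sun_visible=False)  # ~10,000 lux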
- Yes, I am well aware of that issue. And no, I'm not looking for the candle power of the Sun. Indeed, in modeling a daylight system (including the IES Sun and Sky system I quoted the documentation for above), two lights are generally used, one to represent the contribution from the sun as a point source (ideally Photometric) light (although in recent years that's tended to be an area light purely to allow for the modelling of realistic shadows due to scale and diffusion), and another to represent diffuse scattered 'fill' from the sky. But that doesn't mean I don't need realistic values, especially given that the systems may mix photometric with non-physical lighting, and with global photometric controls the relationships between all lights in a scene are affected (for example, street lighting at dusk would feature a sun and sky system in combination with artificial lighting, so the relationships must be correct). (And in fact, the reason I was originally looking for realistic lux values for an overcast day - the whole thing - was to calibrate just such a fill to realistic proportions, bearing in mind that it may also be photometric; again, if the relationships aren't accurate, not only is it bad practice, but problems would also occur if, say, I was animating a shift from clear to overcast conditions.) Also, don't assume I am only interested in Earth surface figures; as I already said, I also need realistic values for space scenes. So are there no reliable values that actually agree for sunlight in lux? That seems a bit strange. It should be possible for just about anyone to get such figures by going outside under the given conditions armed with a light meter set to give values in lux and measuring at exposed ground - I'd do it myself but I don't own one, and can't help assuming those values must surely be well-established; for some reason my documentation suggests values that Wikipedia claims are burn-your-retina space lighting and the reason why astronauts wear mirror-visors, and no one seems able to agree on overcast values. LSmok3 Talk 20:50, 11 December 2009 (UTC)
- Okay, how about Reference luminous solar constant and solar luminance for illuminance calculations, (J. Solar Energy, 2005), or Determination of vertical daylight illuminance under non-overcast sky conditions, (Building & Environment 2009)? Table 2 in the first article specifies a lot of parameters, including a luminance of about 1.9×10^9 cd/m². For the very detailed cases you are considering, you may want to read the entire articles, since you are particularly clear that their assumptions differ from yours. There is an entire section devoted to the assumptions made and the impact these have. Nimur (talk) 22:20, 11 December 2009 (UTC)
Well, I'm afraid the problem with that is the principle of having to pay around $60 for the information, which as I say I would have thought would be pretty well-established. The whole point of Wiki is that information is free, and as it is, I don't even own a credit card or have a paypal account to buy it, especially without being sure it actually contains what I need. Having said that, I did find this link through Google to the same site, which suggests that the values quoted in my software are correct and the ones on Wiki are wrong. So never mind. LSmok3 Talk 07:43, 12 December 2009 (UTC)
trialkyloxonium salts and beta elimination
Most of these are unstable, right? My question is, what happens if you have no beta proton on the oxonium salt? It can't possibly eliminate via the Hofmann mechanism? Will it decompose into ether + carbocation anyway?
Also, can you undergo transalkylation with ethers and an alkyl halide via SN2?
Are carbamates prone to beta elimination? John Riemann Soong (talk) 22:52, 11 December 2009 (UTC)
- Triethyloxonium tetrafluoroborate is pretty stable. Graeme Bartlett (talk) 23:47, 11 December 2009 (UTC)
- How does it decompose in water? Does water act like a base? John Riemann Soong (talk) 02:01, 12 December 2009 (UTC)
- It would appear that water is acting as a Lewis base. If you do the electron pushing for this reaction:
- [(CH3CH2)3O]+BF4− + H2O → (CH3CH2)2O + CH3CH2OH + HBF4
- You can see pretty easily how an electron pair from water "grabs" an ethyl group from the trialkyloxonium and then loses a proton to the tetrafluoroborate anion. --Jayron32 05:28, 12 December 2009 (UTC)
Can tetraalkyl ammonium salts undergo SN2 substitution with a nucleophile instead of beta-elimination? An alkene product seems to be favoured for ammonium salts, but an alkane product for oxonium salts? John Riemann Soong (talk) 06:04, 12 December 2009 (UTC)
December 12
Hedgehog exercise
This BBC News article discusses an obese hedgehog.
To exercise him, the veterinarians have put the hedgehog in a bathtub to swim around as part of his weight-loss regimen. Is this standard procedure for exercising a hedgehog? I would have expected a "hamster-wheel" would be more common. Anyway, I know there's a few hedgehog experts on the desk, so I figured I'd solicit their input on this unusual exercise regimen. Nimur (talk) 00:39, 12 December 2009 (UTC)
- Domesticated hedgehog may have some useful information. There are also external links to follow, you may be able to find more by poking around some of those. --Jayron32 01:19, 12 December 2009 (UTC)
- A standard hamster/mouse wheel is unacceptable for a hedgehog. They often step into the gaps between the spokes and suffer injury (including broken toes). Instead, a hedgehog wheel is commonly referred to as a "bucket wheel" because it is like a shallow bucket placed on its side. Many people actually make their own from buckets. On average, a hedgehog will run about 3 miles each night. The bathtub regimen is not extremely common. Many hedgehogs are afraid of water. Those that like water will spend hours swimming around. Those that are scared of it do nothing except fight to get out of the water (I've got some pretty good claw marks as scared hedgehogs have clawed right up my arm to my shoulder). Some hedgehogs are scared of water and won't run in a wheel. For them, they need exploring activities. It is common to hide their food in a lot of hard-to-reach places. Then, they have to run around and try to figure out how to get their food. All in all - there's no such thing as a single hedgehog personality. Like all other animals, each has its own personality. -- kainaw™ 01:36, 12 December 2009 (UTC)
- Ah. The hazard of getting feet trapped in wheels seems like a motive for finding alternative exercise ideas. From the news story, there are several other news articles linked about obese hedgehogs, [14], [15]. Is this a common problem, or is it just getting disproportionate media attention because of the "weird" factor? Nimur (talk) 01:47, 12 December 2009 (UTC)
- In captivity, hedgehogs regularly become obese. Hypertension, diabetes, and hypercholesterolemia are common. This leads to heart attacks, stroke, liver failure, and high rates of cancer. The issue isn't simply captivity. It is that the hedgehogs get plenty of food (most of it rather unhealthy food) and very little exercise. In the wild, hedgehogs cover many miles searching for food every night. For comparison, imagine if your main purpose each day was to run a marathon to get a good meal. Sure - you have all day to run the marathon, but the meal won't do much to offset the calories burned getting to it. There is another issue - winter. Hedgehogs are said to hibernate. They don't truly hibernate, but they do spend most of winter holed up and waiting for the weather to get warmer. To last a long time, they need to fatten up. So, this is the time of year that European hedgehogs tend to get rather plump. -- kainaw™ 02:24, 12 December 2009 (UTC)
nitromethane
Is the pKa given for this substance the value of nitromethane as an acid, or of its conjugate acid (nitromethane acting as a base)? I really can't see an acidic proton in nitromethane ... unless the nitromethide carbanion is really that stable? John Riemann Soong (talk) 02:19, 12 December 2009 (UTC)
- It is the pKa of losing a C-H hydrogen. If you read the article on nitromethane, the acidity of the compound and its uses specifically because of this acidity are discussed in the Uses section. The Nitroaldol reaction makes use of this acidity of nitromethane. --Jayron32 05:23, 12 December 2009 (UTC)
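- Quantitatively, the fraction deprotonated at a given pH follows from the Henderson–Hasselbalch relation. A small Python sketch (nitromethane's aqueous pKa of about 10.2 is the commonly quoted value, taken here as an assumption):
 def fraction_deprotonated(pka, ph):
     """Henderson-Hasselbalch: [A-] / ([A-] + [HA]) = 1 / (1 + 10**(pKa - pH))."""
     return 1.0 / (1.0 + 10 ** (pka - ph))
 print(fraction_deprotonated(10.2, 7.0))   # ~6e-4 deprotonated at pH 7
 print(fraction_deprotonated(10.2, 12.0))  # ~0.98 deprotonated in strong base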
How was the international prototype for the metre made?
How was the 1889 version of international prototype for the metre made? —Preceding unsigned comment added by 173.49.9.184 (talk) 04:06, 12 December 2009 (UTC)
"Bleeding like a stuck pig"
Do hogs bleed more easily than other animals? Or is this phrase a vague allusion to something literary or historic? 24.93.116.128 (talk) 05:14, 12 December 2009 (UTC)
- It refers to the practice of Exsanguination in the slaughter of an animal. When you want to kill a pig, it is common practice to drain the blood as fast as possible from the meat, the article on Slaughterhouse describes the process in some detail. So when you "bleed like a stuck pig", it means you are bleeding like you are being drained of your blood in a slaughterhouse. --Jayron32 05:20, 12 December 2009 (UTC)
Enzymes
I was under the impression that enzymes simply increase the rate of a reaction (my college bio textbook says this). However, denaturing certain enzymes can result in certain products not being produced. But if they only increase the rate, shouldn't the products be formed without the enzyme, but only at a slower rate? So I'm a bit confused as to the nature of enzymes. ScienceApe (talk) 08:32, 12 December 2009 (UTC)
- Yes, sometimes the uncatalysed reaction is just REALLY slow, so slow that in a biological system it basically doesn't occur. And also, some enzymes couple two reactions - one thermodynamically favourable, one unfavourable - in essence driving the second reaction against the direction it would proceed without an enzyme, and using the first reaction to provide the "fuel" to do this. Aaadddaaammm (talk) 09:33, 12 December 2009 (UTC)
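- "REALLY slow" can be made quantitative with the Arrhenius equation: lowering the activation energy multiplies the rate by exp(ΔEa/RT), all else being equal. A Python sketch with made-up but typical numbers (the pre-exponential factor is assumed unchanged):
 import math
 R = 8.314  # gas constant, J/(mol*K)
 def rate_speedup(delta_ea_j_per_mol, temp_k=310.0):
     """Arrhenius rate enhancement from lowering the activation energy."""
     return math.exp(delta_ea_j_per_mol / (R * temp_k))
 print(rate_speedup(30000))  # a 30 kJ/mol drop -> ~10^5-fold faster at 37 C
 print(rate_speedup(60000))  # a 60 kJ/mol drop -> ~10^10-fold faster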
Russian rocket failure
Is this for real? http://www.nzherald.co.nz/world/news/video.cfm?c_id=2&gal_cid=2&gallery_id=108562 Aaadddaaammm (talk) 09:38, 12 December 2009 (UTC)
- Yes. The spiral was seen by hundreds of people in northern Europe. The most likely explanation is a Russian ICBM test where the rocket went out of control. spaceweather.com has a write-up on it, and the rocket plume of the boost phase is visible, and normal, in at least one photo of the spiral. --121.127.200.51 (talk) 10:10, 12 December 2009 (UTC)
Amerigo and the year 1497
According to Amerigo Vespucci, "A letter published in 1504 purports to be an account by Vespucci, written to Soderini, of a lengthy visit to the New World, leaving Spain in May 1497 and returning in October 1498. However, modern scholars have doubted that this voyage took place, and consider this letter a forgery". But the 1562 map says (upper-center box): "This fourth part of the world remained unknown to all geographers until the year 1497, at which time it was discovered by Americus Vespucius..." ("incognita permansit, quo tempore iussu Regis Castellae ab Americo Vespuccio inventa est..."). So there is no forgery? And where is Columbus and 1492? Brand[t] 10:06, 12 December 2009 (UTC)