Wikipedia:Reference desk/Science

Welcome to the science section of the Wikipedia reference desk.

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

April 7

How much extreme weather damage can the world economy take?

The international reinsurance company Munich Re says that the amount of damage from extreme weather tripled from 1980 to 2010, and they expect that trend to continue.[1] Assuming that remains true, (1) how long until the damage from extreme weather causes the world economy to collapse? And, (2) assuming global wind power growth continues along its current trend, will it prevent such a collapse in time? 71.215.74.243 (talk) 02:52, 7 April 2012 (UTC)[reply]

At some point neither private companies nor governments will be able to insure people living in high-risk areas, like below sea level in a place regularly hit by hurricanes. This will result in people no longer being able to get mortgages to build there, and will thus limit damages from future storms, restoring the balance. StuRat (talk) 03:09, 7 April 2012 (UTC)[reply]
So do you think the world economy can absorb an infinite amount of storm damage? 71.215.74.243 (talk) 11:07, 7 April 2012 (UTC)[reply]
No, but an infinite amount of storm damage isn't possible, so that's pointless to worry about. You seem to be making a common extrapolation error. Whenever anything is increasing, all you have to do is assume it will forever increase at the same rate to conclude that it will eventually become infinite. The assumption that anything which is currently increasing must therefore increase forever is fundamentally flawed. StuRat (talk) 17:10, 7 April 2012 (UTC)[reply]
I agree. Let me rephrase my question: If the amount of economic damage from extreme weather continued to triple every 30 years, do you think the world economy would still remain functional indefinitely? Or are you saying that the damage must eventually level off before it causes economic collapse? 71.215.74.243 (talk) 22:20, 7 April 2012 (UTC)[reply]
Yes, it will level off or even reduce. For example, now that global warming is fairly well established, people will eventually stop building on eroding coastlines. StuRat (talk) 22:33, 7 April 2012 (UTC)[reply]
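To make the extrapolation arithmetic above concrete, here is a minimal Python sketch that compounds a "triples every 30 years" damage trend against a world economy assumed to grow 3% per year. The baseline damage figure and GDP are invented placeholders, not Munich Re's numbers.

```python
# Illustrative extrapolation only; all starting figures are placeholders.
damage = 50e9                   # hypothetical annual extreme-weather damage in 2010, USD
gdp = 70e12                     # rough world GDP in 2010, USD
damage_growth = 3 ** (1 / 30)   # annual factor equivalent to tripling every 30 years
gdp_growth = 1.03               # assumed 3% annual real GDP growth

for year in range(2010, 2131, 30):
    print(f"{year}: damage ~ ${damage / 1e9:,.0f}B, {damage / gdp:.2%} of world GDP")
    damage *= damage_growth ** 30
    gdp *= gdp_growth ** 30
```

Under these assumptions the damage share of GDP rises by only about 24% per 30-year period (3 / 1.03^30 ≈ 1.24), so the outcome hinges on whether the tripling outpaces economic growth, not on the tripling itself.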
The premise is false. Data from impartial sources [2] show that there's no such increasing trend.
Also the global wind power cumulative capacity can't grow forever either, since the total amount of available wind power on Earth is limited. Anonymous.translator (talk) 03:18, 7 April 2012 (UTC)[reply]
You don't think that graph you linked to shows a trend? Wind power can't grow forever but there is enough for dozens of times expected human needs even assuming everyone started using US per-capita power. 71.215.74.243 (talk) 03:31, 7 April 2012 (UTC)[reply]
Looking at the dotted line, no, I don't see a clear increase. For most years the damage is roughly $10 billion, while in the occasional "disastrous" years the damages spike up. Anonymous.translator (talk) 03:47, 7 April 2012 (UTC)[reply]
Do you think the average of the dotted line in, for instance, the first, middle, and most recent third of the graph comprises a trend? 71.215.74.243 (talk) 11:07, 7 April 2012 (UTC)[reply]
I'm pretty sure that as a publicly traded company whose core business is insurance, Munich Re has a legally mandated duty to be honest about the past claims they have honored and the future claims they expect. As a global company, I would expect they actually have a better picture of climate related losses than data that solely comes from the US. That said, I seem to recall that Munich Re also blames most of the trend on increased human development in vulnerable areas, and relatively little of it on actual changes in weather conditions. Dragons flight (talk) 05:11, 7 April 2012 (UTC)[reply]
One thing I left unstated is that most of the damage is due to people building in unsafe areas. Once we stop that, the problem is largely solved. StuRat (talk) 04:27, 7 April 2012 (UTC)[reply]
Yes. People need to remember such things as the fact that flood plains are for floods. HiLo48 (talk) 05:31, 7 April 2012 (UTC)[reply]
Indeed, Munich Re used to say that sort of thing, but not in the past several years. Their 2010 report linked above, for example, says, “In Germany, extreme precipitation resulting in floods is becoming increasingly common. This affects not only people living on rivers: there are more and more cases of heavy rain and flash floods. Anyone may be affected.” 71.215.74.243 (talk) 11:07, 7 April 2012 (UTC)[reply]
That's a rather meaningless statement, rather like saying that anybody can be struck by lightning, so you might as well play golf on top of a hill in a thunderstorm. While anyone can be affected by some type of bad weather, by no means is the risk equal in all locations, far from it. StuRat (talk) 16:16, 7 April 2012 (UTC)[reply]
One line on that graph is normalized for inflation but not for population growth or overall real estate value. Wnt (talk) 15:44, 7 April 2012 (UTC)[reply]

Not much, and the weather doesn't have to get a lot more extreme overall. According to a recent BBC Horizon documentary, we're now seeing significantly more fluctuations in the weather, while the average has only shifted slightly due to global warming. The latter is driving the former. A prediction of climate models is that cyclones can hit places where they have historically never occurred, e.g. Dubai seems to be at risk of being washed into the ocean by a category 5 hurricane.

A small perturbation is enough to make the world economy crash, because the economy has been mismanaged anyway. We were about to hit a global depression in 2008, simply due to mismanagement. So, it's a bit like flying a plane that could theoretically fly through the thunderstorm, but with incompetent pilots flying it, it's already at risk of crashing due to pilot error when the weather is fine. Obviously, you then don't want to fly a plane with these pilots through a thunderstorm.

What I foresee happening within a few decades is droughts and floods causing India and China to have to import huge amounts of rice and grain for a few years in a row. They have enough financial reserves to do so, but that would drive food prices up by so much that other countries will close their markets to prevent all their rice and grain from being exported to India and China. That will then lead to the collapse of free trade agreements and ultimately to the collapse of the world economy. Count Iblis (talk) 17:42, 7 April 2012 (UTC)[reply]

In that case, what they need is a system to collect flood water and store it until the next drought. StuRat (talk) 18:25, 7 April 2012 (UTC)[reply]
So simple, yet so difficult! How should this be accomplished on the massive space and time scales necessary to help? Shall we start a lobby to subsidize rain barrels? SemanticMantis (talk) 19:21, 7 April 2012 (UTC)[reply]
In the case of underground aquifers, it may be as simple as redirecting rivers to refill them rather than draining to the sea. StuRat (talk) 19:47, 7 April 2012 (UTC)[reply]

Going back to the original question, I don't see what wind power has to do with any of this. Additionally, the correlation between severe weather events and global warming is unknown; indeed, we should see a reduction in Atlantic hurricanes in the next decade or two due to a downturn in the West African rainfall cycle. -RunningOnBrains(talk) 20:33, 7 April 2012 (UTC)[reply]

I've read many times that it's not simply the number of Atlantic hurricanes that matters; supposedly global warming is predicted to increase their average intensity -- fewer less intense ones combined with more very intense ones = more damage. Is that not right?
Also, the problem is not solved once we stop building in unsafe areas (although obviously that is important too). An amazing percentage of the world's population has already built in unsafe areas -- there are lots of hugely populated coastal cities (and for good reason since coastal cities can have ports).
I think the questioner's point about wind power was simply that, to the extent that wind power supplants CO2-releasing power generation, we can slow down or stop the growth of CO2 levels in the atmosphere. S/he is asking whether present trends in the growth of wind power usage are fast enough to do that before we reach a tipping point. I'm pessimistic about it, but it's not all-or-nothing -- increased use of wind power might delay or minimize the extreme-weather effects of CO2 even if not stopping them entirely. Duoduoduo (talk) 21:06, 7 April 2012 (UTC)[reply]
It could be true: certainly increasing the sea-surface temperatures in the absence of all external factors will increase the amount of latent heat available for the hurricanes to tap into, but I've seen studies that suggest that wind shear will increase dramatically over the tropical Atlantic with a warming planet, which would lead to less intense hurricanes. We simply don't have enough information to know for sure. One thing that is known, however, is that increasing sea levels will make residents increasingly vulnerable to the hurricanes that do occur. (I'm not talking about Al Gore's nonsense about a 30 m increase in sea levels: even a conservative estimate of 20 centimetres (7.9 in) in the next 100 years could have a devastating long-term impact.) -RunningOnBrains(talk) 21:22, 7 April 2012 (UTC)[reply]
As far as wind power goes, even if the world managed to stop burning fossil fuels in the next 20 years (an impossible goal, as China is clearly not going to try to stop burning coal any time soon), CO2 in the atmosphere is far from its steady state. The residence time of CO2 in the atmosphere is hundreds of years, so it would likely take thousands of years (without active intervention) to return atmospheric CO2 to its current level, never mind pre-industrial levels.-RunningOnBrains(talk) 21:30, 7 April 2012 (UTC)[reply]
We not only need to stop building in unsafe areas, but move away from them. Ports can remain (and be hardened against foul weather), but the rest of the city should be moved. This will happen slowly, as storms destroy buildings in unsafe areas and the foolish people who built there are unable to get insurance and mortgages to rebuild there again.
Also, there's a natural life of a building, after which it would be rebuilt (except for a few historic sites). When the buildings in unsafe areas reach that age, nobody will be willing to risk their money to rebuild there. StuRat (talk) 21:19, 7 April 2012 (UTC)[reply]
The idea of moving away from the unsafe areas is interesting -- the Maldives, I've read, are shopping around in Australia for land they can eventually move their entire country to; and I read very recently that one of the South Pacific island nations is also shopping around for land. But these are tiny populations. It would take an extremely long time, with an extreme amount of social disruption and a massive amount of highly expensive infrastructure investment, to move everyone in say Mumbai farther inland. And then transportation costs to the port would be higher than now. So it's not going to happen to any significant extent, unless gradually rising sea levels concentrate people's minds to organize it. Duoduoduo (talk) 21:37, 7 April 2012 (UTC)[reply]
There's no need to move the entire city at once. Start with the coastal area, then gradually move inland. This could all happen over a century. For a case where it is happening rapidly, look at Christchurch, New Zealand, which has the misfortune of being on an active fault which destroyed the downtown area recently. StuRat (talk) 21:41, 7 April 2012 (UTC)[reply]
Moving away from unsafe areas would tend to limit property damages, but would have little effect on the global economy. As for the original question about weather and economics, that comes down to fuel for both people and machines. When the weather sufficiently disrupts either supply, the economy will suffer. Wind power may slow the consumption of fossil fuel but will not replace it without some major technology advancements. Without that fuel (or a cheap replacement) the global economy is doomed. MadCowpoke (talk) 04:37, 11 April 2012 (UTC)[reply]
Why would limiting property damage have little effect on the world economy ? StuRat (talk) 22:16, 11 April 2012 (UTC)[reply]
The economy will either shrink or grow. It is unlikely to remain flat unless population remains flat (also unlikely). If the desire is to increase the economy, say in the form of GDP, then demand has to increase to a new sustained level. I'll use the Christchurch, New Zealand quake and some extremely simplified parameters as an example. Estimated damages: $25B; estimated GDP: $157B; estimated lifespan of the replaced property: 10 years. The one-year increase in GDP is about 15%; over the product lifespan, however, it is only about 1.5%. If I wear out my car every 10 years, or a flood destroys it every 10, it makes no difference. MadCowpoke (talk) 06:04, 12 April 2012 (UTC)[reply]
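For what it's worth, the Christchurch arithmetic above can be reproduced directly; the damages, GDP and replacement lifespan are the poster's rough estimates, and the exact ratios come out a shade above the quoted 15% and 1.5%.

```python
damage = 25e9      # estimated quake damages, USD (poster's figure)
gdp = 157e9        # estimated GDP, USD (poster's figure)
lifespan = 10      # assumed lifespan of the replaced property, years

print(f"One-year rebuilding demand: {damage / gdp:.1%} of GDP")               # ~15.9%
print(f"Averaged over the lifespan: {damage / lifespan / gdp:.1%} per year")  # ~1.6%
```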

Male sexual activity and cognitive performance

How close have any studies come to establishing or disestablishing an effect of sexual activity on cognitive performance in men? Have there at least been correlational studies that controlled for the confounding fetal-androgen effect, or lab studies using the same combination of hormones that are released in orgasm? NeonMerlin 10:18, 7 April 2012 (UTC)[reply]

What kind of cognitive performance? PMID 19105078 is a recent review, and PMID 16490297 is an example study controlling for fetal androgen exposure. PMID 19250266 includes the study of prolactin, which is the only hormone conclusively shown to be released during male orgasm. 71.215.74.243 (talk) 11:38, 7 April 2012 (UTC)[reply]

Physics LC circuit

The natural frequency of an LC circuit is 1 kHz. How do we reduce it to 0.5 kHz, using the equation natural frequency = 1/(2 π sqrt{LC})? I think just doubling T (= 2 π sqrt{LC}) is not the correct answer? Thank you in advance!--Atacamadesert12 (talk) 17:40, 7 April 2012 (UTC) — Preceding unsigned comment added by Atacamadesert12 (talkcontribs) 17:39, 7 April 2012 (UTC)[reply]

If you want the frequency to be halved then 1/(2 π sqrt{LC}) needs to be halved, i.e. LC needs to be four times as large. Dmcq (talk) 19:41, 7 April 2012 (UTC)[reply]
Or you could increase the value of 2π -RunningOnBrains(talk) 20:21, 7 April 2012 (UTC)[reply]
Double it, to be precise, so π ≈ 6.283185307179586476925286766. Whoop whoop pull up Bitching Betty | Averted crashes 20:26, 7 April 2012 (UTC)[reply]
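To spell the answer out numerically, here is a short Python sketch of f = 1/(2π sqrt{LC}); the component values are arbitrary, chosen only so that the starting frequency comes out near 1 kHz.

```python
from math import pi, sqrt

def resonant_frequency(L, C):
    """Natural frequency of an ideal LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * pi * sqrt(L * C))

L, C = 25.3e-3, 1e-6                 # ~25.3 mH and 1 uF give roughly 1 kHz
f1 = resonant_frequency(L, C)
f2 = resonant_frequency(4 * L, C)    # quadrupling the L*C product halves f
print(f"{f1:.1f} Hz -> {f2:.1f} Hz")
```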

Electronics on board Apollo 11 spacecraft

What were the specifications of the Apollo 11 spacecraft's electronics? By this, I mean the motherboard, memory, processor, etc. Also, how differently would the spacecraft have been designed if it were designed today with modern electronics?--99.179.20.157 (talk) 18:40, 7 April 2012 (UTC)[reply]

Apollo Guidance Computer is a good start. Almost everything would be different if implemented today; not least you wouldn't have a room full of ladies hand-knitting the program. -- Finlay McWalterTalk 18:43, 7 April 2012 (UTC)[reply]
It was the memory core that got hand-knitted, not the program.--Aspro (talk) 19:02, 7 April 2012 (UTC)[reply]
Maybe you're right: Core rope memory.--Aspro (talk) 19:10, 7 April 2012 (UTC)[reply]
Great link, thanks! SemanticMantis (talk) 19:22, 7 April 2012 (UTC)[reply]
(ec)I'm not at all sure that the motherboard concept even existed at the time. Please clarify whether you are interested only in the on-board computers or all of the electronics in general. Most "electronics" don't contain motherboards, processors or memory, those components are specific to digital computers. Roger (talk) 18:49, 7 April 2012 (UTC)[reply]
A good resource for finding your way around more recent spaceflight computer systems is Category:Radiation-hardened microprocessors, which lists a bunch of devices that are, or have been, used in spacecraft. -- Finlay McWalterTalk 18:55, 7 April 2012 (UTC)[reply]
You should also remember that the onboard computer systems were never intended to be general purpose computers; they were calculators and autopilot controllers designed specifically for the single purpose of automating certain timing operations for the manned flights to the moon. These computers have more in common with a cockpit instrument than with modern general purpose personal computers. For example, there is no "screen" on the infamous Apollo Guidance Computer; only a few cryptic numerical status indicators and some warning lights. However, if you compare this to, say, a RADAR altimeter instrument in terrestrial aircraft of that era, you see that AGC actually provides much more information and control, including automatic fly by wire control of some spacecraft and propulsion systems. Nimur (talk) 19:48, 7 April 2012 (UTC)[reply]

Using tachyons to blast something out of a black hole

Would it be possible to "rescue" people or objects from beyond the event horizon of a black hole by hitting them with a super-high energy beam of tachyons at just the right angle? Whoop whoop pull up Bitching Betty | Averted crashes 20:37, 7 April 2012 (UTC)[reply]

I take it you mean hypothetically. Well, hypothetically one might have better luck using a transporter. How do you think these questions up?--Aspro (talk) 20:46, 7 April 2012 (UTC)[reply]
Off-topic discussion
The following discussion has been closed. Please do not modify it.

AAAAAAAAAAAARGH!!!!! Whoop whoop pull up Bitching Betty | Averted crashes 20:55, 7 April 2012 (UTC)[reply]

So...is that the sound of you trying the transporter idea? -RunningOnBrains(talk) 21:13, 7 April 2012 (UTC)[reply]
Maybe the transporter reassembled his molecules with those of a fly, as in The Fly (1986 film)? Are hybrid flies allowed to hold a WP account?--Aspro (talk) 21:25, 7 April 2012 (UTC)[reply]
BE SERIOUS!!!!Whoop whoop pull up Bitching Betty | Averted crashes 21:28, 7 April 2012 (UTC)[reply]
Verily - but you first. --Aspro (talk) 21:36, 7 April 2012 (UTC)[reply]
Some chill pills for Whoop whoop.

I couldn't even begin to approach the highly theoretical math involved, but my understanding is that tachyons can't interact with normal matter, as this would create paradoxes. -RunningOnBrains(talk) 21:33, 7 April 2012 (UTC)[reply]

The serious answer is "no, because tachyons do not exist as far as we know, and even if they do exist, we can't know anything about them, and even if we did know all about them, we couldn't manipulate them, and even if we could manipulate them, we couldn't use them to manipulate ordinary matter." 71.215.74.243 (talk) 22:42, 7 April 2012 (UTC)[reply]
Can you pull something out of a black hole? E.g., what if it was a microscopic black hole, or maybe a one-pound black hole -- could you stick part of a sturdy object partway in it and then pull it out? Duoduoduo (talk) 22:56, 7 April 2012 (UTC)[reply]
No. As summarized in the previous black hole thread, once within the event horizon, conventional "spacewise" directions become "timewise" -- all future paths move to the singularity; all paths back out require you to move backwards in time. Additionally, it only confuses matters to think about "an object" that stretches into a black hole. That whatever-it-is is in fact a large number of independent subatomic particles bound together by various forces, but those forces are overcome by the black hole once the particles pass within. — Lomn 23:07, 7 April 2012 (UTC)[reply]
How about creating a wormhole inside the event horizon? Plasmic Physics (talk) 23:58, 7 April 2012 (UTC)[reply]
My understanding is that that would just create a second entry and a second event horizon for the black hole. Whoop whoop pull up Bitching Betty | Averted crashes 11:35, 8 April 2012 (UTC)[reply]
Wormholes fall into the same category as tachyons - you can formulate them mathematically, but there is no evidence that they actually exist or can exist in the real universe, and if they do, there are all sorts of unanswered questions about how they would actually work. --Tango (talk) 14:40, 8 April 2012 (UTC)[reply]
It shouldn't if you face towards the future when you create the wormhole. Plasmic Physics (talk) 13:42, 8 April 2012 (UTC)[reply]
You mean like Jodie Foster flying through a wormhole in Contact (film), or was that just another cinematographic illusion?--Aspro (talk) 14:53, 8 April 2012 (UTC)[reply]
New theories are constantly being developed as to how to stabilise a wormhole, including using the negative pressure exerted by the Casimir effect. Plasmic Physics (talk) 03:06, 9 April 2012 (UTC)[reply]

Radicals and VSEPR

What geometries does VSEPR theory predict for radicals? For instance, does the methyl radical have a trigonal planar geometry or a trigonal pyramidal geometry? Whoop whoop pull up Bitching Betty | Averted crashes 21:26, 7 April 2012 (UTC)[reply]

Lone electrons are treated just like non-bonding electron pairs. The methyl radical is trigonal pyramidal. Plasmic Physics (talk) 22:55, 7 April 2012 (UTC)[reply]
Thank you. Whoop whoop pull up Bitching Betty | Averted crashes 02:17, 8 April 2012 (UTC)[reply]
You can "treat it" however you like in a handwaving method that is solely an empirical analysis, but if you do so, you'll get the wrong answer:) Methyl radical is sp2 hybridized—trigonal–planar geometry with respect to the three C–H bonds and the lone electron in a p orbital. The general trend (setting aside effects of electronegativity and resonance of the substituents) is that a non-bonded electron is p not spx; one rationalization is that doing so allows a greater filling of s-like atomic orbitals (as usual, better to have vacancy in the higher-energy p levels). DMacks (talk) 14:32, 8 April 2012 (UTC)[reply]
Modelling software predicts a tetrahedral, trigonal pyramidal structure for methyl. The bond angles are greater than in methane, but less than 120 degrees. One s and two p AOs of carbon are degenerate, mixing with the hydrogen s AOs. One p AO of carbon is not degenerate, and is the (non-bonding) HOMO. To achieve a stable electrostatic equilibrium position, the valence MOs are distributed in a tetragonal fashion. For methane all the p AOs are degenerate with one s AO to form four degenerate MOs with the hydrogen AOs. Plasmic Physics (talk) 02:15, 9 April 2012 (UTC)[reply]
Experimentally by numerous types of analysis, methyl-radical is planar, and published modelling by many methods gives a slight energetic cost to the C deforming out of the plane (i.e., the model does not pre-suppose and enforce the planarity to simplify the math, but rather actually tests and finds an energy minimum at the planar form). DMacks (talk) 15:48, 9 April 2012 (UTC)[reply]
"One s and two p AOs of carbon are degenerate" is a severe mistake. The s and p are not degenerate with each other, but rather the p are all degenerate among themselves. They mix and the resulting sp2 hybrid orbitals might be degenerate combinations of the (non-degenerate) s and p components. DMacks (talk) 15:51, 9 April 2012 (UTC)[reply]
I was describing their state after mixing. Can you give a source stating that the bond angles are exactly 120 degrees, or a similar statement? Plasmic Physics (talk) 23:32, 9 April 2012 (UTC)[reply]
DOI:10.1021/ja00476a054 is a good lead ref for methyl radical being planar. DMacks (talk) 01:35, 10 April 2012 (UTC)[reply]
That article uses weasel words of the third category as classified by Wikipedia, such as essentially planar in quotation marks, or quasi-planar; it does not have internal consistency on that information. Plasmic Physics (talk) 05:09, 10 April 2012 (UTC)[reply]
As I said, it's a lead ref, so you can also follow its citation trail to see the strength (or not) of the sources they use when they say quite plainly in the second paragraph, with 4 footnotes, "all indicate a planar molecule." DMacks (talk) 13:55, 10 April 2012 (UTC)[reply]


April 8

Illumination in lumen additive or parallel?

If several LED light emitters are fitted within the same "bulb", will their light intensity add up, as in total_lumen = lumen_1 + lumen_2 + ... + lumen_n? Or will a light source made up of many parallel light emitters never be equal to a single (incandescent) light source of x lumens? That is, will parallel photons not add up into fewer photons with the same energy but larger amplitude, because phase synchronization will not happen? Electron9 (talk) 00:58, 8 April 2012 (UTC)[reply]

I believe it's additive. However, our perception is not. That is, twice as many lumens doesn't seem "twice as bright". When lighting a room, it's helpful to have lights in each corner, to avoid any dark corners (this is especially true with dark walls, like wood paneling, where little light reflects into the corners). StuRat (talk) 01:15, 8 April 2012 (UTC)[reply]
Right. Photons do not join together or combine into fewer photons of different amplitude or frequency. 71.215.74.243 (talk) 01:23, 8 April 2012 (UTC)[reply]
Thus the whole idea about replacing incandescent light bulbs with multiple emitters is fundamentally flawed because it just won't be the same ? Electron9 (talk) 01:49, 8 April 2012 (UTC)[reply]
No, that's not at all what's being said. For the same total number of lumens, and same quality of light, the room will generally be perceived as being better lit if it has multiple, lower-lumen lights in multiple locations, instead of one bright light in one location. But if you wanted the more poorly lit look of a single light for some reason, you could just put the multiple lights in essentially the same location. Red Act (talk) 03:29, 8 April 2012 (UTC)[reply]
Agreed. Also, having the light sources spread out will be perceived as more light. StuRat (talk) 03:41, 8 April 2012 (UTC)[reply]
Won't it be much light but no real intensity anywhere? Electron9 (talk) 14:37, 8 April 2012 (UTC)[reply]
A room lit uniformly (as if the entire ceiling glowed, or the ceiling and walls glowed) might not be perceived as "better lit", since the lighting effect would be very "flat" and shapes of solid objects would not be as well defined. It might seem quite blah and boring. Edison (talk) 17:34, 8 April 2012 (UTC)[reply]
Indeed; this is one of the efficiency advantages of lengthy fluorescent bulbs (which I can't stand at 50 Hz, for what it's worth.) 71.215.74.243 (talk) 23:14, 8 April 2012 (UTC)[reply]
You mean you don't enjoy experiencing epileptic seizures ? StuRat (talk) 06:26, 10 April 2012 (UTC) [reply]
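Returning to the original question in this thread: luminous flux from separate, incoherent emitters simply adds, while perceived brightness grows much more slowly. The sketch below uses Stevens' power law with an exponent of roughly 0.33 as a crude model of brightness perception; the exact exponent depends on viewing conditions and is only an assumption here.

```python
def total_lumens(emitters):
    # Luminous flux from independent, incoherent sources adds linearly.
    return sum(emitters)

def perceived_brightness(lumens, exponent=0.33):
    # Crude Stevens' power-law model; the exponent is an approximation.
    return lumens ** exponent

leds = [100] * 8                  # eight 100 lm emitters in one "bulb"
combined = total_lumens(leds)     # 800 lm, same flux as a single 800 lm source
ratio = perceived_brightness(combined) / perceived_brightness(100)
print(combined, "lm total; looks roughly", round(ratio, 1), "times as bright as one emitter")
```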

Strange zoo bird

Any idea what this bird is? I've gone through all the birds on its zoo's list and none seem to be it. 71.215.74.243 (talk) 01:05, 8 April 2012 (UTC)[reply]

I think it may be the Chestnut-breasted Malkoha - which is on the list. AndyTheGrump (talk) 01:23, 8 April 2012 (UTC)[reply]
More on it here - with another video. Surprisingly, it is one of the Cuculidae - the Cuckoo family. Common in Southeast Asia. AndyTheGrump (talk) 01:32, 8 April 2012 (UTC)[reply]
No doubt! Thanks, I don't know how I missed it from the accurate description linked from the list. 71.215.74.243 (talk) 01:36, 8 April 2012 (UTC)[reply]
Cool bird! "Unlike many cuckoos, it builds its nest and raises its own young." Marking as resolved. SemanticMantis (talk) 01:57, 8 April 2012 (UTC)[reply]
Resolved

Octane rating of jet fuel

What is the octane rating of jet fuel? Whoop whoop pull up Bitching Betty | Averted crashes 02:30, 8 April 2012 (UTC)[reply]

Jet fuel is essentially kerosene (C8 to C16 molecules), not gasoline (C4 to C12 molecules). Octane rating is irrelevant to jet fuel, since predetonation is not a concern in a gas turbine. However, if it was measured it would be considered to be very high, since octane rating is a measure of fuel's resistance to predetonation, not its volatility. Acroterion (talk) 02:38, 8 April 2012 (UTC)[reply]
Thanx! Whoop whoop pull up Bitching Betty | Averted crashes 11:30, 8 April 2012 (UTC)[reply]

Do different chicken breeds have different tastes?

I'm not talking about their eggs, whose taste apparently depends not on their shell color but on the chicken feed. 66.108.223.179 (talk) 02:32, 8 April 2012 (UTC)[reply]

If so, it must not be very noticeable, or I'd expect various brands to claim they have better tasting breeds. StuRat (talk) 03:35, 8 April 2012 (UTC)[reply]
This document suggests that the slower growth rates of traditional breeds gives a better flavour. Alansplodge (talk) 01:37, 9 April 2012 (UTC)[reply]
I've eaten many different breeds and never noticed a difference. Older chickens are tougher and have more flavor, though. --Sean 20:08, 9 April 2012 (UTC)[reply]
Anecdotally, I can say that they taste quite different. I live in Indonesia where two different breeds are available to buy. One is the sort that westerners are used to with the massive breast, called ayam bule by the locals, and the other is a much leaner bird called ayam kampung. 112.215.36.179 (talk) 10:02, 11 April 2012 (UTC)[reply]
Perhaps the Q should be whether different breeds commercially available in the OP's locale taste significantly different. StuRat (talk) 22:23, 11 April 2012 (UTC)[reply]

mercury spectrum

What elements may have a spectrum of wavelengths very similar to the spectrum of mercury? — Preceding unsigned comment added by 197.255.118.206 (talk) 06:43, 8 April 2012 (UTC)[reply]

"Similar"? You can eyeball the visible spectra at http://www.umop.net/spctelem.htm Hg is in the lower left. 71.215.74.243 (talk) 10:16, 8 April 2012 (UTC)[reply]

sodium spectrum

What are the wavelengths of the red, yellow, yellowish-green, green, greenish-blue and violet colours of sodium? — Preceding unsigned comment added by 197.255.118.206 (talk) 08:15, 8 April 2012 (UTC)[reply]

This smells like homework to me, but if you can figure it out from http://physics.nist.gov/PhysRefData/Handbook/Tables/sodiumtable2.htm that's probably okay. 71.215.74.243 (talk) 10:20, 8 April 2012 (UTC)[reply]

physics

What is the difference between spectroscopy and spectrometry? — Preceding unsigned comment added by 197.255.118.206 (talk) 08:31, 8 April 2012 (UTC)[reply]

Spectroscopy is the science of determining characteristics from a spectrum. Spectrometry is the measurement of spectra. In many cases they can be used interchangeably without too much ambiguity, but it's better to be precise. 71.215.74.243 (talk) 10:06, 8 April 2012 (UTC)[reply]
Specialists in the field often have heated debates. The difference I learned from them in a bar long ago is mainly that half of them take offense at one term and half at the other:) Okay, the main difference I hear when they're sober is that spectroscopy deals specifically with absorption/emission of electromagnetic radiation over a range of frequencies whereas spectrometry especially refers to "things over a range" other than EM effects but may also be used as a parent field for the other. DMacks (talk) 14:28, 8 April 2012 (UTC)[reply]
I think your definitions are better than mine, because there's no such thing as acoustic spectroscopy. 71.215.74.243 (talk) 20:02, 8 April 2012 (UTC)[reply]
Further to DMacks' point, a non-electromagnetic example of spectrometry is mass spectrometry, which involves measuring the charge/mass ratio of fragments of molecules. Nothing to do with light. Brammers (talk/c) 09:34, 9 April 2012 (UTC)[reply]

Could Kinect ever come with a force-feedback body suit?

I get to burn some extra calories with the XBox360's Kinect, but there's always room for improvement.

If a force-feedback body suit could make me feel (somewhat) what I see on the screen, there is potential to burn away even more of our bodily excesses. If a zombie pushes me, I would feel an extra urge to push him away, for example.

However, what technological hurdles would its developers need to overcome in order to make a viable force feedback body suit work as intended? --Tergigress (talk) 10:29, 8 April 2012 (UTC)[reply]

See Haptic technology for starters. SemanticMantis (talk) 12:29, 8 April 2012 (UTC)[reply]
There aren't really any hurdles except expense. It's already possible to get stationary bikes with force feedback. Handgrips or gloves with force feedback could be built, but they would cost a lot, especially gloves. Looie496 (talk) 15:57, 8 April 2012 (UTC)[reply]

Why can't the Kinect recognize fingers and head movements?

I can't look up or down; only straight ahead. There are many RPG elements (yet to come?) that'll require you to nod or shake your head.

Moreover, you can imagine many games that would allow/require the use of your individual fingers. A "Guitar Hero"-style game for Xbox360s would finally teach the player how to play a real guitar without even needing a guitar-like accessory. However, the Kinect only recognizes arm movements, and not much more.

Would the Kinect only need a software patch update? Or would changes have to be made to its hardware that it would essentially take a "Kinect 2.0" to pick up the finer details of our bodily movements? --Tergigress (talk) 10:29, 8 April 2012 (UTC)[reply]

My understanding is that Kinect "sees you" as outlines of forms — as a silhouette — with some depth mapped on to that (it makes an infrared map like this and then creates the form map like this). It's extremely hard (and unreliable) to see things like fine head and hand movements with just silhouette. It would take significant changes to the way the Kinect worked to do these sorts of things. I wouldn't expect it as just a software patch. (And I would note that even well-informed air guitar is not really instructional for how to play a real guitar.) --Mr.98 (talk) 12:53, 8 April 2012 (UTC)[reply]
Its Z-resolution seems to be a bit better than those pictures suggest - check out the various things at http://openkinect.org/wiki/Gallery - the first facial animation shows (in its bottom left) the raw Kinect spatial data stream. That's pretty good, but its resolution is probably too poor to properly distinguish fingers, especially at a reasonable distance, for most angles of the hand. Given an order-of-magnitude improvement in resolution (which probably isn't unreasonable to expect in say 5 or 10 years) you could locate fingers for a guitar game pretty accurately. -- Finlay McWalterTalk 13:27, 8 April 2012 (UTC)[reply]
Try a Google search for 'kinect finger' and you can see some videos of people experimenting with this. I guess you'd need a bit of intelligence about how fingers work, as well as just detecting them, to get it working really well, but I don't see any great problem. Dmcq (talk) 13:42, 8 April 2012 (UTC)[reply]

which reaction formula is right?

For the reaction between aluminium and NaOH, I'm interested in knowing which of the two reaction formulae is correct:

2Al + 2NaOH + 2H2O → 2NaAlO2 + 3H2
2Al + 2NaOH + 6H2O → 2Na[Al(OH)4] + 3H2

They give different products and disagree in the initial amount of H2O. Katherine.J.W. (talk) 13:52, 8 April 2012 (UTC)[reply]

The section here, Sodium_aluminate#Reaction_of_aluminium_metal_and_alkali, suggests they may both be right. --Tango (talk) 14:14, 8 April 2012 (UTC)[reply]
(edit conflict) Looks like [Al(OH)4]− is just a hydrated form of [AlO2]−. As Tango's link says, there are lots of different forms possible with various numbers of "oxide" and "hydroxide" ligands depending on how much water is present (which makes sense, per Katherine.J.W.'s good observation that "the initial amount of H2O" is a key difference). DMacks (talk) 14:22, 8 April 2012 (UTC)[reply]

Mystery bird

I took a picture of this bird that was wading and fishing at Oyama Lake last July. The complete image is 15 MP and actually has a fish jumping on the other side of the frame. I hope to upload it to Wikipedia after I identify the bird. The fish is most likely a trout.--Canoe1967 (talk) 16:26, 8 April 2012 (UTC)[reply]

It is one of the Heron species. AndyTheGrump (talk) 16:30, 8 April 2012 (UTC)[reply]
Probably the Great Blue Heron. Mikenorton (talk) 16:31, 8 April 2012 (UTC)[reply]
Yup - If the 'Oyama Lake' referred to is the one in Canada, that makes sense. In Europe, one would expect to find the very similar Grey Heron, which is what I initially thought it was. AndyTheGrump (talk) 16:39, 8 April 2012 (UTC)[reply]
Thank you both. I should have mentioned which planet the lake was on. I am glad I didn't label it as a kingfisher before upload.--Canoe1967 (talk) 16:41, 8 April 2012 (UTC)[reply]
Er, yes - I don't think Kingfishers get that big. Actually, I may have a decent picture of a Grey Heron myself - or if not, I might see if I can take one to download (fairly common on the banks of the Thames above London, and in surrounding parklands etc). It would be nice to get a picture of one in flight - very distinctive. AndyTheGrump (talk) 16:47, 8 April 2012 (UTC)[reply]

I had the camera tilted approximately 40 degrees. I can't see a way to crop it decently to include the heron and fish with normal rotation. I will upload as is and get proposals and votes on the best way to fix it.--Canoe1967 (talk) 17:11, 8 April 2012 (UTC)[reply]

I managed to upload, but it seems the file is too large for thumbnails. I need to look into that. File:Heron and small trout.png --Canoe1967 (talk) 18:09, 8 April 2012 (UTC)[reply]
Changed to jpg upload. File:Heron and small trout.jpg. Where would I seek photographic opinions on how to fix the rotation, or is it fine as is?--Canoe1967 (talk) 18:59, 8 April 2012 (UTC)[reply]

 Done I asked for opinions in the graphics lab help area.--Canoe1967 (talk) 20:15, 8 April 2012 (UTC)[reply]

calcium metaborate

I'm trying to write a balanced chemical equation for the reaction of calcium oxide melted with boric acid to yield CaB2O4. Have I worked this out correctly? CaO + 2B(OH)3 = CaB2O4 + 3H2O. 86.7.42.12 (talk) 16:47, 8 April 2012 (UTC)[reply]

I hope we aren't doing your homework, but the oxygen count looks off.--Canoe1967 (talk) 17:15, 8 April 2012 (UTC)[reply]
Hmmm, I count seven each side? 86.7.42.12 (talk) 17:19, 8 April 2012 (UTC)[reply]
Yes, I corrected my above post. It has been a while since I have done those. I had problems with factoring trinomials even more.--Canoe1967 (talk) 17:26, 8 April 2012 (UTC)[reply]
Okay. And no, you're not doing my homework (I wish I were still that young!), especially as I think I have obtained the correct chemical equation anyway, just want to check for mistakes, like whether the charges are balanced or not. It's been a while since I last worked with chemistry too. 86.7.42.12 (talk) 17:34, 8 April 2012 (UTC)[reply]

If I remember correctly, you just need the correct yield compounds and the element counts need to have the same totals on each side, so it does look okay in that respect.--Canoe1967 (talk) 18:04, 8 April 2012 (UTC)[reply]

None of the redox numbers change because this is not a redox reaction (which is what you're most likely thinking of when you asked about balancing the charges), so all you need to do is balance each side of the equation, which you've already done. So yes, this is correct as is. 112.215.36.180 (talk) 06:59, 9 April 2012 (UTC)[reply]
Thank you very much, that's just the affirmative answer I was looking for. I was thinking of redox numbers, yes. 86.7.42.12 (talk) 10:06, 9 April 2012 (UTC)[reply]
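If it helps, that bookkeeping can be automated. The snippet below just tallies atoms on each side of CaO + 2B(OH)3 → CaB2O4 + 3H2O; the formulas are entered by hand, so it only checks the counting, not the chemistry.

```python
from collections import Counter

def count_atoms(species):
    # species: list of (coefficient, {element: count}) pairs
    total = Counter()
    for coeff, atoms in species:
        for element, n in atoms.items():
            total[element] += coeff * n
    return total

left = [(1, {"Ca": 1, "O": 1}),            # CaO
        (2, {"B": 1, "O": 3, "H": 3})]     # B(OH)3
right = [(1, {"Ca": 1, "B": 2, "O": 4}),   # CaB2O4
         (3, {"H": 2, "O": 1})]            # H2O

print(count_atoms(left))    # Counter({'O': 7, 'H': 6, 'B': 2, 'Ca': 1})
print(count_atoms(right))
print("Balanced:", count_atoms(left) == count_atoms(right))
```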

Why do cows eat grass?

The cows at a nearby farm seem to prefer almost anything over grass. If I bring my organic waste there, like banana skins, apple skins, even rotten vegetables, they come running to me from a great distance, eat all of it, hang around for a while (presumably hoping for more food), and then slowly retreat and start to eat their regular diet of grass.

So, why don't we feed cows with organic waste instead of grass? Count Iblis (talk) 17:50, 8 April 2012 (UTC)[reply]

Why do cows eat grass? Because they evolved in grasslands. As for them eating other things, they may well enjoy it. They may well even get some benefit from it - in small quantities. I doubt that a diet consisting solely of 'banana skins, apple skins, and rotten vegetables' would be good for them. AndyTheGrump (talk) 18:01, 8 April 2012 (UTC)[reply]
In the same way that horses are crazy for polo mints, but you'd not want to feed a horse solely on polo mints. 46.208.224.194 (talk) 18:03, 8 April 2012 (UTC)[reply]
Not even a polo pony? —Tamfang (talk) 01:15, 9 April 2012 (UTC)[reply]

Cows are ruminant animals. Every part of their digestive tract - from the shape of their teeth to the chemistry of their multichambered stomach - has evolved to support eating fibrous plant matter, like grass. However, other plant matter, like fruit, grains, and oats, is acceptable in some quantity. Here's a good overview of cattle feed for small-scale beef farmers, Feeds and Feeding for Junior Beef Cattle Projects, from the extension program of the Animal Science department at Texas A&M - you'll hardly find a more expertly qualified organization than that to explain cattle nutrition! In addition to grass and roughage, cattle need minerals, vitamins, and nutrients that are normally provided by oats and alfalfa, and other products; apples and fruits are acceptable supplemental sources, too, but tend to be too expensive for large-scale feeding. Nimur (talk) 18:23, 8 April 2012 (UTC)[reply]

We don't feed cows grass either, but rather corn, according to The Omnivore's Dilemma. →Στc. 20:07, 8 April 2012 (UTC)[reply]
Pasture-fed cattle obviously eat other stuff too. Grass is successful because it tolerates grazing well + the cows/other grazers eliminate the grass's potential competition from other sapling plants that could grow up to shade out the grass. SkyMachine (++) 23:55, 8 April 2012 (UTC)[reply]
Here in the UK, (and a lot of other countries) we got into a lot of bother by feeding cows on bits of other dead cows. Alansplodge (talk) 01:25, 9 April 2012 (UTC)[reply]
Cows are probably similar to primitive man, who had to subsist mainly on low-calorie veggies and grains. When they got something better, like fruit or meat, they were probably quite grateful, too. StuRat (talk) 01:20, 9 April 2012 (UTC)[reply]
As Nimur says above, the insides of cows are designed to eat grass and similar stuff and quite different from ours. Alansplodge (talk) 01:27, 9 April 2012 (UTC)[reply]
The similarity isn't in the actual diets, but that the majority of the diet of both cows and primitive humans was one thing, while both craved "treats" to supplement their meager diets. StuRat (talk) 02:10, 9 April 2012 (UTC)[reply]


Something I read long ago, so needs verification. I have heard that too much of very sweet or starchy foods can make cows and pigs ill by causing rapid fermentation in their guts leading to acidification and gas buildup. Staticd (talk) 06:00, 9 April 2012 (UTC)[reply]
See Bloat. 110.151.252.240 (talk) 21:08, 11 April 2012 (UTC)[reply]

Bayes' theorem in medical diagnostics

I'm trying to get a grasp on how Bayes' theorem can be applied to updating prior probabilities in medical diagnostics. In this context, Bayes' theorem could be written:

P(D | T+) = P(T+ | D) · P(D) / P(T+)

My understanding is that P(T+ | D) is the diagnostic sensitivity of the test, and P(D) is the doctor's prior probability of the patient having the disease in question. I'm having problems, however, in interpreting P(T+). Which probability should be entered here? The probability that any random person in the community that the doctor serves has a positive test, whether they have the disease or not? This doesn't sound right, as the doctor must have taken other information, such as the symptoms, age and sex of the patient, into consideration when assigning a prior probability. So, what probability, exactly, should be entered here? I am aware of the article Likelihood ratios in diagnostic testing; this question specifically concerns the use of Bayes' theorem. Thanks! --95.34.141.48 (talk) 20:17, 8 April 2012 (UTC)[reply]

P(D) is the probability that someone in a particular group has the disease. P(T+) is the probability that someone in that same group (regardless of their disease status) tests positive. Both would be based on observed frequencies within that group. What group? It could be all the patients tested during the development phase of the test (good because it gives a large sample, making for a more accurate probability), or it could be all the patients that the doctor has ever given the test to (good if test results vary by geography etc, but bad if it results in poor accuracy of probabilities due to small sample size), or it could be all the patients of particular age, sex, and symptoms that this doctor, or doctors in general, have ever given the test to. But everything in the formula has to refer to the same one of those groups. Choosing a group depends on how good the measured probabilities are as estimates of the true underlying probability, which depends on sample size, and on how much difference it is thought to make if you limit yourself to sub-groups with particular characteristics. Duoduoduo (talk) 20:54, 8 April 2012 (UTC)[reply]
Thanks! That was really helpful. Would the following statement be both a correct interpretation of your answer, and correct (albeit pedantic) mathematical notation?

P(D | T+, X) = P(T+ | D, X) · P(D | X) / P(T+ | X)

where X is the information that the patient belongs to a subgroup within the doctor's practice that the doctor would test for the disease, for whatever reason. If the doctor is not doing a clinical study, the doctor's suspicion of the patient having the disease, or the patient's fear of having the disease, could be such reasons. --95.34.141.48 (talk) 05:52, 9 April 2012 (UTC)[reply]
Yes, that's right, everything conditioned on X. The problem with choosing X to be all people who this doctor has ever suspected of having the disease is that you probably don't have good data on the percentage of them who actually had the disease -- maybe all you have is the percentage who tested positive. Also you probably don't have P(T+ | D, X), since the doctor probably has not tested everyone in X for the disease by all possible means. So it seems to me that one has to go with data from clinical studies. Duoduoduo (talk) 15:10, 9 April 2012 (UTC)[reply]
Thanks again! I know of course the necessity for data from clinical studies, but I am pursuing my reasoning in order to understand exactly what is going on. Therefore: would it help if I defined X as the subgroup of patients in the doctor's practice who ever contacted the doctor with a problem that led the doctor to perform the test? If the doctor had a suitable database of all his patients, lab-tests, and results, P(T+ | X) could be calculated directly from the data, by dividing the number of positive tests by the total number of tests performed (granted, you would have to figure out how to handle repeated testing of the same patient). When clinical studies calculate diagnostic sensitivities, and these are used for calculation of likelihood ratios for updating prior probabilities etc., it appears to me that there is an implicit assumption being made:

P(T+ | D, X) ≈ Sensitivity,

If we then interpret P(D | X) as the doctor's pre-test probability of the patient having the disease, and P(D | T+, X) as the doctor's post-test probability of the patient having the disease, we end up with

P(D | T+, X) ≈ Sensitivity · P(D | X) / P(T+ | X)

I understand, of course, that these are approximations, but are there any really serious flaws in this reasoning? --95.34.141.48 (talk) 19:20, 9 April 2012 (UTC)[reply]
I'm not sure why you think that those approximations are implicitly being used. Suppose there's a test for heart disease. Presumably in the clinical trials, if they report P(T+ | D, male), then the sensitivity was measured separately for males and is not being approximated by the unconditional P(T+ | D) or by a sensitivity conditioned on some other category Y. And on the other hand if all they report is a sensitivity conditioned on no special group, then what X means in P(D | T+, X) is simply the group "all people". In any event, your last equation is right provided all things including Sensitivity are conditioned on the same X. If the doctor had sufficient data about his own patients, then this X could be his own patients, or it could be his own male patients, etc. Duoduoduo (talk) 20:00, 9 April 2012 (UTC)[reply]
Thank you! Your responses have been very clarifying. Much appreciated! --95.34.141.48 (talk) 20:40, 9 April 2012 (UTC)[reply]
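For anyone following along, here is a small numeric illustration of the formula discussed above; the sensitivity, specificity and pre-test probability are invented values, not from any real test.

```python
sensitivity = 0.90      # P(T+ | D), assumed
specificity = 0.95      # P(T- | not D), assumed
prior = 0.02            # P(D): assumed pre-test probability in the chosen group X

# Law of total probability gives P(T+) within the same group
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)

# Bayes' theorem: P(D | T+) = P(T+ | D) * P(D) / P(T+)
posterior = sensitivity * prior / p_positive
print(f"P(T+) = {p_positive:.3f}, P(D | T+) = {posterior:.3f}")   # 0.067 and 0.269
```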

You can simply expand this probability:

P(T+) = P(T+ | D) · P(D) + P(T+ | ¬D) · P(¬D)

Icek (talk) 15:18, 9 April 2012 (UTC)[reply]

Thanks for your reply! See also Duoduoduo's replies above. It turns out that the P(T+ | X) term is particularly problematic, because it depends strongly on the prevalence of the disease in the doctor's practice, and also strongly on the doctor's tendency to have (unnecessary) tests performed, "to be on the safe side". It turns out that this problematic term cancels out when Bayes' theorem is transformed into its odds version, Bayes' rule. I was confused because (1) I hadn't read that article, and (2) the articles Likelihood ratios in diagnostic testing and Pre- and posttest probabilities treat likelihood ratios only in the context of dichotomous variables. See my question about this on the maths desk[3], and Meni Rosenfeld's reply that an approach using likelihood ratios with continuous variables is perfectly valid. So I'll experiment further with that approach. I'll be back with more questions if I run into further hurdles! --95.34.141.48 (talk) 16:03, 10 April 2012 (UTC)[reply]
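For reference, this is the odds form being referred to, in which the troublesome P(T+ | X) term cancels; written out in LaTeX, with everything conditioned on the background information X:

```latex
\[
  \frac{P(D \mid T^{+}, X)}{P(\neg D \mid T^{+}, X)}
  =
  \underbrace{\frac{P(T^{+} \mid D, X)}{P(T^{+} \mid \neg D, X)}}_{\text{likelihood ratio } LR^{+}}
  \times
  \frac{P(D \mid X)}{P(\neg D \mid X)}
\]
% post-test odds = likelihood ratio x pre-test odds
```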

Group 14 electronegativities

Why doesn't this group show any EN trend? Could someone give a detailed answer?--R8R Gtrs (talk) 20:32, 8 April 2012 (UTC)[reply]

Electronegativity is determined in a relative sense with all methods other than Mulliken. It's estimated as one of many contributing factors to the energy difference between products and reactants in different reactions. In other words, determining it is not an exact science. Not all of the groups give perfect patterns as the graph shows.
From the article on electronegativity: "The anomalously high electronegativity of lead, particularly when compared to thallium and bismuth, appears to be an artifact of data selection (and data availability)—methods of calculation other than the Pauling method show the normal periodic trends for these elements." 112.215.36.180 (talk) 07:34, 9 April 2012 (UTC)[reply]
Sorry, but this is not really helpful. (Also, I'm talking about differences here--lower or higher is a difference issue.) The Al-Ga, Si-Ge, etc. upturns can be explained (I guess) by the effect the new d-shell causes, but I don't have a systematic knowledge of this, and would love to find out more. About lead: I read that text as well. And this does not explain where the Pauling Sn-Pb upturn comes from, which must be important. The Pauling Pb figure (2.33) is greater than those of Bi, Po, and At; that's noticeable.--R8R Gtrs (talk) 12:44, 9 April 2012 (UTC)[reply]
The value 2.33 is for Pb(IV): the Pb(II) value is 1.87, which follows trends. Only for Tl and Pb do different oxidation states of the same element have vastly different electronegativities. Double sharp (talk) 10:30, 18 July 2014 (UTC)[reply]

A few questions from a science fiction author: Lethality of small amounts of high explosives and theoretical weapons systems

I'm currently working on a book for an RPG game developer that's in their "wish list" category. I dislike pure "pie in the sky" science fiction: I want my creation to be at least plausible.

One idea that has always intrigued me is the idea of a "utility launcher" weapon, based on the dizzying array of shotgun shells and grenades manufactured, especially the myriad special shells (flechette, HE, smoke) proposed for the Jackhammer automatic shotgun. My idea is that given the tactical adaptability needed of a 22nd century soldier (engaging rioting civilians, lightly armed insurgents, powered armor equipped enemy heavy infantry, light vehicles, etc) a common solution is a large-bore (perhaps 30-40mm) low-velocity pistol or rifle railgun. In this publisher's settings, power cells have put energy levels capable of fueling a railgun into common use. The standard D-sized battery carries 180,000 kW·s (180 MJ) of energy, as an example.

My question is threefold: First of all, how practical would such a weapon system be? Would there be special challenges faced? Would they make conventional weaponry obsolete, or would they be used as a side-arm in addition to a main battle rifle firing a round more like what we're used to seeing (for example a 5-8mm railgun main battle rifle)? Secondly: How much damage would a 30-40mm sphere of high explosive do? Would it be lethal enough to justify its use? What might its kill/wound radius look like? For this I think we can assume advances in chemistry would produce an explosive 10-30% more powerful than hexogen. Third: What sorts of rounds would a common soldier be issued? Obviously HEAT and HESH are likely, perhaps smoke and distraction device. Non-lethal stingball and CS rounds are other obvious possibilities, but one complaint I have with sci/fi is that there are often many advanced weapons but writers rarely look at theoretical grenades/explosives, except maybe plasma. I see no reason why they wouldn't advance along with the weapon technology. HominidMachinae (talk) 23:46, 8 April 2012 (UTC)[reply]

Are you thinking of something like the Milkor MGL? Alansplodge (talk) 01:04, 9 April 2012 (UTC)[reply]
In all honesty the fictional underpinnings of my idea come from the Scorpion pistol in Mass Effect 3 and the Pancor Jackhammer (which I consider fictional since fewer than 20 were ever made). I envision something like the sawn-off Vietnam-era variant of the M79 grenade launcher or the XM29. HominidMachinae (talk) 02:08, 9 April 2012 (UTC)[reply]
The US military is already looking at explosives 28% more powerful than hexogen, so it wouldn't be unreasonable to assume that something 100% or even 200% more powerful will become available by the 22nd century.
I really like your grenade idea. A bunch of army guys had similar ideas back in the 90's, and that's how the OICW project started. Twenty years later, two countries (the US and South Korea) are fielding computerized air-burst grenades. My speculation is that these computerized grenades will gradually gain new capabilities as technology evolves and eventually become "smart grenades" that can automatically identify threats, change trajectory in mid-flight, and perhaps even switch between HE and HEAT right before impact. Just to throw some wild ideas around: sub-munitions, spherical/cylindrical grenades that can roll toward targets after landing, grenades that can turn into mines. You might also want to take a look at what other sci-fi authors came up with w.r.t. grenades:[4] Anonymous.translator (talk) 02:17, 9 April 2012 (UTC)[reply]
I assume that all rounds will have a dual-use proximity switch based upon capacitance: just like the touch screen of a modern smartphone, they'll know if what they're stuck to is humanoid (and thus use an instant fuze) or inanimate (and thus use a proximity fuze based on capacitance); in essence they work like a very lethal Theremin. As to what other sci-fi authors have done, usually they stick to modern explosives when offered a choice of grenade; otherwise they fall back on a plasma grenade. Even settings in the far future such as Mass Effect or Halo use either fragmentation grenades much like those used since the 1800s or "plasma" as the default "sci/fi sounding" grenade type. I hope for a more realistic interpretation. HominidMachinae (talk) 05:17, 9 April 2012 (UTC)[reply]
My last sentence was actually directing you to this website with Tom Clancy's ideas about some interesting grenade rounds. I agree with you that most sci-fi books and games just keep the same old WWII-era frag grenades and make the explosion blue or something. But I guess game makers can't really put self-guided grenades in because they don't require player skill and might unbalance the game. Anonymous.translator (talk) 07:01, 9 April 2012 (UTC)[reply]
I wonder why you would only have explosives 30% more powerful. You have 180 MJ per D-sized battery (56.5 cm3), in other words 3.186 MJ/cm3 for your source of electricity. Hexogen has a heat of detonation of 83.82 kJ/mol, which works out to 377 J/g, or an energy density of about 687 J/cm3 at hexogen's density of roughly 1.82 g/cm3. Of course there are materials with much higher heats of reaction around (which do not detonate, e.g. thermite has about 17 kJ/cm3), but if you get up to 3.186 MJ/cm3, I'd really expect to see explosives much more powerful than hexogen. Icek (talk) 15:05, 9 April 2012 (UTC)[reply]
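To retrace the arithmetic above, here is a minimal sketch in Python; the 83.82 kJ/mol figure and the battery numbers are taken from the posts as given, not independently verified, and only the molar mass and an approximate density of RDX are added:

# Energy-density comparison using the figures quoted in this thread.
battery_energy_J = 180e6          # 180 MJ per D-sized cell (the game setting)
battery_volume_cm3 = 56.5         # volume of a D cell, as quoted above
hexogen_kJ_per_mol = 83.82        # heat of detonation as quoted above (taken as given)
hexogen_molar_mass_g = 222.1      # g/mol for RDX (C3H6N6O6)
hexogen_density_g_cm3 = 1.82      # approximate crystal density of RDX

battery_J_cm3 = battery_energy_J / battery_volume_cm3
hexogen_J_g = hexogen_kJ_per_mol * 1000 / hexogen_molar_mass_g
hexogen_J_cm3 = hexogen_J_g * hexogen_density_g_cm3

print(f"battery: {battery_J_cm3/1e6:.2f} MJ/cm3")                       # ~3.19 MJ/cm3
print(f"hexogen: {hexogen_J_g:.0f} J/g, {hexogen_J_cm3:.0f} J/cm3")     # ~377 J/g, ~687 J/cm3
print(f"battery/hexogen ratio: ~{battery_J_cm3/hexogen_J_cm3:.0f}x")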
According to this [5], D cells can only manage 75 kJ, and the "standard" non-alkaline version only 18 kJ. A 180 MJ energy cell could drive a car at full throttle for over 20 minutes. I suppose it's not impossible for the standard D-sized battery of the 22nd century to store 180 MJ and be able to release it both quickly and slowly, so that you can also plug it into a gun, though it probably won't run on electrochemistry, right? Sagittarian Milky Way (talk) 23:33, 9 April 2012 (UTC)[reply]
The energy per atom would be at least a few thousand electron volts, so it would probably be nuclear. But I would still wonder why this couldn't be used as an explosive... it cannot be simply radioactive decay (which is used in RTGs) because you can release the energy relatively quickly. So if you can speed up the reaction, why can't you make it fast enough to get an explosive? The nuclear reactions themselves usually happen very fast. Icek (talk) 00:31, 10 April 2012 (UTC)[reply]
I personally think a 180 MJ battery is silly, but I am writing for a published game system that specifies that at near-future tech level that is what the batteries carry. It gets really silly when you have said battery powering devices that seem to be less deadly than simply shorting the battery. But that's what the publisher's setting says, I'm bound by it. HominidMachinae (talk) 04:23, 10 April 2012 (UTC)[reply]
Here is a 40mm hand grenade (pic). Also, I'm amused by your idea that if fewer than 20 of something was produced then it's fictional; won't the Space Shuttles be surprised!
I should point out that I consider speculation on the implications of wide use of the Pancor as well as its hypothetical, proposed-but-never-made, ammunition types as the realm of fiction, not the weapon itself. Sorry for any confusion. HominidMachinae (talk) 04:20, 10 April 2012 (UTC)[reply]

A couple of comments on your first question. The biggest issue I see is with the size of the ammunition. A 30-40mm shell is, obviously, 30-40mm in diameter. That limits your ammo capacity and makes the weapon pretty large. You ask about a handgun/side arm, but either of the current magazine designs (hand-grip clip or revolving cylinder) becomes unwieldy with such large rounds. One possibility is something like what Metal Storm developed, where the rounds are carried stacked inside an interchangeable barrel. Your soldier could carry several barrels for a side arm, each loaded with rounds for a different environment. I'm not sure how well magnetic acceleration would work with a barrel like that, but if you can make 180 MJ batteries, miniature superconducting magnets should be simple.

As for rounds, a couple of other ideas: continuous-rod warheads, top-attack charges and thermobarics are all modern warheads that are used on larger missiles, but not, AFAIK, on grenades. As was mentioned above, computerized air-burst grenades are in development now, so something like steerable munitions, or even autonomous guided grenades which search for the largest threat, select the correct warhead (HE/HEAT) and steer to engage it, is possible. What about a kinetic-energy round (though I guess you'd need a rocket motor to pick up speed, since the recoil would be too great for a pistol/rifle)? For non-lethal options, how about glue traps or wire obstacles for inside buildings, or cameras that either stick to buildings or can be launched on a ballistic arc for reconnaissance? Tobyc75 (talk) 15:13, 11 April 2012 (UTC)[reply]

April 9

low blood pressure and i. v. heroin use

can low blood pressure be caused from heroin abuse? what can the outcome be? — Preceding unsigned comment added by 174.19.186.92 (talk) 02:20, 9 April 2012 (UTC)[reply]

I wouldn't expect it as a direct result, but perhaps it could be an indirect result, if while shooting up the addict damages a blood vessel and bleeds out. Death could result. This is most likely when mainlining into major arteries. StuRat (talk) 03:50, 9 April 2012 (UTC)[reply]
Here are three sites [6], [7], and [8] that show an association but usually in cases of overdose. The causes of low blood pressure can be widely variable and the subsequent effects can be unpredictable depending on the particular physical circumstances of the person at the time. Richard Avery (talk) 07:25, 9 April 2012 (UTC)[reply]

Perfectly elastic collisions for equal masses

From the article on Collisions:

"Collisions play an important role in cue sports. Because the collisions between billiard balls are nearly elastic, and the balls roll on a surface that produces low rolling friction, their behavior is often used to illustrate Newton's laws of motion. After a low-friction collision of a moving ball with a stationary one of equal mass, the angle between the directions of the two balls is 90 degrees. This is an important fact that professional billiard players take into account.[1] Consider an elastic collision in 2 dimensions of any 2 masses m1 and m2, with respective initial velocities u1 in the x-direction, and u2 = 0, and final velocities V1 and V2. Conservation of momentum: m1u1 = m1V1+ m2V2. Conservation of energy for elastic collision: (1/2)m1|u1|2 = (1/2)m1|V1|2 + (1/2)m2|V2|2 Now consider the case m1 = m2, we then obtain u1=V1+V2 and |u1|2 = |V1|2+|V2|2 Using the dot product, |u1|2 = u1•u1 = |V1|2+|V2|2+2V1•V2 So V1•V2 = 0, so they are perpendicular."

Although I see no problem with the math, I've been having a hard time trying to believe the result that the final velocities are perpendicular. Can anyone provide evidence for this statement (a video, if possible)? Also, if correct, what mechanism is directly responsible for the change in direction of the initial velocity? If the final velocities are indeed perpendicular, that would imply a force acting in a different direction than that of the moving ball's trajectory. However, the only forces I can see are the respective normals of each ball, which, as far as I know, are perpendicular to the contacting surfaces, that is, parallel to the initial velocity of the moving ball. Is there something I am missing? 186.28.49.19 (talk) 02:57, 9 April 2012 (UTC)[reply]

The cue ball need not hit the target ball straight-on (square with the direction of travel); they are not point particles. Here's a handwaving explanation for the case where they don't collide straight-on: if the cue makes a glancing blow on the target, you have to decompose the initial velocity into one vector normal to the contact and one orthogonal to it. The contact-normal component results in energy transfer to the target in the direction of that vector. The orthogonal component is not affected by the collision, so the cue keeps moving in that direction. DMacks (talk) 03:23, 9 April 2012 (UTC)[reply]
In the real world, there is an additional complication in that when the collision occurs, the cue ball will usually have some spin, which may include spin imparted by the cue tip (in any direction) and/or a previous bounce from a cushion (in a horizontal direction), and usually (sometimes solely) a component caused by its roll along the baize: after the collision, this spin (which is not greatly affected by the near-frictionless contact between the balls) operating through friction with the baize, will contribute to the subsequent direction of the cue ball, changing it from the Newtonian ideal. If the ball is spinning and/or travelling with any appreciable speed, it "skids" on the baize, and only when it slows down sufficiently in either sense does it fully interact with the baize, so the ball may well travel in a curve, or change direction abruptly after a brief interval, following the collision.
Rather than videos, may I suggest that you go to the miniclip.com website and try playing the 8 Ball Pool QFP game (in its solo version via the "Play Instead" button rather than the multiplayer mode reached via the "Play" button). This gives, for an online version, quite a good simulation of most of the parameters of a billiards-type game. (Confession time – I'm an addict.)
Pertinent to your particular query, you will find that a hard cue shot played from any position against a ball on the centre spot (a situation that is automatically set up at some re-racks, and always if one contrives to leave the last ball of a frame in the rack area when potting the penultimate ball) so as to pot the latter into a side pocket will always send the (skidding) cue ball at right angles, parallel to the length of the table, whereas if you play the cue ball more gently, its rolling friction-induced spin will be able to cause it to deviate from this ideal direction. You will also see that the helpful post-strike direction indicator (absent if you play in "Expert" mode) always indicates the "ideal" (spinless) post-collision directions of cue and object balls to lie at right angles.
Should you be so minded, you may through repeated playing be able to develop both a reasoned and an intuitive grasp of the balls' behaviours in various situations of speeds and spins. {The poster formerly known as 87.81.230.195} 90.197.66.4 (talk) 05:42, 9 April 2012 (UTC)[reply]
To answer your question directly, it seems there is indeed something you are missing. Any spin ("English") applied to the cue ball will be imparted onto the object ball in some proportion. HominidMachinae (talk) 06:04, 9 April 2012 (UTC)[reply]
But how would that spin be imparted in such a way that the resulting velocities are perpendicular? Also, since the formulation given in the article makes no reference whatsoever to spin or to the fact that they are balls (it only considers momentum and kinetic energy), it should be possible to change the problem to two cubes of equal mass on a low friction surface. In such a case, in which there is clearly no spin, what would be the mechanism that makes the final velocities turn out perpendicular to each other? By: 186.28.49.19 (the OP) 157.253.197.112 (talk) 21:13, 9 April 2012 (UTC)[reply]
I agree that the result that they end up perpendicular regardless of the initial angle is surprising, but that the math is right. And zero spin is assumed, so spin has nothing to do with it. You say "If the final velocities are indeed perpendicular, that would imply a force acting on a different direction than that of the moving ball's trajectory." But it's bouncing off -- things going in a straight line bounce off at an angle -- this happens if a cue ball bounces off an angled wall, so it's not surprising that it happens when the cue ball bounces off something that puts up less resistance than a wall does. So to me the only surprising part is the invariance of the result to the initial angle -- this somehow results from the assumption that the two balls have equal mass. Duoduoduo (talk) 14:47, 10 April 2012 (UTC)[reply]
Maybe the reason the 90° angle result seems counterintuitive is that we're used to seeing balls rolling with topspin continue in more or less the same direction after hitting another ball at an angle. But presumably the fact that it is rolling with topspin means that the cue ball, at the moment of impact, uses friction with the table to impart more momentum in the original direction, something assumed away in the math. Duoduoduo (talk) 15:17, 10 April 2012 (UTC)[reply]
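For the OP, a quick numerical check of the quoted algebra (a minimal sketch in Python; it assumes spinless, frictionless balls of equal mass, exactly as the derivation does, and the contact angle is arbitrary):

import math

def collide(u1, theta_deg):
    # Equal-mass, frictionless elastic collision: the component of the cue
    # ball's velocity along the line of centres is transferred entirely to
    # the object ball; the tangential component stays with the cue ball.
    n = (math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg)))
    along = u1[0]*n[0] + u1[1]*n[1]
    v2 = (along*n[0], along*n[1])
    v1 = (u1[0] - v2[0], u1[1] - v2[1])
    return v1, v2

u1 = (2.0, 0.0)
for theta in (10, 30, 45, 60, 80):
    v1, v2 = collide(u1, theta)
    dot = v1[0]*v2[0] + v1[1]*v2[1]
    ke_in = u1[0]**2 + u1[1]**2
    ke_out = sum(c*c for c in v1) + sum(c*c for c in v2)
    print(f"theta={theta:2d}  v1.v2={dot:+.2e}  KE in/out: {ke_in:.3f}/{ke_out:.3f}")

The dot product comes out as zero (to floating-point precision) and kinetic energy is conserved for every contact angle, so the outgoing velocities really are perpendicular; the degenerate cases are a dead-centre hit (the cue ball stops) and a complete miss.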

CARDIOLOGY

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


SIR PLS CAN U TELL ME WHAT IS THE GENERAL MEANING OF WHEEZING IS IT RELATED TO HEART DISEASE BUT THE GENERAL MEANING WHAT I KNOW IS THAT WHEEZING IS JUST DUE TO AILMENTS IN RESPIRATORY CAVITY BECAUSE OF INSUFFICIENT OXYGEN THE PERSON SUFFER FROM BREATHING TROUBLE IS THERENT ANY MEDICINE TO ERADICATE IT PLEASE GIVE ME SUGGESTIONS WHY CANT LADIESBECOME A CARTDIOLOGIST WHAT IS THE PROBLEM — Preceding unsigned comment added by Akshayaz (talkcontribs) 12:24, 9 April 2012 (UTC)[reply]

The reference desk will not answer (and will usually remove) questions that require medical diagnosis or request medical opinions. AndyTheGrump (talk) 13:20, 9 April 2012 (UTC)[reply]
Typing all in capital letters is bad manners too. Wickwack121.221.31.226 (talk) 13:22, 9 April 2012 (UTC)[reply]
Please re-write your question using correct capitalisation and punctuation. It is very difficult to understand it at the moment. --Tango (talk) 13:33, 9 April 2012 (UTC)[reply]
Um, there are female cardiologists; there's no reason that ladies can't become cardiologists. Nyttend (talk) 15:33, 9 April 2012 (UTC)[reply]
I can't see how this question is asking for medical diagnosis or opinion. It is asking for facts about a medical phenomenon. If he were to ask (for example) 'why are bruises blue?', the answers would fly in. Since when have we been only answering questions in perfect English? Please, Tango! It is not difficult to understand the three questions being asked. I trust all your comments will be flawless in future. Now, Akshayaz, the answer to your first question can be found by clicking on wheeze. The answer to your second question is not lack of oxygen but a narrowing or partial obstruction of the airways in the lungs (it may of course cause a lack of oxygen). There is medicine available to improve this problem, but it must be prescribed by a doctor. Finally, in many countries in the world there are lady cardiologists. Perhaps it is your experience that makes you think that cardiologists are always men. I hope that helps. Richard Avery (talk) 07:37, 10 April 2012 (UTC)[reply]
The article Wheeze will tell you the general meaning of wheezing. You will have to consult a doctor to find out what is causing wheezing in a particular case. --Colapeninsula (talk) 08:38, 10 April 2012 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Seahorse gender

So, one of my all-time favorite factoids: seahorse females have a penis and males get pregnant. Why do we think that the males are males, and not the other way around? (Besides genetics - I hear this assumption was around before gene testing.) ~~Xil (talk) 13:19, 9 April 2012 (UTC)[reply]

The females produce eggs - that's what makes them female. Males produce sperm - that's what makes them male. Male seahorses do not get pregnant - they merely have a carrying pouch to protect the young. Wickwack121.221.31.226 (talk) 13:28, 9 April 2012 (UTC)[reply]
And what female seahorses have isn't a penis, it's an ovipositor. Penises transmit semen; ovipositors transmit eggs. Red Act (talk) 13:41, 9 April 2012 (UTC)[reply]
The question, though, was how science first arrived at that conclusion. I am pretty sure it was discovered before you could genetically test seahorses, and their eggs are probably microscopic enough for it not to be all that obvious. So, seeing an individual with a penis-like appendage and another that gets pregnant, how do you conclude that the first is female and the second is male? ~~Xil (talk) 16:41, 9 April 2012 (UTC)[reply]
Fish don't get pregnant, so if you see a fish doing something analogous to pregnancy (and only loosely analogous at that), there is no reason to assume that is the female. The female's ovipositor isn't really penis-like. While the reproductive method of the seahorse seems superficially like the reproductive method of mammals with the roles reversed, that isn't how scientists would determine their sexes. They would compare them to other fish, and the female seahorse has far more in common with other female fish than other male fish, and vice versa. --Tango (talk) 17:28, 9 April 2012 (UTC)[reply]
In a strict sense, that is correct, as no fish nourishes offspring internally in the process of placental vivipary. However, a number of fish retain their fertilised eggs internally in the process of ovoviviparity until they hatch and thus exhibit viviparous birth. I even caught such a "pregnant" female fish once, a Viviparous eelpout, whose condition was not apparent until it was prepared for dinner. Some species of snakes and other reptiles, and some amphibians, do the same.
The seahorses' method has parallels in other Classes, such as the Gastric-brooding frog. {The poster formerly known as 87.81.230.195} 90.197.66.34 (talk) 19:53, 9 April 2012 (UTC)[reply]
You are right that scientists made the male/female identification long before genetic tests came along. Even today, it would be hard to tell the sex of an individual of an arbitrary new species just using a genetic test without knowing a little about the genetics of a related species. As 121.221.31.226 pointed out above, the scientists distinguished the sexes by looking for eggs and sperm. A microscopic/histological characterization was common very long ago. Also (as Tango mentioned), when compared to other fish, the internal organs - testes, ovaries and accessory sexual organs - are quite distinguishable, both macroscopically and under a microscope. So, given the choice between calling an individual with eggs, ovaries and a "penis" a male with eggs or a female with an ovipositor, the choice was kinda obvious with enough details. The same goes for choosing between a "female" with testes and swimming sperm and a male who has a pouch. Hope this clarifies things; mark as solved if you think it is. Staticd (talk) 18:39, 9 April 2012 (UTC)[reply]
What is the essential minimum difference that enables one to declare "this cell is an egg and that one is a sperm" if you know nothing about the animal that produced it? Is it "sperm have tails that they use to swim but eggs do not actively move"? Roger (talk) 07:53, 10 April 2012 (UTC)[reply]
Size (see sperm). --Colapeninsula (talk) 08:40, 10 April 2012 (UTC)[reply]
Note that isogamy does exist ancestrally. I suppose that it is possible that in some lineage, males and females eventually lost the differences between eggs and sperm and returned to isogamy, then later reacquired the differences with the opposite pattern, thus inverting male and female roles at the gamete level. But I certainly do not know of such a case. Wnt (talk) 16:59, 10 April 2012 (UTC)[reply]

white hole

A white hole sounds a lot like a supernova. What is the proposed life expectancy of a white hole? If, like some other phenomena, it can only last for a short period of time, why couldn't this be what the formulas are seeing? Wouldn't this also follow Occam's razor? — Preceding unsigned comment added by 165.212.189.187 (talk) 14:44, 9 April 2012 (UTC)[reply]

Please read White hole. It is not related to a supernova in any way at all. What formulae are you referring to? Plasmic Physics (talk) 14:48, 9 April 2012 (UTC)[reply]

Whatever formulae physicists use to interpret the universe that come up with these conclusions.165.212.189.187 (talk) 15:23, 9 April 2012 (UTC)[reply]

From their articles it is difficult for me to see how they are as different as you say. Could you name some differences? Also, the white hole article says nothing can enter it; does that include light? — Preceding unsigned comment added by 165.212.189.187 (talk) 15:03, 9 April 2012 (UTC)[reply]

I think in the original post when the OP typed white hole he meant white dwarf. When a moderate size star ages and eventually shrinks, it doesn't have enough mass to go supernova, so it becomes a white dwarf. When a high-mass star ages and collapses, the collapse results in it going supernova and then becoming either a neutron star or, in the very high mass case, a black hole. Duoduoduo (talk) 15:37, 9 April 2012 (UTC)[reply]

No he didn't165.212.189.187 (talk) 15:47, 9 April 2012 (UTC)[reply]

A white hole is a type of singularity. A supernova is an exploding star. They have nothing in common. White holes aren't actually very interesting - it turns out when you do the maths that they are indistinguishable from black holes. --Tango (talk) 18:25, 9 April 2012 (UTC)[reply]

ThHis "In addition to a black hole region in the future, such a solution of the Einstein equations has a white hole region in its past. However, this region does not exist for black holes that have formed through gravitational collapse." and this "objects falling towards a white hole would never actually reach the white hole's event horizon the white hole event horizon in the past becomes a black hole event horizon in the future" sound to me like the explaination of a supernova which results in a black hole. in the same relative terms could you explain an actual supernova-to-black hole event? 165.212.189.187 (talk) 19:26, 9 April 2012 (UTC)[reply]

More relevantly, note the nearby bit in white hole about there being no known physical processes that could result in a white hole. Conversely, we are entirely certain that supernovae and black holes exist and we're well-versed in the processes surrounding them. Consider using those articles as a basis for understanding the phenomena in question. — Lomn 19:41, 9 April 2012 (UTC)[reply]

I did. Why can't someone list some differences in the properties of the two events in question instead of referring me to the articles? 165.212.189.187 (talk) 19:52, 9 April 2012 (UTC)[reply]

People have pointed out that supernovae exist and white holes don't. What further needs to be added on the question of whether one is really the other? --Sean 20:39, 9 April 2012 (UTC)[reply]
A supernova does not have a singularity or an event horizon, for one. Plasmic Physics (talk) 23:20, 9 April 2012 (UTC)[reply]
If there is a white hole, it has always existed (by definition as far as I know, but at least in the commonly regarded case of the Schwarzschild metric), and a black hole exists at the same location in space. If what we see as a supernova happens at the location of such a pair of white hole and black hole, then the supernova would not "be" a white hole, but it would just be a particle ejection event happening once in the history of the white hole.
Look at the diagram in alternative coordinates (the Kruskal–Szekeres coordinates; they are written out just after this post): the hyperbolic curves are curves of constant r (the radial Schwarzschild coordinate). There are lines of constant t (the Schwarzschild time coordinate) which are not shown in the diagram, but they are simply straight lines passing through the origin. Only in the "exterior" regions (left and right quarters of the diagram) do these lines of constant t actually correspond to surfaces in spacetime which can only be crossed in one direction (from past to future), i.e. boundaries between past and future. In the interior regions (top and bottom quarters of the diagram), it is the hyperbolic curves of constant r that are such boundaries between past and future (more technically, spacelike curves). That is because the Schwarzschild r and t coordinates sort of switch roles at the event horizon.
So a particle (or call it light and matter) ejection event would only be a pretty confined event happening at the boundary of the lower quarter and the right quarter of the diagram, where at one small part of the event horizon many particles would cross to the outside.
I don't know why you would think that a supernova would be something like that. I would think that it is a product of stellar evolution rather than some white hole event. In case of the white hole event, I have no clue why there would be an ejection at a particular time, and the particles would come more or less from nowhere (from the singularity).
Icek (talk) 23:51, 9 April 2012 (UTC)[reply]
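For reference, the "alternative coordinates" described above are the Kruskal–Szekeres coordinates. In the exterior region (r > 2M, in units with G = c = 1) they can be written in the standard textbook form

T = \left(\frac{r}{2M}-1\right)^{1/2} e^{r/4M}\,\sinh\!\left(\frac{t}{4M}\right), \qquad X = \left(\frac{r}{2M}-1\right)^{1/2} e^{r/4M}\,\cosh\!\left(\frac{t}{4M}\right),

so curves of constant r are the hyperbolae X^2 - T^2 = \left(\frac{r}{2M}-1\right) e^{r/2M}, and curves of constant t are the straight lines T/X = \tanh\!\left(\frac{t}{4M}\right) through the origin, matching the description in the post above.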

What do you mean? Wouldn't they come from the star? — Preceding unsigned comment added by 76.117.202.67 (talk) 05:27, 10 April 2012 (UTC)[reply]

You do not always know whether there was a star at the position where you see a supernova. If you want to explain even the observation of a star which becomes a supernova by a white hole, then make the white hole mysteriously eject particles at a lower rate for a longer time and at a higher rate for a shorter time... but again, a white hole hypothesis doesn't seem to explain anything. Icek (talk) 05:59, 10 April 2012 (UTC)[reply]
To summarize, the answer to your question is that white holes do not eject particles, and are not localized in time (they are extremely long-lived). Those are two reasons they can not be supernovae. -RunningOnBrains(talk) 16:28, 10 April 2012 (UTC)[reply]

OK, I get it. but why does the first sentence of the article say that particles are emitted? — Preceding unsigned comment added by 165.212.189.187 (talk) 17:44, 10 April 2012 (UTC)[reply]

It doesn't, and it didn't before you edited it (today at 17:46). Icek (talk) 18:18, 10 April 2012 (UTC)[reply]
Huh? It says "A white hole, in general relativity, is a hypothetical region of spacetime which cannot be entered from the outside, but from which matter and light have the ability to escape." And his edit of 17:46 merely changed "may" to the synonym "has the ability to". So does the lead sentence need to be changed to be consistent with the above assertion of Runningonbrains? Duoduoduo (talk) 16:35, 11 April 2012 (UTC)[reply]

To an observer within the event horizon of a black hole, wouldn't the rest of the universe be a white hole?112.215.36.179 (talk) 09:33, 11 April 2012 (UTC)[reply]

That was my next question!165.212.189.187 (talk) 15:05, 11 April 2012 (UTC)[reply]
No, why should it? Plasmic Physics (talk) 09:51, 11 April 2012 (UTC)[reply]
Because a white hole is a region that cannot be entered (since an object within the event horizon of a black hole cannot escape, it cannot enter the rest of the universe). An object that is a black hole in the future is a white hole in the past and all futures for objects within the bh's event horizon lead to the singularity whereas all possible pasts reside outside (within the rest of the universe).112.215.36.179 (talk) 10:13, 11 April 2012 (UTC)[reply]
A white hole gravitationally attracts like a black hole; an observer would always be moving away from the horizon of the black hole toward the singularity (the future). Plasmic Physics (talk) 10:37, 11 April 2012 (UTC)[reply]
The matter in the rest of the universe will also gravitationally attract objects towards the event horizon, but they will never move towards it due to the singularity's gravity and the distortion of spacetime.112.215.36.179 (talk) 10:51, 11 April 2012 (UTC)[reply]
The way you are speaking does not make sense, please rephrase. Plasmic Physics (talk) 11:01, 11 April 2012 (UTC)[reply]
Not sure what didn't make sense, but I'll try. Matter not inside the event horizon of a blackhole still exerts its gravitational pull on objects that are within the event horizon. It won't make any difference to the objects though, because they can only move toward the singularity as you said. In short, I see no difference between how we would see a white hole, and how an observer within the event horizon of a black hole would see the rest of the universe; it has mass (and therefore exerts a gravitational pull) and charge, it cannot be entered, light and matter can escape from it, and it exists only in the past(this is due to the black hole's gravitational field distorting spacetime and forcing all futures toward the singularity). 112.215.36.179 (talk) 11:17, 11 April 2012 (UTC)[reply]

That could also explain why they are found in the same spot as a pair.165.212.189.187 (talk) 16:14, 11 April 2012 (UTC)[reply]

I'm pretty sure an observer within the event horizon can't see the rest of the universe; all possible directions in which he could attempt to look point at the singularity, not back out to the event horizon. Writ Keeper 20:13, 11 April 2012 (UTC)[reply]
White holes only exist in the past, so not being able to see the rest of the universe irrespective of which "direction" you look in is consistent with it appearing the same as a white hole. 110.151.252.240 (talk) 20:33, 11 April 2012 (UTC)[reply]

April 10

Unknown drug powder ID

How can I find out what a packet of an unknown drug is made of? I'm not sure whether it's one drug or several; it's in powder form. It is extremely unlikely to be an illegal drug; I'm more concerned that it is a normal prescription drug. Is the Unknown Substance test here [9] the kind of test I should be looking for? Would a doctor or hospital have access to such a service? How would I find a cheap one? How much would it cost? Thanks. 66.108.223.179 (talk) 00:23, 10 April 2012 (UTC)[reply]

Not directly addressing your question: It is of course your own business what you do with this mysterious substance (unless you manage to break any laws by possessing or using it, and are caught), but (and I am not suggesting that this is your intention) it is generally considered extremely unwise to use pharmaceutical drugs (other than "over-the-counter" ones) unless they have been specifically prescribed to treat a condition one currently has, have been kept in the correct conditions where applicable, and are within their "use-by" date (after which they may lose efficacy), which from what you have said doesn't apply. It's also unwise to keep unused and unneeded drugs around (even if their identity is known) in case someone accidentally or through ignorance (a child, an aged adult with dementia) uses them (though again this may not be applicable in your situation), so the prudent course would be to hand it in to the nearest Pharmacist, Doctor's Surgery or Hospital Outpatients Department for safe disposal (tipping it down the drain may have a harmful effect on the environment and may well be illegal), which is a routine duty of theirs (in the UK, at any rate). I used to be employed in the pharmaceutical industry (in a minor administrative role), so I'm sensitised to such matters.
It's possible that further circumstances you haven't disclosed may make your retention and investigation of this presumed drug, perhaps purely to satisfy your intellectual curiosity, a reasonable course, but I can't tell that from what you've said. {The poster formerly known as 87.81.230.195} 90.197.66.44 (talk) 02:57, 10 April 2012 (UTC)[reply]
In the UK you are supposed to take such items to a pharmacist who will dispose of them for you. There are public analysts who will, for a fee, undertake the work you are thinking of, but I'm not sure what they will do if they identify that it is an illegal or controlled substance such as diamorphine. --TammyMoet (talk) 09:34, 10 April 2012 (UTC)[reply]

it sounds to me from "I'm more concerned it is a normal prescription drug" that this could be someone "concerned" about this. Whether a teacher or family member, etc, it seems the poster has good reason to say it's "highly unlikely" to be an illegal drug; meaning they know the person or situation in which the bag was found and what the person or people who could be associated with that are likely and unlikely to have access to. It sounds like the worry/concern is prescription drug abuse. 134.255.115.229 (talk) 10:10, 10 April 2012 (UTC)[reply]

I'm not sure what the best option is for you economically for the test, but if you have a significant quantity of the stuff, you could rule out some silly things right away. To do that, try dissolving a trace of it in water, alcohol, acetone, toluene ... whatever chemicals come easily to hand, with one being quite hydrophobic. (Be careful about potential fumes or fires during mixing, if, say, it turns out to be an oxidizer for homemade fireworks!) If the stuff won't dissolve in anything then it might be something like talc/clay/etc. rather than something to worry about. If the stuff has a really, really strong color, even when you add more of your solvent to a drop, then it's more likely (not guaranteed though) to be an art supply. You can also try burning a trace of it in a flame (well ventilated area...) to see if it is flammable; if not, it might be, say, borax to kill roaches. Most modern drugs are carbon-based and would catch on fire. If it gives intense color in a flame (flame test) this might tell you more about it - lithium salts would be red, sodium yellow and so forth. Detergent powder (sodium lauryl sulfate) also has an intense awful smell. Maybe after a few ultra low tech tests like this you'll feel more comfortable to just come out and ask what the stuff is, and find out if it's nothing. Wnt (talk) 16:53, 10 April 2012 (UTC)[reply]

postcognition

what is the minimum time between when something occurs and when we can react to it? (meaning behave in a way that is statistically differentiable from if it did not occur, based only on normal sensory input and our behavior to it). subconscious reactions OK. 188.157.112.202 (talk) 00:30, 10 April 2012 (UTC)[reply]

The human reaction time depends on the stimulus and activity performed. See the article for a variety of different measured values. --Mr.98 (talk) 01:01, 10 April 2012 (UTC)[reply]

Could a satellite really pack the energy to knock out the entire US's power grid?

See from 1:35 on this game trailer, a satellite knocking out the entire grid for the US.

In the Homefront reality, after Korea reunifies under the Northern regime and annexes several countries on the Pacific Rim, and after our infrastructure and society have already deteriorated due to massive national debt and hyperinflation, we get our power knocked out by a single satellite.

(Then we get invaded by their forces up to the western side of the Mississippi River.)

Is our power grid really not that robust? How much energy must be packed in an EMP blast to take out our power from sea to shining sea?

Moreover, what safeguards can we put on our electrical grid today so that we do not go dark from an EMP attack in the future? Thanks. --Tergigress (talk) 06:14, 10 April 2012 (UTC)[reply]

I would expect that multiple nuclear detonations in the ionosphere would be needed. The best defense is probably just to bury all our electrical lines, as shielding them all would be prohibitively expensive. (Burying also has other advantages, like protecting them from weather.) Individual buildings might still be vulnerable, but the surge typically needs to build up over a long length of exposed wire to cause damage. Surge protectors would help, too. StuRat (talk) 06:19, 10 April 2012 (UTC)[reply]
(edit conflict) As far as safeguards go, there is the Faraday cage, which is basically an enclosing mesh or box made of conducting metal. It eliminates any danger from EM pulses, though I suspect it would be difficult (and very, very costly) to implement such protection across the entire United States power grid. But I would say that the chance of a solar-flare-induced EMP causing problems with our electrics is far greater than that of any hostile attack, seeing as how the GBMD exists and stuff. I actually heard once that a single detonation could wipe out US-based electrics if it went off fairly high up above Kansas. As far as the North Koreans go, the worst they could do (assuming they were suicidal) with their current missile technology is, with luck, cause a rolling blackout or two on the west coast, the best-case scenario being as far east as Las Vegas (I would guess). Burying power lines would certainly help, and I've often wondered why this isn't standard practice like it is for water infrastructure. Evanh2008 (talk) (contribs) 06:32, 10 April 2012 (UTC)[reply]
Shallow burial offers little protection against EMP (except perhaps if you are burying the cables inside some other conductor like a metal pipe). During Soviet era tests, buried power lines exposed to EMP also experienced severe surges. Dragons flight (talk) 16:42, 10 April 2012 (UTC)[reply]

For the reasons why most high tension electrical infrastructure is not underground, see Underground transmission — Preceding unsigned comment added by 112.215.36.182 (talk) 08:24, 10 April 2012 (UTC)[reply]

I get that there would be extra costs associated with it, but it just seems like those costs would be made up for eventually by not having to replace lines every time a major storm blows through. I dunno. Evanh2008 (talk) (contribs) 08:27, 10 April 2012 (UTC)[reply]
Underground cables can be more expensive to maintain than above-ground lines because you have to dig them up to investigate faults. --Colapeninsula (talk) 08:43, 10 April 2012 (UTC)[reply]
In areas with a risk of salt-water flooding and areas with frequent freeze-thaw cycles, it is typically much more expensive to keep your lines underground because these are factors which can damage underground lines, and as mentioned above there is an extra cost (not to mention extra repair time) when they must be maintained and/or replaced. Also in areas with dense soil such as clay the initial installation cost goes up even further. -RunningOnBrains(talk) 16:35, 10 April 2012 (UTC)[reply]
I think the main disadvantage is that underground lines cannot dissipate heat as effectively as overhead line which get significant air cooling. Hot wires have much greater resistance. 112.215.36.178 (talk) 04:16, 11 April 2012 (UTC)[reply]
For a long discussion of the effects of an EMP and the fact that our civil infrastructure is largely unprotected from it, see, if you have not already, Electromagnetic pulse. Depending on the size of your weapon and the height at which you detonate it, assuming you can put it where you want, you could do quite a lot of damage to the civilian infrastructure. Whether that is worth spending billions to defend against depends on how likely you think that sort of attack would be and how risky it is. Most non-alarmist commentators think it is a pretty unlikely thing, given that setting off a nuke above the United States in such a way — even firing a nuke on such a trajectory — would be a recipe for massive retaliation, and it would not lessen our ability to retaliate (in part because military technology is shielded from EMP, but also because a lot of our nuclear and conventional forces are not located in the continental United States). --Mr.98 (talk) 19:45, 10 April 2012 (UTC)[reply]

Sexual vs. asexual reproduction vis-a-vis early evolution

Just a thought that I had regarding the theory of evolution and hypothetical applications to theories regarding the origin of life: Are there any theories among biologists as to at what point (and why) sexual reproduction would have come about in the proverbial "primordial ooze"? In other words, what evolutionary pressures would have caused individuals who developed sexual reproduction to become a dominant faction of life? What are the actual benefits of sexual reproduction as opposed to asexual? In addition, in another world, would there have been any fundamental law of science to prevent asexually-reproducing multi-celled organisms from becoming dominant? Thanks! Evanh2008 (talk) (contribs) 08:25, 10 April 2012 (UTC)[reply]

The article Evolution of sexual reproduction discusses this. --Colapeninsula (talk) 08:44, 10 April 2012 (UTC)[reply]
Indeed it does! I should have looked a little harder before posting this question. Many thanks! :) Evanh2008 (talk) (contribs) 08:47, 10 April 2012 (UTC)[reply]
Sexual reproduction allows mixing of genetic material from different individuals to occur; this results in genetic variation at a far higher rate than in the case of asexual reproduction. Genetic variety in asexually reproducing organisms is purely a result of random mutation in individuals, and viable random mutations are very rare. Compare that to the mixing in sexual reproduction, which results in a practically unique genetic makeup for every individual organism (except for identical twins, of course). More variation allows faster, more effective adaptation to changing environmental pressures. Roger (talk) 09:03, 10 April 2012 (UTC)[reply]
Note that the article points out that many of the benefits of sexual reproduction can be had from "sexual non-reproduction", e.g. bacterial conjugation. The origins of this latter phenomenon (horizontal gene transfer, really) are probably ancient indeed - I would think that they should predate the existence of species and individuals entirely, in some era when genetic material still freely reacted and interacted in the open before some means was invented to partition it. Sexual reproduction simply combines a thorough horizontal gene transfer with the process of reproduction, so that the process is only needed and effects are only tested in progeny at the earliest stage of development. Wnt (talk) 16:37, 10 April 2012 (UTC)[reply]
And I believe that the longer time between generations of larger organisms also plays a role. That is, with the short time between generations in single-celled organisms, evolution proceeds at a rate quick enough to adapt to changes in the environment, even by asexual reproduction. However, with large, multi-cellular organisms, with a much longer period between generations, asexual reproduction would not be quick enough for them to adapt, and they would go extinct. Similarly, relying on a high mutation rate alone produces a large portion of defective organisms. While this is acceptable for single-celled organisms, it is not for large organisms, which may not have enough offspring to survive so many deaths. StuRat (talk) 16:49, 10 April 2012 (UTC)[reply]

What are the colors of human cone cells?

[Diagrams shown at right: Cones_SMJ2_E.svg, Cone-response.svg, Cone spectral sensitivities.png]

I haven't formally studied color vision, but I've read Ware's Visual Thinking for Design as well as various Wikipedia articles. Often, in diagrams, we treat the L, M, and S cone cells as if they simply detect the colors red, green, and blue. Those colors are used for the curves in the diagrams on the right, and they show up in schematics of the retina, such as this mosaic (source).

This representation can be useful, but it seems misleading. The simplest reason is that L and M are assigned very different colors when they are actually very similar (which is why the opponent process must subtract their signals to derive perceptually interesting information). So the broad version of my question is, what would be a more "honest" assignment of colors to L, M and S?

A simple approach would be to consider monochromatic light with the same wavelength as the peak wavelength in the responsivity spectrum. Let's say the peaks are at 564, 534, and 420 nm (taken from Cone-response.svg). Using the calculator at http://rohanhill.com/tools/WaveToRGB/ (which outputs to an unknown RGB color space), we get the following representations:

  • L, 564 nm: Red 196 Green 255 Blue 0 RGB Hex: #C4FF00
  • M, 534 nm: Red 87 Green 255 Blue 0 RGB Hex: #57FF00
  • S, 420 nm: Red 56 Green 0 Blue 255 RGB Hex: #3800FF

That's already pretty surprising, right? But I'm looking for more than just the peaks, represented as fully saturated hues. I'm also looking for an intuitive understanding of the width of the curves and the amount of overlap. What I'd like to do is take each responsivity curve, multiply it by a standard white-light spectrum, and map this filtered light into sRGB. I don't think the result would have a direct physical or psychological interpretation, but I think it would still be instructive.

I have no idea how to actually do this. Do you? :-) If so, I think these colors might be useful to add to a few Wikipedia articles. Thanks! Melchoir (talk) 09:42, 10 April 2012 (UTC)[reply]
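One way to carry out the procedure described in the question, sketched in Python: filter a white-light spectrum through each cone's responsivity curve, integrate the result against the CIE 1931 colour-matching functions to get XYZ, and convert to sRGB. This is a sketch only; the arrays wl, d65, cone_L, cone_M, cone_S, xbar, ybar and zbar are assumed to already exist as NumPy arrays on a common wavelength grid (they are not supplied here), and the normalization and gamut handling are judgment calls, so the output is illustrative rather than physically or perceptually meaningful.

import numpy as np

# Linear-sRGB matrix (D65 white point) and the sRGB transfer function.
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def gamma(c):
    c = np.clip(c, 0.0, 1.0)
    return np.where(c <= 0.0031308, 12.92*c, 1.055*np.power(c, 1/2.4) - 0.055)

def spectrum_to_srgb(spd, wl, xbar, ybar, zbar):
    XYZ = np.array([np.trapz(spd*cmf, wl) for cmf in (xbar, ybar, zbar)])
    XYZ = XYZ / max(XYZ[1], 1e-12)             # normalize luminance (a choice, not a law)
    rgb = XYZ_TO_SRGB @ XYZ
    rgb = rgb / max(rgb.max(), 1e-12)          # crude scaling into gamut; clips saturation info
    return np.round(255*gamma(rgb)).astype(int)

# Hypothetical usage, once the data arrays are loaded:
# for cone, name in ((cone_L, "L"), (cone_M, "M"), (cone_S, "S")):
#     print(name, spectrum_to_srgb(d65*cone, wl, xbar, ybar, zbar))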

One other thing to consider is what experience you get when those cones are stimulated alone. It is pretty clear from the ends of the spectrum that you get a violet experience from the S cone, and a red experience from the L cone. The green one may be very hard to stimulate alone without also stimulating either the red or blue cones at the same time. One way would be very dim green light, in which perhaps the M cones are active and the others not. Another way would be to bleach L and S with bright red or violet light and then look at green. I call this experience ultragreen: a green colour more saturated than spectral green. Adding our new ideas to Wikipedia articles would be original research, so we don't do it. Graeme Bartlett (talk) 10:13, 10 April 2012 (UTC)[reply]
RGB is severely undersampled. To make matters worse, display monitor pixels aren't monochromatic outputs, either: each color element emits a wide range of wavelengths centered around a particular color. So we have a seriously undersampled spectrum in a non-orthonormal basis. (You discovered this by plotting and noting an overlap in sensitivities). What is really needed for color accuracy is a full spectrum sweep: such as is performed in an optical physics laboratory or a surveillance satellite. But, for engineering reasons, we have "RGB" cameras and monitors; so unless we work for the NRO, we stuff all our visual information into three numbers - so there's gonna be some overlap.
Anyway, the standard technique for taking a lot of data (spectral response at all frequencies) and optimally representing it as a three-element vector (R,G,B) is called... principal component analysis. If you begin down this path, you'll soon conclude (like everyone else) that the math is pretty horrible; but keep the concept clear: you're trying to find just three numbers that accurately represent an entire continuous spectrum. Then you will postmultiply against your derived eigenvectors - idealized spectral responses for R, G, and B.
If this process was easy, camera, film, and digital imaging companies wouldn't spend so much effort on color accuracy and white balance. If you implement a program to compute optimal eigenvectors, ("idealized spectral response"), you will no doubt discover that they vary with light source, scene conditions/brightness; leakage from invisible infrared, ultraviolet, and other out-of-band light....
Anyway, there's no need for original research. This stuff is well documented in texts and research papers. Color vision is very well understood, both from a physics and from a biological perspective. A great deal is known about color psychology and perception, too. I'll dig up a good introductory text and get back to you. Nimur (talk) 16:03, 10 April 2012 (UTC)[reply]

Direct stimulation of optic nerves

An interesting aspect of the question above is what is the experience of a subject who has only the green cones stimulated? Under normal circumstances this is impossible to achieve because of the overlap with the response of the red cones. I was wondering if any research has been done on this - perhaps by directly stimulating the optic nerve or some chemical stimulus. Would the result be some kind of super green or impossible colour? SpinningSpark 18:26, 10 April 2012 (UTC)[reply]

triplicate carbon copies' carbon footprint

to settle an argument I'm having. What is "greener" (better for the environment): using one triplicate carbon copy document or 100 pages with print from a modern printer/copier? — Preceding unsigned comment added by 165.212.189.187 (talk) 15:08, 10 April 2012 (UTC)[reply]

This question cannot be answered with surety as the content is unspecified. For example, if there is sparse text on each of 100 pages, the modern printer/copier would use minimal toner/ink and suffer minimal wear, and the carbon footprint of making the paper in each case would be roughly the same. Note that this is not about the actual carbon in either the carbon paper or the modern printer ink or toner - that is not an environmental impact. What you should consider is the carbon-emissions cost of the electrical power used to make the paper and ink, plus the chemicals used to bleach the paper. But if large areas of solid black are on each of 100 pages, the ink/toner consumption would be huge. And the carbon footprint of manufacturing the printer - should this be included? Wickwack120.145.191.82 (talk) 15:19, 10 April 2012 (UTC)[reply]

Ok, regular text on all pages. yes, the footprint to MAKE the paper plus the toner plus the printer vs. that of the carbon copy doc. plus the machine used to make the carbon doc. — Preceding unsigned comment added by 165.212.189.187 (talk) 15:36, 10 April 2012 (UTC)[reply]

It seems to me, as with many questions of this sort, that it's almost impossible to answer in any meaningful way, since the answer will likely depend on the assumptions you make about the particular scenario. For example, to use an extreme scenario, the carbon footprint of paper could easily vary significantly if we compare a case where a company gets the paper delivered directly from the paper factory next door (whose power primarily comes from low-carbon-footprint sources like solar, wind, hydroelectric, nuclear and geothermal, and whose wood pulp comes from nearby sustainably managed plantations) to another company whose office manager buys a few packets of paper every fortnight from a shop 150 km away, going there and back in the SUV and usually buying nothing else (not particularly efficient, but perhaps the manager likes the drive), with the paper itself coming from a factory (where it's produced using power primarily from coal) many hundreds of kilometers away from the shop by truck. Nil Einne (talk) 16:57, 10 April 2012 (UTC)[reply]
You said "one triplicate carbon copy document". Does that mean a single page of text duplicated 3 times ? If so, I'd expect that to be less of a problem. The carbon paper, after all, will likely end up buried in a landfill, not in the atmosphere anytime soon. StuRat (talk) 16:56, 10 April 2012 (UTC)[reply]

Alright that settles it.165.212.189.187 (talk) 18:32, 10 April 2012 (UTC)[reply]

Don't forget the means of producing typed documents back in the olden days (when I learned to type) was on manual typewriters: no electricity needed, just arms like tree trunks! Very green. --TammyMoet (talk) 19:00, 10 April 2012 (UTC)[reply]
Um, I think you would have to include the CO2 excreted by the typist which would probably far outweigh the carbon footprint of a modern printer due to its energy consumption. SpinningSpark 19:29, 10 April 2012 (UTC)[reply]
And if we're concerned about global warming, let's not forget the methane excreted by the typist. :-) StuRat (talk) 21:34, 10 April 2012 (UTC) [reply]
Maybe we should go back to spirit duplicators, rotary duplicators or hectographs. Dunno how "green" they are, but I like purple – and the smell! Mmmmm. {The poster formerly known as 87.81.230.195} 90.197.66.16 (talk) 22:41, 10 April 2012 (UTC)[reply]

Are there more/fewer/equal quantum fluctuations near massive bodies?

As massive gravitational bodies distort space, is it helpful to think of space as being "denser" or "less dense" in areas? Do massive bodies affect quantum fluctuations of the surrounding space? -Goodbye Galaxy (talk) 15:33, 10 April 2012 (UTC)[reply]

No, space doesn't get "more dense" or "less dense". If you're on the inside of some kind of small chamber in freefall, any quantum mechanical experiment you perform within that chamber will behave the same whether you are in deep space or near a strongly gravitating body. By "small" I mean small enough that tidal forces are negligible. I'm also using the strictest sense of the word "freefall", i.e., the chamber isn't being subjected to any air resistance or other external forces. Red Act (talk) 18:54, 10 April 2012 (UTC)[reply]
And if the chamber is subjected to external forces, then the effects of gravity will be completely equivalent to the effects of acceleration (ie. you can't tell if the chamber is sitting on the Earth's surface or if it is out in deep space accelerating at 9.8m/s/s). See equivalence principle. --Tango (talk) 21:44, 10 April 2012 (UTC)[reply]
To an observer far away from a massive body, there would be more quantum fluctuation per unit time on the surface of the body due to time dilation, but as I understand it quantum fluctuations aren't observable since they are by definition virtual particles that exist in less spacetime than a single planck unit. So to answer your question; an unobservable phenomenon would occur at a greater frequency to certain observers...whatever that means. I don't think it's useful to think of space as being more or less dense, but it is useful to think of spacetime as being more or less curved. 112.215.36.179 (talk) 09:02, 11 April 2012 (UTC)[reply]
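For a sense of scale of the time-dilation factor mentioned just above (a minimal sketch in Python using standard constants; whether it has any observable consequence for vacuum fluctuations is exactly the contested point):

import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s

def surface_dilation(mass_kg, radius_m):
    # sqrt(1 - 2GM/(r c^2)): proper time at the surface per unit of a distant observer's time
    return math.sqrt(1 - 2*G*mass_kg/(radius_m*c**2))

print("Earth's surface:      ", surface_dilation(5.972e24, 6.371e6))   # ~1 minus 7e-10
print("neutron star surface: ", surface_dilation(1.989e30, 1.0e4))     # ~0.84 for 1 solar mass, 10 km

So for anything short of a neutron star or black hole the effect is minuscule.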

Identify the fishes

Type 1 and Type 2. --SupernovaExplosion Talk 15:54, 10 April 2012 (UTC)[reply]

Hello, could anyone identify the species, or at least the genus? The photos were taken near river mouth of Digha and the fishes are likely marine. --SupernovaExplosion Talk 04:26, 11 April 2012 (UTC)[reply]

Jungle versus machete

Just having watched a movie where a machete was used to slash through a thick jungle, I find myself wondering how far one could go without having to stop to resharpen the machete. StuRat (talk) 17:04, 10 April 2012 (UTC)[reply]

One kilometre apparently. SpinningSpark 17:29, 10 April 2012 (UTC)[reply]
A trip through the jungle at one kilometer a day would get tedious. And I really liked how the guy in the video was sharpening one side of the blade by stroking toward it, a great way to slice your hand. Edison (talk) 15:41, 11 April 2012 (UTC)[reply]
That's what I was thinking. I imagine you could carry a few spare machetes and sharpen them all at night. If there's a larger group, they could take turns hacking and dulling their machetes and using each other's paths, thus moving more quickly. I imagine you'd get tired quickly doing the hacking, especially in a hot, humid jungle. StuRat (talk) 17:15, 11 April 2012 (UTC)[reply]

Depends how much agent orange you packed. 110.151.252.240 (talk) 18:20, 11 April 2012 (UTC)[reply]

does cured cyanoacrylate penetrate skin if dissolved in acetone?

I've been working with acetone with my bare hands in the lab, using a kimwipe to clean glass bonded with krazy glue. I didn't realise that krazy glue contained a cyano group, which I realise is non-volatile but I worry about how it will be metabolised. 216.197.66.61 (talk) 17:15, 10 April 2012 (UTC)[reply]

Not all cyano groups are created equal. Organically bound cyano groups (called nitriles) are fundamentally chemically distinct from their inorganic cyanide cousins. This is similar to the wide chasm in how hydroxy groups behave in different compounds. In a compound like sodium hydroxide they are highly basic, in a compound like ethanol they are essentially neutral, and in compounds like phenol or boric acid they are acidic. The molecule as a whole needs to be considered to understand its properties, not just a coincidental organization of atoms. In the case of nitriles, organic cyanides like this are very unlikely to produce free cyanide ions; they more commonly and readily undergo nitrile hydrolysis to form either amides or carboxylic acids. Just as ethanol produces no free hydroxide ions in your body, nitriles like cyanoacrylate do not produce free cyanide ions. Cyanoacrylates are not fully inert in the body; there are some toxicity issues, which are discussed in the Wikipedia article, but these are wholly unrelated to cyanide toxicity. Acetone is the recommended solvent for removing cyanoacrylate, and if you are concerned about what may happen if you get it on your hands, use impermeable and unreactive gloves of some sort. --Jayron32 17:42, 10 April 2012 (UTC)[reply]
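For reference, the hydrolysis described above can be summarized schematically (amide first, then carboxylic acid plus ammonia; no free cyanide is released at any step):

R{-}C{\equiv}N \;\xrightarrow{\;H_2O\;}\; R{-}CONH_2 \;\xrightarrow{\;H_2O\;}\; R{-}COOH \,+\, NH_3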
Note that as said in the article, it can actually be used in surgery with good results. Cyanide can be found at low levels in some foods; it really isn't a problem until you get too much. Wnt (talk) 20:37, 10 April 2012 (UTC)[reply]
Isn't hydroxide a worse leaving group (and hence a stronger nucleophile) than cyanide? Can't hydroxide ions perform SN2 attacks on the tetrahedral center? 216.197.66.61 (talk) 02:37, 11 April 2012 (UTC)[reply]
OH- ions do perform SN2-type attacks (not strictly SN2, but similar idea), but you are not going to break the C-C bond. Instead, what you do is progressively substitute C-O bonds for C-N bonds until you convert the nitrile into a carboxylate/carboxamide. The nucleophilic OH- attacks the carbon which is triply bonded to the nitrogen because that is the most electron-deficient carbon. That's exactly how nitrile hydrolysis works (see link above). If you want more, look up "base-mediated (or catalyzed) nitrile hydrolysis mechanism" in google for all the details. --Jayron32 02:49, 11 April 2012 (UTC)[reply]
There's another problem in 216's premise: nucleophile-strength and leaving-group-quality do not correspond to each other as opposite trends. Halides are good nucleophiles and good leaving-groups (and going down that column on the periodic table they even become better at both modes in parallel). The two modes of reaction involve the reverse mechanistic arrow but the cause of the change and the stability/reactivity differences are not due to the same underlying atomic/molecular properties. DMacks (talk) 15:43, 11 April 2012 (UTC)[reply]

Sun dogs

What about sun dogs, the haloes around the hot Arizona sun? There couldn't be any ice crystals, could there? Just saw a beautiful one today; it lasted all morning and is still going on. Tucson, April 10, 2012. — Preceding unsigned comment added by 24.255.30.15 (talk) 17:23, 10 April 2012 (UTC)[reply]

See the Wikipedia article titled Sun dog. --Jayron32 17:33, 10 April 2012 (UTC)[reply]
Note in particular that the ice crystals responsible for sun dogs occur high in the atmosphere, where the temperature is cold and generally independent of ground-level temperatures. — Lomn 17:38, 10 April 2012 (UTC)[reply]
In fact, because the surface air is dry, you have less diffusion, and therefore a better view of the high atmosphere (whatever conditions may exist at altitude, including sundog-causing ice). For this reason, a lot of interesting aeronomy and physics is practicable in the high desert. Tucson is great for amateur astronomers, as is the entire region. The higher you go, the better the view. For example, Lowell Observatory is in Flagstaff; Kirtland (elevation 5,300') is home to an Air Force aeronomy and optics center. You can see some amazing photos on Wikipedia that are possible due to the clear air in the troposphere, enabling a clear view at optical bands all the way to the mesosphere. Nimur (talk) 17:48, 10 April 2012 (UTC)[reply]

Carnot efficiency

Is there mathematical proof for why the efficiency of a heat engine cannot exceed the Carnot efficiency? — Preceding unsigned comment added by Clover345 (talkcontribs) 20:55, 10 April 2012 (UTC)[reply]

Have you looked at Carnot efficiency and Carnot engine? The maximum efficiency is derived directly from the second law of thermodynamics. SpinningSpark 21:08, 10 April 2012 (UTC)[reply]
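In case it helps to see the argument compactly, here is the standard second-law sketch in LaTeX (nothing engine-specific is assumed beyond two reservoirs at fixed temperatures T_H > T_C and a cyclic engine):

% Per cycle the engine absorbs Q_H at T_H, rejects Q_C at T_C, and does work W = Q_H - Q_C.
% Since the engine returns to its initial state, its own entropy change per cycle is zero,
% so the second law applied to the reservoirs gives
\Delta S_{\mathrm{universe}} = -\frac{Q_H}{T_H} + \frac{Q_C}{T_C} \;\ge\; 0
\quad\Longrightarrow\quad \frac{Q_C}{Q_H} \;\ge\; \frac{T_C}{T_H},
% and hence
\eta = \frac{W}{Q_H} = 1 - \frac{Q_C}{Q_H} \;\le\; 1 - \frac{T_C}{T_H} = \eta_{\mathrm{Carnot}},
% with equality only for a reversible (Carnot) cycle. Any engine claiming to beat the
% Carnot efficiency would make the total entropy decrease, violating the second law.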

Why is the south hotter?


As you can see, the south is hotter than the north throughout the year. Why is that? Exx8 (talk) 21:34, 10 April 2012 (UTC)[reply]

I'm replacing this post which was accidentally deleted [10]. There is more land in the north. Large bodies of water take a long time to heat up and cool down, so they tend to reduce extremes of temperature. That means the southern hemisphere doesn't experience as much seasonal change as the northern hemisphere. I'm not sure the southern hemisphere is actually hotter on average. It is hard to tell from that image, but I think the northern summer is hotter than the southern summer and the northern winter is colder than the southern winter, i.e. the north is just more extreme, rather than colder. --Tango (talk) 21:40, 10 April 2012 (UTC)[reply]
More open ocean. Plasmic Physics (talk) 21:41, 10 April 2012 (UTC)[reply]
Can you please expand on your answer a bit, Plasmic Physics? Does it have to do with the higher albedo of snow-covered land?Anonymous.translator (talk) 22:31, 10 April 2012 (UTC)[reply]
I'd suspect it is less due to albedo, and more to do with the higher heat capacity of water, compared to land. Climate science is very complicated and there are many interacting factors, so it's difficult to definitively attribute an observed effect to one single cause. Nimur (talk) 23:18, 10 April 2012 (UTC)[reply]
I think you're all overlooking the obvious. The lower latitudes (closer to the Equator) get more-direct sunlight and thus more insolation. --Trovatore (talk) 23:21, 10 April 2012 (UTC)[reply]
Why does Exx8 say the south is hotter than the north? Is this based solely on the diagram, or on something else? The diagram steps through the year, one month at a time. It clearly shows that the northern hemisphere is hotter than the southern during the northern summer, and that the southern hemisphere is hotter than the northern during the southern summer, but it is debatable whether this diagram shows one hemisphere is hotter than the other on an annual cycle. Dolphin (t) 23:25, 10 April 2012 (UTC)[reply]
I was assuming he meant that the southern part of the Northern Hemisphere is warmer than the northern part. --Trovatore (talk) 23:27, 10 April 2012 (UTC)[reply]
This discussion is about why the southern hemisphere as a whole is hotter than the northern hemisphere as a whole, when averaged over the entire year. Plasmic Physics and Nimur have the right answer, which is also the explanation given in our article on the Southern Hemisphere: "Climates in the Southern Hemisphere overall tend to be slightly milder than those in the Northern Hemisphere at similar latitudes except in the Antarctic which is colder than the Arctic. This is because the Southern Hemisphere has significantly more ocean and much less land. Water heats up and cools down more slowly than land." Anonymous.translator (talk) 00:10, 11 April 2012 (UTC)[reply]
Milder doesn't mean warmer. It just means less variation. 112.215.36.178 (talk) 04:25, 11 April 2012 (UTC)[reply]
(Multiple ECs) I'm not really seeing a South v North hemisphere discrepancy in that graphic. The main difference apparent to me is that the South's temperature zones are a little more stable through the year, which I'd assume is because the land/sea structure is somewhat simpler in the Southern hemisphere, but the areas appear to balance out either side of the equator fairly well. In terms of notionally inhabited areas, there are perhaps more hotter ones in the South than the North, but that's just an artefact of the distribution of landmasses (since we don't think of the oceans as "inhabited"), with more land in the North Temperate zone and more ocean in the South Temperate zone. If anything, I'd guess from the graphic (and this can doubtless be checked with articles elsewhere) that the South averages a little colder because of the polar-positioned Antarctic continent leading to colder Winter extremes – see the 4th map in the Geographical zone article. A contributory factor will be that the Earth's perihelion occurs during the Northern Winter and Southern Summer, which must mildly ameliorate the North's extremes and exacerbate the South's. {The poster formerly known as 87.81.230.195} 90.197.66.16 (talk) 23:29, 10 April 2012 (UTC)[reply]

Look at how cold the North gets in January, and how far toward the equator the cold extends, compared to the Southern Hemisphere in July. There is a massive difference. --122.111.0.88 (talk) 13:21, 11 April 2012 (UTC)[reply]

It's only massive in the sense that it's primarily the land that is changing color, and there is less land in the far southern reaches of the Southern Hemisphere than there is in the North. This is what people mean about the water being the important factor. The water temperature is about the same for both. --Mr.98 (talk) 13:38, 11 April 2012 (UTC)[reply]
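To put a rough number on the heat-capacity point made above, here is a minimal back-of-the-envelope sketch in Python. The layer depths and material properties are assumed round figures (an ocean mixed layer of about 50 m versus a metre or two of soil that the seasonal cycle actually penetrates), so treat the ratio as an order-of-magnitude illustration rather than a climate model:

# Rough comparison of the heat capacity per square metre that the seasonal
# cycle "sees" over ocean versus over land. All inputs are assumed round
# figures for illustration only.

water = {"rho": 1000.0, "cp": 4184.0, "depth": 50.0}   # kg/m^3, J/(kg K), m (mixed layer)
soil  = {"rho": 1600.0, "cp": 800.0,  "depth": 2.0}    # kg/m^3, J/(kg K), m (seasonal penetration)

def areal_heat_capacity(layer):
    """Heat capacity per unit area, in J per square metre per kelvin."""
    return layer["rho"] * layer["cp"] * layer["depth"]

c_ocean = areal_heat_capacity(water)
c_land = areal_heat_capacity(soil)

print(f"Ocean mixed layer: {c_ocean:.2e} J m^-2 K^-1")
print(f"Land active layer: {c_land:.2e} J m^-2 K^-1")
print(f"Ratio (ocean/land): {c_ocean / c_land:.0f}x")

With these assumptions the ocean column stores very roughly 80 times more heat per degree of temperature change, which is why ocean-dominated latitudes swing far less between summer and winter.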

HCl NaOH titration

Does a buffer form in the HCl - NaOH titration? — Preceding unsigned comment added by 150.203.114.37 (talk) 22:55, 10 April 2012 (UTC)[reply]

See Buffer. You need a conjugate pair of a weak acid or a weak base (that is, a weak acid and its conjugate base, or a weak base and its conjugate acid). In order to answer your homework question, you first need to identify what the weak acid or weak base is in your mixture. If you don't have one, you don't have a buffer. --Jayron32 02:43, 11 April 2012 (UTC)[reply]
So the answer is no because HCl is a strong acid, NaOH is a strong base, and the products NaCl and H2O are not acids or bases at all. — Preceding unsigned comment added by 150.203.114.37 (talk) 02:50, 11 April 2012 (UTC)[reply]
Damn skippy. --Jayron32 02:58, 11 April 2012 (UTC)[reply]
Oh yeah? Well to hell with Peter Pan too! DMacks (talk) 15:34, 11 April 2012 (UTC)[reply]

April 11

why phenolphthalein instead of bromocresol green

In the titration between NaOH and HCl, to determine the end point: is it just because phenolphthalein turns purple at approximately pH 7, whereas bromocresol green turns blue at a pH much lower than that? — Preceding unsigned comment added by 150.203.114.37 (talk) 04:19, 11 April 2012 (UTC)[reply]

See pH indicator, especially the big chart. Phenolphthalein doesn't hit the mark very well; you'd usually want to choose an indicator whose color change happens at the equivalence point of the titration, and phenolphthalein changes at much too high a pH. So, hypothetically, your reasoning would work, except that at pH 7, phenolphthalein hasn't changed color yet. However, for a titration like this, the equivalence point happens fast; one drop will generally take you from a very low pH to a very high pH, say from about pH 2 to pH 12. Since you can't actually hit the mark that closely anyway, phenolphthalein has the advantage of being cheap and easier to work with than more vibrantly-colored indicators, which tend to stain your clothes and/or skin when you spill them. It is a good general-purpose indicator for any titration with sodium hydroxide as the titrant, since the equivalence point will always send you to a high pH very quickly. --Jayron32 04:39, 11 April 2012 (UTC)[reply]
If the standard acid or base is so concentrated that one drop changes the pH from 2 to 12 as you say, couldn't you use a weaker solution of the titrant? (It's been a while since I took a chem lab.) Edison (talk) 15:33, 11 April 2012 (UTC)[reply]
It's not that the titrant is too concentrated, it's that titrating a strong acid with a strong base results in a very weakly buffered solution (water is effectively the buffer, and it does a crappy job of it). A shift in pH from 2 to 12 is a bit of an exaggeration, but a swing of four or five pH units with a single drop isn't unreasonable. (Going from pH 5 to pH 9, for instance, means shifting from an excess of just 10 μM of acid to an excess of just 10 μM of base.) TenOfAllTrades(talk) 16:49, 11 April 2012 (UTC)[reply]
There's not really much of an advantage to getting finer resolution anyway if all you want to do is determine the equivalence point. The equivalence point is where the change is most rapid, so if with one drop the pH went from, say, 5 to 9, then you know where the equivalence point was, in terms of titrant added, with pretty good certainty. 110.151.252.240 (talk) 18:26, 11 April 2012 (UTC)[reply]
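To see the size of that jump in numbers, here is a small Python sketch of a strong-acid/strong-base titration curve near the equivalence point. The concentrations and the 0.05 mL drop size are assumed, illustrative values, not anything from the original question:

# pH of a strong acid titrated with a strong base, near the equivalence point.
# Assumed values: 25.00 mL of 0.100 M HCl titrated with 0.100 M NaOH, drop ~0.05 mL.
# Uses the exact charge balance [H+] - [OH-] = (net excess acid), i.e.
# [H+] = (d + sqrt(d^2 + 4*Kw)) / 2, which also behaves sensibly right at equivalence.
from math import sqrt, log10

KW = 1.0e-14
C_ACID, V_ACID = 0.100, 25.00e-3   # mol/L, L
C_BASE = 0.100                     # mol/L
DROP = 0.05e-3                     # L, roughly one drop

def ph_after(v_base):
    """pH after adding v_base litres of NaOH."""
    total_volume = V_ACID + v_base
    d = (C_ACID * V_ACID - C_BASE * v_base) / total_volume  # net excess acid, mol/L
    h = (d + sqrt(d * d + 4.0 * KW)) / 2.0                  # [H+], mol/L
    return -log10(h)

v_equiv = C_ACID * V_ACID / C_BASE
for drops in (-2, -1, 0, 1, 2):
    print(f"{drops:+d} drops from equivalence: pH = {ph_after(v_equiv + drops * DROP):.2f}")

With these numbers the two drops that straddle the equivalence point carry the pH from about 4 to about 10, which is why any indicator that changes somewhere in that range does the job in practice.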

The powder coating process

Hello,

The "The powder coating process" is a very informative article. Thank you.

Now to a question - In the "Part preparation processes and equipment" section, the second and third paragraphs talk about chemical pre-treatment, and the fourth paragraph begins "Another method ------". I'm a bit confused. From the article I'm led to believe that either one does the chemical pre-treatment before applying the powder, OR one uses abrasive blasting, but not both.

From readings that I have done as well as speaking with applicators it seems one first uses an abrasive blasting procedure and then one can ALSO use a chemical treatment (phosphating) for added pretreatment before applying the powder.

By the way, I am having a mild steel railing fabricated, and then want to have an applicator apply a textured polyester-TGIC powder.

Your thoughts and clarifications would be most appreciated.

Richard Krause — Preceding unsigned comment added by 173.79.200.227 (talk) 16:12, 11 April 2012 (UTC)[reply]

For others' reference, the article Richard is talking about is powder coating. -- Finlay McWalterTalk 16:17, 11 April 2012 (UTC)[reply]
It's either or both, depending on the surface and powder composition. What if you had a surface which required a treatment but was covered in rust? You would want to blast it then treat it. 71.215.74.243 (talk) 23:14, 11 April 2012 (UTC)[reply]

Impossible colors

The CIE 1931 color space chromaticity diagram.

The article impossible colors says that we cannot in normal circumstances perceive a color red-green that is similar to both red and green. Indeed, the CIE 1931 color space chromaticity diagram shows that a combination of a pure red hue and a pure green hue (on a line between them in the diagram) will be perceived as yellow or orange. But suppose we fill a screen with intermingled red and green squares. If the squares are very large, the viewer would be aware of seeing the two colors separately and simultaneously. And presumably if the squares are extremely small, the viewer would see yellow (or orange).

Question: If we start from very large squares and then gradually diminish their size, is there a size that results in the viewer perceiving reddish green? That is, do the large red and green squares appear to merge into something reddish green, or does the perception jump from separate reds and greens directly to yellow/orange? Duoduoduo (talk) 17:02, 11 April 2012 (UTC)[reply]

The latter. I agree that yellow does seem to be a different color altogether, versus cyan, which looks like blue-green, and purple, which looks like red-blue. I wonder why. StuRat (talk) 19:53, 11 April 2012 (UTC)[reply]
Some WP:OR: I created a large yellow square in Microsoft Paint, and approached it gradually with a magnifying glass. The visual impression went from pure yellow to a pattern that first looked like yellow vertical stripes on a black background, then like parallel red and green stripes with a black line in between (the blackness of course being the unlit blue pixel components). There was no reddish green at any point while the magnifying glass was in focus. And if it was out of focus, the impression was yellow. --NorwegianBlue talk 20:52, 11 April 2012 (UTC)[reply]
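If anyone wants to try Duoduoduo's version of the experiment on a screen rather than with a magnifying glass, here is a small Python sketch (it assumes the Pillow and NumPy libraries; the image sizes and file names are arbitrary choices) that writes the interleaved red/green test pattern at several square sizes:

# Generate fields of alternating pure-red and pure-green squares at various sizes,
# to be viewed full-screen from a fixed distance. Assumes Pillow and NumPy.
import numpy as np
from PIL import Image

def red_green_field(width=1920, height=1080, square=64):
    ys, xs = np.mgrid[0:height, 0:width]
    checker = ((xs // square + ys // square) % 2).astype(bool)
    img = np.zeros((height, width, 3), dtype=np.uint8)
    img[~checker] = (255, 0, 0)   # red squares
    img[checker] = (0, 255, 0)    # green squares
    return Image.fromarray(img, "RGB")

for size in (256, 64, 16, 4, 1):
    red_green_field(square=size).save(f"red_green_{size}px.png")

Stepping down through the sizes (or stepping back from the screen) is a crude way of probing whether there is any intermediate percept between "separate red and green" and "yellow".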

Why don't they place neutron absorbers under nuclear reactor cores?

Couldn't that reduce the severity of nuclear accidents? Sagittarian Milky Way (talk) 17:06, 11 April 2012 (UTC)[reply]

That is too sensible and would cost extra money. A basin of cobalt or some such would suggest that the engineers are not confident in their design. Next, we would have airlines handing out parachutes to all their passengers before boarding. And if they do start handing out parachutes, I want one of those military multicoloured types that can stand out whether I land in desert, jungle or downtown Harlem. Plus a good pistol and some of those tasty barley-sugar sweets, to keep me going until I get rescued. Oh, and of course a survival pack which includes an iPod would be de rigueur. --Aspro (talk) 17:56, 11 April 2012 (UTC)[reply]
Oh no! Someone used the words "airlines" and "pistol" in the same post! I suggest everyone assume the party escort submission position until DHS agents arrive to resolve the situation.Anonymous.translator (talk) 18:30, 11 April 2012 (UTC)[reply]
I think that untrained passengers jumping out of an airliner experiencing difficulties in mid-flight would be orders of magnitude more risky than assuming the brace position and hoping that the problem is resolved before impact (the engines restart, or the aircraft stops stalling, or whatever). And iPods would interfere with the aircraft's navigation systems (or so the hostesses keep telling me). And I told you...you don't get a gun until you tell me your name. 110.151.252.240 (talk) 18:39, 11 April 2012 (UTC)[reply]

If the core melted and a China Syndrome event began, wouldn't the molten mixture just go right through any neutron absorbant shield? 110.151.252.240 (talk) 18:33, 11 April 2012 (UTC)[reply]

No. Enough neutron absorber (say, in a configuration of erect wedges) would stop the reaction and thus the heat generation.--Aspro (talk) 18:40, 11 April 2012 (UTC)[reply]
My general understanding is that, by the time you have a meltdown and a containment breach, the nuclear fuel is sufficiently dispersed that the chain reaction is effectively finished. This suggestion, then, puts neutron absorbers where it's too late for that particular property to be of any real use. Note also that, under these circumstances, neutron absorbers won't be thermally significant, as heat generation will increasingly be from byproduct decay. Thermal protection is critically important, yes, but the history of nuclear incidents seems to bear out that bulk concrete is pretty good at the job. — Lomn 19:04, 11 April 2012 (UTC)[reply]
Poking around a bit more finds the relevant concept at core catcher, a portion of the plant engineered to catch and cool molten corium from a meltdown. — Lomn 19:15, 11 April 2012 (UTC)[reply]
This is my understanding as well. By the time you're burning through the floor, the heat is the issue, not the chain reactions. I also wonder whether vaporized radioactive cobalt would be a good idea to add to a catastrophic accident. I suspect not. --Mr.98 (talk) 19:17, 11 April 2012 (UTC)[reply]
(ec) Decay heat has a bit more on this. If you SCRAM a reactor that's been in operation for a while, there will be lots of radioisotopes present in addition to the original uranium; some of these isotopes have rather short half lives. If you shut down a normal reactor, the core will continue to output about 7% of its normal operating power due to ongoing decay of these isotopes. (This drops off over time, but it means that a reactor continues to need active cooling for days or weeks after shutdown.)
Late in a meltdown, there isn't a chain reaction to interrupt – the molten corium is probably mostly subcritical by the time it gets to the reactor floor – but the corium is still going to be making its own heat for hours, days, and weeks. As Lomn says, you just have to wait until the radioactive molten goo is mixed with enough cold metal, concrete, and rock to keep from melting any further. TenOfAllTrades(talk) 19:32, 11 April 2012 (UTC)[reply]
According to the article on the Chernobyl disaster, the chain reaction was effectively stopped about two or three seconds after the first explosion. The corium was still over 1600°C 4 days later, and it is still hotter than the ambient temperature right now. 110.151.252.240 (talk) 20:21, 11 April 2012 (UTC)[reply]
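For a rough feel for the decay-heat numbers mentioned above, here is a small Python sketch using the Way-Wigner rule of thumb (a crude textbook approximation, not a plant-specific calculation); the one year of prior full-power operation is an assumed value:

# Decay heat after shutdown via the Way-Wigner approximation:
#   P(t)/P0 ~= 0.0622 * (t**-0.2 - (t + T0)**-0.2)
# with t = seconds since shutdown and T0 = seconds of prior operation.
# Real plants use isotope-by-isotope codes; this is only a ballpark.

T0 = 365 * 24 * 3600.0   # assumed: one year of full-power operation, in seconds

def decay_heat_fraction(t_seconds, t_operating=T0):
    return 0.0622 * (t_seconds ** -0.2 - (t_seconds + t_operating) ** -0.2)

for label, t in [("1 second", 1.0), ("1 hour", 3600.0),
                 ("1 day", 86400.0), ("1 week", 7 * 86400.0)]:
    print(f"{label:>8} after shutdown: {100 * decay_heat_fraction(t):.2f}% of full power")

Even a week out this gives a few tenths of a percent of full power, which for a gigawatt-scale core is still megawatts of heat that has to go somewhere.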
Reading uranium dioxide, why'd they choose that as the best nuclear fuel? It seems like a poor nuclear fuel to me. Is the list of uranium compounds and other fissile substances really so limited that no suitable reactor fuel could be found with a melting point lower than 2865 °C, which would be easier to contain with structural materials if melted? Then you might be able to catch the stuff with a neutron-killing crucible/isolator. If not, couldn't they have at least found a fuel with a higher thermal conductivity? Its use is to make heat, for crying out loud, and in a setting where hotspots are dangerous.
I wonder if they considered just allowing the molten fuel to mix with and melt some other material, like lead, beneath the reactor core, to both dilute and shield it. StuRat (talk) 22:11, 11 April 2012 (UTC)[reply]
The actual rods are a very complex mixture of the uranium dioxide fuel, compounds containing daughter isotopes, and other materials. Having a lower melting point is no advantage as far as I can see; if the fuel rods melted at a lower temperature, then the reactor's operating temperature would be more constrained. Why do you think it's better to be able to catch molten fuel rods than to just keep them solid and have them not melt in the first place? And allowing the hot, lava-like melt to come into contact with lead is not a great idea either; lead boils at 1749°C, so you'd have a radioactive mixture of very hot gaseous metals building up pressure inside whatever is left of your containment. This effect contributed to the disaster at Chernobyl, but with zinc instead of lead coming into contact with the corium. 110.151.252.240 (talk) 22:23, 11 April 2012 (UTC)[reply]
I think the idea is that the rods will continue to heat up until they do melt, whatever that melting temp is, in a disaster like Chernobyl. So, if they are going to melt in any event, do we want that to happen at a lower or higher temp? StuRat (talk) 22:28, 11 April 2012 (UTC)[reply]
Higher, so that they're not as volatile. 110.151.252.240 (talk) 22:33, 11 April 2012 (UTC)[reply]
Is something at its melting point of 1000°C necessarily more volatile than something at its melting point of 2000°C? StuRat (talk) 22:56, 11 April 2012 (UTC)[reply]
No, but assuming we're still using some compound of uranium, the nuclear reactions are all still the same, so the same amount of energy is released and heats the material by the same amount. So if it takes less heat to get to the melting point, then there's more heat to push it closer to the boiling point. Just doing some ballpark maths, the material in the Chernobyl incident could have boiled ~20 tonnes of lead (my assumptions were: that heat transfer from the corium to the lead heat sink is uniform across the entire body, that the corium mixture is at the melting point of uranium dioxide, that all of the corium has the heat capacity of uranium dioxide, and that the entire mass comes to thermal equilibrium at the boiling point of lead). 110.151.252.240 (talk) 23:06, 11 April 2012 (UTC)[reply]
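For anyone who wants to redo that ballpark, the energy-balance skeleton looks like the Python sketch below. Every input is an assumed round figure (the corium mass in particular dominates the answer), so the output is meant to show how the estimate scales with the assumptions, not to reproduce the ~20 tonne figure exactly:

# Ballpark: how much lead could a mass of hot corium bring up to lead's boiling point?
# All inputs are assumed round figures for illustration; change them freely.

CORIUM_MASS = 100e3      # kg, assumed corium mass (this choice dominates the result)
CP_CORIUM = 300.0        # J/(kg K), rough high-temperature value for a UO2-rich melt
T_CORIUM = 2865.0        # deg C, melting point of UO2, taken as the starting temperature

T_AMBIENT = 25.0                           # deg C
PB_MELT, PB_BOIL = 327.0, 1749.0           # deg C
CP_PB_SOLID, CP_PB_LIQUID = 130.0, 140.0   # J/(kg K), rough values
L_FUSION_PB = 23e3                         # J/kg

# Heat released by the corium cooling from its starting temperature down to
# lead's boiling point (below that it can no longer boil lead).
heat_available = CORIUM_MASS * CP_CORIUM * (T_CORIUM - PB_BOIL)

# Heat needed to take 1 kg of lead from ambient up to its boiling point.
heat_per_kg_lead = (CP_PB_SOLID * (PB_MELT - T_AMBIENT)
                    + L_FUSION_PB
                    + CP_PB_LIQUID * (PB_BOIL - PB_MELT))

print(f"Lead raised to its boiling point: {heat_available / heat_per_kg_lead / 1000:.0f} tonnes")

The answer scales almost linearly with the assumed corium mass (and decay heat keeps adding energy for days afterwards), which is why ballparks of this kind can easily differ by an order of magnitude.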

Seems pointless to try to think of a way to do it better (well, unless you're a nuclear engineer). However, I did read in The Economist that many designs are as old as dirt, but anything in the industry happens so dang slowly and expensively that they don't even bother. Sagittarian Milky Way (talk) 21:54, 11 April 2012 (UTC)[reply]

Use of ANOVA/F-tests with count data

Hi all. I've done some reading recently for a lit review (etc.) and I keep coming across studies that use ANOVAs for discrete data. For example, someone gives experimental and control subjects the same 20 words of French to learn, then tests them after, say, 10 minutes, to see if there is any difference between the two groups. They almost always seem to use an ANOVA for this. The scores are discrete, so they can't really be from a normal distribution, can they? Kasahara (2011) is a case in point, as is Laufer and Shmueli (1997). Is this approach justified? Or is it because they don't know what they are doing, and because non-parametric statistics or robust statistics confuse people? IBE (talk) 19:47, 11 April 2012 (UTC)[reply]

Aggregations of discrete scores often approximate normal distributions: a total score out of 20 is a sum of many item-level outcomes, so it tends toward a bell shape by the central limit theorem, and the F-test is reasonably robust to mild non-normality when group sizes are similar. 71.215.74.243 (talk) 23:17, 11 April 2012 (UTC)[reply]
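A quick way to convince yourself either way is to simulate it. The sketch below (assuming NumPy and SciPy are available, with made-up group sizes and success probabilities) draws discrete test scores out of 20 and compares the ANOVA F-test with a rank-based alternative:

# Simulated vocabulary-test scores out of 20 for two groups, analysed both ways.
# Group sizes, success probabilities and the effect size are made-up values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.binomial(n=20, p=0.45, size=30)     # scores out of 20
treatment = rng.binomial(n=20, p=0.60, size=30)

f_stat, p_anova = stats.f_oneway(control, treatment)          # with two groups this equals a t-test
u_stat, p_mwu = stats.mannwhitneyu(control, treatment,
                                   alternative="two-sided")   # rank-based alternative

print(f"ANOVA:        F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {p_mwu:.4f}")

Because a score out of 20 is a sum of many item-level outcomes, its distribution is usually close enough to normal that the two tests agree; re-running the simulation with scores piled up near 0 or 20 is where the ANOVA assumptions start to creak.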

While cleaning out my medicine cabinet I found an old bottle of magnesium citrate: oral solution, saline laxative, cherry flavor. Since it expired over 6 years ago and had a white crystalline precipitate at the bottom, I decided to dump it. I thought the precipitate was attached to the bottom, but when I poured it down the drain it broke up and fell in, where it's now lodged. My question is: do I need to fish it out, or is this precipitate water-soluble, so that it will eventually dissolve?

Here are the ingredients:

Magnesium citrate, 1.745 g per fluid ounce
Cherry flavor
Citric acid
FD&C red #40 (I figure this can't be in the precipitate, since it's white) 
Potassium bicarbonate
Sodium saccharin
Purified water

I tasted a bit of the precipitate which didn't fall down the drain. It was sweet. So, I imagine this means there's some of the saccharin in it. StuRat (talk) 20:19, 11 April 2012 (UTC)[reply]

It might take a while to dissolve. Have you tried running hot water for 10-15 minutes? 71.215.74.243 (talk) 23:24, 11 April 2012 (UTC)[reply]

Gauss' principle of least constraint

Apologies if this is a stupid question, but I haven't found any clear discussion on this matter.

I have read that Gauss' principle of least constraint is the most general variational principle of Classical mechanics, and more general than Newton's laws, but I still don't understand what this means. For the sake of convenience, take Newton's laws to say nothing other than that momentum is always conserved (I am aware of other interpretations, especially regarding the first law). I ask

  1. Does Gauss' principle imply Newton's laws? By this I mean can Newton's laws be derived from Gauss' principle?
    1. If yes, what does Gauss' principle also say that Newton's laws do not?
    2. If no, is Gauss' principle a weaker statement, and if so, how is it weaker?
  2. To what extent is there experimental data to support Gauss' principle, insofar as it differs from Newton's laws?

Thanks.--Leon (talk) 20:23, 11 April 2012 (UTC)[reply]

Yes, Gauss' principle implies Newton's laws, and it also provides a practical way to handle constrained and thermostatted systems that is awkward to set up from Newton's laws directly. See [11]. To the extent that they make the same predictions, they are supported by the same empirical evidence. 71.215.74.243 (talk) 23:32, 11 April 2012 (UTC)[reply]
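For concreteness, the principle itself fits in two lines (sketched in LaTeX below; the factor of one half is just a convention):

% Among all accelerations compatible with the constraints at the instantaneous
% positions and velocities, the actual motion minimises the "constraint"
Z = \tfrac{1}{2} \sum_{i=1}^{N} m_i \left\lVert \ddot{\mathbf r}_i - \frac{\mathbf F_i}{m_i} \right\rVert^{2} .
% With no constraints the minimum is Z = 0, attained exactly when
m_i \, \ddot{\mathbf r}_i = \mathbf F_i \quad \text{for every particle},
% which is Newton's second law. Constraints merely restrict the set of accelerations
% over which Z is minimised, which is why the principle is at least as general.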

furnace fumes

I had a 95% efficient gas furnace installed, and they vented it out the side of the house on the first story. When I go into the backyard while the furnace is on, you can smell very noxious fumes coming out. They smell like chlorine, and breathing them makes you nauseous. I'm wondering what is in these fumes and whether they are harmful. — Preceding unsigned comment added by 64.38.226.84 (talk) 21:31, 11 April 2012 (UTC)[reply]

1) Venting the furnace out the side of the house sounds very wrong to me. Why can't they vent out the roof, as usual? Please tell me they didn't use the vent from the clothes dryer.
2) The smell could be some component, like new PVC piping or a coating/film on metal ductwork, giving off fumes. If so, hopefully it will go away with time.
3) It could also be unburned gas. That doesn't smell like chlorine, usually, though. If that's what you're smelling, something is seriously wrong and there's a potential for an explosion. I'd call the gas company to ensure that this isn't the issue, then call back the furnace installers and get them to fix it. Note that high efficiency furnaces suffer from a problem (the exhaust cools down so much it causes water vapor from the burned gas to condense). If they haven't addressed this properly, it could drip down and extinguish some of the flames, causing unburned gas to be vented.
4) This sounds potentially dangerous, so I'd turn off the furnace (except for when demonstrating the problem) and use electrical space heaters until you get the furnace fixed. If you're in the Northern Hemisphere, heating requirements this time of year should be minimal, so this won't cost as much as it would in January. StuRat (talk) 21:39, 11 April 2012 (UTC)[reply]
What type of gas are you burning with it? What country are you in? 110.151.252.240 (talk) 21:45, 11 April 2012 (UTC)[reply]
This thing about venting furnaces out the side of the house is a disease in Wisconsin - haven't seen it in some other states. If you think the fumes are bad you should just hear the noise of the contraption, like a giant idiot demon with a very bad musical instrument. I hope your reaction indicates these things are indeed unheard of in the First World. It could be undesirable products of either methane or propane, I'm not sure which, but I don't think any other gasses are used. I know methane can create formaldehyde gas, which has a strong smell very vaguely akin to chlorine (it's small and oxidizing, anyway). Propane can produce carbon monoxide. For either it's not uncommon for the pieces of plastic PVC pipe to come unglued inside the house, or for there to be some hole in the wall near the vent, etc., allowing fumes to come back into the building. Basically yes, as dumb as it sounds, only more so. Wnt (talk) 21:57, 11 April 2012 (UTC)[reply]
Everyone I know seems to think that formaldehyde smells like chlorine, but I think it smells like almonds (and I think cyanide just smells god-awful and not like almonds at all). If you're getting incomplete combustion of methane, then your smell could very well be formaldehyde, but that's entirely inconsistent with the furnace being highly efficient. Formaldehyde isn't nice to breathe in, but it might just be indicating a far worse problem; if you are getting incomplete combustion, then carbon monoxide is almost certainly present as well, and that stuff is much more toxic but is odourless. It's unlikely you'd get any ill effects if you only breathe it in when you're outside, though. Leaks inside the house could be deadly if they go unnoticed. 110.151.252.240 (talk) 22:07, 11 April 2012 (UTC)[reply]
Hmmm, almonds smell like cyanide, or vice versa... maybe you're smelling more of the carbon and they're smelling more of the oxygen? Wnt (talk) 22:18, 11 April 2012 (UTC)[reply]
I'm wondering if the OP's description is that of Nox. It can be a bit unpleasant and acidic if you're only used to fresh air. --Aspro (talk) 22:17, 11 April 2012 (UTC)[reply]
Do you mean nitrogen oxides? Wnt (talk) 22:18, 11 April 2012 (UTC)[reply]
Specifically, mono-nitrogen oxides, abbreviated NOx, not Nox. StuRat (talk) 22:19, 11 April 2012 (UTC)[reply]
Ah, Somebody noxed not only what I meant but can remember to keep their finger on the shift key too  :-) --Aspro (talk) 22:29, 11 April 2012 (UTC)[reply]
The problem with trying to cure it with a catalytic converter is that it might cause 'back pressure' and the gas furnace might not operate properly – and perhaps even allow CO to enter the home. In the old-days when we wanted to keep warm, we just rubbed two boy scouts together but I guess that's not allowed anymore.--Aspro (talk) 22:38, 11 April 2012 (UTC)[reply]
Which could be solved with an exhaust fan, but now we have extra complexity and monthly expenses, so why not just use the safe, older design and give up on so-called "high efficiency" furnaces? StuRat (talk) 22:51, 11 April 2012 (UTC) [reply]

science

How can I measure the level of a liquid such as water using a co-axial capacitor? How is the change in capacitance measured and fed to a digital device such as an AVR kit to display the output on an LCD board? Which circuit is the most suitable for determining the level of the water? — Preceding unsigned comment added by Irishgut3 (talkcontribs) 23:07, 11 April 2012 (UTC)[reply]

Sounds a bit like a homework question, but we do have an article: Level_sensor#Capacitance. --Aspro (talk) 23:24, 11 April 2012 (UTC)[reply]
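As a starting point (not a worked homework answer): for a coaxial probe of length L, inner radius a and outer radius b, immersed to depth h in a liquid of relative permittivity eps_r (roughly 80 for water), the liquid-filled and air-filled sections act as capacitors in parallel, so the total capacitance is linear in h. A minimal Python sketch of that conversion, with made-up probe dimensions:

# Coaxial capacitive level probe: C(h) = (2*pi*eps0 / ln(b/a)) * (eps_r*h + (L - h)),
# so the measured capacitance is linear in the liquid level h.
# The probe dimensions below are made-up example values.
from math import pi, log

EPS0 = 8.854e-12       # F/m
EPS_R = 80.0           # relative permittivity of water (approximate)
A, B = 2.0e-3, 6.0e-3  # inner and outer radii, m
L = 0.50               # probe length, m

K = 2 * pi * EPS0 / log(B / A)   # capacitance per metre of an air-filled section

def capacitance(h):
    """Capacitance in farads for liquid level h (metres)."""
    return K * (EPS_R * h + (L - h))

def level_from_capacitance(c):
    """Invert the linear relation to recover the level from a measured capacitance."""
    return (c / K - L) / (EPS_R - 1)

c = capacitance(0.30)
print(f"C at 30 cm: {c * 1e12:.0f} pF -> recovered level {level_from_capacitance(c) * 100:.1f} cm")

On the hardware side, one common approach is to make the probe part of an RC timing or oscillator circuit and have the AVR measure the charge time or count the frequency with a timer, then map that reading back to h with a relation like the one above; for a conductive liquid like tap water the probe normally needs an insulating coating. The circuit choice is where most of the design effort goes, so the Level_sensor#Capacitance article is a good place to start.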