Wikipedia:Reference desk/Science
:: So this universe does whatever it does - the other universe does whatever it does - and whether our definitions for the symbols '2', '+', '=' and '4' are applicable to apples placed on tables in that other universe (or to the way speeds are combined) is a matter that we can't answer. But our definitions still apply - you can't invalidate something that's axiomatic in the system of mathematics you choose to use.
:: Hence, the answer is a clear "yes" - 2+2=4 everywhere - because we happen to have defined it that way. [[User:SteveBaker|SteveBaker]] ([[User talk:SteveBaker|talk]]) 22:27, 5 June 2009 (UTC)

:I'm going to expand upon what's been said above, because hopefully it'll sort out some confusion. Back off a bit - say we're in a parallel universe which doesn't have the same laws of math. First let's start with a quantity. Names are arbitrary, so let's call it "'''@'''". That's going to be boring by itself, so we'll make another quantity, and we'll call it "'''!'''". Independent quantities by themselves aren't interesting, so let's add an [[Operation (mathematics)|operation]]: "'''%'''". How does this operation behave? It might be nice to have an identity, that is, if we 'percent' a quantity with the identity, we get that quantity back. It's arbitrary which one we choose, as so far nothing distinguishes them, so let's pick ''''@''''. So we have ''''@ % @'''' => ''''@'''' and ''''! % @'''' => ''''!''''. Okay, but now what about ''''! % !''''? Well, we could say that ''''! % !'''' => ''''@'''', or even ''''! % !'''' => ''''!'''', but we can also introduce a new symbol, so let's do that and call it ''''&''''. So ''''&'''' is defined as ''''! % !''''. Now what is ''''& % !''''? Keeping things open-ended, we define it to be ''''$''''. And ''''$ % !'''' => ''''#''''. Now, let's figure out what ''''& % &'''' is. Since ''''&'''' is the same as ''''! % !'''', we see that ''''& % &'''' is ''''(! % !) % (! % !)''''. If we say that the grouping of 'percent' operations doesn't matter ([[associative property]]), we can rewrite that as ''''((! % !) % !) % !'''', or ''''(& % !) % !'''' or ''''$ % !'''', which we defined earlier as ''''#''''. As long as ''''! % !'''' => ''''&'''', ''''& % !'''' => ''''$'''', ''''$ % !'''' => ''''#'''', and the ''''%'''' operation has the associative property, ''''& % &'''' => ''''#''''. Hopefully you can see where I'm going with this; the names I gave them were arbitrary. I could just as easily have said 0 instead of @, 1 for !, 2/&, 3/$, 4/# and +/%, and you have your situation. If we have defined 1+1=2, 2+1=3, 3+1=4, and addition as associative, then 2+2=4. 
:If 2+2=5, then either 3+1=5, 1+1≠2, or addition is not associative. We certainly could call 3+1 '5' if we wanted, but it would behave exactly the same way 4 does now. It would be the equivalent of writing 'IV' instead of '4' - the name changes, but the properties stay the same. Conversely, if addition were non-associative, why call it "addition"? Associativity is part of what defines addition - if you change that, you have something else.
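The symbol game above can be sketched in a few lines of code. This is only an illustration: the symbols, the identity, and the successor table are exactly the arbitrary choices made in the discussion; the function name and code structure are mine.

```python
# Successor table from the discussion: x % '!' for each defined symbol.
# '@' plays the role of 0, '!' of 1, '&' of 2, '$' of 3, '#' of 4.
# (The table stops at '#', so sums past it are simply not defined here.)
SUCC = {'@': '!', '!': '&', '&': '$', '$': '#'}
PRED = {v: k for k, v in SUCC.items()}

def percent(a, b):
    """Combine two quantities with '%', using only the identity rule,
    the successor table, and associativity to peel '!'s off b."""
    if b == '@':                 # '@' is the identity: x % @ => x
        return a
    # Associativity: a % b = (a % (predecessor of b)) % '!'
    return SUCC[percent(a, PRED[b])]

print(percent('&', '&'))  # '&' is "2", so this is 2 + 2 -> '#', i.e. 4
```

Renaming `'@', '!', '&', '$', '#'` to `0, 1, 2, 3, 4` changes nothing in the logic, which is the point of the argument: 2+2=4 follows from the definitions, whatever the symbols are called.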

:"Non-standard" arithmetics are used all the time, however. That case earlier, where ''''!%!''''=>''''@'''' (er, 1+1=0)? That's [[modulo arithmetic|modulo 2 arithmetic]]. We also have the case where ''''!%!''''=>''''!'''', except we call that multiplication, and we substitute 1 for @ and 0 for ! instead of vice versa. We also frequently encounter physical situations where 2+2≠4. Take, for example, adding two liters of water to two liters of sand: 2+2≠4 in that case. Mix 2 cups of vinegar with 2 cups of baking soda, and the result takes up much more than 4 cups. Physical reality doesn't have to match abstract mathematical constructs - however, instead of redefining 2+2=3 because the sand and water don't measure 4 liters afterward, or saying that 2+2=8 because of the carbon dioxide gas evolved from the vinegar and baking soda, we leave mathematics as a "pure" ideal, with 2+2=4, and recognize that ordinary addition doesn't apply to those situations. Likewise, the fact that the interior angles of a triangle don't add up to 180° on the surface of the earth didn't invalidate Euclidean geometry - it still exists theoretically; we just realize that when doing surveying, we need to use a [[non-Euclidean geometry]]. -- [[Special:Contributions/128.104.112.106|128.104.112.106]] ([[User talk:128.104.112.106|talk]]) 01:28, 6 June 2009 (UTC)
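The two "non-standard" variants mentioned above can be made concrete in a couple of lines (a sketch only; the 0/1 names follow the renaming suggested in the discussion):

```python
# The '! % !' => '@' case (i.e. 1 + 1 = 0) is addition modulo 2,
# while the '! % !' => '!' case behaves like multiplication on {0, 1}
# once the roles of the two symbols are swapped.
def add_mod2(a, b):
    return (a + b) % 2

def mult(a, b):
    return a * b

print(add_mod2(1, 1))  # 0 -- "two" wraps back to the identity
print(mult(1, 1))      # 1 -- here '!' absorbs itself
```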


== Peacock and Peahen ==

Revision as of 01:28, 6 June 2009


May 31

Determining the Concentration of a Solution

Hello. I want to find the concentration of a copper(II) sulfate solution, using the most accurate procedure. Should I evaporate all the solvent and water of hydration, measure the mass, and calculate the amount of anhydrous CuSO4? Or should I add magnesium until no more can react, filter the residue, dry it, measure the mass, and calculate the amount of copper deposited? If I choose to conduct a chemical change, I must consider percentage yield as a source of error. However, I cannot see anything wrong with a physical change. My teacher likes my physical-change idea but is nevertheless more comfortable with a chemical change, and I do not know why. Thanks in advance. --Mayfare (talk) 00:24, 31 May 2009 (UTC)[reply]

The deal with the evaporation method is that evaporation at room temperature will likely only yield the hydrate; and the additional heat needed to produce the anhydrous salt could also cause some decomposition of the sulfate to the oxide + SO3 gas. Plus, the anhydrous salt is likely so hygroscopic that it may start to rehydrate too rapidly to get an accurate mass. How is this for a third option: since barium sulfate is both insoluble in water and does not form hydrate crystals like copper sulfate does, why not add excess BaCl2 or Ba(NO3)2 to the copper sulfate solution, filter the precipitate, and mass that? What do you think of that one? --Jayron32.talk.contribs 00:40, 31 May 2009 (UTC)[reply]
Maybe you could use an osmometer. --JWSurf (talk) 01:03, 31 May 2009 (UTC)[reply]
An osmometer can tell you the concentration of all solutes, so it could be used, but only if it is certain that CuSO4 is the only solute (which is also true of the evaporation method). Gravimetric analysis was how we did this in first-year undergraduate experiments - either by adding a solvent that precipitates the salt, or by reacting it as in the question or Jayron's reply. Water of hydration is removed by drying the solid for a longer period and weighing the sample periodically (once the weight stabilises, it is assumed to be dry - drying temperatures are usually not high enough to cause reactions in stable salts). The question of the yield of any precipitating reaction is basically ignored by assuming 100% yield if an excess of reactant is used. YobMod 10:24, 31 May 2009 (UTC)[reply]
I think if you know that there's pretty much no other solute, you should just evaporate and measure the mass, without heating to dehydrate, then calculate the concentration from there. That way you don't have to worry about decomposing the solute. I think you can safely assume that the solute will be left fully hydrated if you evaporate it at room temperature. 209.148.195.177 (talk) 10:50, 31 May 2009 (UTC)[reply]
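As a rough sketch of how the arithmetic works out for the barium sulfate route suggested above: each mole of precipitated BaSO4 corresponds to one mole of sulfate, and hence of CuSO4, in the sample. The sample mass and volume below are invented for illustration; molar masses are standard atomic weights.

```python
# Illustrative gravimetric calculation for the BaSO4 precipitation route.
# Assumes excess Ba2+ (so ~100% of the sulfate precipitates) and that
# CuSO4 is the only sulfate source in the solution.
M_BaSO4 = 137.33 + 32.06 + 4 * 16.00   # g/mol, from standard atomic weights

def cuso4_molarity(precipitate_g, sample_volume_L):
    """Molarity of CuSO4: mol CuSO4 == mol BaSO4 collected."""
    mol_BaSO4 = precipitate_g / M_BaSO4
    return mol_BaSO4 / sample_volume_L

# e.g. 0.583 g of dried BaSO4 recovered from a 25.0 mL sample:
print(round(cuso4_molarity(0.583, 0.0250), 3))  # ~0.1 mol/L
```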

Particle interactions

While looking at Feynman diagrams, I noticed that all interactions between particles seem to fit into three categories: a vertex with two fermions and a boson, one with three bosons, and one with four bosons. Why are there no other possibilities? I don't think any laws of physics would be violated by a four-edge vertex where, say, a gluon and a quark interact to form a new gluon and quark (though the same thing could be accomplished by two successive interactions). However, there are three-Higgs and four-Higgs vertices. Why are Higgs bosons allowed to compress two steps into one, but quarks and gluons cannot? Why are there no five- or even six-Higgs interactions? Thanks, *Max* (talk) 04:44, 31 May 2009 (UTC).[reply]

This is an excellent question and the answer is a bit technical, so please bear with me. The best way to understand it is through dimensional analysis. This analysis is particularly easy if you choose units such that both the speed of light and Planck's constant are dimensionless (no units). That choice leaves only one unit unspecified, and that unit is usually chosen to be a unit of energy given in electronvolts (eV). With that choice of units the Lagrangian density has units of eV^4. Usually this is simply expressed by saying that the Lagrangian density has dimension 4. Now if you look at the kinetic terms (the ones with partial derivatives of fields with respect to space-time coordinates) in the Lagrangian, you will find that boson fields have dimension 1 while fermion fields have dimension 3/2. It is easy to see now that all the interaction terms actually present in the Lagrangian are products of fields with total dimension 4 or less, while the hypothetical term you described with two quarks and two gluons would have dimension 5. That means that all the terms actually present in the Lagrangian have coefficients attached to them with non-negative dimensions, while the coefficient of the hypothetical term you described would have dimension -1. The rule of thumb is: no coefficients with negative dimension are allowed. Why? It turns out that these terms are non-renormalizable. A simple (if a bit too naive) way to understand that is to realize that those theories are actually effective (low-energy) approximations to an (as yet unknown) theory, and that the natural energy scale for the real theory is probably around the grand unification (GUT) energy scale or higher. That effectively suppresses those non-renormalizable terms by powers of M_W/M_GUT, where M_W is the weak scale (the scale of the effective theory) and M_GUT is the GUT scale, with the power set by the (absolute value of the) dimension of the coefficient. Those terms of the Lagrangian become effectively negligible. Dauto (talk) 15:27, 31 May 2009 (UTC)[reply]
Thank you; your answer was very clear and helpful. *Max* (talk) 00:08, 1 June 2009 (UTC).[reply]
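The power-counting rule in the answer above can be sketched numerically. This is a simplified illustration of the counting only (field content plus an optional derivative count); the function names are mine, not standard physics tooling.

```python
# In units where c = hbar = 1: boson fields carry mass dimension 1,
# fermion fields 3/2, and each derivative adds 1. A term is
# (power-counting) renormalizable only if its total dimension <= 4,
# i.e. its coefficient has non-negative dimension 4 - dim.
def term_dimension(bosons=0, fermions=0, derivatives=0):
    return bosons * 1.0 + fermions * 1.5 + derivatives * 1.0

def renormalizable(**fields):
    return term_dimension(**fields) <= 4

print(renormalizable(fermions=2, bosons=1))  # fermion-fermion-boson vertex: True
print(renormalizable(bosons=4))              # four-boson vertex: True
print(renormalizable(fermions=2, bosons=2))  # two quarks + two gluons: False (dim 5)
```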

New islands found in aerial/satellite photos?

Have aerial or satellite photos ever revealed any previously undiscovered islands? NeonMerlin 05:47, 31 May 2009 (UTC)[reply]

Yes. Landsat Island is the only example of discovery by satellite photo. Plenty of islands were first revealed by aerial photograph, including, for example, many of the 30,000 islands in Lake Huron. An aerial survey of the Georgian Bay Islands National Park area, carried out in the 1920s, "discovered" many new islands. Rockpocket 06:19, 31 May 2009 (UTC)[reply]
Until a satellite survey in 2002 found that their country has 18,306 islands, Indonesians could only estimate how many islands there are; 8,844 have been named and 922 are permanently inhabited. Or are there more when the tide goes down? Cuddlyable3 (talk) 10:35, 31 May 2009 (UTC)

Relationship b/w focal length and magnification of a lens and of a mirror

This question was given in our summer assignment. (OK, I know I am not supposed to ask homework questions, but I have not been able to crack this one for 2 weeks!) What is the relationship between the focal length and magnification of a lens and of a mirror? The mirror formula is 1/u + 1/v = 1/f, and m = -v/u. Putting v = -mu into the mirror formula gives 1/u - 1/(mu) = 1/f, or (1/u)(1 - 1/m) = 1/f. Now is m directly proportional to f or inversely proportional? (The same procedure can be done for the lens formula.) shanu 07:04, 31 May 2009 (UTC) —Preceding unsigned comment added by Rohit..god does not exist (talkcontribs)

Continuing from where you left off: (1/u)(1 - 1/m) = 1/f gives 1 - 1/m = u/f, so m = 1/(1 - u/f) = f/(f - u). Looking at this relation, it is clear that m is neither directly nor inversely proportional to f. There is a discontinuity at f = u: for a fixed positive u, m diverges to minus infinity as f approaches u from below and returns from plus infinity just above it. Similarly you can do an analysis for the lens formula. Rkr1991 (talk) 08:01, 31 May 2009 (UTC)[reply]
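For concreteness, solving the relation (1/u)(1 - 1/m) = 1/f for m gives m = f/(f - u), and a quick numeric sweep shows the behaviour near f = u. The object distance u = 10 is an arbitrary sample value, not from the question; sign conventions follow the formulas as written there.

```python
# Magnification of a mirror as a function of focal length, from
# 1/u + 1/v = 1/f combined with m = -v/u, which rearranges to
# m = f / (f - u).
def magnification(f, u):
    return f / (f - u)

u = 10.0
for f in (2.0, 5.0, 9.0, 11.0, 20.0):
    print(f, magnification(f, u))
# m is large and negative just below f = u, large and positive just
# above it -- neither directly nor inversely proportional to f.
```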

Basic advantage of hybrid cars

I don't understand how hybrid vehicles can make sense. Okay, from what I understand, 90% efficiency for the generator is pretty good, and so is 90% efficiency for the motor at converting electricity to mechanical energy. So your generator wastes about 10% of your energy, and your motor wastes another 10%, just to get the engine's mechanical energy converted/stored and returned as mechanical energy. Also, most early and current production models DO NOT take advantage of solar panels or plug-in charging as an extra source of electricity. So that leaves what, just regenerative braking to recover more than that ~20% of the energy lost? What about highway driving, when regenerative braking is not going to be used? Didn't some models not even include regenerative braking? So where do the energy savings come from? It sounds like a bunch of converting the energy into different forms for no reason. Am I missing something here? Is there ANY source of electricity besides the engine/generator itself and regenerative braking in the basic hybrid vehicle? I don't think popular models like the Prius and Insight come with any solar panels or plug-in options. How do they get better mileage? Just by compromising power? If so, why not compromise the power and still use just a gas engine? I have one possible explanation, but I'm not sure - does the electric part have a higher PEAK power, allowing a smaller engine to keep running at a steady power while the electric motor provides the peak power? That doesn't work, however, if you need to sustain maximum power, such as when the car is loaded and climbing a mountainside or long hill at high speed. 209.148.195.177 (talk) 10:45, 31 May 2009 (UTC)[reply]

Actually, you hit upon one of the disadvantages of hybrids. Regenerative braking means that hybrids actually get BETTER mileage in stop-and-go driving than in highway driving. If you live in a place where your commute consists mostly of freeway driving, then you will not get much advantage from a hybrid. Basically, a hybrid gives its biggest advantage at the low end of the gas-mileage range, bringing up what is the most inefficient part of one's driving. They do nothing to raise the "high end" of the mileage range; except that they are usually small, light cars with small engines, so they tend to use less gas than, say, a Ford Expedition might. From an emissions point of view, hybrids are usually better than the average car regardless of what style of driving you do; but from an economic one, if you don't do lots of stop-and-go driving it will take a long time to recoup the added expense of buying a hybrid in terms of fuel savings. --Jayron32.talk.contribs 12:04, 31 May 2009 (UTC)[reply]
This question has come up recently. See Steve Baker's answer here. Dauto (talk) 12:45, 31 May 2009 (UTC)[reply]
I'll cut/paste that reply here - because I need to expand upon it:
The reason the Prius hybrid saves energy is a three-way thing:
  • Gasoline powered engines work most efficiently at one particular speed - perhaps 3,000 rpm. If you push them harder than that - or less hard - then they need more gasoline per unit of energy they deliver. With a normal car, the engine rpm you need depends on what gear you are in and on what speed you are going - but it's only rarely turning the engine at the best speed. In the Prius, the engine only ever runs at this perfect speed - and when the battery is fully charged, the engine shuts off and stays shut off until the battery drains down and needs a recharge.
  • When you push on the brake in a normal car, you are wasting the kinetic energy in the motion of the car to wear down the brake pads and heat up the disks. With a hybrid, the electric motors that normally power the wheels can be used 'backwards' as generators - so you slow the car down by (effectively) using the battery charger to extract energy from the car's motion. Of course you need conventional brakes too - and this process doesn't work when the battery is already fully charged...but still, it makes some significant savings. This is called 'regenerative braking'.
  • Because you don't need that peak power from the gasoline engine when you do a (rare) hard acceleration - you can have a smaller engine. The engine only has to be large enough to provide the AVERAGE amount of power the car needs - not the MAXIMUM amount. Since you (mostly) don't go around accelerating hard all the time - this means that you have a smaller, lighter, more fuel-efficient engine - and let the battery provide the power for short bursts of speed.
Having said that, hybrid cars are not the perfect thing some would tell you. Most of the reason the Prius gets such good gas mileage is because it's super-streamlined, it's actually not a very fast car and it has relatively poor air-conditioning and such. If you did all of those things to a conventional car - and DIDN'T have to carry around all of those heavy batteries - you could do just as well as the Prius. The Prius actually gets rather poor miles per gallon on long freeway trips because in that case the regenerative braking and the average-versus-peak thing don't work out too well - and pretty much any decent car, when driven in "overdrive" or its topmost gear, will have the engine running at its most efficient rpm. Hence the Prius has no special advantages in that case. However, for in-town stop/start driving, it works amazingly well.
Yep. Re-reading my earlier response (which was mostly about the Prius) - I should add that there are a lot of cars out there that SAY they are hybrids which really are not. It's pretty safe to say that if the car's gasoline engine is driving the wheels - then it's not really a hybrid. Some cars claiming to be hybrids use the electric power only to improve acceleration - thereby allowing a slightly smaller gasoline engine to be used. They offer no benefits whatever to drivers who do not floor the gas pedal at every opportunity!
It's something of a mystery why the Prius doesn't come with a plug-in option. You can certainly buy these as after-market options though - and I'm told that they work amazingly well.
Solar panels mounted on the roof of the car are useless. The amount of energy a solar panel can produce is so tiny that you'd hardly be able to drive a mile after a whole day of charging. The weight and cost of the panels simply don't make it worthwhile. Moreover - if you wanted solar panels for charging your car - why not leave them at home next to your garage, use them to charge a battery, then recharge your car from that battery? That way you don't have to carry the weight around with you all the time. No - solar panels are quite utterly useless for cars - worse than useless, in fact.
The Prius (and presumably the Insight too) does indeed suffer badly if you are doing long freeway trips. The battery needs to be continuously recharged - so the gasoline engine runs all the time - and what you have is a decidedly underpowered car that's wasting much of its limited power in the losses involved in going to the wheels via a generator, battery and electric motor. EPA estimates are wildly wrong for the Prius - and my 140mph, 6.5-second 0-60 MINI Cooper S gets better practical freeway mileage than the Prius. On long uphill sections, the car may actually limit your speed - I've heard stories of Prius owners who had been doing long freeway runs and then headed up a mountain, finding that the car would slow down to 15mph - which is the fastest its pathetic little gasoline engine can manage uphill without a fully charged battery!
So we have to be careful. Hybrids are a useful way of getting better gas-engine mileage for in-city driving...but electric cars are better still at doing that. Electric cars don't have the range you need for road trips, however - and for that a gasoline engine is currently the only solution. So hybrids are not just hybrid in the technology they use - but also hybrid in the range of applications they can cover. Personally - I'd love to have a MINI-E (all-electric, 100 mile range, 110mph, blistering acceleration) for my daily commute - and an efficient ~40mpg gasoline-only car for road trips. The technology for both is available...but trying to cram those two vehicles into one car seems like a relatively bad idea.
Incidentally - there are technologies out there to do regenerative braking using gasoline engines. The idea is that when you push on the brake pedal, the fuel supply and spark to the engine would be cut off and valve timing changed such that the pistons of the engine would be used to compress air and any remaining exhaust gasses into a high-pressure storage cylinder. This would provide a kind of super-efficient "engine braking". Then, when you need to accelerate again, the compressed gasses would be allowed back into the cylinders under high pressure to get the car moving again with no fuel being injected and no spark provided. Once the tank is depleted, you start injecting fuel and sparking the plugs again - and the engine runs normally. With this kind of technology, an efficient, modern gasoline engine could out-perform the Prius even in in-town driving situations.
SteveBaker (talk) 17:21, 31 May 2009 (UTC)[reply]
Well said. One of the hardest things an automobile manufacturer does is decide its product lineup, though. As Steve Baker points out, the optimal solution might well be two separate vehicles: a short-range, super-efficient-for-city-driving minicar, and a long-range, high-mileage-on-freeways midsize or compact car. However, designing and engineering these vehicles runs into one important (and incredibly non-negligible) factor - these cars will compete with each other for sales! Given that a market exists for economical, environmentally-friendly vehicles, an automobile manufacturer must estimate how many people will be able to purchase the cars in question. Releasing two "eco-cars" will saturate the already small market, and neither car will hit sufficient production volumes to achieve the economies of scale that allow effective manufacturing. No intelligent auto manufacturer will release multiple cars of the same type, because it will drive them into bankruptcy. For this reason, the successful hybrid car is really a sub-optimal compromise like the Prius - which aims to combine the 80th percentile of the desired features for the 80th percentile of the expected market (or some other marketing-ese voodoo statistics). These design requirements get kicked back to the poor engineers, who have to figure out how to make a 50 mpg engine fit into a sleek-looking body, run quiet in heavy traffic, and still handle speed on the highways. The result is sort of a slapped-together "hybrid", which makes some pretty severe tradeoffs, as Steve mentioned above. So we have the Prius and the Volt and the Insight. If your only goal is fuel efficiency, you are probably best off buying a diesel compact, which will darn well beat the Prius in true miles-per-gallon across a wider range of driving conditions (and won't be stumbling up those steep hills). 
And lay off the jackrabbit starts - if you're an average driver, something like 10% of your gasoline is used while your car is going zero miles per hour, and 50 or 60% is used during acceleration - which leaves only around a third of your fuel actually moving you along at a steady speed. Changing your driving habits will save more fuel than changing your car. Nimur (talk) 19:01, 31 May 2009 (UTC)[reply]
Well, BMW aren't stupid - and the all-electric MINI-E (if/when it becomes a mainstream product) will certainly be produced alongside the normal 40mpg gas-powered MINI and the diesel MINI/One-D - and all three cars are otherwise pretty much identical (except the electric version is a two-seater - the back seat area being basically full of batteries).
Diesel cars are also a very good thing - and they're very popular in Europe. But the legal issues surrounding the amount of sulphur in diesel fuels prevents the good ones such as the Golf and MINI/One-D from making it into the USA.
That's one of those bloody stupid regulations where the amount of pollutants coming out of the tailpipe is measured as a percentage of the total exhaust gasses. The consequence of which is that a car that burns more gas and produces more pollutants is permitted when one that burns a LOT less gas and produces a little less pollutants is not! These kinds of dumb laws are in severe need of re-thinking. SteveBaker (talk) 20:04, 31 May 2009 (UTC)[reply]
That's because politicians are rarely scientists or engineers. If we had informed decision-makers, we wouldn't have that situation. The diesel issue in particular is very frustrating - diesel is easier and cheaper and more environmentally friendly to produce than gasoline. It has better miles-per-gallon and ton-miles-per-gallon characteristics. The latter is not brought up nearly enough - it's easy to get a lot of miles-per-gallon by making a very small vehicle, but most of our transportation fuel is consumed by commercial trucking - where the vehicle weight is almost negligible compared to the cargo weight. Fortunately, most trucks do run on diesel - and if the rest of the automobile drivers realized how much of a cost savings this is, we would dramatically reduce our emissions and fossil-fuel consumption. Nimur (talk) 21:02, 31 May 2009 (UTC)[reply]
I'm not sure that the amount of energy you'd get from solar is as completely insignificant as Steve suggests. Our article on solar car suggests that it's not unreasonable to get 2 kW from car-mounted solar panels. According to the electric car article, the EV1 consumes 2.7 kWh for the equivalent of a liter of gas. Granted, a liter is not a lot of gas. But if I could park my car in the sun while I'm at work and get four free liters of gas, I would not say that this was "unnoticeable".
I also notice that a solar panel is a factory option for the Xebra electric cars mentioned earlier in this thread. APL (talk) 13:56, 1 June 2009 (UTC)[reply]
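APL's back-of-envelope figure can be reproduced as follows. The charging time here is an assumption back-solved from the four-liter claim (it is not stated in the thread); the 2 kW and 2.7 kWh/liter figures are the ones APL cites.

```python
# Reproducing the rough arithmetic: a 2 kW panel charging for ~5.4 hours
# of good sun yields 10.8 kWh; at the EV1's ~2.7 kWh per
# liter-of-gasoline equivalent, that's about 4 liters' worth of energy.
panel_kw = 2.0              # APL's solar-car panel figure
hours = 5.4                 # assumed charging time (back-solved)
kwh_per_liter_equiv = 2.7   # EV1 consumption per liter-equivalent

liters_equiv = panel_kw * hours / kwh_per_liter_equiv
print(round(liters_equiv, 2))  # 4.0
```

Note that the disagreement in the replies below is really about the panel rating: substituting the 150 W figure from the actual Xebra option cuts this result by more than a factor of ten.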
I disagree - 2 kilowatts is abysmally negligible compared to your gasoline engine. For every second that you charged your battery, you would get one second of a roughly 2.7 horsepower boost if you had a perfectly efficient conversion. In reality, you will get much worse performance. I doubt this tiny boost would even come close to making up for the excess weight of lugging around a solar panel and battery system. If you want to store more energy, you need a heavier battery - it's a no-win situation. Nimur (talk) 14:38, 1 June 2009 (UTC)[reply]
We're talking about Hybrid-Electric cars. They already have batteries. APL (talk) 15:56, 1 June 2009 (UTC)[reply]
But those batteries are already being used. They'll have optimised the size and number of batteries based on weight, and on reducing the amount of time the battery is full, but there is still excess energy from the engine/brakes which gets wasted. If you added solar panels without adding more batteries you would just end up wasting the energy sources you already have. --Tango (talk) 18:14, 1 June 2009 (UTC)[reply]
I agree with Nimur - your numbers are hopelessly optimistic. The Zap Xebra does indeed come with a solar panel roof option - and go take a look at their web site and see what they have to say about it. Firstly, it costs $1,500 (a costly option on a $12,000 car!) so it's not at all cheap. Secondly - if you read the specs on the panel on the Zapworld web page - it doesn't produce 2 kilowatts - it produces 150 watts. Parking the car in the sun all day for 4 liters of gas sounds reasonable - but you're not getting 4 liters - you're getting a little under a third of a liter. That's about 0.07 US gallons - so with gas at around $3 a gallon, a full day of recharging saves you about 25 cents. So the payoff time for that solar panel on a hybrid car is something like 6,000 sunny days...maybe 30 years in sunny parts of the world! Certainly longer than the life of either car or solar panel. But worse - the solar panel weighs 85lbs. The odds are pretty good that it's actually going to cost you more in additional electricity consumption than it actually provides! Possibly the only reason people buy them (as the Zap page coyly suggests) is to "appear even more Green to your neighbors!". Solar panels are not as good as you think. Partly that's because they need to be tilted at right angles to the sun's rays to produce the optimum amount of power...and a car's roof is rarely (if ever) at the optimum angle. Secondly, they are adversely affected by dirt (another problem on a car roof). Thirdly, they gradually lose effectiveness as they age...so you do have to replace them if you plan on keeping your car for a long time. Fourthly - when solar panels are used effectively, they are on the roofs of buildings where they are unlikely to be shaded - and tilted towards the prevailing sun direction for the location they are in. You can't always park in the sun. 
There are many cities where above-ground parking is rare - or where you are blocked from the sun by tall buildings - or where there are trees overshadowing you. On the little 3-wheeled Xebra, adding all that weight to the top of the roof... I'd worry about how much it adds to the roll-over risk. SteveBaker (talk) 18:24, 1 June 2009 (UTC)[reply]
I got the 2kw number from the solar car article. I wonder if that doesn't refer to special super-expensive panels used on solar race cars. Oh well. APL (talk) 19:24, 1 June 2009 (UTC)[reply]
I will add that there is really only a single main reason: conventional internal combustion engines are generally fuel efficient only at one particular speed. My vehicle is most efficient at 90 km/h in its 4th gear, where it can achieve 7 litres of fuel per 100 km. In every other situation it is far less efficient. When accelerating, for example, even at low speeds (from 10 km/h to 50 km/h), the fuel consumption can be 40 L per 100 km or worse. At high speeds (~110 km/h) the fuel consumption goes up to 14 L per 100 km. At low speeds (50 km/h) it's 11 L/100 km. Cars are normally engineered around typical highway driving speeds. The reason for this is that you need about 4 times as much energy to move a car at twice the speed. For example, a car that does a trip at 50 km/h can use 1/4 as much energy as a car that does the same trip at 100 km/h, because halving the speed cuts the air drag power by 8x while the trip only takes twice as long. So when they go to design a car, even one for the city, they know that it still needs to be able to travel at 100 km/h sometimes, so this is where they make it most energy efficient. This is why 99.9% of conventional cars are most fuel efficient at speeds of around 80 to 90 km/h. You could have your manufacturer modify your car so that it is most fuel efficient around 50 km/h (and get, say, 7 L per 100 km out of even 4 L V6 engines), but the fuel consumption at high speeds like 100 km/h would go up to 28 L per 100 km or worse. Electric cars don't have this problem that internal combustion engines have. An electric motor can work at variable speeds easily. For an internal combustion engine to work efficiently at all different speeds, it would have to be capable of dynamically changing many fixed variables such as piston stroke length and valve timing, and making huge (order-of-magnitude) changes in the amount of fuel injected based on speed, instead of only minor changes.
Most engines run fixed stroke and fixed valve timing with very little change to fuel injection amounts; they just vary their speed, even though they can only run optimally at one speed. --Dacium (talk) 06:24, 2 June 2009 (UTC)[reply]
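Dacium's scaling argument can be sketched numerically. This is a hedged illustration, not real vehicle data: it assumes aerodynamic drag dominates, so drag force grows with the square of speed, drag power with the cube, and the energy to cover a fixed distance with the square (because the trip takes 1/v as long).

```python
# Illustrative sketch (not real vehicle data): how aerodynamic drag
# scales with speed. Drag force ~ v^2, so drag power ~ v^3, and the
# energy to cover a fixed distance ~ v^2 (the trip takes 1/v as long).

def drag_power_ratio(v1, v2):
    """Ratio of drag power at speed v2 vs v1 (force ~ v^2, power ~ v^3)."""
    return (v2 / v1) ** 3

def drag_energy_per_trip_ratio(v1, v2):
    """Ratio of drag energy over a fixed distance (energy ~ v^2)."""
    return (v2 / v1) ** 2

# Doubling speed from 50 to 100 km/h:
print(drag_power_ratio(50, 100))            # 8.0 (8x the power)
print(drag_energy_per_trip_ratio(50, 100))  # 4.0 (4x the energy per trip)
```

Rolling resistance and drivetrain losses are ignored here, which is why real-world consumption figures like those above don't follow the idealised ratios exactly.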

How many human beings can Australia support?

I often hear the argument that, while Australia is a vast continent, it is also arid and dry and cannot support much more than its current population of 20 million. (For the record, this argument is usually brought up by people terrified of non-white immigrants.) Is there any truth to this statement? While this is a pretty harsh country, there are plenty of green and verdant areas - Tasmania, for example, is roughly half the size of Great Britain, which has a population of nearly 60 million. 58.161.196.113 (talk) 11:26, 31 May 2009 (UTC)[reply]

In the global economy, the number of people that the food supplies of a nation can support is largely moot. Much of the food in the U.S., for example, is imported from overseas. There are studies, I suppose, which indicate how many calories a particular acre of land can produce and how many calories a person needs to survive; but with the ease with which foodstuffs are moved around the world, assuming that a nation need only support its own population with its own food growing is kinda silly. --Jayron32.talk.contribs 11:59, 31 May 2009 (UTC)[reply]
The United States is a massive net exporter of food. Most notably, the US is one of the largest exporters of rice, but it also exports huge amounts of wheat and corn. -Arch dude (talk) 20:17, 31 May 2009 (UTC)[reply]
Australia also exports large amounts of wheat. See Export Wheat Commission. -Arch dude (talk) 20:20, 31 May 2009 (UTC)[reply]

Australia's population per square mile is tiny. It could very easily support millions more people. Yes there'd need to be major developments to boost infrastructure, but there's no reason (certainly not food) why a developed nation such as Australia couldn't have a major increase in its population. Whether or not it is desirable is another question though, and more of a political question. 194.221.133.226 (talk) 12:54, 31 May 2009 (UTC)[reply]

I understood that insufficient water resources may be the limiting factor in the southern part of Australia at least. Mikenorton (talk) 13:22, 31 May 2009 (UTC)[reply]
An Australian ecologist (whose name I've forgotten) has argued in a book (whose title I've also forgotten), that Australia is already overpopulated when measured against the maximum number its ecology can support sustainably, which I seem to recall he calculated as 17 million. This low limit, he believed, was due to the unusually impoverished soil of the continent, this resulting from the extermination some tens of thousands of years ago of most Australian megafauna by the Aborigines, thus removing their dung from the ecological cycles. If I manage to remember more (or find the book) in the near future I'll post more references - I'm sure I've seen mention of him on Wikipedia. 87.81.230.195 (talk) 18:23, 31 May 2009 (UTC)[reply]
Are you thinking of Collapse: How Societies Choose to Fail or Succeed, by Jared Diamond? -Arch dude (talk) 20:17, 31 May 2009 (UTC)[reply]
No, definitely wasn't it or him. (I've got some of his other books though). 87.81.230.195 (talk) 02:30, 2 June 2009 (UTC)[reply]
Cracked it! Tim Flannery, The Future Eaters. 87.81.230.195 (talk) 19:41, 2 June 2009 (UTC)[reply]
Incidentally, I believe Jayron32's argument does not stand - imported food (and other goods) must be paid for with exports of some kind, whose production ultimately depends on their extraction from the ecosystem, and "everything is linked to everything else". The ability of the USA and Australia to consume two or three times a sustainable share of the planetary ecosystem's production is balanced by the populations of other countries having to subsist on much less. If everybody on Earth had a USA/Australian/UK standard of living, we'd need several Earths to sustain it; an equitable sustainable standard for everyone would be markedly lower, unless the world population was only about 2 billion. The medium-term future is likely to see one or more of: drastic reduction of First World consumption through drastically more efficient technology, or First World economic collapse; mass migrations from the Third World leading to worldwide social collapse; drastic population reduction through famine and pandemics. It's being so cheerful as keeps me going. 87.81.230.195 (talk) 18:42, 31 May 2009 (UTC)[reply]
A higher standard of living doesn't generally take up more space. You still need to eat the same amount of food. --Tango (talk) 20:22, 31 May 2009 (UTC)[reply]
It need not (though many rich bastards better-off people do have larger houses than the local norm and eat more expensively produced foods), but currently it usually averages much higher consumption of energy and material resources, and production of pollution and waste. We urgently need to develop technologies giving us the same benefits for less of the aforementioned, and to help the developing nations (e.g. China and India) to skip straight to these rather than replicating the wasteful and polluting methods the "West" went through. 87.81.230.195 (talk) 02:44, 2 June 2009 (UTC)[reply]
The question requires rephrasing: "How many human beings can Australia support *now*?" Every over population doomsday projection (they go back hundreds of years) has been wrong because the projections assume that population increases but that technology does not. While an ecosystem is finite, the sustainable use to which the ecosystem can be put is variable and a function of technology. Wikiant (talk) 18:47, 31 May 2009 (UTC)[reply]
There is some balance that has to be struck at some point between how much energy it costs you to put into each bit of food you produce. You can desalinate saltwater and grow algae and fungi in vats eventually, but that is going to cost energy. Someone is going to have to mine and dispose of your fossil/nuclear energy materials or solar-panel raw materials and waste. You can unload some of that cost onto other economies for a while. (We've been doing that, as 87. hinted.) But at some point the buck stops. 71.236.26.74 (talk) 06:53, 1 June 2009 (UTC)[reply]

Well I guess the simple way to answer this question would be to compare arable land. This list gives arable land in Australia as being the 6th largest in the world. The world as a whole has 325 people per square km of arable land, while Australia has 43. This implies that Australia's population could grow to 7.5 times what it is today (151 million) before reaching the average density of the rest of the world. Note that many countries are far above this average and still largely self-sufficient agriculturally; India and China have 753 and 943 people per square km of arable land respectively. Were Australia to grow to the same density as China, it would have 438 million people. Of course this comparison is not perfect: there is large variation in the "quality" of arable land in terms of supporting humans. There is also the issue of water, as raised above. It adds another complication and uncertainty, especially for a country such as Australia where there is quite a lot of water but it is nowhere near evenly distributed. TastyCakes (talk) 18:04, 4 June 2009 (UTC)[reply]
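TastyCakes' arithmetic can be reproduced directly; the figures below are the ones quoted in the thread (the 20 million population is the round number from the original question).

```python
# Sketch of the arable-land arithmetic above. All figures are as quoted
# in the thread; the 20 million population is the OP's round number.
aus_pop = 20e6        # Australia's population (approx.)
aus_density = 43      # people per km^2 of arable land, Australia
world_density = 325   # people per km^2 of arable land, world average
china_density = 943   # people per km^2 of arable land, China

# Population at world-average density of arable land:
print(aus_pop * world_density / aus_density / 1e6)  # ~151 million

# Population at China's density of arable land:
print(aus_pop * china_density / aus_density / 1e6)  # ~439 million
```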

General Motors EV1

The General Motors EV1 electric cars were leased to customers and later recalled by GM. Why did they lease these unsuccessful test cars to people in California and Arizona where the weather is always very hot? Was it a bad decision? Or did their batteries perform even worse in cold weather?

Only a small number of these cars escaped destruction. In a few decades, can we install more advanced batteries in these museum cars and make them usable? -- Toytoy (talk) 13:23, 31 May 2009 (UTC)[reply]

For one, California (where most of the EV1s were issued) is not "always hot". Additionally, Arizona participants were not issued the EV1s with the most heat-vulnerable battery systems. At a guess, California was selected largely for the PR opportunities of a zero-emissions car in the most emissions-restrictive state.
As for the remaining cars, it's highly unlikely that any will be retrofitted and restored to the road. GM has no incentive to do so, as the retrofit and associated certifications would be far more expensive than making current-model EVs. Museums and other display locations have no incentive to do so, as they don't require roadworthy models for display. Perhaps a private collector, should he be able to acquire an EV1, might have an interest. — Lomn 14:01, 31 May 2009 (UTC)[reply]
The cars were not leased to make money for GM - they were leased as an experiment to see how the EV1 would perform out there in the real world with real customers. This is not an unusual thing to do. In fact, BMW's MINI division is doing that exact thing right now with the Electric MINI Cooper. They built something like 500 of these cars and have leased about half of them to customers in the US - in California and New York only. The lease terms are very like the EV1's and stipulate in no uncertain terms that the cars MUST be returned at the end of the lease - no extensions or right to purchase will be made. Why? Well because they want to look at how these cars have survived the ravages of practical driving by real people - and (as with the EV1's) they have no intention of supporting them with spares and maintenance off into the future. If you want to know how well your design works - there is no point in putting them into 'easy' situations - if you suspect that they'll have trouble in extreme heat - then you should absolutely do your trials in Arizona.
No doubt the few remaining EV1's could have new batteries and be made to work - but they are now museum pieces and will probably remain that way.
If you want an electric car - you can buy one very easily. At one end of the scale, the ZAP Xebra has been in production since 2006 and is quite affordable ($11,700) - but with only a 25 mile range...you'd better have a pretty short commute! At the other end of the scale - if you have deep pockets and fancy something a bit more sporty - the Tesla Roadster is a pretty cool car (based on a Lotus body) - it'll go 240 miles on a charge and has a 0-60 time under 4 seconds! Our Category:Production electric vehicles lists many electric cars that are in mainstream production around the world. SteveBaker (talk) 16:55, 31 May 2009 (UTC)[reply]
I fixed your category ref, Steve.--Polysylabic Pseudonym (talk) 03:20, 1 June 2009 (UTC)[reply]
You can install whatever you want into a car body, but the ultimate questions are: will it be prohibitively expensive, and will you be in violation of state and federal regulations for road-worthy automobiles? The answer to these questions turned out to be "yes" for General Motors, which is why the EV-1 program was ended so abruptly. Though numerous conspiracy theories suggest collusion from the petroleum industry, this is sort of a flimsy argument - cost-effective electric vehicles have been around for a very long time, but are mostly unsatisfactory for the sort of driving we have become accustomed to. Nimur (talk) 19:20, 31 May 2009 (UTC)[reply]

Do owls ever get aggressive with people?

I was just out working in my shade garden, and noticed after a few moments that a Great Horned Owl was sitting on the wood pile about 8 feet away, staring at me. I of course went and got my camera and got some pix from about 12 feet away, which he didn't seem to have any problem with (even with the flash!).

I'm just wondering whether I should worry about working so close to him... it's hot and sunny and I was really looking forward to shady work. My wife and daughter will also be getting home any minute now, and I want to show them. So are owls safe to be near? --SB_Johnny | talk 15:07, 31 May 2009 (UTC)[reply]

If you disturb their nest or corner them, they might turn violent, but I can't see why they wouldn't just fly away in other situations if they felt threatened. If they don't feel threatened, there is no point in them attacking you; you clearly aren't good prey. Oh, unless they are wounded - wounded animals can be very dangerous. --Tango (talk) 15:12, 31 May 2009 (UTC)[reply]
Owl talons, especially those of large owls defending their nest, can be vicious. Check out Eric Hosking and be careful if there is a nest nearby. Shyamal (talk) 15:23, 31 May 2009 (UTC)[reply]
No, doesn't seem to be wounded... a couple minutes ago he swooped down onto something, ate it, then lofted himself back onto the pile. I've seen (and shot) baby rabbits near there, so maybe the rabbit nest is under the pile? I'm a bit surprised to see him (or her) hunting at mid-day. We all just went down to see it, and it just stared at us and winked :-). I can't imagine the nest is there... I'm pretty sure he lives in the barn. --SB_Johnny | talk 15:39, 31 May 2009 (UTC)[reply]
Well I'm jealous, you're one lucky dude. Richard Avery (talk) 21:45, 31 May 2009 (UTC)[reply]
Congrats on your winged hunting buddy. It probably got interested by you rustling some leaves. Since you were successful in helping it to a morsel its strategy paid off. Be careful with pest control. You don't want it exposed to any poison. Great Horned Owls prefer hunting at night (?our article says, shouldn't that be dusk and dawn?) because that's when their prey comes out to play. That doesn't mean they won't hunt at other times if opportunity presents itself. They rely on their hearing as much if not more than their vision to hunt. Some falconry shows have horned owls as show birds. Enjoy the show. (...and maybe post us some of your pix in the article :-) 71.236.26.74 (talk) 06:42, 1 June 2009 (UTC)[reply]

Muscle Contraction question

By the grace of G.od

Rechovot (Israel, 31/5/09

Peace and benediction!

In your article about muscle contraction, you write about the "cross-bridge" in the myosin/actin action. It would be helpful to add a few lines about the "cross-bridge" process.

Thank you very much for your wonderful website.

Yehudah F. Rechovot, Israel EMT-P learning —Preceding unsigned comment added by Yehudah770 - Mashiach Now (talkcontribs) 19:19, 31 May 2009 (UTC)[reply]


I have moved your question to a new section. Nimur (talk) 19:22, 31 May 2009 (UTC)[reply]

The most appropriate place to make a suggestion for a page would be its specific Talk page, which is likely to be watched by people who are both interested and knowledgeable about the topic. In this case, that would seem to be Talk:Muscle_contraction. Of course, if you have a clear sense of what should be said, you are also welcome to be bold and edit the page. --Scray (talk) 19:48, 31 May 2009 (UTC)[reply]

Downtown Tokyo

I read the cryptically written articles Tokyo, Greater Tokyo Area, City of Tokyo, and Special Wards of Tokyo, and I couldn't for the life of me find a clear explanation of where "Downtown Tokyo" is, i.e. in which ward(s) or other subdivisions or administrative regions within Tokyo. When I say Downtown, I mean the cluster of skyscrapers you see when you Google Image Search "Tokyo Skyline". Where exactly are these buildings? 209.148.195.177 (talk) 23:32, 31 May 2009 (UTC)[reply]

  • This is probably going to be completely wrong, but I was in Tokyo earlier this year and got the impression that there were actually several different clusters of skyscrapers, or "downtown" areas, dotted all over the city. 58.161.196.113 (talk) 16:13, 1 June 2009 (UTC)[reply]

La Défense

It's not really downtown.

Is La Défense basically Downtown Paris? Why is it outside the city limits? 209.148.195.177 (talk) 23:33, 31 May 2009 (UTC)[reply]

La Défense is sort of in the boonies, actually; it is a business district to the northwest that's the newest of the arrondissements — although I notice it's not even listed in our article Arrondissements of Paris. As a visitor, I claim there's nothing going on in La Défense to speak of culturally, and it's a long, long walk to anything I'd consider "downtown Paris". Tempshill (talk) 23:52, 31 May 2009 (UTC)[reply]
I don't think it's correct to refer to La Défense as an arrondissement; that word refers to the 20 districts that Paris itself is divided into, and unless something has changed recently, La Défense is outside the city limits.
Why does this concentration of office towers exist in one suburb? Presumably because urban planners decided that preserving the character of central Paris was desirable but also felt that it was desirable to allow this kind of development, so they passed zoning laws to allow for this. I imagine the construction of the Montparnasse Tower, which many people disliked, must have been a catalyst for this planning decision -- and yes, look at the Criticism section of that article. --Anonymous, 00:42 UTC, June 1, 2009.
As a visitor I claim La Défense itself is culture going on. It is an expression of Gallic gigantism, full of bold surprises in 3-D. It is the futuristic movie set that could only be imagined for Alphaville (film) and a landscape for artworks that are superhuman but not inhuman. Why is it where it is? Because the Parisians want to reach for the future without mutilating their heritage, and their solution is that long, long axis joining their arch monuments to triumph of courage and triumph of the brain. Vive la différence! they say, having digested the painful lesson of immiscible architectural dichotomies. The tourist sees the Grande Arche from afar and may enter its arena that boasts cafés, restaurants, cinemas, occasional grand shows of sound and light, a shopping mall that never ends and a potent motor museum. Cuddlyable3 (talk) 10:00, 1 June 2009 (UTC)[reply]
For a parallel in the U.S., see the relationship between Washington, D.C. and Arlington, Virginia. Washington has strict building codes designed to preserve the architectural character of the city, but it is also a major urban center which demands a large, vertically aligned commercial district, like any other major city. Arlington has become the skyscraper farm for Washington DC much the way that La Défense has become so for Paris, for almost exactly the same reasons. --Jayron32.talk.contribs 18:41, 1 June 2009 (UTC)[reply]
Incidentally, for both this and the previous question, you'd likely receive a better response if you post the question in humanities or perhaps misc, since they don't really seem to concern science much. Nil Einne (talk) 22:46, 1 June 2009 (UTC)[reply]


June 1

Cholesterol Level/Temporal Arteritis

I am a 79-year-old married male and have five children. On September 9, 1995, a biopsy was taken from the artery on the left side of my temple and was diagnosed as temporal arteritis. I am presently on 5 mg of prednisone taken daily, and quarterly ESR lab work is done by my family doctor. The present rate is 25.

I continue to have high cholesterol levels even after a week of fasting on a low-cholesterol diet prior to blood work and taking every precaution to get it down. My cholesterol readings for the last three years are - 11/14/07 = 221, 01/24/08 = 198, 04/13/09 = 197.

Can the inflammation of the main arteries caused by arteritis create an incorrect reading on the lab test for cholesterol? Thank you. —Preceding unsigned comment added by 76.7.97.223 (talk) 00:11, 1 June 2009 (UTC)[reply]

We apologize, but we have a policy that prohibits us from answering medical questions such as this on Wikipedia. -- Tcncv (talk) 00:47, 1 June 2009 (UTC)[reply]
As Tcncv says, we cannot give medical advice. You need to go and talk to a doctor. If you suspect your doctor may have made a mistake, you can get a second opinion from another doctor. --Tango (talk) 15:15, 1 June 2009 (UTC)[reply]

Seeing a blue tint for a minute - overexcited photosensitive ganglion cell activity?

It was a lovely, lazy Sunday afternoon, and I sat out on my deck for a few minutes. With the sun beating down, I looked up into the brilliant blue sky, closed my eyes and relaxed for a few minutes. (I may have had a 1-2 minute nap.) When I went back inside, things had a slight blue tint for a minute, maybe less.

What caused this? In checking the articles on vision, I happened upon the photosensitive ganglion cell article, as one of the cells that picks up information and sends it brainward. Was this from overstimulation of these cells? It's a logical hypothesis, by my estimation, because not only was I staring right at the blue sky, but with my eyes closed, would each pupil also be bigger? After all, with them closed I can still see slight differences in light and dark, such as my room light here versus under my desk.

Thanks! And don't worry, I'd never stare directly at the sun - even with my eyes closed. :-) Somebody or his brother (talk) 00:41, 1 June 2009 (UTC)[reply]

I am not familiar with the specific seeing-blue-tint phenomenon you describe, but my guess is that it is caused by adaptation and/or bleaching of L and M cones compared to S cones while you were outside, resulting in a higher response from S cones compared to L and M. The thing is, human eyelids are not completely opaque, so some light gets through; and the light that gets through is mostly at the long-wavelength end of the visible spectrum. Thus, L and M cones are exposed to light in their respective opsin absorption bands, and adapt to that light; but S cones are not (as they mostly absorb short-wavelength light), so they adapt to the relative "darkness". If you look at a bright red or orange light for a while, objects projected on that area of the retina will look green or blue for some time after that. I guess that what you have experienced is largely the same effect. Now, your second question is on photosensitive ganglion cells. Our article on the photosensitive ganglion cells is extremely detailed, considering how little we actually know about them. They certainly project to the suprachiasmatic nucleus, helping to regulate the circadian rhythm; they also likely have an effect on pupillary constriction via the olivary pretectal nucleus. However, I seriously doubt they have such a strong and direct effect on color perception as the phenomenon you describe. I'm not ruling them out 100% as a "suspect" in your phenomenon, but I think there may be a far simpler - and more conventional - explanation for your experience, like the one I gave above. All the best, --Dr Dima (talk) 02:14, 1 June 2009 (UTC)[reply]
Yes, I'd give the same explanation as the Doc, but a bit simpler: your eyes get used to the red colour you see with your eyes closed, so then everything else looks really "not red" afterwards. Aaadddaaammm (talk) 10:11, 1 June 2009 (UTC)[reply]
Cool, thanks, y'all.Somebody or his brother (talk) 10:40, 1 June 2009 (UTC)[reply]

Young's double slit experiment

Dear Wikipedians:

What does "first order image" mean for Young's double slit experiment? I'm having trouble with this terminology; it seems to be made up by my physics teacher, and I can't find it anywhere in double-slit experiment.

Thanks,

70.31.158.159 (talk) 02:10, 1 June 2009 (UTC)[reply]

Maybe they mean the first diffraction fringe (strongest, central bright area)? Nimur (talk) 07:31, 1 June 2009 (UTC)[reply]
No, that's the zeroth order fringe. The first order images are the ones sitting next to it on either side...the peaks of intensity...that's how my teacher taught me :-) Rkr1991 (talk) 08:34, 1 June 2009 (UTC)[reply]

Thanks!!! I was able to get the right answer with the definition. 70.31.158.159 (talk) 14:46, 1 June 2009 (UTC)[reply]
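For reference, the terminology maps onto the standard double-slit bright-fringe condition d·sin(θ) = m·λ, where the integer m is the "order": m = 0 is the central (zeroth order) fringe and m = ±1 are the first order images on either side of it, as Rkr1991 says. A small sketch with assumed, illustrative values for the slit separation, wavelength and screen distance:

```python
import math

# Standard double-slit bright-fringe condition: d * sin(theta) = m * lambda.
# m = 0 is the central (zeroth order) fringe; m = +/-1 are the first order
# images either side of it. All numeric values below are illustrative.

def fringe_angle(m, wavelength, slit_separation):
    """Angle (radians) of the m-th order bright fringe."""
    return math.asin(m * wavelength / slit_separation)

d = 0.1e-3    # slit separation: 0.1 mm (assumed)
lam = 550e-9  # wavelength: green light, 550 nm (assumed)
L = 1.0       # screen distance: 1 m (assumed)

for m in (0, 1, 2):
    theta = fringe_angle(m, lam, d)
    pos_mm = L * math.tan(theta) * 1000  # fringe position on the screen
    print(f"order {m}: angle {theta:.4f} rad, position {pos_mm:.2f} mm")
```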

Technicians on planes

I noticed on the AF447 article, that there were 3 technicians onboard. It made me wonder whether there are always technicians onboard, what they actually do, and where they sit. Can anybody help? Thanks United Bob (talk) 13:32, 1 June 2009 (UTC)[reply]

That statement has now been removed from the article, so it may not have been accurate. I don't believe it is routine to have technicians on board, but some planes do carry a Flight engineer, who sits on the flight deck with the pilot and co-pilot. Modern planes don't generally have engineers. I doubt an Airbus A330, like the one involved in this incident, had one. --Tango (talk) 14:10, 1 June 2009 (UTC)[reply]
Indeed - flight engineers are pretty rare these days - what they used to do was to monitor the engines for vibration and/or temperature problems - and also track fuel consumption, shift fuel from one tank to another if one engine was consuming more than the others - that kind of thing. Other airline staff such as technicians frequently fly with an aircraft as passengers in order that they can do some maintenance during the next stop-over or something...I suspect that's what happened in this case. It's hard to read anything meaningful into it though. SteveBaker (talk) 17:47, 1 June 2009 (UTC)[reply]
Yes, they could have been Deadheading (an unfortunate term, in context). --Tango (talk) 18:11, 1 June 2009 (UTC)[reply]

Output Formula question

output for per man per shift formulaSathyavolu sar (talk) 13:35, 1 June 2009 (UTC)[reply]

I want to know about the formula for "output per man per shift" in a mass-production manufacturing industry.
Can anybody help?
Sathyavolu sar (talk) 05:30, 1 June 2009 (UTC)[reply]


I have moved your question to a new section so it can be answered. Since you are probably new to Wikipedia, I have also taken the liberty of copying your talk-page message to this location where it will probably be better answered.
Have you looked at the man-hour article? Nimur (talk) 14:46, 1 June 2009 (UTC)[reply]
Take the total amount produced and divide it by the total number of shifts (that is the average shifts per person, times the number of people). --Tango (talk) 15:13, 1 June 2009 (UTC)[reply]
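A minimal sketch of Tango's formula, with illustrative names and numbers (nothing here comes from a real production line):

```python
# Output per man-shift = total output / total man-shifts worked,
# where total man-shifts = average shifts per person * number of people.
# Function name and example figures are illustrative only.

def output_per_man_shift(total_output, shifts_per_person, num_people):
    total_man_shifts = shifts_per_person * num_people
    return total_output / total_man_shifts

# e.g. 12,000 units made by 50 people working 24 shifts each:
print(output_per_man_shift(12000, 24, 50))  # 10.0 units per man-shift
```

If people work different numbers of shifts, replace `shifts_per_person * num_people` with the actual sum of shifts worked.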

Adderall vs. recreational speed

Does Adderall differ from the amphetamine pills that are standard among recreational users? If so, how? NeonMerlin 14:07, 1 June 2009 (UTC)[reply]

A major issue with all illegal pharmaceuticals is the lack of quality control. For refined drugs (heroin, cocaine) the chief issue is concentration (alloyed with dangers from harmful adulterants and from poor hygiene). Synthesised drugs (ecstasy, LSD, PCP, amphetamine) have additional issues with purity. It's really quite difficult to make exactly the drug desired without inadvertently making lots of similar compounds as well. Getting that just right is a core business of major pharma companies; bunging stuff out the door is a core business of some gangsters in an industrial park in The Hague. Adderall is a specifically chosen cocktail of related amphetamines, picked to support a desired psychopharmaceutical outcome. By contrast, if someone is buying "amphetamine" from some bloke in a club somewhere, he'll likely be getting pills with a diverse range of related substances, in unknown concentrations and proportions. Moreover, as the Adderall article notes, it's dispensed in a pill which has been engineered to control its release, whereas the illegal pill hasn't been. That all said, related compounds often have very similar effects, so most of the bad things that illegal "amphetamine" pills can do to you are also dangers of their prescribed brethren. Hopper Mine (talk) 17:20, 1 June 2009 (UTC)[reply]
Adderall is a mix of two mirror-image forms of amphetamine, dexamphetamine and levoamphetamine. Some recreational pills may be mixed, some may be dexamphetamine, some may be methamphetamine, which is a different compound. As Hopper Mine pointed out, illicit drugs can contain virtually anything, e.g. caffeine, or other benign or even toxic compounds. - cyclosarin (talk) 07:41, 7 June 2009 (UTC)[reply]

Inorganic chemistry

How is a metal excess defect responsible for the violet colour of KCl due to excess K, and the pink colour due to excess Li? Rikichowdhury (talk) 15:58, 1 June 2009 (UTC)[reply]

Wow. I am not sure I understand your question at all. KCl and LiCl are white solids and/or clear solutions. The colors only become apparent during flame ionization or gas-phase electrical ionization, as in a Geissler tube. Could you elaborate on your question so that we can answer it better? --Jayron32.talk.contribs 18:34, 1 June 2009 (UTC)[reply]
Are you talking about something like KCl0.99? How do you make this? You could make K deficient KCl due to radioactive decay, if you could find some billion year old KCl. Graeme Bartlett (talk) 01:40, 2 June 2009 (UTC)[reply]

I definitely remember studying this during my school days. It has something to do with that one unpaired excess electron which the excess K provides. It absorbs energy (from incident light rays) and goes up to higher levels, and we see the complementary color of the wavelength absorbed. I am actually quite amazed that we don't have a page on this (or maybe I looked in all the wrong places), but the fact remains that if you heat (white) KCl with excess potassium vapour, your compound turns purple. The same thing also holds for LiCl. Rkr1991 (talk) 04:38, 2 June 2009 (UTC)[reply]

Inorganic chemistry

Two compounds, NaCl and AlCl3. Here NaCl is an ionic crystal and AlCl3 is covalent in nature. But when we use a highly polar solvent, AlCl3 is treated as ionic. Why? Supriyochowdhury (talk) 16:04, 1 June 2009 (UTC)[reply]

Your understanding of "ionic" and "covalent" is somewhat flawed. Don't think of the two as two sides of a coin, but rather as two ends of a continuum. All bonds basically consist of positively charged nuclei attracted to a negatively charged electron cloud consisting of the "bonding electrons". The difference between an ionic and a covalent bond is the location of the bonding electron cloud. Consider these extremes:
covalent bond
ionic bond
The deal is that almost all compounds exist somewhere between these two extremes. Most of the time, the bonding cloud is not localized exactly in the middle, but it is also rare to find it centered on one of the two nuclei. The center of electron density in all compounds will lie somewhere between the midpoint of the bond length and the center of the more electronegative atom. Aluminum chloride lies almost exactly 50% of the way between the right and left pictures; the bonding in aluminum chloride is covalent enough that it has a relatively low melting point, but it is ionic enough for water to separate the aluminum from the chloride during the solvation process. It would not be unique in this regard; strong acids like hydrochloric acid and nitric acid behave much the same way; HCl is a gas at room temperature but it ionizes 100% in water. --Jayron32.talk.contribs 18:29, 1 June 2009 (UTC)[reply]

In short, no compound is purely ionic or purely covalent. There is always a balance. In the case of AlCl3, it is covalent because of the large positive charge of the metal cation and the relatively big anion (Fajans' rules). However, in polar solvents this bond is easily broken and you get the charged ions, because of the large difference in electronegativity between the atoms. The positively charged Al ion is surrounded by the negative end of the solvent dipole (the oxygen atom, if the solvent is water) and the negatively charged Cl ion is surrounded by the positive end (the hydrogen atoms, if the solvent is water). This leads to a stabilization called the solvation enthalpy. If this enthalpy is able to compensate for the enthalpy required to break the Al-Cl covalent bond, then the compound is solvated, which is what happens here. Rkr1991 (talk) 04:32, 2 June 2009 (UTC)[reply]

human body

what is the function of the gall bladder —Preceding unsigned comment added by Javedesmail (talkcontribs) 17:56, 1 June 2009 (UTC)[reply]

All you have to do is type "gall bladder" into the search box on the left side of the screen, and it will take you to the gall bladder article. Friday (talk) 18:01, 1 June 2009 (UTC)[reply]

Does crumpling a piece of paper increase its density?

Help me settle an argument. Please source this well. Lesath (talk) 19:50, 1 June 2009 (UTC)[reply]

No. Depending on how you define it, the density either stays the same or decreases. If you are referring to the density of the paper itself, crumpling it has no significant impact at all. If you are referring to bulk density (the mass divided by the volume it actually takes up), then it decreases - you can fit more uncrumpled pieces of paper in a given box than crumpled ones. --Tango (talk) 20:17, 1 June 2009 (UTC)[reply]
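Tango's distinction between the density of the paper itself and its bulk density can be put in rough numbers. This is a minimal sketch with assumed, illustrative figures (80 g/m^2 A4 office paper, about 0.1 mm thick, crumpled into a ball roughly 5 cm across), not measurements:

```python
import math

# Rough, assumed figures (illustration only): an A4 sheet of
# 80 g/m^2 office paper, ~0.1 mm thick, crumpled into a ~5 cm ball.
area = 0.210 * 0.297            # A4 sheet area, m^2
mass = 0.080 * area             # kg, at 80 g/m^2
thickness = 0.0001              # m

flat_volume = area * thickness                 # volume of the paper material itself
ball_volume = (4 / 3) * math.pi * 0.025 ** 3   # volume the crumpled ball occupies

print(f"material density: {mass / flat_volume:.0f} kg/m^3")  # unchanged by crumpling
print(f"bulk density:     {mass / ball_volume:.0f} kg/m^3")  # far lower than the material's
```

With these assumed numbers the material density stays the same while the bulk density of the crumpled ball drops by an order of magnitude, which is Tango's point.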
We don't have pages. What you may be hunting for is the compressive strength that crumpled paper displays. There are formulas that you can use in package design to calculate the amount of crumpling and the increase in compressive strength. (Corrugated fiberboard needs a section there, pages for BCT and ECT are missing.) For packaging there is another important factor, and that is that crumpled paper does not only have compressive strength, but elasticity too. Mechanically crumpled paper has more predictable results than manually crumpled paper. It is used by industry in packaging [1] [2] [3] [4] 71.236.26.74 (talk) 01:49, 2 June 2009 (UTC)[reply]
Paper is made up of microscopic fibres with spaces between them (see photos), and compressing it would certainly reduce the spaces and increase the density (by close inspection you can see that papers often have a slight hairiness). Whether crumpling up a sheet in your hand would compress the fibres enough to significantly reduce the volume, I'm not so sure. The question of how to measure the volume of a hairy thing is not simple either. --Maltelauridsbrigge (talk) 13:54, 2 June 2009 (UTC)[reply]
I don't think crumpling a piece of paper has any bearing on its density. Its configuration changes, but I don't think that its density changes at all. Density is a property of the material. Changing the configuration of a material such as a sheet of paper has little bearing on the material's density, in my opinion. Bus stop (talk) 14:07, 2 June 2009 (UTC)[reply]

Damaged parrot mandible

The upper mandible of this parrot was damaged in an accident, causing it to be much shorter than the lower one - a condition which makes it impossible for the bird to feed itself. It has been suggested that the lower mandible be cut back in the hope that the upper would then regrow. Can anyone advise? Rotational (talk) 20:06, 1 June 2009 (UTC)[reply]

You really need to talk to your avian veterinarian about that. The reference desk does not provide medical advice. I will say that I have seen and read about birds with prosthetic beaks in the past, though... --Kurt Shaped Box (talk) 20:15, 1 June 2009 (UTC)[reply]

I agree that you need to talk to your avian veterinarian, but I disagree that this condition makes it impossible for your parrot to feed itself. I have a green cheek conure with this exact injury. His upper beak will never grow back as the damage was too severe. Every couple of weeks, I use a dremel tool to file his lower beak down so that it doesn't overgrow the top beak, and he eats just fine. In fact, he eats exactly the same foods that he ate before the injury. If your bird is having trouble eating, then I would suggest purchasing a coffee grinder to grind his food into a coarse powder that will make it easier for him to eat. If you cannot do the filing yourself, your vet can do it. It is amazing how well any creature can adapt to its situation. [1]

Sun limb - how to make sense of the direction

I understand that the edge of the Sun is called a limb, but I don't understand why the upper left is called the Northeastern Limb.

Note the new solar prominence and sunspot identified as being on the northeastern limb.

Space Weather 1 June 2009

I get the north - why the east?Sphilbrick (talk) 21:14, 1 June 2009 (UTC)[reply]

Imagine facing the south; east would then be to your left. Look up to the sky and east would still be to your left, but north would be "up"--so east is to the left of north. In the heavens, the directions are mirror-reversed simply because one looks up towards the sky and down towards the ground. --Bowlhover (talk) 22:15, 1 June 2009 (UTC)[reply]
They can't call it "upper left" because that would depend on where on the Earth you were. What's the top for a viewer at the North Pole would be the bottom for a viewer at the South Pole (they are, in effect, standing upside-down relative to the person at the North Pole); for people at intermediate latitudes there is a gradual change in orientation. With the Sun, you don't notice this much, but with the Moon it is far more obvious - when I look out the window here at a latitude of about 55 deg N, a crescent moon looks like it is roughly upright. Near the equator it would look like it was lying down. The same happens with the Sun, there is just no way of telling unless you can see sunspots or similar. For that reason, they need a more universal way of naming the directions, so they use compass points in the manner described by Bowlhover. --Tango (talk) 22:34, 1 June 2009 (UTC)[reply]
It isn't making sense yet. Obviously, I know they couldn't call it "upper left" - I was just clarifying which item on the image I meant. It's also obvious why they can't call it top or bottom, but prefer North and South. Is it simply a convention that East and West directions are reversed in space? The claim that, since East is to your right when facing north and looking down, we want that same direction to be East in the sky behind you, sounds clever, but I would think it would be cited somewhere - I've looked and I can't find one, which may just mean my search skills aren't up to par. I think the choice is arbitrary, and I'd like to know why they chose the opposite of the intuitive choice. Sphilbrick (talk) 23:07, 1 June 2009 (UTC)[reply]
It's not arbitrary, and it was intuitive when it was chosen. Remember that Astronomy is thousands of years old, dating back to long before it was accepted that the Sun and other celestial objects were independent spherical bodies in space, with easts, and wests, and axial rotations of their own. Originally they were thought to be minor features on the Earth's sky (itself possibly a revolving solid sphere, hence "firmament"). The convention arose of the eastern side (limb) of the Moon, say, being the one nearest the eastern horizon of the Earth, because an Earth-centred point of view was the only one that seemed important.
We still usually use this convention when describing things as seen from the surface of the Earth, but now we know that the Sun, Moon and planets are worlds in their own right, we also know that if we're describing, say, Mars from a Mars-centered viewpoint, East and West will be reversed (such that on Mars the Sun will still rise in the East). This can occasionally cause confusion, and now that we've actually visited the Moon, modern Moon maps often show East and West opposite to the way they're marked on older, pre-spaceflight maps. 87.81.230.195 (talk) 02:07, 2 June 2009 (UTC)[reply]
Imagine looking in the usual way at a globe that represents planet Earth. Obviously, if North is at the top, then East is to the right. Now put yourself inside the globe and look outwards. While North is still upwards, East is now to the left. But that is exactly the situation when we look up at the sky, the "celestial sphere". --Wrongfilter (talk) 10:14, 2 June 2009 (UTC)[reply]
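The mirror reversal that Bowlhover and Wrongfilter describe can be sketched as a simple reflection. The coordinate convention here (north as the +y direction on the page, viewing from inside modelled as flipping the x-axis) is my own choice for illustration, not anything from the thread:

```python
# Directions as 2D vectors on the page; north is "up" = (0, 1).
north = (0, 1)
east_on_globe = (1, 0)   # viewed from outside (a globe), east is to the right

def seen_from_inside(direction):
    """Viewing the sphere from inside mirrors left and right: flip x."""
    x, y = direction
    return (-x, y)

print(seen_from_inside(north))          # north stays up
print(seen_from_inside(east_on_globe))  # east ends up on the left
```

The reflection leaves north/south alone but swaps left and right, which is why sky maps with north at the top put east on the left.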
I’m not yet persuaded that the decision wasn’t arbitrary. For example, even the choice of north could be made in more than one way. Once one has (arbitrarily) assigned north on the Earth, the question arises how to assign north on other planets. We start by saying it is on the axis of rotation, but that still leaves two possibilities. We could choose the end that lies on the same side of the ecliptic as the north of the earth, or we could invoke a right-hand rule to determine north. Note that these two rules do not give the same answer for all bodies in the solar system – the decision to use the right-hand rule was an arbitrary choice, albeit a choice that can be defended.
Once north is selected, south is forced, but east is not. So we have to make a choice. Again, we could pick from more than one rule. One possible rule is – orient so north is on top, and choose east so that it is on the right, just as one would do looking at earth. Apparently that rule wasn’t chosen. As wrongfilter suggests, imagine yourself inside the earth, and choose the direction that is closest to the direction you would look if you looked to the east on earth. A rule, to be sure, but a different rule (and one I suspect gives the wrong answer for Uranus).
However, as 87… points out, we don’t use that rule for all celestial bodies; we use it only for those we haven’t visited. I checked some moon maps, and with north on top, east is to the right. Ditto for Mars. Well, have we visited Mars? Humans haven’t been there, but we’ve sent equipment. But if the rule for determining east is the one based upon the view from inside the earth, except for bodies we have visited, then why does it apply to the sun? After all, we have sent a probe to the sun.
I’m looking for a coherent rule – I’ll even accept that we arbitrarily made a decision and stuck with it, but I’d like to see a source – I don’t see that this is addressed in Wikipedia, and I haven’t found it elsewhere yet. Sphilbrick (talk) 12:24, 2 June 2009 (UTC)[reply]

Outdent – incidentally, I accept that it makes sense to label the sky in the way wrongfilter suggests – so sky maps, with north on top, have east on the left. This makes perfect sense, because we can think of ourselves being inside a sphere looking at the inside of a larger sphere. Mirror reversal makes sense in that case - if someone says Castor and Pollux are in the east, it seems natural to look toward the east to find them. The convention for the celestial sphere makes perfect sense to me. What I am questioning is why the same rule is applied to the sun. From our point of view, we are outside the sun, looking at it from the outside, not the inside. Same for the moon and Mars. In the case of the moon and Mars, we locate east at pi/2 clockwise from north. In the case of the sun, we locate east at pi/2 counterclockwise from north. Clearly arbitrary – my question is, what was the thinking behind the convention? Sphilbrick (talk) 12:48, 2 June 2009 (UTC)[reply]

A consistent system would be for planets and stars to have their own compass directions, with east to the right when north is up, the same as earth, and for it also to be possible to talk about their "limbs" which are oriented to the celestial sphere and therefore have east and west reversed (and might not have north the same way up). I'm only guessing, but I think this might be the way it works. See this page I just dug up: http://www.sidleach.com/mars_081703_21.htm "South is up in this image, and the eastern limb of Mars is to the right." Presumably the eastern limb of mars is different from mars's east, and the same could apply to the sun if we ever talk about the sun's east (I don't know if anyone does). 81.131.66.245 (talk) 19:45, 2 June 2009 (UTC)[reply]
Much observation of any celestial body still takes place on Earth, using telescopes. In that case, it would make sense to use the same system of east/west as the rest of the sky to avoid confusion. If you look at amateur astronomy maps of the Moon or Mars, you'll see that they also have east 90 degrees counterclockwise from north. --Bowlhover (talk) 11:33, 3 June 2009 (UTC)[reply]
A complication is that astronomical telescopes usually show an inverted image, because this requires the least amount of lens glass (or number of mirror surfaces) in the optical train, thus maximising the brightness of the image and minimising its degradation, given that no lens or mirror is perfect. When such images are photographed and printed, there is endless scope (sorry!) for - deliberately or accidentally - further changing the orientation of the picture. Everyone in the astronomical community (amateur and professional) is well aware of this and double-checks alleged orientations against reality when it matters, but mistakes or ambiguities in published pictures may well mislead the layperson.
Further to Sphilbrick and Bowlhover's remarks above, I've just checked the 2 editions of Norton's Star Atlas I have to hand. In the 1946 edition, the "Sketch Map of the Moon (As seen in an inverting telescope)" marks South at the top and East on the right, with Mare Crisium near the North West (lower left) limb; in the 1973 edition, the (much superior) "Map of the Moon (. . . based on a drawing published in 1926 . . .)" marks South at the top and East on the left, with Crisium near the North East (lower left) limb. In Patrick Moore & Garry Hunt's Atlas of the Solar System (1990 edition), the multi-page lunar maps designate North at the top and East on the right. with Crisium near the North East (upper right) limb. 87.81.230.195 (talk) 20:33, 3 June 2009 (UTC)[reply]

Turing test

Is it possible for a human to fail the Turing test? —Preceding unsigned comment added by 81.76.42.229 (talk) 21:29, 1 June 2009 (UTC)[reply]

Certainly, but false negatives aren't a result of interest. Anyone could respond to a tester in such a way as to be indistinguishable from a computer program. — Lomn 21:46, 1 June 2009 (UTC)[reply]
An autistic person (or Rush Limbaugh) could easily give nonsensical responses. Clarityfiend (talk) 21:50, 1 June 2009 (UTC)[reply]
More than that, I could (as a Turing testee) bring the source code to a chat bot and manually interpret it to determine my test responses. It is comparatively trivial for a human to not only fail the Turing test but to be completely indistinguishable from a known computer. — Lomn 02:19, 2 June 2009 (UTC)[reply]
You would have to interpret it in real time, though, and that's not so easy. -- BenRG (talk) 14:31, 2 June 2009 (UTC)[reply]
Purple, 7, the east. -Arch dude (talk) 22:30, 1 June 2009 (UTC)[reply]
Yes it happens sometimes e.g. [5]. Dmcq (talk) 22:32, 1 June 2009 (UTC)[reply]
What makes you believe is it possible for a human to fail the Turing test? --Sean 13:16, 2 June 2009 (UTC)[reply]
How do you feel about Purple, 7, the east? Nimur (talk) 15:02, 2 June 2009 (UTC)[reply]
Oh... believe is it possible for a human to fail the Turing test? SteveBaker (talk) 02:53, 3 June 2009 (UTC)[reply]
Quote from Lessons from a Restricted Turing Test: "Ms. Cynthia Clay, the Shakespeare aficionado, was thrice misclassified as a computer. At least one of the judges made her classification on the premise that '[no] human would have that amount of knowledge about Shakespeare.'" -- BenRG (talk) 14:31, 2 June 2009 (UTC)[reply]
I could not resist in linking this famous comic strip :) [6] ... aaaand, this one too :) [7] --131.188.3.21 (talk) 22:58, 4 June 2009 (UTC)[reply]
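The joke replies above ("What makes you believe...", "How do you feel about...") mimic the keyword-reflection trick of early chatbots such as ELIZA, which is exactly how a human could deliberately fail the test. A minimal sketch; the templates and the crude pronoun swap are illustrative inventions of mine, not Weizenbaum's actual DOCTOR script:

```python
import random

# Canned templates and a crude pronoun swap - illustrative assumptions,
# not Weizenbaum's actual DOCTOR script.
SWAPS = {"i": "you", "am": "are", "my": "your", "you": "I"}
TEMPLATES = [
    "What makes you believe {}?",
    "How do you feel about {}?",
    "Why do you say {}?",
]

def reflect(text):
    """Lower-case the input, drop trailing punctuation, swap pronouns."""
    words = text.lower().rstrip("?.!").split()
    return " ".join(SWAPS.get(w, w) for w in words)

def respond(text):
    """Echo the user's own words back inside a random canned template."""
    return random.choice(TEMPLATES).format(reflect(text))

print(respond("Is it possible for a human to fail the Turing test"))
```

A testee who answers every question this way is, as Lomn notes, indistinguishable from a known computer program.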

Easiest cell to extract DNA from

For humans, which cell would be the easiest to extract DNA from? I don't just mean getting the DNA, the cell has to be accessible as well. Any help would be appreciated. --The Dark Side (talk) 22:22, 1 June 2009 (UTC)[reply]

You can get DNA from saliva, or from hair, and I believe from skin, blood, and basically anything that makes up a human. Take your pick. --KageTora - (영호 (影虎)) (talk) 22:50, 1 June 2009 (UTC)[reply]
But is any one easier to use than the others? --The Dark Side (talk) 22:56, 1 June 2009 (UTC)[reply]
Ok, imagine the story of Goldilocks, and imagine an alternative ending where she left before the bears came home. They could find out who it was who ate the porridge, broke the chair, and slept in the beds just by the forensic evidence (assuming she wasn't a first-time offender). From the spoon she used to eat the porridge, there may be some residual saliva, which could be used to extract DNA. It would also help if she licked the spoon clean, and even more so if she licked the bowl clean. When she sat on the chair that broke, she would have fallen, and possibly cut herself on the broken chair (not mentioned in the original story, but this is an alternative). This would be a perfect source of DNA. Failing that, when she slept in Baby Bear's bed, it's pretty well a sure thing that some of her hair would be on the pillow, and considering baby bears don't usually have long blonde hair, they would be easy to spot. Another source of DNA in the bag. The FBI would be round her house in a shot! --KageTora - (영호 (影虎)) (talk) 23:15, 1 June 2009 (UTC)[reply]
Okay, I understand that it's easy to collect cells. What I'm really after is the ease with which the DNA can be extracted. This would probably have to take into account the cell membrane and the features that differentiate the various human cells. --The Dark Side (talk) 23:55, 1 June 2009 (UTC)[reply]
Extracting DNA from any recently living nucleated cell is so trivial that it's essentially moot to decide which cells one is extracting from. Forensic scientists don't even differentiate it that way. A single microscopic sample may contain DNA from dozens of different kinds of cells from the same individual. For example, take a blood stain. The sample obviously contains DNA from white blood cells, but it also likely contains skin cells (of which there are many different types, vertically striated, from the same cut even!), subcutaneous fat cells, muscle cells, and cells from any number of other sources. With tools like polymerase chain reaction any DNA sample can be amplified to a useful amount, so it really doesn't matter where it comes from. --Jayron32.talk.contribs 04:29, 2 June 2009 (UTC)[reply]
And that is exactly what I was implying, but said in a more scientific way. --KageTora - (영호 (影虎)) (talk) 10:08, 2 June 2009 (UTC)[reply]
A buccal swab (swabbing the inside of the cheek) is the most prevalent way of DNA sampling, assuming you haven't already drawn blood for some other purpose. -- 128.104.112.106 (talk) 20:23, 2 June 2009 (UTC)[reply]
Indeed, a buccal swab (can't believe we don't have an article yet) is probably the least invasive method that is most widely used. It isn't as "good" as blood, but it's a hell of a lot easier to sample. In a job I used to do, I would extract DNA from both blood (a few ml) and buccal swabs on a daily basis. I must have done many hundreds, if not thousands, of both. The number of nucleated cells in 1 ml of blood outnumbers that from the best buccal swabs, so you would get much more DNA from blood. The quality of the DNA would also be better, as the blood environment is more consistent than the mouth. We found that the quality of DNA from buccal swabs varied greatly depending on what was eaten immediately prior to the swab. Particularly problematic was when a kid ate an apple just prior to swabbing; the DNA would always be crap. We never formally tested this, but our assumption was that the acidity of the apple was to blame. Rockpocket 01:28, 3 June 2009 (UTC)[reply]
I remember reading that it is easy to extract human DNA in a home experiment suitable for schoolchildren, but do not remember how it's done, except that it requires a beaker and a glass rod. 78.144.244.22 (talk) 22:31, 2 June 2009 (UTC)[reply]

Untreated Cancer

Horrible subject, this, but I've been thinking. What happens if cancer goes completely untreated? Of course, death is the usual consequence, but I am asking about what happens in the body leading up to that. Also, I know there are many different places in the body that can become cancerous, and so they will all exhibit differing symptoms from onset until the end, so this may be a difficult question to answer. I am interested in this because in the "modern developed world" we have cancer treatments, while in older eras there were none (or at least, none as we have now). To complicate matters more, there seem to have been many studies that suggest that untreated cancer victims tend to live longer than those who are treated (sorry, no reliable links). Another question is, do animals get cancer? I'm sure they do, but how is this treated, if at all? Thanks. And please do not post links with pictures of what it looks like, as I am a tad squeamish. :) --KageTora - (영호 (影虎)) (talk) 22:48, 1 June 2009 (UTC)[reply]

"there seem to have been many studies that suggest that untreated cancer victims tend to live longer than those who are treated" — if there aren't any reliable links, I would be inclined to suggest it is because such studies don't exist.
This is not to say that there are no trials which show some individual treatments are ineffective — some drugs and therapies don't work as well as animal models or biological theory might suggest, and the only way to know for sure is to do clinical trials. Further, there are some cases where watchful waiting (monitoring, without immediate treatment) is considered an appropriate response to a cancer diagnosis. Finally, opting for treatment does (potentially) expose a patient to immediate risks in return for the potential later reward. Let's say a surgical intervention cures 90% of patients, but 10% die on the table. One could say that the latter 10% did significantly worse than they would have had they not been treated — but it probably wouldn't be justifiable not to offer treatment. TenOfAllTrades(talk) 00:01, 2 June 2009 (UTC)[reply]
Animals do get cancer. In the wild that quickly puts them in the "gets eaten" category, so it's not a top animal channel item. Rats and mice serve as models for cancer studies. There are viruses that cause cancer (e.g. Feline leukemia virus or HPV). Dogs are listed as having quite a variety of cancers, including one that is transmitted through sexual contact. A new form of cancer in cats is injection site carcinoma (no page??). Ironically, it is caused at sites of inoculations that are supposed to protect our feline friends from diseases. Vaccinations are now moved to legs, so that amputation becomes an option. Radiation treatment and chemotherapy are costly, so some owners opt to have their animal put down. Some of the criticism of cancer treatments is that tumors get identified and removed/treated aggressively that would have gone into remission or never developed if left to themselves. [8] [9] [10] Clinical study results are hard to come by because of the dire results if the tumor remains untreated and metastasizes instead of going into remission. We have no way of predicting outcomes. AFAIK it is not disputed that there are cases when the immune system manages to cope with cancerous tumors (e.g. Phoebe Snetsinger). Another complaint is that biopsies used in diagnosis cause some tumors to metastasize which would not have done so otherwise. Some cases have been described, but clinical studies hinge on the same problem as above. [11] 71.236.26.74 (talk) 01:05, 2 June 2009 (UTC)[reply]

Cosmological units

Why does cosmology use such odd units (parsecs, lightyears, solar masses, etc) instead of the SI units like almost all the rest of science? And why are there so many different distance units in use? It seems to be not uncommon to be discussing kilometers, parsecs, lightyears and astronomical units at once. For example, conventionally, Hubble's constant is given in (km/s)/Mpc, which seems insane to me -- would s^-1 not have sufficed? —Preceding unsigned comment added by 79.72.180.91 (talk) 23:02, 1 June 2009 (UTC)[reply]

Probably because the numbers would be astronomically large in SI units. It is easier to grasp the mass of a star in terms of solar mass than kilograms or a long distance in light years rather than meters. Bubba73 (talk), 23:19, 1 June 2009 (UTC)[reply]
Are there lots of people writing and reading about cosmology who do not understand numbers like 10^31? Why should one area of science stick with non-SI units? Edison (talk) 23:44, 1 June 2009 (UTC)[reply]
The parsec stems directly from one method used to measure the relative distances of stars from the Solar System. It's easier to determine that one star is, say, 2 parsecs away while another is 4 than it is to calculate exactly how far a parsec is, so - especially when you're comparing lots of similar measurements - it saves some trivial and unnecessary maths to leave parsecs unconverted. Parsec is short for 'parallax second', and 1 parsec is the distance (about 3.26 light years) at which an object would exhibit an apparent (parallax) shift in position of 2 arc seconds as the revolution of the Earth about the Sun carries us across its orbit over 6 months; or conversely, it's the distance at which a length of 1 Astronomical Unit (see below) would subtend an angle of 1 second.
Astronomical Units similarly derive directly from comparative methods used to measure the sizes of orbits in the Solar System relative to Earth's (whose orbit is 1 AU in radius, 2 AU in diameter). Again, historically we could measure these sizes relative to each other more accurately than we could measure the actual distances involved.
A light year, as you know Bob, is the distance light travels in one year. Apart from giving usefully low numbers (nearest star 4.2 ly, diameter of Milky Way about 100,000 ly, nearest major external galaxy about 2,000,000 ly), it's a useful reminder that we're seeing something x ly away as it was x years ago.
Hubble's constant is, again, left in the form in which it gets measured, in part because it isn't a pure distance, but rather a consequence of the way the Universe is expanding whose fine details are still subject to some uncertainty.
Of course, it's not a distance at all, but a rate. I knew that really. Doh!. 87.81.230.195 (talk) 20:15, 2 June 2009 (UTC)[reply]
These (and other) methods of distance determination do not (yet) give precisely interconvertible results, just as different methods of age determination in archaeology (say C14 and thermoluminescence) give answers that have to be carefully calibrated rather than ones that can be taken at face value. Converting them all to SI would mask these uncertainties.
Solar masses are used because it's more useful and informative to compare the relative masses of other stars to each other and to our Sun's than it is to know their absolute tonnages, which again are far harder to determine (and give insanely large numbers in SI units). The luminosities of stars are also often measured in units of Solar luminosity.
Exactly, and the absolute mass of an astronomical body is hard to pin down because the gravitational constant is not known to great precision. --Bowlhover (talk) 00:26, 3 June 2009 (UTC)[reply]
In addition to avoiding often unnecessary conversions, using each of these units in their traditional contexts where likely magnitudes are familiar avoids continually dealing with SI unit quantities containing large exponents, in which it's easy to make unobvious mistakes. Astronomers will cheerfully convert them as necessary to communicate with the lay public (who seem perfectly happy with light years and, often, AUs anyway): if any non-astronomer scientists want measurements in SI units, the conversion factors are readily available and they're quite capable of performing the calculations themselves, and welcome to, ta very much. 87.81.230.195 (talk) 01:27, 2 June 2009 (UTC)[reply]
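The parsec definition given above is easy to check numerically. A sketch using standard values for the AU and the light year (constants looked up by me, not taken from the thread):

```python
import math

# Standard values (looked up, not from the discussion):
AU = 1.495978707e11                # m, the astronomical unit
LIGHT_YEAR = 9.460730472580800e15  # m, one Julian light year

# A parsec is the distance at which 1 AU subtends an angle of 1 arc second.
one_arcsec = math.radians(1 / 3600)
parsec = AU / math.tan(one_arcsec)

print(f"1 pc = {parsec:.4e} m")                # ~3.0857e16 m
print(f"1 pc = {parsec / LIGHT_YEAR:.3f} ly")  # ~3.262 ly
```

This recovers the "about 3.26 light years" figure quoted above directly from the small-angle geometry.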
It's not like cosmologists and astronomers are alone in this though. Particle physicists use all sorts of specialised units - Electron volts instead of Joules, an entire system of Planck units that bear no resemblance to SI units. Electrical systems engineers use Kilowatt hours instead of Joules, Aerospace engineers still use 'knots' for airspeed and feet for altitude - despite using metric for almost everything else. Biologists and Atmospheric engineers measure pressures in millimeters of mercury instead of Pascals. Computer scientists abuse the 'k' and 'M' prefixes to mean 1024x and 1048576x respectively. All sorts of people abuse and adapt the system for convenience. If it makes communications clearer - then that's a good thing. It would be a real pain in the ass to have to talk about having 110 kg·m^2·s^-3·A^-1 electricity coming out of the wall socket. SteveBaker (talk) 02:28, 2 June 2009 (UTC)[reply]
And atomic masses are measured in atomic mass units instead of grams, high-explosive energy output is measured in tons of TNT instead of joules, food energy is measured with Calories (capital C), each equivalent to 1 kcal, instead of joules, there are several different conventions for electromagnetic units, and the cgs system of units is as popular as the SI system of units. The list is quite impressive. There is no rule (neither implicit nor explicit) saying that scientists should use SI units. We are only expected to state the unit system being used clearly. Dauto (talk) 03:24, 2 June 2009 (UTC)[reply]
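On the original question about Hubble's constant: converting the conventional (km/s)/Mpc form into a pure rate in s^-1 is a one-liner. A sketch using an assumed round value of H0 = 70 (km/s)/Mpc (roughly the measured estimates of the time):

```python
# Assumed round value; measured estimates circa 2009 were around 70 (km/s)/Mpc.
H0 = 70.0                   # (km/s)/Mpc
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec

H0_per_second = H0 / KM_PER_MPC      # the km and the Mpc cancel, leaving s^-1
seconds_per_gyr = 3.1557e7 * 1e9     # seconds per Julian year, times 1e9
hubble_time_gyr = 1 / H0_per_second / seconds_per_gyr

print(f"H0   = {H0_per_second:.3e} per second")  # ~2.27e-18 s^-1
print(f"1/H0 = {hubble_time_gyr:.1f} Gyr")       # ~14 Gyr (the Hubble time)
```

The (km/s)/Mpc form persists because it pairs the two quantities actually measured: a recession speed in km/s against a distance in Mpc.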

Related question: Why is it that astronomers, who deal with the largest and heaviest things in the universe, generally use CGS units instead of MKS units? I understand it's probably convention, but how did such a bizarre convention come about (as it makes the numbers even more cumbersomely large than they already are)? -RunningOnBrains(talk page) 04:52, 2 June 2009 (UTC)[reply]

Historical accident, and in that case it really does not matter much - just add another two to the exponent. Note that most of the non-SI-units in use are units of convenience - lightyears, AU, Parsecs, Electron Volts, u, all have a direct and intuitive connection with the domain of discourse. The SI-units, and many traditional units, have lost most of that connection, and are now very abstract and generic. How often do you consider the fact that the meter is approximately one ten-millionth of the distance from the pole to the equator? Or that a mile is 1000 double steps of a Roman legionnaire (well, for frictionless legionnaires of standard weight on the Via Appia, I suppose ;-). --Stephan Schulz (talk) 09:29, 2 June 2009 (UTC)
Assume a spherical legionnaire... —Tamfang (talk) 03:44, 4 June 2009 (UTC) [reply]
Well, also remember that the metric system is supposed to be devoid of context. It's not like they created the metric system to measure the meridian—they knew the length of the meridian and worked backwards, the idea being that anyone could then reconstruct the length if need be. They are meant to be context-free units. Which have their ups and downs, as noted. --98.217.14.211 (talk) 12:38, 2 June 2009 (UTC)[reply]
There's a legend that the metre was conceived as the length of a pendulum whose half-period is 1 sec, until someone with colonial experience chimed in with the French equivalent of Afraid that won't do, old chap (gravity, and thus the appropriate pendulum length, varies with latitude). Sadly, one hears that there's no truth in it. —Tamfang (talk) 03:44, 4 June 2009 (UTC)[reply]
The use of parsecs, kilocalories, and other special units used above are objectionably non-SI, but still far better than the use by science popularizers such as Science Daily, adapting material from the American Chemical Society, originally published in ACSNano, which says that silver nanoparticles useful in medical treatment are "1/50,000th the diameter of a human hair" without specifying red, blonde or brunette, and from what part of the body. This type of nonsense is like comparing some object in space to so many "city blocks," another widely varying metric. Edison (talk) 15:58, 2 June 2009 (UTC)[reply]
See our list of unusual units of measurement and the associated list of humorous units of measurement. My personal favourite is the beard-second. Gandalf61 (talk) 16:32, 2 June 2009 (UTC)[reply]


June 2

What are the "tubes" on USB and similar cables?

My wife asked me about this last night. My guess is that it's some sort of inductance coil to reduce RF interference, but I realized that I don't know for sure. Now I'd like to know. Donald Hosek (talk) 01:20, 2 June 2009 (UTC)[reply]

A ferrite ring. It stops common mode signals traveling on the cable, and your guess is correct. They may be required to meet an EMI standard, or for electromagnetic compatibility, to stop the device being disrupted by a strong nearby signal, such as a mobile phone. Graeme Bartlett (talk) 01:33, 2 June 2009 (UTC)[reply]
I fixed your redlink to common mode - hope you don't mind SpinningSpark 18:14, 2 June 2009 (UTC)[reply]
Thanks, the red link was a prompt for someone to write an article. I have now made a disambig page as there are two quite distinct meanings and four articles. Graeme Bartlett (talk) 06:20, 3 June 2009 (UTC)[reply]

Coughing and sneezing while unconscious

Can humans cough or sneeze while they are unconscious? -- Beland (talk) 05:05, 2 June 2009 (UTC)[reply]

It depends on the level of unconsciousness. Look at the Glasgow Coma Scale and you will see that in lighter states of unconsciousness the patient may be able to obey verbal commands. It seems possible that in this condition they would cough and sneeze. Richard Avery (talk) 07:18, 2 June 2009 (UTC)[reply]
Cough is a reflex, not requiring any cognitive input. Even in deep coma people will cough with irritation of the trachea (I've seen this many times). I don't know whether the same is true of sneezing, but I would guess that the right irritant could induce a sneeze in an unconscious person. --Scray (talk) 09:48, 2 June 2009 (UTC)[reply]
So what would be the purpose of endotracheal suction of saliva and secretions if the unconscious patient is able to cough? 86.4.190.83 (talk) 13:08, 2 June 2009 (UTC)[reply]
They may reflexively cough, but still not with enough vigor to clear their own air passages. Like blinking, there are likely both reflexive and voluntary components to coughing, and unconsciousness may hinder the clearing process without completely eliminating the cough reflex. Additionally, as Richard Avery has noted, there are different severities of unconsciousness, and every patient is in a way unique; one patient may be able to clear his own air passages unconsciously, and another may not. --Jayron32.talk.contribs 13:31, 2 June 2009 (UTC)[reply]

Interesting. Has anyone ever seen anyone sneeze while they are unconscious? -- Beland (talk) 15:17, 2 June 2009 (UTC)[reply]

GPS in plane

Are current planes not equipped with GPS? Or is it within the black box, which can survive a crash? With current technology, is it possible to equip life vests with GPS? Thanks for the answer. roscoe_x (talk) 05:14, 2 June 2009 (UTC)[reply]

Many commercial aircraft are equipped with GPS: GPS_navigation_device#Commercial_Aviation. Smaller and older aircraft often do not have GPS. The GPS is not part of the black box (flight data recorder), but the data from the GPS is required to be stored on the flight data recorder (see [12] item 39). Yes, it is possible to equip life vests with GPS. Sancho
Also it would be possible to equip each life vest with a satellite distress beacon. Of course it's not going to happen on a commercial airliner because it is not required by law and it would increase ticket prices by tens of cents. --203.22.236.14 (talk) 07:43, 2 June 2009 (UTC)[reply]


I think the OP is slightly confused about how GPS works. GPS is passive. A GPS receiver doesn't transmit anything; it just receives information from satellites to work out its location, meaning a life vest equipped with GPS wouldn't be much use other than to tell the wearer where they are. As far as I know you cannot be tracked via GPS. Gunrun (talk) 10:47, 2 June 2009 (UTC)[reply]

Quite so, GPS is passive (but a nearby GPS receiver could be detected due to its internal amplification of the GPS signal). But I believe the OP's real question is "That AirFrance plane seems hard to find, couldn't it have some device to make it easy to find?" That device would be something like GPS+distress beacon and yes, it's very possible. Actually, I'm surprised that aeroplanes don't constantly upload all data that would be stored in the blackbox via the satellite phone network. --Polysylabic Pseudonym (talk) 11:03, 2 June 2009 (UTC)[reply]

From what I've found, there are regulatory rules around carrying distress signals. Virtually all commercial planes must carry some - automated and manual. There are even aircraft life-rafts with them built in. Not sure about life-jackets on commercial flights, but they exist. My understanding of the Air France disappearance is that the plane was flying over a part of the world that gets very limited radar/satellite coverage, and that is what has caused the difficulty in finding it. 194.221.133.226 (talk) 11:12, 2 June 2009 (UTC)[reply]

Yes - as others have pointed out - you have to combine a GPS unit with some kind of a transmitter in order for someone else to find out where that unit is. However, in the case of the Air France disaster, the plane probably crashed somewhere in the mid-Atlantic - perhaps 800 miles from the nearest land. Being that far from civilisation means that there is certainly no cellphone coverage and you'd need a pretty powerful transmitter to reach anywhere useful. Just about the only practical technology would be a satellite phone. But you can't put all of that technology into something like a life vest because it's simply too expensive. There are something like 50,000 large passenger aircraft in the world - that's probably five million life vests - of which perhaps a few dozen ever get used for their intended purpose! Adding a satellite phone to each one...with a battery that's kept constantly charged - and replaced when it breaks...a phone that'll survive the worst a plane crash can do...that's an expensive thing - many hundreds of dollars each, certainly. You'd perhaps need to spend several billion dollars to add such a feature to the world's airliners...and quite frankly - it's a total waste of money because airline life vests are so very rarely used.
In the case of the Air France disaster - it appears that the aircraft was in mid-Atlantic, so it would have been flying at perhaps 30,000 feet. Whatever happened was so fast that a mayday signal couldn't be gotten out - and involved a sudden loss of cabin pressure and electricity. Basically - that means an explosion or catastrophic structural failure of some kind - and crash from 30,000 feet. Nobody is going to survive that. Having a way to find a few lifejackets floating out from the wreckage afterwards cannot possibly justify the cost.
The aircraft's own flight recorder does have various transmitters to aid in finding it later - but that assumes that the thing survived the initial disaster - and a fall from 30,000' and a good soaking in salt water afterwards. Those things are tough - but they aren't invincible. Finding the one from flight 447 in a search area of perhaps half a million square miles is going to be very hard indeed...it's probably impossible.
SteveBaker (talk) 12:27, 2 June 2009 (UTC)[reply]
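The cost estimate above can be sanity-checked with a quick back-of-envelope script. All three figures are the assumptions stated in the post (50,000 aircraft, roughly 100 vests each, a few hundred dollars per beacon), not authoritative data:

```python
# Back-of-envelope check of the life-vest beacon cost estimate.
# All inputs are the poster's rough assumptions, not real fleet data.
AIRCRAFT = 50_000           # large passenger aircraft worldwide (assumed)
VESTS_PER_AIRCRAFT = 100    # assumed average per aircraft
COST_PER_BEACON = 500       # USD per crash-proof satellite beacon (assumed)

vests = AIRCRAFT * VESTS_PER_AIRCRAFT
total_cost = vests * COST_PER_BEACON
print(f"{vests:,} vests, ~${total_cost / 1e9:.1f} billion to equip them all")
# -> 5,000,000 vests, ~$2.5 billion to equip them all
```

With these inputs the total lands in the "several billion dollars" range quoted above; the conclusion is not sensitive to the exact unit cost.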
Steve, an EPIRB radio beacon transmits via Satellite. Also the phones I was talking about were satellite phones which likewise have global coverage. I'm pretty sure there's global coverage. It should be fairly trivial to install at least one EPIRB in a location on an aeroplane where it will in any disaster end up separated from the wreckage and floating (radio isn't much good under hundreds of metres of water). Come to think of it, all of the black box data -- were it not automatically transmitted during flight over the satellite phone network -- could be easily stored on a small solid state memory device attached to such a beacon to make that also easy to find. --Polysylabic Pseudonym (talk) 12:47, 2 June 2009 (UTC)[reply]
"...is it possible to equip life vest with GPS?" Sure, here's one such set up. Many are carried on military and general aviation aircraft. Polysylabic Pseudonym above has linked to the two relevant articles.—eric 13:24, 2 June 2009 (UTC)[reply]
And the ugly truth of telecommunications finally comes out! Instantaneous, point-to-point radio contact is much less seamless than it appears to the untrained eye. Just because you can whip out your mobile phone and dial a long-distance number without interference from every other mobile phone on the street doesn't mean an airplane can do the same! Our radio systems, in general, are horribly short-range. While it is true that we do have some technologies for long-range transmissions, the amazing web of instantaneous digital connectivity to every other part of the world is only made possible because in most of our daily life, we are never more than one mile from the nearest cell tower, not more than 100 feet from the nearest 802.11 access point. This allows us to use high-frequency, wide-band shared channels. But these channels are not very good for long-range transmissions. Although it appears to be "point-to-point", it's a really complex set of relaying to get you off of the shared radio channel as proximally as possible. So, when you make a "wire-free" long distance call from your mobile phone to your friend in Angola, the wireless hop is not actually all that far. A series of base-stations route you to a cable (optical fiber) network, and the signal travels by wire for a very very large portion of its journey. It crosses the ocean by submarine communications cable, hits a couple more optical cable routers, and finally gets sent out to a field transmitter which is not more than a few thousand meters from the intended recipient.
An aircraft which is 3000 kilometers from the nearest base station has surprisingly few options for telecommunication, and they are not all that high-tech. On board, there is a suite of radios, ranging from VHF and UHF digital radios to very "1950s" style HF (shortwave) radios with ranges of around a few hundred kilometers. While in flight, the aircraft will often fly "convoy-style", maintaining communication to the ground by proxy over an HF channel to another aircraft a few hundred kilometers away. This radio signal is very low-bandwidth and not exactly reliable. (This has caused problems before). Unfortunately, the sort of nifty broad-band technologies we've grown very accustomed to are based on much higher frequency signals which have much shorter range. Trying to transmit a VHF radio or a 2.4 GHz microwave "mobile-phone" over a thousand kilometers is just not practical.
Maintaining a bidirectional satellite-based communication link during the entire flight would be really quite challenging, although not impossible. Therein lies the "ten extra cents per seat" which was casually described above. Now, here comes another ugly truth about satellites - they're not really global in coverage! To receive effective satellite service, a location must have one or more satellites above the horizon and in view of the transceiver. To save power and decrease launch costs, these communication satellites are NOT in a geostationary orbit - rather, they fly in constellations in predetermined orbits. These orbits do not necessarily provide full coverage for the entire planet (at least not out to "five nines" or 24-hours, 7-days-a-week). (Which telecommunication company wants to pay huge sums of money to provide fantastic satellite reception to the middle of the ocean? The number of subscribers out there is a little low). So, even satellite-based schemes might not provide a 100% uptime on the communication link.
In summary, the transoceanic airplane provides an interesting insight into our communication infrastructure. When isolated from the enormous network of ground relays, optical cables, and effective satellite coverage, the only viable solutions are pretty old-fashioned shortwave radios. Because of fundamental bandwidth limitations, these links are not suitable for an "always-on", constant monitor of the aircraft's position - imagine what would happen if every aircraft on the planet started broadcasting wideband digital updates over globally-ranged transmissions at 9 MHz - there's a shared channel, and it'd get used up pretty darn quick. (If you can't imagine, let me present an analogy - the idea is that you want every individual on a football field to yell all their conversations loud enough so that every other person can hear them, even on the other side of the field. But everyone will be yelling at the same time, and constantly hearing all the chatter from every other person - it destroys any hope for effective communication). Nimur (talk) 15:30, 2 June 2009 (UTC)[reply]
Awesome explanations! So there is no satellite that covers the crash area of AF447? If there was one, could it have located the wreck by simply looking? Without a 2-way radio communication. Satellite imagery says there are satellites that can distinguish objects on the ground at least 50 cm apart. Jay (talk) 10:03, 3 June 2009 (UTC)[reply]
No - you really don't understand the immensity of the task. They have actually found some wreckage now - just a few small parts. But the search area was immense - thousands to hundreds of thousands of square miles...they got lucky because another aircraft spotted burning wreckage on the ocean while it was still dark and that stood out fairly clearly. You don't take satellite photos at night. Suppose you take a photo (even from some super-high rez military spy satellite) with enough resolution that you can spot wreckage. 50cm resolution isn't enough - let's suppose it needs 20cm resolution to see a floating life vest that's maybe 40cm across for what it is. OK so on your (roughly) 1000x1000 resolution computer screen, you can start looking at the photos this satellite produced - right? You're looking for that life vest. But at 20cm resolution, each 1000x1000 pixel photo covers just 200x200 meters of the ocean. To look for wreckage over just one square kilometer, you'd have to carefully inspect 25 photographs looking for a TINY orange speck just a couple of pixels across that's just barely a smudge on your screen. To search 100,000 square kilometers, you have to look CAREFULLY through 2.5 million photographs of dull, boring ocean! Do you have any conception of how impossible that is?!! SteveBaker (talk) 13:34, 3 June 2009 (UTC)[reply]
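The photo-count arithmetic above can be reproduced directly (all figures come from the post itself, not from any real satellite's specifications):

```python
# Reproducing the search-photo arithmetic from the post above.
px = 1000                    # photo displayed at ~1000x1000 pixels
res_m = 0.20                 # assumed 20 cm per pixel
photo_side_m = px * res_m    # each photo spans 200 m on a side

photos_per_km2 = (1000 / photo_side_m) ** 2   # 25 photos per square km
search_area_km2 = 100_000
total_photos = photos_per_km2 * search_area_km2
print(f"{total_photos:,.0f} photos to inspect")
# -> 2,500,000 photos to inspect
```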
I didn't think of the human angle to it after the satellite had done its job. Also, I missed out that night could be a hindrance, though I did consider cloud cover. But shouldn't the task be fairly straightforward for a digital image processing software - to detect an orange, or yellow or red dot within a limited-coloured canvas of ocean? Jay (talk) 14:30, 3 June 2009 (UTC)[reply]
That's assuming you actually got a photo that shows the orange life vest. Any number of things could obscure it, or they might not be wearing a vest at all (holding onto debris, for instance). — The Hand That Feeds You:Bite 20:14, 3 June 2009 (UTC)[reply]
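For what the automated colour-scan idea discussed above might look like, here is a toy sketch on a tiny synthetic image. The RGB values, thresholds, and image are invented for illustration; real satellite imagery would be far noisier, with glare, whitecaps, and cloud confounding a simple threshold:

```python
# Toy version of "find the orange life vest" by pixel scanning.
# The 8x8 "image" and colour thresholds are made up for illustration.
OCEAN = (20, 60, 120)    # assumed deep-water blue
VEST = (255, 120, 10)    # assumed life-vest orange

image = [[OCEAN] * 8 for _ in range(8)]
image[3][5] = VEST       # plant one orange pixel

def looks_orange(rgb):
    """Crude threshold: high red, moderate green, low blue."""
    r, g, b = rgb
    return r > 200 and 60 < g < 180 and b < 80

hits = [(y, x) for y, row in enumerate(image)
        for x, px in enumerate(row) if looks_orange(px)]
print(hits)  # -> [(3, 5)]
```

The scan itself is trivial; as the replies above note, the hard part is that the target may never appear in any frame at all.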
"[Sarkozy] said France has asked for help from U.S. satellite equipment to locate the plane." (http://www.wral.com/news/national_world/world/story/5254928/). Would be interesting to see what the US provided. Jay (talk) 03:47, 4 June 2009 (UTC)[reply]
Commercial aircraft are equipped with ACARS, which can send low-volume data. The system uses VHF radio when in line of sight of a fixed tower (essentially when over land), SATCOM (Inmarsat) when over ocean except at high latitude, and HF when over the poles. The ACARS unit aboard the aircraft is used, among other things, to send critical maintenance information (engine performing out of specification) and takeoff and landing information. This info is gathered automatically using the datalink that connects the various computers aboard the aircraft. The datalink always has location and time info from the aircraft navigation system, which generally uses GPS plus an inertial navigator, at least. There is nothing to prevent the ACARS from sending the aircraft's location if something bad happens, but this has not been done because it was not one of the goals of the ACARS system. Flight 447 sent out several maintenance alerts, presumably via Inmarsat, and could easily have sent a location message if anyone had thought to add this function. This is trivially easy to see -- in retrospect. -Arch dude (talk) 18:45, 2 June 2009 (UTC)[reply]
Update: The ACARS messages do in fact contain the location information. The last flight 447 message had location information to 3 decimal places (about 100 meters.) The problem, apparently, is that this is classified as "maintenance data" not "critical flight data," so it (apparently) was not conveyed to the SAR crews. I suspect that this will change in the future. -Arch dude (talk) 22:01, 2 June 2009 (UTC)[reply]
I am puzzled that all planes are not required to continuously send out their location, say once a minute. If something suddenly happens to a plane and communication is cut off, there is no time to send a message at that point. A simple flight number and GPS location (which includes altitude) once a minute is not a lot to ask for. That's maybe 30 bytes of data per plane per minute. I'm sure the cost is negligible for hooking into something like Iridium or Globalstar - and besides you probably get great reception from 10km up. Is there any downside to this? 196.210.200.167 (talk) 19:29, 4 June 2009 (UTC) Eon[reply]
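To put the proposed "30 bytes per plane per minute" in perspective, here is a rough aggregate for the whole fleet at once. The airborne-aircraft count is a hypothetical round number, not a real statistic:

```python
# Aggregate data rate if every airborne airliner reported once a minute.
# The plane count is an assumed round figure for illustration.
planes_airborne = 10_000    # assumed simultaneous flights worldwide
bytes_per_min = 30          # flight number + GPS fix, per the post above

total_bps = planes_airborne * bytes_per_min * 8 / 60
print(f"{total_bps:,.0f} bits/s for the entire fleet")
# -> 40,000 bits/s for the entire fleet
```

Even under this worst-case all-at-once assumption, the load is comparable to a single dial-up modem, which supports the poster's point that raw bandwidth is not the obstacle.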

Evolution denial & genetic engineering

Do evolution denialists usually take a stand on genetic engineering? If yes, for or against? --KnightMove (talk) 09:58, 2 June 2009 (UTC)[reply]

Since almost all of them are religious fundamentalists - it's probably safe to say that most of them are against genetic engineering...although logically, they should probably conclude that it simply doesn't work and is therefore harmless...but logic isn't generally their strong point. SteveBaker (talk) 12:03, 2 June 2009 (UTC)[reply]
Actually, Steve, you'd be surprised at how even some of the Young Earthers manage to weave modern genetics into their story. The Answers in Genesis people are the best in this respect, going to great lengths to appear biologically sophisticated. I think they would probably argue that no new species would be created by such genetic engineering, though of course you can make changes to a species (in the same way you can breed dogs to look superficially different, but they are still dogs). (I don't agree with this, but that's likely their argument.) Answers in Genesis has a LOT to say about genetics in general, about stem cells, and about cloning, but I don't see anything about genetic engineering. I'm not sure they would disagree with it if it were just being used for medical activities, but I'm just speculating. --98.217.14.211 (talk) 12:33, 2 June 2009 (UTC)[reply]
Re it's probably safe to say that most of them are against genetic engineering: I find that hard to believe considering that half of Iowa is under GM corn with no signs of the Creationists making any fuss. --Sean 13:22, 2 June 2009 (UTC)[reply]
Surely evolution and genetics are logically and conceptually independent of one another ? You could (hypothetically) have evolution without genetics or genetic inheritance without evolution. Darwin, Wallace and Huxley formulated evolution without knowing anything about genetics - indeed, Darwin proposed the entirely incorrect idea of gemmules. When Mendel discovered the laws of genetics he knew nothing of evolution. It is only the joining of the two strands of thought in the modern evolutionary synthesis that makes us think they are inextricably intertwined. Gandalf61 (talk) 15:22, 2 June 2009 (UTC)[reply]
You're perfectly right, but evolution deniers usually believe in God as the only ruler of life, which plausibly might make them disregard genetic engineering. Whether this really is the case, was my concern. --KnightMove (talk) 19:35, 2 June 2009 (UTC)[reply]
Could you have genetic inheritance without evolution? The only way I can see this working is with no copying errors and asexual reproduction. I'm no biologist though; any thoughts? (Oops, this may be derailing. Should I start a new question?) 80.41.123.51 (talk) 20:02, 2 June 2009 (UTC)[reply]
They don't deny the possibility of birth defects; some even say that (some) existing species are corrupted varieties of those that existed in Eden. —Tamfang (talk) 16:58, 4 June 2009 (UTC)[reply]

It doesn't automatically follow that individuals who dispute evolution on a religious basis are against genetic engineering. It's a little more complex than that. There are practical examples of genetic mutation, and by implication microevolution, that make it very difficult for scientifically educated anti-evolutionists to dispute the genetic basis of phenotypic inheritance (though millions of uneducated ones do so quite vehemently). Since genetic engineering (at our current level of sophistication) really operates at this level, it is entirely possible to come up with a rationale that resolves an anti-evolution, pro-GM stance.

For example, a report issued by the National Council of Churches of Christ takes such a positive stance toward GM. The report sees us continuing God's work in genetic engineering: "Dominion carries with it a concept of custody, of stewardship, of being responsible for, of caring for all creation." They believe the Scripture "exalts the idea that men and women are coming into the full exercise of their given powers of co-creation." In other words, they see GM as a fulfillment of our "dominion over the fishes of the sea, and the fowls of the air, and the beasts." (Genesis 1:26).

However, while accepting microevolution one may still reject macroevolution and common descent, instead believing the range of species today derives from baraminologic "kinds". The key distinction, in their minds, is the genetic "missing link" between micro- and macroevolution, which allows them to accept the former while disputing the latter. Rockpocket 01:09, 3 June 2009 (UTC)[reply]

Except that there's really no difference between micro and macro evolution. — The Hand That Feeds You:Bite 20:19, 3 June 2009 (UTC)[reply]
Other than that you can observe micro in real time; macro has to have ancient remains to back it up. 65.121.141.34 (talk) 20:28, 3 June 2009 (UTC)[reply]

Output per man per shift

"output per man per shift" in any massproduction manufacturing industry?

Sathyavolu sar (talk) 14:36, 2 June 2009 (UTC)[reply]

This question is a double-post. A few answers were provided here. We can't answer your question any better unless you elaborate on what you mean by "output." Output can be measured a lot of ways. If you have a specific industry in mind, then it should be easy - count the number of widgets built, on average, during one shift. Different companies might have dramatically different productivity, but in a commodity industry those sorts of variations either equalize or one company goes bankrupt. The most uniform system for comparing different industries is to measure the value of the items produced, in dollars (or other currency). I would imagine that the output per man per shift is on the same order of magnitude as the wage paid to that man per shift (unless there is a severe case of worker exploitation, with the profits of the output being disproportionately allocated elsewhere). Nimur (talk) 15:38, 2 June 2009 (UTC)[reply]
Depending on the what you seek to do with the answer, output-per-worker may not be the correct measure. For example, if you want to know by how much output increases when a plant employs an additional worker, you want the marginal-worker-output, not the average-worker-output. Wikiant (talk) 15:46, 2 June 2009 (UTC)[reply]
In fact, we have an article on that, Marginal product of labor. Nimur (talk) 15:58, 2 June 2009 (UTC)[reply]

what is the name of the little crab animals that live in the sand that you see when a wave uncovers them

what is the name of the little crab animals that live in the sand that you see when a wave uncovers them. we saw them when we were at Daytona beach. they look like tiny shells after a wave washes over them, then they burrow themselves back into the sand —Preceding unsigned comment added by 72.65.6.228 (talk) 15:13, 2 June 2009 (UTC)[reply]

Emerita (genus)? Talitridae? Bus stop (talk)


Sandhoppers maybe? Check out Amphipoda, the family to which the already mentioned Talitridae belong. If you could specify a size it may be Emerita, again mentioned above, however I get the feeling that you mean something smaller. Hope this helps. 144.32.155.203 (talk) 15:56, 2 June 2009 (UTC)[reply]
Besides all of the above, you could also include the Uca genus or Fiddler crab. There are likely dozens of genera and hundreds of species of crabs that exhibit this behavior. --Jayron32.talk.contribs 03:47, 3 June 2009 (UTC)[reply]
Or better yet, the Ocypode genus, aka Ghost crab aka Sand crabs. These are often tiny (like, corn-kernel-sized) and make those tiny little holes in the sand at the beach. --Jayron32.talk.contribs 03:49, 3 June 2009 (UTC)[reply]
Judging by the location and personal experience, Emerita (genus) is definitely my pick. Sifaka talk 05:35, 3 June 2009 (UTC)[reply]

Missiles

Which branches of engineering does missile technology actually involve? It may look naive, but this doubt has been pounding me for a long time! Does mechanical engineering play a significant role in this technology? —Preceding unsigned comment added by 121.246.174.130 (talk) 16:52, 2 June 2009 (UTC)[reply]

Mechanical engineering, aerospace engineering, and electrical engineering are the primary contributing disciplines, with a significant amount of computer science, mathematics, chemical engineering, and other fields. It helps to break down the "missile technology" into some constituent elements, such as propulsion, structure, guidance/control, logistics, and so forth. Take a look at this FAQ from Lockheed Martin - they want "Aerospace Engineering, Business, Computer Engineering, Computer Science, Electrical Engineering, Finance, Human Resources, Math, Mechanical Engineering, Nuclear Engineering, Physics, Supply Chain Management, and Systems Engineering." It's worth noting that "missile" is an extremely broad term - an ICBM is designed and built with a host of different technologies than, say, a TOW missile. Nimur (talk) 17:03, 2 June 2009 (UTC)[reply]
A missile, like any other manufactured object, requires many disciplines to come together correctly. Mechanical engineering plays an important role in keeping the missile in one piece while it reaches the target. On top of that, modern missiles are most often computer guided, so programmers and computer engineers will need to have a role in building a missile. There are "rocket scientists" who may work on propulsion as well as the fields listed by the poster above. If you are studying to be part of the mechanical engineering field you should have no trouble finding a job anywhere that you find manufactured goods (and often beyond that). 206.131.39.6 (talk) 17:21, 2 June 2009 (UTC)[reply]

What do flu virus names mean?

For instance, H1N1 or H5N1. What is the H and the N? What are the numbers? Do these names apply only to influenza A viruses or are they given to B and C viruses as well?

-- Lesath (talk) 16:59, 2 June 2009 (UTC)[reply]

Have you seen H1N1#Nomenclature? "Influenza A virus strains are categorized according to two proteins found on the surface of the virus: hemagglutinin (H) and neuraminidase (N)." The numbers represent variations of these proteins. Nimur (talk) 17:16, 2 June 2009 (UTC)[reply]
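Since the H#N# scheme described above is purely mechanical, a subtype name can be picked apart with a couple of lines. A minimal sketch (influenza A subtypes only; as noted, B and C viruses are not named this way):

```python
# Parse an influenza A subtype name like "H5N1" into its two
# surface-protein variant numbers. Illustrative sketch only.
import re

def parse_subtype(name):
    m = re.fullmatch(r"H(\d+)N(\d+)", name)
    if not m:
        raise ValueError(f"not an influenza A subtype: {name}")
    return {"hemagglutinin": int(m.group(1)),
            "neuraminidase": int(m.group(2))}

print(parse_subtype("H5N1"))
# -> {'hemagglutinin': 5, 'neuraminidase': 1}
```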

LOL, each genotype has its own page? In any case, wouldn't a nomenclature discussion be better on a more general page or is this repeated on H5N1 too?

Nerdseeksblonde (talk) 15:38, 8 June 2009 (UTC)[reply]

Organometallic Compound

Why is sodium ethoxide not an organometallic compound? Supriyochowdhury (talk) 18:11, 2 June 2009 (UTC)[reply]

For a compound to be considered organometallic, there must be a bond between a metal and a carbon atom that possesses mainly covalent character. Sodium ethoxide contains a mostly ionic interaction between a sodium (Na+) ion and an ethoxide ion (EtO-). I hope this helps. —Preceding unsigned comment added by 144.32.155.203 (talk) 18:19, 2 June 2009 (UTC)[reply]

Exactly. Here there is a bond between sodium and oxygen, not carbon. Rkr1991 (talk) 07:56, 3 June 2009 (UTC)[reply]

Bees

Hi, I am doing a project on bees at school and I have a few questions. I am trying to understand why bees bother collecting pollen and nectar from plants - what motivates them? Also, why do they bother making honey - again, what's in it for them? It seems to me like they are working all their lives, while other animals are busy just pleasing themselves. 80.47.194.97 (talk) 18:13, 2 June 2009 (UTC)[reply]

A good place to start would be on our article on the European honey bee which is quite detailed on many of these matters. It specifically talks about what bees do with pollen and nectar and honey. From that article, you can follow blue links to other articles which contain even more information. Honey is a good read as well. --Jayron32.talk.contribs 18:38, 2 June 2009 (UTC)[reply]
Briefly summarized, the bees are motivated by nectar, not pollen. However, the flowering plants benefit when bees spread pollen. You might also want to read about pollination, which also discusses the coevolution of bees and flowering plants. Nimur (talk) 18:54, 2 June 2009 (UTC)[reply]
They collect the pollen in their pollen baskets and feed it to the baby bees. Bees are built out of pollen and run on nectar (and in the case of honey bees, on honey when they can't get the nectar). 213.122.59.67 (talk) 20:17, 2 June 2009 (UTC)[reply]
I don't think it's scientifically accurate to say bees are built out of pollen. Nimur (talk) 21:37, 2 June 2009 (UTC)[reply]
Pollen is important for bees too. Pollen baskets evolved to help the bee, not the flower. As our honey bee article notes, pollen is a good source of protein for developing bees. (Nectar and thus honey are low in protein.) -- 128.104.112.106 (talk) 20:16, 2 June 2009 (UTC)[reply]

Resonance and Delocalisation

All the important properties of some covalent species cannot be explained by a single Lewis dot structure, or by a single structure based on the theory of hybridisation. Likewise, the unusual stability of some covalent species cannot be explained by the bond energies and bond lengths predicted by V.B.T. The concept of resonance was then introduced, which explains these properties quite successfully. How does it fill these gaps? What is the need for the theory of "resonance and delocalisation"? Rikichowdhury (talk) 18:36, 2 June 2009 (UTC)[reply]

Resonance is an artifact of an insufficient model. There is no "flipping" between structures or back-and-forth shuffling of electrons in real molecules. The electrons are stable and no different than in molecules whose Lewis dot diagrams do not show resonance. Resonance is a heuristic invented to fit the real behavior of molecules to the Lewis model, especially where the Lewis model cannot accurately represent the actual bonding in the molecule. Valence Bond Theory and the Lewis model cannot, for example, handle fractional bond order. Other theories, developed contemporaneously with VBT, such as Molecular orbital theory do a fantastic job of handling fractional bond orders, and there is no need to introduce resonance into molecular orbital theory. However, MOT has its own shortcomings, such as its inability to deal with molecular geometry in a convenient way. Thus VBT or VSEPR theory handles geometry very well, but does not handle bond order well. MOT is complementary because it handles bond order very well, but not geometry. Of course, real molecules are not identical to either model, and no single model can fully capture reality, but these two work well together in explaining much of molecular behavior. Delocalization is a slightly different issue; it is a real phenomenon which occurs in situations where bonding electrons are not "localized" between two atoms, but instead are shared equally among a group of atoms. Two classic examples are the "Three-center two-electron bonding" in diborane and the cyclic delocalized pi-system in aromatic hydrocarbons. --Jayron32.talk.contribs 18:48, 2 June 2009 (UTC)[reply]

Grignard Reagents

Would there be a reaction between a Grignard reagent and a carboxylic acid? I have drawn a few (probably flawed) mechanisms which would suggest a ketone and water as the products, in addition to HOMgX. This just seems unlikely; it doesn't sit right, as it involves OH- as a leaving group, and I felt that as a small ion it would be a poor leaving group. Am I missing some fairly basic organic chemistry?
This would be so much easier to post if I knew how to insert chemical structures in here. —Preceding unsigned comment added by 144.32.155.203 (talk) 18:43, 2 June 2009 (UTC)[reply]

The first reaction between a Grignard reagent and a carboxylic acid is an acid-base reaction: the carbanion is a wickedly strong base and removes H+ from the acid to give the carboxylate anion. Later (i.e., if there is still more Grignard present), another carbanion can attack the carbonyl (just like for ketones and aldehydes). This tetrahedral (4 sigma bonds) intermediate can collapse to reform the carbonyl and eject an oxygen, but IIRC often does not for magnesium-based reagents. The reaction probably just stops at this stage. When you add water after the reaction is done, you protonate the oxygen anion, and this structure (which looks something like a hemiacetal) collapses to form a carbonyl. Nucleophilic reactions at carboxyl are not simple replacement of the singly-bonded piece, but rather are reactions of the carbonyl itself. That is, the loss of hydroxyl you propose is not direct. The reason that its instability isn't a problem (if it were to occur by that type of mechanism) is that you aren't just "creating unstable hydroxyl from nothing": the starting material is also an anion. You have to look at the overall reaction, and see that the result is more stable, even though individual pieces may be unstable. DMacks (talk) 18:59, 2 June 2009 (UTC)[reply]
If a Grignard reagent gets anywhere near an active-hydrogen-containing compound (that is, even a weak acid), then the immediate product is the corresponding alkane and water. Rkr1991 (talk) 07:54, 3 June 2009 (UTC)[reply]
Water? DMacks (talk) 08:00, 3 June 2009 (UTC)[reply]

about water quality.

I would like to know about the sanitization processes and chemicals used in water treatment plants, including the concentrations and contact times. —Preceding unsigned comment added by 94.97.53.246 (talk) 19:28, 2 June 2009 (UTC)[reply]

See Water purification. 78.144.244.22 (talk) 22:21, 2 June 2009 (UTC)[reply]

Cactus Question

I recently impulse purchased this cactus, which may have been poorly cared for in the past. I was just wondering what species it might be, and also what are the brown fuzzy things on it, circling the top of the plant?

Brown Spots Close: http://commons.wikimedia.org/wiki/File:CactusBrownSpots.jpg Full Cactus: http://commons.wikimedia.org/wiki/File:FullCactusWindow.jpg —Preceding unsigned comment added by Mattman723 (talkcontribs) 19:50, 2 June 2009 (UTC)[reply]

It looks like a barrel cactus to me, though I'm quite ignorant on such things. --Sean 21:17, 2 June 2009 (UTC)[reply]
Well it doesn't have notches that deep, so I doubt it. M@$+@ Ju ~ 23:18, 2 June 2009 (UTC)[reply]
Here is a cactus ID site [13]. The ring of "brown fuzzy things" may be flower buds, but possibly aborted. Richard Avery (talk) 06:10, 3 June 2009 (UTC)[reply]
Here is a "ID request" forum on that site. It might help if you took a better picture, as the detail is in shadow in the window shot. You could also look through all the globose cacti here. --Sean 12:45, 3 June 2009 (UTC)[reply]

Does this article sound feasible at all?

This thing Does it defy the laws of physics? --71.234.104.243 (talk) 20:31, 2 June 2009 (UTC)[reply]

This invention is not new. It has been 'discovered' many times, but it violates the laws of physics and no one claiming discovery of such a machine has ever successfully demonstrated it. *Max* (talk) 21:37, 2 June 2009 (UTC).[reply]
Per Max, any perpetual motion machine of the first kind must violate the law of conservation of energy. Unique arrangements of magnets have been popular among perpetual motion aficionados for years; unsurprisingly, not one such invention has ever been successful. TenOfAllTrades(talk) 21:40, 2 June 2009 (UTC)[reply]
Here is the patent application: [14]. (Hmmm - he lives on "Asylum street" - what are the odds?) This is certainly recommended reading for enthusiastic "nut-job" spotters such as myself. (I'm DEFINITELY going to have to figure out a way for someone to pay me a dollar every time someone confuses "force" with "energy".) The Chicago Tribune article says he spins the wheel and it spins for a while and then stops...this is pretty much exactly what you'd expect any claim of a perpetual motion machine to do - so there is no surprise there. On my home Wiki, I have a "You know you are a crank when..." checklist for exactly this kind of occasion: [15]...let's see how this guy does...hmmm...so far only a rather disappointing 3 out of a possible 8 points - but I've decided to award him a bonus point for using his magnets "in cold fusion mode" - which is certainly a novel and exciting breakthrough in perpetual motion machine design. He has "neutron barrier planes" AND "atomic holes" - and it somehow involves the synergy of the entire universe as a part of its operating mechanism - which is all really quite remarkable for a machine comprising a dozen magnets nailed onto a wheel. It's comforting to know that this wonder of mechanical genius operates equally well in clockwise AND anticlockwise rotations. But I trust that if he gets his patent, he'll be claiming that it proves that his machine works - and if he doesn't get it, he'll claim it's "big oil" putting him down...so we can look forward to plenty more fun in the future. SteveBaker (talk) 22:56, 2 June 2009 (UTC)[reply]
I think I have stumbled across a free energy system. I take TIME, and convert it to electricity. You see, I put in some TIME at another place from my home. They in turn give me a slip of paper periodically. In turn, I give this slip of paper to a person in another building. Following the paper trail, another company sends me a slip of paper in the mail, to which I attach yet another slip of paper and mail back. As a result, I plug into the wall and get electricity! BRILLIANT! :) ArakunemTalk 23:08, 2 June 2009 (UTC)[reply]
Does he get any extra points for "rolling stator magnetic field in the fourth dimension" (beyond the existing point for a reference to magnetism)? I think he should do. --Tango (talk) 01:01, 3 June 2009 (UTC)[reply]
Well, perhaps a half point. He might mean time (4th dimension?) in which case this is just a fancy way of saying "it's moving", for which no bonus is possible. A true crazed nut-job wouldn't be satisfied with a mere 4th dimension - it would have to be the 11th or something. To be honest, I'm bitterly disappointed that he's not yet announced that he intends to fit this to his car. Proper perpetual motion machine designers no sooner have an idea for their machine than they're out there pulling the motor out of their 1986 Acura to make way for it. But he's just getting into the role - and he's already comparing himself favorably to Edison...that's a promising start. SteveBaker (talk) 02:20, 3 June 2009 (UTC)[reply]
I just ... don't understand. The man has built a prototype. Doesn't he spin it and ... see that it does not continue spinning? Doesn't he see that his prototype is incapable of powering anything? I can understand if he is put off by complex mathematical theories or well-established scientific explanations, because those require either a certain level of comprehension or a blind trust in "smarter" scientists who DO understand. (I mean, I can write out some Maxwell equations and take a Cauchy integral and prove beyond any reasonable doubt that there is no way to have a "continuous never ending force-field" around a closed contour... but that requires knowledge of some higher math!) So my (dis)proof of the concepts in the machine might be worthless to the man. But ... he has built this machine! Can't he see that it doesn't actually work? When he tries to power it up, and it doesn't power up, you don't need any theoretical knowledge to verify that behavior. Why in the heck doesn't he accept empirical evidence? Nimur (talk) 03:24, 3 June 2009 (UTC)[reply]
And then he says it doesn't work because the magnets need to be more precision-machined and precisely aligned! That would mean... he would have to have an analytic, mathematical description of where to place the magnets and how to shape them (so that he could give the schematic to a machinist). So, he will have to measure, "by observations of the universe and reason itself," the force induced by each magnet (sort of like experimentally rederiving the Biot-Savart law?) And then he will need to calculate the fluxes and field lines and all that (sort of like, solving some Maxwell equations?) If he actually put in the proper scientific rigor into these measurements and calculations, he would rederive all of electromagnetism, and see for himself that his plan doesn't work. That's experimental physics, but he's apparently two hundred years behind on his reading, because it has been done and confirmed observationally a hundred thousand times by high school physics students. It's very frustrating to see how far these cranks can get, because most of the people who look at their work don't know enough to debunk it. Nimur (talk) 03:38, 3 June 2009 (UTC)[reply]
As my "You know you are a crank when..." page points out - 99% of cranks are not just ill educated in the sciences - but actually proud of the fact because (they generally claim) their thinking isn't restricted by the limited world-view of the people who went before them. Sadly, this neglects the solid fact that real earth-shattering advances are always made by building on the work of those who came before. Because they don't know all of this really well known science, they are doomed to repeat the mistakes of others. This failure to understand the difference between a force and energy has resulted in this guy wasting (probably) years of his life - paying a small fortune to patent lawyers and making himself look like a complete idiot to the majority of the people he's trying to impress.
It's not just about knowledge and education, though. Even if he's unaware of the laws of thermodynamics - and cannot calculate all of those complicated interactions between magnetic fields (I'm quite sure I couldn't do that) - even if you know nothing of that...using the very basics of the scientific method would tell him he's going nowhere with this: He spins the wheel - it spins for a while, then gradually comes to a stop. He claims that this is just because he doesn't have the magnets quite perfectly set up and machined. But it would take him only minutes to try a 'control' experiment in the time-honored scientific tradition. First give your wheel a push of a known amount (maybe wrap a known length of string around the axle and hang a weight on the end to make it spin at a known rate) - and just measure the amount of time the machine takes to come to a stop. Then replace the magnets with bits of non-magnetic material weighing the same amount - which ought to nullify whatever effect you think you have invented. Now give the wheel the same initial push and time how long it takes to stop in that case. Do the experiment 100 times and average the results.
We all know that if he did that super-simple experiment, he'd discover that his complicated system of magnets has NO effect whatever. Not just that his magnetic motor isn't quite able to overcome friction - he'd find that the wheel would spin for exactly the same amount of time whether the things around the edge are magnets or not. At this point, a rational person would have to concede that they were wrong and go back to the drawing board. THAT is what makes someone a scientist - not the years of training and a brain full of laws and equations. Those kinds of things definitely help you to say "Hmmm - I think I'll try to invent a perpetual motion machine today...oh...but wait...those darned laws of thermodynamics again. OK then - back to making a better mousetrap." - but they aren't NECESSARY to being a scientist. SteveBaker (talk) 13:20, 3 June 2009 (UTC)[reply]
Nimur, perpetual motion enthusiasts are convinced that if they can only "balance" out the forces that are working to slow down their machine, it will run forever. A non-working prototype that "isn't quite balanced" will still be taken as a proof of concept by these people, because they take it as given that they can "balance" the forces in such a way that they get free energy; in their mind the only part of the design that needs testing is everything else.
Of course, to a rational outside observer that's completely backwards. I don't need proof that you can spin some magnets around in a circle. What I need proof of is that you can "balance out" the laws of thermodynamics. APL (talk) 13:44, 3 June 2009 (UTC)[reply]
As long as his hydrocoptic marsel vanes are properly fitted to the ambifacient lunar wane shaft, I can't see why it wouldn't work. --Sean 12:56, 3 June 2009 (UTC)[reply]

vapourised alcohol after flaming sambuca?

Hi, a friend of mine, after downing a shot of flaming sambuca, always inverts the shot glass. He then proceeds to lift up the glass marginally (enough to get a straw under), and then inhales. He says that by doing so he is breathing in vapourised alcohol. He claims these actions help one get drunk faster. I thought wouldn't the gases in the glass just be normal gases found in air, with perhaps a bit more carbon dioxide? Would there be any vapourised alcohol? Presumably a little - but I believe alcohol (in this case just talking about ethanol, obviously) is not very well absorbed when inhaled anyway; so I think that these actions wouldn't help him get drunk more quickly. Am I right? Any thoughts on the matter would be much appreciated! Thanks RichYPE (talk) 20:58, 2 June 2009 (UTC)[reply]

The dude wants to get drunk on Sambuca??? That should be your first clue as to how seriously to take his ideas on the subject. --Trovatore (talk) 21:03, 2 June 2009 (UTC)[reply]
Sambuca isn't good for anything else... --Tango (talk) 21:07, 2 June 2009 (UTC)[reply]
Sure it is. You put it in coffee. Or you drink it in small quantities to appreciate the taste. But drink enough of it to get drunk? What, you're out of Listerine? --Trovatore (talk) 22:36, 2 June 2009 (UTC)[reply]
...and here I should probably point out that neither I nor the Wikimedia Foundation advocates drinking Listerine.... --Trovatore (talk) 22:39, 2 June 2009 (UTC) [reply]
Inhaling alcohol vapour would probably get it into your blood pretty quickly, but I can't see why there would be more than trace amounts under the shot glass in those circumstances. --Tango (talk) 21:07, 2 June 2009 (UTC)[reply]
Vaporized ethanol can certainly get one drunk...it's used in alcohol research quite often and there are vaporizers on the market (of varying legality). It's a rapid onset of intoxication if the vapor is of sufficient quantity, and it's well absorbed. A just-consumed shot glass always has remnants of the prior contents sticking to the sides, so there would be, presumably, some modest amount of ethanol vapor in the trapped space of an inverted shotglass...but probably not much. I'd guess that the effect of the shot itself would be of a considerably greater get-ya-drunk magnitude than a little extra vapor...consider this: if your friend had taken a deep breath over the full (pre-lit) shot, he probably would have gotten a similar amount of ethanol vapor exposure, and trying to become inebriated that way would likely have taken a looong time. — Scientizzle 21:09, 2 June 2009 (UTC)[reply]
Back-of-the-envelope calculation: Assume a shot glass volume of 50 mL (about 1.7 ounces). Assume further an in-glass gas temperature of 25 degrees Celsius (77 Fahrenheit). If the glass were entirely filled by ethanol vapor (no air, no water vapor, no other volatiles), there would be just under a tenth of a gram of ethanol present. That works out to a little over 0.1 mL (0.004 fluid ounces) of liquid alcohol, or less than one-hundredth of a shot. Those numbers are pretty generous assumptions, too — if the gas is hotter, it will be less dense and contain less alcohol; there's also little likelihood that other gases (including air and water vapor) will not be present to dilute the alcohol. The stuff he's inhaling might have an interesting taste, but it's not going to do a damn thing for his (in)sobriety. TenOfAllTrades(talk) 21:20, 2 June 2009 (UTC)[reply]
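The back-of-the-envelope estimate above is easy to reproduce with the ideal gas law. This is just a sketch using the same (deliberately generous) assumptions stated in the post - a 50 mL glass entirely filled with pure ethanol vapor at 25 °C:

```python
# Upper bound on the ethanol trapped under an inverted shot glass,
# assuming (generously) the 50 mL volume is filled with pure ethanol
# vapor at 25 degrees C -- the same assumptions as the estimate above.
R = 8.314            # gas constant, J/(mol*K)
P = 101_325          # atmospheric pressure, Pa
V = 50e-6            # 50 mL in m^3
T = 298.15           # 25 degrees Celsius in kelvin
M_ETHANOL = 46.07    # molar mass of ethanol, g/mol
RHO_ETHANOL = 0.789  # density of liquid ethanol, g/mL

moles = P * V / (R * T)          # ideal gas law, n = PV/RT
grams = moles * M_ETHANOL
ml_liquid = grams / RHO_ETHANOL  # equivalent volume of liquid ethanol

print(f"{grams:.3f} g vapor = {ml_liquid:.2f} mL liquid ethanol")
# -> 0.094 g vapor = 0.12 mL liquid ethanol
```

That matches the figures in the post: just under a tenth of a gram, or a little over 0.1 mL of liquid - under one-hundredth of a shot.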
Those are good calculations. Furthermore, obviously the absolute maximum amount of ethanol vapor that would be possible to inhale is limited by the amount of liquid left in the glass following shot consumption. Considering the poster's friend is clearly out to get hammered, it's doubtful that there'd be more than a few mL total (of a 42% EtOH fluid) left over for straw-vacuum removal. The contribution of this extra step should have minimal effects on insobriety and will likely leave your counter sticky with Sambuca. — Scientizzle 22:35, 2 June 2009 (UTC)[reply]
There's a whole lot of mythology and urban legend surrounding alcohol, like most other mind-altering substances. Since feeling drunk is very subjective, it's easy for people to believe all sorts of unreasonable things about what makes them more or less drunk. Pretty much any high schooler will tell you that, for example, drinking beer through a straw gets you drunk faster. It's complete bullshit, but it's often repeated and widely believed. Friday (talk) 21:13, 2 June 2009 (UTC)[reply]
To be honest - if you really wanted to get drunk faster (why is that a good idea?) then not setting light to the sambuca in the first place would really be the best idea! What's burning isn't really the little coffee bean in there - it's the alcohol. By burning it off, you're lowering the amount of alcohol you drink. SteveBaker (talk) 12:58, 3 June 2009 (UTC)[reply]
Here is an article about some folks trying to open an alcohol vapor bar, with mixed results. --Sean 13:00, 3 June 2009 (UTC)[reply]

wow thanks to everyone who helped! Thanks especially to tenofalltrades for the calculation. I shall inform my friend of the error of his ways! Cheers RichYPE (talk) 19:23, 3 June 2009 (UTC)[reply]

Phosphorus in Evolution

In the evolution of complex organisms from small and primitive ones, how and why was phosphorus selected to be a part of almost all life in such vital forms, such as in DNA, ATP, the lipid bilayer etc., even though it is not available in the atmosphere, only in soil, and there almost always in complex phosphate ion forms because of its high reactivity? We (organisms in general) are slowly running out of easily accessible phosphorus. If we really run out, what will be the course of evolution from there on? - DSachan (talk) 21:49, 2 June 2009 (UTC)[reply]

Why are we running out of phosphorus? How does it get lost (or made inaccessible)? --Tango (talk) 22:33, 2 June 2009 (UTC)[reply]
Phosphorus mentions that "In 2007, at the current rate of consumption, the supply of phosphorus was estimated to run out in 345 years." This is for industrial production of phosphorus compounds, which would presumably include fertilizer. I guess that's the question; what does that mean for everything that needs it to live? Someguy1221 (talk) 23:36, 2 June 2009 (UTC)[reply]
Actually, I read something (I think) in a recent issue of Scientific American in an editorial which claimed that the current existing phosphate mines will be tapped out in 40 years or so, and they warned of an impending "phosphorus famine". However, I am not sure if they took into account anticipated untapped phosphorus sources. --Jayron32.talk.contribs 23:59, 2 June 2009 (UTC)[reply]
Surely that is phosphorus that we are using for industrial/agricultural purposes; most phosphorus will be in some kind of cycle. Organisms die and decompose, the phosphorus gets back into the soil, gets absorbed by plants, the plant gets eaten, whatever ate the plant gets eaten, and so on until it ends up in an organism that decomposes and it gets back into the soil. I guess it is possible it is gradually getting leached into the oceans, but that's never going to happen on a scale of 40 years. --Tango (talk) 00:54, 3 June 2009 (UTC)[reply]
Oh, the phosphorus isn't going anywhere. However, the supply of phosphorus deposits in the concentrations we need to make all the fertilizer to grow all of the food we do now IS, and that seems to be the big problem. Phosphorus basically allows us to grow much higher calories/acre in terms of food production than would be possible without it. The question is what will happen when all usable phosphate mines pan out. If we have no sources of phosphate fertilizer, what will happen to our farming practices? It's a genuine problem; I am not sure I trust the 40 year figure, but fertilizer phosphorus is a finite resource, and it's a problem that we will likely run into some day in the future. --Jayron32.talk.contribs 03:07, 3 June 2009 (UTC)[reply]
It's an interesting question, but it isn't the one the OP asked. --Tango (talk) 13:10, 3 June 2009 (UTC)[reply]
If you have access to the journal Science, there's the classic article "Why nature chose phosphates." Westheimer, F.H. (1987) Science. 235(4793):1173-8. [16] If you don't have journal access, you'll probably be able to find a pdf copy or two floating around on the internet if you search on the article title.-- 128.104.112.106 (talk) 14:49, 3 June 2009 (UTC)[reply]
It has been suggested that life began either inside the earth or at the ocean floor, so the atmosphere may have been largely irrelevant. Also, see Abiogenesis#Polyphosphates. --JWSurf (talk) 15:02, 3 June 2009 (UTC)[reply]

overpressure wave calculation

Hi. When you calculate the pressure produced by the blast wave from an explosion, an overpressure shockwave means pressure over regular atmospheric pressure. Does this mean that objects struck by the wave feel 14.7 psi (atmospheric) plus whatever psi the shockwave is? For example, if the shockwave is 3 psi, is the object subjected to 17.7 psi, or would that object already be experiencing atmospheric pressure, like we do every day, and only experience 3 psi of extra pressure? I ask because different sites say different things. Thank you

Robert —Preceding unsigned comment added by 79.67.192.228 (talk) 22:06, 2 June 2009 (UTC)[reply]


It's called "over"-pressure for a reason! It's the pressure over and above atmospheric pressure. So, yeah - if the air pressure today is 14.7psi and I set off a stick of dynamite that produces 3psi of overpressure - then (say) a brick wall nearby has 17.7 psi pushing on one side of it - and only 14.7 psi pushing on the other side. It might seem at first sight that a mere 3 psi was nothing much - but if you have a 10' x 6' wall in a room then that's 10x6x12x12=8640 square inches - which is 25920 pounds of force - something like 13 tons. SteveBaker (talk) 23:29, 2 June 2009 (UTC)[reply]
If you aren't familiar with pressure measurements here's some context. Weather patterns can regularly cause a 1% variation in atmospheric pressure, and really awful hurricanes can maybe cause a 3% variation (underpressure). The worst storm underpressure ever registered about a 13% under-pressure. A shockwave with 3 psi overpressure is about 20% overpressure (twice as bad as a hurricane!), and though it isn't sustained, it's localized - as Steve Baker pointed out above, it can create some pretty nasty forces on anything it hits. (Straightforward multiplication of the shockwave overpressure peak times the area gives you a good estimate, but not an exact value, of the net force on the wall - there are transient time effects and a host of complex fluid-flow issues as well). Nimur (talk) 03:47, 3 June 2009 (UTC)[reply]

global warming potential

how can a GWP value decrease if the gas is in the air for a longer number of years? Global warming potential thanks, -Bill —Preceding unsigned comment added by 173.30.14.113 (talk) 23:27, 2 June 2009 (UTC)[reply]

I think you've somewhat misunderstood. The amount of a gas typically decreases over the years - it's broken down in one way or another. So the amount of damage you do by putting (say) a ton of Methane into the upper atmosphere gets less and less as that gas breaks down over the years. SteveBaker (talk) 23:32, 2 June 2009 (UTC)[reply]
but why after 500 years does the total potential of methane summed over the 500 yrs equal 7.2 and over all 20 yrs it's so much higher at 72? —Preceding unsigned comment added by 173.30.14.113 (talk) 01:11, 3 June 2009 (UTC)[reply]
The GWP is how much worse that gas is than CO2, it depends on the timescale because different gasses break down or are removed from the atmosphere at different rates. The GWP of methane decreases over time, which means it must be breaking down or being removed faster than CO2. --Tango (talk) 01:20, 3 June 2009 (UTC)[reply]
Indeed. GWP compares the effect of an instantaneous release of a given GHG to an instantaneous release of the same amount of CO2 over a given number of years. CO2 is relatively slowly removed from the atmosphere - its lifetime is hundreds of years (albeit it's not quite so simple). Methane (CH4) is, molecule for molecule, a stronger GHG than CO2. But it breaks down over only a few years into CO2 and water (and the water is removed from the atmosphere nearly instantaneously). So over time, the GWP of methane approaches that of CO2. One important take-home is that a naked GWP value is useless - you always need to know the time frame over which it is computed. If none is specified, often but not universally 100 years is assumed. See Global warming potential#Importance_of_time_horizon. --Stephan Schulz (talk) 09:51, 3 June 2009 (UTC)[reply]
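The time-horizon effect described above can be illustrated numerically. The parameters in this sketch are assumptions chosen only for illustration - an assumed CH4:CO2 instantaneous forcing ratio, a roughly 12-year methane lifetime, and a crude multi-exponential fit for CO2 removal; none of these are official IPCC values - but the qualitative behavior comes out the same: methane's GWP shrinks as the horizon lengthens, because methane is gone after a few decades while CO2 lingers:

```python
import math

# Illustrative sketch of why GWP depends on the time horizon.
# All parameters are assumptions for the sketch, NOT official values:
# methane is taken as A_CH4 times stronger than CO2 (per kg) while
# airborne, decaying with lifetime TAU_CH4; CO2 removal is a crude
# multi-exponential fit whose coefficients sum to 1.
A_CH4 = 120.0   # assumed instantaneous forcing ratio, CH4 vs CO2
TAU_CH4 = 12.0  # assumed methane atmospheric lifetime, years

def co2_airborne(t):
    """Assumed fraction of a CO2 pulse still airborne after t years."""
    return (0.22 + 0.26 * math.exp(-t / 173.0)
                 + 0.34 * math.exp(-t / 18.5)
                 + 0.18 * math.exp(-t / 1.2))

def gwp_ch4(horizon_years, dt=0.1):
    """Integrated forcing of a CH4 pulse divided by that of a CO2 pulse."""
    ch4 = co2 = 0.0
    t = 0.0
    while t < horizon_years:
        ch4 += A_CH4 * math.exp(-t / TAU_CH4) * dt
        co2 += co2_airborne(t) * dt
        t += dt
    return ch4 / co2

for horizon in (20, 100, 500):
    print(f"GWP over {horizon:3d} years: {gwp_ch4(horizon):5.1f}")
```

With these made-up inputs the printed values fall steadily from the 20-year to the 500-year horizon, mirroring the 72 → 7.2 drop for methane in the real tables.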

Ultimate Fate of Mankind

Has anyone ever done a survey of what the general populace believes the ultimate future of humanity is? For example, 40% say we'll colonize the universe, 40% say we'll blow ourselves up, 20% believe some deity will end it all, something along those lines? TheFutureAwaits (talk) 23:28, 2 June 2009 (UTC)[reply]

I imagine such a survey would depend very much on when you did it (during the cold war I expect a very large proportion expected us to blow ourselves up; probably fewer do now, although maybe a few more since North Korea started making significant progress with its nuclear program), and the populace you targeted (basically, the more religious people you ask, the more people are going to give a religious response). It would be interesting to find out, though (I don't know if such surveys have been done). It's not really what we're here for, but we could hold a mini-survey of ref deskers (whose opinions are far more interesting than the general populace's!). Personally, I think the two most likely ultimate fates are colonising the solar system and then getting wiped out when the sun dies (that gives us a few billion years - one billion on Earth, but we could move further out, although our population may need to dramatically reduce), or colonising the galaxy (I very much doubt we'll ever leave the galaxy, the distances involved are just too vast and faster-than-light travel is just too unlikely) and getting wiped out when the galaxy runs out of stuff to make new stars out of and the old ones all die (which gives us about 100 trillion years). I know this is far longer than other animal species have generally lasted, but intelligence means we adapt to new environments without evolving physiologically (so staying the same species); other species have had to become new species in order to deal with new environments. --Tango (talk) 23:42, 2 June 2009 (UTC)[reply]
Everything must have an end, so one day humans will no longer exist. 78.151.147.255 (talk) 00:16, 3 June 2009 (UTC)[reply]
Why must everything have an end? --Tango (talk) 00:50, 3 June 2009 (UTC)[reply]
Humanity will replace itself within the next 20 years. See Technological singularity. -Arch dude (talk) 01:31, 3 June 2009 (UTC)[reply]
I'm not at all convinced. I haven't even seen a reliable quantification of human mental capacity, so how could anyone possibly make such a prediction? --Tango (talk) 01:44, 3 June 2009 (UTC)[reply]
Twenty Years? Care to ... make a wager on that? APL (talk) 13:51, 3 June 2009 (UTC)[reply]
Sheesh, you know, it would be nice if the Refdesk regulars would try to answer the OP's question while citing some sources, so the OP can do some research on his or her own, instead of spouting opinions without any citations. He didn't want our opinions; he wanted to know about everyone's opinions. OP, I found this link on Google Books that says that a 1995 Gallup poll conducted in the US said that 61% of the adults and 71% of the teenagers agreed that "the world will come to an end or be destroyed". In a separate study, a sample of 17,000 high school seniors (meaning 17 or 18 years old), "more than one-third" agreed with the statement, "Nuclear or biological annihilation will probably be the fate of all mankind within my generation." As you can probably guess, the book itself is called "The End of the World As We Know It", so it doesn't dwell on the other options that people chose; but you can presumably track down the footnotes in the book or use search engines to find these polls (and hopefully followup polls to indicate trends). Tempshill (talk) 02:23, 3 June 2009 (UTC)[reply]
According to Christianity in the United States, about 76% of the US population is Christian, and 40% of the Christians are evangelical. The bulk of these profess literal belief in the bible, and therefore believe that the rapture will occur "soon." That pretty much does it for mankind. -Arch dude (talk) 03:48, 3 June 2009 (UTC)[reply]
As always, it's going to depend on what you define as the "ultimate fate." For example, if humans don't extinguish themselves, but gradually evolve, there will be some point when we are, beyond any reasonable doubt, decisively not human (unless we genetically stagnate indefinitely). So, would that be the end of humanity? You have to precisely define all the other terms as well: "human", "civilization", "ultimate", etc... Nimur (talk) 03:53, 3 June 2009 (UTC)[reply]
Our OP isn't interested in what WE think the end of humanity will be - but what the population in general believes. The trouble is that most of the general public are very susceptible to suggestion. If you ask "What do you think the chances of us blowing ourselves up with nuclear weapons are?" you'll get some highish number. If you ask "Will we get wiped out by a massive meteor strike like the dinosaurs?" you'll get another high number. You can keep this up indefinitely and the probabilities will soon add up to more than 100%! But if you go and ask "Do you think humanity will get off the earth and live on other planets?" - which would preclude any abrupt, disaster-type ending - then they also pick a highish number. People don't think these things through very carefully and they are terrible at understanding probabilities. So this kind of survey depends very sensitively on the questions you ask. SteveBaker (talk) 12:54, 3 June 2009 (UTC)[reply]
Steve, again, some source citations would be useful for our OP instead of just ragging on some of his possible assumptions. Tempshill (talk) 20:26, 3 June 2009 (UTC)[reply]
While I agree the way you word the question will have a big influence, as is well established in the public polling arena [17] and most people don't think much before answering, I don't think your example demonstrates that well. I can easily see someone saying yes to multiple questions of yours even with great thought because none of them specified it as what you believe is most likely. There's no reason why you can't believe all of those have a good enough chance of happening to say yes. And someone may say no or maybe even though they think one of those is the most likely fate simply because they think the chance for all is relatively low. Nil Einne (talk) 17:13, 8 June 2009 (UTC)[reply]

June 3

Earth is seriously moving away from sun?

Is the earth seriously moving away from the sun? I saw an update last night saying the earth moves about 20 centimeters farther away each year because the sun is losing mass through the solar wind. Is it just the earth, or is this also happening to Venus, Mars, and the others? If the sun is losing mass now, it will just keep losing more and more; I don't think the sun will gain mass back again.--69.229.240.187 (talk) 00:25, 3 June 2009 (UTC)[reply]

Yes. Tidal forces and loss of solar mass both result in the Earth receding slightly from the Sun. Without checking numbers, I'd expect tidal forces to be the dominant mechanism. The same effect occurs at varying rates on all other bodies orbiting the Sun. The Sun can gain mass (any time a long-period comet crashes into it, for instance) but is a net loser of mass. — Lomn 00:46, 3 June 2009 (UTC)[reply]
It is true, but it is a negligible amount. Our measurements of the distance from the Earth to the Sun are only accurate to a few metres (according to Astronomical unit), so a change of 20 centimetres isn't even measurable. It is happening to all planets, though. When the sun nears the end of its life it will throw off its outer layers, dramatically reducing its mass, and then the Earth will move significantly further away, but that won't happen for billions of years. --Tango (talk) 00:49, 3 June 2009 (UTC)[reply]
  • Does this happen to all planets, including Mercury and Venus? Venus has a backward orbit, so I always thought Venus would just end up like Triton vs. Neptune or Phobos vs. Mars. But how is Mercury moving farther away when it is so close to the sun?--69.229.240.187 (talk) 00:58, 3 June 2009 (UTC)[reply]
    • Venus does NOT have a backwards orbit. It has a backwards rotation, which means that it turns through a day in the opposite direction from the other planets (or if you prefer, it is upside-down. Same difference). It turns so slowly, however, that a venusian "day" is longer than a venusian "year". And it orbits in the same direction as all of the other planets. However, as noted, the sun is losing mass via mass-energy conversion, which means that its gravity decreases over time. Since the force of gravity is decreasing, ALL objects gravitationally bound to the sun will drift farther away. It doesn't matter if we are talking about Mercury or the Oort cloud. Less mass = less gravity. Less gravity = larger orbits. It's that simple. --Jayron32.talk.contribs 03:02, 3 June 2009 (UTC)[reply]
Even if it was a retrograde orbit (which is not the case), it would still move out to a larger radius if the sun's mass decreased. Nimur (talk) 03:55, 3 June 2009 (UTC)[reply]

For perspective, at a rate of 20 cm / yr, the Earth's orbit would take 7.5 billion years to change 1% in radius. In other words, the current rate of expansion is utterly negligible for any practical purpose. Dragons flight (talk) 03:58, 3 June 2009 (UTC)[reply]
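Dragons flight's figure is easy to check. A minimal sketch, assuming 1 AU ≈ 1.496×10^11 m and a constant 20 cm/yr recession rate (both assumptions, not taken from the thread):

```python
# Rough check of the "7.5 billion years per 1% of orbital radius" figure.
# Assumptions: 1 AU ~= 1.496e11 m, constant recession of 20 cm/yr.
AU_M = 1.496e11          # Earth-Sun distance in metres (approximate)
RATE_M_PER_YR = 0.20     # 20 cm/yr expressed in metres

one_percent = 0.01 * AU_M            # 1% of the orbital radius, in metres
years = one_percent / RATE_M_PER_YR  # years needed at the current rate

print(f"{years:.2e} years")  # 7.48e+09 years, matching the post
```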

  • If Dragons flight is talking about 7.5 billion years from now, it is still the same game as usual; nothing has changed. Whether the earth survives is still even money (50/50). Mercury is definitely going to go. Venus's odds of doom are obviously higher than 50/50, and Mars could be doomed too, though that's highly unlikely. When the sun expands, its surface area will increase drastically, and then the sun's mass and gravity will probably climb. That is possibly another story.--69.229.240.187 (talk) 04:26, 3 June 2009 (UTC)[reply]
Surface area expands OK, but why would mass/gravity climb? It will only decrease. - manya (talk) 04:54, 3 June 2009 (UTC)[reply]
We don't even know whether the earth will still exist at that time. There has got to be some theory that gets the earth engulfed; it's still 50/50 odds.--69.229.240.187 (talk) 05:00, 3 June 2009 (UTC)[reply]
69': You have the ending of the sun all wrong. When it runs out of hydrogen and starts running on helium, it suddenly starts generating a LOT more energy (but not for very long). It blows off an immense amount of material - which is certainly enough to kill everything on earth, blow away its atmosphere and oceans and generally trash the place. Once that's happened, both the mass and gravity of the sun are a lot less and the gravity can no longer stop the sun from expanding due to photon pressure from all the energy it's putting out - THAT'S why it expands. It doesn't gain mass or gravity - quite the opposite in fact!
You have to think of stars as fighting a continual battle between their immense gravity trying to collapse them into something much smaller and more exotic (like a black hole or a neutron star) and the photon pressure from the energy they put out that's trying to inflate them. If the gravity loses the battle, the star grows to a larger, dimmer red giant - if gravity wins, it shrinks to a white dwarf, a neutron star or a black hole depending on how much gravity there is. Ironically - the bigger the star is, the smaller it ends up being! The Sun - being relatively tiny gets bigger - but NOT heavier. SteveBaker (talk) 12:47, 3 June 2009 (UTC)[reply]
As the core fills up with heavier and heavier atoms it becomes denser, so the pressure at the core increases (due to the stronger gravity) and the fusion rate increases. The sun expands because it is producing so much more energy. The expansion reduces the pressure in the core and slows down the rate of fusion. It's a feedback process. (Disclaimer: I am not an expert.) just-emery (talk) 20:04, 3 June 2009 (UTC)[reply]
I'm not getting the whole thing wrong, just mass and gravity. I didn't know that before, but I've got it now. Yes, when the sun gets bigger, its mass and gravity plummet; I had the weight-size thing upside down. Yes, I know neutron stars and black holes come from high-mass stars, not the sun, since our sun is an average-mass star. Yes, I know that once the sun runs out of hydrogen, the earth's oceans and life will go first, and the earth will become a hell until its atmosphere runs out and it is charred garbage. But have you seen Formation and evolution of the Solar System#Future yet? They said that because of tidal interaction, the earth will get destroyed when the sun reaches its peak size, when its mass and gravity drop suddenly. The studies from 6 months ago haven't changed yet. Yes, if the earth survives, it will be a trashed, hellish, blacked-out scorchland with nothing on it. Or maybe the earth will be gone, destroyed by the sun. I don't know what the tidal interaction between the sun and earth is about.--69.229.240.187 (talk) 22:37, 3 June 2009 (UTC)[reply]
Actually, what is this tidal-force thing about when the sun gets to its peak diameter? How can tidal forces happen when gravity and mass drop?--69.229.240.187 (talk) 00:32, 4 June 2009 (UTC)[reply]
All the information is in the sources cited at Formation and evolution of the Solar System#Notes and references, numbers 86-89, if my questions aren't fully clear.--69.229.240.187 (talk) 02:24, 4 June 2009 (UTC)[reply]


I haven't read all those posts yet, but the Earth really is moving away from the Sun, and it's not due to nuclear fusion: http://www.skyandtelescope.com/news/46618862.html. That article suggests tidal interactions between Earth and the Sun. Sky and Telescope is a reputable amateur astronomy magazine, but I don't know how scientifically reliable that specific article is. --Bowlhover (talk) 10:54, 4 June 2009 (UTC)[reply]
The main factor in the earth moving away from the sun is loss of mass. It's not only the earth: all the planets, and maybe moons versus their planets (Titan and Saturn), are also moving away, as are the Galilean moons versus Jupiter. From section 15 they said it is because of the tidal bulge, so the earth is both moving farther from and closer to the sun. The net result is that when the sun gets old and its hydrogen starts to deplete, the gravitational pull lessens and all the planets and moons move farther out. The tidal bulge probably acts like friction and slows down the earth's orbit, but my question is how the drag between the sun and the earth shrinking the orbit works.--69.229.240.187 (talk) 22:32, 4 June 2009 (UTC)[reply]

MM Experiment

When Michelson and Morley did their experiment, they were looking for variations in the brightness of the light, right? I was under the impression that the split beams were recombined into an eyepiece, and that as they rotated the apparatus they were looking for variations in brightness, because they knew they couldn't make the arms exactly the same length. They weren't looking for an actual fringe shift, just the effect. —Preceding unsigned comment added by 24.171.145.63 (talk) 00:53, 3 June 2009 (UTC)

Basically you are correct. They could not measure the exact length of each arm with sufficient accuracy with any instrument at their disposal, but by looking at the interference pattern, you can observe changes in the difference between the lengths. When I set up an interferometer I used a laser, so I had a nice long coherence length to work with. With a laser, you start with two paths that are roughly the same length. As you vary one length by one wavelength, you can observe the pattern shift through an entire cycle. I used a mirror mounted on a piezoelectric actuator to vary the length through several wavelengths and observed the repeating cyclic pattern. When using white light as Michelson did, you need to get the paths a lot closer to the same length to start with. -Arch dude (talk) 01:28, 3 June 2009 (UTC)[reply]
Just one more quick question:

http://spiff.rit.edu/classes/phys314/images/mm/mm3_rot.jpg On pg. 336 it says (a little below the middle) that if the whole apparatus is turned through 90 degrees the fringe shift would double. Why? —Preceding unsigned comment added by 24.171.145.63 (talk) 04:34, 3 June 2009 (UTC)

That's what would happen if the speed of light *did* depend on motion through the "luminiferous aether". In actual fact, it doesn't, so there is no change. --Tango (talk) 14:56, 3 June 2009 (UTC)[reply]
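For context, the aether-based prediction the apparatus was designed to detect is usually written as Δn ≈ 2L(v/c)²/λ for a 90° rotation. A minimal sketch using commonly quoted values for the 1887 apparatus (arm length ~11 m, orbital speed ~30 km/s, λ ~500 nm — all assumed here, not taken from the page under discussion):

```python
# Classical aether prediction for the Michelson-Morley fringe shift
# when the apparatus is rotated through 90 degrees:
#   delta_n ~= 2 * L * (v/c)**2 / wavelength
L = 11.0             # effective arm length in metres (approximate)
v = 3.0e4            # Earth's orbital speed, m/s
c = 3.0e8            # speed of light, m/s
wavelength = 5.0e-7  # ~500 nm visible light

delta_n = 2 * L * (v / c) ** 2 / wavelength
print(f"expected shift: {delta_n:.2f} fringes")  # expected shift: 0.44 fringes
```

The observed shift was far smaller than this prediction, which is why the experiment is counted as a null result.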

Smallest number of neutrons

What is the smallest number of neutrons to start the formation of a Black Hole?

---- Taxa (talk) 01:04, 3 June 2009 (UTC)[reply]

Neutrons do not form black holes. Generally, black holes are formed by collapsed stars; as such, they are called Stellar black holes (there are other types of black holes, but when most people talk about black holes, they mean stellar black holes). According to our article, which I link, there are several factors besides the mass of a star which will lead to it forming a black hole, but the threshold seems to be roughly 20 solar masses. That is, stars more massive than about 20 times our sun will generally form black holes upon their death, while smaller stars will not. --Jayron32.talk.contribs 02:57, 3 June 2009 (UTC)[reply]
Neutrons typically form neutron stars, which can not collapse to small enough size to form a black hole. This is because the Pauli Exclusion Principle forbids two neutrons from occupying the same spatial location with the same quantum state. This quantum-mechanical explanation can be "summarized": even though they have no electric charge, neutrons will repel each other if they get squished close enough together via a repulsive nuclear "force" (in truth, this is not a repulsive force because it can't be written as the gradient of an energy potential, which is why the quantum mechanical explanation is "better", but you can sort of conceptualize the idea). Pure neutrons can't get close enough together for gravity to dominate (which is required for black hole formation) - they must start off with other particle types present, or accrete extra mass in order to form a black hole. Nimur (talk) 04:02, 3 June 2009 (UTC)[reply]
Actually the degeneracy pressure can be expressed as a gradient of a potential in the mesoscopic limit. It's still not a force in the everyday understanding of the word as applied to isolated nucleons, but the potential formulation is plenty useful. For example, the Tolman–Oppenheimer–Volkoff limit describes the point at which degeneracy pressure is no longer able to counter gravity and a neutron star would inevitably become a black hole. According to the article it is between 1.5 and 3 solar masses (with uncertainties due to limited understanding of the behavior of nuclear matter at extreme density). So a neutron star can have at most something less than 3 solar masses, which translates to ~4×10^57 neutrons. Dragons flight (talk) 04:30, 3 June 2009 (UTC)[reply]
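The quoted "a few times 10^57" figure is easy to reproduce. A minimal sketch with assumed constants (solar mass ≈ 1.989×10^30 kg, neutron mass ≈ 1.675×10^-27 kg), ignoring gravitational binding energy:

```python
# Neutron count for a neutron star at the upper end of the TOV limit.
# Assumptions: ~3 solar masses, binding energy neglected.
M_SUN = 1.989e30         # kg (assumed value)
M_NEUTRON = 1.6749e-27   # kg (assumed value)

n_neutrons = 3 * M_SUN / M_NEUTRON
print(f"{n_neutrons:.1e} neutrons")  # 3.6e+57 neutrons, i.e. a few times 10^57
```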
Of course all this applies for a neutron star under only gravitational contraction. If some other force compresses it, it should be able to go black earlier. Romulan force fields come to mind, but more realistically: What if two minimum-mass neutron stars smash into each other? Gamma ray burst? --Stephan Schulz (talk) 13:19, 3 June 2009 (UTC)[reply]
If you're asking how big a neutron star has to be before its gravitational force is too large for it to keep from collapsing into a black hole, I have no idea. If you're asking for the smallest number required for a black hole to be physically possible, regardless of what forces them together, the answer is a Planck mass divided by the mass of a neutron, which is 2.1764411×10^-8 kg / 1.6749×10^-27 kg ≈ 1.3×10^19 neutrons. I think it's actually somewhat less than that, as the energy required to force the neutrons together would raise the mass. Also, the black hole would evaporate almost instantly. — DanielLC 21:45, 3 June 2009 (UTC)[reply]
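Checking that ratio with assumed CODATA-style values (a sketch, not authoritative): the quotient of the Planck mass by the neutron mass comes out near 1.3×10^19.

```python
# Minimum neutron count if a black hole must contain at least one Planck mass.
# Assumed constants (CODATA-style values):
M_PLANCK = 2.176434e-8     # kg
M_NEUTRON = 1.674927e-27   # kg

n_min = M_PLANCK / M_NEUTRON
print(f"{n_min:.2e} neutrons")  # 1.30e+19 neutrons
```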
Why does a black hole need to be at least the Planck mass? I believe a smaller black hole would have a Schwarzschild radius of less than the Planck length, which is a somewhat meaningless concept, but does that mean the black hole can't exist, rather than just that our theory of black holes breaks down? --Tango (talk) 22:03, 3 June 2009 (UTC)[reply]
The latter. We don't have an accepted theory that can understand gravity at such small scales, so whether a black hole could have less than a Planck mass of material or be less than a Planck length in size is unknown. General relativity would no longer be an adequate theory to describe such objects. Dragons flight (talk) 08:28, 4 June 2009 (UTC)[reply]
But could you (hypothetically) take a single neutron and accelerate it until its mass exceeded the Planck mass, thus creating a black hole from one neutron (plus a lot of energy) ? Gandalf61 (talk) 15:59, 4 June 2009 (UTC)[reply]
The energy used to accelerate a particle does not help create a black hole; after all, there will always be a coordinate system in which the particle is at rest. Dauto (talk) 18:18, 4 June 2009 (UTC)[reply]
Okay, how about colliding two neutrons travelling in opposite directions at near light speed - maybe in a galaxy-sized LHC ? Could that (hypothetically) generate a high enough energy density to create a quantum black hole ? Gandalf61 (talk) 12:45, 5 June 2009 (UTC)[reply]
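For a sense of scale: for two equal head-on ultra-relativistic beams, the invariant (centre-of-mass) energy is √s = 2E, so reaching one Planck mass of invariant mass needs roughly half the Planck energy per beam. A minimal sketch with assumed constants (not taken from the thread):

```python
# Per-beam energy for two head-on neutrons to reach one Planck mass
# of invariant mass (sqrt(s) = 2E for equal ultra-relativistic beams).
M_PLANCK = 2.176434e-8   # kg (assumed value)
C = 2.9979e8             # m/s
J_PER_GEV = 1.602e-10    # joules per GeV

planck_energy_gev = M_PLANCK * C**2 / J_PER_GEV   # ~1.22e19 GeV total
per_beam_gev = planck_energy_gev / 2              # each neutron's energy

print(f"per-beam energy: {per_beam_gev:.1e} GeV")        # ~6.1e+18 GeV
print(f"vs ~7e3 GeV LHC beam: {per_beam_gev / 7e3:.0e}x")  # ~15 orders of magnitude
```

That gap of roughly fifteen orders of magnitude over present accelerators is why a "galaxy-sized LHC" is the right mental picture.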
I think the problem lies in the definition of what a "black hole" is. A single point-sized particle (a neutron, for example) is a black hole according to some definitions. (Mathematically, there is some distance that lies outside the (zero) size of the particle at which its gravity becomes so strong that light cannot escape.) Does that make it a black hole? Well, maybe, maybe not. You certainly can't get close enough to a neutron to be 'sucked in' as you would with a black hole created by a collapsing star. I think the problem here is not what nature does or does not do - but merely that we've decided to attach a name to a somewhat flawed definition. SteveBaker (talk) 17:26, 4 June 2009 (UTC)[reply]
If a neutron is a black hole, then the no-hair theorem is not valid. Dauto (talk) 18:18, 4 June 2009 (UTC)[reply]

How to cure fungal plant pathogens

My Mango tree with leaf spots.

My Mango tree has some kind of fungus which is causing its leaves to develop spots. I think it might be Cercospora capsici, but I am not sure, since there are many different kinds of plant fungus that might have similar outcomes.

I would like to know what I can do to get rid of the infestation and also how I can prevent it in the future. I have added a photo of the actual plant in question to better illustrate the problem. Joel M. (talk) 02:46, 3 June 2009 (UTC)[reply]

Sorry for the lack of responses - have you tried contacting a "master gardener"? Where I live, the local university seems to loan them out to public places like shopping malls and tree nurseries, where they run a (physical) reference desk of their own for a few hours a weekend. (Photo moved) Tempshill (talk) 20:30, 3 June 2009 (UTC)[reply]
Fungicide has a couple of natural things you can try that aren't that difficult to find. (If you can't find them at your local grocery store, health food store, drug store or pharmacy, try ordering online.) I'm not sure how specifically you'll have to target your fungus. Most fungicides seem to be pretty broad spectrum. 71.236.26.74 (talk) 15:22, 4 June 2009 (UTC)[reply]

Understanding another part of the MM experiment

Scrap my old questions; I get them now. Anyway - I hope this doesn't go against your reference desk policy, since this isn't homework that I'm refusing to try. Here on page 340, first line, it says something along the lines of the width of the fringes varying from 40-60 divisions (I'm interpreting it to mean there were between 40 and 60 fringes). It then says the average is 50 and says one division means 0.02 wavelength. Where did this last statement come from?

One divided by 50 is 0.02. They are taking the reciprocal. The "divisions" are not number of fringes - it's the width of each fringe, measured in "Divisions of the screw head" instead of "millimeters" or "centimeters". I don't know why they use such arbitrary units - it's just like the number of marks on a ruler or optical viewer or other measurement device, and needs some conversion to standard units. Nimur (talk) 15:55, 3 June 2009 (UTC)[reply]
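The conversion in the reply above can be sketched directly (the 40-60 division figures are from the page under discussion; the rest is just the reciprocal):

```python
# One fringe width spans ~50 screw-head divisions on average, so one
# division corresponds to 1/50 of a fringe, i.e. 0.02 wavelength.
divisions_per_fringe = (40 + 60) / 2   # average of the quoted 40-60 range
wavelength_per_division = 1 / divisions_per_fringe
print(wavelength_per_division)  # 0.02
```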

Lens and mirror placed in water

We have been given following question in summer assignment:-
"A concave mirror and a convex lens are placed in water. What change, if any, do you expect in their focal length"
Since f depends only on the radius of curvature (f = R/2 for a mirror), and the curvature of the lens and mirror won't change if placed in water, am I correct in assuming that the focal lengths won't change? --shanu 05:36, 3 June 2009 (UTC)

For a reflective system, the main concern is the geometry of the surface; but for a refractive system, I think you need to consider the refractive index when calculating the focal length. DMacks (talk) 05:51, 3 June 2009 (UTC)[reply]
Take a look at the articles on mirror and refraction. A key point in the mirror article is that "a beam of light reflects off a mirror at an angle of reflection that is equal to its angle of incidence." A key point in the refraction article is "refraction occurs when light waves travel from a medium with a given refractive index to a medium with another." You may need to read more of the refraction article to better understand what this is saying. Once you have a better understanding of the two principles, you can revisit the question of what effect water immersion has. A couple of related questions that may test your understanding: (1) When swimming, if you open your eyes underwater, can you see as well as above water? Why? (2) If you wear goggles or a face mask, does this change your vision? Why? (Hint: What shape is the outer surface of the face mask?). Feel free to ask follow-up questions after you've had a look. -- Tcncv (talk) 06:00, 3 June 2009 (UTC)[reply]
The question is discussed here. Cuddlyable3 (talk) 09:14, 3 June 2009 (UTC)[reply]
The key here is in the precise definition of refractive index. Since this is borderline homework - I'll point you to the "Definition" section of Refractive index - read it carefully. SteveBaker (talk) 12:35, 3 June 2009 (UTC)[reply]
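To make the lens/mirror contrast concrete, here is a minimal sketch of the thin-lens lensmaker's equation in a surrounding medium, 1/f = (n_lens/n_medium − 1)(1/R1 − 1/R2), applied to a hypothetical symmetric biconvex glass lens (the radii and indices below are assumed illustration values, not from the assignment). A mirror's f = R/2, being purely geometric, does not involve the medium at all.

```python
# Thin-lens focal length in a surrounding medium (lensmaker's equation):
#   1/f = (n_lens/n_medium - 1) * (1/R1 - 1/R2)
def focal_length(n_lens, n_medium, r1, r2):
    return 1.0 / ((n_lens / n_medium - 1.0) * (1.0 / r1 - 1.0 / r2))

R1, R2 = 0.10, -0.10   # radii of curvature in metres (sign convention)
f_air = focal_length(1.5, 1.00, R1, R2)    # glass lens in air
f_water = focal_length(1.5, 1.33, R1, R2)  # same lens in water

print(f"air: {f_air:.2f} m, water: {f_water:.2f} m")  # air: 0.10 m, water: 0.39 m
```

Under these assumed numbers the focal length roughly quadruples in water, because the index contrast between glass and water is much smaller than between glass and air.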

Thanks for your help. By the way, does it also mean that R = 2f won't work for a lens in water? And also, since the thin lens formula will change, microscopes will be affected when placed in water.--shanu 05:21, 4 June 2009 (UTC) —Preceding unsigned comment added by Rohit..god does not exist (talk • contribs)

That should also help. Dauto (talk) 15:36, 3 June 2009 (UTC)[reply]
For an important application, see Immersion lithography. -Arch dude (talk) 08:11, 4 June 2009 (UTC)[reply]

Sperm Count

Good day,

I was wondering - what is the time frame for the sperm count to reach its maximum once the person has ejaculated? (How long will it take to go from a count of 0 to 20,000,000?)

Thank you

Marcello —Preceding unsigned comment added by Saladza (talkcontribs) 07:21, 3 June 2009 (UTC)[reply]

If I understand your question correctly, I do not believe it's normal for the sperm count to reach zero even after multiple ejaculations in a short space of time. If a sperm count is zero, that would likely indicate fertility problems like azoospermia, or perhaps that the person has had a vasectomy. In fact, according to our article, "How long the man has abstained prior to providing the sample for analysis affects the results. Longer periods of abstinence correlate with poorer results - one study found that men with repeated normal results produced abnormal samples if they abstained for more than 10 days. It is recommended not to abstain for more than one or two days before providing the semen sample for analysis", although it's not clear if this includes sperm count or other factors like motility and morphology. Note that sperm counts are only really meaningful when measured from ejaculations. Nil Einne (talk) 07:45, 3 June 2009 (UTC)[reply]

Three Step Process Of Lethal Injection

Why is it that we have a three step process for lethal injection, while pets only have one simple injection when they are getting put down? Wouldn't it be better just to use the one? It'd be cheaper and wouldn't last as long, and there have been no known accounts of pets still being alive and conscious yet paralysed when the final injection stops the heart and collapses the lungs (obviously because there is only one injection that kills them outright). Why not just do it like that? --KageTora - (영호 (影虎)) (talk) 10:52, 3 June 2009 (UTC)[reply]

You are evidently talking about the US procedure. There has been much debate about this - our article Lethal injection covers this in some detail. The idea is to use the first shot to briefly anaesthetise the victim - the second to relax the muscles so there is no embarrassing thrashing around - and the third to stop the heart. There is much controversy over this because there is a belief that the first injection might not work - or might not work for more than a couple of seconds - resulting in the victim being conscious but unable to move or speak (because of the muscle relaxant) while a painful heart attack ensues from the third injection...also the possibility of surviving the heart attack and then slowly suffocating while the muscle relaxant prevents breathing. For smaller animals (cats, dogs, etc) a single massive shot of barbiturates causes unconsciousness and then both heart stopping and a cessation of breathing in about 30 seconds. The problem is (as our Animal euthanasia article indicates), this doesn't work well on large animals because the barbiturate dose required is too high. I can only presume that humans fall into the "large animals" category...hence the controversial three-step process. SteveBaker (talk) 12:29, 3 June 2009 (UTC)[reply]
  • Please avoid attempting a debate here by using loaded words like "victim". --Anonymous, 03:45 UTC, June 4, 2009.
Please avoid being uneducated in the use of the English language...pick up a dictionary sometime. From Wiktionary, meaning (2) for the word "victim" is: "Anyone who is physically harmed by another."...being killed by the state executioner certainly counts as being "physically harmed". I chose that word with great care. If you choose to pick a different, and perhaps more 'loaded' meaning - that's a debate of your own making! SteveBaker (talk) 15:39, 4 June 2009 (UTC)[reply]
To the talk page, please. --Anon, 08:00 UTC, June 5, 2009.
Also, a different ethical standard is clearly being applied to animal euthanasia than to human euthanasia (which is used more sparingly). It seems to follow that the procedures would reflect that difference in ethical concerns. Nimur (talk) 16:00, 3 June 2009 (UTC)[reply]
Having used barbiturates to anesthetize animals for surgery, I can tell you that they are pretty tricky. Even huge doses with small animals don't lead to quick death with 100% reliability -- once in a hundred times the animal continues to breathe at a very slow rate for quite a while. Barbiturates don't actually stop the heart, by the way -- so suppression of breathing is the way they kill. Looie496 (talk) 18:45, 3 June 2009 (UTC)[reply]
Animal euthanasia says there is cardiac arrest - is it incorrect? SteveBaker (talk) 20:03, 3 June 2009 (UTC)[reply]

Pushing a helicopter

Suppose there's a helicopter (something small like a Bell 206, not a Chinook) hovering a few feet off the ground. Would it be possible for a human standing on the ground to push/pull the helicopter enough to move it, in any direction, without the aid of any other equipment? — Matt Eason (Talk • Contribs) 11:06, 3 June 2009 (UTC)[reply]

Well, there is no friction and very little air resistance involved - so all you've got to do is overcome the inertia - which is considerable. The classic Bell 212 helicopter weighs in at 3000 to 5000kg depending on fuel and passenger load (actually, technically, what we care about here is the mass not the weight - but it's still 3000 to 5000kg) so it would take quite a bit of effort to get it moving at any kind of speed - and quite a bit more to stop it: Force = Mass x Acceleration - so with a normal kind of force and a big mass, you don't get much acceleration. However even the smallest push would impart some acceleration and if you continue to push, the speed will gradually get faster and faster until you couldn't keep up with it anymore - so technically, in the purest theoretical sense - yes, you could push the helicopter around with no particular problem.
However, a helicopter doesn't just hover in one place passively. If the ground beneath has even a slight slope or unevenness - or there is any kind of wind - or a vertical surface is within maybe 50' in any direction - or if the helicopter isn't set up just perfectly - then without the pilot actively working to keep the helicopter still - it would drift off and spin all over the place at accelerations much greater than you could overcome with your puny muscles! So realistically, the pilot (or perhaps some autopilot hovering aid) is actively and continually making tiny adjustments to keep the helicopter still - and those adjustments would easily counter your puny efforts to move it. If he stops making the adjustments - then the helicopter is going to drift and you're not going to be able to stop it or make a significant difference.
So I'm pretty sure that the theoretical answer is "Yes" and the practical answer is at best "No" and at worst "Not a sufficiently precise question to yield a meaningful response"! SteveBaker (talk) 12:19, 3 June 2009 (UTC)[reply]
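The F = ma point above can be put in numbers. A minimal sketch, assuming a 3000 kg helicopter (the light end of the Bell 212 range quoted above) and a sustained 200 N push (an assumed figure for a strong shove), with drag and pilot corrections ignored:

```python
# How fast does a steady push move a free-floating helicopter?
# Assumptions: 3000 kg mass, 200 N sustained push, no drag or corrections.
MASS = 3000.0   # kg
FORCE = 200.0   # N

a = FORCE / MASS       # acceleration from F = m * a, in m/s^2
v_after_10s = a * 10   # speed after pushing steadily for 10 seconds

print(f"a = {a:.3f} m/s^2, v(10 s) = {v_after_10s:.2f} m/s")
# a = 0.067 m/s^2, v(10 s) = 0.67 m/s
```

So under these assumptions a determined pusher gets the aircraft to a slow walking pace in about ten seconds, which matches the "theoretically yes, practically swamped by the pilot's corrections" conclusion.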
Very interesting - thanks Steve — Matt Eason (Talk • Contribs) 12:30, 3 June 2009 (UTC)[reply]
Here is a guy doing the fairly common stunt of pulling a train with his teeth. They claim 300 tonnes in this case, and the situation seems analogous to Steve's theoretical perfectly tuned helicopter, with far greater mass. --Sean 13:10, 3 June 2009 (UTC)[reply]
The practicality of this can be changed to a similar problem: Can the kick of shooting a gun move a helicopter? Humans are capable of countering the kick of large rifles. However, it was noted during Vietnam that firing rifles that were bolted down to the helicopter would cause it to rock enough to notice. So, humans can push with more force than the kick of the rifle and the kick of the rifle is enough force to cause the helicopter to rock. So, it is feasible that a human would have enough pushing force to cause a helicopter to noticeably move. -- kainaw 13:54, 3 June 2009 (UTC)[reply]
"Humans are capable of countering the kick of large rifles." But a human probably cannot counter the kick of a .50 caliber machine gun fired rapidly - it's sort of a totally different animal. Even a small machine gun like a squad automatic weapon requires a bipod or a tripod to keep it from blowing back out of control. A large gun such as a GAU-19 (standard fare on a Black Hawk) is going to impart tens or hundreds of kilogram-meters per second of momentum (according to our article, 500 pounds force sustained). I doubt any human could sustain such a push. Nimur (talk) 16:06, 3 June 2009 (UTC)[reply]
There is no doubt about that. As TotoBaggins points out, humans can move trains that are far heavier than a helicopter. The only problem is the one Steve points out about the natural movement of the helicopter due to the difficulties of keeping it steady are going to be greater than the movement a human could produce. --Tango (talk) 15:37, 3 June 2009 (UTC)[reply]
You would probably be pushing below the center of mass, causing the helicopter to pitch or roll a little depending on where you push. I'm not sure which net effect this would have without corrections by the helicopter. PrimeHunter (talk) 16:04, 3 June 2009 (UTC)[reply]
OK - I'm going to have to explain helicopter aerodynamics...bear with me! A civilian helicopter - or a heavy-lift military one - would have rotors that bend upwards a little (it's called the "coning angle"). If you think about the helicopter's rotors - instead of being a couple of separate blades that are sweeping around really fast - imagine them as if they made a solid disk...that's kinda what it looks like. So when the weight of the helicopter's body is supported by the rotors - the rotor disk is pulled down into a cone (with the point of the cone pointing downwards). Now, when the helicopter rolls (or pitches) that tilts that cone - right? So one side of the rotor disk/cone is now more horizontal and the other side is more steeply tilted upwards. As the rotors travel around the surface of the cone - the one that's moving more horizontally to the ground pushes the air directly down towards the ground - getting the maximum possible upward push - but the rotor that's on the opposite side is nowhere near horizontal - it's at some angle to the ground - so the air it pushes goes out at an angle. This does two things. Firstly, it means that when you push on the skids at the bottom of the helicopter - the steeper blade (which is on the far side of the helicopter) pushes the air slightly away from you - which makes the helicopter try to push back against you. But because that air isn't being pushed as hard against the ground as on the blade that's nearest to you - the far side of the helicopter loses a bit of lift - and the side that's nearest to you gains a bit...for as long as you push. This tends to make the helicopter level back out again. So the effect of this coning angle thing is to give the aircraft some inherent stability...if you make it roll - it tries to roll back to the level position again. That's actually the only thing that makes the helicopter flyable...it would be impossible to have good enough reaction time if it didn't do that.
So the fact that you are pushing below the center of gravity turns out not to matter very much! The other thing that helps with stability is that the center of gravity is about halfway up the helicopter - where the engine, gearbox and fuel tank probably are. However, the lift from the rotors pulls upwards from the top of the 'mast' - so (in physics 101 terms) you have a 'moment' - you have gravity acting on the center of gravity and the lift acting on the top of the mast. So long as those two forces are in a straight line - nothing much happens. But if the helicopter rolls (or pitches) the two forces are no longer in a straight line - which imparts a rotation - which in turn tries to keep the helicopter level. It's as if the body of the chopper was a plumb-bob hanging under the rotor disk. Anyway - I didn't want to complicate the earlier discussion with all of this complicated stuff. Suffice to say - this ISN'T the reason you couldn't (in theory) push a hovering helicopter. SteveBaker (talk) 19:58, 3 June 2009 (UTC)[reply]
Is it possible that the "pushing back against you" bit you mention would be strong enough to actually result in the helicopter moving towards you when you pushed it away from you? That would be a rather interesting bit of trivia. --Tango (talk) 22:00, 3 June 2009 (UTC)[reply]
I don't see how. But helicopters are complicated machines - all sorts of bizarre gyroscopic effects - and the way the tail-rotor figures into things...it's tough to reason all of the forces out. SteveBaker (talk) 22:50, 3 June 2009 (UTC)[reply]
It's irrelevant to the original question, but it might be worth noting that the effect Steve describes in relation to coning has an analog on some airplanes. Rather than both wings being in a single plane, they may be tilted slightly to form a gentle V-shape, called "dihedral", and this contributes to the airplane's stability in the same way that Steve describes. On the other hand, on fighter aircraft where a bit of instability is desired so the plane is more maneuverable, the tilt may be reversed, which is called "anhedral". See dihedral (aircraft).
One other point. Steve mentioned the helicopter pushing the air "against the ground". Either an airplane or a helicopter makes its lift by pushing air toward the ground, i.e. down. Newton's third law and all that. However, if the vehicle is near enough the ground that the air is pushed substantially against the ground, it gets more lift, as the ground bounces it back up: see ground effect in aircraft.
--Anonymous, 04:02 UTC, June 4, 2009.
"… the wing keeps the airplane up by pushing the air down" eh? True but useless. An aerodynamic force on a body moving through a fluid is accompanied by an equal and opposite force on the fluid, but one does not "make" the other. A body which pushes air down generates lift—a body which generates lift pushes air down.—eric 06:43, 4 June 2009 (UTC)[reply]
(Actually - I said it pushes the air "towards" the ground - not "against" it - I picked my words carefully.) I assume that if you're pushing against a helicopter - that it's close enough to the ground for "ground effect" to matter. That greatly increases the amount of lift the helicopter has - but it doesn't change the 'coning angle' effect - which is indeed similar to the effect of dihedral on a fixed-wing aircraft - except that in the case of the helicopter it helps pitch stability as well as roll.
Eric's criticism is technically valid but I can't agree that the original statement was useless - it enabled both him and me (and hopefully, the OP) to perfectly well understand what was being described...and I think it's a clearer statement to the layman. We have to tailor our ref desk responses to a typical layperson - because we can't make assumptions about their scientific background. Saying that pushing the air down keeps the plane up is a perfectly valid statement - it merely fails to explain the REASON why pushing air down keeps the plane up. But then we also say that gasoline propels your car along the road without going into the details of the fluid dynamics, chemistry, thermodynamics and mechanics that makes that happen. Nitpicking is unnecessary here - so long as the meaning is clear. 15:32, 4 June 2009 (UTC)
The lice were in Anonymous' hair, not yours. You were explaining dihedral, he was directly discussing lift, and based on his response I suspected he did not fully understand your shorthand. Apologies if this was an incorrect assumption.—eric 16:23, 4 June 2009 (UTC)[reply]
I think there's been some misreading here. I was the one that mentioned dihedral (for airplanes); Steve was talking about coning (of a helicopter rotor) and I pointed out, as a point of interest, that the principle can extend to airplanes. As to the other point about "against the ground", if Steve would kindly reexamine his carefully worded posting, he will find the phrase in it. --Anon, 17:17 UTC, June 4, 2009.

Monitoring blood glucose levels

A friend of mine is diabetic and she usually tests herself once before a meal and two hours afterwards. I was curious why she has to wait two hours. After a meal, does blood sugar slowly climb to its maximum point after two hours? Or does it quickly surge high and then come back to a "normal" level over a two-hour period? Why not test one hour after eating? Or three? --68.92.139.62 (talk) 12:38, 3 June 2009 (UTC)[reply]

It depends to some extent on what she's eating. If it is high in sugar then it will increase her blood sugar level pretty quickly. If it is more complex carbohydrates, or mostly protein, say, then it takes longer to digest. You may find glycemic index interesting. --Tango (talk) 12:51, 3 June 2009 (UTC)[reply]
The timing of glucose testing in diabetes management mostly has to do with the medication regimen the patient is taking. In patients who take insulin, there are different formulations that have short-term (i.e. within a couple hours) and long-term (i.e. over the course of 24 hours) effects. See insulin therapy for details. Every patient will have a different regimen, depending on their own situation. One typical regimen is to take long-term insulin to maintain blood sugar throughout the day, coupled with short-term insulin doses corresponding to each meal to cope with the influx of glucose that happens after eating. The "fasting" blood test (before the meal) is meant to verify that the long-term insulin dose is correct. The 2-hour "post-prandial" blood test checks that the insulin that was taken with the meal was appropriate. You can see from the glycemic index article that in normal individuals, the blood glucose should be about back to normal by 120 minutes = 2 hours. This is the result of the pancreas releasing a burst of insulin, which is what is being simulated by the dose of insulin at mealtime. This, along with the usual time of action of the short-term insulin, is why 2 hours is a good time for a post-prandial check. Often, the patient will be given instructions about what to do if the post-prandial glucose level is too high (take some more insulin) or too low (eat something). The doctor managing the diabetes treatment will look at the test records to make sure that the dosage regimen is appropriate. --- Medical geneticist (talk) 13:33, 3 June 2009 (UTC)[reply]
Very interesting. Now my friend doesn't take insulin. It's all diet controlled. --70.167.58.6 (talk) 23:17, 4 June 2009 (UTC)[reply]

Energy of a point charge

I have read that the self-energy of a point charge, that is, the energy required to assemble a point charge, is infinite, and I am still struggling to come to terms with it. Does it mean that there cannot exist any point charges in the universe, or does it mean that it is just an embarrassing result of classical electrodynamics? I mean, you can say even the fundamental units of charge, the electron and the proton, are not point charges, but I am asking in principle. Also, we have found that even these fundamental entities are made of quarks, and what quarks are made up of I don't know, but is there a limit? Say we somehow find the structure of a quark, and find it's made up of little xions, and now we start to analyze these xions - it's back to square one... is there a limit to exploring the structure of matter? And going by this notion that there can be no point charges, will we ever be able to find an end to this non-stop search inside an atom? Rkr1991 (talk) 12:52, 3 June 2009 (UTC)[reply]

The energy required to reduce the separation between two like charges increases as they get closer together. Roughly speaking, the energy for the last bit involves dividing by zero, which gives you infinity (the more precise answer involves improper integrals). We often think about fundamental particles as being point particles, but that's really just because we don't know what they are; talking about sizes at that scale is largely meaningless because of quantum mechanical effects. So we have to make exceptions for fundamental particles, but any charge made up of more than one such particle can't have zero size. --Tango (talk) 14:39, 3 June 2009 (UTC)[reply]
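As a side note, the "dividing by zero" mentioned above can be made explicit with the standard textbook integral for the field energy of a charge q outside a radius a (a sketch, not part of the original discussion):

```latex
U \;=\; \int_a^\infty \frac{\varepsilon_0}{2}\,E^2 \, 4\pi r^2 \, dr
  \;=\; \int_a^\infty \frac{\varepsilon_0}{2}\left(\frac{q}{4\pi\varepsilon_0 r^2}\right)^2 4\pi r^2 \, dr
  \;=\; \frac{q^2}{8\pi\varepsilon_0\, a}
```

which diverges as a → 0 - the "point charge" limit.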
You can't get around this problem by supposing that point charges exist a priori, because even an always-present point charge has an electric field, and the energy in the field (½ ∫ E²) equals the energy required to assemble the point charge from infinity. The field energy shows up as inertial and gravitational mass of the particle. This is a classical result, but in quantum field theory the problem gets worse, not better. The workaround is renormalization, which amounts to treating the particles as though they had a small but nonzero size, or introducing some other small-scale cutoff with a similar effect. Fortunately the predictions of the Standard Model are independent of the cutoff scale as long as it's small enough (the Standard Model is renormalizable). This is understood to mean that the Standard Model is just a large-scale approximation to the real physics, which presumably avoids the infinity in some unknown way, possibly by being discrete or (as in string theory) by distributing the charge over a 1-D structure instead of concentrating it in a point. This is a lot like the use of calculus to model systems that we know are discrete, like fluids or biological populations. -- BenRG (talk) 16:00, 3 June 2009 (UTC)[reply]
A finite amount of charge in a one-dimensional string would also have an infinite amount of energy in its field unless it was infinitely long. The idea that the mass of an electron comes from its electric field through self-induction has long been abandoned. just-emery (talk) 20:12, 3 June 2009 (UTC)[reply]
BenRG, surely you must be aware that what you described (the introduction of a small arbitrary size) is not the renormalization itself but an essential housekeeping step - the regularization - that must be taken before the renormalization proper can be performed. There are several different ways to regularize the formally divergent integrals that show up in the solutions. Not all of those regularization schemes are based on the introduction of a cutoff. The renormalization itself is done by carefully subtracting the physically unobservable (and often formally divergent) part of those integrals and keeping only the physically relevant terms. All point-particle self-energies get taken care of that way. Dauto (talk) 21:25, 3 June 2009 (UTC)[reply]

Well, I am able to understand a few points but not certain others (having had no formal education in QM), so I can only ask where this leaves us. Does it mean we shouldn't ask questions like "what is the energy of a point charge?", or that QM has somehow overcome this problem and can say the energy of the point charge is this many joules, or that we are still in the dark and don't know what to make of things? Can there exist any point charges in the universe, or is that forbidden? Rkr1991 (talk) 05:00, 4 June 2009 (UTC)[reply]

A little bit of each. First: yes, you probably shouldn't be asking that question, because a point particle is a theoretical construct anyway, which may not be a valid description of nature; second: yes, QFT solves the problem, but it does so by sweeping it under the rug; and third: yes, we are somewhat still in the dark since we don't have a final theory yet. Dauto (talk) 17:52, 4 June 2009 (UTC)[reply]

Is it possible to set your own innards on fire?

Is it really possible to set your own innards on fire if you're chainsmoking while drinking your Neutral grain spirit (Everclear and similar) straight? --90.240.197.75 (talk) 14:46, 3 June 2009 (UTC)[reply]

The autoignition temperature of ethanol is 425°C ([18]). I doubt the smoke from a cigarette could get it up to that temperature, although I can't find a source for the temperature of cigarette smoke... --Tango (talk) 15:34, 3 June 2009 (UTC)[reply]
I'm trying to imagine where the oxygen to maintain the combustion would come from. I guess you'd have to keep taking deep breaths, but then ... Richard Avery (talk) 15:56, 3 June 2009 (UTC)[reply]
Did you swallow the booze and then swallow the cig? Otherwise I can't imagine smoking gets temps that high inside a person. I am skeptical. 65.121.141.34 (talk) 16:04, 3 June 2009 (UTC)[reply]
It may be feasible to, with a combination of a lit cigarette and alcohol vapors, set your face and/or inside of your mouth on fire. But there is no way I could imagine that any such combination could burn your internal organs like lungs or GI-tract. --Jayron32.talk.contribs 18:13, 3 June 2009 (UTC)[reply]
I could sorta kinda imagine your breath catching fire (although it seems unlikely) - but not your throat or stomach - there is just not enough oxygen down there to sustain anything like that. SteveBaker (talk) 19:40, 3 June 2009 (UTC)[reply]
Sure, an accelerant and central nervous system depressant (Everclear), ignition source (cigarette), make it an obese person who maybe does not wash their clothes very often, and you've all the makings of a case of spontaneous human combustion.—eric 23:30, 3 June 2009 (UTC)[reply]
Was anyone else reminded of Helpless (Buffy the Vampire Slayer)? —Tamfang (talk) 18:49, 4 June 2009 (UTC)[reply]

What is VCR doing to my TV signal?

I have a coaxial cable from the aerial going into the VCR input socket, and another short coaxial cable connecting from the VCR output socket to the TV. I've noticed that when the VCR is completely unplugged from the power, rather than being just on standby, the TV signal almost disappears, with very bad, very noisy picture quality. So my question, please, is: what is the VCR doing to my TV signal? I thought the input and output sockets on the VCR were just passively connected, but apparently not. 89.243.113.64 (talk) 20:36, 3 June 2009 (UTC)[reply]

Almost exactly the same question was asked further up: Wikipedia:Reference_desk/Science#Poor_TV_picture_improves_a_lot_when_VCR_on. See if that answer is any help. --Tango (talk) 20:51, 3 June 2009 (UTC)[reply]

I've already seen that thanks. This is a different question about the same items. 89.243.74.161 (talk) 08:53, 4 June 2009 (UTC)[reply]

If the only connection between the VCR and the TV is a coaxial cable, that single cable carries either the off-air signal from the aerial OR the video from the VCR in the form of a modulated r.f. signal. The choice is made by an active switch circuit in the VCR. Without power the switch circuit can't pass either signal. Cuddlyable3 (talk) 10:13, 4 June 2009 (UTC)[reply]
The VCRs I've owned have passed the input coaxial signal to the output coax when powered off. -- Coneslayer (talk) 17:16, 4 June 2009 (UTC)[reply]
You're right that the question is largely unrelated, but as mentioned in the above question, VCRs do amplify the antenna signal. This is done, I believe, because otherwise the VCR would attenuate the signal, as it is using it. This amplification is active as long as the VCR has power, regardless of whether it is in standby or fully on. As mentioned above, most VCRs are also capable of adding an additional signal on a selectable channel to the feed so that you can receive the VCR on your TV if your TV has no other inputs. Nil Einne (talk) 00:03, 5 June 2009 (UTC)[reply]
I would expect the input to output of a VCR to include a powered buffer amplifier. (The phrase "common collector" comes to mind, for some reason.) Edison (talk) 04:57, 5 June 2009 (UTC)[reply]

Is there any connection?

Is there any correlation between people who have Irritable Bowel Syndrome and Panic Attacks? —Preceding unsigned comment added by 86.167.247.150 (talk) 21:23, 3 June 2009 (UTC)[reply]

I am not a medical practitioner and this is merely an observation from articles I read on Wikipedia. It seems that selective serotonin re-uptake inhibitors are listed as possible treatments for both conditions. I can imagine how frequent panic attacks could result from low serotonin levels, and the same could possibly be true for IBS. If you added obsessive compulsive disorder to that list then it would definitely have my vote. But again, I'm not an expert or a professional and you should probably ignore me.
It's unlikely that you'd get an official answer here as that would violate the terms of this page. 196.210.200.167 (talk) 16:32, 4 June 2009 (UTC) Eon[reply]

Space inside a Black Hole

It appears that there are other requirements to form a Black Hole besides a minimum number of neutrons[2]. Is there a complete and ordered list of requirements, in addition to a minimum number of neutrons, necessary to form a Black Hole? Also, can a Black Hole be described (or defined) as matter in space which contains no space? ---- Taxa (talk) 21:57, 3 June 2009 (UTC)[reply]

There is only one requirement for a black hole - that the matter in question be contained within a ball of radius smaller than the Schwarzschild radius corresponding to the mass of the matter. (Actually, that's for a non-charged, non-rotating black hole. Without those assumptions you need a slightly more complicated formula, but the principle is the same.) --Tango (talk) 23:19, 3 June 2009 (UTC)[reply]
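To make Tango's one requirement concrete, here is a small sketch (not from the discussion above; standard CODATA-style constants assumed) of the Schwarzschild radius formula r_s = 2GM/c² for a non-charged, non-rotating black hole:

```python
# Schwarzschild radius r_s = 2 G M / c^2: squeeze a mass inside this
# radius and it becomes a (non-charged, non-rotating) black hole.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius in metres below which the given mass forms a black hole."""
    return 2 * G * mass_kg / c**2

earth_mass = 5.972e24  # kg
sun_mass = 1.989e30    # kg
print(schwarzschild_radius(earth_mass))  # ~0.009 m: Earth squeezed to marble size
print(schwarzschild_radius(sun_mass))    # ~3000 m for the Sun
```

The point being that no minimum mass (or neutron count) appears anywhere: any mass has a Schwarzschild radius; the only requirement is fitting inside it.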

Applications of invisible light frequencies

I need a summer project for school. I understand that variable lighting is a problem in computer vision, but my physics is just not quite strong enough to tell whether there are light frequencies that are less dependent than visible light on the sun's position in the sky or on nearby lamps, and cheap to generate/capture in good quality. I have a feeling the answer is no, but it would be better to know for sure. Does anyone have any ideas? --194.197.235.28 (talk) 23:02, 3 June 2009 (UTC)[reply]

Actually this is quite commonly used in image processing. For example, a lot of toll-booths have a camera to catch the license plate of anyone who does not pay the toll. To cope with varying light conditions, the cameras are often sensitive only to infrared light, and the area is illuminated by an infrared bulb. This reduces interference from other sources of light and provides a controlled, constant illumination for the Optical Character Recognition program which will identify the vehicle. One reason this method is easy is because a lot of digital CCD cameras are already sensitive to infrared, so visible light can be filtered out (or left in as supplemental illumination). You might find Infrared photography interesting as well. We also have Thermographic camera, which describes passive infrared photography (using infrared cameras without an illuminating infrared lightbulb). These are commonly used as a type of night vision (not to be confused with low-light amplification). Thermographic infrared images make it very easy to spot vehicles, trucks, humans, and other hot objects, and are also often used in automatic image-processing (for example, automatic aim correction on a combat helicopter). Nimur (talk) 02:41, 4 June 2009 (UTC)[reply]
It's important to note that, while both are called "infra-red", the light used by thermographic cameras and the light used by standard IR night-vision cameras are very different. The latter uses "near infra-red", that is, light with wavelengths just slightly longer than that of visible red light. Thermal cameras use more distant infra-red. IR spans about 3 orders of magnitude, compared to less than one spanned by visible light. There is a far greater difference between near and far IR than between violet and red visible light. If it were up to me, I would give near and far infra-red different names (with the boundary corresponding to 100°C). --Tango (talk) 02:21, 5 June 2009 (UTC)[reply]
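That proposed 100°C boundary can be put in wavelength terms with Wien's displacement law (a quick sketch, not from the discussion; the standard Wien constant is assumed):

```python
# Wien's displacement law: a black body at temperature T emits most
# strongly at wavelength lambda_peak = b / T.
WIEN_B = 2.898e-3  # m*K, Wien's displacement constant

def peak_wavelength_um(temp_celsius: float) -> float:
    """Peak black-body emission wavelength in micrometres."""
    return WIEN_B / (temp_celsius + 273.15) * 1e6

print(peak_wavelength_um(100))  # ~7.8 um - the proposed near/far boundary
print(peak_wavelength_um(37))   # ~9.3 um - body heat, firmly in the "far" IR
```

Near-IR night-vision illumination sits below 1 µm, so a boundary near 8 µm does cleanly separate the two camera types.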
It's not the frequency of the light - after all, the computer's camera only really sees red, green and blue - and some computer vision is done in black and white just because it's easier and the color doesn't really help much. No - the problem is uneven lighting - where some things in the scene are lit by brighter light than others - or when the lights flicker or change brightness while the computer is gathering imagery - or when the light is strongly directional producing super-bright highlights and deep shadows. You really start to appreciate how amazingly adaptable human vision is. SteveBaker (talk) 02:43, 4 June 2009 (UTC)[reply]
Right - the point of using active infrared illumination is to control the illumination environment. We could probably also control the visible-light environment too - but in the case of covert military or traffic cameras, flashing visible light around the imaging target is not an option. Using "invisible" infrared allows the operator to produce uniform illumination for the computer-vision system, without directly affecting the human-perceptible parts of the environment. Also, sometimes infrared just has better signal-to-noise qualities - I used a (mostly) infrared beacon for my robotic target-tracker project last year. Nimur (talk) 02:49, 4 June 2009 (UTC)[reply]
That answered my question perfectly. Thank you. --194.197.235.28 (talk) 15:17, 4 June 2009 (UTC)[reply]

June 4

NDE recovery, fiction vs real life

I've seen it a hundred times on the tube: someone is found completely inert, often in water, and there's a moment of suspense – is the character pining for the fjords? – while CPR is attempted; then the rescuee noisily resumes breathing, and immediately is fully awake (though disoriented).

Does that really happen? —Tamfang (talk) 00:20, 4 June 2009 (UTC)[reply]

While it's possible for a victim to regain consciousness, any good CPR will break ribs. Going from unresponsive to verbal (making sounds without any meaning) is probably the best you can hope for. Certainly most CPR will not result in a save, and you can ask anyone in the field. M@$+[[@]] Ju ~ 00:33, 4 June 2009 (UTC)[reply]
In that situation, would the heart necessarily have stopped? If all that is required is mouth-to-mouth, then I think full consciousness can return pretty quickly. CPR is normally just to keep the person alive until someone with a defibrillator gets there (and even then, your chances aren't anywhere near as good as TV hospital dramas would have you believe). --Tango (talk) 01:39, 4 June 2009 (UTC)[reply]
Isn't it the case that in Real Life, the vast majority of 'flatlines' still result in death, despite the best efforts of everyone? EDIT: Also, that a defibrillator is useless in this situation, despite what TV tells us? --Kurt Shaped Box (talk) 02:45, 4 June 2009 (UTC)[reply]
Properly done CPR need not break ribs. Edison (talk) 02:58, 4 June 2009 (UTC)[reply]
I have no idea if this is actually true, but I remember reading one of those 'true medical confessions!' books ages ago in which an anonymous MD stated that it was not unknown for doctors to deliberately break the patient's ribs by performing rough CPR on a patient that they already knew was toast - if the relatives were watching. The thinking being that they'd see that and be assured that absolutely everything that could've been done had been done in an attempt to save the patient... --Kurt Shaped Box (talk) 03:06, 4 June 2009 (UTC)[reply]
That is correct; the ever-popular 'flatline' (beeeeeeeeeeeeeeeeeep) in televised medical dramas is not a shockable rhythm. (See asystole). Cardiac arrests can be divided into two broad groups: those which still include some mechanical action by the heart (ventricular tachycardia, ventricular fibrillation), and those which don't (asystole, pulseless electrical activity). The former are susceptible to defibrillation and have a much higher survival rate. The latter aren't shockable, and have a very poor prognosis.
Our article on cardiac arrest notes an overall survival rate of about 15% for in-hospital arrests. (Out of hospital rates are lower.) Patients with shockable rhythms fare about ten times better than those with asystole. TenOfAllTrades(talk) 13:48, 4 June 2009 (UTC)[reply]
There are many studies of near-drowning events; unfortunately I don't have access to any that would answer your question. 100% of victims will survive in the short term (otherwise it's a drowning and not a near drowning). Approximately 80% will survive beyond 24 hours, perhaps with some degree of neurological deficit. Many children with cyanosis or hypoxia following recovery resume breathing after clearing of the airway and one or two rescue breaths, and are conscious and alert immediately thereafter.
If the victim is in arrest the prognosis is much poorer: hypoxia has been prolonged and the brain is now ischemic. CPR alone will probably not result in a return of spontaneous circulation, let alone regaining consciousness; defibrillation and/or drugs are required. One thing to note, though, is that even professional healthcare providers have a poor success rate at finding a carotid pulse during a suspected arrest event. The following scenario could well happen: an apneic victim is removed from the water, and rescuers begin CPR but fail to note the presence of a pulse. The victim resumes breathing and shortly regains consciousness. Chest compressions were performed but were not required.—eric 15:37, 4 June 2009 (UTC)[reply]

To clarify, my question is not about the odds of survival after (near)drowning or heart attack or whatever, but about the TV cliché of sudden recovery of full consciousness. —Tamfang (talk) 19:02, 4 June 2009 (UTC)[reply]

Penile's Erectiom Angel

plz answeer, how can i measure my Penile's Erection angle ? —Preceding unsigned comment added by Greatfencer (talkcontribs) 01:11, 4 June 2009 (UTC)[reply]

I suppose a mirror might help. —Tamfang (talk) 02:33, 4 June 2009 (UTC)[reply]
Protractor? --Kurt Shaped Box (talk) 02:43, 4 June 2009 (UTC)[reply]
Of course, the angle of the dangle is inversely proportional to the heat of the beat... --Jayron32.talk.contribs 02:50, 4 June 2009 (UTC)[reply]
(EC)Use your goniometer. If you do not have one handy, the angle of the dangle has been said to be inversely proportionate to the heat of the meat, so a Meat thermometer might allow an accurate indirect measurement. Other anatomical surrogate measurements are mentioned in the work cited. Edison (talk) 02:54, 4 June 2009 (UTC)[reply]
The word "goniometer" does not mean "a thing to measure gonads," just as "episcotister" is not a device to test episcopals. Edison (talk) 04:54, 5 June 2009 (UTC)[reply]
(EC)How's about making an appointment for a visit to your local hospital's Penile tumescence lab? For some reason, the hospital seen on House M.D. would appear to have more than one. --Kurt Shaped Box (talk) 03:00, 4 June 2009 (UTC)[reply]
Employ the service of a fluffer who charges by the degree and read the invoice. Cuddlyable3 (talk) 09:53, 4 June 2009 (UTC)[reply]
<applause> —Tamfang (talk) 19:14, 4 June 2009 (UTC)[reply]
Drop a barometer from the top to determine the height, and use trigonometry. I am assuming that you know, or can measure, the length. -- Coneslayer (talk) 13:08, 4 June 2009 (UTC)[reply]
<applause> —Tamfang (talk) 19:14, 4 June 2009 (UTC)[reply]
Take the barometer to a fluffer and say "I'll give you this really nice barometer if you'll measure the angle". SteveBaker (talk) 23:45, 4 June 2009 (UTC)[reply]
I think that when dealing with an erection angel I wouldn't worry about her measurements. APL (talk) 03:49, 5 June 2009 (UTC)[reply]
Does an erection angel work for a sex goddess? --Jayron32.talk.contribs 04:21, 5 June 2009 (UTC)[reply]

<sigh> Okay. What you need is a protractor. That's a half-circle (usually plastic) with marks or scratches indicating the various angles. Stand upright (yeah, erect) and place the center of the flat edge of the protractor against the side of your erect penis, so that the protractor is straight up and down. It will probably be easier to take a measurement from your belly downwards rather than from your balls upwards. HTH. Matt Deres (talk) 23:58, 5 June 2009 (UTC)[reply]

global warming and human water retention

What year did the true science of global warming start? What was the Earth's population at that time? What percentage of the human body is made of water? If you had a 3 ft cube of ice and you melted it, how much water would that be in gallons? Paul fitts (talk) 06:34, 4 June 2009 (UTC)

The main reason for my questions: it doesn't really appear that sea levels are rising, so if ice caps and glaciers are "melting" and the water levels aren't really rising... wouldn't it stand to reason that the "melted" water has to go somewhere? Why not human water retention, to make up for the 3+ billion more of us we've created over the last 30 years???


Paul Fitts

The Tuvaluans beg to differ. According to this Reuters article, their whole country could disappear under the waves in 30-50 years. Another factor (which I was reminded of by An Inconvenient Truth) is that if the glaciers are in the water, their melting won't raise the water level. It's when the land-based ice melts that we have to worry. Clarityfiend (talk) 07:29, 4 June 2009 (UTC)[reply]
We have, of course, an article at Current sea level rise. Sea level is rising several mm per year. I'm too lazy to work in feet and gallons (which gallons, anyways?), but one cubic meter of ice has 1000 l and will melt into very roughly 900 l of water. --Stephan Schulz (talk) 07:36, 4 June 2009 (UTC)[reply]
Google can convert between units, put in something like "3 cubic feet in gallons". After that go outside and watch some grass carefully for a few hours. Did you see it grow? Dmcq (talk) 08:00, 4 June 2009 (UTC)[reply]
To put this in context - sea levels are rising a few millimeters each year - but each millimeter of rise represents 360,000,000,000 cubic meters of water. There are about 7 billion people - even if each of us was retaining a cubic meter of water (not even close!) we'd represent only about 0.02 millimeters of ocean depth. No - the reason the rate seems low is that it takes an awful lot of water to raise all of the oceans in the world by one millimeter. But while a few millimeters may not sound much - over 100 years, that's enough to drown quite a few coastal cities. Sadly, the evidence is that the rate of increase is going up year on year - so we could easily have a dozen or more meters of ocean level rise during the lifetimes of our children - or of the younger Ref.Desk denizens. SteveBaker (talk) 15:00, 4 June 2009 (UTC)[reply]
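The arithmetic above checks out; a small sketch (an ocean surface area of roughly 3.6×10⁸ km² is assumed):

```python
# How much sea level corresponds to every person "retaining" a cubic
# metre of water? (Spoiler: not much.)
OCEAN_AREA_M2 = 3.6e14        # ~3.6e8 km^2 of ocean surface
population = 7e9              # people
retained_per_person_m3 = 1.0  # deliberately generous over-estimate

volume_per_mm_rise = OCEAN_AREA_M2 * 1e-3   # 3.6e11 m^3 of water per mm of rise
retained_total = population * retained_per_person_m3
print(retained_total / volume_per_mm_rise)  # ~0.02 mm of sea level
```

Even on that generous assumption, all of humanity's "water retention" accounts for about two hundredths of a millimetre of ocean depth - nowhere near the observed few millimetres per year.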

Why are washers so called?

It was one of those idle, late night conversations in the tour van on the way home from a gig ... which led precisely nowhere. My theory is that the bigger examples are called penny washers because they are the size of pre-decimal pennies, but what about smaller varieties, and where does 'washer' come from? Any ideas please? Turbotechie (talk) 07:58, 4 June 2009 (UTC)[reply]

According to [19] (80% down the page) the origin of the term "washer" for that piece of metal is unknown, though it has had that meaning for at least 400 years and perhaps as many as 650 years. Dragons flight (talk) 08:23, 4 June 2009 (UTC)[reply]
Outside a hardware store hung the alarming sign "Nut screws washer and bolts". Cuddlyable3 (talk) 09:42, 4 June 2009 (UTC)[reply]
Not forgetting there was a launderette next door Mikenorton (talk) 10:01, 4 June 2009 (UTC)[reply]
You are right that penny washers are named for the old pennies, and often they are a similar size, but the term can actually be applied to any size washer. It is a washer with a disproportionately small centre-hole. SpinningSpark 16:34, 4 June 2009 (UTC)[reply]
Just for my own education, do you have a source I can read up on, SSpark? I would frame it the other way: since washers are designed for the inserted item, rather than "disproportionately small centre-hole" I would say "disproportionately large outside diameter". In my very limited engineering experience in a very specialized field, we termed these "standard" and "frictional" washers, where "frictional" were the big ones, designed to offer a larger weight-bearing surface, as opposed to the I/D / O/D sizes needed to transfer load from a bolt head to a typical clearance hole. Franamax (talk) 23:42, 4 June 2009 (UTC)[reply]

Do facial products work?

Is there any scientific evidence that any face creams, anti wrinkle creams, eye treatments etc are any better than just splashing water on your face or is it really just hype? Kirk Uk —Preceding unsigned comment added by 87.82.79.175 (talk) 09:26, 4 June 2009 (UTC)[reply]

All 3 are better than water at generating profit for someone. Medical eye treatments have to be certified as safe. Cuddlyable3 (talk) 09:46, 4 June 2009 (UTC)[reply]

Many of the anti-wrinkle creams are proven to give a temporary lift and do so by using proven science, similarly darkness-removal creams can be proven to remove the appearance of darkness by masking/covering it. You may notice that in adverts of these types the claims are always quite vague (to paraphrase Charlie Brooker)...terms such as "98% of respondents agree", "help reduce appearance of", "helps fight" are all very 'vague' and undetailed - throw in a few random science-sounding (merged with natural-sounding) words and you've got something that says nothing when reviewed by a legal team but can suggest 'proof' to the average consumer. 194.221.133.226 (talk) 10:22, 4 June 2009 (UTC)[reply]

In the UK, they have to say "improves the appearance of wrinkles" rather than "removes wrinkles" on the ads now. There was one company a couple of years back that got absolutely castigated for making completely false claims about the abilities of their product (Google for 'Boxwellox'). Not quite as bad as the toothpaste that claimed to be able to split water molecules, producing free oxygen for a deeper clean - but still... --Kurt Shaped Box (talk) 10:47, 4 June 2009 (UTC)[reply]

Sunscreen definitely helps against wrinkles. It has been shown scientifically that protecting your face against UV rays will make you look younger than you are. For example, faces of truck drivers who had been laterally exposed to sunlight have been analyzed, and the half exposed to sunlight looked older than the other half.--Mr.K. (talk) 10:57, 4 June 2009 (UTC)[reply]

It helps preventively against wrinkles, by the way. --Mr.K. (talk) 12:01, 4 June 2009 (UTC)[reply]
The big hype of these things in the UK ended quite a few years ago when the manufacturers were faced with either having to downsize their claims - or be treated as medical/pharmaceuticals. The fair trade people argued that if they ACTUALLY reduced wrinkles, then these creams must be penetrating the skin and acting on the tissues beneath - which would require them to be classified as drugs. If all they do is fill in the wrinkles - or change their color/reflectivity to make them temporarily less noticeable - then that's OK, it's just a cosmetic effect. I don't understand why that's not also the case in the USA. Certainly the claims they make are ridiculously impossible - and if they were possible, these cosmetics would certainly have to be seriously tested because they could have any number of dangerous side-effects. To the extent that they block sunlight, they might work...but their effects are essentially just changes in appearance. SteveBaker (talk) 14:52, 4 June 2009 (UTC)[reply]
Well, they also work as well as any other Moisturizer; that is by providing a barrier against evaporation, they cause the underlying skin to retain more moisture. Higher moisture content equals plumper epithelial cells, and plumper cells equals fewer wrinkles. Of course, a $5.00 bottle of any decent moisturizing lotion will do that; dropping $40.00 on a small 4 ounce tub of the same stuff mixed with a little make-up to cover over dark patches seems excessive to me... But then again, I'm not an aging woman trying to recapture my lost youth, so what do I know. --Jayron32.talk.contribs 17:34, 4 June 2009 (UTC)[reply]
Hey, wait, here's [20] a SCIENTIFIC assessment of a particular facial product sold by that 'well known high street chemist'. Whether you are impressed by the fact that 43% of the meagre sample thought their skin had improved is up to you. 23% of the placebo sample thought their skin had improved. Richard Avery (talk) 17:38, 4 June 2009 (UTC)[reply]
A study involving 50 people; I assume half received the placebo and half the cream? That means of 25 receiving the placebo, 6 liked it, and of 25 receiving the cream, 12 liked it? I wouldn't call such a small sample size "statistically significant". The error bars on a sample size of 25 would probably be bigger than the difference between the samples. Heck, I could flip a coin 25 times and get the same results. Additionally, as the sample did not compare the expensive treatment to a cheap one, only to a placebo, it only addresses the possibility that the expensive cream is better than nothing but not necessarily better than cheaper alternatives. What you have here is a clear example of How to Lie with Statistics. --Jayron32.talk.contribs 17:45, 4 June 2009 (UTC)[reply]
Ah! "Buy our amazing anti-wrinkle cream - it has a one in five chance of being better than smearing lard on your face." SteveBaker (talk) 19:51, 4 June 2009 (UTC)[reply]
But lard has the advantage of making you smell like pie crust and biscuits. Who wouldn't want to smell fresh-baked pie crust all day? --Jayron32.talk.contribs 20:44, 4 June 2009 (UTC)[reply]
If I did it right, that's a 99.9999999635% significance level. If I did it wrong, it's even higher. This should be a two-tailed T-test (try saying that ten times fast), right? There may be problems with that study, but sample size isn't one of them. — DanielLC 16:48, 5 June 2009 (UTC)[reply]
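For what it's worth, the disputed significance claim can be checked directly with Fisher's exact test on a 2x2 table, taking the assumed split of 12/25 responders with the cream versus 6/25 with the placebo (that per-arm split is an assumption from this thread; the source only reports percentages and a total of 50):

```python
# Fisher's exact test (two-sided) for an assumed 12/25 vs 6/25 split.
# Pure-stdlib sketch; scipy.stats.fisher_exact would do the same job.
from math import comb

N, K, n = 50, 18, 25   # subjects, total responders, cream-arm size
k_obs = 12             # assumed responders in the cream arm

def hypergeom_pmf(k):
    """Probability of k cream-arm responders under the null hypothesis."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

p_obs = hypergeom_pmf(k_obs)
# Two-sided p-value: total probability of all tables at least as
# extreme (i.e. no more likely) than the observed one.
p_value = sum(hypergeom_pmf(k)
              for k in range(max(0, n + K - N), min(n, K) + 1)
              if hypergeom_pmf(k) <= p_obs + 1e-12)

print(f"two-sided p = {p_value:.3f}")
```

With those numbers the two-sided p-value comes out well above the conventional 0.05 threshold, so sample size very much is a problem here; a two-tailed t-test is also the wrong tool for comparing two proportions.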
According to the Mayo Clinic, "Research suggests that some wrinkle creams contain ingredients that may improve wrinkles. But many of these ingredients haven't undergone scientific research to prove this benefit. If you're looking for a face-lift in a bottle, you probably won't find it in over-the-counter (nonprescription) wrinkle creams. But they may slightly improve the appearance of your skin, depending on how long you use the product and the amount and type of the active ingredient in the wrinkle cream."[21] A Quest For Knowledge (talk) 17:58, 4 June 2009 (UTC)[reply]
(EC)::There is Absorption (skin). If any of the products achieve that, the questions become: Do you want the substance that gets absorbed in your system? and Does your body do anything with the absorbed substance in the layers of skin that is beneficial or does the absorbed substance just accumulate or get transported away? There are some things like Botox which is a neurotoxin and Silicone or animal derived Collagen that can be injected into the skin. Since they work "internally" they get treated as medical/pharmaceutical. Nevertheless they have come under criticism because the two questions above didn't come up with consistent answers when asked by different reviewers. The cosmetics industry is trying to stay away from the cost of having to get their products subjected to licensing for pharmaceuticals (see Cosmeceutical). So they have to come up with things that do not "affect the structure or function of the human body". The reputable ones also try to make sure that their products do not contain known toxins or substances that cause harm in the short term (long term use sometimes reveals things that didn't show up in testing.) Probably more driven by their desire to avoid costly product liability suits than anything else. Adding Surfactants like soap to water makes it a more efficient detergent. It also removes the protective layer of lipids covering the skin and knocks the pH of the acid mantle out of whack. To counterbalance that you can apply oils to your skin after washing. Since using pure oils would give you an unpleasant and impractical film of oil on your skin what we use are emulsions. (Also see Cold cream). Since those are prone to Rancidification and microbial growth you'll not only find emulsifiers and stabilizers but also preservatives in those. (Sometimes cleverly marketed as a new vitamin ingredient) In a further step the cosmetics industry then created Moisturizers. 
Splashing water on your face would not have the same effect, rather the opposite actually. It would rinse away some more oils exposing cells to evaporation and upset the hydrostatic balance causing further drying. It works through constricting fine blood vessels in the skin. AFAIK interest in reducing wrinkles has only gained momentum throughout the past 70 years or so. That along with our increasing knowledge of biochemical processes in the human body has led to many theories being put forward and rebutted in that respect. The focus used to be the moisture contents of the stratum corneum. Now things like electrolyte balance, Free radicals, cell aging and nerve stimulation, among others, are under study. Some of those studies are either financed by the cosmetics industry through grants, or done in their labs. Study results from company owned labs are rarely published. (If one of their labs found the holy grail in anti-wrinkle treatment I bet they'd gladly spin off a subsidiary and go through the pharmaceutical trials.;-) Facial products work in the sense that they don't leave greasy marks on your clothes or things you touch, don't spoil fast, smell pleasant and supply lipids to your skin. A cooling effect while the continuous phase of the emulsion evaporates is also well established. Product differentiation has companies add various ingredients that at best can provide better product performance and at least not cause any harm. You'd have to research each labeled ingredient separately to get an idea. It is very likely though that lots of substances aren't listed, or that research studies are under lock and key at the company. 71.236.26.74 (talk) 00:04, 5 June 2009 (UTC)[reply]

What happens to toxic sewage sludge?

About 40 - 60% (depending on whether you are in the EU or USA) of treated sewage sludge (biosolids) is reused as agricultural fertilizer. From what I can see on the article on biosolids, the main reason some biosolids cannot be used as fertilizer is that they have a heavy metal and toxic substance content that is too high. Is this true? If so, what happens to this waste? Is it incinerated, landfilled etc? —Preceding unsigned comment added by 157.203.42.175 (talk) 13:23, 4 June 2009 (UTC)[reply]

According to this parliamentary note [22], in the UK 62% is applied to agricultural land, 19% is incinerated, 11% is being used in land reclamation with the remainder going to landfill or composting. It also discusses the use of sludge to generate energy from biogas through anaerobic digestion. The report does not mention toxicity as a problem, apart from the issue of Endocrine-Disrupting Chemicals. Mikenorton (talk) 14:11, 4 June 2009 (UTC)[reply]
Sludge has some information. 71.236.26.74 (talk) 04:10, 5 June 2009 (UTC)[reply]

Frozen peas float when cooked

When you put frozen peas in a saucepan of cold water they sink to the bottom. When the water is heated up the peas float to the surface. But as ice is lighter than water, and frozen peas must contain some ice, I would expect the opposite to happen, i.e. they would start off floating then sink when heated. What is going on here? Lonegroover (talk) 14:45, 4 June 2009 (UTC)[reply]

Peas contain lots of stuff besides just water, so maybe enough to overcome the buoyancy from decrease in density of the water being frozen. Do the peas remain the same size, or do they expand when they thaw/heat? Does this happen only if the water gets hot, or can you reproduce it with room-temp water (to exclude nucleated steam giving the lift)? DMacks (talk) 14:53, 4 June 2009 (UTC)[reply]
Don't forget that water is densest at about 4C. My guess (and it really is just speculation) would be the following:
Water, in a saucepan at about 10 degrees C, combined with peas at about -18C. Peas initially float in cold water before sinking so clearly the water is denser for a short period. Presumably due to the low temperature of the peas? I would assume that the peas rapidly warm due to their high surface area to volume ratio. As such, the water within the peas would be denser than the water in the pan. Peas have other constituent components, but I would suggest the density of the water in the peas outweighs any (lower) density from the solid matter of the pea. The overall density of the pea should approach the point where they are denser than the surrounding water - this shouldn't be too difficult, the water will be getting colder from the peas but also warmer from the heat input.
I would imagine that the peas remain denser (colder) than the surrounding water for a while - since the source of heat would warm the water first, then the peas. When the water boils, the peas could reach a temperature equilibrium with the water (since the water cannot get hotter, regardless of heat input, without turning to steam), or at least become warm enough such that the weighted average of the density of the pea (i.e. the solid matter and water contained within the pea) could overcome the density of the hot/boiling water. —Preceding unsigned comment added by 157.203.42.175 (talk) 15:07, 4 June 2009 (UTC)[reply]
The obvious answer is thermal expansion of the peas. Assume the peas have a density near water, but slightly more dense. This is because they are mostly water, but with some solid materials. As the peas heat, their radius expands by some small amount - but their volume increases according to (radius3), so their density will decrease inversely to that. Although water does change its density slightly, it's generally a good assumption that it is an incompressible fluid. Its density should not change significantly between room temperature and near-boiling. (Our article gives about a 5% change, but I don't know if I trust the source data). But, the water inside the peas will also expand - and so the only relevant volume change is the thermal expansion of the solid materials in the pea. Nimur (talk) 14:44, 5 June 2009 (UTC)[reply]
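On the disputed 5% figure: standard handbook densities for liquid water (assumed values, not from the thread) put the drop between the 4 °C maximum and 100 °C at roughly 4%:

```python
# Fractional density change of liquid water between its maximum
# (near 4 C) and the boiling point, using handbook values (assumed).
RHO_4C = 999.97     # kg/m^3 at ~4 degrees C
RHO_100C = 958.4    # kg/m^3 at 100 degrees C

change = (RHO_4C - RHO_100C) / RHO_4C
print(f"density drop from 4 C to 100 C: {change:.1%}")
```

About 4% is not negligible for something whose overall density is, as suggested above, very close to that of water.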

What happens if I drink five-year-old soda?

Not fridged, the 12-pack wasn't even opened. Just curious. I think I'll throw it out anyway. 67.243.7.41 (talk) 15:34, 4 June 2009 (UTC)[reply]

OR having tried it once. It won't be toxic, but it may be unpalatable as the bubbles will be gone and it may be a bit sludgy on the bottom of the can. 65.121.141.34 (talk) 16:23, 4 June 2009 (UTC)[reply]
This is not medical advice BUT: you might gain super powers. It's a possibility. Consider your future life as Soda Man, and whether or not you are willing to take on that responsibility. --98.217.14.211 (talk) 16:55, 4 June 2009 (UTC)[reply]
Throwing it away is probably wise. I doubt there will be anything harmful about it, but it probably won't taste very nice (depending on how it was stored). It is possible the seal has been broken somehow which might have allowed something harmful to get in, so probably not worth the risk. --Tango (talk) 18:38, 4 June 2009 (UTC)[reply]
Sugar soda may retain its flavor longer than artificially sweetened soda. I once had some old pop sweetened with Nutrasweet and all sweetness was gone. It tasted like unsweetened Coke with pineapple juice added. Heat speeds the breakdown of Nutrasweet. There is always a possibility that over time there could be greater leaching of metal or plastic from the can into the drink. If it is sealed, how would any carbon dioxide escape to make it flat? Edison (talk) 18:42, 4 June 2009 (UTC)[reply]
Major OR here, but I had some cans of orange soda that actually started leaking through the bottom of the cans after eight or nine years. They were unopened and kept in a closet. The soda actually caused corrosion and leaked out. cheers, 10draftsdeep (talk) 19:38, 4 June 2009 (UTC)[reply]
If it's in a can, the soda may have eroded away enough of the can to impart a metallic taste. Don't know if it's toxic though. Livewireo (talk) 21:22, 4 June 2009 (UTC)[reply]
Most modern cans have a coating of plastic on the inside I do believe. If the (presumably) citric acid in the orange soda and/or the carbonic acid in any soda has gotten through to the metal of the outside of the can, it's likely also dissolved the plastic coating on the way. This would presumably include any phthalate and/or bisphenol components of the coating plastic. Luckily, most plastic compositions are considered trade secrets, and there is intense dispute over the bio-effects of various plastic additives (softeners, etc.) which may or may not be in the plastic anyway (trade secret!) - so we can just dismiss the whole thing as "no definite evidence". Franamax (talk) 23:33, 4 June 2009 (UTC)[reply]
Check this out: Benzene in soft drinks, then check the label of your soda can. It won't kill you (immediately) but it's not healthy either. Also "a soft drink such as a cola has a pH of 2.7-3.0 compared to battery acid which is 1.0". That and the other ingredients, plus the can should make for an interesting environment for many chemical reactions and interesting compounds to form over time. 71.236.26.74 (talk) 00:26, 5 June 2009 (UTC)[reply]
Of course, you do realize that a pH of 3.0 is 1% as acidic as a pH of 1.0. Thus, soda has about 1% of the acid concentration of battery acid. Of course, the liquid in your stomach is pH of around 2 or so, which means that the soda is only 10% as acidic as your own gastric juices, or if you prefer, your gastric juices are 10 times as acidic as soda. Welcome to the wonderful world of logarithms. --Jayron32.talk.contribs 04:17, 5 June 2009 (UTC)[reply]
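The ratios follow directly from the definition pH = -log10[H+]; a minimal sketch:

```python
# Relative acidity from pH values: [H+] = 10**(-pH), so each pH unit
# is a factor of ten in hydrogen-ion concentration.
def h_conc(ph):
    return 10.0 ** (-ph)

soda, battery_acid, stomach = 3.0, 1.0, 2.0

soda_vs_battery = h_conc(soda) / h_conc(battery_acid)   # ~0.01
soda_vs_stomach = h_conc(soda) / h_conc(stomach)        # ~0.1

print(f"soda vs battery acid:  {soda_vs_battery:.0%}")
print(f"soda vs gastric juice: {soda_vs_stomach:.0%}")
```

So pH 3 soda carries 1% of the hydrogen-ion concentration of pH 1 battery acid, and 10% of that of pH 2 gastric juice, exactly as stated.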
That's one of the reasons I put that quote from an ooold paper in quotation marks. Your stomach is however a pretty good example of an environment that has interesting chemical reactions happening. 71.236.26.74 (talk) 06:00, 5 June 2009 (UTC)[reply]
Note that you generally don't want to drink any non-diet soda with a broken seal that's older than a few days - once it gets exposed to the environment I would expect the bacteria to come in and party on the sugar. A diet soda on the other hand might be fine for a longer period of time. I'm not sure. Dcoetzee 06:39, 5 June 2009 (UTC)[reply]
Nope. Aspartame is less stable than sugar and gets broken down after a while. The interesting thing is what reaction products you're going to get. 71.236.26.74 (talk) 07:28, 5 June 2009 (UTC)[reply]

Capturing a warm bath's heat

It's winter in the southern hemisphere and a nice way to heat up is by taking a warm bath. I couldn't help thinking though of all the energy that gets lost when the warm water flows out the drain after a bath. Given that:

  • a kilocalorie is the amount of energy it takes to heat a liter of water by 1 degree Celsius
  • lets assume my bathtub holds 150 liters of water
  • it's about 17 degrees in my apartment, the water after I'm done bathing is 40 degrees Celsius, a 23 degree difference

I asked Google: "(150 * 23) * kilocalories in kilowatt hours" and got "(150 * 23) * kilocalories = 4.00966667 kilowatt hours"

That's 4 kilowatt hours of energy down the drain! That's like a 1000 watt heater staying on for 4 hours. So this makes me ask, does it make sense to keep the water in the bathtub until it has cooled down so that my place will heat up a bit? One concern is evaporation that might cause the humidity to rise, so that the energy is kept as latent heat rather than sensible heat. In that case, is there something simple one can do to prevent evaporation?

196.210.200.167 (talk) 16:14, 4 June 2009 (UTC) Eon[reply]
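The question's unit conversion checks out; a quick sketch using 1 kcal = 4184 J and 1 kWh = 3.6 MJ:

```python
# Heat left in a cooling bath: 150 L dropping from 40 C to a 17 C room,
# at 1 kcal per litre per degree C (the definition used in the question).
LITRES = 150
DELTA_T = 40 - 17               # temperature drop in degrees C
JOULES_PER_KCAL = 4184
JOULES_PER_KWH = 3.6e6

energy_kcal = LITRES * DELTA_T                      # 3450 kcal
energy_kwh = energy_kcal * JOULES_PER_KCAL / JOULES_PER_KWH

print(f"{energy_kcal} kcal = {energy_kwh:.2f} kWh")
```

That is almost exactly the 4 kWh Google reported.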

Any added humidity will make you feel warmer, which is just as good, isn't it? --Sean 18:25, 4 June 2009 (UTC)[reply]
I'm not sure. The formulas are given in Fahrenheit, but I suspect at 17 degrees a higher humidity might even make you feel colder. Seems counter intuitive though that just leaving the water in the bathtub can make such a huge difference. 196.210.200.167 (talk) 19:39, 4 June 2009 (UTC) Eon[reply]
A lid on the bath would prevent evaporation. --Tango (talk) 18:36, 4 June 2009 (UTC)[reply]
I don't see why leaving the water in the bath until it's cooled right down wouldn't work. The humidity might be a problem for your bathroom - but you'll certainly have a slightly lower heating bill if you do this. It's not a bad idea actually. SteveBaker (talk) 19:44, 4 June 2009 (UTC)[reply]
According to my electric bill, 4kWh is worth almost 35c, though I would imagine that it would not apply the heat evenly throughout your dwelling (unless you have incredible outer wall insulation)65.121.141.34 (talk) 19:53, 4 June 2009 (UTC)[reply]
Don't have the time to google for it, but there is a product that recovers heat from water and air leaving the house through bathroom vents and plumbing. Can't recall what it was called. (I think "this old house" had one in one of their shows)71.236.26.74 (talk) 00:31, 5 June 2009 (UTC)o.k. found a couple of pages [23] [24], Matt's comment here [25], [26] Ugh, should have known and we do have a page Waste Heat Recovery Unit - 71.236.26.74 (talk) 03:59, 5 June 2009 (UTC)[reply]
My contingency plan for what to do in the event of a prolonged power failure in winter includes leaving the hot (and a bit of cold) water trickling into bathtubs and sinks, to keep the pipes and drains from freezing and to heat the house from the water heater. This assumes the overflow can remove the water with the drain plugged. A diverter to direct the hot water to the far end of the tub might be useful. One would get tired of a batch mode of filling the tub and then draining it after an hour, repeated 24x per day. Fireplace and kitchen range (hob) or oven could also be used to advantage, but with suitable care for carbon monoxide. Edison (talk) 04:49, 5 June 2009 (UTC)[reply]

gravitational equations

Is/are the equation/equations for gravity at the center of a Black Hole the same as the equation/equations for gravity in unoccupied (empty) space? ---- Taxa (talk) 17:20, 4 June 2009 (UTC)[reply]

At the very centre of a black hole, the singularity, the laws of physics break down (ie. we don't really know what happens), so there are no equations. In an appropriate coordinate system (eg. Kruskal–Szekeres coordinates) you can use the same equations to describe everywhere except the singularity, though. --Tango (talk) 18:47, 4 June 2009 (UTC)[reply]
The singularity at the very center of a black hole does things to the equations that physics depends on that are essentially the same as dividing by zero on your pocket calculator. There is no meaningful answer. But a billionth of a trillionth of a gnat's eyebrow from the center, the laws of physics should be pretty well-behaved. The equations are exactly the same - but because these are rather extreme circumstances, you have to use the full relativistic forms of these equations, not the 'low speed/low gravity' approximations that we all learned in high school. SteveBaker (talk) 19:41, 4 June 2009 (UTC)[reply]
Eyebrows?
What is the length of a gnat's eyebrow in Planck lengths? Does anyone know? --Tango (talk) 02:08, 5 June 2009 (UTC)[reply]
Well, the photo at right claims to be 100x magnification. But (...pet hate...) I'm viewing this simultaneously on a 14" monitor and an 81" plasma screen...so it's a bit hard to measure exactly! SteveBaker (talk) 04:04, 5 June 2009 (UTC)[reply]
It's a featured picture as well, I'm surprised nobody caught that... --Tango (talk) 12:29, 5 June 2009 (UTC)[reply]

DOES THE GRAVITATIONAL ENERGY OF EARTH REDUCE OVER TIME?

Earth keeps the moon in its gravity, spending its energy. From where does Earth gain this energy? If Earth does not gain energy, then is its gravitational energy reducing over time? Surabhi12 (talk) 18:06, 4 June 2009 (UTC)[reply]

The earth does not spend energy in order to keep its moon. Dauto (talk) 18:19, 4 June 2009 (UTC)[reply]
Well, not in classical physics. But in relativistic physics, the circling moon will cause gravitational waves, which will carry away some of the potential energy of the Earth-Moon system. The effect is very small, and, at the moment, entirely overshadowed by the tidal transfer of rotational momentum from the Earth to the Moon. See Gravitational wave#Power_radiated_by_Orbiting_Bodies. --Stephan Schulz (talk) 18:26, 4 June 2009 (UTC)[reply]
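For scale: the circular-orbit quadrupole formula from that section, P = (32/5) G^4 (m1 m2)^2 (m1 + m2) / (c^5 r^5), gives only a few microwatts for the Earth-Moon system. A sketch with standard textbook values, plugged in purely as an illustration:

```python
# Gravitational-wave power radiated by the Earth-Moon system, from the
# circular-orbit quadrupole formula. All values are standard textbook
# figures, used here only as an illustration of the order of magnitude.
G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
m1 = 5.972e24     # Earth mass, kg
m2 = 7.342e22     # Moon mass, kg
r = 3.844e8       # mean Earth-Moon distance, m

P = (32 / 5) * G**4 / c**5 * (m1 * m2) ** 2 * (m1 + m2) / r**5
print(f"radiated power: {P:.1e} W")   # on the order of microwatts
```

A few microwatts for the whole Earth-Moon system, which is why the effect is entirely swamped by tidal dissipation.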
The moon is falling towards the Earth, but luckily it keeps missing, just like cannonball "C" shown in the picture. It doesn't take any energy for this to happen. See orbit. --Sean 18:31, 4 June 2009 (UTC)[reply]
And the Earth is falling towards the moon, but luckily the moon keeps moving out of the way ;) Gandalf61 (talk) 12:30, 5 June 2009 (UTC)[reply]
Somebody owes me a dollar. (I've decided that I'm charging the universe a dollar every time someone confuses a 'force' (gravity in this case) with 'energy' - I plan to earn enough money from this confusion to replenish my poor 401K). The force and energy are very different things. When you stick a fridge magnet onto your fridge, the magnet exerts a force on the metal. But so long as the magnet doesn't move - it doesn't take any energy whatever for it to just hang there...magnets don't "run down". It's the same with the moon. The gravity pull between earth and moon is just a force. So long as the two bodies don't get closer or further apart - there is no energy transaction involved. Forces don't "run down" - they just are. If you hang something from the ceiling with a rope - the rope exerts a force...the rope doesn't "run down". Now, if something were to pull the moon further from the earth - that 'something' would have to expend some energy to do it...if the moon (for some reason) were to fall closer to the earth - then it would actually gain energy (kinetic energy initially - then one hell of a lot of heat energy when it kersplatted into the middle of the pacific ocean!). SteveBaker (talk) 19:35, 4 June 2009 (UTC)[reply]
Would another explanation be that the Earth is neither exerting a force nor expending energy, it is simply warping space-time such that the shortest path for the Moon to travel through the geodesic happens to be a circle? Just asking... Franamax (talk) 20:07, 4 June 2009 (UTC)[reply]
Now now, don't confuse a good discussion with a simple, elegant and correct answer. No one needs that now! --Jayron32.talk.contribs 20:34, 4 June 2009 (UTC)[reply]
Yes, but relativity gives people headaches! That, and the concepts Steve is talking about apply to all forces, even ones that can't be easily dismissed as mere geometry. --Tango (talk)
I'll give you a dollar. Remind me if see you - bring change! --Tango (talk) 02:07, 5 June 2009 (UTC)[reply]
Oooh! Thank-you! Actually - with the number of people making this error - I'm pretty sure I can still turn a profit at one Zimbabwean dollar per confused OP.  :-) SteveBaker (talk) 03:58, 5 June 2009 (UTC)[reply]
Single Zimbabwean dollars are now so rare that, perversely, they are probably valuable to collectors. SpinningSpark 13:48, 5 June 2009 (UTC)[reply]
and another piece of trivia (I can't leave this one alone): according to this, the $US is now worth more than a mole of the original $ZW. That must be a first for a currency. SpinningSpark 14:17, 5 June 2009 (UTC)[reply]

problem of race

Imagine:

A car travels 10km in an hour
A bus travels 20km in an hour
Both have a race.
The car is ahead of the bus at point A.
By the time the bus moves to point A, the car must have moved a little ahead, say to point b.
By the time the bus moves to point b, the car must have moved a little ahead again, say to point c.
By the time the bus moves to point c, the car must have moved a little ahead, say to point d.
This continues .........
Thus the car has to become the winner.
IS THIS POSSIBLE?  —Preceding unsigned comment added by Surabhi12 (talkcontribs) 18:30, 4 June 2009 (UTC)[reply] 
See Zeno's paradoxes. Short answer - it takes an infinite number of steps for the bus to overtake the car but because each of those steps takes sufficiently less time than the one before the total amount of time required to overtake is finite. See convergent series for the maths behind that. --Tango (talk) 18:35, 4 June 2009 (UTC)[reply]
Another way to look at this is that suppose the car starts off with a lead of (say) 10km. We know that using 'sensible' math, that the bus will travel at a distance D in time T=D/20 hours and the car at time T=(D-10)/10 hours. When the bus overtakes the car, they are at the same distance at the same time - so we can solve the simultaneous equations and calculate T as 1 hour. The bus catches the car after 1 hour - then zooms right past it. No problem, no controversy, no paradox.
But the crazy Zeno's paradox way says: The time it takes the bus to reach A is 1/2 hour. By that time, the car has travelled 5km to point B. 15 minutes later, the bus reaches B and the car has gone on another 2.5km to C. 7.5 minutes later, the bus reaches C...so Zeno tells us where the bus and the car are after 30+15+7.5+3.75+1.875+... minutes. Mathematically, that infinite series adds up to 59.999... minutes. And the distance that the car and bus have travelled in that time is 19.999...km from the start. Well, that's all very interesting and exciting - but by refusing to ever allow the 'victim' of the paradox to just go ahead and ask where the vehicles are after a longer amount of time, he arbitrarily forces us to look only at times before the two vehicles actually meet. If you limit your calculations to only times before the vehicles meet - you're obviously never going to find the time when they do actually meet. It's not a paradox - it's a dumb way of stopping the person from answering a ridiculously simple question!
But why does Zeno insist on calculating all of these intermediate positions and stubbornly refuse to calculate where they are after exactly one hour? Because he's some stupid philosopher trying to make a name for himself by inventing a "paradox". This is why philosophers are a waste of quarks. We have a perfectly simple, completely understood problem with a VERY simple, non-paradoxical conclusion...but NO...the dumb-as-a-bag-of-hammers philosopher has to insist on never calculating the answer but instead answering an infinite series of questions that we don't need to know the answer to.
It's really no different to me saying "What is 2+2?" - but instead of just going ahead and counting it out on your fingers, I arbitrarily insist that you first calculate a totally unrelated sum: 1+0.5+0.25+0.125+... which (if you try to do it the hard way) will take you an infinite amount of time - and thereby cunningly prevent you from calculating 2+2. Why the heck would you ever want to do it like that? It's obviously a stupid and unnecessary way to answer my question. So - please treat anything any philosopher says much as you would a comedian. Amusing, possibly mildly entertaining - but in no way relating to reality. SteveBaker (talk) 19:22, 4 June 2009 (UTC)[reply]
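Numerically, the series behaves exactly as described above: the partial sums of 30 + 15 + 7.5 + ... minutes converge to the same one hour that the simultaneous equations give directly. A minimal sketch:

```python
# Partial sums of Zeno's series for the bus catching the car:
# 30 + 15 + 7.5 + ... minutes, each step half the previous one.
total = 0.0
step = 30.0                     # minutes for the bus to reach point A
for _ in range(50):
    total += step
    step /= 2.0

print(f"after 50 steps: {total} minutes")      # creeps up on 60

# Closed form of the geometric series, a / (1 - r) with a=30, r=1/2:
print(f"limit: {30.0 / (1.0 - 0.5)} minutes")  # 60.0
```

An infinite number of ever-shorter steps, a finite total time: that is all the "paradox" amounts to.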
Well that and your number series will only equal 2 when you are done. 65.121.141.34 (talk) 19:49, 4 June 2009 (UTC)[reply]
I disagree - I think it is instructive to consider the "paradox" and realize that it is solved by convergence. The obvious logical solution allows the student to intuitively grasp the otherwise abstract idea that adding an infinite number of elements can still lead to a finite solution. This does have practical consequences in the study of advanced science and physics - it's an effective way to consider the parity between discrete and continuous number representations, or quantized vs. continuous physical models. Nimur (talk) 23:19, 4 June 2009 (UTC)[reply]
Perhaps the solution is to tell the philosopher you think he's wrong and ask him to write out the whole series just to be sure... Franamax (talk) 20:10, 4 June 2009 (UTC)[reply]
It is a little unfair to say philosophers are a waste of quarks in reference to ancient Greek philosophers. I agree modern ones are a waste of quarks, but in ancient times "philosophy" was a far broader subject since other things didn't exist yet. It included mathematics and what would now fall under science (but I'm not willing to call what they did science, since it didn't follow the scientific method). It's easy for us now to laugh at them not understanding these simple concepts, but they are only simple to us because someone else has explained them to us. The ancient Greeks (and philosophers for quite a while after them) had some very odd ideas about infinity (as do most people today who don't have formal training in mathematics). The reason Zeno's problem seemed paradoxical to them was because they didn't think it was possible to do an infinite number of things (the idea that they got easier and easier so the total amount of resources required remained finite wasn't known to them). You may find Supertask interesting. --Tango (talk) 02:03, 5 June 2009 (UTC)[reply]
I am reminded of a quote attributed to Johannes Kepler, though it may be apocryphal. I will paraphrase. Basically, Kepler was lecturing on the organization of the solar system, and one of his students asked something like: "Weren't people 100 years ago rather stupid? I mean, they thought the earth was the center of the Universe, and that everything revolved around us. Doesn't that make those people rather unintelligent?" To which Kepler replied something like "If they were right, and the sun DID revolve around the Earth, would the sky look any different?", which makes the point that it's easy, given the sum of all human knowledge of today, to ridicule the past thinkers as somehow stupider than us; but we have the benefit of the incremental progress we have made towards understanding the universe, a process they themselves were a part of. One could just as easily have commented that Newton was an idiot for not taking into account relativistic effects of near-light-speed travel, or that proponents of phlogiston theory were stupid when they thought heat was a substance. 100 years from now people will think we are stupid because some "scientific fact" we are certain is right turns out to be inaccurate in some way. Zeno's ideas look stupid because he didn't have the 2000 years of mathematical thought already spelled out when he devised his paradoxes. However, the basic concept that infinity is a special idea that DOES need its own set of rules to deal with was an unheard-of idea in Zeno's time. The fact that anyone was thinking about the infinite several hundreds of years B.C. is pretty prescient if you ask me. --Jayron32.talk.contribs 02:52, 5 June 2009 (UTC)[reply]
I'd just like to point out that 59.999...=60. You could have just said the infinite series just adds up to an hour. Also, this paradox was formulated before infinite series (it isn't just what you get when you add all the values together), so you could only get something below 60, but arbitrarily close to it. — DanielLC 16:16, 5 June 2009 (UTC)[reply]

Labour and multiple births

Given that twins, and children of higher-number births, are individuals, do mothers ever experience a sort of delayed labor, in which the second child is born at a vastly different time or even date than their sibling? For example, Paul and John are twins, Paul is born first, and John is born 2 days later, after the mother reentered labor.--HoneymaneHeghlu meH QaQ jajvam 19:31, 4 June 2009 (UTC)[reply]

When someone finds a good source on the range, with some statistics, please add the data to the Multiple birth and Twin articles, which should discuss this, but currently lack any data (or even a mention). Tempshill (talk) 19:53, 4 June 2009 (UTC)[reply]
If anyone wants to ask at the WP:Library for the fulltexts, here are two sources: "Management of Delayed-Interval Delivery in Multiple Gestations", S. Cristinelli et al., Fetal Diagnosis & Therapy, Jul/Aug 2005, Vol. 20, Issue 4, pp.285-290. (AN 18247478) [27] and "Delayed delivery of multiple gestations: Maternal and neonatal outcomes", M. Kalchbrenner et al., American Journal of Obstetrics & Gynecology, 179(5), pp1145-1149, Nov. 1998. [28]. According to the abstracts: in 6 cases studied, the delivery interval is 2-93 (median 7) days; the literature (148 cases from 1979-2001) supports 2-153 (median 31) days; and for 7 cases studied, a difference of 32.6 days. I'm rather startled at some of these numbers for delay in delivery, obviously some of the first-borns were very premature. However the absolute numbers indicate that this amount of delay is very rare. As for minor delays (<2 days), I didn't find much. I've fired off a query to my sister. Franamax (talk) 20:39, 4 June 2009 (UTC)[reply]
Oohh, jackpot! "Twin-to-twin delivery time interval:...", W. Stein et al., Acta Obstetricia et Gynecologica, 2008; 87: 346-353. [29] 4,110 "normal" deliveries, mean interval = 13.5 min (SD 17.1); 75.8% within 15 min, 16.4% within 16-30 min, 4.3% within 31-45 min, 1.7% within 46-60 min, 1.8% > 60 min. Now those are statistics! :) If anyone wants to try writing this up, I have some of the fulltexts and we can put up one of those "Refdesk significantly improved an article" thingies! Franamax (talk) 21:05, 4 June 2009 (UTC)[reply]
Those statistics with a median of several days are meaningless - the sample has clearly been chosen to be made up of cases with a large delay (since that is what they were studying). The rest of their results may well be interesting, but the length of the delays isn't since they were specially chosen. (The medians are probably only given in order to describe the sample chosen, not to imply anything about delayed births.) --Tango (talk) 01:54, 5 June 2009 (UTC)[reply]
Franamax, this is outstanding! I added a new "Delivery interval" section to Twin, let me know what you think. The section needs the discussion of the extremely long intervals and could use discussion (beyond the stats) of the very unusual situation where labor begins and then ends, and then a month later begins again. Thanks! Tempshill (talk) 05:47, 5 June 2009 (UTC)[reply]
Looks good, I'll work a little more on those anomalous cases where labour ceases and the delay runs to several days. I also had found this in Google Books (Multiple Pregnancy, Blickstein, Keith & Keith), which gets right into the details. Another successful RefDesk collaboration! :) Franamax (talk) 07:10, 5 June 2009 (UTC)[reply]
Thanks guys!--HoneymaneHeghlu meH QaQ jajvam 00:58, 6 June 2009 (UTC)[reply]

helium balloon

If one were to take a balloon that was made of a material that could stretch an infinite amount while maintaining its structure, and filled it with enough helium for it to rise quickly, would it make it into space (assuming temperature also does not affect the structure of the helium compartment)? It is going far slower than the speed needed for orbit, but helium does escape from our atmosphere.65.121.141.34 (talk) 19:58, 4 June 2009 (UTC)[reply]

If we're going to invent materials which cannot possibly exist, why not just cast a magic spell on the balloon and teleport it to space? But seriously, the balloon would rise to a point where its density matched that of the atmosphere. For your theoretical infinitely stretchy balloon, since the volume could be infinitely large, the density could be infinitely small, which means the balloon would keep drifting up. I just have a hard time picturing a helium balloon being infinitely stretchy. --Jayron32.talk.contribs 20:41, 4 June 2009 (UTC)[reply]
As I understand the way you've stated the material, the answer is maybe space, and maybe escape. It's important to remember that escape velocity applies only to a single initial impulse of thrust. Anything that continues thrusting as it rises need not reach the escape velocity defined at the surface in order to escape. Anyway, the balloon matches the pressure of the atmosphere and will continue to rise (and expand) until it reaches an altitude where its density is also the same as the surrounding atmosphere. This may well be higher than 100 km, which is the usual qualification for reaching "space". This further may be high enough for the solar wind to thrust it away from Earth, thus maybe escaping. In the absence of solar wind, though (and possibly in spite of it), the balloon reaches stable equilibrium -- were it to rise, it would be more dense than the surrounding atmosphere and thus sink. Were it to sink, it would be less dense and thus rise. Since it's still in the atmosphere and its movements are dictated by the atmosphere, I wouldn't characterize this as an orbit, and I wouldn't expect the balloon to be in free fall. — Lomn 20:50, 4 June 2009 (UTC)[reply]
I was going to mention how the balloon material would have a density greater than that of helium, thus we could calculate the weight of the balloon at Earth surface and add in the weight of helium it encloses and make a definite calculation of the equilibrium point where the overall density of the balloon-helium system was equal to that of the atmosphere. But of course, since it's infinitely stretchy, we can fill the balloon with all the helium on Earth before we release it, so yes, the average density approaches that of helium alone. Without some constraints, this really can't be calculated... Franamax (talk) 22:23, 4 June 2009 (UTC)[reply]
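The point about the average density approaching that of helium alone can be made quantitative under some blunt assumptions (mine, not from the thread): an isothermal atmosphere, and helium held at ambient pressure and temperature by the infinitely stretchy envelope. Then the helium's density is a fixed fraction of the local air density (the molar-mass ratio), so the ratio of the system's density to the air's is the same at every altitude - in this idealized model the balloon either always rises or never lifts off. A Python sketch with hypothetical envelope and gas masses:

```python
import math

M_AIR, M_HE = 0.02896, 0.004003   # molar masses, kg/mol
RHO_AIR0 = 1.225                  # sea-level air density, kg/m^3
H = 8500.0                        # atmospheric scale height, m (isothermal assumption)

def air_density(h):
    """Barometric air density at altitude h (metres), isothermal approximation."""
    return RHO_AIR0 * math.exp(-h / H)

def system_density(h, m_envelope, m_helium):
    """Average density of envelope + helium, with the gas at ambient pressure and temperature."""
    rho_he = air_density(h) * M_HE / M_AIR   # same P and T => densities scale as molar mass
    volume = m_helium / rho_he
    return (m_envelope + m_helium) / volume

# The density ratio is altitude-independent in this model:
for h in (0.0, 10_000.0, 50_000.0):
    print(h, system_density(h, m_envelope=2.0, m_helium=10.0) / air_density(h))
```

For these made-up masses the ratio is (1 + 2/10) × (M_HE/M_AIR) ≈ 0.17 at every altitude, so this hypothetical balloon keeps rising until the model's assumptions break down - consistent with the discussion above that the outcome depends entirely on the constraints you choose.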
And below a critical density, the atmosphere will behave sufficiently like a vacuum or a sparse plasma. Once the balloon is in the ionosphere or thermosphere, buoyancy will be an insignificant force, and thermal interactions will become insignificant (although, a balloon of near-infinite volume might have sufficient number of thermal collisions to break down our cold plasma assumptions...) Nimur (talk) 23:24, 4 June 2009 (UTC)[reply]
You didn't give us enough information to answer your question. Tell us the mass of the balloon, the mass of the helium inside and the tension on the balloon's rubber, and then we may be able to give you a reasonable answer. Dauto (talk) 00:59, 5 June 2009 (UTC)[reply]
The atmosphere has no end, so I guess the simplest answer is that it will continue to rise and expand until interactions with the rest of the solar system become more significant than interactions with the Earth (which basically means it has escaped the Earth). At some point, either buoyancy would cease to be significant or you would reach a point where the atmosphere gives way to the Interplanetary medium, which I believe is mostly (ionised) hydrogen, so is less dense than helium at equal pressure (to the extent that pressure is well defined in such circumstances). However, your assumptions are clearly impossible, so the only entirely accurate answer is "anything could happen". (See Vacuous truth.) --Tango (talk) 01:46, 5 June 2009 (UTC)[reply]
Actually, the atmosphere has an ill-defined end; it gets infinitesimally thinner as you keep going out from the earth. This does NOT mean that it has no end; to assume it never ends is to make the same error that is made in Zeno's paradoxes. (See discussion elsewhere at the ref desk on this). Eventually, the matter density of the atmosphere fades to match the average matter density of the rest of the so-called "empty space" of the rest of the solar system. At that point, we can say that the atmosphere has ended, since it has become indistinguishable from space. That line can be clearly drawn around the earth, so there is no point in saying it "never" ends. The atmosphere is a physical example of a convergent infinite series, and the operative part here is "convergent". At the limit where the diminishing atmosphere converges with the rest of space, we can say the atmosphere ends. --Jayron32.talk.contribs 03:06, 5 June 2009 (UTC)[reply]
By "has no end" I meant it never becomes a true vacuum, which is what would be required for this idealised balloon to stop rising and float on top of the atmosphere. I talked about the atmosphere giving way to the interplanetary medium, so I think it is quite clear that I understand what is going on... --Tango (talk) 03:10, 5 June 2009 (UTC)[reply]
Well, let's call it a misunderstanding over the use of imprecise language. It would be easy to assume, when you say it "has no end", that it continues on forever. It doesn't really; it's just that you have to define what you mean by "end". --Jayron32.talk.contribs 03:32, 5 June 2009 (UTC)[reply]

Falling for infinitely long

Since I was learning about terminal velocity, I was wondering whether it would be physically possible for an object to have a terminal velocity greater than c - if it were a point object such that A was very small, falling through a low-density fluid with a low drag coefficient, and with a great enough mass. If this were possible (albeit physically it would take a long time to attain it; imagine it falling through an infinitely long tube), would that mean that as v → c, the mass would go to zero, such that its kinetic energy would never exceed E = mc²? Or would the mass remain the same, given that E² = (mc²)² + (pc)²? Using the second equation, E could increase without bound? I would imagine a vacuum would have a drag coefficient of zero; this would make terminal velocity infinite, although a perfect vacuum is impossible. —Preceding unsigned comment added by 24.171.145.63 (talk) 21:16, 4 June 2009 (UTC)[reply]

The short answer is "no". I think we have an article, relativistic addition of velocities or some such? If not, it should redirect somewhere.
Things are a bit more complicated if you consider the large-scale geometry of the universe -- see observable universe. One way of expressing the edge of the observable universe is that it's where galaxies are moving away from us faster than c. --Trovatore (talk) 21:20, 4 June 2009 (UTC)[reply]
Velocity_addition_formula#Special_Theory_of_Relativity. --Tango (talk) 01:38, 5 June 2009 (UTC)[reply]
Why would the mass go to zero? Rest mass is always the same, irrespective of velocity. Relativistic mass increases as velocity increases. The total energy of a moving object is the sum of the rest energy, mc² (where m is the rest mass), and the kinetic energy (for small velocities that's mv²/2, for velocities near the speed of light you need to use relativity). The rest energy is constant, the kinetic energy increases with velocity (and increases without bound, even though the velocity is bounded above by the speed of light). --Tango (talk) 01:38, 5 June 2009 (UTC)[reply]
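Both points in this thread - that velocities compose so as to stay below c, and that kinetic energy grows without bound as v approaches c - are easy to check numerically. A small illustrative Python sketch (my own, using the standard special-relativity formulas):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor for speed v < c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def kinetic_energy(m, v):
    """Relativistic kinetic energy (gamma - 1) * m * c^2, in joules."""
    return (gamma(v) - 1.0) * m * C ** 2

def add_velocities(u, v):
    """Relativistic velocity addition: stays below c whenever u, v < c."""
    return (u + v) / (1.0 + u * v / C ** 2)

# 0.9c "plus" 0.9c is about 0.994c, not 1.8c:
print(add_velocities(0.9 * C, 0.9 * C) / C)

# The kinetic energy of 1 kg diverges as v -> c, though v never reaches c:
for frac in (0.9, 0.99, 0.999, 0.9999):
    print(frac, kinetic_energy(1.0, frac * C))
```

At low speeds `kinetic_energy` reduces to the familiar mv²/2, which is the limit Tango describes.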

A structural designation

When listing or describing architectural structures, what does HEW (insert number here) mean, e.g. HEW 365, which is for the Gasworks Railway Tunnel around Kings Cross in London? Simply south (talk) 22:20, 4 June 2009 (UTC)[reply]

I think it is the Panel for Historical Engineering Works designation, and I think said panel is part of, or connected with, the Institution of Civil Engineers. See, for instance, the introduction to London and the Thames Valley, by Denis Smith, Institution of Civil Engineers (Great Britain), and then note that the book lists HEW numbers against each structure described. --Tagishsimon (talk) 22:33, 4 June 2009 (UTC)[reply]
See also ICE overview of PHEW and a PHEW database with a crappy user interface --Tagishsimon (talk) 22:36, 4 June 2009 (UTC)[reply]
Thank you for that. Simply south (talk) 23:17, 4 June 2009 (UTC)[reply]

Understand weak interaction and beta decay

I have tried for a long time to understand how a force could result in β decay.

As for my knowledge, I usually think of a force as vectors (or a vector field if we speak of fields): the electric force between 2 charges as 2 vectors with the same direction, the same for the gravitational force, and I think I could do the same for the strong force. But when I read of the weak force and its role in β decay, first I see only Feynman diagrams, which I understand, but I cannot fit them with my idea of vectors. Second, I see only one particle interacting with this force, whereas for all of the other forces there are at least 2 particles, or a field and a particle.

I conclude saying that I think I understand the basic mechanism of β- decay. So, said in really raw words: a neutron emits (on its own, or is there a reason?) a W- boson, which "extracts" one unit of negative electric charge from the neutron, and the neutron is transformed into a proton. Then the W- boson decays into an electron and an electron antineutrino.

Could someone help me and clear up my doubts?

ColOfAbRiX (talk) 23:56, 4 June 2009 (UTC)[reply]

You said you see only one particle interacting in the beta decay, but when you gave us an example of beta decay you mentioned a neutron, a proton, a W-, an electron and an anti-neutrino. I count five different particles. Even if we exclude the W, which plays the role of the "force" here, you still have four interacting particles. Forget about vectors, they won't help you here. You should think of "forces" as interactions between different fields (the vertices of the Feynman diagrams). Dauto (talk) 01:19, 5 June 2009 (UTC)[reply]

June 5

Distribution of stellar classes

According to the table in Harvard spectral classification, stars of spectral class M comprise ~76% of all main sequence stars while Sun-like stars in spectral class G comprise ~7.6% of all main sequence stars. This suggests to me that class M stars outnumber class G stars by a ratio of 10:1. My own OR suggests that is roughly true within 20 light years of the Sun, but is that ratio maintained throughout the Milky Way galaxy? Presumably similar ratios can be calculated for other spectral classes as well - I am particularly interested if some parts of the Galaxy are lacking in stars of a particular spectral class. Astronaut (talk) 00:10, 5 June 2009 (UTC)[reply]

I think the ratio is maintained on a large scale (presumably universally, although I'm not sure how much evidence there is to support that). I'm not sure about the distribution of different spectral classes throughout the galaxy, but there are certainly differences in the metallicity of stars in different areas (that article explains it fully). --Tango (talk) 01:32, 5 June 2009 (UTC)[reply]
I saw the metallicity article, but I'm not convinced there is a strong link between metallicity and spectral class, and anyway I'm really only interested in stars in the Milky Way disc. The kind of thing I'm trying to find out is if, for example, we observed that the population density of O and B class stars in the spiral arms was typically double that in the regions between the spiral arms, could we assume the same was true of the population density of G, K and M class stars, even though we cannot observe them directly? In other words, can I use the distribution of bright stars that can be observed at great distances, to reliably infer the existence of a much larger population of dim stars at these great distances? Astronaut (talk) 04:39, 5 June 2009 (UTC)[reply]
I don't think you can do that because the O class stars are short lived (less than 10 million years) and won't have enough time to move very far from the star nursery where they were born, while smaller stars can live much longer and will have a more uniform spatial distribution. Dauto (talk) 06:43, 5 June 2009 (UTC)[reply]
One factor in variation of star population is related to the size of the galactic body they are in. Globular clusters are known to lose stars through a process analogous to (and actually called by some) "evaporation". The lighter stars in the cluster are the most "volatile" since they will gain the highest velocities in energy exchange interactions with other stars. Globular clusters thus tend to have a relative poverty of K and M-type stars since these will be the first to achieve escape velocities. SpinningSpark 13:08, 5 June 2009 (UTC)[reply]
Two important keywords for this type of question are the initial mass function (IMF) and the star formation history, and it's the latter that chiefly determines the current distribution of stellar types in a given stellar population. Starting from the IMF, an assumption about the star formation history and using stellar evolution models, one can try to model the stellar populations in various environments of the Milky Way, but also in other galaxies. Obviously, while in the Milky Way one can make a tally of individual stars, in other galaxies one only has a limited amount of information, like the colours of galaxies or integrated spectra. What a population of stars in a given area looks like at a given point in time thus depends on both the IMF and the star formation history. From modelling this sort of information one can then try to reconstruct these two factors. I'm far from being an expert in this, but my impression is that the IMF is fairly universal. There are indeed different versions that have been tried and which vary in particular at the low mass end, i.e. at late spectral types, but that controversy seems to be due more to our ignorance than to variation in different environments. The star formation history is much more variable. Early-type stars, O and B, say, are very short-lived and can only exist in environments where stars are currently being formed. If you switch off star formation (e.g. by removing cold gas) these stars will disappear very quickly and you'll be left with a population that is dominated by late-type stars, main-sequence stars dominating by number, and red giants dominating the light output. The neighbourhood of the sun is in the disk of the Milky Way which is still forming stars, hence we might expect some early-type stars (typically forming OB associations). The bulge forms far fewer stars and is therefore dominated by later-type stars. The extreme cases would be "red and dead" elliptical galaxies.
--Wrongfilter (talk) 16:24, 5 June 2009 (UTC)[reply]

Q about lovebirds

Is it true that it's impossible to keep one lovebird as a pet? Is it also true that if you have two lovebirds and one of them dies, then the other one will become depressed, stop eating and die soon after? Thanks. --84.67.12.110 (talk) 00:13, 5 June 2009 (UTC)[reply]

I've looked around and read a bunch of likely pages - and I can't find anyone that says this is true. However, it's clear that they are extremely active, intelligent, social birds and might get depressed because they don't have anything to keep them amused when their mate dies. I suspect that the myth of one bird dying and its mate dying "of a broken heart" soon after has its roots in the fact that if you buy two birds of the same age on the same day - keep them in the same cage and feed them the same food - then the odds are pretty good that whatever kills one of them will soon kill the other...especially if it's a bit depressed because it's bored. This would seem a lot like death from a broken heart - but in all likelihood it's probably just that whatever killed the first one also kills the second. As for keeping just one of them - according to most of the sites I've read, it'll do OK PROVIDING you give it tons and tons of attention and keep it busy and interested in life. Having two of them relieves you of some of this huge commitment in that (to some extent) they'll keep each other amused when you aren't around. SteveBaker (talk) 03:50, 5 June 2009 (UTC)[reply]
This song recounts a field observation of catabythismus incidental to amorous dysfunction by Petroica macrocephala, the Miromiro or New Zealand Tit, a bird of the Petroicidae (Australasian robin) family colloquially known as a tom-tit. Cuddlyable3 (talk) 08:39, 5 June 2009 (UTC)[reply]

largest possible Black Hole

Is there a radius at which the density of a black hole is so low that it is no longer a black hole? --71.100.6.71 (talk) 02:50, 5 June 2009 (UTC)[reply]

No. You are correct that the density of a black hole (i.e. the average density within the event horizon as viewed from a distance) decreases as the black hole grows, but there is no minimum density required to remain a black hole. --Tango (talk) 03:15, 5 June 2009 (UTC)[reply]
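To put numbers on this: the Schwarzschild radius grows linearly with mass, so the average density inside the horizon falls off as 1/M². A quick illustrative Python sketch (standard formulas; the masses chosen are just examples):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0    # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(m):
    """Schwarzschild radius r_s = 2GM/c^2, in metres."""
    return 2.0 * G * m / C ** 2

def mean_density(m):
    """Mass divided by the volume of a sphere of Schwarzschild radius, kg/m^3."""
    r = schwarzschild_radius(m)
    return m / ((4.0 / 3.0) * math.pi * r ** 3)

for solar_masses in (1, 4e6, 1e9, 2e10):
    m = solar_masses * M_SUN
    print(f"{solar_masses:g} M_sun: mean density = {mean_density(m):.3g} kg/m^3")
```

A stellar-mass hole averages far denser than an atomic nucleus, while a ~10^10 solar-mass hole averages less dense than air (~1.2 kg/m³) - yet both are black holes, which is exactly the "no minimum density" point.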

zinc chloride

Why is zinc chloride not weighed while it is still hot? —Preceding unsigned comment added by 115.134.149.71 (talk) 03:51, 5 June 2009 (UTC)[reply]

Nothing should be weighed while its temperature is different from the ambient temperature. If the substance is hotter than the air it will generate convection currents which can easily disrupt a sensitive balance and give an inaccurate reading. If you are going to weigh something, let it cool to room temperature so that such convection currents do not screw with the balance. Of course, you already knew all of this, because you paid attention when your chemistry teacher explained all of this to you. Right? --Jayron32.talk.contribs 04:02, 5 June 2009 (UTC)[reply]

Lame flight data recorders

The flight data recorders presently in use on jumbo jets send out a beacon signal detectable to a reported one mile, but the planes may crash in ocean depths of up to 4 miles, such as the recent Air France jet crash. Would the acoustic beacon power have to increase as the square or the cube to be detectable at 4 times the present distance, so there would be a great likelihood of recovery? This could mean 16 times the radiated acoustic signal or 64 times, respectively. News articles also say the batteries will fail after 30 days of beeping. Why couldn't the flight data recorder have a transponder which sends out a very powerful signal only when a powerful search ship probe signal triggers it, meaning it could still be found long after the crash? Edison (talk) 05:26, 5 June 2009 (UTC)[reply]

I think the deal with FDRs is that they are built to an old standard, and the standard has never been updated, because having one standard is preferable in most cases to having 100 different types of FDRs. If they are all identical, then one always knows what to look for. Actually, the better standard would seem to be to instead broadcast the information via a digital data stream over a satellite system. The problem is that the FDR is attached to the plane. If the flight data information were stored outside the plane, it would greatly reduce the need to recover the box. The black box could still exist as a backup. Of course, the technology exists to do exactly that; the problem would be retrofitting the world's airplane fleet and relevant ground locations with the system, and it would be a daunting task. But the technology certainly exists to devise a better system. --Jayron32.talk.contribs 05:33, 5 June 2009 (UTC)[reply]
I don't know how much data they record, but surely it's enough to make every plane in the air constantly streaming it over expensive satellite bandwidth impractical. --Sean 14:11, 5 June 2009 (UTC)[reply]
Also, couldn't bad weather (like the recent Air France crash occurred in) interrupt the signal? TastyCakes (talk) 14:25, 5 June 2009 (UTC)[reply]
In a simple isotropic medium it would generally be distance squared, but there is a correction for scattering, so it is really more like e^(-r/λ)/r², where λ is about 1 km in the ocean. Which means that at 4 km you get about 1/300th the power as at 1 km. A further effect is the diffraction around the SOFAR channel in the ocean, which makes it nearly impossible for deep acoustic signals to ever reach the surface anyway, so even with a more powerful transmitter you'd still have to dangle a deep hydrophone to hear it. Dragons flight (talk) 16:44, 5 June 2009 (UTC)[reply]
Transmitting 4 miles would require more like 2000 times the power as transmitting 1 mile, given the formula above. Dragons flight (talk) 17:04, 5 June 2009 (UTC)[reply]
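Both figures follow directly from the attenuation law quoted above - inverse-square spreading times an exponential scattering loss with λ ≈ 1 km. A Python check (ranges converted from miles where needed):

```python
import math

def received_fraction(r_km, lam_km=1.0):
    """Relative received power at range r: exp(-r/lam) / r^2."""
    return math.exp(-r_km / lam_km) / r_km ** 2

# 4 km vs 1 km: roughly 1/320th of the power, i.e. "about 1/300th".
print(received_fraction(1.0) / received_fraction(4.0))

# 4 miles vs 1 mile: the transmitter would need roughly 2000x the power.
KM_PER_MILE = 1.609344
print(received_fraction(KM_PER_MILE) / received_fraction(4.0 * KM_PER_MILE))
```

The mile-based ratio is so much larger than the kilometre-based one because the exponential term dominates once the range is several multiples of λ.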
The Mary Sears was unable to locate the black boxes from Adam Air Flight 574 using hull mounted sonar. After returning to port and equipping a Towed Pinger Locator she detected both at a depth of 1700m. According to Jane's a TPL can detect the signal from a max depth of 20,000 ft.—eric 19:31, 5 June 2009 (UTC)[reply]
According to the Adam Air accident report they were found at 2000 and 1900 meters, and the Mary Sears was required to pass within 500m of a beacon to locate them.—eric 20:08, 5 June 2009 (UTC)[reply]

Radiocarbon and DNA evidence on Voynich Manuscript

Has the Voynich Manuscript been subjected to radiometric dating to determine the age of its materials, or to DNA testing to determine the region of their origin? NeonMerlin 05:41, 5 June 2009 (UTC)[reply]

According to this Yale journal (scroll down to "Shelf Life"), dated February 2009, "two outside specialists are analyzing the pigments in its ink and carbon dating a tiny sample of its vellum." I haven't come across any reports of their findings, though. Deor (talk) 15:02, 5 June 2009 (UTC)[reply]
Is it a coincidence that this question came today? Jørgen (talk) 17:02, 5 June 2009 (UTC)[reply]
My guess is that Merlin's inquiry was sparked after reading Mr. Munroe's comic this morning, and not by chance. 17:07, 5 June 2009 (UTC)

Electrical power usage of computer

Hi, in this green age I guess it's relevant to ask this question. I've always been told to leave my computer on all the time, as repeated on/off switching could not only damage the switch, it causes electrical spikes which could damage the motherboard(?) over time. However, now my latest concern is that leaving it on consumes at least 600 W (that being the rating of the power supply unit with the fan). Is this true? When the computer goes into power saving mode (turn off monitor and hard disk) - how much less does it use? Would it use even less if I tell the computer to hibernate during times of no usage? Sandman30s (talk) 09:24, 5 June 2009 (UTC)[reply]

While there is certainly some increased wear and tear from turning it on and off, I don't think it is much to worry about. If you're just stepping away for 10 minutes it probably isn't worth turning it off, but if it is going to be several hours it would be good to. It's difficult to say how much power it uses at different times, it varies from computer to computer, but the 600W figure would be a maximum (plus whatever your monitor uses and any other equipment that has its own mains plug), it won't use that all the time. You can get a power meter that goes in between the plug and the mains socket that will tell you exactly what it is using; I think most high street electrical stores will have them. Hibernation is actually the same as turning it off completely, the only difference is that it saves its current state to the hard drive so it can load back to the same point when you turn it back on. It will probably still use some power when switched off unless you unplug it or switch it off at the mains (see Wall wart). Leaving it on standby will use a significant amount of power. --Tango (talk) 12:37, 5 June 2009 (UTC)[reply]
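To put rough numbers on the difference (the wattages below are hypothetical placeholders, not measurements - a meter as suggested above is the way to get real ones), annual consumption is just watts × hours ÷ 1000:

```python
def energy_kwh(power_watts, hours):
    """Energy in kilowatt-hours for a device drawing power_watts for the given hours."""
    return power_watts * hours / 1000.0

# Hypothetical figures: ~100 W idling, ~5 W hibernating/standby.
IDLE_W, SLEEP_W = 100.0, 5.0

always_on = energy_kwh(IDLE_W, 24) * 365
eight_hours_a_day = (energy_kwh(IDLE_W, 8) + energy_kwh(SLEEP_W, 16)) * 365

print(f"Always on:           {always_on:.0f} kWh/year")
print(f"8 h/day + hibernate: {eight_hours_a_day:.0f} kWh/year")
```

With these made-up figures the always-on machine uses roughly two and a half times the energy, which is the scale of saving hibernation offers.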
Thanks, I think I'll do some tests with a power meter. Sandman30s (talk) 13:27, 5 June 2009 (UTC)[reply]
Also, note that in the winter, if you heat your house with electrical heating, you can just leave your computer on as all "wasted" energy contributes to heating your house anyway. (and conversely, if you use AC in the summer, you waste "double" the energy by leaving the computer on (I'd think?)) Jørgen (talk) 13:54, 5 June 2009 (UTC)[reply]
Both of your estimates are accurate - in data centers, it is estimated that total electric bills are equal to 2 times the power consumption of the electronics - this is a very common rule of thumb for estimating costs. This means that for each watt of useful electronics, an extra watt is necessary for air conditioners. It's hard to grasp this heating/air-conditioning concept intuitively - you "know" your computer is warm, but it's not that big... If you have the chance to visit a server-room, get permission to walk BEHIND one of the rack cabinets - behind a rack of 40 high-performance systems - you'll be blasted by the hot-air fan output which might reach a hundred kilowatts - it's like a furnace! Nimur (talk) 14:53, 5 June 2009 (UTC)[reply]
Thanks for the info! This should mean that environmentally friendly data centers should be placed in the basements of large offices / residential buildings in cold areas, so that the excess heat can be used to heat the building. I think I've read some articles on large data processing centers being located in places where hydro power is prevalent, or where there is a river available for cooling, both of which would be fine, but I think that placing them in cold areas - pumping cold air in (in some controlled process, of course) to cool the components and using the excess heat for heating - would be an even better approach? Now, I know the logistics of this might be hard, but I'm sort of surprised that no fringe-targeted "environmentally friendly cluster" has marketed this yet. (Oh, and sorry for hijacking the question) Jørgen (talk) 15:08, 5 June 2009 (UTC)[reply]
The use of cold, outside air as a source of cooling for data centers is already an accepted solution in some areas. Care must be taken that outside air does not introduce fine particulates that might damage equipment, and the temperature and humidity of incoming air must be tuned; these challenges and concerns have delayed the adoption of so-called air-side economizers or outside air economizers. Nevertheless, some more recent studies (by Intel and some electrical utilities; linked from our articles) have allayed fears over these risks. Here's an article about a Canadian firm that gets about 65% of their data center cooling from outside air: [30]. TenOfAllTrades(talk) 16:51, 5 June 2009 (UTC)[reply]
Thanks! Good to see that good ideas are put to use. Jørgen (talk) 17:00, 5 June 2009 (UTC)[reply]
In this interview with an IT manager working at the South Pole, he says that at their old data center somebody had just cut a hole in the wall to cool everything inside. Tempshill (talk) 17:33, 5 June 2009 (UTC)[reply]

Satellite photos of London from today (5th June 2009)

Hi,

Is there any way of getting these? I realise live satellite feeds probably aren't available but what about very quickly archived pictures? --Rixxin (talk) 11:26, 5 June 2009 (UTC)[reply]

Photos from weather satellites, maybe. What kind of resolution are you looking for? --Tango (talk) 13:28, 5 June 2009 (UTC)[reply]
Honestly? This is sure gonna sound lame. I was interested to see if there was a big 'ol dark cloud over The Palace of Westminster today, 'cos it sure felt like there should be!--Rixxin (talk) 16:43, 5 June 2009 (UTC)[reply]
Well, there was a big dark cloud over the whole of England, pretty much... See here for visible light satellite images of London and the South East of England over today (one per hour). I can't find a close up of Westminster, but the weather was pretty consistent so it doesn't really matter. --Tango (talk) 18:51, 5 June 2009 (UTC)[reply]
The day will undoubtedly come when we will be able to view any part of the earth in real time, and zoom in on places of interest. Wars could be watched as they took place. Imagine what it would have been like to watch the D-Day invasion in real time. – GlowWorm. —Preceding unsigned comment added by 98.21.107.157 (talk) 18:44, 5 June 2009 (UTC)[reply]
Actually, you could have watched it in real-time, with the right equipment. --98.217.14.211 (talk) 19:57, 5 June 2009 (UTC)[reply]
It's very unlikely that such a thing would ever happen, however nice an idea it may be. Think about it. If satellite pictures of battles were freely available on the internet as the battles were taking place, they would be available to both sides and would present serious problems (and of course, advantages) to both sides. --KageTora - (영호 (影虎)) (talk) 19:11, 5 June 2009 (UTC)[reply]
You would need millions of satellites, and much more bandwidth than the entire Internet has today. --131.188.3.20 (talk) 22:34, 5 June 2009 (UTC)[reply]
Half a century ago it would have sounded ridiculous that everyone was walking around with their own personal mobile phone. You could not build enough transmitters, and where would all the bandwidth come from? Technology will continue to improve apace and if you cannot think of a fundamental theoretical reason to stop such things happening, then they probably will if people want them to. SpinningSpark 00:04, 6 June 2009 (UTC)[reply]
Indeed. If you are willing to wait 50 years, almost anything is possible unless it is explicitly impossible (and maybe even then!). I believe three satellites in geostationary orbit with cameras with amazing resolution could cover most of the Earth; all it needs is for someone to invent the cameras, and that could easily happen in the next 50 years. --Tango (talk) 00:08, 6 June 2009 (UTC)[reply]

Is it really true that Planes which cruise at high altitude get hit by lightning?

Hi All,

I just keep wondering if the latest French aeroplane accident is really due to lightning. As I understand it, at heights of over 28k feet there are no rain-bearing clouds; in fact (again, within my limited knowledge) these clouds are much lower. As such, there shouldn't be lightning either. Am I correct in my understanding? I would appreciate it if people could enlighten me on the same.

Warm Regards, Rainlover_9 Rainlover 9 (talk) 14:22, 5 June 2009 (UTC)[reply]

See upper-atmospheric lightning. Gandalf61 (talk) 14:26, 5 June 2009 (UTC)[reply]
Planes DO get hit by lightning - but not by "upper-atmospheric lightning"! Upper-atmospheric lightning is WAY above where a plane flies. A plane will fly at about 35,000 feet - only about 10 km above the ground. The phenomena referenced in upper-atmospheric lightning are more appropriately called "Transient Luminous Events" because they are neither lightning, nor do they take place in the "atmosphere" (largely being above the stratosphere and in the lower boundary layers of the ionosphere). There is no way even a military aircraft would fly at those altitudes (80 or 100 km above the ground). Airplanes can and do get hit by tropospheric lightning when they fly close to or through a storm - and some cumulonimbus clouds might have storm tops as high as 60,000 feet above ground. It is this type of lightning - typically a cloud-to-cloud strike - that results in the occasional airplane strike. Nimur (talk) 14:59, 5 June 2009 (UTC)[reply]
I have seen a photo of lightning striking up from a cloud as well, so that could be a possibility. Also, an airplane flying along might build up its own static charge, and may become a lightning target by itself, similar to how lightning will go from cloud to cloud. 65.121.141.34 (talk) 15:03, 5 June 2009 (UTC)[reply]

Is 2 + 2 = 4 in other universes

Hello, while wandering about on the Internet I came across this interesting question on Yahoo Answers:

"Is 2 + 2 = 4 in other universes? Assuming that parallel universes exist, would their mathematics be the same as ours? I.e. could there be a universe in which, for example, 2 + 2 = 5?"

People may be interested in the answers posted: http://au.answers.yahoo.com/question/index;_ylt=AvNVaIojr7pEw1Bn_IyyzxoJ5wt.;_ylv=3?qid=20090603213159AAKkQK1

Since the theory of multiple universes was derived using the maths that we understand, it follows that the maths in parallel universes MUST be the same. Otherwise, the theory would be inconsistent. That's my guess; I was just wondering what others might think about this.

203.206.250.131 (talk) 14:29, 5 June 2009 (UTC)[reply]

Unless they use different numbers, it's probably going to be the same. If you have 2 things and add 2 more, it's going to be 4 every time. --Abce2|AccessDenied 14:31, 5 June 2009 (UTC)[reply]

Actually I would disagree and say there definitely could be universes where 2+2=5 or even 2=3. The whole idea behind a theoretical multiverse is that there are an infinite number of universes in which the laws of physics, logic and math can be different. Time could run backwards, accelerate, or not exist at all. Keep in mind math has never been proven to be the basis of reality (maybe once we get a grand unification theory it will be) but is simply a representation which we've created. But either way, even with a G.U.T. this would only prove math is immutable in OUR universe; others could be entirely different. TheFutureAwaits (talk) 14:56, 5 June 2009 (UTC)[reply]

A better way to phrase this question is, "Suppose a hypothetical universe existed where 2+2 != 4. What would be the consequences?" I do not think there are any verifiable consequences of such a statement, so it is a non-scientific question. I think I'll leave my response at that, and we can just wait for SteveBaker to show up and remind everyone that needless philosophizing is wasting quarks... Nimur (talk) 15:03, 5 June 2009 (UTC)[reply]
(ec) Personally I think maths evolved from what you see. If you have two balls and someone else gives you two more balls, you find out that you have four balls. This process is called addition. So, maybe, just maybe, there might be a universe where if you have two balls already and I give you two more, another one automatically materializes and you get 5. It would be a physical law in that universe that whenever you are given two more balls, an additional one materializes. In this particular universe, I am inclined to believe that 2+2 will indeed be 5, and hence all our mathematics breaks down. So, if there can be universes with different physical laws, this might well be possible, so I think it cannot be said with certainty that 2+2 is always 4. It can just as well be any other number, though it is hard to visualize in this example how it could be a fraction (half a ball? pi times a ball?). I am assuming decimal systems are used in all universes. Rkr1991 (talk) 15:09, 5 June 2009 (UTC)[reply]
It depends on your definitions of 2, +, = and 4. If you define them the same way as we define them, which is not dependent on the universe, then you'll get the same answers. However, life in other universes may well define them differently if different definitions are useful for them. Mathematics and universes are separate things linked by models. We model parts of the universe on mathematical things. To use Rkr1991's example, balls are well modelled in our universe by natural numbers; they might not be well modelled by natural numbers in other universes. That doesn't mean natural numbers are any different, just that they wouldn't be used. There are all kinds of possible number systems in mathematics but we generally only use the ones which are useful for describing our universe. --Tango (talk) 15:20, 5 June 2009 (UTC)[reply]
I disagree with Tango. Even if you define 2, +, = and 4 exactly the same way in our system, the problem posed in my example remains unresolved. As I said earlier, I think maths was derived from what we observe, not hard solid facts. Maths and our universe cannot be separated; maths cannot exist independently. So if we see that 2 balls and 2 balls make 5 balls in a universe, then I think our maths indeed breaks down there. Rkr1991 (talk) 15:38, 5 June 2009 (UTC)[reply]
I agree with Tango. If we see 2 balls and 2 balls make 5 balls, we will reasonably deduce that our definitions of 2, +, = and 4 are no longer useful, and we will redefine one of them. But unless we do that, if instead we keep 2 defined (as is most common) as the set {{}, {{}}} and + through Peano arithmetic, 2 + 2 cannot suddenly become 5. It's not like mathematics defines 2 + 2 as "the number of physical objects in the collection resulting from putting two physical objects together with two other physical objects". Correspondence with that physical result is of course the reason for our definitions (that's why Peano chose the axioms he did, etc), but it is not the definition. —JAOTC 15:54, 5 June 2009 (UTC)[reply]
Our definitions of arithmetic are axiomatic, not empirical. Observations don't come into it other than to determine whether those definitions are useful. --Tango (talk) 16:29, 5 June 2009 (UTC)[reply]
(e/c) Math was originally derived from experimentation, but this is no longer the case. A mathematical theory is built from a group of axioms and is supposed to be interesting. People will find parallels between the mathematical theory and the universe, and use that to make scientific theories. You could have a universe where the strength of gravity is not inversely proportional to the square of the distance, but that just means gravity will be modeled with different mathematics. All that being said, there could be universes where it's possible to build a hypercomputer, and it would be possible to do more sophisticated math. It's not that they'd get different theorems for the same theory (2+2=4 would still be true); it's that they'd be able to use axiom schemas that we cannot. For example, they might be able to find out if every even number above four is the sum of two odd primes by checking every case, something physically impossible in our universe. — DanielLC 16:03, 5 June 2009 (UTC)[reply]
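For the curious, the point made above - that 2+2=4 follows from definitions rather than observation - can be made concrete. Here is a sketch of Peano-style naturals (a standard construction, not anything specific to this thread; the encoding of numbers as nested tuples is just an illustrative choice):

```python
# Peano-style naturals: a zero object and a successor function.
# Addition is defined by the usual recursion:
#   a + 0    = a
#   a + S(b) = S(a + b)

ZERO = ()

def S(n):
    """Successor of n."""
    return (n,)

def add(a, b):
    """Peano addition by recursion on the second argument."""
    return a if b == ZERO else S(add(a, b[0]))

ONE = S(ZERO)
TWO = S(ONE)
FOUR = S(S(TWO))

print(add(TWO, TWO) == FOUR)  # True: 2 + 2 = 4 by definition alone
```

Nothing empirical enters anywhere: the result is forced by the definitions of `ZERO`, `S`, and `add`, which is exactly the sense in which 2+2=4 is axiomatic rather than observational.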
I would like to point out that there are things that appear axiomatic that are in fact dependent on unstated assumptions. Euclid teaches, and people agreed for quite some time, that the interior angles of a triangle always add up to 180°. That's been a basic theorem of geometry for hundreds of years, and is as easy to demonstrate as 2+2=4. Of course, unstated is that this only works if you are talking about a triangle on a flat plane—if you are sketching the triangle on a curved surface, it isn't true at all. A triangle on a sphere can have interior angles that add up to more than 180°. There's an unstated limiting factor in Euclidean geometry—that it only applies to flat surfaces—that becomes clear if you actually start, say, drawing large triangles on the ground (and you happen to live on a sphere).
I don't know enough about number theory to say whether there could be something analogous with arithmetic, but it's worth considering. --98.217.14.211 (talk) 17:21, 5 June 2009 (UTC)[reply]
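The sphere example above can be quantified with Girard's theorem (a standard result, not something from the thread): a spherical triangle's angle sum exceeds 180° by its area divided by the squared radius. A sketch:

```python
import math

# Girard's theorem: on a sphere of radius R, a triangle's interior
# angles sum to 180 degrees plus the "spherical excess" area / R^2
# (the excess is in radians, so convert before adding).

def angle_sum_deg(area, radius):
    """Interior-angle sum (degrees) of a spherical triangle."""
    return 180.0 + math.degrees(area / radius**2)

# The "octant" triangle (north pole plus two equator points 90 degrees
# apart) covers 1/8 of the sphere's surface:
R = 1.0
octant_area = 4 * math.pi * R**2 / 8

print(angle_sum_deg(octant_area, R))  # 270.0 -- three right angles
```

As the triangle shrinks relative to the sphere, the excess term vanishes and the Euclidean 180° answer is recovered, which is why the flatness assumption went unnoticed for so long.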
No good mathematician would leave those assumptions unstated, at least in formal work. Everything in mathematics is based on assumptions. Results are always of the form "A implies B" never simply "B". One of the key things that is drilled into any mathematics student, however, is to always state your assumptions. --Tango (talk) 18:01, 5 June 2009 (UTC)[reply]
If 2+2 *always* equalled 5, then the alternate universe's laws of mathematics would likely look a lot like ours (though perhaps a bit more cumbersome). However, consider a universe in which 2+2 sometimes was 4, sometimes 3, sometimes 5, etc. In a sense, we live in such a universe now. The math we use to deal with this is statistics. Two random variables, when added together, can give you different answers each time you observe them. Wikiant (talk) 18:06, 5 June 2009 (UTC)[reply]
In which case they wouldn't model items as numbers but rather as random variables. Mathematics includes all kinds of tools, you need to use the right one for the right job. Just because one tool isn't right for a particular job doesn't mean that it is wrong mathematically. --Tango (talk) 18:10, 5 June 2009 (UTC)[reply]
I would like to leap to the defence of Euclid on this one. Euclid realised perfectly well that the angles in a triangle summing to 180° was not axiomatic but was dependent on the parallel postulate, and clearly states this as a postulate in his Elements (postulate 5). It was later mathematicians who through the centuries refused to believe that the parallel postulate could not somehow be proved from other basic postulates. It was only finally admitted that Euclid was right when Einstein discovered that the universe was not, in fact, Euclidean and that other geometries were possible in the real world as well as in the minds of mathematicians. SpinningSpark 23:06, 5 June 2009 (UTC)[reply]
Are we really debating this? If I have two apples on a table, and place two more apples on the same table, I now have four apples on the table. If I lived in a universe where an extra apple materialized merely because I placed two groups of two next to each other, such a universe would violate the principle of causality so as to be entirely unexplainable at any level. Seriously, why is this even being debated to this level? --Jayron32.talk.contribs 19:00, 5 June 2009 (UTC)[reply]
Your example only violates causality if it is true that 2+2=4. So, your argument boils down to the circular "2+2 must equal 4 because if it didn't, 2+2 wouldn't equal 4." For a possible counter-example, consider this universe wherein subatomic particles and anti-particles spontaneously emerge. This could be interpreted to suggest that 0+0=2. Wikiant (talk) 21:05, 5 June 2009 (UTC)[reply]
Why should another universe obey our universe's laws of causality? It isn't difficult to imagine a universe with two timelike dimensions, which would have very different principles of causality. (I think I read a paper once about such a universe and the conclusion was that complex life almost certainly couldn't evolve in it, but that's not important.) --Tango (talk) 21:15, 5 June 2009 (UTC)[reply]
This reminds me of David Deutsch's concept of "cantgotu" environments in The Fabric of Reality, but I can't remember what if anything he said about the idea of 2+2 not equaling 4. He deals with this general kind of idea, though. 213.122.1.200 (talk) 22:01, 5 June 2009 (UTC)[reply]
No, no, no. You can't use things popping into existence or whatever to envisage a scenario where 2+2 doesn't equal 4. In our universe, 0.9 times the speed of light plus 0.9 times the speed of light equals 1.8 times the speed of light (yes, it really does). However, if you're travelling at 0.9 times the speed of light relative to me and you toss a ball out in front of you at 0.9 times the speed of light, it won't be moving at 1.8 times the speed of light relative to me - it'll be more like 0.99. That's not because 0.9+0.9 doesn't equal 1.8 - it's because the arithmetic operator '+' doesn't apply to speeds as we humans always thought it did. This really shouldn't be a surprise - the '+' operator doesn't apply to a whole lot of things!
So if two subatomic particles pop out of nowhere, that doesn't prove that 0+0=2 - it proves that the total number of particles in some volume of space is not a constant - so addition is (again) not an appropriate operation to apply to them.
The point is that things like '+', '2', '=' and '4' are symbols that stand for concepts that have definitions that humans have applied to them. Our definition says that 2+2=4 - not because that necessarily represents something "real" but because that's what we've defined those symbols to mean.
Suppose I were to define the symbol '#' to mean "the sum of two numbers - except when there is a full moon, when it's the product of two numbers" - then 2#2=4 happens to be a true statement...but 3#3=6 isn't always true and 7#9=5 is never true. There isn't any deep inner meaning in what I just said - it's just words and definitions. There is probably no physical property of our universe for which the '#' operator is applicable - but my definition is still a perfectly reasonable one and 2#2=4 no matter what. Perhaps in another universe, placing apples on tables follows the '#' operator - but the '+' operator is kinda useless and considered weird to the people who live there, (who are constantly checking their almanacs to see when the next full moon will be!)
So this universe does whatever it does - the other universe does whatever it does - and whether our definitions for the symbols '2', '+', '=' and '4' are applicable to apples placed on tables in that other universe (or to the way speeds are combined) is a matter that we can't answer. But our definitions still apply - you can't invalidate something that's axiomatic in the system of mathematics you choose to use.
Hence, the answer is a clear "yes" - 2+2=4 everywhere - because we happen to have defined it that way. SteveBaker (talk) 22:27, 5 June 2009 (UTC)[reply]
I'm going to expand upon what's been said above, because hopefully it'll sort out some confusion. Back off a bit - say we're in a parallel universe which doesn't have the same laws of math. First let's start with a quantity. Names are arbitrary, so let's call it "@". That's going to be boring by itself, so we'll make another quantity, and we'll call it "!". Independent quantities by themselves aren't interesting, so let's add an operation: "%". How does this operation behave? It might be nice to have an identity, that is, if we 'percent' a quantity with the identity, we get the other number back. It's arbitrary which one we choose, as they are the same, so let's pick '@'. So we have '@ % @' => '@' and '! % @' => '!'. Okay, but now what about '! % !'? Well, we could say that '! % !' => '@', or even '! % !' => '!', but we can also introduce a new symbol, so let's do that and call it '&'. So '&' is defined as '! % !'. Now what is '& % !'? Keeping it open ended we define it to be '$'. And '$ % !' => '#'. Now, let's figure out what '& % &' is. Since '&' is the same as '! % !', we see that '& % &' is '(! % !) % (! % !)'. If we say that the order in which we 'percent' doesn't matter (associative property), we can rewrite that as '((! % !) % !) % !', or '(& % !) % !' or '$ % !', which we defined earlier as '#'. As long as '! % !' => '&', '& % !' => '$', '$ % !' => '#', and the '%' operation has the associative property, '& % &' => '#'. Hopefully you can see where I'm going with this; the names I gave them were arbitrary. I could just as easily have said 0 instead of @, 1 for !, 2/&, 3/$, 4/# and +/%, and you have your situation. If we have defined 1+1=2, 2+1=3, 3+1=4, and addition as associative, then 2+2=4. If 2+2=5, then either 3+1=5, 1+1≠2, or addition is not associative. We certainly could call 3+1 '5' if we wanted, but it would behave exactly the same way 4 does now. 
It would be the equivalent of writing 'IV' instead of '4' - the name changes, but the properties stay the same. Conversely, with addition being non-associative, why call it "addition"? Associativity is part of what defines addition - if you change that, you have something else.
"Non-standard" arithmetics are used all the time, however. That case earlier, where '!%!'=>'@' (er, 1+1=0)? That's modulo 2 arithmetic. We also have the case where '!%!'=>'!', except we call that multiplication, and we substitute 1 for @ and 0 for ! instead of vice versa. We also frequently encounter physical situations where 2+2≠4. Take, for example, adding two liters of water to two liters of sand: 2+2≠4 in that case. Mix 2 cups of vinegar with 2 cups of baking soda, and the result takes up much more than 4 cups. Physical reality doesn't have to match abstract mathematical constructs - however, instead of redefining 2+2=3 because the sand and water don't measure 4 liters afterward, or saying that 2+2=8 because of the carbon dioxide gas evolved from the vinegar and baking soda, we leave mathematics as a "pure" ideal, with 2+2=4, and realize normal addition doesn't apply to those situations. Likewise, the fact that the interior angles of a triangle don't add up to 180° on the surface of the earth didn't invalidate Euclidean geometry - it still exists theoretically; we just realize that when doing surveying, we need to use a non-Euclidean geometry. -- 128.104.112.106 (talk) 01:28, 6 June 2009 (UTC)[reply]
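The symbol game in the reply above can be mechanized. A sketch, keeping the thread's arbitrary names (@ ~ 0, ! ~ 1, & ~ 2, $ ~ 3, # ~ 4) and deriving '& % &' purely from the stated successor facts plus associativity:

```python
# The only facts we are given: '@' is the identity, and each
# non-identity symbol is defined as (previous symbol) % '!'.
succ = {'@': '!', '!': '&', '&': '$', '$': '#'}  # x % '!' => succ[x]
pred = {v: k for k, v in succ.items()}           # inverse lookup

def percent(a, b):
    """Compute a % b by peeling one '!' off b at a time, i.e. by
    repeatedly applying associativity: a % (c % !) = (a % c) % !."""
    if b == '@':
        return a                    # identity: a % @ => a
    return succ[percent(a, pred[b])]

print(percent('&', '&'))  # '#'  -- i.e. 2 + 2 = 4, forced by the definitions
```

Renaming the symbols to 0, 1, 2, 3, 4 and `%` to `+` changes nothing in the computation, which is the whole point: the result follows from the definitions, not from which universe you run the program in.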

Peacock and Peahen

Somebody told me that the peahen gets her eggs fertilized by orally swallowing the semen of the peacock. Is it true? —Preceding unsigned comment added by 202.70.74.155 (talk) 15:32, 5 June 2009 (UTC)[reply]

No. --Stephan Schulz (talk) 16:11, 5 June 2009 (UTC)[reply]

Egg protein

Since eating raw eggs is not better than eating cooked eggs, what way should I cook them so that minimum protein is lost in the process of cooking (i.e. boil, fry, etc.)? —Preceding unsigned comment added by 116.71.42.218 (talk) 18:00, 5 June 2009 (UTC)[reply]

This is a confusing question! If raw is not better than cooked, then why are you concerned about loss during cooking? As for outright loss during cooking, everything is still there except for maybe some small amounts that get stuck to the pan or leached out into the grease or other cooking medium, so something like hard-boiled would keep everything still in there. DMacks (talk) 18:04, 5 June 2009 (UTC)[reply]
The only protein loss during cooking would be the little bits that brown along where the egg meets the pan (see Maillard reaction). These are likely so small as to be insignificant, so you would be safe frying the eggs. As far as I am concerned, hard boiled eggs serve little purpose except perhaps to make deviled eggs or egg salad. I find them pretty bland and unpalatable. But if your concern is making sure you get every molecule of protein out that was there originally, taste be damned, then go with the hard-boiled method. --Jayron32.talk.contribs 18:28, 5 June 2009 (UTC)[reply]
Cooking will denature protein, but you will not lose the protein (specifically the amino acids). -- kainaw 18:33, 5 June 2009 (UTC)[reply]

To DMacks: I asked a question a couple of weeks back about whether raw eggs have any advantage over cooked ones, and I was told no. And so if I work out and want max protein from the eggs, I should hard boil them? I don't care about taste. —Preceding unsigned comment added by 116.71.59.87 (talk) 19:45, 5 June 2009 (UTC)[reply]

Kainaw already answered the above question, didn't he? Tempshill (talk) 22:10, 5 June 2009 (UTC)[reply]

Has anyone actually got any for the Euro elections? —Preceding unsigned comment added by 86.128.217.5 (talk) 22:43, 5 June 2009 (UTC)[reply]

work

Why would work be defined as a scalar rather than a vector? —Preceding unsigned comment added by 70.52.45.55 (talk) 23:29, 5 June 2009 (UTC)[reply]

Work done is synonymous with energy. The energy required to do something is equal to the sum of the energies required to do each part; they don't cancel out. --Tango (talk) 23:39, 5 June 2009 (UTC)[reply]
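Concretely, work is the dot product of force and displacement, W = F · d - a map from two vectors to a single scalar. A minimal sketch (the vectors here are made-up values for illustration):

```python
# Work W = F . d: the dot product takes two vectors and returns a
# scalar. Only motion along the force direction contributes.

def dot(f, d):
    """Dot product of two equal-length vectors."""
    return sum(fi * di for fi, di in zip(f, d))

F  = (3.0, 0.0, 0.0)   # force in newtons
d1 = (2.0, 0.0, 0.0)   # displacement in metres
d2 = (2.0, 5.0, 0.0)   # same x-displacement, extra sideways motion

print(dot(F, d1))  # 6.0 J
print(dot(F, d2))  # 6.0 J -- the sideways component does no work
```

This is one way to see why work has no direction to report: the directional information is consumed by the projection of displacement onto force.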