Wikipedia:Reference desk/Science: Difference between revisions

*If you have two smallest vectors with the same magnitude, then the mean vector will be on one of the angle bisectors. The info I got has lead I at 0° (to the right), lead II at 60° clockwise rotation, lead III at 120°. If I and III are equal in magnitude (they don't need the same sign), then the mean vector can be 150° or -30°, but in those cases lead II will be smallest, so the only possibilities left are +60° or -120°, depending on the sign of the lead II result. That's how I understood it, but all the different electrodes made it a bit confusing. So far only arm and leg electrodes seemed involved? A link to a site with the terminology or examples could help. More people are inclined to have a look if the subject is just a click away instead of having to google first. <sub>Hmmm, would there be a correlation between links in question and number of responses...</sub> [[Special:Contributions/84.197.178.75|84.197.178.75]] ([[User talk:84.197.178.75|talk]]) 19:37, 14 March 2012 (UTC)

All basically correct! Our article on [[Electrocardiography]] has some information about vectorial analysis, but I'm not sure if that's sufficient for you. In the normal heart the mean electrical vector is usually around 60 degrees (lead II), but anywhere between -30 and +90 is considered normal. The mean vector, of course, will be perpendicular to the direction of the smallest vector, and in the direction of the most positive vector. [[User:Mattopaedia|<font color='Indigo'><b>Matto</b></font>]][[Special:Contributions/Mattopaedia|<font color='DodgerBlue'>paedia </font>]][[User talk:Mattopaedia|<font color='Olive'> <sup>Say G'Day!</sup></font>]] 07:10, 16 March 2012 (UTC)
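For anyone wanting to check the bisector reasoning above numerically, here is a small sketch. The lead angles are the ones given in the thread (I at 0°, II at 60°, III at 120°); the least-squares reconstruction is my own illustration of the idea, not a clinical method.

```python
import math

# Lead directions as given in the thread, in the clockwise-positive
# angle convention used above.
LEADS = {"I": 0.0, "II": 60.0, "III": 120.0}

def mean_axis(amplitudes):
    """Least-squares estimate of the mean electrical axis, in degrees,
    from net amplitudes in leads I, II and III.  Each lead measures the
    projection of the mean vector v onto its unit vector u (m = u . v),
    so we solve the 2x2 normal equations for v."""
    sxx = sxy = syy = bx = by = 0.0
    for name, deg in LEADS.items():
        ux, uy = math.cos(math.radians(deg)), math.sin(math.radians(deg))
        m = amplitudes[name]
        sxx += ux * ux; sxy += ux * uy; syy += uy * uy
        bx += ux * m; by += uy * m
    det = sxx * syy - sxy * sxy
    vx = (syy * bx - sxy * by) / det
    vy = (sxx * by - sxy * bx) / det
    return math.degrees(math.atan2(vy, vx))

# Leads I and III equal in magnitude and sign, II largest and positive:
# the axis lands on the bisector at +60 degrees, as argued above.
print(round(mean_axis({"I": 0.5, "II": 1.0, "III": 0.5}), 1))
```

With I and III equal and lead II dominant, the reconstruction returns exactly the +60° bisector case described in the discussion.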


== Electricity prices ==

Revision as of 07:10, 16 March 2012

Welcome to the science section of the Wikipedia reference desk.

Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


March 12

Track position of something

How can you tag something cheaply, to track its position? I mean, you could put an iPhone into something and report the position, but I need something much cheaper and simpler, for tagging some expensive tools and maybe recovering them if they get lost within a building, or even stolen. XPPaul (talk) 01:29, 12 March 2012 (UTC)[reply]

Well, you can place RFID tags that can track things fairly cheaply, per item, for a large number of items, as they pass scanners at the exits. However, trying to track things after they have left the building would be quite expensive, requiring cell phone technology at the least, and satellite technology if you need to be able to scan for items away from cell towers. StuRat (talk) 01:46, 12 March 2012 (UTC)[reply]
But can you track cheaply the location within a building of something tagged with RFIDs? Or only when they pass a threshold? XPPaul (talk) 01:56, 12 March 2012 (UTC)[reply]
They do make portable scanners, but you would then need to walk through the building until you get close enough (a few meters) to read it. Placing a network of scanners every few meters would likely be prohibitively expensive. StuRat (talk) 02:00, 12 March 2012 (UTC)[reply]
There are also key-finders, where you press a button on the base unit and the unit attached to the selected item starts beeping. Unlike RFID, these have batteries that need to be changed on each item. This is more economical for a very small number of items, although not good at preventing theft, because, unlike tiny RFID chips, these tags aren't easy to hide, so the thief would just remove them. The cost here is around $10 per item tracked initially, then maybe $5 a year each in batteries. StuRat (talk) 02:07, 12 March 2012 (UTC)[reply]
There are commercial GPS locators that sell for $150 or so. Depending on the cost of the tools, that might be a worthwhile investment. --Mr.98 (talk) 02:09, 12 March 2012 (UTC)[reply]
If you tell us more about your situation, like how many items you want to track, the average cost of the items, how large the area is where you need to track them, your annual losses from theft, etc., we can tailor our suggestions accordingly. StuRat (talk) 02:15, 12 March 2012 (UTC)[reply]
I have a non-technical solution for theft:
1) Keep tools locked up.
2) When somebody needs to use them, have them sign for them, with the understanding that if they don't return them, they will be charged for them (whether they lost them or they were stolen doesn't matter).
You might find they take better care of them, in this scenario. StuRat (talk) 02:28, 12 March 2012 (UTC)[reply]
To follow on that, it is often helpful to attach identifying information, e.g. Property of XYZ (555-5555), or similar. This helps guard against accidental losses (e.g. people forgetting where an item came from) and in addition, identifying info may discourage theft if the information is sufficiently hard to remove. Not a guarantee obviously, but simple strategies like this can be helpful in decreasing the rate at which things go missing. Dragons flight (talk) 02:58, 12 March 2012 (UTC)[reply]
The tools are mechanical and electrical, for apprentices going through vocational training. Having a tracking system could make it easier to just let them access the tools to play/learn as much as they want, maybe taking the tools outside the school. And they are not really expensive, just for the school they would be expensive, especially if they go missing. XPPaul (talk) 03:19, 12 March 2012 (UTC)[reply]
Put on RFID tags or simple barcodes, and require students to check them in and out at a scanner, just as in a modern library. Do occasional spot checks to find out if procedure is followed. It's a hassle, but then, nearly everything would be. You might want to restrict this to power tools, and just eat the loss on plain screwdrivers and hammers... --Stephan Schulz (talk) 07:49, 12 March 2012 (UTC)[reply]
A cheap way is to install mobile phone tracking on a cell phone, and attach the phone to whatever or whoever you want to track. See also tracking system and vehicle tracking system.--Shantavira|feed me 09:10, 12 March 2012 (UTC)[reply]
Register the student who checks out an item, or give each student his own set of tools, those are the two systems usually used in my experience. Or make them available at one location for all students attending but check if all is returned at the end of each lesson. Students who borrow tools for outside the classroom would still be registered. There are always risks, I remember a student who got all his gear stolen on his first day. 84.197.178.75 (talk) 12:27, 12 March 2012 (UTC)[reply]
I !vote for a "library" system. Each item has an individual identity (engrave a number on it) and students can check them in and out the same way as library books. If the cost can be justified, install an RFID system with a scanner at the exit of the tool storage room - similar to the anti-shoplifting scanner systems used in retail stores. Roger (talk) 14:37, 12 March 2012 (UTC)[reply]
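The sign-out scheme suggested in this thread can be sketched in a few lines. This is only an illustration of the idea; the class, item IDs, and borrower names below are all made up.

```python
import datetime

# A minimal sketch of the "library" sign-out system suggested above.
class ToolLedger:
    def __init__(self):
        self.checked_out = {}  # item_id -> (borrower, timestamp)
        self.log = []          # full history, useful for spot checks

    def check_out(self, item_id, borrower):
        if item_id in self.checked_out:
            holder = self.checked_out[item_id][0]
            raise ValueError(f"{item_id} is already out to {holder}")
        self.checked_out[item_id] = (borrower, datetime.datetime.now())
        self.log.append(("out", item_id, borrower))

    def check_in(self, item_id):
        borrower, _ = self.checked_out.pop(item_id)
        self.log.append(("in", item_id, borrower))

    def outstanding(self):
        """Items not yet returned -- whoever signed for them is liable."""
        return {item: who for item, (who, _) in self.checked_out.items()}

ledger = ToolLedger()
ledger.check_out("DRILL-07", "apprentice A")
ledger.check_out("METER-02", "apprentice B")
ledger.check_in("DRILL-07")
print(ledger.outstanding())  # {'METER-02': 'apprentice B'}
```

The `outstanding()` list is exactly the "charge them for it" list in StuRat's non-technical solution; the full log supports the spot checks Stephan Schulz mentions.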

Journey time with constant acceleration drive

In the article on Space travel using constant acceleration, it is my understanding that the traveler will not see himself limited by the speed of light (if I remember my first year physics, the universe, relatively moving at close to the speed of light, will appear squashed to him, effectively making his journey shorter). But is the stronger statement, that from his perspective all the relativity cancels out and that he can treat his distance calculations as Newtonian, correct?

For example, setting aside all the other issues with interstellar travel, if I wanted to fly to a star 10 light years away, I would constantly accelerate for the first 5 light years and constantly decelerate for the last 5 light years. Assuming I accelerate at 1G, could I simply solve t from 0.5 * g * t^2 = 5 (light years), and double it for total journey time? If not, what is the formula for the travel time, as measured by the traveler's watch? 41.164.7.242 (talk) 13:40, 12 March 2012 (UTC) Eon[reply]

Distance traveled (see Time dilation): d = (c²/a)·(√(1 + (at/c)²) − 1)
from that formula you can get t (duration as seen on earth) as a function of (half of) the distance, and put that into the next one:
Time in spaceship: τ = (c/a)·arsinh(at/c)
total time (with deceleration) will be double that...
See also: http://math.ucr.edu/home/baez/physics/Relativity/SR/rocket.html 84.197.178.75 (talk) 15:02, 12 March 2012 (UTC)[reply]
Oops, forgot the actual question: no, he cannot calculate his travel time the 'Newtonian' way; he would get the same result at the start as an observer. I forgot, his calculated speed would go beyond the speed of light, but a quick calculation shows that doesn't give him the right traveling time... 84.197.178.75 (talk) 18:07, 12 March 2012 (UTC)[reply]
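The quick calculation alluded to above can be sketched as follows, using the standard constant-acceleration ("relativistic rocket") formulas from the Baez FAQ linked earlier. Constants are rounded, and the figures in the comments are approximate.

```python
import math

g  = 9.81         # proper acceleration, m/s^2 (1 g)
c  = 2.998e8      # speed of light, m/s
LY = 9.461e15     # one light year, m
YR = 3.156e7      # one year, s

half = 5 * LY     # accelerate for the first 5 ly, decelerate for the last 5

# Naive Newtonian estimate proposed in the question: 0.5*g*t^2 = d
t_newton = 2 * math.sqrt(2 * half / g) / YR

# Shipboard (proper) time from the relativistic rocket equation:
#   tau = (c/g) * arccosh(1 + g*d/c^2)   per half of the trip
t_ship = 2 * (c / g) * math.acosh(1 + g * half / c**2) / YR

# Earth-frame time for comparison: t = sqrt((d/c)^2 + 2*d/g) per half
t_earth = 2 * math.sqrt((half / c) ** 2 + 2 * half / g) / YR

print(f"Newtonian estimate: {t_newton:.1f} yr")   # ~6.2 yr
print(f"Ship (proper) time: {t_ship:.1f} yr")     # ~4.9 yr
print(f"Earth time:         {t_earth:.1f} yr")    # ~11.8 yr
```

For a 10-light-year trip at 1g this gives roughly 6.2 years by the naive Newtonian formula, 4.9 years on the ship's clock, and 11.8 years on Earth's, confirming that the Newtonian shortcut does not reproduce the traveler's proper time.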

Verrucas

The article on Verrucas does not give any mention of the long-lasting effects of the wart. It skips straight from cause to diagnosis and prevention, but does not say anything about what can actually happen. Do they just stay there for years and never give you any problems? What exactly are the effects of them? Rcsprinter (talkin' to me?) 16:40, 12 March 2012 (UTC)[reply]

Unless one's immune system is deficient in some way, it will eventually clear the infection. Then, as the skin layer grows, the wart drops off and one becomes immune to getting any more for quite some time. Agreed, the article really should explain this too.--Aspro (talk) 16:57, 12 March 2012 (UTC)[reply]

Fan

what would a fan look like if it spins at the speed of light? 203.112.82.2 (talk) 18:52, 12 March 2012 (UTC)[reply]

Good question, I'd like to know that too. Ehrenfest paradox seems to give no answer... 84.197.178.75 (talk) 19:33, 12 March 2012 (UTC)[reply]
Hang on a mo. Let's break this hypothetical question down into digestible chunks. If this hypothetical fan made of super high tensile Wikimetal was able to rotate fast enough – only its very tips would be able to reach c, or as near as damn it. In this case the reflected light (which has a velocity of c) would still be reflected off of it at c, thus giving the same appearance as slower fans, i.e. a transparent disk with a discernible thickness. Now for argument's sake: if we consider a disk of pure Wikimetal, then we can do a thought experiment and consider the tips as being able to go at more than c. Some photons would not be able to touch the tips, as they would move away faster than the light could approach. Some photons would hit it at an angle and get reflected. How they then get reflected, together with those that pass by, would be dependent, in part, on mass. Since the tips of the Wikimetal fan are travelling at c or c+, the mass is, err.. extraordinarily large! Somewhere in between these two extremes suggests to me that light will be bent by the mass of the tips to form a halo around said fan. Of course this would have to be done in a high vacuum, otherwise one's lab notes would be blown all over the place. Why can't you ask the more usual questions, like why is the sky blue?--Aspro (talk) 22:13, 12 March 2012 (UTC)[reply]
because google can answer that and i go to wikipedia for intelligent answers. =) 203.112.82.2 (talk) 23:41, 12 March 2012 (UTC)[reply]
Wouldn't the blades of the fan simply appear, to an observer, to twist into a spiral? The above explanation does not make sense under special relativity. For instance, there is never a c+ instance. Plasmic Physics (talk) 23:03, 12 March 2012 (UTC)[reply]
Yes, but I didn't want to complicate a hypothetical question, as one would probably not be able to observe the spiral anyway – unless a femto strobe light or something was at hand. I was also counting on the special properties of rigidity that Wikimetal has, to allow the tips to reach c in the first place. In which case I think it would be just the effect on light that one would notice. Yes, there can never be a c+. It was all a thought experiment. I tend to find it's easier to answer questions like does my bum look big in this but I am a bit of a masochist I suppose.--Aspro (talk) 23:29, 12 March 2012 (UTC)[reply]
Ignoring the practical difficulties of this experiment, and assuming "the speed of light" means "slightly less than the speed of light", the blades would appear thinner (with respect to simultaneous measurement in the lab frame), and that's about it. The blades would not twist into a spiral unless the fan was (angularly) accelerating or decelerating. -- BenRG (talk) 23:51, 12 March 2012 (UTC)[reply]
Remember that the speed of light is not an angular quantity. Ordinarily, a point positioned at a further radial distance sweeps through a larger arc length, meaning that Lorentz contraction should be more pronounced towards the tips of the fan blades.
Wouldn't the length of the blades also contract, because they are undergoing acceleration towards the axis of the fan? Plasmic Physics (talk) 00:04, 13 March 2012 (UTC)[reply]
That's a good point. The point I wanted to get across is that if this hypothetical situation could be approached without regard to the practical limitations due to the strength of materials and the energy required to spin it up to speed, then the observed effect, or as the OP put it "what would a fan look like", the likely appearance -I think- would be the development of a halo due to Gravitational lensing. Maybe an astrophysicist can chip in as to whether a rapidly spinning black hole also shows astigmatic lensing, or whether its spheroidal shape and speed of rotation causes it to deform and cancel it out. --Aspro (talk) 00:14, 13 March 2012 (UTC)[reply]
I'm going to have to come back to this tomorrow. I can understand the fan's cord getting smaller but acceleration towards the axis requires me to think in terms of π and I can't do that at the moment. If only the OP had asked about the colour of the sky; we would have had this polished off by now!--Aspro (talk) 00:33, 13 March 2012 (UTC)[reply]
It has a cord? Here I thought it was battery powered. Plasmic Physics (talk) 00:49, 13 March 2012 (UTC)[reply]
Don't be an ignoramus. Cord as in Chord (aircraft); the OP is talking about a fan. And a thought: isn't acceleration towards the axis just Newtonian mechanics? It's not absolute (in this frame). I still don't see how it applies here.--Aspro (talk) 01:26, 13 March 2012 (UTC)[reply]
Yes, the blades would be thinner at the tips, but there's no relativistic contraction in the radial direction. If anything the blades would be longer since they're under (ridiculously huge) tension. Gravitational effects would depend on the mass of the blades, which could be made arbitrarily small if you wanted to ignore gravity (since this is unrealistic anyway). -- BenRG (talk) 01:14, 13 March 2012 (UTC)[reply]
"(ridiculously huge) tension" This is why (for this thought experiment) I have chosen to use Wikimental, which will not elongate so much as half an Yoctometer under whatever any ridiculously huge tension one subjects it to. --Aspro (talk) 02:40, 13 March 2012 (UTC)[reply]
Why would there be no Lorentz contraction in the radial direction? Plasmic Physics (talk) 01:20, 13 March 2012 (UTC)[reply]
@Aspro: I've genuinely never heard of "chord" being used in that sense, and I haven't studied aircraft in such detail. Which way is acceleration then? Special relativity is just Newtonian mechanics amended to take the limiting factor c into account; there is no reason why Newton's first law of motion should no longer apply. Plasmic Physics (talk) 01:37, 13 March 2012 (UTC)[reply]
That's OK, I don't know how to mix a Manhattan cocktail. Have you heard of the Voyager space probe 'sling shot' manoeuvres? It didn't get 'accelerated' (gain extra momentum); rather it got its angular velocity converted. Gravity assist. Therefore, the blades are not really increasing their velocity due to rotation about their axis. I haven't put that very well but that is all that comes off the top of my head right now, and my ISP has gone all fluttery on me, so my connection keeps going down and I can't keep up with the comments.--Aspro (talk) 01:56, 13 March 2012 (UTC)[reply]
I have heard of slingshot manoeuvres; however, your statement contradicts a sentence in the leading paragraph of that article. Plasmic Physics (talk) 03:03, 13 March 2012 (UTC)[reply]
The acceleration while at constant angular velocity is purely centripetal. But this doesn't imply any centripetal velocity, since "change in velocity" is permitted to be a change in the direction component of the vector rather than in the magnitude (which is indeed what centripetal acceleration is). While the fan is speeding up, there is indeed a component of the acceleration that is perpendicular to the radial. As for why there is no radial contraction, that's because there is no radial component to the velocity vector of any atom on the fan. Someguy1221 (talk) 02:57, 13 March 2012 (UTC)[reply]
Oh, so as a vector quantity, only the direction changes, not the magnitude. And only the magnitude is limited by the speed of light? Plasmic Physics (talk) 03:45, 13 March 2012 (UTC)[reply]
Yup. Someguy1221 (talk) 01:52, 14 March 2012 (UTC)[reply]
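To put numbers on the "thinner blades" point made above, here is a small sketch of how the Lorentz factor grows toward the blade tips. The 1 m blade and 0.99c tip speed are arbitrary choices for illustration.

```python
import math

c = 2.998e8  # speed of light, m/s

def gamma(radius_m, omega_rad_s):
    """Lorentz factor of a point on the blade at the given radius.
    The velocity there is purely tangential (v = omega * r), so the
    contraction is circumferential only: blades get thinner toward
    the tips but no shorter, as noted above."""
    v = omega_rad_s * radius_m
    if v >= c:
        raise ValueError("tip speed would reach or exceed c")
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Arbitrary illustration: a 1 m blade whose tip moves at 0.99c.
omega = 0.99 * c / 1.0
for r in (0.25, 0.5, 0.75, 1.0):
    print(f"r = {r:.2f} m: gamma = {gamma(r, omega):.2f}")
```

The factor stays close to 1 over the inner half of the blade and climbs steeply near the tip, which is why only the very tips look appreciably "squashed".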

Aircraft and Faraday cages

Most of us probably know that it's easy to make a cell phone call from inside an airliner while it's on the ground, taxiing before takeoff, where it's well within range of, presumably, multiple cell phone towers. But an aircraft seems to me to be a perfect Faraday cage with the exception of the windows. Are our electromagnetic signals when making a cell phone call really just the result of the signal that travels through the plane's windows? Comet Tuttle (talk) 19:35, 12 March 2012 (UTC)[reply]

Faraday cages, per the article intro, are only effective if gaps are smaller than the wavelength of the radiation in question. Modern cell frequencies are ~2GHz, which gives them wavelengths of ~15cm. That's smaller than airliner window openings, so that's enough to prevent the aircraft from operating as a Faraday cage. Whether and how much the airframe is responsible for signal attenuation, I can't say. — Lomn 22:05, 12 March 2012 (UTC)[reply]
(ec)A Faraday cage cannot have holes bigger than about the wavelength of the electromagnetic wave it's supposed to protect against. Notice how e.g. light easily comes in through the windows of an aircraft ;-). IIRC, cell phones use frequency bands somewhere between 800MHz (~40cm) and 2100MHz (~15cm). Thus, the signals can also simply come in through the plastic windows. --Stephan Schulz (talk) 22:12, 12 March 2012 (UTC)[reply]
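The wavelength figures quoted in the two answers above can be reproduced with a one-liner, λ = c/f; the ~25 cm window size in the comment is a rough assumption, not a measured value.

```python
C = 2.998e8  # speed of light, m/s

def wavelength_cm(freq_hz):
    """Free-space wavelength, lambda = c / f, in centimetres."""
    return 100.0 * C / freq_hz

# Cell bands mentioned above; compare with an airliner window opening
# of very roughly 25 cm (a ballpark assumption).
for f_mhz in (800, 1800, 2100):
    print(f"{f_mhz} MHz -> {wavelength_cm(f_mhz * 1e6):.1f} cm")
```

At ~2 GHz the wavelength (~14 cm) fits comfortably through a window opening; at 800 MHz (~37 cm) it is comparable to or larger than the opening, so more attenuation would be expected at the low end of the band.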
Yes, it is simply the windows. Astronaut Dr. Owen K. Garriott (call sign W5LFL) stuck his antenna on a space shuttle window, and with only 5 watts of transmitting power I could hear him better than if he was standing right next to me speaking into a tin can telephone.--Aspro (talk) 22:48, 12 March 2012 (UTC) .[reply]
Excellent, thank you. Comet Tuttle (talk) 23:08, 12 March 2012 (UTC)[reply]

Gut flora in poop

My question is about the undignified subject of bacteria that live in feces. How long does gut flora live in feces that has already been pooped out? How much of a difference does it make when the feces is immersed in water as opposed to drying out on the ground? (I'll preemptively stop one or two sniggering comments by suggesting we consider this to be gut flora of a dog rather than of a human.) Comet Tuttle (talk) 19:46, 12 March 2012 (UTC)[reply]

One of the first things to happen will be that the CO2 will diffuse out and so the pH will increase. But much of the faeces is made up of dead bacteria, and so lysis of those cells will create ammonia. This complicates trying to guesstimate the actual conditions, as this chemical is not always a proton donor, but I guess that it will also increase the pH. Becoming more alkaline and with more oxygen diffusing in, the bacterial and fungal gut flora will then find their 'poop' environment becomes less than hospitable and some will turn into dormant spores. On dry ground I think this takes just a few hours. In wet conditions it depends a lot on the pH and things. Even mildly acidic conditions don't favour Asiatic Cholera and those types of bacteria. Therefore, I don't think any definitive answer can be given – at least by me - as it depends on temperature, moisture, oxygen availability and pH.--Aspro (talk) 20:08, 12 March 2012 (UTC)[reply]
One of the bacteria that is prevalent in poo (and of particular interest because it can cause food poisoning) is E. coli; maybe that article might help a little. Specifically this paragraph: Cells are able to survive outside the body for a limited amount of time, which makes them ideal indicator organisms to test environmental samples for fecal contamination.[8][9] There is, however, a growing body of research that has examined environmentally persistent E. coli which can survive for extended periods of time outside of the host. Unfortunately, "limited time" and "extended periods" are not very descriptive. Vespine (talk) 03:17, 13 March 2012 (UTC)[reply]
I probably shouldn't have used "prevalent", since it's only 0.1% of the bacteria which make up 60% of the dry weight of poo, from the gut flora article. Just call it "present". Vespine (talk) 03:21, 13 March 2012 (UTC)[reply]
Ok, last post, promise:) The reference for the "extended period" claim is available to read online for free. It might have more specific details if you're interested in reading it. Vespine (talk) 03:27, 13 March 2012 (UTC)[reply]
E. coli is a good example of a bacterium that can tolerate a wide range of conditions [1], both temperature and pH. So, now wash your hands. --Aspro (talk) 03:36, 13 March 2012 (UTC)[reply]

laws on cell towers

Are there any laws in the United Kingdom about the building of cell towers near schools? 72.129.183.133 (talk) 23:06, 12 March 2012 (UTC)[reply]

Cell towers are called "mobile phone masts" in the UK. This page says that in 2001, the Government issued guidelines for Local planning authorities called "...'Planning Policy Guidance 8: Telecommunications'," which "reinforced public consultation arrangements for small masts, increased the prior approval period to 56 days, and insisted that school governors must be consulted on any proposals for masts on or near schools or colleges." The following year there was "A Code of Best Practice on Mobile Phone Network Development". So although it's unlikely that planning permission would be granted close to a school, it's not actually prohibited. See this current case about permission being granted for a mast next to a church which hosts a pre-school kindergarten. Alansplodge (talk) 00:43, 13 March 2012 (UTC)[reply]
Why is there a problem putting transmission masts near schools? If it is because there is a "radiation" issue, can you take the time to find a reputable citation? Richard Avery (talk) 23:04, 13 March 2012 (UTC)[reply]
I know of a number of schools here in South Africa that have cell towers on their property. The rental income is put to good use. Roger (talk) 06:39, 14 March 2012 (UTC)[reply]
There is a widely PERCEIVED danger from mobile phone radiation following some early research. For example Mobile phones 'alter human DNA' and 2-Year Study Finds Possible Cell Phone Danger To Brain. We have an article Mobile phone radiation and health with a section on Health hazards of base stations. Here is the UK Government's advice about masts, health issues and planning. Since local authorities are responsible for planning permission, and since they are also elected by people who perceive a risk from these structures, most play it safe and won't site masts near schools. For example, Phone firms appeal over mast rejection. But not all - see the press report linked above. Alansplodge (talk) 16:58, 14 March 2012 (UTC)[reply]
This Fact Sheet on Cell Tower Health Studies lists research that suggests that there IS a health risk. The main one seems to be The Naila Study, Germany (November 2004). Cancer Research UK's page: Mobile phones and cancer, highlights a major Danish study on mobile phone use which said: "We found no evidence for an association between tumor risk and cellular telephone use among either short-term or long-term users. Moreover, the narrow confidence intervals provide evidence that any large association of risk of cancer and cellular telephone use can be excluded.". However, this research was about phone use and not proximity to a phone mast. Alansplodge (talk) 17:51, 14 March 2012 (UTC)[reply]
That fact sheet is from a highly POV source. A couple of the articles cited are clearly nonsense. The others cite things like "increased fatigue" and "nausea" in people living less than 100m from a tower, but not one of them seems to compensate for the fact that almost everyone living that close to a tower is living in a dense urban area, and virtually everyone not living that close is out in the suburbs or other more rural areas. It's well understood that the poor air quality of a big city can damage your health.
The first study has 14% of participants living within 10m of a transmitter. I have to assume that most of those were lower-power urban transmitters. Probably transmitting at less power than your average handset. Now, notice the journal that it's published in. "Electromagnetic Biology and Medicine Journal" Someone correct me if I'm wrong but this seems to be a fringe journal entirely dedicated to this sort of scare story.APL (talk) 09:32, 15 March 2012 (UTC)[reply]

Dominant solar system plane inside a galaxy?

Hi

I see that the galaxies are rather 'flat' – not entirely, so that all solar systems share the same position in height (z-axis), but with a clear predominant tendency towards a plane. Now, is the tendency for solar systems inside the galaxy to share this plane? I.e. that we look at the Milky Way in each solar system as though it is level with the orbital patterns of most other planets inside it.

If you could help me with this I would be much obliged!

83.108.140.82 (talk) 23:16, 12 March 2012 (UTC)[reply]

There might be a very weak association, but for the most part planetary systems are believed to be randomly oriented with respect to the galactic plane. Our solar system makes a 60 degree angle to the plane of the galaxy, if I recall correctly. Dragons flight (talk) 23:23, 12 March 2012 (UTC)[reply]
For reference, you might read Protoplanetary disc and Formation and evolution of the Solar System. It seems the 'local' forces of the protoplanetary disc outweigh the overall 'galactic' influence. Vespine (talk) 00:16, 13 March 2012 (UTC)[reply]
And for each planet with rings and/or moons, those orbits aren't necessarily aligned with either the solar system plane or galactic plane. StuRat (talk) 03:44, 13 March 2012 (UTC)[reply]
They are unless something weird is going on. Moons that formed with the planet, rather than being captured later, will orbit in roughly the equatorial plane of the planet, which will normally be roughly the same as the orbital plane (all the angular momentum comes from the same source - the net random movements in the cloud of dust and gas that collapsed to form the solar system - so it all has roughly the same axis orientation). You get exceptions like Uranus, where some kind of close interaction or collision with another body moved its axis considerably. You also get moons that were captured later, which can have pretty much any orbit. The norm, though, is for everything in a solar system to rotate and revolve in roughly the same plane. --Tango (talk) 12:30, 13 March 2012 (UTC)[reply]
However, over the course of 4.3 billion years, nearly every object in the solar system has had ample opportunity to be so disturbed. Axial_tilt#Axial_tilt_of_selected_objects_in_the_solar_system shows that most objects in the solar system do not align that way, the one exception being Mercury (planet). This suggests, to me, that Mercury is close enough to the Sun for some gravitational effect similar to tidal locking to have had a major effect. Surprisingly, even the Sun itself has a tilt of 7.25 degrees, so there must have been an interaction with quite a large object (larger than any current planet) at some time in its history. StuRat (talk) 16:01, 13 March 2012 (UTC)[reply]
Though individually quite negligible, I wonder if the small stochastic impulses from coronal mass ejections might be enough to introduce a small tilt to the sun after 4.5 billion years. Dragons flight (talk) 16:22, 13 March 2012 (UTC)[reply]
Wouldn't they tend to balance out ? StuRat (talk) 18:35, 13 March 2012 (UTC)[reply]
That's like saying, if you take a random walk shouldn't you stay near zero? In a certain sense it is true, but the longer it goes on the higher your odds of finding the walker far from the origin. Dragons flight (talk) 19:10, 13 March 2012 (UTC)[reply]
Having run some numbers, the impulse per event seems to be too small to appreciably affect the sun's orientation, even after billions of years. Dragons flight (talk) 02:23, 14 March 2012 (UTC)[reply]
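The random-walk point invoked above can be illustrated with a quick simulation: the mean displacement stays near zero, but the RMS (typical) displacement grows like the square root of the number of steps. The step counts and walker counts below are arbitrary.

```python
import random

def rms_displacement(n_steps, n_walkers=2000, seed=1):
    """RMS distance from the origin after n_steps unbiased +/-1 steps,
    averaged over n_walkers independent walkers."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        pos = sum(rng.choice((-1, 1)) for _ in range(n_steps))
        total += pos * pos
    return (total / n_walkers) ** 0.5

# The RMS spread grows like sqrt(n): quadrupling the number of steps
# roughly doubles the typical distance from the origin.
for n in (100, 400, 1600):
    print(n, round(rms_displacement(n)))  # close to 10, 20, 40
```

So "tending to balance out" is true of the average, but an individual walker (or the Sun's axis, in this analogy) still drifts ever further with time.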

Why does the Sun have a 7.25 degree tilt ?

OK, I'm going to break this off as a new question. As per the discussion above, what caused the Sun to have this obliquity/axial tilt, relative to the ecliptic plane ? StuRat (talk) 05:39, 14 March 2012 (UTC)[reply]

This particular one? Probably not a lot. But if it was any different, would you be less likely to ask the question? I'm no astrophysicist (obviously), but it seems to me that even if on average the axis of rotation of a star tends to align with the axis of rotation of the planets around it, being seven degrees or so off doesn't seem particularly implausible - at least without data to the contrary. If the (local) universe had been perfectly uniform in all directions before the Sun formed, it wouldn't have. It wasn't, so it did. As to why the universe isn't perfectly uniform in all directions, I've no idea (I'm no... well, I've said that already), but if it was, you wouldn't be here to ask the question. AndyTheGrump (talk) 06:33, 14 March 2012 (UTC)[reply]
Sorry, but that's not much of an answer. The Sun would presumably have to have had zero tilt relative to the planets when the solar system was formed, so what caused it to change ? StuRat (talk) 06:38, 14 March 2012 (UTC)[reply]
Why do you presume that? AndyTheGrump (talk) 07:09, 14 March 2012 (UTC)[reply]
All rotation of the Sun and planets and moons must have come from the rotation of the proto-planetary disk. The planets and moons could fairly easily have been disturbed since then by gravitational interactions with each other and other passing objects, explaining their tilts, but it would take a massive object to cause the Sun to tilt. StuRat (talk) 07:15, 14 March 2012 (UTC)[reply]
If the planets all started off in the same plane, gravitational interactions with each other couldn't move them out of this plane - which leaves your 'passing object', presumably quite massive. Maybe it changed the tilt of the sun as it passed, maybe it only changed the tilt of the planets (though maybe it hit the Sun?). Once the planets have been perturbed out of their original plane, gravitational interactions will tend to realign them again - but not necessarily in the same plane as they were originally. AndyTheGrump (talk) 15:37, 14 March 2012 (UTC)[reply]
If everything was exactly in the same plane, that's true, but whatever tiny variations exist can be greatly magnified over time. Pluto, for example, was thought to have been in the same plane as the rest, until many near misses with Neptune knocked it out of the plane. StuRat (talk) 22:49, 14 March 2012 (UTC)[reply]
Not an answer, but I did make the observation that the rotational angular momentum of the sun is approximately 3% of the total angular momentum of the solar system. About 60% of the total is the orbital angular momentum of Jupiter. I suppose one possibility is that the very early solar system could have contained some extra protoplanets. If they were scattered out of the disk in the right way, the remaining angular momentum for the disk might have been shifted by the right amount relative to the sun. Dragons flight (talk) 22:07, 14 March 2012 (UTC)[reply]

Have scientists proposed any theories as to what sized objects might have perturbed the Sun, and when ? StuRat (talk) 22:30, 14 March 2012 (UTC)[reply]

Why are you assuming that it is the Sun that has been perturbed, rather than the planets? AndyTheGrump (talk) 22:51, 14 March 2012 (UTC)[reply]
Because, as noted above by Dragon's Flight, it would take far more to move all the planets, especially Jupiter, into an orbit 7.25 degrees out of the original plane, since the Sun's angular momentum is only some 3% of the total. (Even though the mass of the Sun is far more, the distance from the surface to the center of rotation is far less.) StuRat (talk) 23:06, 14 March 2012 (UTC)[reply]
Actually, if you consider the torque required, it would seem much easier to perturb the planets since the effective lever arm would scale as the orbital radius, while for the sun you only have the solar radius to work with (and really not even that, due to the high symmetry). If it is an outside influence, then my bet would be on something skewing the planetary disk. Even so, it wouldn't be that easy. Dragons flight (talk) 00:12, 15 March 2012 (UTC)[reply]
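The few-percent figure quoted above for the Sun's share of the angular momentum can be roughly checked with approximate textbook values (the solar moment-of-inertia factor of ~0.07 is an assumed value; the exact percentage depends on it and on the rotation profile, so this is an order-of-magnitude sketch):

```python
import math

# Sun's spin angular momentum: L = k * M * R^2 * omega, with k ~ 0.07
# because the Sun is centrally condensed (assumption, not a measured input).
M_sun, R_sun = 1.989e30, 6.96e8          # kg, m
P_sun = 25.4 * 86400                     # mean rotation period, s
L_sun = 0.07 * M_sun * R_sun**2 * 2 * math.pi / P_sun

# Jupiter's orbital angular momentum: L = m * a * v (circular-orbit sketch).
M_jup, a_jup, v_jup = 1.898e27, 7.78e11, 1.307e4   # kg, m, m/s
L_jup = M_jup * a_jup * v_jup

print(f"Sun spin:      {L_sun:.2e} kg m^2/s")
print(f"Jupiter orbit: {L_jup:.2e} kg m^2/s")
print(f"ratio Sun/Jupiter ~ {L_sun / L_jup:.1%}")
```

Jupiter's orbit alone carries roughly a hundred times the Sun's spin angular momentum, consistent with the point that reorienting the planetary disk involves far more angular momentum than reorienting the Sun.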
But a single object slamming into the Sun could do it (a star at low speed or a gas giant at high speed), making that a simpler scenario than objects slamming into each and every planet. StuRat (talk) 00:44, 15 March 2012 (UTC)[reply]
I don't think anyone is suggesting that objects were hitting each planet - I'd assumed we were talking about a massive object perturbing them with its gravitational field as it passed by. AndyTheGrump (talk) 00:48, 15 March 2012 (UTC)[reply]
I wouldn't expect it to perturb each item in the same way, you'd end up with a rather chaotic system, methinks. StuRat (talk) 00:55, 15 March 2012 (UTC)[reply]
On long timescales the orbits of the planets interact with each other. I would guess that there is a tendency for the planes of the various orbits of the major planets to drift towards each other if perturbed. If there is such a tendency then a large perturbation for one planet could eventually cause the plane of all the planets to shift. Dragons flight (talk) 01:58, 15 March 2012 (UTC)[reply]
I don't think that's right. To bring each other into the same plane the objects actually need to strike each other, as they do in a proto-planetary dust cloud or a ring around a planet. Once you have large objects with empty space in-between, gravitational perturbations are just as likely to move objects farther out of the plane as back into it. Just look at the orbits of the moons of Jupiter to get an idea of how chaotic they are: [2]. Saturn's moons are almost as bad, despite the rings: [3]. StuRat (talk) 02:39, 15 March 2012 (UTC)[reply]
For Jupiter's moons, >99.99% of the mass is in the four large moons, which all have inclination less than 1 degree (as do all four of the moons that orbit inside of the radius of the Galilean four). For Saturn, the 13 largest moons each have inclination less than 7.5 degrees (11 of 13 are less than 2 degrees), though one of the 13 is retrograde. I still think settling is plausible, though the objects involved would have to be large enough to have appreciable mutual interactions. The irregular moons of Saturn and Jupiter have such a tiny fraction of the total mass, that they probably can't do much to influence the other orbits. Dragons flight (talk) 16:58, 15 March 2012 (UTC)[reply]
There does seem to be a pattern, both with objects orbiting the Sun and the planets, that the innermost objects orbit in a plane, while the outer objects orbit at random. In the solar system, the Oort Cloud is where most of the randomly aligned orbits fall. StuRat (talk) 19:04, 15 March 2012 (UTC)[reply]
Note that the article on the ecliptic gives a table of how inclined the planets are relative to the Sun - Earth is the most inclined of all the orbits. The gas giants and Mars are all at 5.5-6.5 degrees, the inner two planets at about 3.5 degrees. This is not my field and I don't know the answer to this question, but I'm thinking that because the outer planets fall into such a close range the protoplanetary disk must have been at about 5.5 degrees relative to the Sun, and the Sun is what was twisted for some reason. Did the same thing twist Mercury and Venus, or were they somehow entrained into the same plane as the Sun afterward? I have no idea. Earth should have an excuse though - it was whacked hard enough to make the Moon, after all. Wnt (talk) 23:20, 14 March 2012 (UTC)[reply]
To clarify, Ecliptic#Planets lists the inclination of the planet's orbits about the Sun. If you include the minor planets, as noted in the article, you get some much higher values. And the angle of rotation of each planet about its own axis is also wildly variable, especially for Venus and Uranus, and, if we include dwarf planets, Pluto: Axial_tilt#Axial_tilt_of_selected_objects_in_the_solar_system. StuRat (talk) 23:36, 14 March 2012 (UTC)[reply]

On the subject of the alignment of planetary orbital planes

I assume everyone here has noticed Jupiter and Venus playing footsie in the evening sky. [4]. Even with misty cloud cover, and much of the London suburbs polluting the night sky darkness, they were unmissable earlier tonight. AndyTheGrump (talk) 03:46, 15 March 2012 (UTC)[reply]


March 13

penicillin

is it true that penicillin by intramuscular injection is less likely to cause an allergic reaction than oral penicillin? — Preceding unsigned comment added by 64.38.197.218 (talk) 03:54, 13 March 2012 (UTC)[reply]

There is nothing in the current Penicillin article to support or refute the claim. I unfortunately don't have the skill to do a search of the relevant literature. Roger (talk) 14:45, 13 March 2012 (UTC)[reply]
"Topical application is the most likely to elicit sensitization followed by parenteral administration, especially intramuscular injections, whereas the oral route is the safest probably because larger doses are delivered parenterally or intravenously over a shorter period of time. In general, drugs used continuously for extended periods of time have less chance to trigger an adverse reaction but multiple intermittent therapy courses increase the risk..."[5] Wnt (talk) 23:30, 14 March 2012 (UTC)[reply]

Aerodynamics

If an aircraft wing works by reducing pressure on the top surface compared to the bottom surface, then how come planes can fly upside down and still have lift from the wings?--92.25.105.29 (talk) 15:36, 13 March 2012 (UTC)[reply]

Most planes can't. Those are special stunt planes with different wing designs. Not quite sure how they work, but I suspect they allow the plane to change the angle of attack on the wings, so they strike the air at a different angle when upside-down. There is the less efficient method of deriving lift by merely ramming the air at an angle, and perhaps this is what they use. StuRat (talk) 15:51, 13 March 2012 (UTC)[reply]
No. That is at best horribly misleading. If you look at the plot of lift vs angle of attack for any normal wing section it is reasonably linear around 0, therefore the airfoil will generate lift when inverted. Most a/c are not stressed (at the design stage) to fly inverted, so the manual says don't do it. But -1 g is a fairly small departure from the design loads and if you gently flew into it (preferably via half a barrel roll, as that is the gentlest way in) I would expect almost any aircraft to be able to fly inverted, briefly. The reason I say briefly is that the airfoil will be generating a lot of drag, which leads into the more detailed response below. Greglocock (talk) 01:03, 15 March 2012 (UTC)[reply]
What part is misleading ? Your "more drag" is basically the same as my "less efficient". StuRat (talk) 01:35, 15 March 2012 (UTC)[reply]
Here is the Straight Dope explanation. Basically, the pilot increases the angle of attack by keeping the nose up and the tail down (relative to the ground), and it needs a light but strong plane with powerful engines. Gandalf61 (talk) 15:57, 13 March 2012 (UTC)[reply]
Indeed. see Angle of attack. --Tagishsimon (talk) 15:58, 13 March 2012 (UTC)[reply]
(edit conflict) The lift created by airplane wings is not entirely due to one side of the wing being shaped differently from the other. Depending on who you ask, this may be a very minor component of the lift generated - some planes do have symmetrical wings, and manage to fly just fine. The oft-cited "Bernoulli’s principle" explanation for lift is simplistic, and may be misleading for many people. Other people prefer the "Newton explanation", where lift is explained in terms of deflecting air, rather than in terms of a pressure difference. Neither explanation is "wrong", but they can both be misleading, if applied incorrectly. At the end of the day, if you really want to understand lift, you need to understand the Navier–Stokes equations. They say (only joking a little) that even most people with PhDs in fluid dynamics don't really understand the Navier-Stokes equations. Anyways, take a look at Lift (force) for some further treatment of the subject. For a more humorous take, see [6] Buddy431 (talk) 16:03, 13 March 2012 (UTC)[reply]
This is a complex question. I'd suggest that a good place to start would be our Airfoil article. Both the shape of the wing and the angle of relative airflow are factors in determining in which direction the 'lift' force is applied. Though some stunt aircraft may have symmetrical airfoils, not all do, and the capacity to maintain inverted flight may be more a question of having an engine that will work properly under negative G. AndyTheGrump (talk) 16:08, 13 March 2012 (UTC)[reply]
Also, I'd recommend playing with NASA's Foilsim simulation to get a feel for what is actually going on [7]. AndyTheGrump (talk) 16:13, 13 March 2012 (UTC)[reply]
The fundamental answer is that wings don't work by reducing pressure due to the profile shape. In fact, wings on stunt planes meant for frequent upside down flying are often completely symmetrical. Flight is really achieved by angle of attack. You can make a plane fly with a flat piece of material for a wing. Air molecules hit the underside of the wing, are deflected downwards, and due to force and counterforce, the wing pushes up. For this and many more things you think you know check out List of common misconceptions. 88.112.59.31 (talk) 16:37, 13 March 2012 (UTC)[reply]
That may be a 'fundamental answer' - but only in as much as it is fundamentally wrong. I suggest that you also should read the articles linked above. AndyTheGrump (talk) 16:49, 13 March 2012 (UTC)[reply]
A side issue to the aerodynamics, but flyers have told me that the plane needs some special feature in the carb to allow upside-down flight. Fuel injection might help with that, but there could still be issues with the fuel inlet in the fuel tank not finding the fuel when the plane is upside down, whatever its angle of attack. Edison (talk) 18:33, 13 March 2012 (UTC)[reply]
I spent many years academically studying fluid motion, but it was not until I first controlled a sideslip in a Citabria that I felt like I "understood" aerodynamic lift. In reality, if you want an intuition of how an aircraft flies, you may gain more by spending time on a sailboat or flying a kite, and observing how flowing air interacts with rigid and semi-rigid surfaces. The mathematical formulations of aerodynamics are great if you are performance-tuning an airfoil, but they don't do a great job elucidating the conceptual underpinnings of modern aviation aerodynamics. It may help the OP to state, for the record, that even a Citabria does not fly very well when the wings are upside down (though in this configuration, its name reads as a slight variation on "acrobatic"). Technically stated, its drag coefficient and lift-to-drag ratio are much worse in the inverted configuration. Anyway, the aircraft I am training on is rated for only a few moments of inverted flight; (it is carbureted, but even worse, its fuel system is gravity-fed; so in inverted configuration, fuel cannot flow "uphill" to the engine). The more advanced Super Dec has a fuel pump and injectors, and can sustain inverted flight for several minutes (though, few pilots can sustain this). Our club requires pilots to wear parachutes when flying aerobatic maneuvers; and while this seems "cool!" at first, it actually becomes quite worrying when you consider the implications. Nimur (talk) 19:01, 13 March 2012 (UTC)[reply]
And, for the record, neither the 7ECA nor the Super Dec have symmetrical airfoils. They are usually outfitted with the NACA 4412 or 1412 airfoil design. You can even graphically experiment with the airflow on these airfoils using Wolfram Alpha. In fact, a perfectly symmetric wing has poor stalling characteristics. Nimur (talk) 19:13, 13 March 2012 (UTC)[reply]
I wonder if anyone has considered a system where the cockpit rotates so the pilot is always "up" (according to the current G-forces, not the horizon). Just a bottom-weighted cylindrical cockpit floating in a fluid in a slightly larger cylinder would do it, using all fly-by-wire technology (or fly-by-wireless, if there is such a term). This would fix the problem of blood rushing to the pilot's head, and the fluid might also tend to damp out noise and vibration. StuRat (talk) 19:08, 13 March 2012 (UTC)[reply]
Well, it appears engineers have implemented that idea into cereal bowls, so there's that :) 20.137.18.53 (talk) 20:25, 13 March 2012 (UTC)[reply]

An essential read is this paper about theoretical work done by NASA scientist Mary Shafer and others. Roger (talk) 06:49, 14 March 2012 (UTC)[reply]

We talk about the top surface and the bottom surface of a wing but these are just simple, convenient expressions that serve the purpose most of the time. The top and bottom surfaces are most definitely not selected relative to the fuselage, or relative to the Earth's surface. When an aircraft is flying upside down it is necessary for the lift to continue to support the weight of the aircraft so the surface that is normally the top surface becomes the bottom surface, and vice versa. By using the pitch control (or joystick or control column) the pilot can determine the angle of attack on the wing and force the aircraft to fly right-way-up or inverted. Dolphin (t) 21:44, 15 March 2012 (UTC)[reply]

Deflecting a bullet with a laser

It seems the energy of a bullet fired from an AK47 is 1.4 kilojoules. If you were to fire a 1.4 kilojoule laser at that bullet, would that be enough to deflect it? ScienceApe (talk) 18:54, 13 March 2012 (UTC)[reply]

I suppose a powerful enough laser could vaporize or break up the bullet. But, if you can manage to hit a bullet in flight with a laser, you can probably just hit it with another bullet, too. StuRat (talk) 19:01, 13 March 2012 (UTC)[reply]
It strikes me (punny) that there would be advantages to using lasers to hit bullets as opposed to other bullets. The laser's speed would solve a lot of practical problems involved in intercepting a bullet, no? --Mr.98 (talk) 19:29, 13 March 2012 (UTC)[reply]
Aren't lasers specified by wattage (power) not joules (energy)? Just a nitpick. If you want to suppose that the laser beam imparts K joules to the bullet in 1/nth of a second, then your laser would have to be a Kn watt laser. As always, I expect to be corrected by someone smarter. 20.137.18.53 (talk) 19:17, 13 March 2012 (UTC)[reply]
Well power is just energy per second. If it fires a single pulse you just measure it in joules. A 1.4 kilowatt laser outputs 1.4 kilojoules per second. ScienceApe (talk) 19:20, 13 March 2012 (UTC)[reply]
Oh, if it were only that simple! Unfortunately, that is not how laser people usually report their laser power! You're talking about average power (either DC average, or RMS average); but lasers report peak power, because it makes them seem more powerful! I'll find a reliable source to cite when I get back from lunch; but you can check vendor websites for details. Nimur (talk) 19:56, 13 March 2012 (UTC)[reply]
Some more details at pulsed laser in our article. For the advanced reader, Measuring Beam Quality, from an emeritus Stanford professor of lasers. Awesome website overall, including an archive of "History of Lasers" presentations, loads of animations, online PDF laser textbooks... Nimur (talk) 00:05, 14 March 2012 (UTC)[reply]
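The peak-versus-average distinction described above comes down to a short unit exercise (the pulse energy, pulse width, and repetition rate below are hypothetical illustration values, not the specs of any real laser):

```python
# A pulsed laser concentrates its energy into very short bursts, so its
# peak power can exceed its average power by many orders of magnitude.
pulse_energy = 1.4e3     # J delivered in one pulse (hypothetical)
pulse_width  = 10e-9     # s, a 10 ns pulse (hypothetical)
rep_rate     = 10.0      # pulses per second (hypothetical)

peak_power    = pulse_energy / pulse_width  # power during the pulse itself
average_power = pulse_energy * rep_rate     # power averaged over a second

print(f"peak {peak_power:.1e} W, average {average_power:.1e} W")
```

With these numbers the peak is 140 GW while the average is only 14 kW, which is why quoting peak power makes a laser sound far more powerful.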
Is your hypothetical laser tracking the bullet for 1 second? The heat capacity of the bullet's material is probably relevant. 20.137.18.53 (talk) 19:49, 13 March 2012 (UTC)[reply]
We're really talking two different kinds of energy: radiant energy versus inertia. The laser has the same energy, but in the form of light and heat; it has very little force per F=ma. What little deflection you could get would mostly be through ionization, or the heat deforming the bullet and altering its aerodynamics. Laser propulsion hasn't really caught on for just that reason: you'll eventually make an object move in a vacuum, but it takes forever. Doing so in an atmosphere is a nightmare. — The Hand That Feeds You:Bite 19:59, 13 March 2012 (UTC)[reply]
Inertia is a property of matter, not a form of energy. I think what you mean to say is that laboratory-grade laser light cannot impart a large amount of force or momentum to a macroscopic object. In theory, laser light is capable of imparting both force and momentum to an object, thereby changing the target's kinetic energy and trajectory; but this would require a laser beam of much greater intensity than we are able to build using current technology.
A laser's energy is not usually described as being "in the form of light and heat;" it is a highly collimated, highly monochromatic, propagating electromagnetic field disturbance. This energy can be transferred to matter through many physical processes: the photoelectric effect; dielectric heating; joule heating; and so on. The laser light also is able to exert radiation pressure, and therefore exert force on the target object. If you work out the math for any of these processes, you'll see that the effects of each of these energy and momentum transfers are very small when you plug in reasonable values for laser light intensity - even for extraordinarily powerful lasers. Nimur (talk) 23:34, 13 March 2012 (UTC)[reply]
The laser pulse may have a comparable amount of energy to the bullet, but momentum has to be conserved in a collision, too, and light has very little momentum. So providing a deflection directly via an ordinary reflection wouldn't work too well. What might have a chance of working under the right circumstances would be using laser ablation to evaporate a bit of the bullet. The momentum carried away by the ablated material would then give the bullet some momentum in the opposite direction. That's the basic idea used in some directed-energy weapons, and in a laser broom. Red Act (talk) 20:20, 13 March 2012 (UTC)[reply]
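The point about light carrying little momentum is easy to quantify: a fully absorbed pulse delivers momentum p = E/c (twice that if perfectly reflected). A sketch with rough values (the bullet mass and velocity are approximate assumptions for a typical AK-47 round):

```python
c = 3.0e8                 # m/s, speed of light (rounded)
E_pulse = 1.4e3           # J, the pulse energy from the question
p_light = E_pulse / c     # kg m/s if fully absorbed; doubled if reflected

m_bullet, v_bullet = 7.9e-3, 715.0  # kg and m/s, rough AK-47 ball round
p_bullet = m_bullet * v_bullet

print(f"photon momentum {p_light:.1e} kg m/s, bullet momentum {p_bullet:.1f} kg m/s")
print(f"ratio ~ {p_light / p_bullet:.0e}")  # order 1e-6: the direct push is negligible
```

Even though the pulse energy matches the bullet's kinetic energy, the momentum transfer is about a millionth of the bullet's momentum, which is why ablation (throwing off vaporised material) is the only plausible deflection mechanism.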
A laser certainly can apply a force to the bullet (see light pressure), but your problem is hitting it with enough light fast enough. As others have pointed out, lasers are measured in terms of power, not energy. As you said, a 1.4kW laser will emit 1.4kJ in a second, but a second is far too long - the bullet will have already hit you by then. The muzzle velocity of an AK47 is (according to the infobox in that article), 715m/s. So, it will have already travelled 715m by the time your laser has emitted 1.4kJ. I'm not sure of the best way to calculate it, but the power of laser you would need to actually deflect a bullet a useful amount would be enormous. You would also have a real challenge aiming it and tracking the bullet as it moves. You would be better off using the laser to blind the gunman - a 1.4kW laser to the eye would blind someone very quickly. --Tango (talk) 00:23, 14 March 2012 (UTC)[reply]
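The dwell-time objection can be put in numbers (the 50 m engagement range is an assumed value for illustration; absorption losses are ignored, so this is an upper bound):

```python
power = 1.4e3         # W, the continuous laser from the question
v = 715.0             # m/s, AK-47 muzzle velocity (from the article infobox)
distance = 50.0       # m, assumed engagement range

dwell = distance / v              # time the beam could stay on the bullet
energy_on_target = power * dwell  # best-case energy delivered in that time

print(f"{dwell:.3f} s of flight, {energy_on_target:.0f} J delivered at best")
```

Roughly 0.07 s of flight time means under 100 J reaches the bullet, far short of the kilojoules needed to damage it, even before reflection and tracking losses.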

If you change "bullets" to "artillery shells", and "deflect" to "heat until it explodes", then that's very doable. See the Tactical High Energy Laser. But in addition to all the technical difficulties pointed out above, I think Tango hit the nail on the head. If you have a system that can track and zap a moving bullet over short distances with an uberlaser, you are better off just building a killbot that aims for the gunman. Someguy1221 (talk) 00:30, 14 March 2012 (UTC)[reply]

I'm guessing that the strongest pulsed laser would have no difficulty dealing with a simple bullet:
Highest-energy laser pulse: 150 thousand joules, in a single 10-nsec pulse. The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory in Livermore, California, achieved this result in 2005. The energy contained in the 150 kJ pulse is equivalent to a 1-ton automobile traveling at about 60 miles per hour. The bullet only travels 7 microns during that pulse. 84.197.178.75 (talk) 12:37, 14 March 2012 (UTC)[reply]
How massive a bullet or projectile could be melted or vaporized by such a pulse? A difficulty would be how much of the energy would be absorbed by the projectile. Reagan's "star wars" notion would have shot down ballistic missiles with x-ray lasers. It was claimed that a device the size of an office desk could "shoot down the entire Soviet land-based missile force." Later the US shifted to a different quixotic plan to hit warheads with solid objects. The kinetic interceptors occasionally worked in carefully staged tests when no countermeasures were employed (such as decoys). The Tactical High Energy Laser, Boeing YAL-1 and Advanced Tactical Laser articles seem relevant to the question. THEL supposedly can destroy mortar rounds and artillery shells, but the project is on hold for some reason, such as being expensive and very big and heavy, requiring several trailers full of equipment. The airborne laser weapons also supposedly suffered from a poor weight-to-output ratio. The other programs also seem to be on hold (though if I were really developing such a weapon I would not necessarily advertise the fact). Weight and the need for large power input would not seem to be much of a problem for ships, compared to traditional large gun turrets and magazines. Edison (talk) 16:16, 14 March 2012 (UTC)[reply]
The whole point of the doomsday machine...is lost if you keep it a secret! Airborne Laser is still in development (though the 747 is grounded); you can read about it and other projects of the Air Force Research Laboratory's Directed Energy Directorate at their main website, http://www.kirtland.af.mil/afrl_de - laser enthusiasts on Wikipedia may recognize their laboratory's homepage splash picture, which is also featured on our main article, Laser. Nimur (talk) 18:20, 14 March 2012 (UTC)[reply]
Let's assume the bullet is made of iron. Iron has a heat capacity of 0.450 J/g/K (at 25C, anyway - let's assume it's constant in order to make the maths easy, it probably won't vary much). The boiling point of iron is 2862C and let's say it is at 25C to start with, so we need to heat it up by 2837 degrees. That would take 1277 J/g. In addition, you need to allow for the heat of fusion (248 J/g) and the heat of vaporisation (6069 J/g). In total, then, it takes about 7,600 J to vaporise 1g of iron. 150kJ would be enough to vaporise about 20g. That's assuming all the energy is absorbed, which it wouldn't be. So, somewhere between 0g (if the laser is completely reflected) and 20g... a small bullet might get vaporised, then. An artillery shell wouldn't be. --Tango (talk) 02:40, 15 March 2012 (UTC)[reply]
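The arithmetic above checks out; here is a short script reproducing it, using the same thermodynamic values quoted in the comment:

```python
# Energy to take iron from 25 C through melting to complete vaporisation.
c_p   = 0.450       # J/g/K, specific heat of iron (assumed constant)
dT    = 2862 - 25   # K, heating from room temperature to the boiling point
L_fus = 248.0       # J/g, latent heat of fusion
L_vap = 6069.0      # J/g, latent heat of vaporisation

per_gram = c_p * dT + L_fus + L_vap   # ~7,600 J per gram
grams = 150e3 / per_gram              # mass the 150 kJ NIF pulse could boil

print(f"{per_gram:.0f} J/g, {grams:.1f} g vaporised at 100% absorption")
# prints "7594 J/g, 19.8 g vaporised at 100% absorption"
```

So the 150 kJ pulse is an upper bound of about 20 g of iron, and real absorption would be well below 100%.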
But, unlike a bullet, you don't need to vaporize an artillery shell - you just need to heat and prematurely detonate it. Similarly, a ballistic missile need not be destroyed; it just needs a laser strike powerful enough to zap a tiny scoring in its heat shield, or a thermoelectrostatic discharge to its controller electronics. Nimur (talk) 18:23, 15 March 2012 (UTC)[reply]

Which country has the most rivers? The most river span?

The United States has more than 250,000 rivers, totaling about 3.5 million miles of river span....

I've been trying to find the same info for other countries, but I'm having trouble. I mostly need to know if the U.S. has more rivers and river mileage than any other nation, and, if so, what are the stats for the second place country.

If someone could email me at jerjacques at gmail dot com with any leads, I would be very grateful.

Thank you, JSJ — Preceding unsigned comment added by Jerjacques111 (talkcontribs) 20:07, 13 March 2012 (UTC)[reply]

What is the formal definition of a river? Without knowing that, there is no clear answer. By any reasonable definition, though, I would expect Russia, Canada, and Brazil to have more than the US. Looie496 (talk) 20:32, 13 March 2012 (UTC)[reply]
Hydrogeology and hydrology are the formal study of surface water. There are specialists in these fields who quantify parameters about the surface water in various regions. Typically, it's more useful to discuss the average precipitation over a large geographic area; rather than the specific number of rivers. The total amount of water that flows out of a region - called the total river discharge, is easy to measure for a drainage basin. However, trying to count the number of rivers is actually quite difficult - because where the water flows is going to depend on the ground and subsurface geology, and may even vary from day to day, season to season (for example, consider a wadi - is it a river? ...Perhaps only during the spring rain season? How large must a tributary become before you count it in its own right?) In light of this, you might start by looking at List of drainage basins by area; watershed basins by country..., and see where the related links take you. You may also find these websites interesting: The Environmental Fluid Mechanics research program, and for a more commercial tack, Schlumberger Water Services, who make their money by quantitatively analyzing surface and groundwater. Nimur (talk) 23:45, 13 March 2012 (UTC)[reply]
It is hard to define precisely enough what constitutes a river. One closely related thing that is relatively easy to measure is the total amount of annual rainfall received by a country (since all water that falls down as rain, less some small fraction lost to evaporation, ends up in the rivers.) By that measure, Brazil is the leader by a large margin, followed by Russia, USA, China, and Indonesia.--Itinerant1 (talk) 06:12, 14 March 2012 (UTC)[reply]
In Australia we're pretty generous when naming rivers. Here's the Todd River near Alice Springs. It even has its own regatta. HiLo48 (talk) 06:54, 14 March 2012 (UTC)[reply]
The article Stream says that rivers and creeks (sometimes called brooks and other names) are both streams. I have seen some large creeks and some small rivers in the US, and I would judge there to be some overlap in their width, depth, and flow volume. The overlap is very large if the extremes of seasonal flow are considered. Is there an operational definition, or some bright-line standard endorsed by some international scientific or geographic or hydrological body? What are some very "small" (variously measured) rivers and some very "large" (variously measured) creeks? It also seems difficult to determine how small a water channel must be to qualify as a stream, since I know some small "branches" which drain a valley and have a definite watercourse with water flowing all year, fed by runoff from a "hollow" and exiting into a creek which locals would never call a creek or stream, because they are narrow and shallow (the watercourse, not the local folk). Edison (talk) 15:46, 14 March 2012 (UTC)[reply]
A "creek" in England is generally a tidal inlet from the sea; Barking Creek is an example. It just shows the difficulty of trying to pin down an exact number of features going only on what local people call them. Alansplodge (talk) 09:12, 16 March 2012 (UTC)[reply]

March 14

NIST aluminium ion clock

I was watching a popular science program on TV and it said that the aluminum ion experimental clock at the National Institute of Standards and Technology is the world's most precise clock, and is accurate to one second in about 3.7 billion years.

What do they mean by that? If I say my watch is accurate to 1 second a day I mean that it gains or loses no more than 1 second a day relative to GMT or some other standard. In other words, accuracy can only be measured relative to a standard.

But if the NIST clock is truly the most accurate then it is the standard, since there is nothing more accurate to compare it to. In effect, the NIST clock defines time. To check its accuracy you would have to measure 3.7 billion years by some other, more accurate, means, which contradicts the premise.

So, what do they mean by saying it’s the most precise clock? — Preceding unsigned comment added by Callerman (talkcontribs) 00:37, 14 March 2012 (UTC)[reply]

They are translating the clock resonator's Q factor, which is a very technical measurement (of phase noise, or frequency stability), into "layman's terms." Expressing the frequency stability in "seconds of drift per billion years" is a technically correct, but altogether meaningless, unit conversion. Over the time-span of a billion years, it's probable that the Q-factor will not actually remain constant. It's similar to expressing the speed of a car in earth-radii-per-millennium, instead of miles-per-hour. The math works out; the units are physically valid and dimensionally correct ([8]); but we all know that a car runs out of gas before it reaches one earth-radius, so we don't use such silly units to measure speed. Similarly, physicists don't usually measure clock stability in "seconds drift per billion years" in practice.
Here's some more detail from NIST's website: How do clocks work? "The best cesium oscillators (such as NIST-F1) can produce frequency with an uncertainty of about 3 x 10^-16, which translates to a time error of about 0.03 nanoseconds per day, or about one second in 100 million years." And, they link to "From Sundials to Atomic Clocks," a free book for a general audience that explains how atomic clocks work. Chapter 4 is all about Q factor: what it is, why we use it to measure clocks, and how good a Q we can build using different materials. Nimur (talk) 01:04, 14 March 2012 (UTC)[reply]
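The NIST conversion quoted above can be verified in a couple of lines (using the 3 x 10^-16 fractional frequency uncertainty from their page):

```python
# A fractional frequency uncertainty translates directly into drift:
# error accumulated = fractional uncertainty * elapsed time.
frac = 3e-16                      # fractional uncertainty of NIST-F1
seconds_per_day = 86400

ns_per_day = frac * seconds_per_day * 1e9             # drift per day, in ns
years_to_lose_1s = 1 / frac / (365.25 * seconds_per_day)  # time to drift 1 s

print(f"{ns_per_day:.3f} ns/day, one second in {years_to_lose_1s:.2e} years")
```

This reproduces both quoted figures: about 0.026 ns of drift per day, and roughly 100 million years to accumulate a full second.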
Alright Nimur, a question to your answer so you know you're not off the hook yet. Does anyone in science or society-at-large benefit from the construction of a clock that is more accurate than 0.03 nanoseconds per day, or is this an intellectual circle jerk? (which is also fine, by the way, because science is awesome.) Someguy1221 (talk) 01:56, 14 March 2012 (UTC)[reply]
Indeed, there are practical applications. If I may quote myself, from a seemingly unrelated question about metronome oscillations, in May 2011: "This "theoretical academic exercise" is the fundamental science behind one of the most important engineering accomplishments of the last century: the ultra-precise phase-locked loop, which enables high-speed digital circuitry (such as what you will find inside a computer, an atomic clock, a GPS unit, a cellular radio-telephone, ...)." In short, yes - you directly benefit from the science and technology of very precise clocks - they enable all sorts of technology that you use in your daily activities. The best examples of this would be high-speed digital telecommunication devices - especially high frequency wireless devices. A stable oscillator, made possible by a very accurate clock, enables better signal reception, more dense data on a shared channel, and more reliable communication. Nimur (talk) 06:43, 14 March 2012 (UTC)[reply]
Nimur has not correctly understood the relationship between Q and stability. Q is a measure of sharpness of resonance. If you suspend a thin wooden beam between two fixed points and hit it, it will vibrate - i.e. it resonates. But the vibrations quickly die away, because wood is not a good elastic material - it has internal friction losses. If you use a thin steel beam, the vibrations die away only slowly - it is a better elastic material. Engineers would say the wood is a low-Q material and the steel a high-Q material. All other things equal, a high-Q resonator will give a more stable oscillation rate, because if other factors try to change the oscillation rate the high-Q resonator will resist the change better. But there are other things, and they aren't generally equal. Often a high-Q material expands with temperature - then the rate depends on temperature no matter how high the Q is.
In the real world (i.e. consumers and industry, as distinct from esoteric research in university labs) the benefit of precise clocks is in telecommunications - high performance digital transmission requires precise timing to nanosecond standards, and in metrology - time is one of the 3 basic quantities [Mass, Length, Time] that all practical measurements of any quantity are traceable back to. Precise timing is also the basis of navigation - GPS is based on very precise clocks in each satellite. So folks like the NIST are always striving to make ever more precise clocks so they DO have a reference standard against which they can check the ever better clocks used in industry etc.
It is quite valid to state the accuracy of clocks as so many nanoseconds error per year or seconds per thousand years or whatever. It's often done that way because it gives you a good feel for the numbers. You don't need to measure for a year or 100 years to know. Here's an analogy: When I was in high school, we had a "rocket club". A few of us students, under the guidance of the science teacher, made small rockets that we launched from the school cricket pitch. We measured the speed and proudly informed everyone - the best did about 300 km per hour. That does not mean our rockets burned for a whole hour and went 300 km up. They only burned for seconds and achieved an altitude of around 400 m, but we timed the rockets passing two heights and did the math to get km/hour. If we told our girlfriends that the rockets did 80 m/sec, that doesn't mean much to them, but in km/hr they can compare it with things they know, like cars. Keit120.145.30.124 (talk) 03:02, 14 March 2012 (UTC)[reply]
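The rocket conversion above is quick to check; a one-liner sketch of the m/s to km/h arithmetic:

```python
def ms_to_kmh(v_ms):
    # 1000 m per km and 3600 s per hour give the factor 3.6
    return v_ms * 3.6

print(ms_to_kmh(80))  # 288.0 km/h, consistent with the "about 300 km/h" quoted
```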
I respectfully assert that I do indeed have a thorough understanding of Q-factor as it relates to resonance and oscillator frequency stability. I hope that if any part of my post was unclear, the misunderstanding could be clarified by checking the references I posted. Perhaps you misunderstood my use of frequency stability for system stability in general? These are different concepts. Q-factor of an oscillator directly corresponds to its frequency stability, but may have no connection whatsoever to the stability of the oscillation amplitude in a complex system. Nimur (talk) 06:59, 14 March 2012 (UTC)[reply]
Does "Q-factor of an oscillator directly correspond to its frequency stability" as you stated? No, certainly not. An oscillator fundamentally consists of two things: a) a resonant device or circuit and b) an amplifier in a feedback connection that "tickles" the resonant device/circuit to make up for the inevitable energy losses. This has important implications. First, you can have a high Q but at the same time the resonant device can be temperature dependent. As I said above, it doesn't matter what the Q is; if the resonant device is temperature sensitive, then the oscillation frequency/clock rate will vary with temperature. Same with resonant device aging - quartz crystals and tuning forks can have a very high Q, but still be subject to significant aging - the oscillation rate varies more the longer you leave the system running. Second, the necessary feedback amplifier can have its own non-level response to frequency. This non-level response combines with the resonant device response to in effect "pull" the resonance off the nominal frequency. Real amplifiers all have a certain degree of aging and temperature dependence in their non-level response. Also, practical amplifiers can exhibit "popcorn effect" - their characteristics occasionally jump very slightly in value. When you get stability down below parts per 10^8, this can be important. All this means that it HELPS to have high Q (it makes "pulling" less significant), but you CAN have high stability with low Q (if the amplifier is carefully built and the resonant device has a low temperature coefficient), and you can have rotten stability with very high Q. I've not discussed amplitude stability in either of my posts, as this has little or no relevance to the discussion. Keit120.145.166.92 (talk) 12:16, 14 March 2012 (UTC)[reply]
Several sources disagree with you; Q is a measure of frequency stability. Frequency Stability, First Course in Electronics (Khan et al., 2006). Mechatronics (Alciatore & Histand), in the chapter on System Response. Our article, Explanation of Q factor. These are just the few texts I have on hand at the moment. I also linked to the NIST textbook above. Would you like a few more references? I'm absolutely certain this is described in gory detail in Horowitz & Hill. I'm certain I have at least one of each: a mechanical engineering text, a physics textbook, and a control theory textbook on my bookshelf at home, and from each I can look up the "oscillators" chapter and cite a line at you, if you would like to continue making unfounded assertions. Frequency stability is defined in terms of Q. Q-factor directly corresponds to frequency stability. Nimur (talk) 18:34, 14 March 2012 (UTC)[reply]
(1) If you read the first reference you cited (Khan) carefully it says with math the same as I did in just words: high Q helps but is not the whole story. It says high Q helps, but for any change, there must be a change initiator - so if there is no initiator, there's no need for high Q. Nowhere does Khan say stability directly relates to Q. In fact, his math shows where one of the other factors gets in, and offers a clue on another. (2) I don't have a copy of your 2nd citation, so I can't comment on it. (3) The Wikipedia article does say "High Q oscillators ... are more stable" but this is misleading, as Q is only one of many factors. (4) I don't think you'll find anywhere in H&H where it says Q determines frequency stability. With respect to your good self Nimur, you seem to be making 3 common errors: a) you are reading into texts what you want to believe, rather than reading carefully, (b) like many, you cite Wikipedia articles as an authority. That's not what Wikipedia is for - the articles are good food for thought and hints on where to look and what questions to ask, but are not necessarily accurate. c) you haven't recognised that Khan, as a first course presentation, gives a simplified story that, while correct in what it says, doesn't cover all the details. Rather than dig up more books, read carefully what I said, then go back to the books you've already cited.
A couple of examples: Wien bridge RC oscillator - Q is 0.3, extremely low, but with careful amplifier design temperature stability can approach 1 part in 10^5 over a 30C range; 1 part in 10^4 is easy. 2nd example: I could make up an LC oscillator with the inductor a high Q device (say 400) and the C a varicap diode (Q up to 200). The combined in-circuit Q can be around 180. That should give a frequency stability much much better than the Wien oscillator with its Q of only 0.3. But wait! Sneaky Keit decided to bias the varicap from a voltage derived from a battery, plus a random noise source, plus a temperature transducer, all summed together. So, frequency tempco = as large as you like, say 90% change over 30 C, aging is dreadful (as the battery slowly goes flat), and the thing randomly varies its frequency all over the place. I can't think why you would do this in practice, but it does CLEARLY illustrate that, while it HELPS to have high Q, Q does NOT directly correspond to frequency stability; lots of other factors can and do affect it. Keit121.221.82.58 (talk) 01:25, 15 March 2012 (UTC)[reply]
Keit, your unreferenced verbiage is no more than pointless pedantry. And nobody believes you had a girlfriend in high school either. — Preceding unsigned comment added by 69.246.200.56 (talk) 01:57, 15 March 2012 (UTC)[reply]
What Keit is saying makes sense to me. According to our article, "[a] pendulum suspended from a high-quality bearing, oscillating in air, has a high Q"—but obviously a clock based on that pendulum will keep terrible time on board a ship, and even on land its accuracy will depend on the frequency of earthquakes, trucks driving by, etc., none of which figure into the Q factor. If you estimate the frequency stability of an oscillator based on the Q factor alone, you're implicitly assuming that it's immune to, or can be shielded from, all external influences. I'm sure the people who analyze the stability of atomic clocks consider all possible external perturbations (everything from magnetic fields to gravitational waves) in the analysis. Those influences may turn out to be negligible, but you still have to consider them.
Also, it seems as though no one has really addressed the original question, which is "one second per 3.7 billion years relative to what?". The answer is "relative to the perfect mathematical time that shows up in the theory that's used to model the clock", so to a large extent it's a statement about our confidence in the theory. I don't know enough about atomic clocks to say more than that, but it may be that the main contributor to the inaccuracy is Heisenberg's uncertainty principle, in which case we're entirely justified in saying "this output of this device is uncertain, and we know exactly how uncertain it is." -- BenRG (talk) 22:59, 15 March 2012 (UTC)[reply]
I'm afraid I don't think anyone has addressed my original question. Answers in terms of frequency stability, etc., do not seem to work. How do you know a frequency is stable unless you have something more stable to measure it against? How do you measure a deviation except with another, more accurate clock? But if this is the most accurate clock, what is there to compare it against? Just looking at my watch alone, for example, if I am in a room with no other clocks and no view of the outside daylight, I cannot say whether it is running fast or slow. I can only tell that by comparing it with something which I assume is more reliable. — Preceding unsigned comment added by Callerman (talkcontribs) 06:32, 16 March 2012 (UTC)[reply]

Hospital de Sant Pau Barcelona Spain.

Is this hospital open for medical care to patients today? 71.142.130.132 (talk) 03:53, 14 March 2012 (UTC)[reply]

Have you seen that we have an article on Hospital de Sant Pau? It seems to suggest that it ceased being a hospital in June 2009. Vespine (talk) 04:04, 14 March 2012 (UTC)[reply]

Mean Electrical Vector of the Heart

Hello. When would one drop perpendiculars from both lead I (magnitude: algebraic sum of the QRS complex of lead I) and lead III (magnitude: algebraic sum of the QRS complex of lead III), and draw a vector from the centre of the hexaxial reference system to the point of intersection of the perpendiculars to find the mean electrical vector? Sources are telling me to drop a perpendicular from the lead with the smallest net QRS amplitude. Thanks in advance. --Mayfare (talk) 04:46, 14 March 2012 (UTC)[reply]

I don't have medical training, so I can only guess, but if I understand correctly:
  • the contractions of different parts of the heart have accompanying electrical signals that move in the same direction as the contraction. Movement towards one of the electrodes of an electrode pair will give a positive or negative signal, while movement perpendicular to that direction would have little influence because it would affect both electrode potentials the same way, increase or decrease.
  • All these movements can be represented by vectors and the mean vector of these is what you're after.
  • For each electrode pair you have measured the positive and negative deflection voltages, the sum of those give you a resulting vector for each electrode pair, and these correspond to the magnitude of the mean vector in each of those directions.
  • If one of these vectors is zero or very small, you know that the mean vector must be perpendicular to that direction, leaving you only one last thing to determine, which way it points.
  • If you have two smallest vectors with the same magnitude, then the mean vector will be on one of the angle bisectors. The info I got has lead I at 0° (to the right), lead II at 60° clockwise rotation, lead III at 120°. If I and III are equal in magnitude (don't need the same sign), then the mean vector can be 150° or -30°, but in those cases lead II will be smallest, so the only possibilities left are +60° or -120°, depending on the sign of the lead II result. That's how I understood it, but all the different electrodes made it a bit confusing. So far only arm and leg electrodes seemed involved?? A link to a site with the terminology or examples could help. More people are inclined to have a look if the subject is just a click away instead of having to google first. Hmmm, would there be a correlation between links in question and number of responses... 84.197.178.75 (talk) 19:37, 14 March 2012 (UTC)[reply]

All basically correct! Our article on Electrocardiography has some information about vectorial analysis, but I'm not sure if that's sufficient for you. In the normal heart the mean electrical vector is usually around about 60 degrees (lead II), but anywhere between -30 and +90 is considered normal. The mean vector, of course, will be perpendicular to the direction of the smallest vector, and in the direction of the most positive vector. Mattopaedia Say G'Day! 07:10, 16 March 2012 (UTC)[reply]
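The perpendicular-dropping construction described above can also be done numerically: each lead reading is the projection of the mean vector onto that lead's axis, so two readings give two linear equations. A sketch under the hexaxial convention quoted in the thread (lead I at 0°, lead III at 120°, positive angles clockwise); the function name and example numbers are illustrative, not from a medical source:

```python
import math

def mean_axis(lead_I, lead_III, angle_I=0.0, angle_III=120.0):
    """Solve for the mean electrical vector from two lead projections.

    Each lead reading is the projection of the mean vector onto that
    lead's axis (angles in degrees). Dropping perpendiculars on the
    hexaxial diagram is the geometric version of solving this 2x2 system.
    """
    a1, a3 = math.radians(angle_I), math.radians(angle_III)
    # projections: x*cos(a) + y*sin(a) = reading, one equation per lead;
    # solve by Cramer's rule
    det = math.cos(a1) * math.sin(a3) - math.sin(a1) * math.cos(a3)
    x = (lead_I * math.sin(a3) - lead_III * math.sin(a1)) / det
    y = (math.cos(a1) * lead_III - math.cos(a3) * lead_I) / det
    return math.hypot(x, y), math.degrees(math.atan2(y, x))

# equal deflections in I and III put the axis on the bisector at +60°
# (lead II), matching the reasoning in the bullet list above
print(mean_axis(5, 5))
```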

Electricity prices

In this Scientific American article, US Energy Secretary Chu says natural gas "is about 6 cents" per kilowatt hour, implying it is the least expensive source of electricity. However Bloomberg's energy quotes say on-peak electricity costs $19.84-26.50 per megawatt hour, depending on location. Why is that so much less? 75.166.205.227 (talk) 09:44, 14 March 2012 (UTC)[reply]

The Bloomberg prices are prices at which energy companies can buy or sell generating capacity in the wholesale commodities markets. An energy company will then add on a mark-up to cover their costs of employing staff, maintaining a distribution network, billing customers etc. plus a profit margin. Our article on electricity pricing says that the average retail price of electricity in the US in 2011 was 11.2 cents per kWh. Gandalf61 (talk) 10:48, 14 March 2012 (UTC)[reply]
(Per the description,) The figures given in the Scientific American article are apparently the Levelised energy cost. This is a complicated calculation; the number depends on several assumptions, and it's not clear to me what market the estimated break-even price is for, although it sounds like it depends on what you count in the costs. Also I'm not sure why the OP is making the assumption that natural gas is the least expensive source; I believe coal normally is if you don't care about the pollution. Edit: Actually the source says coal is normally more expensive now, although I don't think this is ignoring pollution. Nil Einne (talk) 11:21, 14 March 2012 (UTC)[reply]
The Bloomberg quote for gas is $2.31/MMBtu, and 1 MMBtu is 293 kWh, so that would be 0.7 cents per kWh. Maybe he was a factor 10 off? Or he quoted the European consumer gas prices, those are around 0.05€ per kWh... 84.197.178.75 (talk) 12:56, 14 March 2012 (UTC)[reply]
The article I linked above suggests the figures are accurate. For example the lowest for natural gas (advanced combined cycle) is given as $63.1/megawatt-hour. The cost for generating electricity from natural gas is obviously going to be a lot higher than just the price of the gas. Nil Einne (talk) 13:24, 14 March 2012 (UTC)[reply]
Oops, you're right of course. I was thinking 10% efficiency was way too low for a power plant, I saw your comment but the penny didn't drop then... 84.197.178.75 (talk) 15:27, 14 March 2012 (UTC)[reply]
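The conversion in this subthread can be written out with a plant-efficiency term, which is the penny that dropped above; the 50% figure below is an assumed combined-cycle efficiency for illustration, not a number from the thread:

```python
# Sketch: fuel cost per thermal kWh, then per electric kWh given an
# assumed plant efficiency. Price and conversion factor are from the
# posts above ($2.31/MMBtu, 1 MMBtu = 293 kWh thermal).
MMBTU_TO_KWH_THERMAL = 293.0

def fuel_cost_cents_per_kwh(price_per_mmbtu, efficiency=1.0):
    return price_per_mmbtu / (MMBTU_TO_KWH_THERMAL * efficiency) * 100

print(fuel_cost_cents_per_kwh(2.31))       # ~0.79 cents/kWh thermal
print(fuel_cost_cents_per_kwh(2.31, 0.5))  # ~1.58 cents/kWh electric
# compare with the ~6.3 cents/kWh levelised cost: fuel is a minority share
```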
Talk of gas-generated electricity being cheaper than coal is puzzling. In the 1970's, baseload coal and nuke were cheap electricity, and natural gas was used in low-efficiency fast-start peakers, typically 20 megawatt turbine units, which could be placed online instantly to supplement the cheaper fueled generators, when there was a loss of generation or to satisfy a short-duration peak load. The peakers might be 10 or 15 percent of total generation for a large utility. The coal generators were 10 times larger (or more) than the gas turbines, and took hours to bring on line. The fuel cost for gas was over 4 times the fuel cost for other fossil fuels, and 18 times as much as for nuclear. The total cost was over 3 times as much for gas as for other fossil fuels and 8 times as much as nuclear. Is gas now being used in large base-load units, 300 to 1000 megawatt scale, to generate steam to run turbines, rather than as direct combustion in turbines? Edison (talk) 15:33, 14 March 2012 (UTC)[reply]
I think so. This 1060 MW plant built in 2002 is very typical of the new gas plants installed from the late 1990s to present. They are small, quiet, relatively clean except for CO2, and have become cookie cutter easy to build anywhere near a pipeline. 75.166.205.227 (talk) 18:25, 14 March 2012 (UTC)[reply]

I still do not understand why a power company would quote a wholesale contract price less than 30% of their levelized price. Even if it was only for other power companies, which I don't see any evidence of, why would they lose so much money when they could simply produce less instead? 75.166.205.227 (talk) 18:25, 14 March 2012 (UTC)[reply]

In some areas (for example, in California), energy companies do not have a choice: they must produce enough electricity to meet demand, even if this means operating the business at a loss. Consider reading: California electricity crisis, which occurred in the early parts of the last decade. During this period, deregulation of the energy market allowed companies (like the now infamous Enron) to simply turn the power-stations off if the sale-price was lower than the cost to produce. Citizens didn't like this. To bring a short summary to a very long and complicated situation: the citizens of California fired the governor, shut down Enron, and mandated several new government regulations, and several new engineering enhancements to the energy grid. The economics of power distribution are actually very complicated; I recommend "proceeding with caution" any time anyone quotes a "price" without clearly qualifying what they are describing. Nimur (talk) 18:43, 14 March 2012 (UTC)[reply]
You should remember what levelised cost is. It tries to take into account total cost over the lifespan and includes capital expenditure etc. It's likely a big chunk of the cost is sunk. Generating more will increase expenditure, e.g. fuel and maintenance and perhaps any pollution taxes, and may also lower lifespan, but provided your increased revenue is greater than the increased expenditure (i.e. you're increasing profit) then it will still likely make sense to generate more. The fact you're potentially earning less than needed to break even is obviously not a happy picture for your company, but generating less because you're pissed isn't going to help anything; in particular, it's not going to help you service your loans (which remember are also part of the levelised cost). It may mean you've screwed up in building the plant although I agree with Nimur, it's complicated and this is a really simplistic analysis (but I still feel it demonstrates why you can't just say it's better if they don't generate more). (Slightly more complicated analysis may consider the risk of glutting the market although realistically, you're likely only a tiny proportion of the market.) You should also perhaps remember that the costs are (as I understand it at least) based on the assumption of building a plant today (which may suggest it makes no sense to build any more plants, but then we get back to the 'it's complicated' part). Nil Einne (talk) 20:13, 14 March 2012 (UTC)[reply]
Ah, I see, so they have already recovered (or are recovering) their sunk and amortized costs from other contracts and/or previous sales, so the price for additional energy is only the marginal cost of fuel and operation. Of course that is it, because as regulated utilities their profits are fixed. That's great! ... except for the Jevons paradox implications. 75.166.205.227 (talk) 22:08, 14 March 2012 (UTC)[reply]
Natural gas is the cheapest energy source delivered directly to the home, at about 1/3 the cost of electricity per BTU, for those of us lucky enough to have natural gas lines. Sure, our homes explode every now and then, but oh well. StuRat (talk) 22:38, 14 March 2012 (UTC)[reply]
If you extrapolate these numbers for cumulative global installed wind power capacity, you get this 95% prediction confidence interval.
When the natural gas which is affordable (monetarily and/or environmentally) has been burned up by 1000 megawatt power plants, then what heat source will folks use who now have "safe, clean, affordable" natural gas furnaces? I have always thought (and I was not alone) that natural gas should be the preferential home heating mode, rather than electric resistance heat. Heat pumps are expensive and kick over to electric resistance heat when the outside temperature dips extremely low (unless someone has unlimited funds and puts in a heat pump which extracts heat from the ground). Edison (talk) 00:08, 15 March 2012 (UTC)[reply]
I agree that we are rather short-sighted to use up our natural gas reserves to generate electricity. Similarly, I think petroleum should be kept for making plastics, not burned to drive cars. We should find other sources of energy to generate electricity and power our cars, not just to save the environment but also to preserve these precious resources. We will miss them when they are gone. StuRat (talk) 07:42, 15 March 2012 (UTC)[reply]
I completely agree, but honestly think there is nothing to worry about. Wind power is growing so quickly and so steadily that it has the tightest prediction confidence intervals I have ever seen in an extrapolation of economics data. Also, there is plenty of it to serve everyone and it's going to get very much less expensive and higher capacity on the same real estate and vast regions of ocean very soon. Npmay (talk) 22:01, 15 March 2012 (UTC)[reply]
Who did that extrapolation and what are the assumptions? It appears to be based on an exponential growth model—why not logistic growth? -- BenRG (talk) 00:35, 16 March 2012 (UTC)[reply]
I agree a logistic curve would be a better model, but when I tried fitting the sigmoids, they were nearly identical -- within a few percent -- to the exponential model out to 2030, and did not reach their inflection point until far enough out that the amount of electricity being produced was unrealistic. Npmay (talk) 01:20, 16 March 2012 (UTC)[reply]
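The near-identity reported above is expected: well below its ceiling, a logistic curve is approximately exponential, and the two only diverge near the inflection. A sketch with made-up illustrative parameters (K, r, t0 below are assumptions, not fitted wind-capacity values):

```python
import math

# Far below the ceiling K, the logistic K/(1 + e^{-r(t-t0)}) is
# approximately K*e^{r(t-t0)}, i.e. plain exponential growth.
K, r, t0 = 10_000.0, 0.25, 2040.0  # ceiling (GW), growth rate, inflection year

def logistic(t):
    return K / (1.0 + math.exp(-r * (t - t0)))

def exponential(t):
    return K * math.exp(r * (t - t0))

for year in (2012, 2020, 2030, 2045):
    l, e = logistic(year), exponential(year)
    print(year, round(l, 1), round(e, 1), f"relative diff {abs(l - e) / l:.1%}")
```

Early on the relative difference is a fraction of a percent; past the inflection year the exponential races ahead of the logistic, which is why the two fits agree out to 2030 but not indefinitely.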

Polyethylene

Is the dimer for polyethene butane? If not, what? Plasmic Physics (talk) 12:25, 14 March 2012 (UTC)[reply]

Butene? --Colapeninsula (talk) 12:57, 14 March 2012 (UTC)[reply]
Butane is the saturated hydrocarbon C4H10, and cannot be a dimer for polyethene (more correctly known as polyethylene), a saturated hydrocarbon H.(C2H4)n.H. Perhaps you meant "Is the dimer for butane polyethylene?". For the polyethylene with n=2, H.(C2H4)n.H reduces to C4H10, i.e. it IS butane. But for all n<>2, the ratio of C to H changes, so the answer is still no. To form dimers, you need two identical molecules combined without discarding any atoms. Keit120.145.166.92 (talk) 13:12, 14 March 2012 (UTC)[reply]
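The atom-counting argument above can be sketched directly from the formula H-(C2H4)n-H: only n=2 matches butane's C4H10, and the H/C ratio shifts with every other n:

```python
# Sketch: atom counts for the chain H-(C2H4)n-H described above.
def polyethylene_formula(n):
    carbons = 2 * n
    hydrogens = 4 * n + 2  # chain hydrogens plus the two end-cap H atoms
    return carbons, hydrogens

print(polyethylene_formula(2))  # (4, 10) -> C4H10, i.e. butane
for n in (1, 2, 3, 4):
    c, h = polyethylene_formula(n)
    print(f"n={n}: C{c}H{h}, H/C ratio {h / c:.2f}")  # ratio varies with n
```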

The only solution to that would be cyclobutane? Plasmic Physics (talk) 22:21, 14 March 2012 (UTC)[reply]

That may very well not be the answer either; the only solution to preserving the elemental ratio is to have a diradical, and polyethylene is not a diradical. Plasmic Physics (talk) 22:30, 14 March 2012 (UTC)[reply]

@Colapeninsula: Butene would require a middle hydrogen atom to migrate to the opposite end of the chain, not likely - the activation energy would be pretty high. Plasmic Physics (talk) 23:51, 14 March 2012 (UTC)[reply]

Dissuading bunnies from eating us out of house and home...

... literally. My daughter's two rabbits, when we let them roam the house, gnaw the woodwork, the furniture, our shoes... Is there some simple means by which we can prevent this? Something nontoxic, say, but unpleasant to their noses that we can spray things with?

Ta

Adambrowne666 (talk) 18:36, 14 March 2012 (UTC)[reply]

From some website: The first thing to do is buy your bunny something else to chew on. You can buy bunny safe toys from many online rabbit shops. An untreated wicker basket works well too. They also enjoy chewing on sea grass mats. To deter rabbits from chewing on the naughty things, try putting some double sided sticky tape on the area that is being chewed. Rabbits will not like their whiskers getting stuck on the tape. You can also try putting vinegar in the area too, as rabbits find the smell and taste very very offensive. Bitter substances tend not to deter rabbits as they enjoy eating bitter foods (ever tried eating endive? very bitter.)
Or google "rabbit tabasco" for other ideas .... 84.197.178.75 (talk) 19:59, 14 March 2012 (UTC)[reply]
Why would you let bunnies run free indoors ? Don't they crap all over the place ? Not very hygienic. Maybe put them in the tub and rinse their pellets down after an "outing". StuRat (talk) 22:26, 14 March 2012 (UTC)[reply]
Fricaseeing might help. Supposedly they taste just like chicken. ←Baseball Bugs What's up, Doc? carrots→ 23:38, 14 March 2012 (UTC)[reply]
Rabbits tend to go to the toilet in the same spot and they're pretty easy to litter box train. It's harder, but not impossible, to train some rabbits not to chew things, but we never managed to do it with our two dwarf bunnies. For that reason we just don't leave them in the house unsupervised. We do let them run around the house sometimes and they won't go to the toilet on the floor, but we did catch one of them once chewing on the fridge power cable, which was the final straw for giving them free rein of the house. Vespine (talk) 23:53, 14 March 2012 (UTC)[reply]
As a final point, I do remember reading that some bunny breeds are just more suitable as "house" pets than others. There are plenty of articles on the subject if you google "house rabbit". Vespine (talk) 23:57, 14 March 2012 (UTC)[reply]

Thanks, everyone - yeah, they crap everywhere; we're not masters of the household at all - I don't know how they get away with it: they drop dozens of scats all over the house, but if I do one poo in the living room, people look askance! Will try the double-sided tape and other measures; wish me luck!

Resolved

March 15

feeding plants carbon

What are some ways to feed plants carbon? Could I administer malic acid to CAM plants through the stomata? There's a product out there on the market that supposedly is an artificial carbon source for plants-- what are some possible mechanisms to "help out" plants with carbon fixation? (This is for ornamental plants, where the large-scale boost of organic material is desired.) I can't use ammonium (bi)carbonates because of the ammonium ion's toxicity to fish. Could I administer an organic acid + bicarbonate, or maybe boric acid? 74.65.209.218 (talk) 06:01, 15 March 2012 (UTC)[reply]

Sugar works pretty well. Plasmic Physics (talk) 07:09, 15 March 2012 (UTC)[reply]
Googling the topic of feeding plants sugar, I doubt this is a beneficial solution. It seems to encourage bacterial, rather than plant, growth. 74.65.209.218 (talk) 08:14, 15 March 2012 (UTC)[reply]
What makes you think your plant is deficient in carbon ? Doesn't it get enough from carbon dioxide in the air and/or organic molecules in the soil ? StuRat (talk) 08:17, 15 March 2012 (UTC)[reply]
You could always put an animal in with the plants as they produce carbon dioxide, maybe more fish? Or a hamster. SkyMachine (++) 08:29, 15 March 2012 (UTC)[reply]
I'm trying to speed up carbon fixation. Furthermore, these are aquatic plants in a fish tank, which seem to grow slowly. 74.65.209.218 (talk) 09:10, 15 March 2012 (UTC)[reply]
First, are they growing more slowly than is typical for their species ? Second, what makes you think that carbon is the limiting factor ? Perhaps something else is deficient, such as light. If you don't accurately determine the problem, any "solution" is likely to cause more harm than good. StuRat (talk) 09:59, 15 March 2012 (UTC)[reply]
Did your search just focus on sucrose, or did you include a variety of sugars? Plasmic Physics (talk) 08:32, 15 March 2012 (UTC)[reply]
My search included mentions of glucose. My search shows that glucose is actually an herbicide. 74.65.209.218 (talk) 09:11, 15 March 2012 (UTC)[reply]
I suppose if the sugar was colonized by yeast you would produce carbon dioxide. SkyMachine (++) 08:43, 15 March 2012 (UTC)[reply]
This doesn't help. I can't have too many microbes proliferating in the tank water. Furthermore, I am not sure if roots are meant to absorb carbon dioxide or even sugar. I am looking at more sophisticated ways of adding carbon. Does administering malic acid to CAM plant stomata speed up carbon fixation? 74.65.209.218 (talk) 09:10, 15 March 2012 (UTC)[reply]
You could always use compressed CO2 like you can get at home brew stores or soda stream canisters. Modify a switch to slowly relase CO2 to bubble through the tank. SkyMachine (++) 09:29, 15 March 2012 (UTC)[reply]
That's going to produce some carbonic acid in the water and make it more acidic, which might not be good for the fish. StuRat (talk) 09:56, 15 March 2012 (UTC)[reply]
Seems to be a mix of factors: light, CO2, fertiliser.
  • Plants need light to grow, but the more light they get, the more CO2 and trace elements they will need.
  • CO2 diffusion in water is much slower than in air. There can be a CO2-depleted layer of water around the plants. CO2 injection is one of the techniques used, with a CO2 tank, valves, regulators and controller, measuring the pH to adjust the CO2 injection. Seems to be a bit expensive.
  • Trace elements, especially iron it seems, may be lacking. Add some trace element mix for water plants.
  • Air bubblers, biofilters, and plants will remove CO2 from the water; fish add CO2. Yeast generators are a low-cost way of adding CO2.
  • Adding CO2, when the lighting is adequate, will increase the oxygen in the water due to more photosynthesis from the plants.
It's all a balancing act it seems, check out some forums like forum.aquatic-gardeners.org for more info. 84.197.178.75 (talk) 11:25, 15 March 2012 (UTC)[reply]
What if you combine malic acid with a buffering agent? Plasmic Physics (talk) 11:28, 15 March 2012 (UTC)[reply]


Note about carbon fixation: plants take up CO2 via photosynthesis. That's the ONLY way they take up carbon in significant amounts. Forget about trying to feed them carbon any other way. Do you want to boost carbon fixation because you want the plants to grow, or because you want to reduce the amount of CO2 in the water? For the first you would add CO2, light, and trace elements if needed. For reducing CO2 you would add light, more plants, and again trace elements if needed. But from what I understand, faster-growing plants by CO2 injection will result in more O2 in the water for the fish, and under 30 ppm the CO2 does not hurt them. 84.197.178.75 (talk) 12:19, 15 March 2012 (UTC)[reply]


If this is for aquatic plants, they make commercial CO2 injectors specifically intended to introduce extra carbon dioxide into planted tanks. It's a whole category of products on the specialist sites (e.g. here). These get carbon dioxide from pressurized tanks, available either from a welding supply company or from paintball supply companies. You can also put together a DIY system with a homebrew reactor based on sugar and yeast (you don't put the sugar and yeast in the tank; you put them in a separate vessel and pipe the gas that comes off into the tank). (Search /diy co2 aquarium/ or /diy co2 planted tank/ on Google, and you'll get plenty of results, including many step-by-step instructions. Try also /co2 system for aquarium/.) While adding the CO2 will depress the pH a little due to the carbonic acid formed, when the plants take in the carbon dioxide, they'll reverse that process, neutralizing the acidity. And the pH drop can be mitigated by making sure your tank has enough buffering capacity (usually referred to as "KH" in the test kits). If you want your plants to really take off once you start adding CO2, you may want to add some additional aquatic plant fertilizer. Try to avoid using regular plant fertilizer, as depending on formulation, it may produce algae blooms. You'll probably also want to invest in a better water chemistry test kit, as keeping acidity/buffering/nitrogen/phosphate/iron/etc. in balance in a planted tank, especially with CO2 injection, is more important than in a tank maintained just for the fish. -- 71.217.13.130 (talk) 16:44, 15 March 2012 (UTC)[reply]
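As a rough numerical sketch of the pH/KH/CO2 relationship described above, here is the common aquarium-hobbyist approximation CO2 ≈ 3 × dKH × 10^(7 − pH). The specific pH and KH values below are illustrative, not from the discussion, and the formula assumes carbonate is the only buffer in the water:

```python
# Estimate dissolved CO2 (ppm) from pH and carbonate hardness (dKH),
# using the common hobbyist approximation CO2 ~= 3 * dKH * 10^(7 - pH).
# Assumes carbonates are the only significant buffer in the tank water.

def co2_ppm(ph: float, dkh: float) -> float:
    return 3.0 * dkh * 10 ** (7.0 - ph)

# Illustrative values: lower pH at fixed KH implies more dissolved CO2.
for ph in (6.6, 6.8, 7.0, 7.2):
    print(f"pH {ph}, 4 dKH -> ~{co2_ppm(ph, 4):.1f} ppm CO2")
```

This is why the CO2 controllers mentioned above work by measuring pH: at a known KH, pH is a proxy for dissolved CO2.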

Bullet through the brain

I'm under the impression that shooting a bullet through the brain almost always causes instant death. Is this true, and if so, why? Phineas Gage had a huge tamping iron driven through his skull, yet he remained mostly unaffected. Lobotomies remove the entire prefrontal cortex, yet leave the patient mostly functional. In literature, I routinely read about studies of what happens when this or that area of the brain is lesioned/damaged. Why would a bullet, which is physically small and unlikely to take out a major portion of any brain structure, be so likely to cause death after penetrating the brain? --140.180.5.239 (talk) 06:49, 15 March 2012 (UTC)[reply]

As long as the brain stem is intact, there is a possibility of survival. If you want to not be brain dead, then you'll have to miss a few more sections. In addition, a bullet doesn't always make a clean wound; sometimes (depending on specs) the bullet liquefies tissue around it. There is a YouTube video somewhere of what can happen, although demonstrated on an apple. Plasmic Physics (talk) 07:07, 15 March 2012 (UTC)[reply]
Curiously, all of the good reviews on gunshot wounds to the brain happen to be in journals my library doesn't have a subscription to. But no, this is certainly not true. Without those reviews, I couldn't come up with many numbers. What I was able to glean from abstracts is that over 2,000 American soldiers in Vietnam managed to make it to a hospital alive despite taking a bullet through the brain. As for why a bullet causes so much damage: it's fast and spinning. It doesn't simply poke a hole through the tissue in front of it; rather, a bullet effectively pulls and drags the tissue around it, potentially causing catastrophic trauma. See Zapruder film for a famous example of what that means. If the bullet stops in the brain, which is more likely if it's a hollow-point bullet designed to slow down after hitting its target, the brain has to absorb all of that kinetic energy very quickly. Finally, getting shot in the head can cause severe bleeding and can easily send a person into respiratory arrest, neither of which will typically happen in the controlled setting of a surgical room. Someguy1221 (talk) 07:09, 15 March 2012 (UTC)[reply]
I would guess that most deep penetrating brain injuries result in death... but there are the rare exceptions to that and bullets are no exception. The shooting of Gabrielle Giffords is a salient recent example. In few of these cases, whether that shooting or the case of Gage, or in lobotomies, is there no damage. In fact, the damage is often quite profound. What's remarkable is that the victim doesn't die immediately.
What makes a bullet different from many of the other kinds of head injuries, Gage's probably included, is the sheer velocity of a bullet. A low-velocity bullet, say a .45 ACP, moves at almost 1,000 feet per second (about 680 miles per hour, or 1,100 km/h). A bullet from a modern rifle (military or hunting) is about 3x that speed.
Take a look at hydrostatic shock and stopping power. Hydrostatic shock describes why "remote", i.e. not directly to the brain, bullet impacts can incapacitate almost instantly. You don't have to be a scientist to extrapolate those findings to what direct brain injuries do. Also look at terminal ballistics (not a great article, a lot of it looks like one person's production, but it should give you some context). The short answer is that while a bullet is small, the shockwave it creates as it enters an object, particularly an object with features like tissue, creates temporary disruptions much larger than the projectile itself. I actually doubt there's too much tumbling in brain tissue, although I could be very wrong about that. But there are a lot of very morbid journal articles (the above articles reference some of them) that talk about how occasionally supposedly more "humane" bullets have counter-intuitive effects. (Sidenote: the <s>Geneva Convention</s> Hague Convention requires that militaries use full metal jacketed bullets; however, there's some debate over the differences between hollow point and full metal jacketed rounds.) The brain is particularly sensitive in this respect, which is why these injuries are usually fatal. Shadowjams (talk) 07:26, 15 March 2012 (UTC)[reply]
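To put rough numbers on the velocity difference mentioned above, kinetic energy scales as KE = ½mv², so a rifle round at roughly 3x the speed carries far more energy even with a lighter bullet. The masses and velocities below are ballpark illustrative figures, not data from this discussion:

```python
# Rough muzzle-energy comparison using KE = 1/2 * m * v^2.
# Masses and velocities are ballpark illustrative figures only.

def kinetic_energy_j(mass_kg: float, velocity_ms: float) -> float:
    return 0.5 * mass_kg * velocity_ms ** 2

pistol = kinetic_energy_j(0.015, 260)  # ~.45 ACP: ~15 g bullet at ~260 m/s
rifle = kinetic_energy_j(0.004, 940)   # ~5.56 mm rifle: ~4 g bullet at ~940 m/s
print(f"pistol ~{pistol:.0f} J, rifle ~{rifle:.0f} J, ratio ~{rifle / pistol:.1f}x")
```

Even though the rifle bullet is lighter, the v² term dominates, which is consistent with the point that velocity, not bullet size, is what makes the wounding mechanism so destructive.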
A few points:
1) A modern rifled weapon, unlike an ancient one, has a spiral groove inside it, designed to spin the bullet to keep it from tumbling. This reduces air resistance and makes it go faster, farther, and straighter. If it continues like this through the brain, it may cause less damage than a tumbling bullet.
2) A slower bullet may actually cause more damage, by ricocheting around in the brain, rather than just going in and out.
3) As mentioned above, there are hollow-point bullets and other types, designed to rip apart on impact, causing much more damage. Such bullets are often illegal.
4) If you survive the initial trauma of the bullet, then infection becomes a major concern. StuRat (talk) 07:37, 15 March 2012 (UTC)[reply]
Well, you're wrong on pretty much every point, StuRat. All modern handguns are by definition rifled (this is a pretty elementary point to anyone with a cursory familiarity with firearms)... smoothbore guns are generally considered shotguns or muskets (if black powder)... ever wonder why a handgun that shoots .410 shotgun shells is legal? Answer is... because it has a rifled barrel. Btw, none of that has anything to do with my point... if you could get a musket ball to do 1,000 fps it'd do substantially more damage too. Again, "straight through" might be true if it were slow, but the high velocity of a bullet has effects on tissue that are disproportionate. As for point 2, there's a huge debate over "energy delivered" and "stopping power" and "hydrostatic shock" and other similar concepts. Many modern militaries have shifted to high-velocity, smaller rounds. I doubt there's much chance for "ricochet" inside the skull with most modern rounds. I've heard that mafia tale that a .22 was used for assassinations for this reason, but I get a strong suspicion that's urban legend. Your point 3 is again subject to the intense debate about the effectiveness of particular round types. Point 4, I doubt that's true with modern medicine. I think swelling is probably the greater risk. Shadowjams (talk) 09:07, 15 March 2012 (UTC)[reply]
I've modified my point 1 accordingly, but don't think you've made your case that my points 2-4 are wrong. On point 4, swelling may also be a major concern, but that doesn't mean that infection isn't. StuRat (talk) 09:51, 15 March 2012 (UTC)[reply]
This looks like another great opportunity to mention Mike the Headless Chicken on the science desk.--Shantavira|feed me 08:55, 15 March 2012 (UTC)[reply]
Or even Roland the Headless Thompson Gunner. --Jayron32 14:06, 15 March 2012 (UTC)[reply]
Or Carlos Rodriguez. --Itinerant1 (talk) 18:41, 15 March 2012 (UTC)[reply]
You may also be interested in this man, who lost 43% of his brain in the Falklands War and is still with us: Robert Lawrence (British Army officer). --TammyMoet (talk) 09:35, 15 March 2012 (UTC)[reply]
The amazing part is that he's not only with us, but he managed to lead an active life and even to get married after the injury. I'd have expected him to be a vegetable (or at least severely mentally disabled.) --Itinerant1 (talk) 02:30, 16 March 2012 (UTC)[reply]

Car radio

I asked this question a few years ago on another forum, but didn't get any replies I felt answered it. I used to own a car where the car radio would sometimes go quiet. A quick push on the front panel of the radio would restore the sound level. So far so straightforward. However, I noticed that driving under high-voltage power lines would also sometimes restore the sound level. Any ideas why this would happen? 86.134.43.228 (talk) 20:00, 15 March 2012 (UTC)[reply]

Metal contacts can oxidize, and such an oxide layer can be an insulator, or have semiconductor properties. Pushing the radio may shift the contacts a bit, breaking through the oxide layer. A high voltage can also break through a thin semiconductor layer, and once it does it causes avalanche breakdown: the electrons are accelerated and collide with atoms, which get ionized, creating a chain reaction. That's how Zener diodes above 5.5 volts work; see avalanche diode and avalanche breakdown. Usually there's a hysteresis effect, meaning that the voltage at which the conduction stops will be lower than the one at which it started. That could be an explanation, with the power lines inducing a higher B-voltage over the contacts, enough to break through. Also the micro-weld phenomenon seen with coherers could be involved. In general, it would be some thin insulating layer that can withstand the 12 volts over the contacts but breaks down at a higher potential. That's my best guess. 84.197.178.75 (talk) 21:25, 15 March 2012 (UTC)[reply]
The following explanations are much more likely: the strength of radio waves often changes dramatically near and under high-voltage power lines. Usually the signal strength falls under power lines, but it can also increase. AM radios incorporate an automatic volume control (AVC) system (the more correct term is automatic gain control) so you don't notice the change as you tune from one station to another, move around, go under power lines, etc. My guess is that there is a bad solder joint in an area affecting the AVC. When the radio passes under the power lines, perhaps the change in signal causes a sufficient change in voltage in the AVC circuit to overcome the oxide layer in the faulty solder joint. FM radios often incorporate a mute circuit, as without it you get a full-volume blast of noise when tuning between stations. Maybe the mute circuit is affected by a crook solder joint, making it mute at too high a signal strength, and the change in signal when passing under power lines is overcoming it. Keit124.178.61.156 (talk) 00:40, 16 March 2012 (UTC)[reply]

Hydrogen scattering length density?

I was discussing SANS this morning with another grad student, and realized I don't actually know the answer to this question myself: Does anyone have a simple answer for why hydrogen has a negative scattering length density? The article says "neutrons deflected from hydrogen are 180° out of phase relative to those deflected by the other elements", but that's purely phenomenological. I know it's quantum mechanical in origin, but beyond that I don't have a really good grasp of why this is the case. It's slightly counterintuitive to me that something akin to a scattering cross-section would be negative. I've also looked over the article on neutron cross sections, but it in turn just references back to the scattering length density article. Any thoughts? (+)H3N-Protein\Chemist-CO2(-) 21:55, 15 March 2012 (UTC)[reply]

To what extent is a fermion's position part of its quantum state for the purposes of Pauli exclusion?

Recently this controversy regarding a Brian Cox (physicist) lecture was brought to my attention. Although it is only touched on briefly by the many people objecting to May's interpretation of the Pauli exclusion principle, it is generally agreed that position is part of an electron's quantum state. But to what extent is that so? For example, two electrons orbiting the same helium nucleus are forced into different spins because they are close enough together, and similar things cause Pauli exclusion in much larger molecular orbitals. But how far apart do two electrons need to be before they can otherwise both exist in the same quantum state? Npmay (talk) 22:07, 15 March 2012 (UTC)[reply]

To do this correctly, you need to solve the wave function for interacting electrons, which is very hard. (Why is it hard? Because the potential energy is not constant - much like any non-quantum n-body problem - only, also add the complexity of quantized states.) If you take your ordinary quantum mechanics textbook, they'll walk through the solutions for a single electron around a highly-ionized atomic nucleus; and usually, they'll assume the potential energy function for a stationary, electrostatic potential well. But if you have multiple moving charged particles, you can't do this; the math becomes quite difficult. If you'd actually like to work it out, I can recommend several good texts to walk you through the math - but let's be honest: physics students (who are very smart people) usually spend something like a full year working the basic mathematics that describes the quantum-mechanically correct electron orbit, during the course of a two or three semester advanced physics class, and still do not even solve for two electrons. So, the probability that we can summarize this quickly or easily is very low.
If you're looking for a one-line answer, though, let's phrase it this way: "The farther apart the electrons, the greater the probability that they are non-interacting." Quantized states notwithstanding, electron-electron interactions are modeled by a Coulomb potential, whose strength falls off as the inverse of distance. Nimur (talk) 23:00, 15 March 2012 (UTC)[reply]
So, the extent to which the electrons interact, which is proportional to the strength of the electromagnetic force in accordance with the inverse square law, determines whether they are in the same position for the purposes of being in the same quantum state? That would make some sense. It would also resolve the controversy in that distant electrons only have a tiny but nonzero probability of being subject to Pauli exclusion. Is that good enough to avoid the math details? Npmay (talk) 23:29, 15 March 2012 (UTC)[reply]
Electromagnetic interaction doesn't really have anything to do with it—in everything that I wrote below, it's irrelevant whether the fermions are electrons or neutrinos or (hypothetical) particles that don't interact at all. -- BenRG (talk) 23:59, 15 March 2012 (UTC)[reply]
The key point is that there's one wave function for the whole system, not one per particle. In a system of two spinless identical fermions confined to a line segment, you can think of the wave function as defined on a square whose corners are "both fermions at the far left", "fermion A at the far left and fermion B at the far right", "both fermions at the far right", and "fermion A at the far right and fermion B at the far left". (In fact it's not fair to give the particles labels since they're indistinguishable, but I can ignore that here, so I will.) The exclusion principle says that the wave function is zero at all points that correspond to the fermions being in the same place, which in this case is the diagonal line from "both fermions at the left" to "both fermions at the right". Since the wave function is continuous, it also approaches zero as you approach that diagonal, but there's no particular bound on how large it can be except exactly on the diagonal. The exclusion principle doesn't make any difference when the wave function is zero (or nearly zero) near the diagonal—in other words, when there's no (significant) chance that the fermions are near each other.
For spin ½ particles (like electrons) you can use four copies of the square, one for "both particles spin-up" and so on. The diagonals in the two squares where the particles have the same spin are zero, but the diagonals in the two squares where they have different spins don't have to be zero.
Regarding Cox's lecture, see WP:Reference desk/Archives/Science/2011 December 18#Pauli exclusion principle and speed of light. His words can be interpreted in various ways, but basically he was just wrong. I mostly agree with Sean Carroll's blog post, but even he seems to believe that every quantum object is spread out over the entire universe, an idea which I mocked in my last post to that old Ref Desk thread. -- BenRG (talk) 23:59, 15 March 2012 (UTC)[reply]
How do you decide which particles to include in the "complete system" wave function? I own some beachfront electrons in Tucson and I feel that their interactions should be included in the wave function for your two-particle system. Clearly, there must be some sanity in deciding when a particle is "far enough away" that it no longer matters. If this criterion isn't based on the magnitude of the potential-energy function of the interaction (i.e., electrostatic potential, for an electron-electron interaction), then what else would it be? Nimur (talk) 00:11, 16 March 2012 (UTC)[reply]
True, you need some criterion to separate system from environment. But electromagnetism has nothing to do with the Pauli exclusion principle, so I don't think it's relevant here. I had two particles in my system because one wouldn't be enough and three would be an unnecessary complication. The two particles are isolated from all outside influence because it's my thought-experiment and I say they are. Electromagnetism is relevant if you're specifically talking about atomic orbitals, but that's complicated enough (as you said) that I couldn't have given anything like the answer I did. -- BenRG (talk) 00:54, 16 March 2012 (UTC)[reply]
That doesn't make sense to me. Two adjacent helium atoms have between them two pairs of electrons, each pair of which is in the same quantum state except for its position. If their electromagnetic interaction determines whether they are in the same quantum position as well when they are near, then what determines whether they are in the same quantum position as well when they are further away? Npmay (talk) 01:11, 16 March 2012 (UTC)[reply]
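BenRG's picture of two spinless fermions on a line segment can be sketched numerically. A minimal illustration, using the first two particle-in-a-box modes as the single-particle states (an assumption for concreteness, not anything specified in the discussion): the antisymmetrized wave function vanishes exactly on the "both fermions in the same place" diagonal, but is generally nonzero away from it.

```python
import numpy as np

# Two identical spinless fermions in a 1-D box of length 1.
# Antisymmetrized two-particle wave function built from the first
# two particle-in-a-box modes phi_n(x) = sqrt(2) sin(n pi x):
#   psi(x1, x2) = [phi_1(x1) phi_2(x2) - phi_1(x2) phi_2(x1)] / sqrt(2)

def phi(n: int, x: np.ndarray) -> np.ndarray:
    return np.sqrt(2.0) * np.sin(n * np.pi * x)

def psi(x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    return (phi(1, x1) * phi(2, x2) - phi(1, x2) * phi(2, x1)) / np.sqrt(2.0)

x = np.linspace(0.0, 1.0, 201)
X1, X2 = np.meshgrid(x, x)
P = psi(X1, X2)

# The amplitude is zero on the diagonal x1 == x2 (Pauli exclusion)...
print(np.max(np.abs(np.diagonal(P))))
# ...but there is no special suppression away from the diagonal.
print(np.max(np.abs(P)))
```

Note that nothing here depends on the particles carrying charge; the diagonal zero comes purely from antisymmetrization, which matches BenRG's point that electromagnetism is not what drives the exclusion principle.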


March 16

Meteorology question

Reading through the Dodge City, Kansas National Weather Service forecast discussion today I came upon something that somewhat confuses me (not something that happens often being a meteorology student). It says (I apologize in advance for the all caps, but that's what NWS products use) "FRIDAY EVENING COULD BRING MORE WIDELY SCATTERED CONVECTION HOWEVER AS THE LEADING EDGE OF A LEFT FRONT QUADRANT JET MAY PRODUCE A THERMALLY INDIRECT VERTICAL CIRCULATION NEAR THE OKLAHOMA LINE IN THE EVENING."[9] The part that confuses me is the part about "the leading edge of a left front quadrant jet may produce a thermally indirect vertical circulation", as this is not a concept I have come across before. I also seem to have seen something related to this on the evening TV weather forecast here (2:30 into the video). First, what does the forecast discussion part mean? Second, what is the meteorology behind it (i.e. how does the part that's confusing me cause the convection mentioned in the first part)? Thanks in advance, Ks0stm (TCGE) 00:09, 16 March 2012 (UTC)[reply]

I am not sure, but I think "jet" here refers to a front from the jet stream mixed in vertically from downward convection. Npmay (talk) 01:15, 16 March 2012 (UTC)[reply]

Can a black hole also be, or contain, a neutron star?

If so, would that be ascertainable? Aside from a certain mass range, what else if anything might give evidence for it? Thanks, Rich Peterson198.189.194.129 (talk) 00:33, 16 March 2012 (UTC)[reply]

Sort of. Many black holes would have been neutron stars if they were less massive. You cannot ascertain anything about the contents of a black hole directly, but you can infer quite a bit about its mass and former composition from the remnants of its formation. As for the matter in a black hole which was there upon its formation, most of it is in a frame of reference where it is a very hot and compressed quark-gluon plasma, I believe, but I'm not sure, and nobody really knows what the physical state of a singularity is. Everything that falls into the black hole even a moment after its formation is, in its own frame of reference, trapped in a state of being continually stretched and heated. Npmay (talk) 01:07, 16 March 2012 (UTC)[reply]
I remember reading that large black holes could exist without being very dense. It was a popular science magazine.(not Popular Science)198.189.194.129 (talk) 01:10, 16 March 2012 (UTC)[reply]
Supermassive black holes under a string-theoretical interpretation can be less dense than ordinary matter. There is a discussion of this in Fuzzball_(string_theory)#Physical_characteristics. Npmay (talk) 01:13, 16 March 2012 (UTC)[reply]
Neutron stars often spin fast, could that give the black hole containing it an angular momentum that we could observe? Or could "Hawking radiation" be affected by the nature of the stuff inside? Thanks, Richard Peterson198.189.194.129 (talk) 01:21, 16 March 2012 (UTC)[reply]

As for the density, the claim that black holes aren't very dense is based on a measure of average density, with the region contained by the event horizon considered the volume. Not only can we not see past the event horizon, we can't sense what's beyond it through any means. That also means the gravity field of the black hole beyond the event horizon is the same no matter the internal distribution of mass. As for whether you'd see evidence in the Hawking radiation, I have no idea, but physicists can't seem to agree on how you'd "read" the radiation anyway. Someguy1221 (talk) 01:26, 16 March 2012 (UTC)[reply]

Could cold dark matter have accumulated in orange dwarf stars?

Thanks again. Richard Peterson198.189.194.129 (talk) 03:16, 16 March 2012 (UTC)[reply]

I don't have an answer for you, but why do you specifically pick that one kind of star? Someguy1221 (talk) 03:21, 16 March 2012 (UTC)[reply]
If your dark matter does not interact with normal matter or itself, then any that falls into a star should just come out the other side and not stop. So it would be difficult to accumulate, as the dark matter would have to lose momentum to stay in the star. Graeme Bartlett (talk) 04:56, 16 March 2012 (UTC)[reply]