Wikipedia:Reference desk/Archives/Science/2008 November 13

From Wikipedia, the free encyclopedia
Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.

November 13

anaerobic motor/propulsion

I would like to know whether an anaerobic motor/propulsion really exists. To me, anaerobic refers to organisms. Furthermore: the described process releases oxygen besides heat. Can such a process/reaction be called anaerobic? Clear question: can an oxygen-producing reaction be called anaerobic, or what can it be called instead?


thanks--Stefanbcn (talk) 00:38, 13 November 2008 (UTC)

Anaerobic - means "not needing air" (basically) - so any motor that doesn't need air (like an electric motor - or a clockwork motor) is "anaerobic". A coal-fired steam engine - or a gasoline-powered car - is "aerobic": it needs oxygen from the air to work. A motor that used some chemical process to produce oxygen and burned that to make motion would probably be called "anaerobic" too. SteveBaker (talk) 00:51, 13 November 2008 (UTC)

Artificial holographic sun

Using sulfur lamps and rotating mirrors, would it be possible to create a false window with a nice holographic landscape, with a completely realistic rectangle of direct false "sunlight" striking the floor/walls? (This is for people with just a brick wall to look at and no direct sunlight.) And could mirrors make it plane-parallel? —Preceding unsigned comment added by Trevor Loughlin (talkcontribs) 04:34, 13 November 2008 (UTC)

Well, it has been done - some really expensive flight simulators have used laser-projected displays that are bright enough to simulate natural sunlight at real-world candela values - and I've worked on them.
However, it's freakishly expensive and insanely dangerous. Remember that if you stare into the real sun - it's so bright that you will damage your eyes if you don't look away within a few seconds. Now consider a display device (a projection TV or something) that put out enough light to produce that same effect - plus enough to project all of the rest of the world at natural brightness levels too. In fact, it would need even more energy than that because it would have to scan over the scene at least 60 times a second - and the screen itself would not be perfectly, 100% reflective - so the display is putting out a lot more energy than the sun - within the small range of angles that covers the scene. If you think for a moment about the amount of heat that the sun puts out as it shines onto your skin - the display would probably have to chuck out that much heat too!
The result would have to be an amazingly powerful laser or something very similar. If you were to happen to catch a glimpse of the light from the laser itself (rather than its dispersed, reflected image) - you'd be blinded before you had a chance to blink or look away. When I worked with such a system a few years ago - everyone who entered the room when it was turned on had to go through a 30 hour laser safety course - the doors had to have automatic locking devices to stop people coming in when the laser was turned on - and there were all manner of handrails and such stopping people from going where the laser was operating. We nicknamed the gigantic water-cooled laser "The Death Ray of Ming the Merciless" because it looked exactly like something from the lair of a 1950's SciFi super-villain.
There is another issue here - which is that for an outdoor scene to look completely real (ie not like a super-high-def TV screen) - the light has to appear such as to cause your eyes to focus at the correct depth. For an outdoor scene where nothing comes within (say) about 30 feet of the viewer, it's enough to 'collimate' the light so it appears to come from a source that's infinitely far away. This is tough to do. You either need a dome to project it onto that's at least 30' away - or a curved mirror such that the path from the laser projector to the eye is at least 30 feet - or you need some large, expensive glass lenses that are the size of your window to perform the same task. The difficulty with all of those things is that the scene only looks truly real when your head is at the "designed eyepoint" of the system - if you step a few feet to the side, the illusion is destroyed. It's kinda possible to correct for that - but the viewer needs to be wearing some kind of tracking device - and only one viewer gets a perfect view at a time.
So, yes - it's certainly possible - I've seen it done and the result is highly realistic and extremely compelling. But I don't think we'll see it happening as an entertainment device anytime soon.
SteveBaker (talk) 15:23, 13 November 2008 (UTC)
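A rough back-of-the-envelope check on the energy argument above (the irradiance, patch size, and efficiency figures here are my own illustrative assumptions, not numbers from the thread):

```python
# Assumptions (mine, not from the thread): direct solar irradiance of about
# 1000 W/m^2, a simulated sunlit patch of 1 m x 2 m on the floor, and optics
# that deliver only half the projector's light output to that patch.
SOLAR_IRRADIANCE = 1000.0   # W/m^2, typical clear-sky direct sunlight
PATCH_AREA = 1.0 * 2.0      # m^2 of fake "sunlight" on the floor
OPTICAL_EFFICIENCY = 0.5    # fraction of emitted light reaching the patch

light_power = SOLAR_IRRADIANCE * PATCH_AREA      # watts of light needed
input_power = light_power / OPTICAL_EFFICIENCY   # watts into the projector

print(light_power, input_power)   # 2000.0 4000.0 - space-heater territory
```

Even with these fairly conservative guesses, the device ends up dissipating kilowatts into the room, which is consistent with the "it would have to chuck out that much heat too" point.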

I don't mean to bring the topic back up, but what if, instead of a projector, it was an LED array? LEDs don't emit heat, just light, I thought. (talk) 23:37, 13 November 2008 (UTC)
They aren't really bright enough to create the effect of sunlight. Even packed tightly together...I don't think it would work. SteveBaker (talk) 16:04, 14 November 2008 (UTC)

Sulphur lamps give the full spectrum of sunlight; LEDs can't, and have nowhere near the efficiency. I have not heard of the laser device, it sounds fascinating. My idea would be to have a pane of rapidly rotating tapered interleaved reflective spirals with a plasmonic projector changing the image on the turn of the spirals to create the hologram. That way your head would be torn off like putting it into an industrial dough mixer well before you managed to blind yourself looking into the projectors at the top and bottom of the machine. Another method would be projecting onto translucent hemispheres over a hole on a spinning washing machine cylinder. I will get this second hand, though I will have to be careful not to get too close to it.

A laser projector is actually three lasers, one red, one green and one blue - the light from them is combined into one colored beam with some mirrors and the resulting light is aimed into a pair of spinning hexagonal prisms - with mirrors on each of the 6 facets. One mirror spins horizontally at (say) 60/6 = 10 rotations per second - and that causes the lasers to scan vertically downwards and then jump back 60 times a second. The second mirror spins at 1000 times that speed(!) and causes the lasers to scan from left to right and then jump back one thousand times in every 1/60th second. Modulating the light from the lasers can be managed in a variety of ways. The result is that the laser scans a rectangular area exactly like the electron beam in a TV set and produces a very bright, full-color picture. Aiming that onto the inside of a large dome which is 'painted' with microscopic spherical glass beads results in a really compelling image of whatever TV/video picture you care to provide.
But as I said - such systems are huge, hideously costly, and vastly too dangerous to be "consumer" equipment. SteveBaker (talk) 16:04, 14 November 2008 (UTC)
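The scan-rate arithmetic in the description above can be checked with a quick sketch (the facet count, frame rate, and line rate are taken from the post; everything else follows from them):

```python
FACETS = 6            # mirrored facets per hexagonal prism (from the post)
FRAME_RATE = 60       # full vertical sweeps (frames) per second
LINES_PER_FRAME = 1000

# Each facet sweeping past the beam produces one scan, so the prism's
# rotation rate is the sweep rate divided by the number of facets.
slow_prism_rps = FRAME_RATE / FACETS                    # vertical-scan prism
fast_prism_rps = FRAME_RATE * LINES_PER_FRAME / FACETS  # horizontal-scan prism

print(slow_prism_rps)   # 10.0 rotations per second, matching 60/6 in the post
print(fast_prism_rps)   # 10000.0 - exactly 1000x the slow prism, as described
```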

Bird identification

Hello, I was at Taronga Zoo today and I took a photo of this little sparrow-ish thing. I would be very grateful if someone could help me with identifying its species. Thanks! (talk) 04:43, 13 November 2008 (UTC)

That looks a bit like a silvereye to me. Tonyrex (talk) 06:02, 13 November 2008 (UTC)

rectilinear perspective

Do people actually see in rectilinear perspective, or is this just a convention of western art? For example, I've seen Looney Tunes cartoons in which when viewing a tall rectangular building from the bottom, the lines start curving together at the top (rather than pointing straight at the vanishing point). --VectorField (talk) 06:11, 13 November 2008 (UTC) I guess an example of what this would look like can be found at fisheye lens. --VectorField (talk) 06:18, 13 November 2008 (UTC)

See also Perspective (visual); for Cartesian rectilinear perspectives and variations on this, see Perspective (graphical). In traditional Japanese and Chinese art, the perspective is constructed differently, more like parallel perspective. Julia Rossi (talk) 10:05, 13 November 2008 (UTC)
It's more than just convention. The idea is that the canvas should be like a window through which you can see the three-dimensional scene, and if you work that out in detail for a flat canvas (as described in Perspective (graphical)) you get the rules of perspective. It's the flatness of the canvas that matters, not anything going on inside the eye or brain. I don't think that people see in rectilinear perspective (I don't think the idea even makes sense—I think it's an instance of the homunculus fallacy) but the theory of perspective makes no such assumption. It does, however, assume that you stand in the correct location and have only one eye.
If you work out the rules for a cylindrical canvas instead, you get cylindrical perspective. (More precisely, if you follow the correct perspective rules for a cylindrical canvas with the viewer in the center and then unroll it, you get what's normally called cylindrical perspective.) The advantage of a cylindrical canvas is that you can get a wider field of view. A flat canvas is limited to a 180° FOV, and as you approach that limit the canvas size goes to infinity or the distance from canvas to viewer goes to zero, both of which are inconvenient. If you stand at the wrong distance from a large-FOV flat image (and any convenient distance will be wrong) it will look very distorted. A cylinder doesn't have that problem. Any small enough part of a cylinder is roughly flat, and so an unrolled cylindrical perspective is suitable for a very wide image that's meant to be looked at only a bit at a time (walking from side to side). A wide or tall background in a cel-animated TV show will normally be drawn in cylindrical perspective since it's designed to be panned over. -- BenRG (talk) 14:45, 13 November 2008 (UTC)
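The blow-up of a flat canvas as the field of view approaches 180° can be made concrete with a small sketch (the viewing distance of 1 unit is an arbitrary choice; the geometry is just the tangent relation for a head-on flat canvas):

```python
import math

def canvas_width(viewing_distance, fov_degrees):
    """Width of the flat canvas needed to cover a horizontal field of view
    when viewed head-on from the given distance: w = 2 * d * tan(fov / 2)."""
    return 2 * viewing_distance * math.tan(math.radians(fov_degrees) / 2)

for fov in (60, 120, 170, 179):
    print(f"{fov:3d} deg FOV -> canvas {canvas_width(1.0, fov):8.1f} units wide")
```

The width diverges as the FOV nears 180°, which is exactly why wide panoramas are drawn on (unrolled) cylinders instead of flat canvases.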

Oooh! Good question!
Perspective of some kind is a necessary part of any practical visual system. We really do see things that way. The different kinds of perspective come about through the shape of the 'screen' onto which the scene is projected - and the nature of the lens and how it gathers the light.
  • In an idealised 'pin-hole' camera, the film (or digital imaging device) is flat - and since light travels in straight lines through a notional zero-sized pin-hole at the front of the camera - and then onto the flat surface inside, all straight lines in the world outside project into straight lines in the image - and we have perfect rectilinear perspective.
  • In a practical camera - the lens isn't a pinhole and it has to bend the light to make it focus properly - that results in a non-linear mapping of real world onto the flat film plate - and depending on how much the light is bent, straight edges turn into curves and you no longer have rectilinear perspective.
  • In our eyes, the imaging surface (the retina) isn't flat - so straight lines in the real world (like the sides of a tall building) don't project into straight lines on the retina. However, our visual system isn't a matter of a bunch of pixels on the retina being "absorbed" somehow by the brain. It's MUCH more complicated than that. One of the things our visual system does is to compensate for those complicated curves so that we "see" straight lines where the lines are straight...we're not aware of the weird mapping that goes on because of a non-zero pupil diameter and a non-flat retina. We are aware at some level that things seem "smaller" the further they are away - but we're also unconsciously correcting for that - so we don't think that a car is tiny because it's further away. The mathematical fact of perspective has been converted by our visual system into something that takes on different meanings depending on the context in which we consider them. There are several optical illusions that play on that to demonstrate that we don't "see" what is really there.
  • In art, the painting itself is generally flat - and artists generally want to give the impression that the rectangular frame of the painting is like a hole cut into the wall and the art is like an image coming through that hole - which is then perceived by our eyes. So then the mathematically "correct" thing for the artist to do is to pretend that the painting is a pinhole camera image and use rectilinear perspective - and then let our eyes process that image as if it were really light coming through a hole in the wall. Because that image then enters the eye in the same way that the light from a real object would - and the painting "looks real" to us (well, not quite because of issues of depth of focus and "collimation" of the resulting light).
  • In 3D computer graphics (my speciality - I'm a graphics programmer for the games industry), the screen is flat - so we use rectilinear perspective, no matter how wide-angle the "virtual camera" becomes, because the math is simplest that way. Especially - we want to map straight lines onto straight lines - because our graphics algorithms are much simpler that way. And (fortunately) it all looks right for the same reason that art looks right. But we do see "fish-eye distortion" in computer graphics - and this is true even though the computer is translating straight lines into straight lines (actually, the graphics hardware is incapable of directly drawing curved edges - rectilinear perspective is built into the 3D circuitry at a fundamental level!). We perceive wide field-of-view images as distorted (and some people will even go so far as to claim that everything is curved even when that is a physical/electronic impossibility!) The reason for that is that we are taking an image from a 'virtual camera' with a wide field of view (say 120 degrees) and presenting it on a screen that only subtends (perhaps) 30 degrees at your eye. This is not a natural thing - and our brains somehow interpret this as if the image were being seen through a distorting "fish eye" lens when all it's really seeing is an idealised pin-hole camera image. Our mental compensation for that imagined distortion (which is not present in the mathematically "correct" rectilinear perspective) results in a strong impression of curvature.
A similar problem occurs with narrow-angle images (eg taken with a telephoto lens) where the relative lack of perspective foreshortening leaves us with a wrecked sense of distance. Film makers love to use this. In a romantic scene with the moon in the background, they'll pull their camera WAY back from the actors - then zoom WAY into them - and the resulting screwup of our mental idea of perspective makes the moon look HUGE. This is used in action shots too - when the hero is running away from the burning car - which explodes behind him, hurling him towards the camera - they place the camera WAY back from the car - put the actor fairly close to the camera and a VERY safe distance from the car...then the telephoto lens screws up the perspective for us - and we think the car is really close. It's still very close to being strictly rectilinear perspective - but our brain's inability to compensate for the distortion that results from the mismatch of the field of view means that we see things in a way that they really are not.
SteveBaker (talk) 15:03, 13 November 2008 (UTC)
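The pinhole-camera claim in the first bullet above - that straight world lines project to straight image lines - can be verified with a tiny sketch (the sample points are arbitrary):

```python
def project(p, focal=1.0):
    """Ideal pinhole projection onto a flat image plane at z = focal."""
    x, y, z = p
    return (focal * x / z, focal * y / z)

def collinear(a, b, c, eps=1e-9):
    """True if three 2D points lie on one straight line (cross-product test)."""
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])) < eps

# Three points along a straight 3D edge that recedes from the camera.
edge = [(1.0, t, 2.0 + t) for t in (0.0, 1.0, 2.0)]
image = [project(p) for p in edge]
print(collinear(*image))   # True: a straight world line stays straight on film
```

The same check fails for fisheye-style mappings, which is the mathematical content of "rectilinear".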
I think it's interesting that how we see things is generally governed by how we expect to see them. We know what size a car is, so we see it as being that size regardless of how far away it is (to use your example). However, if you're in an unusual situation, for example seeing cars on a road when looking out of the window of a plane, your brain doesn't really know what to expect and you need to consciously work out what you're seeing and you do notice the fact that cars are all so tiny (the standard cliché is to compare them to ants). --Tango (talk) 15:33, 13 November 2008 (UTC)
I'm going to take issue with the idea that our brains "compensate" for the curved shape of the retina. There's no one in there looking at the image projected on the retina. All the brain gets is a bunch of electrical signals down the optic nerve. If the retina were flat and the distribution of cones and rods were tweaked accordingly (to keep the visual acuity fixed), you'd get exactly the same collection of signals going to the brain (except in slightly different focus). I don't think the brain "knows" the shape of the retina—it doesn't need to know.
To put it another way, once you've projected onto one canvas you can re-project onto any other without needing the original scene as a reference. Fisheye Quake and PanQuake take advantage of that to produce realtime 3D in a mathematically accurate fisheye/cylindrical projection, by first rendering to a cube map and then rearranging the pixels. You could do the same thing in hardware these days (using one big rectangle and a pixel shader for the second pass). So, although modern graphics cards aren't designed for non-rectilinear projections, they can certainly produce them, probably at full frame rate, and it would be a pretty neat gimmick to have in a new game. So get cracking, Steve. :-) -- BenRG (talk) 21:22, 13 November 2008 (UTC)
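A minimal sketch of the cube-map-then-reproject idea BenRG describes - mapping each fisheye pixel to a view direction and then to the cube face it samples from. (The equidistant fisheye model below is my own assumption; real implementations may use a different fisheye mapping.)

```python
import math

def fisheye_to_direction(u, v, fov=math.pi):
    """Map normalized fisheye coordinates (u, v in [-1, 1]) to a 3D view
    direction, using an equidistant model (angle proportional to radius)."""
    r = math.hypot(u, v)
    if r > 1.0:
        return None                    # outside the fisheye image circle
    theta = r * fov / 2                # angle away from the view axis (+z)
    phi = math.atan2(v, u)
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def cube_face(direction):
    """Pick the cube-map face a view direction samples from (dominant axis)."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if az >= ax and az >= ay:
        return "+z" if z > 0 else "-z"
    if ax >= ay:
        return "+x" if x > 0 else "-x"
    return "+y" if y > 0 else "-y"

# The center of the fisheye looks straight ahead, so it samples the front face.
print(cube_face(fisheye_to_direction(0.0, 0.0)))   # +z
```

A real renderer would do this per pixel (today, in a fragment shader), sampling the pre-rendered cube map along each computed direction.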
To put this simply, our eyes use elliptic geometry and a flat canvas uses Euclidean geometry. Trying to put the same picture on both will require some warping. — DanielLC 15:56, 13 November 2008 (UTC)
The extent to which rectilinear perspective correctly replicates a 3D view, at least for a fixed observer and small field of view, can be judged by the success of trompe-l'oeil illusions. Gandalf61 (talk) 16:02, 13 November 2008 (UTC)
Recent coverage of drawing on curved canvases here. —Preceding unsigned comment added by (talk) 19:21, 13 November 2008 (UTC)

Some painters used "Chinese perspective", where things are placed higher in the picture to indicate distance or which are more axonometric, and others used "landscape perspective", where distant objects are rendered in more subdued hues - all without the mathematical device of our modern geometric perspective. It is a modern mathematical invention. Edison (talk) 19:54, 13 November 2008 (UTC)

And, of course, painters like Picasso don't bother much about the rules of perspective at all. That's how it is with geniuses - they break the rules and sometimes make new ones. —Preceding unsigned comment added by (talk) 01:10, 14 November 2008 (UTC)

Ceiling fan

Hello. If a ceiling fan is switched on at the wall but it is actually off, does it still use electricity? Sorry if it's a stupid question. —Preceding unsigned comment added by AreDeeCue (talkcontribs) 13:49, 13 November 2008 (UTC)

No. The circuit has to be closed to use any noticeable amount of electricity (I'm ignoring cases such as worn out wires sparking against nails in the attic and such). Any switch anywhere in the line that opens the circuit will cause the flow of electricity to stop. -- kainaw 13:58, 13 November 2008 (UTC)
Your question implies that there are two (serial) switches to operate the fan? If this is the case, both switches must be on for the fan to work. Therefore the fan is not using electricity. Sometimes the wall-mounted control has an LED to indicate that it is on. However this LED uses a trivial amount of electricity. Axl ¤ [Talk] 14:01, 13 November 2008 (UTC)
The ceiling fans that use an infrared remote (to avoid the need to add house-wiring for houses that don't have a second switch and circuit) DO use a tiny amount of electricity when they're turned on at the wall but off using the remote. Some piece of circuitry has to be powered in order to pay attention to the InfraRed receiver. It's pretty tiny - but with a bajillion appliances around the home all eating small amounts like that - it does add up. The kind of ceiling fan that has a pull-cord to turn it on and off shouldn't be consuming any electricity at all when it's turned off there instead of at the wall. SteveBaker (talk) 14:26, 13 November 2008 (UTC)
The OP might also consider reading Standby power.--Lenticel (talk) 00:34, 14 November 2008 (UTC)
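To put a rough number on the "it does add up" point about standby draw in remote-controlled fans: the wattage, device count, and electricity price below are illustrative guesses, not measurements.

```python
# Illustrative assumptions: ~1 W standby draw per IR-receiver circuit,
# 20 always-plugged-in devices around the home, electricity at $0.12/kWh.
STANDBY_WATTS = 1.0
N_DEVICES = 20
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.12

kwh_per_year = STANDBY_WATTS * N_DEVICES * HOURS_PER_YEAR / 1000
cost_per_year = kwh_per_year * PRICE_PER_KWH
print(f"{kwh_per_year:.1f} kWh/year, roughly ${cost_per_year:.2f}/year")
```

Each individual device is negligible, but the household total is on the order of a hundred-plus kWh per year under these assumptions.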

Echinoderm diversity

Estimations of the diversity of extant echinoderms vary widely, from 6,000 to 10,000 species, and even reliable sources differ a lot. Which is the most authoritative source on echinoderms, and what figure does it give? Thanks. Leptictidium (mt) 14:56, 13 November 2008 (UTC)

dangerous situation arising from killing germs?

If you kill all the germs in a place germs like to live, won't it just create natural evolutionary pressure toward germs that aren't affected by death? —Preceding unsigned comment added by (talk) 15:22, 13 November 2008 (UTC)

Not being affected by death is tricky, but antibiotic resistance certainly arises in this way. Algebraist 15:47, 13 November 2008 (UTC)

Well - I don't think they can be "unaffected by death" - that's kinda silly! But I guess what you mean is that these critters are not killed by whatever means you've been using to kill off the others. If so - then yes! This happens all the time. In hospitals particularly, there are antibiotics everywhere - in the air and on all of the surfaces. Bacteria are mostly killed by this stuff being everywhere - but one in a trillion (maybe) survives - and this causes an evolutionary effect that results in types of bacteria that are immune to all of the common antibiotics. It is therefore necessary for the drug companies to continually come up with new antibiotics that the bacteria have not yet been exposed to - and which can therefore kill them...until a few more years have gone by. Doctors also recognise this and they don't prescribe the newest and most powerful antibiotics until they know that the older (and by now, better-resisted) ones have failed. This is an attempt to keep the newer kinds of treatment in reserve for the most resistant bugs. It's not just 'germs' either - strains of rats and mice that are immune to the common kinds of rat and mouse poisons are also appearing. There is no doubt that evolution does this all the time. SteveBaker (talk) 16:52, 13 November 2008 (UTC)
Also, it depends on just how you kill them. For example, very few living things can survive a good autoclaving. Naturally, a hospital bug that could stand it would have an enormous evolutionary advantage — but that matters little if there are no bugs that would stand even a chance of doing that in the first place. Simple broad-spectrum disinfectants like hypochlorite or hydrogen peroxide are similar: some organisms do resist them better than others, but in sufficient concentrations and quantities they kill pretty much everything indiscriminately.
A common trait of such non-specific disinfection methods is that they can only be used on non-living objects, or at best can only be applied externally to e.g. localized areas of skin. That's because, if they were applied e.g. to the entire human body in concentrations sufficient to kill bacteria, they'd kill the human too. Antibiotics, on the other hand, can be administered internally, because they specifically kill bacteria while not being toxic to humans. However, this very specificity also allows bacteria to develop resistance to them: since a useful antibiotic must target some biochemical feature specific to bacteria, rather than indiscriminately killing every living cell, it's usually possible for bacteria to develop mutations such that the specific features the antibiotic attacks are no longer present.
Mind you, it is possible for organisms to develop resistance to even non-specific poisons, at least to some extent, over sufficiently long timescales. For example, ethanol is toxic to most living cells at sufficient concentrations, but most species, including humans, have developed some degree of resistance to it, since it's so commonly found in nature e.g. as a product of fermentation. An even more striking example is free diatomic oxygen: to anaerobic organisms, it's as deadly as ozone or fluorine, but over the 3 billion years since the evolution of photosynthesis, most living things have developed elaborate mechanisms for tolerating and, eventually, even making use of it. Even so, oxidative stress remains a significant source of cellular damage in organisms, including humans, that are exposed to high concentrations of oxygen. —Ilmari Karonen (talk) 20:02, 15 November 2008 (UTC)
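The selection effect described in this thread can be illustrated with a toy, deterministic model. The kill and growth rates below are made-up illustrative numbers, not measured ones.

```python
# Toy model of resistance spreading under an antibiotic. Assumptions (all
# illustrative): the drug kills 99.9% of susceptible cells each generation
# and none of the resistant ones; every surviving cell then doubles.

susceptible = 1e9   # start with a billion susceptible cells...
resistant = 1.0     # ...and a single resistant mutant

for gen in range(1, 11):
    susceptible = susceptible * (1 - 0.999) * 2   # decimated, then doubles
    resistant = resistant * 2                      # untouched, doubles freely
    fraction = resistant / (susceptible + resistant)
    print(f"generation {gen:2d}: resistant fraction = {fraction:.3g}")
```

Within ten generations the lone resistant mutant's descendants dominate the population completely - the "one in a trillion survives" effect in miniature.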

What do you call this camera trick?

OK, I'm not sure how to describe something visual in words, but I will try. In movies and TV shows, they have this camera trick where the object in the forefront (usually a person) stays stationary but the background somehow moves. It seems like some sort of trick of perspective. This technique is usually used to convey shock, something unexpected or when the person makes a sudden realization. Sorry I can't give a better description, but this technique is used enough that hopefully someone will know what this is named. (talk) 15:32, 13 November 2008 (UTC)

Dolly zoom APL (talk) 15:45, 13 November 2008 (UTC)
Yep, that's it! Thank you. (Comment: So they actually move the camera? Wow.) (talk) 15:58, 13 November 2008 (UTC)
I've always heard it called a "Hitchcock zoom" because he was the one who first popularized it. It requires zooming the camera either in or out and simultaneously moving the camera either backwards or forwards. Done correctly (which is tough), the person in the foreground stays at exactly the same size - but the background zooms in or out. It has the effect of separating the character from their surroundings - which Hitchcock used to great effect to get across emotional state and such. Adding 'rim lighting' or other inconsistent lighting for the character is another way to separate a character from the background that Hitchcock used effectively. SteveBaker (talk) 16:57, 13 November 2008 (UTC)
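The geometry behind the effect can be sketched with a thin-lens/pinhole approximation: keeping the subject's image size constant requires the focal length to grow in proportion to the camera-to-subject distance, while background objects change size. All the numbers below are arbitrary illustrative choices.

```python
def focal_for_dolly_zoom(f0, d0, d):
    """Focal length keeping the subject the same size on film when the
    camera sits at distance d instead of d0 (thin-lens sketch: f ∝ d)."""
    return f0 * d / d0

def image_size(height, distance, focal):
    """Approximate image height of an object in a pinhole/thin-lens model."""
    return focal * height / distance

f0, d0 = 50.0, 2.0        # start: a 50 mm lens with the subject 2 m away
subject_h = 1.8           # the actor, 1.8 m tall
background_h = 1.8        # a same-sized object 20 m behind the actor
bg_behind = 20.0

subject_sizes, background_sizes = [], []
for d in (2.0, 4.0, 8.0):                 # dolly back while zooming in
    f = focal_for_dolly_zoom(f0, d0, d)
    subject_sizes.append(image_size(subject_h, d, f))
    background_sizes.append(image_size(background_h, d + bg_behind, f))

print(subject_sizes)     # constant: the foreground subject never changes size
print(background_sizes)  # growing: the background looms larger and larger
```

The background swelling while the subject stays fixed is exactly the unsettling "Vertigo" look.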
Specifically, Hitchcock developed it for the movie Vertigo to illustrate Jimmy Stewart's character's acrophobia. I think I remember reading in Truffaut's book on Hitchcock that he conceived it in connection with one of his earlier films but was then told it could not be done. --Anonymous, 23:03 UTC, November 13, 2008.
Yep - in an era before computer-controlled cameras and dollies it would have required great skill (and possibly MANY retakes!) to pull off the effect - the dolly would have been pushed by a bunch of guys while the cameraman adjusted the zoom...coordinating those actions to the required degree of accuracy would be nightmarish! You can see why some people would say it was impossible. These days, you could just as easily cheat by standing the actor in front of a blue-screen and zooming into the background with a separate camera - but we have had computer-controlled cameras that could do it since about the time of the first StarWars movie. SteveBaker (talk) 15:51, 14 November 2008 (UTC)
You could do it with an all-mechanical system. One option would be a zoom lens with a custom-cut thread (rather than the standard helical thread) so that a constant-speed rotation of the zoom ring produces the needed zoom rate, at which point a pair of electric motors (one to move the camera around, and one to drive the zoom lens) will give you the effect you want. If you don't have electric motors, you could move the camera on a rack-and-pinion track with the rack driving the zoom lens. I'm sure there are a number of other ways to set up a mechanical linkage between the zoom and the camera motion. --Carnildo (talk) 22:07, 14 November 2008 (UTC)
Sure - I'm sure it could be done - but it wasn't. I believe they also had to coordinate refocussing the camera while zooming and dollying. SteveBaker (talk) 05:45, 15 November 2008 (UTC)

Energy drink ingredients

Do any ingredients in energy drinks other than sugar and caffeine have proven desirable short-term effects? NeonMerlin 15:43, 13 November 2008 (UTC)

You're probably best to just run through the list of ingredients in your favourite energy drink and check the Wikipedia articles. The major active ingredient in guarana is caffeine. Taurine has been shown to have anxiolytic effects in some animal studies, but no effect on human beings has been observed. The NIH has reported that supplements containing ginkgo biloba have no measurable benefit when taken as directed. The list goes on, but you can probably find what you're looking for by following the links from Red Bull, Rockstar, and the others. TenOfAllTrades(talk) 16:25, 13 November 2008 (UTC)

Visual acuity of hawk's eye: why/how?

Hi, is there an article that explains why the hawk's eye has such great visual acuity? I can't find any info on WP on such a fascinating subject. Kreachure (talk) 16:52, 13 November 2008 (UTC)

Maybe you should read the article you linked to, then. Matt Deres (talk) 17:41, 13 November 2008 (UTC)

Yeah, thanks, but I was looking for more information, like the one provided by the bird vision article which I just found. Kreachure (talk) 18:09, 13 November 2008 (UTC)

I think visual acuity is provided by the number of cone cells in the fovea, and presumably by how other bits of the eye work. Hawks presumably have a higher percentage of cones in the fovea than other species. This is just complete assumption. —Cyclonenim (talk · contribs · email) 18:54, 13 November 2008 (UTC)
I understand that some hawks and falcons actually have a slight concavity in the retina which allows for greater focus on distant objects and more visual cells than there would normally be, although I can't remember the details. (talk) 21:56, 13 November 2008 (UTC)


Today I encountered resistors marked 3k9, 4k7, and 100R. Can anybody please tell me the values of these resistances and what type these resistors are? -- (talk) 17:36, 13 November 2008 (UTC)

3900 ohm, 4700 ohm and 100 ohm. It's a funny bit of notation that is standard in the industry, but makes sense easily enough. They're just bog standard resistors as far as I can tell from that information. (talk) 18:56, 13 November 2008 (UTC)
Is there a ref for that notation? It seems a bit uninformative, lacking an indicator of the precision rating like the color coding on resistors has. Edison (talk) 19:49, 13 November 2008 (UTC)
It is mentioned in the Resistor article briefly and also at Electronic color code#Other schemes. The notation used seems to follow BS 1852. Nanonic (talk) 19:56, 13 November 2008 (UTC)
BS 1852 now superseded by BS EN 60062. Latest version seems to be 2005.--GreenSpigot (talk) 13:09, 15 November 2008 (UTC)
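A small sketch of how the letter-and-digit marking scheme decodes: the letter (R, k, or M) both sets the multiplier and stands in for the decimal point. The parsing below follows the examples in this thread; consult BS EN 60062 for the full rules (which also cover tolerance letters this sketch ignores).

```python
import re

MULTIPLIERS = {"R": 1, "k": 1_000, "M": 1_000_000}

def parse_rkm(code):
    """Decode a resistor marking such as 3k9, 4k7, or 100R into ohms."""
    m = re.fullmatch(r"(\d+)([RkM])(\d*)", code)
    if not m:
        raise ValueError(f"not a valid R/k/M code: {code!r}")
    whole, letter, frac = m.groups()
    # Treat the letter as a decimal point: "3k9" means 3.9 thousand ohms.
    digits = int(whole + frac) if frac else int(whole)
    return digits * MULTIPLIERS[letter] / (10 ** len(frac))

for code in ("3k9", "4k7", "100R"):
    print(code, "->", parse_rkm(code), "ohms")   # 3900.0, 4700.0, 100.0
```

The scheme avoids decimal points that could vanish in a smudged print: "3.9k" smeared is ambiguous, "3k9" is not.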

Fructose Malabsorption

Disclaimer: I have no intention of asking for medical advice here. If you think you cannot answer without giving medical advice, please just ignore my questions. If you feel obliged to tell me to see a doctor, please give me a name and address, too, as I have seen many doctors and none of them has even mentioned fructose malabsorption.

One section of the article (fructose malabsorption#Fructose Metabolism) states that fructose is absorbed using GLUT-2, while the rest of the article states it is absorbed by GLUT-5. This seems to be a bit of a contradiction - or can GLUT-2 absorb fructose too, but GLUT-5 is specialized for it?

It is not clear to me from the articles if fructose is normally (without f.m.) absorbed in the small intestine, the large intestine, or both, and if f.m. affects only the absorption in one of those or both. Fructose#Malabsorption gives the medical advice that Exercise can exacerbate these symptoms by decreasing transit time in the small intestine, resulting in a greater amount of fructose being emptied into the large intestine which I know is not true from my own experiences. Fructose not being absorbed and rotting in the large intestine as well as in the small would account for this experience.

Are there many different forms of f.m., and is the absorption via GLUT-2/5 totally absent or only reduced?

Fructose is a small molecule; isn't it also absorbed by pinocytosis, or is the pinocytosed amount too small? Thanks (talk) 19:24, 13 November 2008 (UTC)

As our fructose article states, "The mechanism of fructose absorption in the small intestine is not completely understood. Some evidence suggests active transport, because fructose uptake has been shown to occur against a concentration gradient. However, the majority of research supports the claim that fructose absorption occurs on the mucosal membrane via facilitated transport involving GLUT5 transport proteins. Since the concentration of fructose is higher in the lumen, fructose is able to flow down a concentration gradient into the enterocytes, assisted by transport proteins. Fructose may be transported out of the enterocyte across the basolateral membrane by either GLUT2 or GLUT5, although the GLUT2 transporter has a greater capacity for transporting fructose and therefore the majority of fructose is transported out of the enterocyte through GLUT2." In case this isn't clear, it's essentially saying that fructose is primarily transported from the lumen of the small intestine into the cells lining it by GLUT5, and transported out of those same cells into the bloodstream by GLUT2. - Nunh-huh 19:43, 13 November 2008 (UTC)

Poisoning through the ear[edit]

(This is merely a question of curiosity with no intended "medical" application.) Is it actually possible to poison someone, as King Hamlet was killed, by pouring poison into their ear? If so, what poisons are effective that way and how, physiologically, does the poisoning take place? Was it a common belief in Shakespeare's time that someone could be poisoned through the ear, and would Shakespeare have known if this depiction of poisoning was accurate or not? (talk) 20:33, 13 November 2008 (UTC)

There was a long discussion on this very topic on here last February. -- (talk) 20:52, 13 November 2008 (UTC)
Thanks for the link. The interpretation that it's symbolic (poison in the ear representing evil words) instead of based on medical knowledge is quite plausible; the result of the poison, in which the King suddenly becomes covered in leprous scabs, certainly doesn't seem like a literal description of something that could really happen. (talk) 21:51, 13 November 2008 (UTC)
Also see eustachian tube. A good read on this topic. 21:08, 13 November 2008 (UTC)
Though I agree with the symbolism interpretation discussed previously, I would also point out that perforated eardrums would have been much more common in Shakespeare's day than they are now. Someone with a perforated eardrum would be quite susceptible to poison poured into the ear, because it would pass right through the middle ear and into the throat. --Scray (talk) 03:31, 14 November 2008 (UTC)
I realize this is wandering off topic. Would perforated eardrums have been more common in Shakespeare's day than they are now because a) they could not be treated or b) eardrums were more often damaged or c) both? Thanks, CBHA (talk) 03:51, 14 November 2008 (UTC)
I'm not sure you'd need a perforated eardrum for the poison to work. The poison, after all, doesn't have to reach the digestive tract, just the bloodstream. While your digestive tract provides a relatively easy way to reach the bloodstream, other routes (the lungs, the nose, the eyes, and yes, the ears themselves) may provide a relatively easy means by which to get the poison into the blood. Indeed, the poison may not even have to get past the ears; it could probably be absorbed directly through them straight into the bloodstream... 04:16, 14 November 2008 (UTC)
While there are poisons that can pass through intact skin (including external ear), much more permeable are the mucous membranes (like the linings of lungs, nose, eyes). A nightshade poison like the one employed in Hamlet would not cross intact stratified squamous epithelia (such as skin or intact external ear) efficiently, but would be absorbed through a mucous membrane (such as nasopharynx after passing through a perforated eardrum). @CBHA: Perforated eardrums are associated with untreated middle ear infections, which can result in spontaneous rupture and release of "pus under pressure". In addition, I don't know when European physicians started puncturing eardrums to provide relief, but my guess is that the practice has been around for a very long time. --Scray (talk) 04:47, 14 November 2008 (UTC)

spider-silk 'stronger' than steel. What does that mean?[edit]

I heard that spider-silk is stronger than steel. What exactly does that mean, and if it's true, why don't we spin spider-silk into threads, then strands, then long thick cables, coat them in something to keep the weather out and make ****in suspension bridges and ***t out of em... —Preceding unsigned comment added by (talk) 20:33, 13 November 2008 (UTC)

It generally means that, pound for pound, spider silk has a higher tensile strength than steel. It's just that if you made steel threads as thin as spider silk, the silk would be stronger, or alternatively, if you made cables of spider silk, those would be stronger than similarly weighted steel cables. However, getting enough spider silk together to build a suspension bridge is likely impossible. 21:07, 13 November 2008 (UTC)
The fact we're not doing it (yet) is not for want of trying [1] [2]. As our page spider silk points out, there are several different types of spider silk, each with its own properties. At least for capture-spiral silk, some of its resilience comes from lengths of silk that are coiled up in the glue drops and act a bit like giving a lassoed animal some rope when it pulls: the extra silk uncoils while the spider's prey is decelerated when it gets caught, which absorbs some of the kinetic energy. Although the material of spider silk itself is already amazingly strong (see BioSteel), the real stuff is not just strands of the material spun into threads but a complex construct of interlinked strands and bits. It's probably going to be a while before manufacturing spider silk becomes as cheap as producing steel cables. A cautionary note, though, before you start heading for a future in spider silk bridge construction: being based on organic chemistry, silk faces a hazard steel does not. Evolution has so far produced very few critters with a special appetite for iron [3] that could pose a risk to steel bridges, but protein is a whole different matter. Apart from critters that would already consider a tall protein structure delectable, it's a very small evolutionary step for lots of species to specialize in protein bridge cables. Scientists would have to run hard to stay ahead of the game to keep your bridges from getting eaten as fast as you could build them. (talk) 22:59, 13 November 2008 (UTC)
Thanks! I didn't even consider the delicious aspect... —Preceding unsigned comment added by (talk) 23:02, 13 November 2008 (UTC)
Note that steel is widely used as a construction material not just for its tensile strength, but also for its high compressive strength and high Young's modulus (stiffness). Even if we could make suspension bridge cables from spider silk, we would still need to use steel for the towers (where compressive strength is required) and decks (where stiffness is required). Gandalf61 (talk) 10:21, 14 November 2008 (UTC)
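The "pound for pound" comparison above is really about specific strength (tensile strength divided by density). A rough sketch using assumed ballpark figures (dragline silk around 1.1 GPa at about 1300 kg/m³; high-tensile steel wire around 1.5 GPa at about 7850 kg/m³; real values vary widely by silk type and steel grade):

```python
# Specific strength (strength / density) is what "stronger pound for pound"
# means; the absolute figures below are rough assumed values for illustration.
materials = {
    # name: (tensile strength in Pa, density in kg/m^3) -- assumed ballpark
    "dragline spider silk": (1.1e9, 1300),
    "high-tensile steel wire": (1.5e9, 7850),
}

for name, (strength, density) in materials.items():
    specific = strength / density       # N*m/kg
    breaking_km = specific / 9.81 / 1000  # length at which a hanging strand snaps under its own weight
    print(f"{name}: {specific / 1e3:.0f} kN*m/kg, breaking length ~{breaking_km:.0f} km")
```

With these assumed numbers the silk comes out several times "stronger" per unit mass even though the steel has the higher absolute tensile strength, which is exactly the distinction drawn above.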

Polar deserts[edit]

Is it possible for a hypothetical planet to have polar deserts instead of ice caps? As in - not in the sense Antarctica is a rainless desert, but rather a hot Gobi-esque desert. Lady BlahDeBlah (talk) 21:10, 13 November 2008 (UTC)

If it was tilted over like Uranus and tidally locked to its star so that one of the poles always faced the star, then that would probably be a polar desert. For a normally tilted planet, the poles are always going to be colder than the equator because the sunlight hits them at a shallower angle. --Tango (talk) 21:12, 13 November 2008 (UTC)
The first case is impossible. If the planet rotates, neither of its poles can always point to the star. Conservation of angular momentum ensures that the axis of rotation is stable; if one of the poles always pointed to the star, the axis of rotation would itself have to rotate. And in particular, if the planet is tidally locked, its rotation (one turn per planetary year) is necessarily about an axis perpendicular to the orbital plane. --Stephan Schulz (talk) 21:55, 13 November 2008 (UTC)
Stephan is right (as usual) that you can't have a pole tidally locked to a star. However, if a planet is tilted more than ~60 degrees, then one expects that the poles have a higher annual average temperature than the equator because they spend more time in direct sunlight than any equatorial location. In fact, a highly tilted planet should form an equatorial ice cap instead of a polar one. Of course the poles will still get cold during the half of the year they are pointed away from the star, but just not cold enough to offset being pointed directly at the star for half the year. Dragons flight (talk) 23:48, 13 November 2008 (UTC)
I heard once that the critical angle, where the poles and equator are equally insolated, is 54°; but I never did get around to working it out. —Tamfang (talk) 08:17, 14 November 2008 (UTC)
Of course it's impossible... in my defence, I was suffering from a 24 hour stomach bug yesterday... The best you could get would be the poles going between hot desert and cold desert as the planet orbits the star. --Tango (talk) 10:50, 14 November 2008 (UTC)

So a planet with a tilt of more than 54 degrees would do it, kinda? Lady BlahDeBlah (talk) 13:55, 14 November 2008 (UTC)

...or of course, a planet that's devoid of water and closer to its star - even though the poles would still be colder than its equator, the poles could be hot and dry by our standards. I bet the poles on Venus are pretty hot and dry. But I like Tango's answer better. SteveBaker (talk) 21:43, 13 November 2008 (UTC)
I was working under the assumption that the whole planet being a desert doesn't count - it is something of a trivial solution! --Tango (talk) 10:50, 14 November 2008 (UTC)
I'm altogether in favor of trivial solutions! SteveBaker (talk) 15:33, 14 November 2008 (UTC)

Newton and the travel time of light from the Sun[edit]

I read last night that Sir Isaac Newton estimated that it took 10 minutes for light to travel from the Sun to the Earth...the true number is around 8 minutes - so that was a pretty good estimate for the era. The trouble is that he didn't know the speed of light (not even approximately) - and he didn't know how far the Sun is from the Earth either.

So how the heck did he come up with a number that's so accurate?

I'm scratching my head trying to come up with any way he could have estimated this time at all...let alone being so amazingly accurate about it.

SteveBaker (talk) 21:48, 13 November 2008 (UTC)

Ole Rømer made the first reasonably-accurate measurements of the speed of light in 1676. Newton lived until 1726/7, and he would have had access to Rømer's result. The method is described at Ole Rømer#Rømer and the speed of light, and there's a history of speed-of-light measurements at Speed of light#Measurement of the speed of light. TenOfAllTrades(talk) 21:57, 13 November 2008 (UTC)
I think knowing Kepler's laws did the rest: T^2 = D^3, where T is the planet's orbital period and D is the planet's mean distance from the Sun. (I am not sure of the units to use... (talk) 05:45, 14 November 2008 (UTC)
T^2 is proportional to D^3. Determining the proportionality constant (and thus, the scale of the solar system) was a major problem in astronomy for a long time. --Carnildo (talk) 22:17, 14 November 2008 (UTC)
They're equal in the appropriate units, and the anon did say they weren't sure of the units, so technically they were correct. I think the units would be years and AUs, thus the formula says 1=1, which is indeed true. --Tango (talk) 23:02, 15 November 2008 (UTC)
Rømer calculated the speed of light compared to the speed of the earth in its orbit. This gives the time for light to reach the earth with a little maths without needing to know the distance to the sun. Dmcq (talk) 11:40, 14 November 2008 (UTC)
Indeed. Our Ole Rømer article says that, prior to Rømer's more accurate measurements, Cassini used observations of Jupiter's satellites to conclude in 1675 that "light seems to take about ten to eleven minutes to cross a distance equal to the half-diameter of the terrestrial orbit". The calculation is based on the apparent variation in the satellites' periods depending on whether the Earth is approaching or receding from Jupiter - essentially, it is a Doppler effect measurement. Gandalf61 (talk) 11:57, 14 November 2008 (UTC)


Large Hadron Collider[edit]

Does the Large Hadron Collider only operate at night? (talk) 23:31, 13 November 2008 (UTC)

No. Dragons flight (talk) 23:51, 13 November 2008 (UTC)
Is there any reason to suspect that the LHC only operates at night? -- (talk) 13:15, 14 November 2008 (UTC)
If you run it at night, fewer people in Europe will notice when the black hole sucks them up, because they'll be asleep. It's purely pragmatic. -- (talk) 01:51, 15 November 2008 (UTC)
One of the large colliders only operates in summer, when power consumption is lower; in winter the electricity is needed for heating... Something similar might be the case for working at night, although I never heard of that.--Stone (talk) 13:52, 14 November 2008 (UTC)
There is too much involved in operating a large accelerator to be able to power it up and down every day. In general, large colliders run 24-7 and the people who work on/with them get scheduled in shifts that run around the clock. Dragons flight (talk) 15:38, 14 November 2008 (UTC)
Actually - it doesn't operate at all right now because it's broken and is going to take many more months to fix. It's possible that they might want to operate it at night in order to place the bulk of the earth between the machine and the sun in order to shield it in some way - but that wouldn't be true for all of the experiments it does. SteveBaker (talk) 15:31, 14 November 2008 (UTC)