# Wikipedia:Reference desk/Archives/Science/2012 October 5

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.

# October 5

## Gibbs free energy change of a hydration of a gas

The hydration energy of the gas is +8.4 kJ/mol, which gives a K of 0.0337 at 298K. How am I supposed to calculate the concentration from the partial pressure of the gas? The equilibrium constant of the hydration reaction is in units of pressure, but 0.0337 is dimensionless. — Preceding unsigned comment added by 71.207.151.227 (talk) 01:32, 5 October 2012 (UTC)

We'd need more information. The problem must have given you more data, and we'd need that to help you solve the problem. K is, to a first approximation, the ratio of partial pressures. This could be solvable with something like the total pressure. If you can tell us the entire problem, as it is written, perhaps we can help and see where you are being tripped up. --Jayron32 02:16, 5 October 2012 (UTC)
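For anyone checking the arithmetic: the dimensionless K quoted above follows from ΔG° = −RT ln K. A minimal sketch, assuming the stated ΔG° = +8.4 kJ/mol and T = 298 K:

```python
import math

R = 8.314   # gas constant, J/(mol*K)
T = 298.0   # temperature, K
dG = 8.4e3  # standard Gibbs free energy of hydration, J/mol

# K = exp(-dG/(R*T)); dimensionless because each pressure or concentration
# is implicitly divided by its standard-state value (1 bar or 1 mol/L).
K = math.exp(-dG / (R * T))
print(round(K, 4))  # 0.0337, matching the value quoted in the question
```

The units "vanish" because thermodynamic equilibrium constants are built from activities, i.e. pressures or concentrations relative to a standard state, which is why K comes out dimensionless even for a gas-to-solution equilibrium.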

## Is there a theoretical frames-per-second limit to capturing motion?

This is probably a seriously stupid question, but for some reason, it's bugging me. You know that infinity paradox thing (that's the scientific term) for walking across a room, where in order to get there, you have to cross over the halfway point, but before you do that, you have to reach half of half, but before that, you have to reach half of half of half, so forth, down to the molecular level. Obviously, we can walk across a room just fine. So at some point, there really must be some bridge between one side of the half, and the other. If that were somehow filmed with a theoretical high-speed camera, like billions to trillions—maybe more—of frames per second, would we be recording motion on a completely incomprehensible scale? See, I don't even know how to explain it. And the answer is probably just a "no". If I were to film a bullet fired at a wall at a theoretical speed, would I simply end up watching a film (projected at normal speed) of a bullet in perfect stasis for days/weeks/years? Or would we actually see something else? I want to delete this question lol. – Kerαunoςcopiagalaxies 08:57, 5 October 2012 (UTC)

You may be interested in the planck length and planck time articles. I don't really understand the concepts myself but it seems there is a minimum length and time for everything that cannot be split into 2 smaller lengths or times. --85.119.27.27 (talk) 09:22, 5 October 2012 (UTC)
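If the Planck time really were the shortest meaningful exposure, the corresponding frame-rate ceiling is easy to estimate. A rough sketch (the Planck-time value is the standard CODATA figure, not something stated in this thread, and treating it as a hard "frame" limit is a heuristic, not established physics):

```python
# Upper bound on frame rate if the Planck time is taken as the
# shortest resolvable interval.
PLANCK_TIME = 5.39e-44  # seconds, approximate CODATA value

max_fps = 1.0 / PLANCK_TIME
print(f"{max_fps:.2e} frames per second")  # ~1.86e+43 fps
```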
For the record, the 'infinity paradox thing' is Zeno's dichotomy paradox. AndrewWTaylor (talk) 11:02, 5 October 2012 (UTC)
As a practical matter, in extreme slo-mo the individual frames tend to be rather dimly lit. ←Baseball Bugs What's up, Doc? carrots→ 12:32, 5 October 2012 (UTC)
If you have eleven minutes to spare, there is a TED video here that demos a trillion frames per second camera - showing a light pulse like a regular high speed camera shows a bullet. They get around the dimming problem mentioned above by imaging the same scene many times. 88.112.36.91 (talk) 12:55, 5 October 2012 (UTC)
With conventional film, the shorter the exposure time, the more light is needed for each exposure, along with having a film that is designed for short exposure times. There are extreme high-speed cameras in use or in development for phenomena such as lightning strikes - to see how the lightning originates and how it travels. Those types of cameras use lots of individual cameras taking well-timed individual pictures. This is basically taking the Eadward Muybridge approach to an extreme. ←Baseball Bugs What's up, Doc? carrots→ 13:07, 5 October 2012 (UTC)
Eadweard Muybridge. hydnjo (talk) 15:09, 5 October 2012 (UTC)
More to the point than Muybridge (not very many frames per second in his original work that I've seen), see Harold Eugene Edgerton, who did amazingly creative and innovative work in high speed motion pictures, using high speed strobes to slow things down. Edison (talk) 21:37, 5 October 2012 (UTC)

Thank you guys for all the responses! Here I thought I was asking a dumb question and got some seriously cool answers. Planck length, Planck time, Zeno's dichotomy paradox, and Raskar's TED talk actually answer my question! Which is, according to the Planck length article, most likely unknowable—but I didn't even know that "length" had even been given a name. (As an aside, I'd previously watched two other of Raskar's videos on his "around the corner camera" and this is the first video that actually showed the results. So that was a frustration finally settled.) Thank you!! – Kerαunoςcopiagalaxies 22:50, 5 October 2012 (UTC)

I think there is a simpler answer. Light consists of individual photons. Once your frames are so short that each frame records either zero or one photons, there is nothing to be gained by making them shorter. That might seem bizarre, but it actually comes into play in real life when recording very-low-intensity light. Looie496 (talk) 02:34, 6 October 2012 (UTC)
Sure, it's a real problem for scientific instruments recording either very fast, very small, or very dim events. So-called shot noise is the inherent noise - variation - introduced in measurements or images when you try to collect data or take pictures when you just don't have enough photons to play with. TenOfAllTrades(talk) 03:30, 6 October 2012 (UTC)
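The shot-noise point can be quantified: photon counts are Poisson-distributed, so the signal-to-noise ratio of a frame scales as the square root of the photon count. A minimal illustration (the photon counts are arbitrary examples):

```python
import math

# Photon arrivals are Poisson-distributed: a frame collecting N photons
# has noise ~ sqrt(N), so the signal-to-noise ratio is sqrt(N)/1 = sqrt(N).
for photons in (100, 10_000, 1_000_000):
    snr = math.sqrt(photons)
    print(f"{photons:>9} photons -> SNR ~ {snr:.0f}")
```

This is why shortening the exposure by a factor of 100 costs a factor of 10 in image quality even with a perfect sensor.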
Looie496, that's a great point, although if motion is still occurring between the photons, then my question still stands, just not in regards to being visible, I suppose. The answer I really was looking for was the planck distance. But I appreciate your reply because it's completely true and I hadn't thought of it that way. (I was sort of ignoring the whole "faster fps = dimmer exposure" conversation because I didn't really mean to go in that direction. On the other hand, I had no idea about shot noise, either, so TenOfAllTrades, thanks for that point, and I promise to never be bad and close an answer ever again! :D Seriously, the last two responses may have never happened, so I definitely see everyone's point.) – Kerαunoςcopiagalaxies 07:27, 7 October 2012 (UTC)

## alcohol

is a 40% 1 oz shot of whiskey stronger, the same, or less than 1 beer with 5% alcohol? also, is a "standard" shot 1 or 1 1/2 oz? if someone uses a 1 1/2 oz shot is that stronger than a beer or equivalent to it? --Wrk678 (talk) 11:08, 5 October 2012 (UTC)

Assuming that by "stronger" you mean "contains more alcohol", we'd need to know the amount of beer. Obviously 1oz of 40% abv whiskey contains as much alcohol as 8oz of 5% abv beer. (assuming the specific gravity of both is not significantly different to that of water). Rojomoke (talk) 12:22, 5 October 2012 (UTC)
Here in the UK, the smallest beer until recently was the half, which is 10oz. On that basis, even a short beer has more alcohol than a single whisky. Nowadays some pubs offer a smaller beer, a third, which is just under 7oz, and therefore would be less alcohol (at 5%) than the single whisky. However, shots here are not fluid ounces, but either 25ml (single), 40ml (large) or 50ml (double). A fluid ounce is just shy of 30ml. So - assuming a standard Scotch at 40% and a European medium-strong lager at 5% - the amounts of alcohol in order are:
1/3 pint lager (UK pints) - 9.5 ml
UK single whisky - 10 ml
1/2 pint lager (US pints) (= 8oz) - 11.4 ml
US single whisky (oz) - 11.6 ml
1/2 pint lager (UK pints) - 14.2 ml
UK large whisky - 16 ml
US large whisky (1.5oz) - 17.4 ml
UK double whisky - 20 ml
1 pint lager (US pints) - 22.7 ml
1 pint lager (UK pints) - 28.4 ml
AlexTiefling (talk) 13:23, 5 October 2012 (UTC)
Point of order: most "shots" are 1.5 ounces, as measured by the Jigger. If I got served only 1 ounce of whiskey in a neat shot, I'd think I was being shorted. --Jayron32 17:15, 5 October 2012 (UTC)
If our article is correct, you may very well often get 'shorted' in much of the US besides Utah. Nil Einne (talk) 09:53, 6 October 2012 (UTC)

i am referring to a 12 oz beer --Wrk678 (talk) 07:42, 6 October 2012 (UTC)

This is pretty basic maths, I hope you realise. As it happens, a 12 oz beer contains 17.4 ml alcohol, identical to the 1 1/2 oz whisky. AlexTiefling (talk) 10:26, 6 October 2012 (UTC)

isn't 1 1/2 ounces of whiskey 40 ml, not 17.4 ml?--Wrk678 (talk) 14:49, 6 October 2012 (UTC)

Whiskey is not pure alcohol--usually under 50%. μηδείς (talk) 16:56, 6 October 2012 (UTC)
Wrk678, you yourself said in your original post that you were taking the whiskey to be 40%, which is a good estimate. Indeed, many whiskies are specifically balanced at 40%. And 1 1/2 oz isn't exactly 40ml of anything - I've been using 1oz = 29ml as a close approximation, since US and UK fl oz are both close to that value, but not identical to it or each other. Thus 1 1/2 oz is approximately (29x1.5) = 43.5ml. And 40% of 43.5 is 17.4. I'm happy to help further, but please at least read what everyone here has written including yourself before asking. Thanks. AlexTiefling (talk) 07:42, 7 October 2012 (UTC)
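The arithmetic in this thread can be collected in one place. A quick sketch using AlexTiefling's 1 oz ≈ 29 ml approximation (the helper function is just for illustration):

```python
# Pure-ethanol content of the drinks discussed above, using
# AlexTiefling's approximation of 29 ml per fluid ounce.
ML_PER_OZ = 29.0

def ethanol_ml(volume_oz: float, abv: float) -> float:
    """Millilitres of pure ethanol in a drink (illustrative helper)."""
    return volume_oz * ML_PER_OZ * abv

beer = ethanol_ml(12, 0.05)   # 12 oz beer at 5% ABV
shot = ethanol_ml(1.5, 0.40)  # 1.5 oz whiskey at 40% ABV
print(round(beer, 1), round(shot, 1))  # 17.4 17.4 -- identical, as noted above
```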

## Scurvy and scars

I was contemplating scurvy, and specifically the well-publicised symptom whereby old, long-healed wounds reopen when the sufferer is severely deficient in Vitamin C. I was trying to find out the exact mechanism whereby this happens, but without luck. I suspect it has something to do with the composition of the wound tissue - our article Scar says that scar collagen is not laid down in the "random basketweave formation ... found in normal tissue", but "forms a pronounced alignment in a single direction" and is "usually of inferior functional quality to the normal collagen randomised alignment". And I know that collagen has to be replaced regularly, and Vitamin C is essential to its formation. But how does this work in practice? Does all a sufferer's tissue deteriorate, and scars just split - or even dissolve - first because of their "inferior functional quality"? Or does scar tissue need more renewal than normal tissue and thus shows signs of damage earlier? Or what? Plus, some sources mention reopening of old fractures as another scurvy symptom, but Scar suggests that bone, unlike soft tissue, heals "without any structural or functional deterioration." So do old breaks recur with scurvy, and if so, what is the mechanism? Thanks. - Karenjc 14:00, 5 October 2012 (UTC)

I found one paper about it (Disruption of healed scars in scurvy -- the result of a disequilibrium in collagen metabolism. Cohen IK, Keiser HR.; Plast Reconstr Surg. 1976 Feb;57(2):213-5.), but only the abstract is available, and pretty short:
Old scars break open in scorbutic patients because
• (1) the rate of collagen degradation is greater in an old scar than it is in normal skin, and
• (2) the rate of collagen synthesis is diminished throughout the body in ascorbate deficiency. Ssscienccce (talk) 20:48, 5 October 2012 (UTC)
I was able to access that paper, and ironically, it's actually a rebuttal to the theory. Well, not the theory itself, but the methods used by those who came up with it. Someguy1221 (talk) 21:03, 5 October 2012 (UTC)
• This is an intriguing question. It makes me wonder, if scurvy can break apart scars entirely, is it possible to reduce them and replace them with more normal tissue? To take advantage of this, some topical compound would be desired, so I cast about for an "ascorbate antagonist" and came up with ethyl-3,4-dihydroxybutyrate, which antagonizes it at prolyl hydroxylase. Turns out that if prolyl hydroxylase is inhibited, the collagen chains don't form triple helices and get degraded, and apparently this has some effects on cell differentiation and morphogenesis... [1][2][3] However, I didn't find any hits for EDHB and "scar" in a quick search. Certainly collagen deposition is an end point in scarring [4], a lot of things upstream of collagen are involved in scarring, the collagen receptor DDR1 has a role in it, but I didn't yet pull out whether you can actually inhibit the scar by inhibiting the collagen e.g. by genetic means, and at least one collagen causes chronic scarring if knocked out genetically. I should look at this more, I just took one poke at the top of a very big pile of papers about this stuff. Wnt (talk) 02:44, 6 October 2012 (UTC)
Thanks, all, replies appreciated. As for the bone part of my question, Bone healing plus some other sources lead me to think that the long interval between breakage and "full repair", where remodelling has occured and lamellar bone is restored, gives a window of some years when the healed fracture is still significantly more vulnerable than the surrounding bone to collagen degradation. And if there had been inadequate immobilisation in the early weeks leading to fibrous repair, or poor nutrition during the longer window, the site could well remain more than usually vulnerable to rebreakage due to scurvy. - Karenjc 18:00, 6 October 2012 (UTC)

## Holographic displays

At User_talk:Jimbo_Wales#iPhones_and_editing, our respected founder made the perfectly reasonable comment that editing Wikipedia from a phone would always be difficult due to the small size. But it makes me wonder...

If the sole required accomplishment of a computer-generated holography or other head-up display is to give you the image of a large, flat, decent resolution computer screen a foot and a half from your face, despite the fact that it's projected on some little patch near your eye, how far is that from feasibility? I see from the latter article that what (according to a sympathetic news article) sounds very similar to this is already available for what seems like a niche market of swimmers looking at their lap times.[5] So why don't phones yet have this accessory, so that the entire phone can be used as a keypad and so Wikipedia could be viewed and edited from one pretty much normally? Wnt (talk) 17:09, 5 October 2012 (UTC)

Does the Nintendo 3DS do what you're asking? If so, it is available on at least one handheld device. --Jayron32 17:13, 5 October 2012 (UTC)
John Carmack talks about the practicalities of this, among a bunch of related topics, in his 2012 QuakeCon keynote. It's very long, but it's all worthwhile. -- Finlay McWalterTalk 17:31, 5 October 2012 (UTC)
One thing to note, at least for the "project it onto your eye" case, is that the image will never appear larger than the display, simply due to optics. The technology could still be used to provide a private display that is only readable by the person targeted, but it can never simulate a larger display. See Virtual retinal display. 209.131.76.183 (talk) 17:43, 5 October 2012 (UTC)
Thanks - I realize now that the reason I wasn't figuring out an answer is that my question didn't make much sense - there's no real need for a holographic technology simply to see a screen; for example you could do it with a very high res mini display and a strong contact lens. I suppose the virtual retinal display works the same way, but reverses the perspective of the focusing/scanning to minimize the equipment involved. A true hologram would allow two people to see the same apparent object, but it would inevitably be small like the "phone" then which doesn't address the main problem. Some quick scanning of the Carmack link turns up stuff around 1:12 about the display (VRD at 1:32, focus/contact lenses at 1:38, his notion of "hyperfocusing" though is bull I think, because he's neglecting the phases of the light; the comments about the Palmer kit at 1:42, $500 with distortion, the practical difficulties in head tracking, sort of explains why I don't see this on the shelf!); apparently moving the display with your head is undesirable. I suppose there's some way to measure head motions and move the projection to compensate, giving it an illusion of reality; seems like the VRD must have to do the same for eye motions because people would never put up with not being able to move their eyes to look at something. Wnt (talk) 18:21, 5 October 2012 (UTC)
I think there's also a different consideration. On my Android phone, if you use the phone in landscape, the virtual keyboard already takes up most of the screen. Typing is still very difficult compared to a real keyboard. While my phone is a fairly small one by modern standards (3.2" screen) and a large one like the 4.7" will definitely improve matters a fair amount, it will still be a lot more difficult than typing with a real keyboard for size reasons alone. The lack of tactile feedback is of course another problem (there are plans for haptic feedback or something else to try and counter the lack of tactile feedback but these still seem a while away). Even with a larger touchscreen like on an iPad this issue remains (I can say from experience). In fact some people prefer a split iPad virtual keyboard as they find it easy to type with (using thumbs). Remember also that a physical keyboard is fairly landscape in shape (not including the cursors, numpad etc), hence most virtual keyboards are fairly landscape too and, when used with a phone or tablet, even a widescreen one, tend to still have space above or below (or both). And while the experience of typing with a phone or tablet isn't the same, it's similar enough that most people find it a lot easier to just stick with a layout fairly similar to the normal one, at least for the letters/QWERTY. In other words, your assumption that using the entire phone as the keypad is somehow going to make things a lot easier is likely flawed; the phone is simply too small, amongst other issues. (There are of course plenty of issues beyond simple typing, particularly when needing to deal with markup or when needing to edit what you've already typed, and using it in landscape with the keyboard active may make things worse in this regard. But I think the typing issues are enough for first consideration.)
I haven't viewed any of the above links, but most sci-fi ideas tend to think of not just some sort of holographic projection or retinal display but a projected or completely virtual keyboard so you aren't limited to the size of your phone or whatever. Nil Einne (talk) 18:48, 5 October 2012 (UTC)
The keypad issue is serious, but I don't see any obvious reason why there wouldn't be a way to change the shape of the surface enough to provide tactile feedback, even on a dynamic basis. (I'd think someone should come up with a decent "display monitor" for the blind, at which point it could be adapted... off the top of my head, I think of either small projecting pins or else microfluidics and ampullae) More fundamentally, I would think someone should have invented a combinatorial alternative to QWERTY already. Suppose five fingers each hand (no thumb semantics...), so pressing any two fingers gets you one of 25 letters. Pressing any four fingers from a well chosen subset should provide all the extra letters with minimal risk of missed letters. Of course, this implies that the software can tell which finger is which, a creepy notion - one of those perennial questions I've been meaning to ask here is if anyone ever caught Synaptics uploading fingerprint databases to Unknown Agencies, but my feeling is if the NSA doesn't have a full set of every finger put on a laptop keypad in the past 10 years I should eat my hat. Wnt (talk) 19:10, 5 October 2012 (UTC)
There are tactile displays and chord keyboards. The issue with nonstandard keyboard layouts is getting people to learn to use them. I don't know what's holding back tactile displays, but it's easy to guess (cheap mass production and making them transparent, for starters). -- BenRG (talk) 23:50, 5 October 2012 (UTC)
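Wnt's chord-counting can be checked directly. A sketch of the combinatorics only (the pairing scheme is the hypothetical one proposed above, not any real chord keyboard):

```python
from itertools import combinations

LEFT, RIGHT = 5, 5  # five fingers per hand, thumbs excluded as proposed above

# One finger from each hand pressed together: 5 x 5 = 25 chords,
# the "one of 25 letters" figure above.
one_per_hand = LEFT * RIGHT
print(one_per_hand)  # 25

# Allowing any two of the ten fingers instead gives C(10, 2) = 45 chords.
any_two = len(list(combinations(range(LEFT + RIGHT), 2)))
print(any_two)  # 45
```

Adding larger chords (three or four fingers) expands the inventory quickly, which is the basis of real chorded keyboards such as stenotype machines.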
Yes, in case it wasn't clear what I meant: while you could likely develop something better taking advantage of the whole screen (and there are likely better alternatives out there), this is unlikely to succeed because few people would use it. Even despite the differences and problems, if you're a decent typist the existing knowledge greatly helps in using the virtual keyboard. Getting people to switch to something else is difficult at best. Even the sliding Swype, despite the alleged advantages, seems to be less popular than the tapping mode and stuff like SwiftKey, because most people just find it too annoying to learn to slide. And without taking a side in the great Dvorak Simplified Keyboard debate, I think even most of those who argue it isn't better than QWERTY don't deny that even if it were better many people wouldn't switch, simply because they don't want to have to relearn. Nil Einne (talk) 05:14, 6 October 2012 (UTC)
P.S. I should add that even with a 4.7" phone and with whatever keyboard design/layout you develop, and presuming the user is willing to learn it, it's hard to imagine it'll ever be as fast as a larger keyboard. One possibility is that QWERTY is really so bad that with your fancy layout it'll be faster, but this seems unlikely. The more likely possibility is that your design will be fast enough for most purposes. However, even that being the case, there's still the editing problems I alluded to earlier. (In fact, while perhaps I didn't make this clear, particularly outside commenting without refs, it's likely to be the more significant problem.) This is still a rapidly developing interface area and undoubtedly things will get better, but the fact of the matter is that no matter how big your virtual display is, if your input device is still the size of the phone screen it's difficult to imagine it'll ever be that easy. Nil Einne (talk) 05:33, 6 October 2012 (UTC)
Here's my idea:
1) High-res display glasses (at least 1920x1024 per eye) to provide the display.
2) A pair of VR gloves with tactile feedback on the fingertips.
3) Virtual reality software which will provide a full-sized virtual keyboard on any hard surface, so you can type on it and feel key-clicks as you type.
4) Tie it all together with Bluetooth. StuRat (talk) 05:13, 6 October 2012 (UTC)

## Can visible light induce an electric current in an antenna?

As visible light is also electromagnetic radiation, can it not induce an electric current in a properly oriented antenna, just like microwaves do? I am kind of aware that the antenna should be approximately the same size as (or comparable to) the wavelength of the radiation. If this is the problem, if we make microantennas (of the order of micrometres), can we generate electric energy from visible light (I am not talking about the photovoltaic effect)? - WikiCheng | Talk 17:50, 5 October 2012 (UTC)

Yes, in principle one can make antennas that respond to light. However, the feature size of the wires and other components becomes extremely small (e.g. 10 nm) since the size of the entire antenna needs to be comparable to the wavelength of light (e.g. hundreds of nm). Such things are possible with current technology, but still difficult and the resulting antennas are currently only of real interest as a research tool. Each antenna captures only a tiny amount of energy, and so you would need huge arrays for energy generation. All in all, they are simply way too expensive to compete with other light-to-electricity power technologies. For some details, try [6]. Dragons flight (talk) 18:14, 5 October 2012 (UTC)
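To see why the feature sizes end up so small, one can compute the length of a half-wave dipole tuned to visible wavelengths. A rough sketch (the half-wave rule of thumb is standard antenna practice, not something from the cited paper):

```python
# Half-wave dipole lengths for visible light, showing why optical
# antennas need tens-to-hundreds-of-nanometre features.
C = 2.998e8  # speed of light, m/s

for name, wavelength_nm in (("red", 700), ("green", 550), ("violet", 400)):
    freq_thz = C / (wavelength_nm * 1e-9) / 1e12
    dipole_nm = wavelength_nm / 2  # classic half-wave rule of thumb
    print(f"{name}: {freq_thz:.0f} THz, dipole ~ {dipole_nm:.0f} nm")
```

A 200-350 nm dipole with correspondingly finer wires and feed structures is well below what conventional PCB or wire fabrication can do, hence the need for nanofabrication.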
Light-frequency EM waves will not travel over conductors suitable for much lower frequencies. If I build an antenna to capture some frequency of electromagnetic radiation, say 800 MHz, then the coax or lead-in wires will carry electric current of that same frequency to a receiver. If the antenna picks up much higher frequency microwave radiation, then a dish could focus the energy on a waveguide which could carry it to a receiver. A coax would work well to carry microwave frequency EM radiation. "Light" is just em radiation of a much higher frequency and much shorter wavelength. The question seems to imply that an "antenna for light" would send down from the antenna "electric current" which is not "light." If I used a parabolic reflector or a convex lens to collect light and focus it on a fiber optic cable, a "light pipe" or even a glass rod and convey it down from the collector, it might lose some of its properties, but wouldn't that amount to what the OP requests? That is basically what a telescope does, receiving EM radiation in the 405 THz to 790 THz frequency range. Mirrors and "beam combiners" can be used to combine the signal from multiple telescope mirrors, like combining the multiple units in some V antennas. You just can't make light become electrical current of a much lower frequency which will be picked up by wire antenna elements of a practical size and then be made to travel down conventional antenna wire or coax like a radio or TV signal. Edison (talk) 21:11, 5 October 2012 (UTC)
You can, however, take advantage of the photoelectric effect, which is a totally different physical process, to convert light-frequency electromagnetic radiation in to an electromotive force, and therefore drive a current down a conductor. The incident photon frees an electron from certain types of material, and a signal can propagate through an attached electrical circuit. The photoelectric process relies on properties of atomic physics, though, and is unlike the ordinary induction of electric current in a radio-frequency antenna. Nimur (talk) 22:16, 5 October 2012 (UTC)

The interesting thing here is that you could, in principle, store the electromagnetic fields detected by each antenna as a function of time. So, you would have a visible light telescope that detects light coherently. All the information about objects in any arbitrary direction will be stored this way. To look at some position in the sky at some time in the past, you just have to access the memory and add up the detected fields with the right phase shifts. Count Iblis (talk) 15:42, 6 October 2012 (UTC)

Absolutely, yes. But, if you work out the physics and mathematics for resolution of an image, you will probably find that your device has similar physical dimensions and properties to a camera. There has been an immense amount of theoretical and applied research in to the subject of wave field imaging, and application of the imaging condition to coherently-sampled time-history measurements. For example, a radio telescope array can be used to synthetically image radio waves; this same algorithm has common application in synthetic aperture RADAR, using a differently-shaped antenna. In a similar way, SONAR can be used to generate an image using coherently-sampled acoustic wave fields. The hand-held medical ultrasound imager uses one (or more) sensors, and performs coherent sampling with multiple samples collected at different times, to generate a single "snapshot" image of a medical subject. The capability of modern computers to fully analyze a three-dimensional wave-field has been increasing, steadily, over the last few decades, so as practical implementation problems are solved, we are getting closer and closer to theoretical limitations governed by wave mechanics. At the end of the day, you can't construct an image if you can't physically resolve the waveform data - which is governed by the mathematics of sampling. Nimur (talk) 17:49, 6 October 2012 (UTC)
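The resolution argument can be made concrete with the textbook Rayleigh criterion (smallest resolvable angle ≈ 1.22 λ/D); the formula is standard optics, not something quoted above, and the 10 cm aperture is an arbitrary example:

```python
import math

# Rayleigh criterion: smallest resolvable angle ~ 1.22 * wavelength / aperture.
def rayleigh_limit_rad(wavelength_m: float, aperture_m: float) -> float:
    return 1.22 * wavelength_m / aperture_m

# Green light (550 nm) through a 10 cm aperture:
theta = rayleigh_limit_rad(550e-9, 0.10)
print(f"{math.degrees(theta) * 3600:.1f} arcseconds")  # ~1.4 arcseconds
```

This is the sense in which a coherent optical antenna array "has similar physical dimensions to a camera": resolution is set by the overall aperture spanned by the sensors, not by how cleverly the samples are processed.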

## 12.5 million pixels

Why doesn't wiki software like PNG files with more than 12.5 million pixels? Whoop whoop pull up Bitching Betty | Averted crashes 22:55, 5 October 2012 (UTC)

It requires too much RAM to resize them under the current infrastructure. Dragons flight (talk) 23:34, 5 October 2012 (UTC)
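To put a number on that: a sketch of the decompressed size, assuming 4 bytes per pixel (8-bit RGBA); the 12.5-megapixel figure is the only number taken from the thread:

```python
# Decompressed in-memory size of an image, assuming 8-bit RGBA.
BYTES_PER_PIXEL = 4  # an assumption; 16-bit channels would double this

def decompressed_mb(pixels: float) -> float:
    return pixels * BYTES_PER_PIXEL / 1024**2

print(f"{decompressed_mb(12.5e6):.0f} MB")  # ~48 MB, before any resize buffers
```

Resizing typically needs the source image and the output buffer in memory at once, so the peak usage is noticeably higher than the raw decompressed size.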