
Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 195.188.208.251 (talk) at 11:52, 8 October 2012 (Pyramids at Giza: new section). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the science section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


October 2

Kinetic energy reclamation for your phone from bodily movements

I've heard of watches that recharge just by the movement of your wrists. It's piezo mechanics, right?

Why can't we put that in phones so that when they move with your body while in the pocket, your bodily movements help recharge them too?

If they're on their way, how far away is that technology, and why isn't it here already? Thanks. --129.130.237.27 (talk) 03:59, 2 October 2012 (UTC)[reply]

The power drain of a wrist watch is minuscule compared to a smartphone. Think about it: A little button battery can run a watch for 5 years, and that technology is like 50 years old. You can use your smartphone for what, like 8 hours before it needs a charge, and that's on supermodern batteries that didn't exist 5 years ago. The difference is like the difference between shooting a bullet with a gun (the smartphone) and throwing it (the watch). Not even a comparison. --Jayron32 04:15, 2 October 2012 (UTC)[reply]
Our article, orders of magnitude, claims a quartz watch uses about a microwatt, without a source. That's about one million times less power, on average, than a modern computerized mobile telephone (at least, while it is in use). While looking for sources, I found a fantastic detailed overview of the Seiko 7S26 mechanism; and Mike Murray's Everything You Ever Wanted To Know About Mainsprings (except, of course, a quantitative estimate of their potential energy content!) I seem to recall, though I'm unable to cite a source, that quartz digital watches consume more power than a mechanical spring-driven watch; but a quartz watch battery contains significantly more potential energy than a spring... which is why digital watches last longer than mechanical watches, between winding. Let the race begin to find a reliable reference! Nimur (talk) 04:24, 2 October 2012 (UTC)[reply]
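To put rough numbers on that comparison (the watch figure is the unsourced ~1 µW mentioned above; the phone figures are ballpark assumptions, not from a source): a 1 µW watch uses about 1 µW × 86,400 s ≈ 0.09 J per day, while a phone that drains a ~5 Wh (≈18,000 J) battery in a day averages roughly 0.2 W, and more like 1–2 W in active use — somewhere between 10^5 and 10^6 times more, consistent with the "about one million times" estimate above.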
OK, we've established that movements of the wrist won't be enough, but how about if we put a crank on the phone ? I have a portable radio that runs off a crank, and that uses a comparable amount of energy to a cell phone, I bet. Obviously cranking your phone to charge the battery isn't something you'd want to do often, but you might on occasion, like if your car breaks down and you then discover that your cell phone battery is dead. StuRat (talk) 04:52, 2 October 2012 (UTC)[reply]
They exist, though mostly they are advertised as "emergency" chargers: Crank like crazy for 5 minutes and you can get enough charge to call a tow truck. Maybe. Not enough to be practical to keep a constant charge, but enough to put a few minutes of call time on the phone. --Jayron32 04:57, 2 October 2012 (UTC)[reply]
Virtually all automatic watches have been mechanical - a swinging weight rewinds a spring, as with the Seiko cited by Nimur. I very much doubt that windup cellphones would catch on, for three reasons: Firstly, the typical radio frequency output of a handheld cell phone is 200 milliwatts (200 mW). The typical maximum electrical power into the speaker in the sort of transistor radio that you can get with a crank generator is 250 mW. That sounds about the same, but that's not the whole story. The cell phone's RF output is continuous throughout each call. Assuming a typical call duration of 3 minutes, that's 36 Joules of energy, assuming the circuitry is 100% efficient at converting battery DC into radio frequency output (which it won't be). However, the ratio of average to peak power in voice and music is very low - a transistor radio may have a max output of 250 mW but the battery drain under typical programme conditions will be more like 20 mW, or 1.2 Joules per minute. But there's more: The second reason: When you've had enough of listening to the news and the latest silly nonsense from the politicians, you can turn the radio off and energy consumption is then zero. But a cell phone must be kept on, in standby mode, so you can receive calls. It's hard to find reliable data for phone standby drain, but 10 mW would not be unreasonable. That means a consumption, above the call consumption, of 0.6 joules per minute, or 144 joules every 4 hours. Thirdly, a generator and crank of the necessary size would increase the volume and weight of a cell phone by about a factor of 10. Keit60.230.201.133 (talk) 05:14, 2 October 2012 (UTC)[reply]
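Spelling out Keit's figures as energy: a call at 200 mW for 3 minutes is 0.2 W × 180 s = 36 J (assuming, unrealistically, 100% DC-to-RF efficiency); standby at 10 mW is 0.01 W × 60 s = 0.6 J per minute, or 0.01 W × 14,400 s = 144 J per 4 hours; and the radio at a ~20 mW average drain is 0.02 W × 60 s = 1.2 J per minute.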
Jayron's answer above includes pics of chargers that appear to be about the same size as the cell phone they charge, so not 10×. You might want to take one with you camping, for example (away from electrical outlets but still within cell tower range). Also, you don't need to leave your phone in standby mode. You can just wind it up, make a call, and turn it back off again. When you next wind it back up and turn it on, you should get any messages left for you. Many people in the older generation don't even like to leave them on, given the choice. StuRat (talk) 05:31, 2 October 2012 (UTC)[reply]
You must have magic fingers, as when I tried Jayron's link, it didn't work. However, Jayron makes it clear it's not very practical - he said "crank like crazy for 5 minutes...and make one (quick call) ...maybe". Yes, you can turn the phone off, but to catch on, i.e. be a market success, the phone has to work normally - stay on standby and let you make decent-length calls. Otherwise only a few nutters will buy it. Don't forget that every time you turn a cellphone off, and every time you turn it on, it enters into an automatic dialog with the network, so the network knows which base stations to route the call to, should it be worthwhile. This is so that the network can operate efficiently - each time a call (or text) comes in, the network doesn't try every base station in the country (and any other countries you can roam to) - the network only tries base stations at and adjacent to where they last found you, and doesn't try at all if it knows your phone is off. It means the sequence turn it on - make a quick call - turn it off is very inefficient energy-wise. Keit124.182.178.117 (talk) 13:12, 2 October 2012 (UTC)[reply]
...which is because modern mobile telephones are cellular phones, not general-purpose radio telephones. Cellular mobile telephones require an elaborate digital communication protocol, and must remain in contact with a cellular base station in order to operate. Contrast this with, for example, a conventional civilian aviation handset, whose RF transmitter is on only when transmitting a voice signal, and whose receiver amplifier requires very little power. Nimur (talk) 16:23, 2 October 2012 (UTC)[reply]
I'm certainly not suggesting that the crank would be used normally; you'd go with the usual battery operation and wall-outlet recharging. And the crank charging unit would be separate, so you don't have to carry it around with you (although you could, in a pocket or purse, say). StuRat (talk) 18:02, 2 October 2012 (UTC)[reply]

Infant mortality rate and birth rate

Is it true that if the Infant mortality rate goes down in an area (i.e. child survivability goes up), the birth rate usually goes down more so that the population declines? Bubba73 You talkin' to me? 05:07, 2 October 2012 (UTC)[reply]

Damn skippy. See Demographic-economic paradox for a fuller treatment. --Jayron32 05:35, 2 October 2012 (UTC)[reply]
I don't know what "damn skippy" means, but in any case the statement is not true. The literature on the relationship between infant mortality and fertility struggles to find any consistent effect at all, and certainly not a large enough effect to counteract changes in infant mortality. It's hard to even imagine how that could come about. Looie496 (talk) 05:39, 2 October 2012 (UTC)[reply]
You could read the article I provided a link to. Or you could provide your own links or references. See [1] for your other question. --Jayron32 05:45, 2 October 2012 (UTC)[reply]
Well, as you point out, there is a lot of evidence that increases in prosperity can reduce birth rates to the extent of causing a population decline. But that article doesn't say that changes in infant mortality alone can do it. A Google Scholar search for articles on the topic finds a number of them, none of them supporting such a result as far as I can see, but none that is recent and authoritative enough to be worth citing; see for example http://www.nber.org/papers/w1528. Looie496 (talk) 06:13, 2 October 2012 (UTC)[reply]
I suppose part of the problem is teasing out a specific correlation. There's a melange of factors which lead to modern development, and the things which lead to decreased infant mortality tend to get all wrapped up in the same sorts of things (education, industrialization, improved access to health care) that lead to "development" generally speaking. What you'd need is some sort of society where somehow there was access to fully-modern prenatal care, but with no other development at all. It's hard to imagine such a place existing, so there isn't really a great way to run the experiment. What we're left with is the broad trends, which show that as a population becomes more "developed" in an economic sense, the birth rate goes down (as does infant mortality), but I'm not sure there's a causative or directly correlative effect which could be isolated for those two AND ONLY those two variables. --Jayron32 06:34, 2 October 2012 (UTC)[reply]
It's false. See Qatar, UAE, and Bahrain for counter-examples. A8875 (talk) 07:54, 2 October 2012 (UTC)[reply]
See outlier. Come back when you have a question. --Jayron32 13:58, 2 October 2012 (UTC)[reply]
They certainly look correlated, but that doesn't mean that one is the cause of the other. As Jayron pointed out, the two are probably both just small parts of overall development. Here is a graph showing the two values plotted against each other: [2] 209.131.76.183 (talk) 11:48, 2 October 2012 (UTC)[reply]
The (loose) association is known as the demographic transition. Itsmejudith (talk) 13:20, 2 October 2012 (UTC)[reply]
Those graphs are quite interesting, and the type of information I was looking for. Bubba73 You talkin' to me? 14:00, 2 October 2012 (UTC)[reply]
I'm in favor of helping children in underdeveloped countries live, but is that making the problem worse in the future? Bubba73 You talkin' to me? 17:12, 2 October 2012 (UTC)[reply]
Good question. It could explain the genocide that occurs in Africa: those who were "supposed" to be dead a long time ago are eventually killed by the militias fighting for food, etc. 165.212.189.187 (talk) 17:33, 2 October 2012 (UTC)[reply]
It's a complex problem. On the one hand, compassion dictates that we don't let people starve. On the other hand, long-term continuous foreign aid in the form of basic necessities may also be what is keeping these countries from developing their own native food and clothing industries. Here is just one article on the detrimental effect of clothing donations on the domestic clothing industry in Nigeria. Here is a similar paper which poses the same sorts of problems related to food aid and farming. However, issues noted in the D-E paradox and Demographic transition articles noted above point to the solution likely coming from development and education in general. I've seen some studies which show a marked improvement in living conditions in areas with properly applied Microcredit systems that allow development of native industries instead of blanket food aid. Which is not to say that food aid isn't needed in some dire cases, but it isn't the whole solution. --Jayron32 17:42, 2 October 2012 (UTC)[reply]
Relatedly, I've seen analysis of why current Somaliland (formerly British Somaliland) is fairly highly functioning and stable, compared to the basket case that is the rest of Somalia (formerly Italian Somaliland), claiming that it is at least partly because Somaliland has received hardly any foreign aid, and hence politicians who wanted money had to negotiate with local groups to get funding (leading to a level of democracy), and local markets were not undercut. 86.159.77.170 (talk) 19:55, 7 October 2012 (UTC)[reply]
There is also breastfeeding infertility, the tendency of women breastfeeding live children for a few years not to ovulate during that period. μηδείς (talk) 00:44, 3 October 2012 (UTC)[reply]

derivation of lensmaker's equation for a thick lens

What is the derivation of the lensmaker's equation for a thick lens? It seems hard to find on the internet. (Why isn't it on Wikipedia?) 137.54.11.202 (talk) 05:44, 2 October 2012 (UTC)[reply]

Does Lens_(optics)#Lensmaker.27s_equation answer your question? --Jayron32 05:46, 2 October 2012 (UTC)[reply]
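For reference, the thick-lens form given in that article (focal length f, refractive index n, surface radii R_1 and R_2, centre thickness d) is

\frac{1}{f} = (n-1)\left[\frac{1}{R_1} - \frac{1}{R_2} + \frac{(n-1)\,d}{n\,R_1 R_2}\right]

which reduces to the familiar thin-lens form as d \to 0.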
By googling "derivation lensmaker's equation" I found the following two derivations[3][4]. I personally don't think derivations belong on Wikipedia (unless the derivation warrants its own article), since they add unnecessary bulk to the articles. People looking for an equation and people looking for a derivation of the same equation are on completely different skill levels. A8875 (talk) 07:20, 2 October 2012 (UTC)[reply]
I had always presumed that the lensmaker's equation is strictly an empirical first-order approximation. This is why it doesn't hold up very well to modern, complicated materials like compound glasses. The incredibly over-priced, but totally-without-equal, textbook Applied Photographic Optics, has no substitute: it thoroughly runs through the physics and the actual engineering specifications for many common lens glasses and compound lens groups. Nimur (talk) 16:18, 2 October 2012 (UTC)[reply]
The reason why I need a derivation is that I am solving a problem for an exotic lens, so I'm trying to use the same technique to derive the thick lens equation to create an equation for my exotic lens. 199.111.224.96 (talk) 18:21, 2 October 2012 (UTC)[reply]
If your lens is exotic enough that the lensmaker's equation won't work, you would probably have to analyze it by tracing rays through it rather than trying to derive a new lensmaker's equation. Commercial lens design software does this, but it is quite expensive.--Srleffler (talk) 04:27, 3 October 2012 (UTC)[reply]
Nimur, the lensmaker's equation is not just an empirical formula. It's based on a simple, geometric model for ray propagation. As you guessed, it is a first-order approximation; specifically the paraxial approximation. Geometrical optics in the paraxial limit is sometimes called Gaussian optics.--Srleffler (talk) 04:27, 3 October 2012 (UTC)[reply]
But ray-propagation refraction is based on a linear fit to an experimentally-measured index of refraction. There are more elaborate methods to model index of refraction. The text I linked above outlines a 3-parameter dispersion model for different types of common lens glass materials; using that model, the effects of apochromaticity, and chromatic aberration, are all accounted for; but the lens-maker's equation assumes that n (index of refraction) is constant across the visible spectrum. In fact, I believe I discussed this extensively in December of last year in response to a question about modeling index of refraction. Everything in physics is always experimentally derived; even if you design a ray-tracing algorithm where each photon interacting with each iota of matter in the glass lens is calculated from quantum-mechanical first-principles, you still need an experimental value for the fundamental physical constants. Nimur (talk) 17:02, 3 October 2012 (UTC)[reply]
I think we are using the same terms differently. An empirical equation is one which is obtained purely by finding an expression that fits experimental data, without any theoretical model of the process. The lensmaker's equation is based on a simple model of light propagation. It is therefore not an "empirical equation". Almost everything in physics is experimentally derived, but not all formulas are empirical.
The lensmaker's equation does not assume that the index of refraction is constant across the visible spectrum. It assumes that you are using the correct value of n for the wavelength that you are interested in.--Srleffler (talk) 17:27, 3 October 2012 (UTC)[reply]
Fair enough. I was probably abusing the terminology, "empirical equation," in a way that wasn't clear. Nimur (talk) 17:43, 3 October 2012 (UTC)[reply]

what is the resistor for?

Hello, in this circuit, what is the 1MOhm resistor for? Also, shouldn't there be a series resistance in series with the potentiometer? Thanks in advance! Asmrulz (talk) 14:09, 2 October 2012 (UTC)[reply]

The 1M resistor sets the time constant (and the scaling factor) for the differentiator. An extra series resistance in line with the potentiometer wouldn't help or hurt the measurement of the time-derivative of current, because it is constant; it might be a good idea to prevent accidental shorting of the battery. Many potentiometers don't span all the way to zero ohms, so it's not necessary in practice. To fully analyze the role of the resistor, you should write the circuit equations in terms of a complex impedance, which will allow you to solve the circuit in the Laplace domain, permitting a straightforward accounting for the time/frequency effects as well as the differentiation. Nimur (talk) 14:25, 2 October 2012 (UTC)[reply]
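To make the "time constant / scaling factor" remark concrete: assuming the circuit is a standard op-amp differentiator with an input capacitor C feeding the inverting input and the 1 MΩ resistor R in the feedback path (the linked schematic isn't reproduced here, so this is only a sketch of that assumed topology), the ideal relationship is

V_{\text{out}}(t) = -RC\,\frac{dV_{\text{in}}(t)}{dt}, \qquad V_{\text{out}}(s) = -RC\,s\,V_{\text{in}}(s)

so the 1 MΩ value sets the gain factor RC applied to the time derivative.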
I haven't thought of this as an RC element *slaps himself on the forehead.* Thanks again. Asmrulz (talk) 15:01, 2 October 2012 (UTC)[reply]
Out of curiosity, how would I go about writing the circuit equations? Asmrulz (talk) 15:59, 2 October 2012 (UTC)[reply]
Kirchhoff's circuit laws, appropriately using complex impedance instead of simple resistance. A typical first course in circuit analysis goes over the common techniques to write out equations for each node and then uses the techniques of linear algebra to solve simultaneously, giving the voltage at each node, and the current in each branch. Nimur (talk) 16:14, 2 October 2012 (UTC)[reply]
Like this, just in complex numbers? Asmrulz (talk) 18:10, 2 October 2012 (UTC) not quite, probably... Asmrulz (talk) 23:39, 2 October 2012 (UTC)[reply]
Yes, network analysis is the basic technique. The page you linked is specifically about DC network analysis: you'll notice that every example contains only resistors and batteries, never a capacitor or inductor or active circuit element. That's because those components require AC analysis, or complex impedance, which is a subject in mathematics that some introductory circuit texts try to avoid because it's a little bit harder than basic arithmetic. But there's no magic; there are just a few rules for the algebra when complex numbers are involved, and then a handful of common techniques, like multipole expansion or separation into partial fractions... these are just common mathematical recipes that help you solve the parallel- and series-circuit equations that commonly show up in circuit analysis with capacitors and inductors mixed in. Nimur (talk) 14:58, 3 October 2012 (UTC)[reply]
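A minimal sketch of that complex-impedance approach in Python, for an assumed simple RC high-pass/differentiator stage — the 1 MΩ value comes from the question, but the capacitance and the topology here are made-up placeholders, not the actual circuit:

```python
import numpy as np

R = 1e6    # ohms - the 1 M resistor from the question
C = 1e-9   # farads - placeholder value, not from the schematic

def transfer(f_hz):
    """Vout/Vin for a series C driving a shunt R, using complex impedances."""
    w = 2 * np.pi * f_hz
    Zc = 1 / (1j * w * C)      # capacitor impedance, 1/(jwC)
    return R / (R + Zc)        # ordinary voltage-divider rule, now with complex numbers

for f in (1, 10, 100, 1000):
    H = transfer(f)
    print(f"{f:5d} Hz  |H| = {abs(H):.4f}  phase = {np.degrees(np.angle(H)):+6.1f} deg")
```

Well below the corner frequency 1/(2πRC) ≈ 159 Hz the magnitude grows in proportion to frequency and the phase sits near +90°, which is the differentiating behaviour; above it, the stage just passes the signal through.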

Morphine/Dilaudid conversion

Does anyone know what 20 mg of Kadian converts to in terms of hydromorphone? This is simply a factual question; I am not seeking any kind of medical advice. Joefromrandb (talk) 14:50, 2 October 2012 (UTC)[reply]

Anyone?... Nevermind...too late.165.212.189.187 (talk) 17:29, 2 October 2012 (UTC)[reply]
Some kind of Wikispeak? Joefromrandb (talk) 17:50, 2 October 2012 (UTC)[reply]
I think the OP is suggesting this is a homework question. Nil Einne (talk) 20:27, 2 October 2012 (UTC)[reply]
Our Hydromorphone article, while fairly poor, ('and it can be said that hydromorphone is to morphine as hydrocodone is to codeine and, therefore, a semi-synthetic drug' - is it a copyvio or copied from a public domain source without talk page attribution or something?) says this:
Hydromorphone's oral-to-intravenous effectiveness ratio is 5:1 and equianalgesia conversion ratio (hydromorphone HCl to anhydrous morphine sulfate, IV, SC, or IM) is 8:1. The oral equianalgesic conversion rate (hydromorphone HCl to morphine SO4) can vary between 5:1 to 8:1. Therefore, 30 mg of immediate-release morphine by mouth is similar in analgesic effect to about 4–6 mg of hydromorphone by mouth (requiring extra care during conversion & titration), 10 mg of morphine by injection, and 1.5 mg of hydromorphone by injection.
Given the state of the article, I can't vouch for these figures. But it does indicate an unsurprising issue: this isn't a simple factual question and there's no simple conversion. These are related but different drugs, so they don't have any perfect correlation in effect. (Our article also says other things which indicate this.)
Also, your question is fairly unclear. In the subject you mentioned 'Dilaudid'; in the question you mentioned 'Kadian'. The latter is apparently an extended-release form of morphine sulfate in capsules and the former an immediate-release form of hydromorphone hydrochloride in tablets or liquid. From this it sounds like you're referring to oral ingestion (obviously an important consideration), but the lack of consistency and generally limited information makes this unclear. Notably, if you're referring to an extended-release form of morphine and an immediate-release form of hydromorphone, this wasn't clearly specified and is unlikely to be obvious to anyone unfamiliar with the specific brands you mentioned in the subject or in the question.
Nil Einne (talk) 20:15, 2 October 2012 (UTC)[reply]
BTW I've added one link above from the article as again while not a great article, it'll give you an idea of how to begin to look for answers (plural intentional) Nil Einne (talk) 20:24, 2 October 2012 (UTC)[reply]
Thank you very much for the help! And although I am perpetually learning, I can assure you my "homework" days are decades behind me. Joefromrandb (talk) 20:33, 2 October 2012 (UTC)[reply]
And perhaps I was more vague than I needed to be. Specifically, I was comparing an 8 mg immediate-release Dilaudid to a 20 mg extended-release Kadian. Joefromrandb (talk) 20:40, 2 October 2012 (UTC)[reply]

Bowel flora

What happens to the bowel flora if a person only receives IV nutrition? Does it "starve" to death? If not, how does it obtain sufficient nutrients? Thanks in advance.--Leptictidium (mt) 14:56, 2 October 2012 (UTC)[reply]

Do people only receive IV nutrition? I'm pretty sure that any nutrition is dealt with via a Feeding tube. Intravenous therapy is used for fluid or electrolyte replacement, but not generally for caloric needs, at least on a long-term basis. --Jayron32 16:34, 2 October 2012 (UTC)[reply]
Yes, check out "Total parenteral nutrition", for example.--Leptictidium (mt) 16:55, 2 October 2012 (UTC)[reply]
Google is your friend. This list came from Google Scholar. Use it wisely. --Jayron32 17:05, 2 October 2012 (UTC)[reply]

what is the formula for the angular magnification for a single lens?

Like say, a magnifying glass? Everywhere I look on the internet, the formula is always given for microscopes or telescopes with an "eyepiece", which is totally inappropriate to my problem. 199.111.224.96 (talk) 18:22, 2 October 2012 (UTC)[reply]

It sounds like you want a textbook introduction to geometric optics! Our article does have a formula for magnification for a simple lens, which is suitable for a high-school-level approximation. (This equation, which is commonly used in simple lenses, expresses magnification as a magnitude, in terms of focal-length and object distance). Whether that formula applies to your lens, or not, is entirely what makes optics a not-very-easy subject. If you've never studied optics formally, a decent introduction is found in Tipler's Physics for Scientists and Engineers, (in the second volume, Chapter 30-something in the 2nd edition). If you really really want to study optics, you should start with formal analysis of geometric optics and then study generalizations of electromagnetic wave propagation; so that you can appropriately model the lens or optical path you care about. Nimur (talk) 18:30, 2 October 2012 (UTC)[reply]
I think the problem is that a single lens can give you magnification that is so distorted and dim as to be useless. So, the question becomes "what is the useful magnification", which depends on the application and is subject to opinion. StuRat (talk) 18:33, 2 October 2012 (UTC)[reply]
A simple lens doesn't have a single angular magnification. The magnification depends on how far away the object and/or image planes are. Magnifying glasses are a special case: these are commonly quoted as having a particular angular magnification. You can find the formula for this at Magnifying glass#Magnification (inline within the text). There are two formulas, both of which presume a "typical" human eye. One formula presumes that the magnifying glass will be used to bring an image to the near point of the eye. This gives the highest magnification, and is the value typically quoted when magnifying glasses are sold. The second formula presumes that the lens will be placed about 1 focal length from the object to be viewed. This gives slightly lower magnification, but is often more convenient. The actual magnification a person experiences depends on the ability of his or her eye to accommodate. A young person experiences much less magnification when using a magnifying glass, compared to an old person with presbyopia.--Srleffler (talk) 17:20, 8 October 2012 (UTC)[reply]
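For convenience, the two magnifying-glass formulas Srleffler refers to, taking D \approx 250 mm as the conventional near-point distance and f as the focal length, are

M_{\text{near}} = \frac{D}{f} + 1 (image brought to the near point; the larger, usually quoted value)

M_{\text{relaxed}} = \frac{D}{f} (object about one focal length away, image at infinity, relaxed eye)

so, for example, a 50 mm focal-length glass is sold as about 6× but gives roughly 5× in relaxed use.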

Animal eyesight and the electromagnetic spectrum

I was thinking about how awesome it would be to be able to see radio waves directly, without needing them to be interpreted by television sets, wireless receivers, etc. (watching TV and browsing the Internet directly with your eyes just by looking at open air!), which led me to the following questions:

  • Why did animals evolve eyesight focused toward the infrared, visible, and/or ultraviolet parts of the electromagnetic spectrum, instead of evolving sensitivity toward other parts of the spectrum such as radio waves, microwaves, x-rays, or gamma radiation?
  • Under what sorts of environmental conditions might we expect animals to have instead evolved sensitivity toward those other regions of the spectrum?
  • How would such hypothetical animals perceive the world? For instance, radio waves can penetrate right through walls, with distortion being minimal enough that wireless receivers read them just fine. Does that mean a theoretical animal that sees only radio waves would be unable to see walls?

SeekingAnswers (reply) 18:59, 2 October 2012 (UTC)[reply]

A picture is worth a thousand words. I'll let this one do the speaking. --Jayron32 19:22, 2 October 2012 (UTC)[reply]
Yes, and that's the problem. All they would see is the source of the radio emissions, so things like stars and lightning. Animals need to see what's right around them, so visible light, ultraviolet, and infrared are ideal, as they reflect off nearby objects, and some are actually produced by certain organisms (like mammals producing IR). Sound and vibrations, while not part of the EM spectrum, similarly react with, and are produced by, the local environment, so are useful. Similarly, the ability to "see" electrical fields is useful. StuRat (talk) 19:10, 2 October 2012 (UTC)[reply]
There are environments where there is no light, and blind organisms evolve there. (While some organisms can give off their own visible light, most do not, and sufficiently murky water would make even that approach unusable.) An organism living in space might not be able to use sound and vibrations, and, if inside a dark cloud of gas, might not be able to see, either. I'm not sure if being able to detect those other wavelengths of EM would be of much benefit, though. StuRat (talk) 19:30, 2 October 2012 (UTC)[reply]
(ec with Jayron) The solar spectrum and the opacity of the atmosphere to electromagnetic radiation have a lot to do with the wavelengths to which the eyes are sensitive. The Sun's output peaks at ~550 nm and so does the sensitivity of the eyes. The article on eyes has some general information on this, and Color vision has quite a lot of information. Astronaut (talk) 19:28, 2 October 2012 (UTC)[reply]
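The ~550 nm figure is roughly what Wien's displacement law gives for a black body at the Sun's effective surface temperature (T ≈ 5,800 K): λ_max = b/T ≈ (2.898 × 10⁻³ m·K)/(5,800 K) ≈ 5.0 × 10⁻⁷ m, i.e. about 500 nm, in the green. The precise peak depends on whether the spectrum is plotted per unit wavelength or per unit frequency.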

Cellular Renewal

I have heard that every seven years, all the cells in a human body will be replaced. I am quite skeptical of this claim, so I wanted to know about its factual accuracy. Is it true? Also, aren't there some cells, especially neurons in the brain, that are not replaced? 128.227.85.113 (talk) 19:57, 2 October 2012 (UTC)[reply]

No, that is not true. But it does seem to have a grain of truth, or was based on a true claim. According to the website for Stanford's Institute for Stem Cell Biology [5], "Every single cell in our skeleton is replaced every 7 years." (emphasis mine). Granted, that web page does not link to a specific reference for that claim, but I'm willing to trust them on this one. SemanticMantis (talk) 20:24, 2 October 2012 (UTC)[reply]
Also, some cells are replaced much faster, like red blood cells, which only last a few months. StuRat (talk) 20:27, 2 October 2012 (UTC)[reply]
This is a decent link on the subject: [6]. --NorwegianBlue talk 22:28, 2 October 2012 (UTC)[reply]
The way I heard this one a long time ago was that every seven years the atoms would be replaced because of how the (electrons? protons?) swap places with each other so much. Is that totally wrong? Would there be a viable equation or something to determine the atoms version? ~ R.T.G 00:17, 3 October 2012 (UTC)[reply]
No. Electrons can bop around, but protons and neutrons stay with the atom (barring nuclear fusion, nuclear fission, and radioactive decay). StuRat (talk) 01:55, 3 October 2012 (UTC)[reply]
Entire atoms are exchanged too. Many of the molecules in your body are constantly being repaired and regenerated on a molecule-by-molecule basis; a carbon atom that was part of a fat cell yesterday could be part of hemoglobin next week, and be breathed out as CO2 in a few weeks. I have no idea of the time scales involved, but it isn't just the electrons that shuffle around. The "renewal of the body" thing, taken on an "atom-by-atom" basis, is a classic example of the Ship of Theseus/George Washington's Axe paradox... --Jayron32 03:08, 3 October 2012 (UTC)[reply]
Jayron is right. That's why cells have nuclei, to produce new protein molecules by genetic transcription, ultimately from chromosomal DNA. Red blood cells die within about a month because they cannot transcribe from DNA, lacking cell nuclei. One can't give a seven year expiration date, but all living cells obviously regenerate their molecular structure or die. μηδείς (talk) 03:20, 3 October 2012 (UTC)[reply]
A more ignorant question than usual: Do red blood cells have DNA? More to the point, is the premise of Jurassic Park, of mosquitoes trapped in amber and full of dinosaur blood whose DNA could be harvested, theoretically possible? ←Baseball Bugs What's up, Doc? carrots05:07, 4 October 2012 (UTC)[reply]
Red blood cells don't have a proper nucleus, so no, they don't have the same sort of DNA that other cells do. However, there are many nucleated cells in blood, including white blood cells, so you could get a full DNA sample from those. As far as the rest of the Jurassic Park scenario; which would involve using that DNA to make a T-Rex... no comment. --Jayron32 05:27, 5 October 2012 (UTC)[reply]
DNA is a stable molecule, but it still decomposes over time. There's no way there would be usable DNA in an amber sample.128.227.214.249 (talk) 20:19, 10 October 2012 (UTC)[reply]

The fossil part of Fossil fuels

How can we know that fossil fuels are originally from fossils? Before life appeared on Earth, there should have been plenty of carbon around, so couldn't the fuel have been formed by it? OsmanRF34 (talk) 22:09, 2 October 2012 (UTC)[reply]

While we wait for an answer to your question, I give you Abiogenic petroleum origin for the nay-sayers ;) --Tagishsimon (talk) 22:14, 2 October 2012 (UTC)[reply]
I know that there is such a kind of fringe theory about oil, but it doesn't explain the other side of the equation. Why is it regarded as common wisdom that oil's origin is from fossils? OsmanRF34 (talk) 22:19, 2 October 2012 (UTC)[reply]
Well, from direct observation we can see that plants and animals produce oils and methane (hopefully not too directly on that one). Methane is also produced by natural processes, but we don't know of any inorganic process which produces oil or coal. StuRat (talk) 22:27, 2 October 2012 (UTC)[reply]
In the case of anthracite (hard) coal, you sometimes find the fossils right in the coal: [7]. We also have intermediate steps, like peat bogs, around today. StuRat (talk) 22:20, 2 October 2012 (UTC)[reply]
"fossil fuel" does not mean "fuel made of fossils", it means "fuel that is a fossil", where "fossil" means "of the ancient past". That's all it means - fuel from long ago. -- Finlay McWalterTalk 22:21, 2 October 2012 (UTC)[reply]
Sure, but the question is why the biogenic origin theory is favoured over the abiogenic in the cases of liquid and gaseous fuel. Oddly, the best we have on this seems to be at Abiogenic_petroleum_origin#State_of_current_research which summarises arguments in favour of the biogenic origin theory. --Tagishsimon (talk) 22:25, 2 October 2012 (UTC)[reply]
One of the things to consider is that we've got examples of every stage along the mechanism from living things to coal. Consider things like Peat which is just very young coal, Lignite, Bituminous coal, anthracite, etc. You can pretty much find all of the steps along the mechanism right now. What we don't find is large quantities of all of the steps of so-called "abiogenic coal". --Jayron32 22:32, 2 October 2012 (UTC)[reply]
Fossil, according to Wikipedia, means something which is fossus, dug up from the ground. Fossil fuels are made of stuff that plants produce when they decompose, and are found in places where plants seem to have been decomposing. Scientists can not only theorise how oil occurs, they can/could (it's all gone, remember) predict whereabouts it would be and how much would be there, even under the ocean, so they know something. Household garbage can be compressed down some way to squeeze petrol out of it, sort of like coal can be squeezed into diamonds in a pressure machine, and those things are probably about as direct as any evidence we could ever get without time travel. Sap from some trees, for instance, drips and pours in large amounts. It hardens and becomes amber with little million-year-old insects in it. Maybe it's not tree sap after all, but it's made of tree-sappy stuff and has little tree insects stuck in it. But don't worry about that, because the oil is all gone away to a better place now, with the copper and other useful stuff. ~ R.T.G 00:00, 3 October 2012 (UTC)[reply]
Have I missed where we have an article on abiogenic coal? Thomas Gold seems to have suggested that some bituminous coal may have a non-biological origin. I am not aware of any serious, detailed, broadsweeping theory that all coal can be explained without reference to decaying plant matter. μηδείς (talk) 03:13, 3 October 2012 (UTC)[reply]
Why would you think there would be plenty of carbon around before the advent of life (photosynthetic life, in particular)? Given that it would be sitting in an atmosphere full of oxygen, for billions of years? Gzuckier (talk) 06:31, 3 October 2012 (UTC)[reply]
There was carbon in the form of calcite in limestone from the Archaean; the oldest hydrocarbon source rocks are of Paleoproterozoic age (page 46). Mikenorton (talk) 09:57, 3 October 2012 (UTC)[reply]
oh, OK, sure. I was thinking of elemental carbon, like big lumps of coal lying around on the ground. Gzuckier (talk) 16:46, 3 October 2012 (UTC)[reply]
This paper describes what is known about the early composition of the Earth's atmosphere and specifically discusses this in relation to the origins of life (and therefore the availability of carbon). Mikenorton (talk) 12:39, 3 October 2012 (UTC)[reply]

Rating rechargeable battery input and output

I think most people get cheaper electricity at night, and I wanted to work out if it would be viable to buy large batteries, such as those used in solar energy systems, charge them at night and then run some home electrical devices from them. Okay, so working out, I understand wattage. KWh just means kilowatt per hour. My heater here beside me is 2kWh and that means while switched on it uses 2 per hour, charged to me at 32 cents each. Now, trying to figure out how many kWh it takes to charge a battery and how many kWh it will return is not so straightforward, and there isn't much explaining it on the internet from what I see. It's just pie in the sky, but if you could get back much more than 50% of the energy expended charging etc., no reason not to work that out. There is some discussion board stuff on the net and I could probably work it out after a while, but it might take me a few hours of reading and frowning... Anybody on here just kind of know? ~ R.T.G 23:27, 2 October 2012 (UTC)[reply]

I can't see that working out. There's the inefficiency in charging and discharging the batteries, and the initial cost of batteries, plus maintenance costs, since they don't last long. And, from an environmental POV, there's all those old batteries to dispose of. A better approach would be to heat an insulated tank of water at night, then use that hot water to heat the home during the day. StuRat (talk) 23:36, 2 October 2012 (UTC)[reply]
It's more for the electricity itself. Water heating and conversion would be impractical for me (Ireland climate, small apartment), but if you could charge one of these extremely efficient new cells and discharge enough for the PC, the kettle, or even the oven, etc... I know it's probably unlikely, but it's also kind of frustrating not to be able to do the equations very easily. I have a torch powered by 2 CREE batteries about mid size between AA and D cell, light as softwood, but it's bright enough to dazzle and goes for a couple hours. I've no idea however how much wattage it takes to charge and how much it releases, only that it seems powerful and that large size batteries can be made of the same stuff. I know if it could be done people would probably be doing it already, but how close we are I can't tell. ~ R.T.G 00:14, 3 October 2012 (UTC)[reply]
Cree makes LEDs, not batteries. It sounds like you have some random lithium ion cells, probably 18650 ones, branded by some random manufacturer in China with some random brand they knew was associated with torches. Nil Einne (talk) 14:46, 3 October 2012 (UTC)[reply]
There's a lot of discussion out there about storing energy from off-peak times. This is a very reasonable idea, and can potentially aid in the efficiency of the entire network, as well as save money for users. The cost/benefit analysis is tricky, but you may be interested in the idea of using a flywheel to store energy. I can't sort through them right now, but /home flywheel energy storage/ presents several interesting results on google, and some are commercial products for home use: [8]. See also our article on flywheel energy storage. SemanticMantis (talk) 01:40, 3 October 2012 (UTC)[reply]
BTW, you said your heater is 2 kWh. I think you mean it's 2 kW, which means, if you use it for an hour, that makes 2 kWh. The main problem with batteries is that they are expensive and don't last, with inefficiency being a minor concern. The flywheel suggestion may work, because, unlike batteries, it shouldn't need to be replaced every few years. If you want electricity, rather than just heat, another option is to pump water into a water tower at night, and use that gravitational potential energy to run an electricity generator as the water flows back down to a lower tank, during the day. StuRat (talk) 01:43, 3 October 2012 (UTC)[reply]
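To spell out the kW/kWh distinction with the numbers from the question: a 2 kW heater running for one hour uses 2 kW × 1 h = 2 kWh of energy, which at the quoted 32 cents per kWh costs 64 cents per hour of operation.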
The most common way that energy is stored from off-peak hours, at least residentially, is a Storage heater, which basically heats up some bricks at night, and allows the heat to escape into the room during the day. Storing heat is nice because it is, pretty much by definition, 100% efficient - any electricity you use will be converted into heat, and the design is such that most of it can be directed when and where you want it. If you want to get energy back in a usable form (to power your computer, or whatever), it's a little harder. On a municipal level, it's probably most common to pump water up, and then let it fall back down to reclaim the energy: Pumped-storage hydroelectricity. This is what is done at the Robert Moses Niagara Hydroelectric Power Station: They generate more power at night (because they don't have pushy tourists who want to look at the falls), but they have higher demand during the day. They therefore pump water up into a man-made reservoir at night, and let it come down during the day to provide supplemental electricity. Not practical on a residential level, to say the least. One of the ways being looked at to store power during cheaper times (again, mostly on a larger scale, though possibly adaptable to a household) is a flow battery [9]. The general term for this sort of thing, by the way, is Load balancing. Sorry I'm not providing much specific help, but I thought I'd point out some of the things that are done. Buddy431 (talk) 04:21, 3 October 2012 (UTC)[reply]
From what I read, it looks like a lead-acid or lithium-ion battery will cost you more than the energy it can store in total, i.e. even if you could charge it for free, you'd still be losing money. And when you compare with the market price, the price at which companies are selling to one another, the whole thing looks even worse: at the moment it's quite high, 60€/MWh or 0.06€/kWh for peak hours. Storing the electricity will cost you maybe 3 times as much, even if you get the power for free. See link. Water to a reservoir: assuming 100% efficiency, pumping 36000 liters 10 meters higher will store 1 kWh. Don't expect a small pump to be very efficient, not in pumping nor in generating electricity. And you need two reservoirs, don't forget. I can't think of much to use the cheaper energy to your advantage. Maybe unplugging the fridge and freezer during the day, if the highest temperature is still acceptable. (You can fill all free space with water-filled containers to increase thermal mass.) Won't make that much difference, but at least you'll be paying a bit less, not a lot more. Ssscienccce (talk) 17:37, 3 October 2012 (UTC)[reply]
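Checking the reservoir figure with E = mgh: 36,000 L of water is about 36,000 kg, and 36,000 kg × 9.81 m/s² × 10 m ≈ 3.5 MJ ≈ 0.98 kWh — so roughly 1 kWh at 100% efficiency, as stated.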
They (some sort of battery) can't cost more if you charge them for free because that would render renewable energy useless, and I don't know about the safety and cost of keeping a flywheel in the home powerful and precisely engineered enough to run cookers and heaters off, but basically I have the impression that the idea isn't often considered, which happens. Been looking at renewable energies for years myself and not had that idea or given it any consideration. Oh well I will just have to do the sums but thank you for all the energy info. ~ R.T.G 09:25, 4 October 2012 (UTC)[reply]
The advantage of batteries is that they are portable. So, while you end up paying more for energy delivered by battery, having an extension cord running to your flashlight isn't very practical, and this is where batteries shine. StuRat (talk) 16:56, 4 October 2012 (UTC)[reply]
That's (one of the) the problem(s) with renewable energy at the moment, solar and wind energy are unreliable since they depend on the weather, and storing the energy is still more expensive than burning coal. It's not the cost of the energy needed to charge them, it's the cost of buying them and the limited number of charge-discharge cycles you get. If they lasted forever there wouldn't be a problem, but lead-acid for example only last maybe 600 cycles, so after a 1kWh battery has stored a total of 600kWh you have to buy a new one for 90€. That's 0.154€ per kWh, without the electricity cost. Unless you pay only 0.16€ for electricity at night, you're gonna be paying more than 0.32€ for the electricity you've stored (actual cost will be higher because the efficiency of charging and discharging is only 75-85%). Electricity suppliers give you the lower night rate because they can't store the energy themselves and many power plants have to keep running at night (coal plant takes 12 hours to start up again, nuclear may take weeks, gas turbines only minutes). If it was cheaper to store the energy than to sell at the price they do, it would make no economic sense for them not to store it themselves, and they could use bigger, more efficient storage options than you can. Ssscienccce (talk) 13:47, 5 October 2012 (UTC)[reply]
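The arithmetic behind that figure: a 1 kWh lead-acid battery costing about 90 € and lasting roughly 600 full cycles can deliver at most 600 kWh over its life, so the wear cost alone is 90 € / 600 kWh = 0.15 € per kWh stored — before adding the night-rate electricity itself and before the 75–85% round-trip efficiency losses.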


October 3

Motherboard oscillators in the GHz range

What does the oscillator circuit that generates 2.0 .. 4.2 GHz for the CPU on ordinary motherboards look like? It seems hard to find the "anonymous chip" among all the other components. It would be interesting to see how the wire paths have been done to deal with RF issues. Electron9 (talk) 02:25, 3 October 2012 (UTC)[reply]

In any case, you might be interested to know that most high-frequency digital logic chips are driven by fairly low-frequency clock sources. These frequencies are stepped up to high frequency, including the microwave range, on the silicon die, where the processes and parasitic effects can be controlled more carefully. You might want to read about frequency multipliers and phase locked loops.
For example, Intel's reference design for their 82583 10-Gigabit Ethernet Controller (which internally uses one of the highest frequencies present anywhere on many computer main logic boards) is driven by a 25 MHz (megahertz) crystal.
Once you've mastered low-frequency design, you can migrate to microwave engineering; and ultimately reach Planar Microwave Engineering, or, the art of putting very high frequency circuits together in a way that can be built into a silicon wafer or printed circuit board. Nimur (talk) 03:17, 3 October 2012 (UTC)[reply]
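A minimal sketch of the frequency-synthesis arithmetic those articles describe — the reference frequency and divider values below are illustrative placeholders, not taken from any particular part:

```python
# Integer-N PLL / frequency multiplier: f_out = f_ref * (N / M)
# Illustrative numbers only; real chips use their own references and dividers.
f_ref = 25e6          # 25 MHz external crystal (assumed reference)
N, M = 132, 1         # feedback divider and reference divider (hypothetical)

f_out = f_ref * N / M
print(f"output clock: {f_out / 1e9:.2f} GHz")   # 3.30 GHz from a 25 MHz reference
```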
I believe most processors on "ordinary motherboards" use a simple 14.318 MHz crystal which the processor internally multiplies to achieve the GHz clock rates, as the above replies describe. You can find the crystal by doing a google image search of clock crystal motherboard. But I don't think you'll find anything fancy about it; all the "RF issues" would be dealt with inside the processor itself. Vespine (talk) 03:40, 3 October 2012 (UTC)[reply]
The 14.318 MHz oscillator is the "dot clock" for the NTSC colour video system. It's an integer multiple of all the frequencies required in the NTSC system and its use began with the original IBM PC. It has no relevance to the CPU clock in modern PCs. Nimur has given the correct answer. Keit121.221.215.67 (talk) 11:09, 3 October 2012 (UTC)[reply]

I should have thought of the multiplier.. ;) What's the highest FSB frequency in use? And what does that circuit look like? Electron9 (talk) 12:11, 3 October 2012 (UTC)[reply]

The FSB no longer exists on most modern mainstream consumer CPUs, having died out with the Hammer architecture on the AMD side (i.e. the first Athlon 64 processors) in favour of HyperTransport, and with the Nehalem (microarchitecture) on the Intel side (i.e. the Core iX processors) in favour of the Intel QuickPath Interconnect. Intel Atoms still use a FSB, but it's unclear for how much longer. Nil Einne (talk) 14:42, 3 October 2012 (UTC)[reply]
Keit, do you have any source for: "It has no relevance to the CPU clock in modern PCs"? This source (I agree it's far from academic) suggests that the 14.318 MHz crystal has been used for the CPU clock reference since the dawn of PCs. I'm not saying it's right, I just haven't seen any evidence that it's wrong. Also, just to clarify, I did not dispute anything in Nimur's reply. Vespine (talk) 22:45, 3 October 2012 (UTC)[reply]
Here is a better article in which Figure A pretty clearly shows a 14.318 MHz crystal. Vespine (talk) 22:56, 3 October 2012 (UTC)[reply]
I think the article has several misconceptions. The 14.318 MHz crystal is perhaps used for the CPU in sub-28 MHz systems, not ones with a CPU pumping away at 500 .. 5000 MHz. The "RF chokes" in Figure B are used for SMPS intermediate energy storage in energy pumping, as can be deduced from the surrounding 3-pin chip and capacitors and the proximity to the CPU.
So far, modern sockets use separate clock connections and data transfers, where the clock is increased in frequency by a multiplier inside the CPU. So I'm interested in what the oscillator that drives the CPU sockets on modern motherboards looks like on the PCB. Electron9 (talk) 23:27, 3 October 2012 (UTC)[reply]
Not that I don't believe you, but I still don't see any source to support what you are saying. I have a bit of experience building circuits with microcontrollers; I built a project using the Microchip PIC32, which typically uses an 8 MHz crystal to run at a clock speed of 80 MHz (having a quick look at the datasheet, you can run it from a 3 or 4 MHz crystal). Ok, 80 MHz is not 2500 MHz, but 4 to 80 is still a much higher ratio than 14 to 28, so I don't really see why a 14.318 couldn't be multiplied up further. Vespine (talk) 00:11, 4 October 2012 (UTC)[reply]
There are hundreds, perhaps thousands, of Intel and competitive main CPU models. Similarly, there are thousands of main logic board designs. I'm certain if we look wide enough, we'll find a modern CPU main logic board with a 14.318 MHz crystal on it, driving the CPU. I know I've seen 25 MHz clock drivers on a lot of Intel designs lately, but this is hardly a universal standard; and Intel is not the only manufacturer of computer processors. If we consider esoteric designs by non-mainstream vendors, or low-volume, special-purpose systems, the variety increases even more. Really, the exact specification of the clock frequency is a minor detail. Crystals, or other digital reference clocks, can be built with almost any specified frequency. The original question was asking what the circuit looks like: and the circuit "often" looks like a phase locked loop on the main silicon die, driven by a low-frequency crystal oscillator external to the chip. If anyone wants to be more specific than that, we've got to stop speaking in the abstract and start naming part-numbers and technology steps. Nimur (talk) 01:41, 4 October 2012 (UTC)[reply]
My reply was specifically addressing keit's rebuke of my 1st answer: It has no relavence to the CPU clock in modern PC's. I showed a couple of sources that showed it wasn't completely irrelevant. Microchip use an 8 MHz clock in nearly all of their reference designs and starter kits. I would not be surprised if a arbitrary value had been chosen to use on PC motherboards way back when, so that one particular crystal became the standard, even if the only benefit was from volume manufacturing. In the past, and in some specific cases, the crystal value wasn't inconsequential, and it's not that hard to imagine that the crystal choice these days is just a remnant of when it did matter, why change it when it does the job it's supposed to? But yes, this is essentially OT. Vespine (talk) 05:24, 4 October 2012 (UTC)[reply]
Just for some additional info on the two main players in the x86 field. In Intel's case, [10] [11] (from [12]) suggest the base clock for the Sandy Bridge (microarchitecture) (and I think Ivy Bridge and likely all following processors for a while) is 100 MHz, which is the only clock needed by the CPU. This clock is provided by the Platform Controller Hub (i.e. an Intel chipset) [13]. (This of course greatly limits overclocking with locked multipliers.) I don't know what the internal crystal is; it isn't necessarily (and probably isn't) 100 MHz.
With the AMD Hammer and following architectures, it's more complicated, and of course it's also been a long time so things have changed somewhat over time (PCI Express didn't exist at launch; PCI & AGP were the standards for normal desktop computers). As per [14] [15], a variety of clocks need to be provided, including a 14.318 MHz reference clock, although some of these are for the motherboard/north bridge instead of the CPU so will depend on that. Many AMD chipsets, possibly since the SB7xx line of chipsets [16] [17] [18] [19], have an internal clock generator, but it wasn't generally used on the SB700 due to a bug and I think remains optional (the motherboard designer can choose to provide their own clock generator instead). I believe this includes the SB950 [20], although I couldn't find an AMD data sheet to confirm this. BTW, to give an example of a more modern system compared to the older Hammer example, you can see the RD990/980/970 mentions the need for an external clock generator (as per earlier, I believe this can be provided by the SB) to provide a few 100 MHz differential pair clocks for the PCI Express and HyperTransport links (and a 100 MHz differential pair clock is also used by the CPU AFAIK, including the latest Trinity line) and also a 14.318 MHz reference clock [21].
You can also see an example here of a clock generator for an AMD GPU [22], which provides a 100 MHz clock for the memory and a 27 MHz reference clock for the GPU itself (it uses a 27 MHz crystal).
Nil Einne (talk) 04:07, 5 October 2012 (UTC)[reply]
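For a concrete (illustrative) example of the base-clock-times-multiplier scheme described above: a 100 MHz base clock with a CPU multiplier of 34 gives 100 MHz × 34 = 3.4 GHz, which is why a locked multiplier leaves only small base-clock adjustments available for overclocking.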

Audio waveform graphs

What is the official name for these sorts of graphs? Is there a Wikipedia article about them?

What do the numbers on the Y axis correspond to? I gather they refer to amplitude, but what sort of unit are the values in? I've seen graphs where the values go from: 1 to 0 to -1 (like the one above); 30000 to 0 to -30000[23]; and -1 to -infinity to -1 (some audio software). Why the heck are these numbers so random? Do they actually mean something? Kaldari (talk) 02:41, 3 October 2012 (UTC)[reply]

These are line graphs of waveforms. I'm not aware of any "official" name for them; it looks like you took a screen shot of Audacity (software). The numbers can be just about anything; if they range from -1.0 to +1.0, they are normalized; if they range from about -30,000 to about +30,000, the axis is probably directly displaying the value of the signed 16-bit PCM sample data. (Even if the audio-file originally started as an encoded file, like an MP3, its decoded signal may be represented by PCM data internally by the software). Other times, axes will be labeled logarithmically; or in normalized decibel levels. (Logarithmic graphs often represent a "zero" signal-level as "negative infinity", which is a "correct" mathematical representation for the value of log(0); though there are various other conventions used for signal-processing, such as a logarithm of a moving average). As with any graph, if the axis isn't labeled with units, the data format is ambiguous, but we can draw reasonable conclusions based on common practice. Nimur (talk) 02:55, 3 October 2012 (UTC)[reply]
Thanks for the explanation. A follow-up question: If it is a normalized logarithmic graph, what would be the "correct" values for the top and bottom of the chart? Would they both be -1, both be 1, or would one be 1 and one be -1? And if it's not too much trouble: Why? Kaldari (talk) 04:13, 3 October 2012 (UTC)[reply]
Just to clarify, the normalized (-1, +1) range is probably a linear plot of amplitude, not a logarithmic plot. There are lots of different, valid ways to represent a logarithmic plot, and a normalized log plot means that the signal's maximum amplitude has been defined at 0 dB, and the minimum amplitude could be anything, depending on the signal, including -infinity for a signal that goes to zero amplitude. This takes advantage of the convenient properties of logarithms: scaling the entire signal by a constant amplification just changes the markings on the y-axis, without changing the log plot at all. So, I'd expect the axis labels on a normalized log plot to be (-infinity, 0). Nimur (talk) 14:38, 3 October 2012 (UTC)[reply]
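As a rough illustration of that normalized-log convention (peak defined as 0 dB, a zero sample going to minus infinity), here is a small Python sketch; the sample values are invented for the example and nothing here comes from the screenshot being discussed.

    import math

    samples = [0.0, 0.05, -0.2, 0.7, -1.0, 0.3]   # hypothetical normalized samples

    peak = max(abs(s) for s in samples)

    def to_db(x, ref):
        """Amplitude ratio expressed in decibels; a zero sample gives -infinity."""
        return 20 * math.log10(abs(x) / ref) if x != 0 else float("-inf")

    for s in samples:
        print("%6.2f  ->  %s dB" % (s, to_db(s, peak)))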

It's called a time history. The y-axis may be in volts (typically -5 to +5), Pa (perhaps -1 to +1), % of full scale (-100 to +100), signed integer (-32000 to +32000), or almost anything else. You can't easily represent negative numbers on a logarithmic plot. This may disagree with the previous answers, oh well. Greglocock (talk) 04:36, 4 October 2012 (UTC)[reply]

MP3 sample rates are 32,000 Hz, 44,100 Hz and 48,000 Hz or thereabouts. Maybe it is from an MP3 and the 30,000 represents the hertz. Just a guess, sorry. Check out spectrograms. ~ R.T.G 10:24, 4 October 2012 (UTC)[reply]
No, not on those graphs. Greglocock (talk) 00:59, 5 October 2012 (UTC)[reply]

Selective dissolution

What solutions will dissolve calcite (calcium carbonate), but not sphalerite (zinc sulfide), pyrite (iron sulfide) or galena (lead sulfide)? 203.27.72.5 (talk) 04:17, 3 October 2012 (UTC)[reply]

Water? --Jayron32 04:19, 3 October 2012 (UTC)[reply]
According to the CRC handbook, those compounds all have solubilities in water which are comparable to that of CaCO3 (0.013 g/L): 0.00086 g/L for PbS, 0.0069 g/L for ZnS and 0.0049-0.0062 g/L for the various forms of iron sulfide. 203.27.72.5 (talk) 04:45, 3 October 2012 (UTC)[reply]
Acidified water? Calcium Carbonate should fizz and dissolve in a moderately acidic solution. None of the rest will. --Jayron32 04:47, 3 October 2012 (UTC)[reply]
Better to use a weak acid (vinegar should be sufficient) - I've found conflicting reports, but it seems that pyrite, at least, might dissolve in strongly acidic solutions. Buddy431 (talk) 05:03, 3 October 2012 (UTC)[reply]
All of those sulfide minerals react to form H2S in acidic conditions, including acetic acid (vinegar). 203.27.72.5 (talk) 05:12, 3 October 2012 (UTC)[reply]
I don't think so, at least not quickly [24]. Carbonate minerals will dissolve very quickly in even weakly acidic solutions. Sulfides, if they do at all, will be much slower to dissolve in weak acids. Buddy431 (talk) 05:28, 3 October 2012 (UTC)[reply]
The article you cited calls pyrite a "metal" and conflates high pH with high acidity. I've tried it. Adding vinegar causes an instant odour of hydrogen sulfide. 203.27.72.5 (talk) 05:34, 3 October 2012 (UTC)[reply]
It should be noted that the human nose's ability to detect even the smallest traces of H2S is well documented; a sample of calcium carbonate would long since have dissolved to nothing before a noticeable change (other than the smell) occurred with any of the sulfides. So one could certainly distinguish between them in that regard. Of course, they're all readily distinguishable by appearance alone. I've never met someone who couldn't tell calcite from galena from pyrite just by looking at them. --Jayron32 06:23, 3 October 2012 (UTC)[reply]
The idea is actually to remove calcite from several pieces of composite sulfide minerals, not as a test to distinguish them. Anyways, I've found boiling NaEDTA solution works well and doesn't dissolve the Fe, Zn or Pb at all, as confirmed by elemental analysis of the liquor. 203.27.72.5 (talk) 06:33, 3 October 2012 (UTC)[reply]
Resolved

Car servicing

Why do car manufacturers suggest you get your car serviced based on the time that has passed, even when the distance travelled during that time is very low? Is it a business policy to sell more lubricants and other accessories, or is there engineering logic behind it?

Some degradation of the vehicle can be a function of time - corrosion of parts, slow leaks from hydraulic systems, and the settling and congealing of lubricants and other fluids. A periodic check also allows the garage to check the vehicle's VIN against outstanding recalls and safety notices and perform any checks, maintenance, and repairs that those suggest. -- Finlay McWalterTalk 10:36, 3 October 2012 (UTC)[reply]
Also, low mileage isn't always a good indication that the car has seen little wear; for some users a car may only be driven a few miles each day, which results in very low mileage but a relatively high number of starts and a higher proportion of cold operation. -- Finlay McWalterTalk 10:45, 3 October 2012 (UTC)[reply]
Oils and greases degrade by oxidation regardless of how much the vehicle is used. Roger (talk) 11:34, 3 October 2012 (UTC)[reply]
And of course, as is often pointed out, it's the lower-temperature part of driving that degrades the oil the most; unburned fuel, as well as water, carbonic acid, sulfuric acid, etc. from combustion, leaks past the rings and ends up in the oil. Sustained higher-temperature operation eventually boils these out, but low mileage over a longer time often means a lot of short-duration, low-temperature operation and little high-temperature operation. Gzuckier (talk) 16:51, 3 October 2012 (UTC)[reply]

Fishing or harvest algae from a submarine?

Is there any technical or economic hindrance to fishing, or to harvesting algae, from a submarine while submerged (or possibly on the surface)? Why do I ask? Because on-board supplies are limited to 3 months. Electron9 (talk) 14:46, 3 October 2012 (UTC)[reply]

Assuming you're talking about a military sub, then sure. Opening holes in a submerged sub is a technical (though not insurmountable, as is the case for most of this) hindrance. Adding a bunch of drag, and potentially noise, to a military sub is a massive operational hindrance. Turning algae into food sailors want to eat is a technical hindrance. Doing any of this stuff surfaced is likewise a massive operational hindrance. Paying to do all this stuff, when resupplying every few months is a perfectly reasonable (and for morale purposes, functionally necessary) activity, is an economic hindrance. — Lomn 15:25, 3 October 2012 (UTC)[reply]
Now, all that said, I wouldn't be surprised to learn that World War-era submarines fished for some food, but that's because of the confluence of submarines having long range (thus more likely to try to get fresh meat) but conventionally running on the surface (thus in a good position to trail a few lines regularly). Of course, subs of that era weren't looking at food as their primary endurance constraint. — Lomn 15:38, 3 October 2012 (UTC)[reply]
In the U.S. Navy the submarine service makes a point of having the best food in the fleet by way of compensation for the isolation. I don't think algae-derived food would be well-received. In any case, few sailors will want their tours extended beyond 90 days, so food isn't a limiting factor. Dehydrated food could be supplied that would keep people fed for longer than that, so nutrition (as opposed to food) isn't the limiting factor. If one wants greater use out of the submarine, a rotating crew deployment can be implemented: ballistic missile submarines, for example, have alternating crews (the Blue/Gold crew system) that maximizes the sub's deployed time. Acroterion (talk) 15:34, 3 October 2012 (UTC)[reply]
On an annoyingly theoretical level, if you can supply a whale with food from fish or plankton while at sea, it should be possible to supply a submarine. Gzuckier (talk) 16:55, 3 October 2012 (UTC)[reply]
This page discusses food on a WWII U-boat; The food onboard the U-boats. I seem to recall that WWI U-boat crews used to hang a fishing line over the side when they had an extended period on the surface, but I can't find a reference for it now. I can't imagine many sailors wanting to eat seaweed though. Alansplodge (talk) 23:47, 3 October 2012 (UTC)[reply]
If you really wanted to support fishing from submarines, I suppose you could design them with "scoops" in front which filter out solid objects (like fish), from the water, like a baleen whale. This would be better than trailing fishing nets or lines which could get snagged. StuRat (talk) 00:01, 4 October 2012 (UTC)[reply]
I've just found How Do Submariners Eat? They Catch their Fish from the Bottom of the Sea. Sadly there's no date for this and Google can't find it anywhere else on the web. The illustration looks 1920s or 1930s to me. Alansplodge (talk) 00:03, 4 October 2012 (UTC)[reply]
What a cool illustration! I wonder if that method was regularly (or ever) employed around the WWII era. Also, that webpage cites Popular Mechanics, 1920, for its "copy/image." SemanticMantis (talk) 03:38, 4 October 2012 (UTC)[reply]
I don't know why I didn't look before, but the United States H class submarine USS H-5 (SS-148) mentioned in the article, was only in commission between Sep 1918 and Oct 1922. It's interesting that the magazine article refers to a US submarine as "a U-boat"! Alansplodge (talk) 21:18, 4 October 2012 (UTC)[reply]
My late father in law was aboard the USS Flounder (SS-251) for her third, fourth and fifth war patrols in the southwest Pacific; as the article notes, those boats could only stay out about 30 days before they had to re-provision and refuel. He said they started a patrol with food in every conceivable place; unfortunately I never asked him about fishing, but given the areas they were patrolling I doubt they had more than a minimum of people topside at any time when surfaced so they could dive quickly if spotted. Acroterion (talk) 03:27, 4 October 2012 (UTC)[reply]
My living father-in-law was a supply officer on Ohio-class submarines and I'll confirm, from his stories, the above. He said that at the start of any sojourn people felt about a foot shorter than at the end, because the walkways were lined with food. You were literally walking on your dinner. It was crammed into every open space. I don't believe that fishing was considered an option. --Jayron32 05:11, 4 October 2012 (UTC)[reply]
Note one concern is that you wouldn't want to clean the fish inside a sealed sub, as that would really stink the place up badly. StuRat (talk) 04:43, 4 October 2012 (UTC)[reply]
Depending on how long they've been submerged, it might actually be an improvement. ←Baseball Bugs What's up, Doc? carrots11:48, 4 October 2012 (UTC)[reply]
<smart_ass_warning>I'm guessing the fish have been submerged all their lives. </smart_ass_warning> StuRat (talk) 21:37, 6 October 2012 (UTC) [reply]

try adjusting the phase

See http://en.wikipedia.org/wiki/Time_constant#Relation_of_time_constant_to_bandwidth - an equation is derived which essentially describes the amplitude V as a function of the time constant and the forcing frequency omega when the forcing term is a sinusoid. Is there a comparable relationship between the phase and (tau, omega)? Widener (talk) 18:11, 3 October 2012 (UTC)[reply]

A full and easy-to-understand algebraic derivation is provided in the section on forced oscillators in Mechatronics, (Alciatore & Histand), in the chapter on System Response for complex systems. Nimur (talk) 20:46, 3 October 2012 (UTC)[reply]
Can you reproduce it here? I don't have a copy of that book. Widener (talk) 22:32, 3 October 2012 (UTC)[reply]
Have you looked at the links from that page, in particular the freely-available PDFs of MathCAD examples? Not sure, but they might have what you need. -- Scray (talk) 23:10, 3 October 2012 (UTC)[reply]
Thanks, I managed to find the result in "first order system frequency response" ; a derivation would be nice though. Widener (talk) 02:53, 4 October 2012 (UTC)[reply]
Starting from the general solution at Time_constant#Relation_of_time_constant_to_bandwidth: for t ≫ τ the transient term dies away and the solution reduces to V(t) = A·sin(ωt − arctan(ωτ))/√(1 + (ωτ)²). The phase difference is the argument of 1/(1 + jωτ), which is minus the argument of 1 + jωτ, which is −arctan(ωτ). -- BenRG (talk) 01:31, 5 October 2012 (UTC)[reply]
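A quick numeric check of that result in Python, using the first-order transfer function 1/(1 + jωτ); the values of tau and omega below are arbitrary, chosen only for illustration.

    import cmath
    import math

    tau = 0.01          # arbitrary time constant, seconds
    omega = 250.0       # arbitrary angular frequency, rad/s

    H = 1 / (1 + 1j * omega * tau)   # first-order transfer function at s = j*omega

    amplitude = abs(H)               # should equal 1/sqrt(1 + (omega*tau)**2)
    phase = cmath.phase(H)           # should equal -atan(omega*tau)

    print(amplitude, 1 / math.sqrt(1 + (omega * tau) ** 2))
    print(phase, -math.atan(omega * tau))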
Chapter 4, System Response, equation cheat-sheet? Now that there's an internet, it seems that nobody wants to expend even the slightest effort investigating interesting topics anymore... this is really a tragic development, because it's never before been easier to find all the information you ever wanted. I recall a time, not so very long ago, that if one wanted to know things, one had to read and study extensively; and sometimes even travel great distances to get access to useful educational resources. Nimur (talk) 23:19, 3 October 2012 (UTC)[reply]

October 4

Solar Panel Kits

I'm looking at different solar panels and I'm really confused. How much power would a 20 watt panel produce? Is it enough to run, let's say, a fan? Or a laptop or a light bulb?

If I were to hook it up to a used car battery to store the electricity produced, what equipment would I need to, let's say, charge my phone from the car battery without damaging the phone [by giving it too much electricity, or for too long (overcharging)]?

Any help and/or links would be greatly appreciated :-) — Preceding unsigned comment added by 76.87.48.246 (talk) 04:44, 4 October 2012 (UTC)[reply]

20 watts is very little. It could power a CFL bulb equivalent to a 75 watt incandescent light. It probably wouldn't be worth trying to store that little energy. And note that you will get less than 20 watts if it's at an angle to the sunlight, it's overcast, the panel is dusty, or it's old. StuRat (talk) 05:06, 4 October 2012 (UTC)[reply]
Do please confirm that 20 Watts is a rate and not an amount of energy. μηδείς (talk) 05:16, 4 October 2012 (UTC)[reply]
For the purposes of this discussion, the distinction is unimportant (it only becomes important when talking about storing the electricity generated by the panel, and that isn't practical with such a small panel). StuRat (talk) 05:20, 4 October 2012 (UTC)[reply]
The site says:

LiteFuze 20W Mono-crystalline Solar Panel 20 Watt - High-Efficiency; Maximum Series Fuse 2 A; Cells: 36 units, 125x125 monocrystalline silicon; NOCT 48 +/-2 C; Operating Temperature -40 C to 85 C; Max. System Operating Voltage 1000 V DC; CE, CSA, DVE, IEC Certified

So in other words, these portable and small panels are kind of useless except to maybe charge a cell phone?

— Preceding unsigned comment added by 76.87.48.246 (talk) 05:22, 4 October 2012 (UTC)[reply] 
That sounds like it's 20 watts per cell. 20 watts times 36 units is 720 watts, which is a more useful amount. Can you provide a link to the site, so I can confirm this ? StuRat (talk) 05:26, 4 October 2012 (UTC)[reply]
http://www.sears.com/shc/s/p_10153_12605_SPM7433169811P?sid=IDx20110310x00001i&srccode=cii_184425893&cpncode=30-74342331-2

http://www.amazon.com/LiteFuze%C2%AE-Mono-crystalline-Solar-Panel-Watt/dp/B0079OA7SK/ref=sr_1_1?ie=UTF8&qid=1349328481&sr=8-1&keywords=litefuze+20w — Preceding unsigned comment added by 76.87.48.246 (talk) 05:28, 4 October 2012 (UTC)[reply]

Unfortunately, that looks like it really is just one 20 watt panel. StuRat (talk) 05:34, 4 October 2012 (UTC)[reply]

Thanks for helping me out :-) — Preceding unsigned comment added by 76.87.48.246 (talk) 05:37, 4 October 2012 (UTC)[reply]

I know my biggest TV (42" plasma) can use 1 kW, my oven uses 2 kW, and I've a 20 or so inch LCD monitor that uses up to 54 watts. It can be hard to figure out the wattage of an appliance because it is often written on the retail packaging, but not on the product itself. Maybe something like a 400 watt rated panel in top sunlight to run a laptop without batteries? Maybe 500, and even more if it is an extreme gaming rig. I am gauging that by what a desktop computer might use: 300 watts or more for the PSU, 50 watts or more for the monitor, plus speakers and router. It wouldn't use 400 watts all the time, but if it wants to run the DVD and hard drive at the same time for a second, it will need the full power available or it will fizzle out or at least not work. My kettle says underneath that it uses between 2520 and 3000 watts, which makes it the most power hungry device here. ~ R.T.G 10:56, 4 October 2012 (UTC)[reply]
Laptops are easy to judge - just see what the power supply is rated for. Mine is 90W. I doubt there is anything outside of netbooks that can run off of a 20W panel. One of the Amazon reviews is from somebody using it to run a few small PC fans for greenhouse exhaust. I think that is probably the sort of application this is designed for - running a small load somewhere without easy power access. The greenhouse is perfect because the fans don't need to run as strong or at all when the greenhouse and panel aren't in full sun. 209.131.76.183 (talk) 13:25, 4 October 2012 (UTC)[reply]
Laptop power supplies are meant to charge the battery at a reasonable rate (as fast as possible without causing too much heating of the battery). Looking at a random laptop (HP ProBook), it has a 55 Wh battery for a maximum battery life of 8.5 hours. Taking a more realistic value of 5.5 hours, it would consume 10 W on average.
The only question is whether the battery charging circuitry will accept a lower supply current than the 65 W adapter it comes with. But when using an old car battery as storage, that doesn't matter: these can supply kilowatts and accept any charge rate. The problem with car batteries is that they aren't meant for deep discharging; they are designed specifically for high peak currents (driving the starter motor). That means they're made with many thin lead plates, and totally discharging them will damage those, resulting in high losses (self-discharge rate). When I left my lights on and didn't use the car for several days, I had to buy a new one; even fully charged, it wouldn't start after one day.
The other problem with using a car battery: say you have one of 240 Ah; that's 240 Ah * 14 V = 3.36 kWh. Self-discharge rates for them are 1% to 25% per month according to one website; take 20% as a guess for an old one, and that's a loss of roughly 0.74% per day (assuming a constant discharge rate), or about 25 Wh per day, so you'll need more than an hour of the maximum power your solar cells can provide just to maintain the charge level. That's why a car battery would be too large for a 20 W cell.
Also, it would take 168 hours (exactly a week) to fully charge it, or 3.5 days if you limit discharge to 50%. That's in theoretical days of 24/24 maximum sunshine; I'm not sure what a solar cell actually delivers per day, but Photovoltaic system seems to suggest 5 hours of rated wattage per day (750 Wh per day for a 150 W panel), meaning you have only 100 Wh per day, and the 3.5 days mentioned earlier would in fact be 3.5*24/5 = 16.8, say 17 days. All this assumes 100% charging efficiency; the real value would be less than 80% imo.
In any case, you'd be better off with lower-capacity, low-loss rechargeable batteries. Low-self-discharge NiMH seems a good option, rather than regular NiMH, because the latter have a very high self-discharge rate (5%-20% the first day, 0.5-4%/day after that). Ssscienccce (talk) 16:07, 5 October 2012 (UTC)[reply]
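A small Python sketch of the back-of-the-envelope arithmetic above; the 240 Ah, 14 V, 20%-per-month and 5-sun-hour figures are the guesses used in that post, not measured values.

    battery_capacity_wh = 240 * 14          # 240 Ah at ~14 V  -> 3360 Wh
    panel_w = 20                            # rated panel power, W
    sun_hours_per_day = 5                   # rough "hours of rated wattage" per day

    self_discharge_per_month = 0.20         # guess for an old car battery
    self_discharge_wh_per_day = battery_capacity_wh * self_discharge_per_month / 30

    daily_harvest_wh = panel_w * sun_hours_per_day

    print("Self-discharge per day: %.0f Wh" % self_discharge_wh_per_day)  # ~22 Wh (the post rounds to ~25)
    print("Panel output per day:   %.0f Wh" % daily_harvest_wh)           # 100 Wh

    # Days to put back half the capacity (50% depth of discharge), ignoring charging losses:
    days_to_recharge_half = (battery_capacity_wh / 2) / daily_harvest_wh
    print("Days to recharge 50%%:   %.1f" % days_to_recharge_half)        # ~17 days, as above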
I'm disappointed by StuRat's dismissal of a 20W panel as not providing sufficient power to store; it's all about the applications. Sure, 20 watts isn't enough to drive major appliances, but it's a useful trickle if you're off the grid. Consider devices like this one: a portable 2.5 watt panel that can charge a couple of AA batteries – or a connected USB device – in a few hours of sun. It won't run a refrigerator, but it will give you a reading light, a charged ebook reader, a few minutes of cell-phone usage, a weather report on the radio, or a day of GPS reception when you're out in the wilderness. The usage pattern also matters; bigger batteries and smaller panels can be a reasonable combination if they're used at (for example) a remote cabin that is only used on the weekend: seven days of charging, and only two days of usage.
That said, what you can't do is directly hook a solar panel to a battery and expect good results. You'll need to build or (more likely) buy a charging controller: a device that regulates the voltage supplied to the battery for proper charging (and cuts off the power when the battery is full) and which prevents the slow discharge of the battery through the panel at night. (Some, but not all, panels are wired with a diode in line to prevent this type of discharging.) 12-volt chargers and tools can be connected to the car battery directly, but make sure that there is a suitable fuse or breaker in the line—a short circuit across a car battery can produce hellishly high currents. TenOfAllTrades(talk) 14:11, 4 October 2012 (UTC)[reply]
My point is the lack of return on investment. You are unlikely to be able to buy the solar panel, charging controller, and rechargeable batteries for less than disposable batteries would cost over the lifetime of those devices. The situation changes when the solar panels are scaled up. However, there are situations where such a small solar panel may be useful to directly use the energy produced, like the example given above of powering a ventilation fan (let's say on a shed without electricity service). StuRat (talk) 16:39, 4 October 2012 (UTC)[reply]
One would be wise not to neglect the situations where it's difficult to acquire fresh, charged batteries on a regular basis: hiking in the backwoods, staying in a remote rural cabin, living in a third-world country or war zone where deliveries are infrequent or unreliable. Sometimes it's nice to have the option of a solar charge just in case you forgot to pack batteries, or the power goes out unexpectedly.
As well, I am curious about how you calculated the return on investment in this situation. To do a back-of-the-envelope estimate, the Powerfilm device I mentioned above is selling on Amazon for $72.50 ([25]), including free shipping to the U.S. That charges two NiMH rechargeable batteries (buy four in case of cloudy spells, call that $8 all told) in four hours of sunlight. A quick survey of Amazon puts the cost of name-brand disposable alkaline AA cells at about $0.30 apiece (you'll pay more if you buy them in smaller packages at a bricks and mortar retailer, though). Assuming an average of just four hours sunlight per day, the solar charger saves $0.60 in disposable batteries per day. It pays for its entire $80 purchase price in about four months of continuous usage, or three years of summer weekends. TenOfAllTrades(talk) 20:22, 4 October 2012 (UTC)[reply]
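For what it's worth, that payback estimate can be written out as a few lines of Python; all the prices are the rough figures quoted in the post above.

    charger_cost = 72.50         # solar charger (price quoted above)
    nimh_cost = 8.00             # four NiMH AA cells
    disposable_cost_each = 0.30  # name-brand alkaline AA
    cells_used_per_day = 2       # two AAs charged per day of sun

    savings_per_day = cells_used_per_day * disposable_cost_each          # $0.60
    payback_days = (charger_cost + nimh_cost) / savings_per_day

    # ~134 days, i.e. roughly the "about four months" figure above
    print("Payback: %.0f days (about %.1f months)" % (payback_days, payback_days / 30))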
1) That's not the device the OP asked about, which would require the additional purchase of a charging unit.
2) You listed the sale price, so let me compare with the sale price I get on disposable AA batteries, which is 8 for $1, or 12.5 cents each.
3) I don't believe rechargeable batteries hold as much of a charge as disposable batteries, especially after many discharge/recharge cycles.
4) I believe they also tend to discharge faster when sitting on the shelf.
5) The unit you linked to looks rather fragile, with folding electrical connections and such. I'd only use that in a window, which means I wouldn't get full sunlight on it most of the day (moving it from window to window might help). If you left that outside to charge for 4 hours a day for 4 months straight, I'd be very skeptical of it lasting. Also, any wind might blow it over.
6) They didn't mention angling it towards the sunlight. That doesn't look very easy to do with this device.
7) Applications which burn through 2 AA batteries a day, every day, are uncommon. Any application which does this clearly needs larger batteries. StuRat (talk) 04:54, 5 October 2012 (UTC)[reply]
Actually a decent rechargeable NiMH cell, especially an LSD one, can provide more charge than a disposable cell, with high-drain usage patterns or when compared with zinc-carbon primary cells, both of which I presume is the case here given the mentioned usage patterns and prices. Cell life depends on the type, but again a decent LSD cell should get at least a few hundred cycles. Even LSDs do discharge faster compared to primary cells, but it isn't that fast, and this seems irrelevant if we are talking about using and charging them regularly, as seems to be the case here. Nil Einne (talk) 05:13, 5 October 2012 (UTC)[reply]
The batteries I use in my mouse and my camera are rechargeable AAs, both Duracell and Energiser. As I recall they were touted to last for 1,000 charges at least. That's like 100 bucks per battery at 12c, and imagine all the chemical crap that gets flushed out when you are recycling 1,000 batteries... Anyway, my batteries were like ten euros or more for a pack of 4, and I'd guesstimate I charge a pair every two days or so. I work that out to last me 8,000 days, or as they say in China, twenty-odd years. I think they will be corroded into uselessness by the air before they mechanically pack in. I charge them with a Uniross charger which I believe was cheap and has been used for 7 or 8 years, and it couldn't have cost more than 20 euro I think. So around 50 euros plus electricity to replace 8,000 disposables, and even if they are only half as good, that's 4,000, which at 12c is about 360 euro. I doubt they'll run to €300 in electricity for charging, and I only have to dispose of them once before starting again. 8,000 AA batteries would measure about 100cm x 30cm x 30cm, which is like half a full-size wheelie bin, and where I live, if you only put bins out every few months, that will cost you about half of €80, which almost covers the hardware costs to begin with. So, if it is only the electricity I am paying for vs disposables at about 12c or 8c euro, I'd have to run 250 watts for one hour during the day to be charged 8c. The charger doesn't give a wattage rating, but I sincerely doubt it runs at 250 watts charging all four slots. In fact I know it doesn't, because you can get a feel for the wattage of an appliance by the heat it gives out, and I estimate a maximum of 50 to 100 watts. So I'd make an uneducated but careful guess that the rechargeable AAs are more like a fifth or a quarter of the cost of the disposables, while the rechargeables are top brands and the disposables are pound shop (cheapest imports, plastic toys, etc.). But you can't beat a good pound shop all the same. ~ R.T.G 13:59, 5 October 2012 (UTC)[reply]

Tool to remove fuel filler cap?

I have bad wrist atrophy from an injury and I can't get the gas cap off my car. Are there any tools that will help me? --Wrk678 (talk) 05:36, 4 October 2012 (UTC)[reply]

Try a large channel lock wrench: [26], or perhaps this type of jar opener: [27]. Also, if the other wrist is OK, why not use that hand ? StuRat (talk) 06:16, 4 October 2012 (UTC)[reply]
Or ask someone to help you. Astronaut (talk) 16:41, 4 October 2012 (UTC)[reply]
You might also want to replace that gas cap with one which is easier to remove. StuRat (talk) 16:44, 4 October 2012 (UTC)[reply]
Would something like one of these help you? - Karenjc 19:22, 4 October 2012 (UTC)[reply]


Do they make fuel caps that are easier to open for a Honda Accord? --Wrk678 (talk) 11:10, 5 October 2012 (UTC)[reply]

If you have an auto parts store such as a NAPA or AutoZone nearby, I recommend talking to someone there. They should have replacement gas caps, and you can see if there are any options that would be easier for you. Looking at Amazon, I don't see anything designed to be easy-open, but there are several different styles, one of which may be better than the others for you. 209.131.76.183 (talk) 11:42, 5 October 2012 (UTC)[reply]
A gas cap turner; you can order them online, here for example, $15.95. Quote: "Many of us, especially those with Arthritis or hand injuries, will appreciate this great new tool [which] uses leverage to twist your gas cap open easily and to tighten as well." A tip that may not help with a gas cap (or a bad wrist): when I have trouble with plastic screw caps on bottles, I wrap some double-sided sticky tape around the cap; it gives so much grip that you don't have to squeeze, only turn. Ssscienccce (talk) 11:12, 6 October 2012 (UTC)[reply]
A good idea, but I'm not sure it will fit all gas caps, and the lever arm looks too short to help much. StuRat (talk) 00:16, 7 October 2012 (UTC)[reply]

Tastes of the alkali metal halides

NaCl tastes salty. What do the other alkali metal halides (especially the chlorides) taste like? (Obviously, please ignore the more poisonous ones.) Double sharp (talk) 07:31, 4 October 2012 (UTC)[reply]

Look at http://nsrdec.natick.army.mil/LIBRARY/80-89/R81-77.pdf. The predominant taste is salty, especially at higher concentrations, but the higher-molecular-weight salts are more bitter than salty. LiI tastes sour and bitter. Graeme Bartlett (talk) 09:29, 4 October 2012 (UTC)[reply]
Thank you Double sharp (talk) 11:07, 4 October 2012 (UTC)[reply]

Necessary properties for an element to be a halogen

What properties (if any) would element 117 have to display for it to be counted as a halogen? Please give sources, if possible, as I need this information for the ununseptium article. Double sharp (talk) 07:33, 4 October 2012 (UTC)[reply]

Well, the name suggests that it should form salts. So with an alkali metal you would get a salt. A Uus- ion should be possible. I would also expect 7 electrons in the outer shell. But if the nucleus is too short-lived to have that many electrons or react to form compounds, then it is not really a halogen! Graeme Bartlett (talk) 11:08, 4 October 2012 (UTC)[reply]
The problem with these transuranic elements is that you have to make all sorts of relativistic corrections to the usual formulae. Having seven outer shell electrons is the key thing, as I understand it, but working out what the outer shell is is a little tricky - the usual patterns that make the periodic table work start to break down. --Tango (talk) 11:28, 4 October 2012 (UTC)[reply]
Would it have to be a nonmetal to be possibly a halogen? Would it have to show a −1 oxidation state (and must that be the most common oxidation state)? If a halogen must have the −1 state as its most common, Uus would probably not be able to be counted as a halogen, as it is predicted that the +1 and +3 states would be more common. (The inert pair effect would also practically reduce the octet rule to a sextet rule, as the 7s subshell is very stabilised by relativistic effects, so Uus probably can only use 5 electrons for bonding, but would still be one electron short of a full outer shell.) Double sharp (talk) 12:26, 4 October 2012 (UTC)[reply]
Concepts like "metal" and "nonmetal" also make little sense when dealing with large transuranium elements. As noted by several people above, the entire classification system we impose on the Periodic Table (for our own purposes, as a heuristic means of understanding trends in properties) starts to break down with these larger elements. In a very simplistic sense, we would expect it to be "halogen-like", but less so than any other halogen; Astatine displays some quite non-halogen properties, including observed elemental cationic oxidation states. How much less halogen-like Uus would be is purely speculative, as you'd need enough of it to classify it empirically, which we haven't got yet, and may never get. --Jayron32 13:38, 4 October 2012 (UTC)[reply]
Just to throw my one in there: I expect it to be a brittle, semi-conducting metalloid, with a melting point in the 500/600s, with poor halogenic properties. Plasmic Physics (talk) 21:21, 4 October 2012 (UTC)[reply]
So, what exactly are the halogenic properties? What properties are necessary for an element to be a halogen? Double sharp (talk) 07:44, 5 October 2012 (UTC)[reply]
I'd say a strong oxidant (ability to oxidise water and form an acidic solution), and a diamagnetic, singlet ground state, and of course have an affinity for the -1 state. Plasmic Physics (talk) 09:03, 5 October 2012 (UTC)[reply]

Using an NPN transistor as a switch - optimal circuit

Over the years, I've come across two circuits designed to use an NPN BJT as a switch (i.e., the transistor is either off or saturated) and I'm curious as to whether or not one is superior (generally speaking).

The first type of circuit places the load and transistor in series; when the transistor is saturated, the load is connected to ground, providing a path for current and turning the load on. This circuit is ubiquitous on the internet; an example is on page 3 of this pdf.

The second type of circuit doesn't seem to be as common (at least on the internet); in its layout, the transistor and the load it controls are in parallel, so the load is off when the transistor is saturated (since, in this condition, the transistor provides a lower resistance path to ground than the load) and on when the transistor is off. An example of this type of circuit is on page 37 of this pdf.

Generally, is one of these two circuit layouts better than the other? With the second circuit, it seems possible that a small current could enter the load when the transistor is on (although in practice I imagine the load impedance would have to be quite low for this to happen), so I suppose the first circuit is more controlled in this respect.

Any thoughts on this would be appreciated. 142.20.133.132 (talk) 14:51, 4 October 2012 (UTC)[reply]

Your second citation doesn't make much sense, as there is no page 37. If you meant what's on page 2-9 Calculation of a Saturated Transistor Circuit, you have misunderstood the intent of this section; Rc is a resistance representing the load, but in accordance with analysis convention, the voltage at the collector is measured with respect to earth.
However, your question about the relative merits of series and parallel switching can still be answered. The most common case is that you want full voltage on the load when on, and zero current when off. That naturally and obviously suggests a switch in series. In the vast majority of cases, the load does not see the full voltage, as the transistor saturates with around 0.3 V across it, but if necessary (it usually isn't) you can compensate by making the supply voltage a bit bigger. The transistor isn't perfect in the off state either - a leakage current still flows. But the leakage is generally negligible in its effect on the load. Very often, the load is inductive - it's a relay coil, a motor, etc. This presents a problem: when you turn off the current in an inductance, the magnetic energy has nowhere to go, so it creates an inductive kick voltage (known as back EMF), which can easily destroy the transistor. This can be fixed by connecting a diode in parallel with the load, or (less commonly) a resistance or RC circuit, or even just specifying a transistor with a very high voltage rating, or some combination of these methods. Using the diode will slow down the relay release to some extent.
The advantage of using the transistor in parallel is that the back EMF issue cannot occur, as the magnetic energy is "dumped" in the transistor when the transistor turns on. The voltage across the transistor cannot exceed the supply voltage. Parallel operation has two major disadvantages: (1) there must be a current-limiting resistance in series with the transistor/load combination, and this resistance must dissipate significant power, making the circuit energy-inefficient; (2) because the inductance keeps current flowing in the transistor, the relay is very slow to release. However, by using a high supply voltage and a high resistance, the relay operate time can be reduced slightly without exceeding its coil power rating.
Typical times for small relays:
Series connection - operate 20 ms; release 25 ms; if a diode is used, release 50 to 300 ms
Parallel connection - operate 15 to 20 ms; release 300 to 1000 ms
Other pros and cons centre around reliability. If the diode fails open-circuit, the circuit will continue to work until the transistor has had enough of the back EMF. A technician may replace the faulty transistor but forget to replace the diode. That means the system will fail again, perhaps within hours or weeks. Correctly designed, the parallel circuit is inherently more reliable, as high transient voltages cannot occur, and there is one less semiconductor. Sometimes the application requires the system to fail safe or start safe. Failing safe or starting safe may require the load to be energised (or not energised) whenever the power is on but the input signal to the transistor is not present (as, say, a micro-controller chip hasn't booted up yet).
Keit124.182.43.236 (talk) 01:45, 5 October 2012 (UTC)[reply]
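To make the efficiency point above concrete, here is a rough Python sketch comparing the two arrangements for a hypothetical relay coil. The supply voltage, saturation voltage, coil resistance and limiting resistor are assumed example values, not figures from the posts above.

    # Rough comparison of series vs parallel transistor switching of a relay coil.
    # All component values are assumed for illustration only.
    v_supply = 12.0      # supply voltage, V
    v_sat = 0.3          # collector-emitter saturation voltage, V
    r_coil = 240.0       # relay coil resistance, ohms
    r_limit = 60.0       # current-limiting resistor for the parallel arrangement, ohms

    # Series switch, load ON (transistor saturated):
    i_series = (v_supply - v_sat) / r_coil
    p_transistor = v_sat * i_series
    print("Series, load on: %.1f mA, %.0f mW lost in the transistor" %
          (i_series * 1e3, p_transistor * 1e3))

    # Parallel switch, load OFF (transistor saturated, holding the coil near 0.3 V):
    i_parallel_off = (v_supply - v_sat) / r_limit
    p_resistor_off = i_parallel_off ** 2 * r_limit
    print("Parallel, load off: %.0f mA, %.1f W burned in the limiting resistor" %
          (i_parallel_off * 1e3, p_resistor_off))

    # Parallel switch, load ON (transistor off): the coil sees a divider with r_limit.
    v_coil_on = v_supply * r_coil / (r_coil + r_limit)
    print("Parallel, load on: coil gets only %.1f V of the %.0f V supply" %
          (v_coil_on, v_supply))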
Thanks for the information; that's quite a few interesting factors. I've only ever connected transistors and their controlled loads in series, so it's good to know about the benefits and disadvantages of a parallel layout. (Also, next time I'll be sure to provide both the document page number and the PDF page number when the two differ!) 142.20.133.132 (talk) 15:06, 5 October 2012 (UTC)[reply]

What constitutes a bacterial species?

If a species is defined by the ability of individuals in a population to reproduce with each other or amongst themselves, and species evolve over time, then what constitutes a bacterial species? Will a bacterial species keep its name or change its name when it evolves? Can a bacterial species die out? If it dies out but its genes get integrated into a different species somehow, then has a new species evolved, or has the living species evolved while the dead species is simply dead? 140.254.226.206 (talk) 16:19, 4 October 2012 (UTC)[reply]

A certain measure of genetic difference would be one way to distinguish between species with asexual reproduction. So, after it evolves enough, yes, it becomes a new species and gets a new name. Such species can also die out completely. If a new species evolves as the old one dies out, that situation is no different than sexually reproducing species. StuRat (talk) 16:30, 4 October 2012 (UTC)[reply]
The OP would do well to read the article titled Species problem. --Jayron32 20:20, 4 October 2012 (UTC)[reply]
It's probably mentioned in species problem, but I found ring species particularly fascinating when I learned about it. Vespine (talk) 00:41, 5 October 2012 (UTC)[reply]
The species problem article doesn't even discuss the most serious issue that arises with bacteria, which is that they can swap DNA with each other, including, on occasion, with types of bacteria that are quite dissimilar to them. Some biologists have argued, at least semi-seriously, that all bacteria ought to be considered as one single megaspecies. Looie496 (talk) 04:25, 5 October 2012 (UTC)[reply]
Horizontal gene transfer is not limited to bacteria. So unless you're going to take that argument all the way...Someguy1221 (talk) 04:32, 5 October 2012 (UTC)[reply]
As a rule of thumb, two bacteria are generally considered to be in the same species if their 16S rRNA are at least 99% identical. Two bacteria are often in the same genus if the 16S rRNA are at least 94% identical. An official declaration of a new species / genus requires more detail than that, and not all species / genera boundaries meet those criteria, but it is a rough guide that is easy to use because 16S rRNA is universal in bacteria, highly conserved, and frequently used as a means to identify particular bacteria. Dragons flight (talk) 01:22, 5 October 2012 (UTC)[reply]
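As a toy illustration of that rule of thumb, here is a Python sketch that computes percent identity between two already-aligned sequences and applies the 99% / 94% cut-offs. The sequences are invented; real comparisons use full-length 16S rRNA genes and proper alignment tools, so treat this only as a sketch of the arithmetic.

    def percent_identity(a, b):
        """Percent identity of two pre-aligned, equal-length sequences."""
        assert len(a) == len(b)
        matches = sum(1 for x, y in zip(a, b) if x == y)
        return 100.0 * matches / len(a)

    # Invented toy fragments, already aligned.
    seq1 = "ACGTACGTACGTACGTACGT"
    seq2 = "ACGTACGTACGTACGAACGT"

    pid = percent_identity(seq1, seq2)
    print("%.1f%% identical" % pid)          # 95.0% for these toy sequences

    if pid >= 99:
        print("rule of thumb: likely the same species")
    elif pid >= 94:
        print("rule of thumb: likely the same genus")
    else:
        print("rule of thumb: probably different genera")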

SSC and Higgs

If the Superconducting Super Collider had been built, would it have been able to detect the Higgs particle? Bubba73 You talkin' to me? 18:21, 4 October 2012 (UTC)[reply]

Presumably, as the SSC's designed energy level was substantially higher than the LHC's (20 TeV to 7 TeV, for protons), and the gain in energy level over Tevatron was a primary reason that the LHC was able to gather its Higgs-related data. — Lomn 18:39, 4 October 2012 (UTC)[reply]
Resolved
Thanks. Bubba73 You talkin' to me? 19:09, 4 October 2012 (UTC)[reply]

We could have built that. Bubba73 You talkin' to me? 15:50, 5 October 2012 (UTC)[reply]

What are those spidery black things on Mars?

See this article. Is there a name for these objects, and what sources do we have on them? Thanks. μηδείς (talk) 18:35, 4 October 2012 (UTC)[reply]

The Spiders from Mars? :-) Bubba73 You talkin' to me? 19:10, 4 October 2012 (UTC)[reply]
I think they are usually just called dark dune spots (I don't really endorse the redirect to Martian geyser). There are plenty of people working on Martian dunes so you can find quite a lot of journal articles. Sean.hoyland - talk 19:18, 4 October 2012 (UTC)[reply]
If there are people working on the Martian dunes, that's a major news story. ←Baseball Bugs What's up, Doc? carrots13:09, 5 October 2012 (UTC)[reply]
Great! μηδείς (talk) 21:54, 4 October 2012 (UTC)[reply]
Resolved

Segmented sleep and animals

Segmented sleep makes the case that before the Industrial Revolution, people would sleep in 2 phases, being awake in the middle of the night. Unfortunately, the amount of research is not overwhelming but some examples sound convincing. I'm wondering if the thesis is right. If so, I'd like to know if animals, in particular primates, also wake up during the night. Joepnl (talk) 21:39, 4 October 2012 (UTC)[reply]

A lot of animals are crepuscular, active in low light and inactive mid-day and mid-night. If I am not tired and go to bed early I sleep segmentedly. I am not a fan of it. I prefer a solid 8 hours. μηδείς (talk) 21:53, 4 October 2012 (UTC)[reply]
My gut reaction is that it is no accident that humans (and other primates) have rods that can make out the landscape by moonlight, and that in times past moonlight would have been expected to have a very direct and practical effect on behavior. I have a suspicion that things like "harvest moon" and "hunter's moon" are more than just poetic phrases, and the articles appear to support that. So I would expect people to be biologically adapted to making a fluid response to the changing lunar cycle. Doing a quick search for primates and moonlight I found [28], which looks like a useful starting point. Wnt (talk) 17:18, 5 October 2012 (UTC)[reply]

October 5

Gibbs free energy change of the hydration of a gas

The hydration energy of the gas is +8.4 kJ/mol, which gives a K of 0.0337 at 298K. How am I supposed to calculate the concentration from the partial pressure of the gas? The equilibrium constant of the hydration reaction is in units of pressure, but 0.0337 is dimensionless. — Preceding unsigned comment added by 71.207.151.227 (talk) 01:32, 5 October 2012 (UTC)[reply]

We'd need more information. The problem must have given you more data, and we'd need that to help you solve the problem. K is, to a first approximation, the ratio of partial pressures. This could be solvable with something like the total pressure. If you can tell us the entire problem, as it is written, perhaps we can help and see where you are being tripped up. --Jayron32 02:16, 5 October 2012 (UTC)[reply]
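As a side note, the K quoted in the question does follow from the stated free energy change via K = exp(−ΔG°/RT). A minimal Python check of that arithmetic (it only reproduces the 0.0337; it says nothing about the units/partial-pressure issue being asked about):

    import math

    delta_g = 8.4e3        # J/mol, from the question
    R = 8.314              # J/(mol*K)
    T = 298.0              # K

    K = math.exp(-delta_g / (R * T))
    print(K)               # ~0.0337, matching the value quoted above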

Is there a theoretical frames-per-second limit to capturing motion?

This is probably a seriously stupid question, but for some reason, it's bugging me. You know that infinity paradox thing (that's the scientific term) for walking across a room, where in order to get there, you have to cross over the halfway point, but before you do that, you have to reach half of half, but before that, you have to reach half of half of half, so forth, down to the molecular level. Obviously, we can walk across a room just fine. So at some point, there really must be some bridge between one side of the half, and the other. If that were somehow filmed with a theoretical highspeed camera, like billions to trillions—maybe more—of frames per second, would we be recording motion on a completely incomprehensible scale? See, I don't even know how to explain it. And the answer is probably just a "no". If I were to film a bullet fired at a wall at a theoretical speed, would I simply end up watching a film (projected at normal speed) of a bullet in perfect stasis for days/weeks/years? Or would we actually see something else? I want to delete this question lol. – Kerαunoςcopiagalaxies 08:57, 5 October 2012 (UTC)[reply]

You may be interested in the Planck length and Planck time articles. I don't really understand the concepts myself, but it seems there is a minimum length and time for everything that cannot be split into 2 smaller lengths or times. --85.119.27.27 (talk) 09:22, 5 October 2012 (UTC)[reply]
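For a sense of scale (and very loosely, since the Planck time is a natural unit rather than a proven "shortest possible frame"), a small Python sketch computing the Planck time from standard constants and the frame rate it would nominally correspond to:

    import math

    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
    c = 299792458.0          # speed of light, m/s

    t_planck = math.sqrt(hbar * G / c**5)
    print("Planck time: %.3e s" % t_planck)                        # ~5.39e-44 s
    print("Equivalent 'frame rate': %.3e fps" % (1 / t_planck))    # ~1.9e43 fps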
For the record, the 'infinity paradox thing' is Zeno's dichotomy paradox. AndrewWTaylor (talk) 11:02, 5 October 2012 (UTC)[reply]
As a practical matter, in extreme slo-mo the individual frames tend to be rather dimly lit. ←Baseball Bugs What's up, Doc? carrots12:32, 5 October 2012 (UTC)[reply]
If you have eleven minutes to spare, there is a TED video here that demos a trillion frames per second camera - showing a light pulse like a regular high speed camera shows a bullet. They get around the dimming problem mentioned above by imaging the same scene many times. 88.112.36.91 (talk) 12:55, 5 October 2012 (UTC)[reply]
With conventional film, the shorter the exposure time, the more light is needed for each exposure, along with having a film that is designed for short exposure times. There are extreme high-speed cameras in use or in development for phenomena such as lightning strikes - to see how the lightning originates and how it travels. Those types of cameras use lots of individual cameras taking well-timed individual pictures. This is basically taking the Eadward Muybridge approach to an extreme. ←Baseball Bugs What's up, Doc? carrots13:07, 5 October 2012 (UTC)[reply]
Eadweard Muybridge. hydnjo (talk) 15:09, 5 October 2012 (UTC)[reply]
More to the point than Muybridge (not very many frames per second in his original work that I've seen), see Harold Eugene Edgerton, who did amazingly creative and innovative work in high speed motion pictures, using high speed strobes to slow things down. Edison (talk) 21:37, 5 October 2012 (UTC)[reply]

Thank you guys for all the responses! Here I thought I was asking a dumb question and got some seriously cool answers. Planck length, Planck time, Zeno's dichotomy paradox, and Raskar's TED talk actually answer my question! Which is, according to the Planck length article, most likely unknowable—but I didn't even know that "length" had even been given a name. (As an aside, I'd previously watched two other of Raskar's videos on his "around the corner camera" and this is the first video that actually showed the results. So that was a frustration finally settled.) Thank you!! – Kerαunoςcopiagalaxies 22:50, 5 October 2012 (UTC)[reply]

I think there is a simpler answer. Light consists of individual photons. Once your frames are so short that each frame records either zero or one photons, there is nothing to be gained by making them shorter. That might seem bizarre, but it actually comes into play in real life when recording very-low-intensity light. Looie496 (talk) 02:34, 6 October 2012 (UTC)[reply]
Sure, it's a real problem for scientific instruments recording either very fast, very small, or very dim events. So-called shot noise is the inherent noise - variation - introduced in measurements or images when you try to collect data or take pictures when you just don't have enough photons to play with. TenOfAllTrades(talk) 03:30, 6 October 2012 (UTC)[reply]
Looie496, that's a great point, although if motion is still occurring between the photons, then my question still stands, just not in regards to being visible, I suppose. The answer I really was looking for was the planck distance. But I appreciate your reply because it's completely true and I hadn't thought of it that way. (I was sort of ignoring the whole "faster fps = dimmer exposure" conversation because I didn't really mean to go in that direction. On the other hand, I had no idea about shot noise, either, so TenOfAllTrades, thanks for that point, and I promise to never be bad and close an answer ever again! :D Seriously, the last two responses may have never happened, so I definitely see everyone's point.) – Kerαunoςcopiagalaxies 07:27, 7 October 2012 (UTC)[reply]

alcohol

Is a 40% 1 oz shot of whiskey stronger than, the same as, or weaker than 1 beer with 5% alcohol? Also, is a "standard" shot 1 or 1 1/2 oz? If someone uses a 1 1/2 oz shot, is that stronger than a beer or equivalent to it? --Wrk678 (talk) 11:08, 5 October 2012 (UTC)[reply]

Assuming that by "stronger" you mean "contains more alcohol", we'd need to know the amount of beer. Obviously 1oz of 40% abv whiskey contains as much alcohol as 8oz of 5% abv beer. (assuming the specific gravity of both is not significantly different to that of water). Rojomoke (talk) 12:22, 5 October 2012 (UTC)[reply]
Here in the UK, the smallest beer until recently was the half, which is 10oz. On that basis, even a short beer has more alcohol than a single whisky. Nowadays some pubs offer a smaller beer, a third, which is just under 7oz, and therefore would be less alcohol (at 5%) than the single whisky. However, shots here are not fluid ounces, but either 25ml (single), 40ml (large) or 50ml (double). A fluid ounce is just shy of 30ml. So - assuming a standard Scotch at 40% and a European medium-strong lager at 5% - the amounts of alcohol in order are:
1/3 pint lager (UK pints) - 9.5 ml
UK single whisky - 10 ml
1/2 pint lager (US pints) (= 8oz) - 11.4 ml
US single whisky (oz) - 11.6 ml
1/2 pint lager (UK pints) - 14.2 ml
UK large whisky - 16 ml
US large whisky (1.5oz) - 17.4 ml
UK double whisky - 20 ml
1 pint lager (US pints) - 22.7 ml
1 pint lager (UK pints) - 28.4 ml
AlexTiefling (talk) 13:23, 5 October 2012 (UTC)[reply]
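The figures in the list above all come from the same simple calculation: serving volume times ABV. A short Python sketch reproducing a few of them, using the rough 1 fl oz ≈ 29 ml conversion and the 568 ml UK pint from the discussion:

    ML_PER_FL_OZ = 29.0      # rough conversion used in the thread above
    UK_PINT_ML = 568.0

    def alcohol_ml(volume_ml, abv):
        """Millilitres of pure alcohol in a drink of the given volume and strength."""
        return volume_ml * abv

    drinks = [
        ("UK single whisky (25 ml, 40%)",   25.0,               0.40),
        ("US single whisky (1.5 oz, 40%)",  1.5 * ML_PER_FL_OZ, 0.40),
        ("12 oz beer (5%)",                 12 * ML_PER_FL_OZ,  0.05),
        ("UK pint of lager (5%)",           UK_PINT_ML,         0.05),
    ]

    for name, vol, abv in drinks:
        print("%-35s %.1f ml alcohol" % (name, alcohol_ml(vol, abv)))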
Point of order: most "shots" are 1.5 ounces, as measured by the Jigger. If I got served only 1 ounce of whiskey in a neat shot, I'd think I was being shorted. --Jayron32 17:15, 5 October 2012 (UTC)[reply]
If our article is correct, you may very well often get 'shorted' in much of the US besides Utah. Nil Einne (talk) 09:53, 6 October 2012 (UTC)[reply]


I am referring to a 12 oz beer. --Wrk678 (talk) 07:42, 6 October 2012 (UTC)[reply]

This is pretty basic maths, I hope you realise. As it happens, a 12 oz beer contains 17.4 ml alcohol, identical to the 1 1/2 oz whisky. AlexTiefling (talk) 10:26, 6 October 2012 (UTC)[reply]


Isn't 1 1/2 ounces of whiskey 40 ml, not 17.4 ml? --Wrk678 (talk) 14:49, 6 October 2012 (UTC)[reply]

Whiskey is not pure alcohol--usually under 50%. μηδείς (talk) 16:56, 6 October 2012 (UTC)[reply]
Wrk678, you yourself said in your original post that you were taking the whiskey to be 40%, which is a good estimate. Indeed, many whiskies are specifically balanced at 40%. And 1 1/2 oz isn't exactly 40ml of anything - I've been using 1oz = 29ml as a close approximation, since US and UK fl oz are both close to that value, but not identical to it or each other. Thus 1 1/2 oz is approximately (29x1.5) = 43.5ml. And 40% of 43.5 is 17.4. I'm happy to help further, but please at least read what everyone here has written including yourself before asking. Thanks. AlexTiefling (talk) 07:42, 7 October 2012 (UTC)[reply]

Scurvy and scars

I was contemplating scurvy, and specifically the well-publicised symptom whereby old, long-healed wounds reopen when the sufferer is severely deficient in Vitamin C. I was trying to find out the exact mechanism whereby this happens, but without luck. I suspect it has something to do with the composition of the wound tissue - our article Scar says that scar collagen is not laid down in the "random basketweave formation ... found in normal tissue, but "forms a pronounced alignment in a single direction" and is "usually of inferior functional quality to the normal collagen randomised alignment". And I know that collagen has to be replaced regularly, and Vitamin C is essential to its formation. But how does this work in practice? Does all a sufferer's tissue deteriorate, and scars just split - or even dissolve - first because of their "inferior functional quality". Or does scar tissue need more renewal than normal tissue and thus shows signs of damage earlier? Or what? Plus, some sources mention reopening of old fractures as another scurvy symptom, but Scar suggests that bone, unlike soft tissue, heals "without any structural or functional deterioration." So do old breaks recur with scurvy, and if so, what is the mechanism? Thanks. - Karenjc 14:00, 5 October 2012 (UTC)[reply]

I found one paper about it (Disruption of healed scars in scurvy -- the result of a disequilibrium in collagen metabolism. Cohen IK, Keiser HR.; Plast Reconstr Surg. 1976 Feb;57(2):213-5.), but only the abstract is available, and pretty short:
Old scars break open in scorbutic patients because
  • (1) the rate of collagen degradation is greater in an old scar than it is in normal skin, and
  • (2) the rate of collagen synthesis is diminished throughout the body in ascorbate deficiency. Ssscienccce (talk) 20:48, 5 October 2012 (UTC)[reply]
I was able to access that paper, and ironically, it's actually a rebuttal to the theory. Well, not the theory itself, but the methods used by those who came up with it. Someguy1221 (talk) 21:03, 5 October 2012 (UTC)[reply]
  • This is an intriguing question. It makes me wonder, if scurvy can break apart scars entirely, is it possible to reduce them and replace them with more normal tissue? To take advantage of this, some topical compound would be desired, so I cast about for an "ascorbate antagonist" and came up with ethyl-3,4-dihydroxybutyrate, which antagonizes it at prolyl hydroxylase. Turns out that if prolyl hydroxylase is inhibited, the collagen chains don't form triple helices and get degraded, and apparently this has some effects on cell differentiation and morphogenesis... [29][30][31] However, I didn't find any hits for EDHB and "scar" in a quick search. Certainly collagen deposition is an end point in scarring [32], a lot of things upstream of collagen are involved in scarring, and the collagen receptor DDR1 has a role in it, but I haven't yet pulled out whether you can actually inhibit the scar by inhibiting the collagen, e.g. by genetic means, and at least one collagen causes chronic scarring if knocked out genetically. I should look at this more; I just took one poke at the top of a very big pile of papers about this stuff. Wnt (talk) 02:44, 6 October 2012 (UTC)[reply]
Thanks, all, replies appreciated. As for the bone part of my question, Bone healing plus some other sources lead me to think that the long interval between breakage and "full repair", where remodelling has occurred and lamellar bone is restored, gives a window of some years when the healed fracture is still significantly more vulnerable than the surrounding bone to collagen degradation. And if there had been inadequate immobilisation in the early weeks leading to fibrous repair, or poor nutrition during the longer window, the site could well remain more than usually vulnerable to rebreakage due to scurvy. - Karenjc 18:00, 6 October 2012 (UTC)[reply]

Holographic displays

At User_talk:Jimbo_Wales#iPhones_and_editing, our respected founder made the perfectly reasonable comment that editing Wikipedia from a phone would always be difficult due to the small size. But it makes me wonder...


If the sole required accomplishment of a computer-generated holography system or other head-up display is to give you the image of a large, flat, decent-resolution computer screen a foot and a half from your face, despite the fact that it's projected on some little patch near your eye, how far is that from feasibility? I see from the latter article that what (according to a sympathetic news article) sounds very similar to this is already available for what seems like a niche market of swimmers looking at their lap times.[33] So why don't phones yet have this accessory, so that the entire phone can be used as a keypad and so Wikipedia could be viewed and edited from one pretty much normally? Wnt (talk) 17:09, 5 October 2012 (UTC)[reply]

Does the Nintendo 3DS do what you're asking? If so, it is available on at least one handheld device. --Jayron32 17:13, 5 October 2012 (UTC)[reply]
John Carmack talks about the practicalities of this, among a bunch of related topics, in his 2012 QuakeCon keynote. It's very long, but it's all worthwhile. -- Finlay McWalterTalk 17:31, 5 October 2012 (UTC)[reply]
One thing to note, at least for the "project it onto your eye" case, is that the image will never appear larger than the display, simply due to optics. The technology could still be used to provide a private display that is only readable by the person targeted, but it can never simulate a larger display. See Virtual retinal display. 209.131.76.183 (talk) 17:43, 5 October 2012 (UTC)[reply]
Thanks - I realize now that the reason I wasn't figuring out an answer is that my question didn't make much sense - there's no real need for a holographic technology simply to see a screen; for example you could do it with a very high res mini display and a strong contact lens. I suppose the virtual retinal display works the same way, but reverses the perspective of the focusing/scanning to minimize the equipment involved. A true hologram would allow two people to see the same apparent object, but it would inevitably be small like the "phone" then which doesn't address the main problem. Some quick scanning of the Carmack link turns up stuff around 1:12 about the display (VRD at 1:32, focus/contact lenses at 1:38, his notion of "hyperfocusing" though is bull I think, because he's neglecting the phases of the light; the comments about the Palmer kit at 1:42, $500 with distortion, the practical difficulties in head tracking, sort of explains why I don't see this on the shelf!); apparently moving the display with your head is undesirable. I suppose there's some way to measure head motions and move the projection to compensate, giving it an illusion of reality; seems like the VRD must have to do the same for eye motions because people would never put up with not being able to move their eyes to look at something. Wnt (talk) 18:21, 5 October 2012 (UTC)[reply]
I think there's also a different consideration. On my Android phone, if you use the phone in landscape, the virtual keyboard already takes up most of the screen. Typing is still very difficult compared to a real keyboard. While my phone is a fairly small one by modern standards (3.2" screen) and a large one like a 4.7" will definitely improve matters a fair amount, it will still be a lot more difficult than typing with a real keyboard for size reasons alone. The lack of tactile feedback is of course another problem (there are plans for haptic feedback or something else to try to counter this, but these still seem a while away). Even with a larger touchscreen like an iPad's this issue remains (I can say from experience). In fact some people prefer a split iPad virtual keyboard as they find it easier to type with (using thumbs). Remember also that a physical keyboard is fairly landscape in shape (not including the cursor keys, numpad etc), which is why most virtual keyboards are fairly landscape too and, when used with a phone or tablet, even a widescreen one, tend to still have space above or below (or both). And while the experience of typing on a phone or tablet isn't the same, it's similar enough that most people find it a lot easier to just stick with a layout fairly similar to the normal one, at least for the letters/QWERTY. In other words, your assumption that using the entire phone as the keypad is somehow going to make things a lot easier is likely flawed; the phone is simply too small, amongst other issues. (There are of course plenty of issues beyond simple typing, particularly when needing to deal with markup or to edit what you've already typed, and using landscape with the keyboard active may make things worse in this regard. But I think the typing issues are enough for first consideration.) I haven't viewed any of the above links, but most sci-fi ideas tend to involve not just some sort of holographic projection or retinal display but a projected or completely virtual keyboard, so you aren't limited to the size of your phone or whatever. Nil Einne (talk) 18:48, 5 October 2012 (UTC)[reply]
The keypad issue is serious, but I don't see any obvious reason why there wouldn't be a way to change the shape of the surface enough to provide tactile feedback, even on a dynamic basis. (I'd think someone should come up with a decent "display monitor" for the blind, at which point it could be adapted... off the top of my head, I think of either small projecting pins or else microfluidics and ampullae.) More fundamentally, I would think someone should have invented a combinatorial alternative to QWERTY already. Suppose five fingers on each hand (no thumb semantics...), so pressing one finger from each hand gets you one of 25 letters. Pressing any four fingers from a well-chosen subset should provide all the extra letters with minimal risk of missed letters. Of course, this implies that the software can tell which finger is which, a creepy notion - one of those perennial questions I've been meaning to ask here is if anyone ever caught Synaptics uploading fingerprint databases to Unknown Agencies, but my feeling is if the NSA doesn't have a full set of every finger put on a laptop keypad in the past 10 years I should eat my hat. Wnt (talk) 19:10, 5 October 2012 (UTC)[reply]
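As a toy illustration of the combinatorial idea (entirely hypothetical: one finger per hand selects one of 5 × 5 = 25 slots, the finger labels L1..L5 / R1..R5 are made up, and the letter assignment is arbitrary rather than a proposed layout), a short Python sketch:

    import string

    # Toy chord keyboard: one finger on the left hand plus one on the right
    # selects one of 5 * 5 = 25 slots. The letter layout here is arbitrary.
    LEFT = ["L1", "L2", "L3", "L4", "L5"]
    RIGHT = ["R1", "R2", "R3", "R4", "R5"]

    letters = string.ascii_lowercase.replace("z", "")  # 25 letters; 'z' would need a 4-finger chord
    chord_map = {
        (l, r): letters[i * 5 + j]
        for i, l in enumerate(LEFT)
        for j, r in enumerate(RIGHT)
    }

    def type_chords(chords):
        """Translate a sequence of (left, right) finger pairs into text."""
        return "".join(chord_map[c] for c in chords)

    print(type_chords([("L1", "R1"), ("L1", "R5"), ("L3", "R2")]))  # prints "ael"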
There are tactile displays and chord keyboards. The issue with nonstandard keyboard layouts is getting people to learn to use them. I don't know what's holding back tactile displays, but it's easy to guess (cheap mass production and making them transparent, for starters). -- BenRG (talk) 23:50, 5 October 2012 (UTC)[reply]
Yes, in case it wasn't clear what I meant: while you could likely develop something better taking advantage of the whole screen (and there are likely better alternatives out there), this is unlikely to succeed because few people would use it. Even despite the differences and problems, if you're a decent typist the existing knowledge greatly helps in using the virtual keyboard. Getting people to switch to something else is difficult at best. Even the sliding Swype, despite its alleged advantages, seems to be less popular than the tapping mode and stuff like SwiftKey, because most people just find it too annoying to learn to slide. And without taking a side in the great Dvorak Simplified Keyboard debate, I think even most of those who argue it isn't better than QWERTY don't deny that even if it were better, many people wouldn't switch simply because they don't want to have to relearn. Nil Einne (talk) 05:14, 6 October 2012 (UTC)[reply]
P.S. I should add that even with a 4.7" phone, and with whatever keyboard design/layout you develop, and presuming the user is willing to learn it, it's hard to imagine it'll ever be as fast as a larger keyboard. One possibility is that QWERTY is really so bad that with your fancy layout it'll be faster, but this seems unlikely. The more likely possibility is that your design will be fast enough for most purposes. However, even that being the case, there's still the editing problems I alluded to earlier. (In fact, while perhaps I didn't make this clear, particularly as I was commenting without refs, it's likely to be the more significant problem.) This is still a rapidly developing interface area and undoubtedly things will get better, but the fact of the matter is that no matter how big your virtual display is, if your input device is still the size of the phone screen it's difficult to imagine it'll ever be that easy. Nil Einne (talk) 05:33, 6 October 2012 (UTC)[reply]
Here's my idea:
1) High-res display glasses (at least 1920x1024 per eye) to provide the display.
2) A pair of VR gloves with tactile feedback on the fingertips.
3) Virtual reality software which will provide a full-sized virtual keyboard on any hard surface, so you can type on it and feel key-clicks as you type.
4) Tie it all together with Bluetooth. StuRat (talk) 05:13, 6 October 2012 (UTC)[reply]

Can visible light induce an electric current in an antenna?

As visible light is also electromagnetic radiation, can it not induce an electric current in a properly oriented antenna, just like microwaves do? I am kind of aware that the antenna should be approximately the same size as (or comparable to) the wavelength of the radiation. If this is the problem, and if we make microantennas (on the order of micrometers), can we generate electric energy from visible light (I am not talking about the photovoltaic effect)? - WikiCheng | Talk 17:50, 5 October 2012 (UTC)[reply]

Yes, in principle one can make antennas that respond to light. However, the feature size of the wires and other components becomes extremely small (e.g. 10 nm) since the size of the entire antenna needs to be comparable to the wavelength of light (e.g. hundreds of nm). Such things are possible with current technology, but still difficult and the resulting antennas are currently only of real interest as a research tool. Each antenna captures only a tiny amount of energy, and so you would need huge arrays for energy generation. All in all, they are simply way too expensive to compete with other light-to-electricity power technologies. For some details, try [34]. Dragons flight (talk) 18:14, 5 October 2012 (UTC)[reply]
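As a rough scale check on those numbers (a sketch only, assuming green light of around 500 nm; not a design for any actual device):

    # Rough scale check for an optical antenna (assumed values, not a design).
    c = 3.0e8            # speed of light, m/s
    wavelength = 500e-9  # green light, roughly 500 nm
    freq = c / wavelength
    half_wave_dipole = wavelength / 2

    print(f"frequency: {freq/1e12:.0f} THz")                          # ~600 THz
    print(f"half-wave dipole length: {half_wave_dipole*1e9:.0f} nm")  # ~250 nm

A whole antenna element of ~250 nm, with junctions and gaps an order of magnitude smaller, is consistent with the ~10 nm feature sizes mentioned above.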
Light-frequency EM waves will not travel over conductors suitable for much lower frequencies. If I build an antenna to capture some frequency of electromagnetic radiation, say 800 MHz, then the coax or lead-in wires will carry electric current of that same frequency to a receiver. If the antenna picks up much higher frequency microwave radiation, then a dish could focus the energy on a waveguide which could carry it to a receiver. A coax would work well to carry microwave-frequency EM radiation. "Light" is just EM radiation of a much higher frequency and much shorter wavelength. The question seems to imply that an "antenna for light" would send down from the antenna "electric current" which is not "light." If I used a parabolic reflector or a convex lens to collect light and focus it on a fiber optic cable, a "light pipe" or even a glass rod and convey it down from the collector, it might lose some of its properties, but wouldn't that amount to what the OP requests? That is basically what a telescope does, receiving EM radiation in the 405 THz to 790 THz frequency range. Mirrors and "beam combiners" can be used to combine the signal from multiple telescope mirrors, like combining the multiple units in some V antennas. You just can't make light become electrical current of a much lower frequency which will be picked up by wire antenna elements of a practical size and then be made to travel down conventional antenna wire or coax like a radio or TV signal. Edison (talk) 21:11, 5 October 2012 (UTC)[reply]
You can, however, take advantage of the photoelectric effect, which is a totally different physical process, to convert light-frequency electromagnetic radiation in to an electromotive force, and therefore drive a current down a conductor. The incident photon frees an electron from certain types of material, and a signal can propagate through an attached electrical circuit. The photoelectric process relies on properties of atomic physics, though, and is unlike the ordinary induction of electric current in a radio-frequency antenna. Nimur (talk) 22:16, 5 October 2012 (UTC)[reply]

The interesting thing here is that you could, in principle, store the electromagnetic fields detected by each antenna as a function of time. So you would have a visible-light telescope that detects light coherently. All the information about objects in any arbitrary direction would be stored this way. To look at some position in the sky at some time in the past, you just have to access the memory and add up the detected fields with the right phase shifts. Count Iblis (talk) 15:42, 6 October 2012 (UTC)[reply]

Absolutely, yes. But if you work out the physics and mathematics for resolution of an image, you will probably find that your device has similar physical dimensions and properties to a camera. There has been an immense amount of theoretical and applied research into the subject of wave-field imaging, and into the application of the imaging condition to coherently-sampled time-history measurements. For example, a radio telescope array can be used to synthetically image radio waves; the same algorithm has common application in synthetic aperture RADAR, using a differently-shaped antenna. In a similar way, SONAR can be used to generate an image using coherently-sampled acoustic wave fields. The hand-held medical ultrasound imager uses one (or more) sensors, and performs coherent sampling with multiple samples collected at different times, to generate a single "snapshot" image of a medical subject. The capability of modern computers to fully analyze a three-dimensional wave field has been increasing steadily over the last few decades, so as practical implementation problems are solved, we are getting closer and closer to theoretical limitations governed by wave mechanics. At the end of the day, you can't construct an image if you can't physically resolve the waveform data - which is governed by the mathematics of sampling. Nimur (talk) 17:49, 6 October 2012 (UTC)[reply]
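For the curious, here is a minimal sketch of the "store the fields, then add them up with the right phase shifts" (delay-and-sum) idea described above. The numbers are made up: a 1-D line of receivers, a single plane wave, and a radio-range test frequency rather than optical light so the figures stay readable. It illustrates the principle, not any particular instrument's algorithm.

    import numpy as np

    # Delay-and-sum beamforming sketch: record the field's phase at each
    # receiver, then scan candidate directions, undo the expected phase
    # shifts, and sum coherently. The true direction adds constructively.
    c = 3.0e8                                  # propagation speed, m/s
    f = 1.0e9                                  # 1 GHz test tone (assumed)
    lam = c / f

    n_rx = 32
    rx_x = np.arange(n_rx) * lam / 2.0         # half-wavelength element spacing

    true_theta = np.deg2rad(20.0)              # actual arrival direction

    # What each receiver records: extra path length for element k is rx_x[k]*sin(theta).
    recorded = np.exp(-2j * np.pi * rx_x * np.sin(true_theta) / lam)

    # Imaging step: steer to each candidate direction and sum.
    scan = np.deg2rad(np.linspace(-60.0, 60.0, 481))
    steer = np.exp(2j * np.pi * np.outer(np.sin(scan), rx_x) / lam)
    beam_power = np.abs(steer @ recorded) ** 2

    best = np.rad2deg(scan[np.argmax(beam_power)])
    print(f"beam power peaks at {best:.2f} deg (true direction: 20.00 deg)")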

12.5 million pixels

Why doesn't wiki software like PNG files with more than 12.5 million pixels? Whoop whoop pull up Bitching Betty | Averted crashes 22:55, 5 October 2012 (UTC)[reply]

It requires too much RAM to resize them under the current infrastructure. Dragons flight (talk) 23:34, 5 October 2012 (UTC)[reply]
See also bugzilla:9497. PrimeHunter (talk) 19:51, 6 October 2012 (UTC)[reply]

October 6

Pressure of a gas

I can visualize pressure inside a liquid pretty easily: water molecules are touching each other and exerting forces on their neighbors. But I'm a little confused about how pressure in a gas works. The gas molecules mostly just pass by one another, so when people speak of a force by one part of the gas on another, are they speaking about the occasional collision between gas molecules or about some sort of momentum flux due to gas molecules moving through an imaginary surface? 74.15.136.9 (talk) 03:05, 6 October 2012 (UTC)[reply]

It doesn't matter much whether the gas molecules collide or not, though I think the mean free path in typical gases may be shorter than you suppose. Even if they never collided with each other, they'd still collide with the walls of the container (otherwise they'd keep on going and not stay in the container), and thereby exert force on it. Or, with no container, if you have a solid object immersed in the gas, the same arguments apply — the gas molecules collide with the solid object, bounce off it, and thereby exert force on it/transfer momentum to it. --Trovatore (talk) 03:08, 6 October 2012 (UTC)[reply]
Oh, actually I missed that you said "one part of the gas on another". In that case, I suppose it does matter (if the gas molecules really passed freely through the gas at large, they'd just keep moving through it until they hit a solid boundary, and then it wouldn't make much sense to talk about "one part of the gas"). But the article I linked says that under standard conditions (one atmosphere; not sure whether 0 or 25 degrees Celsius) the mean free path is only 68 nanometers. --Trovatore (talk) 03:13, 6 October 2012 (UTC)[reply]
In a cube of 68 nm, there would be (I think) (68*10^-9)^3 * 6.025*10^23 * 1000/22.4 = 8457 molecules. But that's if it was 0°C. If that gives any idea... Ssscienccce (talk) 04:23, 6 October 2012 (UTC)[reply]
Not sure just what your point is, but I guess that is a good intuitive guide to what's going on, assuming your figures are right. The cube root of that is about 20, so roughly speaking you're saying a molecule would pass 20 other molecules before bumping into one. --Trovatore (talk) 05:39, 6 October 2012 (UTC)[reply]
Just to get an idea, as you say; maybe I'm a bit obsessive-compulsive when it comes to calculations, judging by the amount of scratch paper riddled with numbers (if that's the right expression) going in the bin every day. Ssscienccce (talk) 07:53, 6 October 2012 (UTC)[reply]
The molecules in a gas do collide with each other, which allows the gas to undergo things like laminar flow and turbulent flow and allows pressure to equilibrate throughout the gas. In situations where the gas molecules do collide with the walls more frequently than with each other (such as small pores), Knudsen diffusion occurs, and the macroscopic notion of pressure breaks down somewhat.--Wikimedes (talk) 07:18, 6 October 2012 (UTC)[reply]
You can work out from the gas laws and kinetic equations and so forth (we did it in high school, I can't give the exact derivation from memory) that on average an individual oxygen or nitrogen molecule in the atmosphere at room temperature travels about 10cm before it hits another air molecule (mean free path mentioned above) and does so while travelling at 1,000 kmph. They may be small, but they are fast, and there's a lot of them. Grokking that should help your intuition. μηδείς (talk) 16:54, 6 October 2012 (UTC)[reply]
I think you're off by about seven orders of magnitude. Maybe you meant nm instead of cm? --Trovatore (talk) 17:48, 6 October 2012 (UTC)[reply]
I may certainly be wrong, I am going on memory from the 80's. But I did mean to say 10cm. Nanometers sounds way too small. I'll see if I can google a result. Hmm, yes, our own article says 68nm. I wonder if I am just remembering the number wrong, or if I have some other result in mind. Well, that should also put my 1,000kmph figure to doubt. μηδείς (talk) 17:52, 6 October 2012 (UTC)[reply]
Physical Chemistry by Atkins 6th edition p.30 gives a typical mean free path for N2 at 1 atm to be about 70nm. It also states that for N2 or O2 at 25C and 1atm the molecules travel at about 350m/s which is 1260km/h, so Medeis' speed is about right.--Wikimedes (talk) 18:30, 6 October 2012 (UTC)[reply]

Indeed, and it's not all that difficult to estimate this from first principles. The Bohr radius is 0.5*10^(-10) meters, so the elastic cross section for hydrogen molecules should be of the order of a few times 10^(-20) m^2; let's take it to be 10^(-19) m^2. The mean free path L is then the average distance a hydrogen molecule needs to travel before colliding with another one. This means that within the volume swept out by the cross section over the distance L, i.e. 10^(-19) m^2 * L, there should be one molecule on average.

The number density of hydrogen molecules at room temperature and a pressure of p = 1 atmosphere is n = p/(k T) = 2.5*10^(25) m^(-3). So, within the volume 10^(-19) m^2 * L there are 2.5*10^(25) m^(-3) * 10^(-19) m^2 * L = 2.5*10^(6) (L/m) molecules, and since L is the mean free path this number should equal 1. This means that L = 4*10^(-7) meters. Count Iblis (talk) 19:58, 6 October 2012 (UTC)[reply]
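For comparison with the 68 nm figure quoted above, a quick sketch of the standard hard-sphere estimate L = kT/(√2 π d² p), using an assumed kinetic diameter of about 0.37 nm for N2 (a textbook-style value, not a measurement):

    import math

    # Mean free path of N2 at 1 atm and 25 C from the hard-sphere formula
    # L = k*T / (sqrt(2) * pi * d**2 * p).
    k = 1.380649e-23      # Boltzmann constant, J/K
    T = 298.15            # temperature, K
    p = 101325.0          # pressure, Pa
    d = 3.7e-10           # assumed kinetic diameter of N2, m

    mean_free_path = k * T / (math.sqrt(2) * math.pi * d**2 * p)
    print(f"mean free path ~ {mean_free_path*1e9:.0f} nm")   # on the order of 70 nm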

Maybe Medeis was thinking of Rutherford scattering? Alpha particles will travel several centimeters in air before being deflected by a nucleus. Ssscienccce (talk) 08:31, 7 October 2012 (UTC)[reply]
I am afraid we may just have gotten the wrong answer somehow when we did the derivation as a class, since I remember being struck by the results being on a visibly imaginable scale, which 70nm is definitely not. Our chem teacher was on the ball, quite a good mathematician, so I can't see her making such a large error, or letting us remain in one. (I do remember getting a speed of around 1,250 kmph, and I just rounded it down for the example. I think the Rutherford result is a good guess as to what may be confusing me, or the fact that there's on the order of one hydrogen atom per cubic decimeter in deep space.) I'll have to ask some other students of hers if they remember doing the derivation.
I assume that the speed of sound is a result of the speed of the particles derived here? μηδείς (talk) 21:34, 7 October 2012 (UTC)[reply]

What kind of tech advance will we need to have GPS-locatable wedding rings?

That's the one thing you dread the most - losing a wedding ring. Sometimes you leave it behind in a public restroom, or during a picnic in an open field you forget you took it off, and now the field is too big for you to spot it.

That's why I would hope for my and her wedding ring to have a GPS chip, so that once the ring is lost, you go to "ringlo.st" and enter your information, then the GPS satellite tracks the ring down for you.

If anyone's concerned about "powering" the chip, I would hope that piezomechanics allow for kinetic movements to keep it powered up.

A. So how much farther do the necessary components need to miniaturize in order to fit comfortably in a wedding ring?

B. How much might it cost?

C. Would it be waterproof?

D. Are there other issues you'd like to bring up with this hypothetical GPS-locatable wedding ring?

Thanks. --70.179.167.78 (talk) 15:03, 6 October 2012 (UTC)[reply]

A) I think the electronics are almost small enough already, with a couple of exceptions: the battery (which it would need in order to remain powered when taken off) and the antenna (I would also include the display and keyboard, but presumably this would lack those).
C) I don't see waterproofing it being a major problem.
D) First, it wouldn't be "solid gold" anymore, so would be lighter and not feel like a quality ring. This would also make it more fragile. Finally, putting any type of technology on it means it could become obsolete, let's say if GPS units all switch to different frequencies.
A more reasonable approach which could be taken right now is to put passive RFID tags in them. These don't require a battery, but only work at a short distance, with a scanner you'd carry around. So, it would work great if you lost your ring around the house, but not if you lost it someplace random outside the house. However, the only reasons I can see for taking your wedding ring off outside the house are at a jewelry store when having it cleaned/adjusted, at a clinic for a CAT scan and such, and when engaged in adultery. Hopefully that's few enough places where you could bring the scanner. I don't quite understand why you'd take it off in a public restroom (unless this is where one engages in adultery). :-) StuRat (talk) 15:30, 6 October 2012 (UTC)[reply]
Another reason to take your ring off is when working with some dangerous machinery. I work with people in a factory, and some of them remove rings when working because of the danger of having the ring caught in active machinery. Some of them have lost fingers because of rings. 217.158.236.14 (talk) 08:19, 8 October 2012 (UTC)[reply]
As a workaround, could you just buy a fine chain, worn on your neck or attached to your jeans, and put the ring on it whenever you take it off? – b_jonas 19:27, 6 October 2012 (UTC)[reply]
Or just never take it off. My ring has not left my finger in over forty years, and now I cannot physically remove it.--Shantavira|feed me 19:54, 6 October 2012 (UTC)[reply]
I agree that, currently, GPS tracking is infeasible, and probably will be for your lifetime due to power source constraints (unless you are content to replace/recharge the power source rather often). Another unsolicited alternative: get a dozen of these [35], declare them identical in the sense of "wedding ring," and always have a spare if you lose one ;) They also have safety benefits; I know someone who lost his left ring finger because he was wearing his wedding ring... SemanticMantis (talk) 21:16, 6 October 2012 (UTC)[reply]
Good idea. If you must have a silver/gold/platinum/bejeweled wedding ring, keep it in a vault, and wear something disposable from day to day. StuRat (talk) 21:33, 6 October 2012 (UTC)[reply]
Hi, OP here. No one answered how well piezo-recharging would work on rings.
E. Wristwatches are recharged by the kinetic movements of the wearer all the time, so why couldn't rings?
F. Also, if GPS changes to different frequencies, wouldn't the wearers get a mail/email notification letting them know of this, and to get it changed at the local jeweler / electronic store as soon as possible? --70.179.167.78 (talk) 23:41, 6 October 2012 (UTC)[reply]
E) It wouldn't generate enough power to broadcast a signal to a cell phone tower, which is what your system would require (or, even worse, to broadcast to a satellite).
F) Perhaps, but plan on doing this every 5-10 years, as cell phone technology constantly changes. You'd also probably have to replace the guts, as a new frequency might require new electronics and a new antenna. Think about it this way, what 50 year old portable electronic technology is still used today ? Very little, for good reason. See many people with mono AM transistor radios ? StuRat (talk) 00:08, 7 October 2012 (UTC)[reply]
For future navigation I should note that the question on kinetic reclamation (charging by hand movements) is now being archived at Wikipedia:Reference_desk/Archives/Science/2012_October_2 though it is still visible above for now. Wnt (talk) 21:40, 7 October 2012 (UTC)[reply]
  • It's amazing that nobody has yet pointed out that GPS satellites do not track things. They broadcast a signal that GPS devices use to calculate their location. To make this work, the ring would have to include a GPS tracker and something comparable to a cell phone, so that it could call the owner to give its location. Looie496 (talk) 05:11, 7 October 2012 (UTC)[reply]
What's being described is essentially a cell phone, in that it can be tracked and communicated with (or from) at any time. People of most religions might do without the keypad and the address book, but there would be little savings in omitting the ability to communicate voice since microphones and speakers have become very small. So the question collapses to one of when a cell phone would be possible as a ring. This might depend on some genuinely unknown factors, such as the hazards of terahertz radiation and the feasibility of using it as a routine communications medium (since it could theoretically have a shorter antenna, etc.). Wnt (talk) 21:36, 7 October 2012 (UTC)[reply]
Religions ? StuRat (talk) 21:39, 7 October 2012 (UTC)[reply]
Antenna efficiency drops off very quickly below 1/4 wavelength - a wedding ring is most probably too small to house a GPS receiver antenna large enough to work. Even GPS receivers with full-size antennas don't work indoors. Its cellular transmission antenna would be even less efficient, though it might still be able to work close to a cellular base station, but GPS satellite signals are very weak even outdoors with wide open skies. Roger (talk) 22:00, 7 October 2012 (UTC)[reply]
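A quick size check of that point (a sketch, taking the GPS L1 carrier at 1575.42 MHz and a ring of roughly 2 cm diameter):

    # How big is a quarter-wave element at the GPS L1 frequency?
    c = 3.0e8              # speed of light, m/s
    f_l1 = 1575.42e6       # GPS L1 carrier, Hz
    wavelength = c / f_l1
    quarter_wave = wavelength / 4

    print(f"L1 wavelength: {wavelength*100:.1f} cm")          # ~19 cm
    print(f"quarter-wave length: {quarter_wave*100:.1f} cm")  # ~4.8 cm, vs a ~2 cm ring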
It would be more feasible to equip your environment with recording video cameras to track your ring's and your own every movement. They could be attached to your clothing or body in case you visit rest rooms. Or you could have movement sensors attached to your hands. Presumably the pattern of movement between your hands is characteristic of pulling the ring off. Then software could alert you if you step more than a meter away from the ring. All this would be possible with today's accelerometers and cameras. Graeme Bartlett (talk) 23:23, 7 October 2012 (UTC)[reply]

Frog/toad suggestions

Something I half-remember from a wildlife documentary from years and years ago. It's a frog or toad that lays her eggs and then places them in indentations on her back - and her skin actually grows over and covers the eggs completely. Then the tadpoles develop inside the eggs without hatching, and emerge from her back as fully-formed froglets...

Does this creature sound familiar to anyone? As I say, it was a long time ago that I saw this on TV, so my memory might be playing tricks on me - but I do seem to remember watching this as a kid and being freaked out by the skin part. --Kurt Shaped Box (talk) 23:28, 6 October 2012 (UTC)[reply]

Surinam toad. Deor (talk) 23:59, 6 October 2012 (UTC)[reply]
Shudder. That is still as disturbing to me now. --Kurt Shaped Box (talk) 00:05, 7 October 2012 (UTC)[reply]
Well done. I was going to suggest the Midwife toad, but it doesn't really fit the bill. Perhaps someone could add a brief mention of the Surinam Toad to our Frog article? Alansplodge (talk) 00:05, 7 October 2012 (UTC)[reply]
The mere thought of that makes me reach for my backscratcher. StuRat (talk) 04:55, 7 October 2012 (UTC) [reply]
Beautiful! evolution at its best. Richard Avery (talk) 07:19, 7 October 2012 (UTC)[reply]
It reminds me of those pictures that were doing the email rounds a few years ago that purportedly showed some horrible skin disease - but in actuality were elements of lotus seed pods 'shopped onto various body parts. --Kurt Shaped Box (talk) 10:46, 7 October 2012 (UTC)[reply]


October 7

What is the 1D solution for the particle in a box for fermions with spin?

All discussions of fermions in a box seem to involve 3D boxes. But what about 1D boxes? 71.207.151.227 (talk) 01:37, 7 October 2012 (UTC)[reply]

Thanks to uncertainty, you can't pin a good fermion down. Interesting applications have been made using Quantum wire however. Hcobb (talk) 02:08, 7 October 2012 (UTC)[reply]
Pardon me for being ignorant (BSc in something that probably isn't a science...), but is it possible to fit a fermion into a one-dimensional box? Don't they occupy a (small) volume of three-dimensional space? Or does 'box' mean something else entirely in this context? AndyTheGrump (talk) 04:53, 7 October 2012 (UTC)[reply]
AndyTheGrump: I think our Particle in a box page may help clarify. The use of a conceptual/mathematical solution in one dimension makes the math more tractable, with solutions generalizable to higher dimensions. -- Scray (talk) 17:14, 7 October 2012 (UTC)[reply]

It's analogous to the 3D problem. I think you are working directly in the limit of a large volume, otherwise there wouldn't have been a question about this. In that case, you need the density of states. In general, the number of single-particle quantum states for a spin-1/2 particle in some large volume V and in a volume of momentum space Vp is given as 2 V Vp/h^n, where n is the number of dimensions. The factor 2 is the spin degeneracy, i.e. it takes into account that for each wavefunction in configuration space, you have two independent spin states. Count Iblis (talk) 15:32, 7 October 2012 (UTC)[reply]
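As a worked instance of that counting in one dimension: at zero temperature the filled states have momenta between -p_F and +p_F, so their number is N = 2 * L * (2 p_F)/h, giving p_F = n h/4 with n = N/L, and E_F = p_F^2/(2m). A small sketch (the electron mass is standard, but the line density is an illustrative value only):

    # 1-D ideal Fermi gas at T = 0: states counted as 2 * L * (2*p_F) / h,
    # so p_F = n*h/4 and E_F = p_F**2 / (2*m), with n = N/L the line density.
    h = 6.62607015e-34        # Planck constant, J*s
    m_e = 9.1093837015e-31    # electron mass, kg
    eV = 1.602176634e-19      # J per eV

    n_line = 1.0e9            # assumed line density: 1 electron per nanometre

    p_F = n_line * h / 4.0
    E_F = p_F**2 / (2.0 * m_e)
    print(f"p_F = {p_F:.3e} kg m/s, E_F = {E_F/eV:.3f} eV")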

Will Santa Claus' house be underwater by 2020?

Will Santa Claus' house be underwater by 2020? And how will scientists explain this to all the children of the world? 220.239.37.244 (talk) 02:16, 7 October 2012 (UTC)[reply]

How do you know it isn't already? ←Baseball Bugs What's up, Doc? carrots02:35, 7 October 2012 (UTC)[reply]
I suspect that most wise scientists avoid formal discussion and attempts at explanation of Santa's house. HiLo48 (talk) 02:58, 7 October 2012 (UTC)[reply]
Tautology (rhetoric) -- Scray (talk) 04:37, 7 October 2012 (UTC)[reply]
I think it will just add another item to the list of Santa capabilities, next to toy production and presents delivery, see The Evolution of Santa's Science and Technology Ssscienccce (talk) 08:42, 7 October 2012 (UTC)[reply]
Well, NORAD may need to explain how they know where to start tracking him from if his house is underwater. Nil Einne (talk) 17:57, 7 October 2012 (UTC)[reply]
Since Santa needed to get a 2nd mortgage to cover Rudolph's alcoholism treatments, his own Weight Watchers membership, and the cost of human growth hormone treatments for all his helpers afflicted by dwarfism, he was unprepared for the collapse in real estate prices. So, yes, his house is already underwater. StuRat (talk) 04:53, 7 October 2012 (UTC) [reply]

Cat eyes

Can cats see infrared rays? 24.23.196.85 (talk) 05:50, 7 October 2012 (UTC)[reply]

No, and, generally, warm-blooded animals can't, since it would be difficult to see beyond their own infrared glow, to see, say, the glow of a mouse in some underbrush. StuRat (talk) 06:08, 7 October 2012 (UTC)[reply]
Thanks! 24.23.196.85 (talk) 06:19, 7 October 2012 (UTC)[reply]
You're quite welcome. I'll mark this Q resolved. StuRat (talk) 21:33, 7 October 2012 (UTC)[reply]
Well, resolved, but I'd add this, from our article cat senses: Cats are able to distinguish between blues and violets better than between colours near the red end of the spectrum. - Nunh-huh 22:56, 7 October 2012 (UTC)[reply]
I'm removing the resolved tag; such tags aren't recommended (the recent discussion on the Ref Desk talk page touches on the reasons why). Also, I notice that no one has provided a single source for their statements, which is very disappointing, given that this is a Reference Desk.
Moreover, StuRat's responses are inaccurate and imprecise, at best. There is published literature (some of it, which I've linked below, is both online and free of charge) which reports on the sensitivity of cats' eyes to near-infrared radiation.
Both studies note that the sensitivity of the cat's eye to far-red and infrared radiation is certainly lower than to middle-of-the-visible-spectrum light, but both also demonstrate measurable responses at longer wavelengths. Gekeler et al. go out to 826nm (laser line illumination) and 875nm (IR emitting diode). Speaking from my own personal experience as a mammalian scientist with normal color vision, human beings can certainly see bright sources with wavelengths out into the middle 800's.
Delving a bit further into the question, we can ask what happens at wavelengths that get a bit longer: what happens when you hit 950 or 1000 nm (1 micrometer)? Then you start to run up against the physical properties of water. Infrared light at 1000 nm and longer tends to be strongly absorbed by water, which – inconveniently – is a major component of the vitreous humour that fills the eyeball.
For comparison, the blackbody emission of a human-body-temperature object peaks way up around 10 micrometers. That's deep, deep into wavelengths strongly absorbed by water (passage through 1 cm of water will absorb 99.9% of infrared light at this wavelength; the eye is essentially opaque to these rays). Forget being blinded by one's own infrared glow: at these wavelengths one is blinded by the opacity of one's own eye. TenOfAllTrades(talk) 02:29, 8 October 2012 (UTC)[reply]
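Putting a number on that peak via Wien's displacement law (body temperature taken as roughly 310 K):

    # Peak wavelength of blackbody emission at body temperature (Wien's law).
    b = 2.897771955e-3   # Wien displacement constant, m*K
    T_body = 310.0       # K, roughly body temperature

    peak = b / T_body
    print(f"peak emission wavelength ~ {peak*1e6:.1f} micrometres")  # ~9.3 um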
I think it's further worth noting that, at least in snakes, sensation of IR is not done by the eyes, but by specialized "pit" organs. Also, in snakes, there is no specific reception of IR wavelengths. Rather, the pits are structured in a way that crudely focuses radiant heat onto nerve endings that are actually sensing temperature. Someguy1221 (talk) 02:57, 8 October 2012 (UTC)[reply]
It may also be worth pointing out, in regard to StuRat's answer, that infrared is quite a wide spectrum. If, for example, cats were able to see one-micron light, that would be well into the near infrared range by almost anyone's standards, but there would be hardly any glare at all from the cat itself. Presumably Stu was talking about much longer-wave infrared, say around ten microns. --Trovatore (talk) 05:15, 8 October 2012 (UTC)[reply]

Does carbon dioxide have a quadrupole?

It doesn't have a permanent dipole but its electron density map (why can't I find a basic electron density map of it online?) would be strongly polarised, with the carbonyl oxygens retaining most of the electron density and the carbonyl center being electropositive. Yet why doesn't carbon dioxide remove some of the slightly polar flavor compounds that are found in caffeine (CO2-water partition coefficient at least 0.3?) ? It has some polarity, right, just not dipolarity. 71.207.151.227 (talk) 16:35, 7 October 2012 (UTC)[reply]

This is a quadrupole. This is not what CO2 looks like.
No, it has no polarity. A "quadrupole" would look like the pic I am appending. Think of it this way: the two diametrically opposed oxygens are pulling electrons equally from each other across the carbon atom. That creates a roughly equal distribution of electrons across the whole molecule. In general, any molecule which shows perfect VSEPR symmetry (i.e. one of the 3 basic VSEPR shapes: linear, trigonal planar, or tetrahedral) will be perfectly non-polar, so long as every terminal atom is the same, like CO2 where both atoms attached to the central atom are oxygen. This is true even where there is a large difference in electronegativity among the individual atoms, because those electronegativity differences will, in essence, counteract each other. So even molecules like tetrafluoromethane show the same nonpolar character as methane; CF4 and CH4 have comparable melting and boiling points because they are equally nonpolar, and the difference is explainable by differences in London dispersion forces due to the difference in molecule size. --Jayron32 18:46, 7 October 2012 (UTC)[reply]
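To illustrate the cancellation argument with toy numbers (bond dipoles treated as equal-magnitude 2-D vectors pointing away from the central atom; geometry only, not real charge distributions):

    import numpy as np

    # Net molecular dipole as the vector sum of equal-magnitude bond dipoles.
    # Magnitudes are arbitrary units; only the geometry matters here.
    def net_dipole(bond_angles_deg, magnitude=1.0):
        """Sum 2-D bond-dipole vectors at the given angles and return the net magnitude."""
        angles = np.deg2rad(bond_angles_deg)
        vecs = magnitude * np.stack([np.cos(angles), np.sin(angles)], axis=1)
        return np.linalg.norm(vecs.sum(axis=0))

    print(f"linear O=C=O (180 deg apart): {net_dipole([0, 180]):.3f}")    # ~0, cancels
    print(f"bent H-O-H (~104.5 deg):      {net_dipole([0, 104.5]):.3f}")  # nonzero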

Is it wrong to say that we come from the monkey?

Actually, it's from the apes, but maybe the apes came from the monkey, so it's not wrong after all. OsmanRF34 (talk) 22:42, 7 October 2012 (UTC)[reply]

See [38]. We are on a parallel evolutionary path with monkeys, not direct descendants of them. Our common ancestors are all extinct. StuRat (talk) 22:59, 7 October 2012 (UTC)[reply]
Well, even if our common ancestors are extinct, they still could be called 'monkeys.' BTW, I am afraid the graphic you linked to is not the evolutionary perspective, but just the present day classification. OsmanRF34 (talk) 23:32, 7 October 2012 (UTC)[reply]
I found this simple graphic which says that "apes and humans" are descended from "early monkeys". Alansplodge (talk) 23:41, 7 October 2012 (UTC)[reply]
I would interpret StuRat's linked diagram to mean "yes". While it is true that the early ancestors of monkeys and apes, living at a different time than the present and not being a member of any modern clade, might be called whatever we wish, still, it seems logical to call the single common ancestor of "New World monkeys" (Platyrrhini) and "Old World monkeys" (Catarrhini) a "monkey". The catch there is that "monkey" is not a monophyletic group, since it includes a subgroup of "apes" that by tradition are not called monkeys, and if you're not defining it monophyletically, you can define it a lot of ways - for example, you could define it to include every one of the individual subgroups but not the common ancestor. But I think that's a stretch. Wnt (talk) 00:51, 8 October 2012 (UTC)[reply]
  • Biologists use precise terminology to avoid getting bogged down in semantic issues like this. A biologist would say that we are descended from animals that belong to the group simiiformes, which includes monkeys and their ancestors. Most biologists when speaking informally would probably apply the term "monkey" to everything in simiiformes, but there is no official rule governing this. Looie496 (talk) 00:50, 8 October 2012 (UTC)[reply]
    • Apes, including Humans, are Catarrhines. Our common ancestor with the Catarrhine monkeys would be itself considered a catarrhine monkey. Its common ancestor with the platyrrhine monkeys would also be considered a monkey. Yes, we are descended from some animals which, if they were living today, would unambiguously be considered monkeys. μηδείς (talk) 04:10, 8 October 2012 (UTC)[reply]

Physical Chemistry question - low pressure vacuum chamber

Hi all,

I am stuck on this physical chemistry question and would really appreciate some help. Here goes the question

Many processes such as the fabrication of integrated circuits are carried out in a vacuum chamber to avoid reaction of the material with oxygen in the atmosphere. It is difficult to routinely lower the pressure in a vacuum chamber below 1.0 * 10^-10 Torr.

A) Calculate the molar density at this pressure at 299K

Well this is just simple substitution: P = ρ R T where ρ is the molar density

ρ = P/RT

 = 1*10^-10/(299 * 62.36)
 = 5.36 * 10^-15 mol/L

B) What fraction of the gas phase molecules initially present at 1.0 atm in the chamber are present at 1.0 * 10^-10 Torr?

This is the tricky part. I know that PV = nRT, and when you lower the pressure from 1.0 atm (760 torr) to 1.0 * 10^-10 torr, assuming that V and T are held constant, n is lowered as well. Since molar density is defined to be the number of moles over volume:

State 1 = 760 torr

ρ1 = n1/V = P1/RT = 0.0407 mol/L

State 2 = 1.0 * 10^-10 torr

ρ2 = n2/V = 5.36 * 10^-15 mol/L

The ratio n2/n1 can be calculated by dividing ρ2 by ρ1, which equates to 1.32 * 10^-13.

The online homework system is telling me that my answer is wrong and I am just trying to figure out what I did wrong. Any help is appreciated! Thanks in advance! — Preceding unsigned comment added by 169.232.236.95 (talk) 23:03, 7 October 2012 (UTC)[reply]

I worked it out without looking at your method, and got the same answer as you (1.322 x 10^-13). Does the online system give you their numerical answer? If you can post that, we might be able to spot a mistake they made. Ratbone124.178.60.218 (talk) 00:25, 8 October 2012 (UTC)[reply]
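For what it's worth, a quick numeric check of both parts (a sketch; R taken as 62.364 L·Torr·K^-1·mol^-1, and the fraction follows directly from n2/n1 = P2/P1 at fixed V and T):

    # Part A: molar density at 1.0e-10 Torr and 299 K.
    # Part B: fraction remaining relative to 1 atm at the same V and T.
    R = 62.364            # L Torr / (K mol)
    T = 299.0             # K
    p_low = 1.0e-10       # Torr
    p_atm = 760.0         # Torr

    rho_low = p_low / (R * T)          # mol/L
    fraction = p_low / p_atm           # n2/n1 at fixed V and T

    print(f"molar density: {rho_low:.2e} mol/L")   # ~5.4e-15
    print(f"fraction remaining: {fraction:.2e}")   # ~1.3e-13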

October 8

Accuracy of forward looking economic statements

There are many cases where businessmen, economists, and similar people have to predict what the market is going to look like for their product weeks, months, or even years in advance. For example, farmers hope to plant crops that will give them a good profit come harvest time. Or alternatively, a book publisher tries to predict demand when deciding on the size of a print run. In some cases there are robust futures markets to help, in other cases in-depth business studies might be undertaken, but often such predictive judgments seem to boil down to some expert's personal opinion. I'm looking for materials I might read reviewing the successes and failures of such forward-looking economic statements. For example, when experts predict that the GDP will increase over the next 12 months, how often are they right? Do the target prices set by stock market analysts have predictive value? Etc. Also, I would like to read about cases where detailed mathematical modeling has been applied to economic problems, and where such models have tended to do well or poorly. Thanks for your help. Dragons flight (talk) 11:29, 8 October 2012 (UTC)[reply]

Pyramids at Giza

A claim often repeated on the internet is that the Great Pyramid at Giza could not be reproduced with today's technology. It is supposedly aligned to true north with a greater degree of accuracy than any modern building, and we could not match or better it. The quality and accuracy of the stonework are said to be beyond anything that could be done today, even with modern tools and equipment.

I just cannot believe that this is true. The sort of website this appears on usually has to do with the paranormal and the idea that the pyramids must have been built by aliens, etc. - yet it even appears on some more "reputable" sites.

Another claim is that Giza is located at the precise geographical centre of the earth's landmass and the chances of the pyramid being located there are billions to one. I have researched and debunked this particular gem - how do you find the centre of the surface of the sphere????