
Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia


Welcome to the science section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


March 14

NIST aluminium ion clock

I was watching a popular science program on TV and it said that the aluminum ion experimental clock at the National Institute of Standards and Technology is the world's most precise clock, and is accurate to one second in about 3.7 billion years.

What do they mean by that? If I say my watch is accurate to 1 second a day I mean that it gains or loses no more than 1 second a day relative to GMT or some other standard. In other words, accuracy can only be measured relative to a standard.

But if the NIST clock is truly the most accurate then it is the standard, since there is nothing more accurate to compare it to. In effect, the NIST clock defines time. To check its accuracy you would have to measure 3.7 billion years by some other, more accurate, means, which contradicts the premise.

So, what do they mean by saying it’s the most precise clock? — Preceding unsigned comment added by Callerman (talkcontribs) 00:37, 14 March 2012 (UTC)[reply]

They are translating the clock resonator's Q factor, which is a very technical measurement (of phase noise, or frequency stability), into "layman's terms." Expressing the frequency stability in "seconds of drift per billion years" is a technically correct, but altogether meaningless, unit conversion. Over the time-span of a billion years, it's probable that the Q-factor will not actually remain constant. It's similar to expressing the speed of a car in earth-radii-per-millennium, instead of miles-per-hour. The math works out; the units are physically valid and dimensionally correct ([1]); but we all know that a car runs out of gas before it reaches one earth-radius, so we don't use such silly units to measure speed. Similarly, physicists don't usually measure clock stability in "seconds drift per billion years" in practice.
Here's some more detail from NIST's website: How do clocks work? "The best cesium oscillators (such as NIST-F1) can produce frequency with an uncertainty of about 3 × 10^-16, which translates to a time error of about 0.03 nanoseconds per day, or about one second in 100 million years." And, they link to "From Sundials to Atomic Clocks," a free book for a general audience that explains how atomic clocks work. Chapter 4 is all about Q factor: what it is, why we use it to measure clocks, and how good a Q we can build using different materials. Nimur (talk) 01:04, 14 March 2012 (UTC)[reply]
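The figures quoted above can be checked with plain unit arithmetic. The sketch below uses the 3 × 10^-16 NIST-F1 number quoted above, and assumes the "one second in 3.7 billion years" claim corresponds to a fractional frequency uncertainty of roughly 8.6 × 10^-18 — a value inferred from that claim, not taken from the programme.

```python
# Minimal sketch: converting fractional frequency uncertainty into "time error
# per interval" figures like the ones quoted above. The 8.6e-18 value is an
# assumption inferred from the 3.7-billion-year claim, not a quoted number.

SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

def drift_ns_per_day(fractional_uncertainty):
    """Accumulated time error per day, in nanoseconds."""
    return fractional_uncertainty * SECONDS_PER_DAY * 1e9

def years_to_drift_one_second(fractional_uncertainty):
    """Time needed to accumulate one full second of error, in years."""
    return 1.0 / (fractional_uncertainty * SECONDS_PER_YEAR)

print(drift_ns_per_day(3e-16))              # ~0.026 ns/day (the NIST-F1 caesium figure)
print(years_to_drift_one_second(3e-16))     # ~1.1e8 years, i.e. "one second in 100 million years"
print(years_to_drift_one_second(8.6e-18))   # ~3.7e9 years, matching the aluminium-ion clock claim
```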
Alright Nimur, a question to your answer so you know you're not off the hook yet. Does anyone in science or society-at-large benefit from the construction of a clock that is more accurate than 0.03 nanoseconds per day, or is this an intellectual circle jerk? (which is also fine, by the way, because science is awesome.) Someguy1221 (talk) 01:56, 14 March 2012 (UTC)[reply]
Indeed, there are practical applications. If I may quote myself, from a seemingly unrelated question about metronome oscillations, in May 2011: "This "theoretical academic exercise" is the fundamental science behind one of the most important engineering accomplishments of the last century: the ultra-precise phase-locked loop, which enables high-speed digital circuitry (such as what you will find inside a computer, an atomic clock, a GPS unit, a cellular radio-telephone, ...)." In short, yes - you directly benefit from the science and technology of very precise clocks - they enable all sorts of technology that you use in your daily activities. The best examples of this would be high-speed digital telecommunication devices - especially high frequency wireless devices. A stable oscillator, made possible by a very accurate clock, enables better signal reception, more dense data on a shared channel, and more reliable communication. Nimur (talk) 06:43, 14 March 2012 (UTC)[reply]
Nimur has not correctly understood the relationship between Q and stability. Q is a measure of sharpness of resonance. If you suspend a thin wooden beam between two fixed points and hit it, it will vibrate - ie it resonates. But the vibrations quickly die away, because wood is not a good elastic material - it has internal friction losses. If you use a thin steel beam, the vibrations die away only slowly - it is a better elastic material. Engineers would say the wood is a low-Q material and the steel a high-Q material. All other things being equal, a high-Q resonator will give a more stable oscillation rate, because if other factors try to change the oscillation rate the high-Q resonator will resist the change better. But there are other things and they aren't generally equal. Often a high-Q material expands with temperature - then the rate depends on temperature no matter how high the Q is.
In the real world (i.e. consumers and industry, as distinct from esoteric research in university labs) the benefit of precise clocks is in telecommunications - high performance digital transmission requires precise timing to nanosecond standards, and in metrology - Time is one of the 3 basic quantities [Mass, Length, Time] that all practical measurements of any quantity are traceable back to. Precise timing is also the basis of navigation - GPS is based on very precise clocks in each satellite. So folks like the NIST are always striving to make ever more precise clocks so they DO have a reference standard against which they can check ever better clocks used in industry etc.
It is quite valid to state the accuracy of clocks as so many nanoseconds error per year or seconds per thousand years or whatever. It's often done that way because it gives you a good feel for the numbers. You don't need to measure for a year or 100 years to know. Here's an analogy: When I was in high school, we had a "rocket club". A few of us students, under the guidance of the science teacher, made small rockets that we launched from the school cricket pitch. We measured the speed and proudly informed everyone - the best did about 300 km per hour. That does not mean our rockets burned for a whole hour and went 300 km up. They only burned for seconds and achieved an altitude of around 400 m, but we timed the rockets passing two heights and did the math to get km/hour. If we told our girlfriends that the rockets did 80 m/sec, that doesn't mean much to them, but in km/hr they can compare it with things they know, like cars. Keit120.145.30.124 (talk) 03:02, 14 March 2012 (UTC)[reply]
I respectfully assert that I do indeed have a thorough understanding of Q-factor as it relates to resonance and oscillator frequency stability. I hope that if any part of my post was unclear, the misunderstanding could be clarified by checking the references I posted. Perhaps you misunderstood my use of frequency stability for system stability in general? These are different concepts. Q-factor of an oscillator directly corresponds to its frequency stability, but may have no connection whatsoever to the stability of the oscillation amplitude in a complex system. Nimur (talk) 06:59, 14 March 2012 (UTC)[reply]
Does "Q-factor of an oscillator directly correspond to its frequency stability" as you stated? No, certainly not. An oscillator fundamentally consists of two things: a) a resonant device or circuit and b) an amplifier in a feedback connection that "tickles" the resonant device/circuit to make up for the inevitable energy losses. This has important implications. First, you can have a high Q but at the same time the resonant device can be temperature dependent. As I said above it doesn't matter what the Q is, if the resonant device is temperature sensitive, then the oscillation frequency/clock rate will vary with temperature. Same with resonant device aging - quartz crystal and tuning forks can have a very high Q, but still be subject to significant aging - the oscillation rate varies more the longer you leave the system running. Second, the necessary feedback amplifier can have its own non-level response to frequency. This non-level response combines with the resonant device response to in effect "pull" the resonance off the nominal frequency. Real amplifiers all have a certain degree of aging and temperature dependence in teir non-level response. Also, practical amplifiers can exhibit "popcorn effect" - their characteristics occaisonally jump very slightly in value. When you get stability down below parts per 10^8, this can be important. All this means that it HELPS to have high-Q (it make "pulling" less significant), but you CAN have high stability with low Q (if the amplifier is carefully built and the resonant device has a low temperature coefficient), and you can have rotten stability with very high Q. I've not discussed amplitude stability in either of my posts, as this has little or no relavence to the discussion. Keit120.145.166.92 (talk) 12:16, 14 March 2012 (UTC)[reply]
Several sources disagree with you; Q is a measure of frequency stability. Frequency Stability, First Course in Electronics (Khan et al., 2006). Mechatronics (Alciatore & Histand), in the chapter on System Response. Our article, Explanation of Q factor. These are just the few texts I have on hand at the moment. I also linked to the NIST textbook above. Would you like a few more references? I'm absolutely certain this is described in gory detail in Horowitz & Hill. I'm certain I have at least one of each: a mechanical engineering text, a physics textbook, and a control theory textbook, on my bookshelf at home, from each of which I can look up the "oscillators" chapter and cite a line at you, if you would like to continue making unfounded assertions. Frequency stability is defined in terms of Q. Q-factor directly corresponds to frequency stability. Nimur (talk) 18:34, 14 March 2012 (UTC)[reply]
(1) If you read the first reference you cited (Khan) carefully it says with math the same as I did in just words: High Q helps but is not the whole story. It says high Q helps, but for any change, there must be a change initiator - so if there is no initiator, there's no need for high Q. Nowhere does Khan say stability directly relates to Q. In fact, his math shows where one of the other factors gets in, and offers a clue on another. (2) I don't have a copy of your 2nd citation, so I can't comment on it. (3) The Wikipedia article does say "High Q oscillators ... are more stable" but this is misleading, as Q is only one of many factors. (4) I don't think you'll find anywhere in H&H where it says Q determines frequency stability. With respect to your good self Nimur, you seem to be making 3 common errors: a) you are reading into texts what you want to believe, rather than reading carefully, (b) like many, you cite Wikipedia articles as an authority. That's not what Wikipedia is for - the articles are good food for thought and hints on where to look and what questions to ask, but are not necessarily accurate. c) you haven't recognised that Khan, as a first course presentation, gives a simplified story that, while correct in what it says, does not cover all the details. Rather than dig up more books, read carefully what I said, then go back to the books you've already cited.
A couple of examples: Wien bridge RC oscillator - Q is 0.3, extremely low, but with careful amplifier design temperature stability can approach 1 part in 10^5 over a 30 C range, 1 part in 10^4 is easy. 2nd example: I could make up an LC oscillator with the inductor a high Q device (say 400) and the C a varicap diode (Q up to 200). The combined in-circuit Q can be around 180. That should give a frequency stability much much better than the Wien oscillator with its Q of only 0.3. But wait! Sneaky Keit decided to bias the varicap from a voltage derived from a battery, plus a random noise source, plus a temperature transducer, all summed together. So, frequency tempco = as large as you like, say 90% change over 30 C, aging is dreadful (as the battery slowly goes flat), and the thing randomly varies its frequency all over the place. I can't think why you would do this in practice, but it does CLEARLY illustrate that, while it HELPS to have high Q, Q does NOT directly correspond to frequency stability; lots of other factors can and do affect it. Keit121.221.82.58 (talk) 01:25, 15 March 2012 (UTC)[reply]
Keit, your unreferenced verbiage is no more than pointless pedantry. And nobody believes you had a girlfriend in high school either. — Preceding unsigned comment added by 69.246.200.56 (talk) 01:57, 15 March 2012 (UTC)[reply]
What Keit is saying makes sense to me. According to our article, "[a] pendulum suspended from a high-quality bearing, oscillating in air, has a high Q"—but obviously a clock based on that pendulum will keep terrible time on board a ship, and even on land its accuracy will depend on the frequency of earthquakes, trucks driving by, etc., none of which figure into the Q factor. If you estimate the frequency stability of an oscillator based on the Q factor alone, you're implicitly assuming that it's immune to, or can be shielded from, all external influences. I'm sure the people who analyze the stability of atomic clocks consider all possible external perturbations (everything from magnetic fields to gravitational waves) in the analysis. Those influences may turn out to be negligible, but you still have to consider them.
Also, it seems as though no one has really addressed the original question, which is "one second per 3.7 billion years relative to what?". The answer is "relative to the perfect mathematical time that shows up in the theory that's used to model the clock", so to a large extent it's a statement about our confidence in the theory. I don't know enough about atomic clocks to say more than that, but it may be that the main contributor to the inaccuracy is Heisenberg's uncertainty principle, in which case we're entirely justified in saying "this output of this device is uncertain, and we know exactly how uncertain it is." -- BenRG (talk) 22:59, 15 March 2012 (UTC)[reply]
I'm afraid I don't think anyone has addressed my original question. Answers in terms of frequency stability, etc., do not seem to work. How do you know a frequency is stable unless you have something more stable to measure it against? How do you measure a deviation except with another, more accurate clock? But if this is the most accurate clock, what is there to compare it against? Just looking at my watch alone, for example, if I am in a room with no other clocks and no view of the outside daylight, I cannot say whether it is running fast or slow. I can only tell that by comparing it with something which I assume is more reliable. — Preceding unsigned comment added by Callerman (talkcontribs) 06:32, 16 March 2012 (UTC)[reply]
I think BenRG has answered your question, but I'll see if I can help by expanding on it a bit. If you are in a closed room with only one watch, and you don't know what's inside the watch, then yes, you can't tell if it's keeping correct time or not. But if you have two or more watches made by the same process, and you understand the process, you can reduce the risk of either or both keeping incorrect time. And if you have two or more watches each working in a different way, you can do better still, even if each watch is of different accuracy. That is, it is possible to use a clock of lesser (only a little) accuracy to prove the accuracy of the better clock, to an (imperfect) level of confidence. This is counter-intuitive, so I'll explain.
Any metrology system, including precision clocks, has errors that fall into 2 classes: Systematic Error (http://en.wikipedia.org/wiki/Systematic_error), and Random Error. Systematic errors are deterministic/consistent errors implicit in the system. If you 100% understand the system (how it works), you can analyse, correct for, reduce, and test for such errors. For example, a clock may have a consistent error determined by temperature. By holding one clock at constant temperature, we can test a second clock over (say) a 10 C range, comparing it with the first. If the second changes (say) 1 part in 10^8 compared to the constant-temperature clock, we could reasonably infer that it will stay within 1 part in 10^9 if kept within 1 C of a convenient temperature. The trouble is, there may be a systematic error you didn't think of - you can't know everything. The risk is reduced (but certainly not eliminated) if you have two or more clocks working on entirely different principles of roughly similar performance. Random errors (eg errors due to electrical noise, Heisenberg uncertainty etc) are easy to deal with. One builds as many clocks as is convenient, and keeps a record of the variation of each with respect to the average of all of them. There is a branch of statistics (n-sigma analysis, and control charts/Shewhart charts, see http://en.wikipedia.org/wiki/Control_chart) for handling this. After a period of time, the degree of random error, the "accurate" mean, and any clock that is in error due to a manufacturing defect (even tiny errors), will emerge out of the statistical "noise".
It's quite true to say that, at the end of the day, how long a second is, is not decided by natural phenomena but by arbitrary human decision. From time to time the standard second is defined in terms of a tested best available clock. As better clocks get built, we can confine the error with respect to the declared standard to closer and closer limits, but the length of the second is whatever the standards folks declare it to be.
Perhaps my explanation will annoy Nimur and some turkey who has a problem with girls, but I hope it satisfies the OP. Essentially I'm saying much the same as BenRG - as folks build better and better clocks, they have better and better error confidence, based on both theory and testing multiple examples of a new clock against previously built clocks that are nearly as good. But the duration of the standard second is arbitrary. Keit124.178.61.36 (talk) 07:33, 16 March 2012 (UTC)[reply]
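Keit's ensemble idea - estimating each clock's random error against the average of several comparable clocks, with no better reference available - can be sketched numerically. The toy model below uses invented noise levels and is not NIST's actual procedure; it also slightly understates each clock's error, because every clock contributes to the ensemble average it is compared against.

```python
import random
import statistics

# Toy model: four hypothetical clocks with different random error levels (sigma,
# in arbitrary units). Each clock is compared only with the ensemble average.
random.seed(1)
TRUE_TICK = 1.0
SIGMAS = [2e-9, 3e-9, 2.5e-9, 4e-9]   # invented per-clock noise levels

readings = [[random.gauss(TRUE_TICK, s) for s in SIGMAS] for _ in range(10_000)]
ensemble = [statistics.mean(r) for r in readings]

for i, sigma in enumerate(SIGMAS):
    scatter = statistics.stdev(r[i] - m for r, m in zip(readings, ensemble))
    print(f"clock {i}: true sigma {sigma:.1e}, scatter vs ensemble {scatter:.1e}")
# The noisiest clock shows the largest scatter against the ensemble mean,
# i.e. it "emerges out of the statistical noise" without any better reference.
```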
Thanks for taking the trouble to explain. I think I understand, at least in general if not in the detail. — Preceding unsigned comment added by Callerman (talkcontribs) 02:23, 17 March 2012 (UTC)[reply]

Hospital de Sant Pau Barcelona Spain.

Is this hospital open for medical care to patients today, in 2012? 71.142.130.132 (talk) 03:53, 14 March 2012 (UTC)[reply]

Have you seen that we have an article on Hospital de Sant Pau? It seems to suggest that it ceased being a hospital in June 2009. Vespine (talk) 04:04, 14 March 2012 (UTC)[reply]

Mean Electrical Vector of the Heart

Hello. When would one drop perpendiculars from both lead I (magnitude: algebraic sum of the QRS complex of lead I) and lead III (magnitude: algebraic sum of the QRS complex of lead III), and draw a vector from the centre of the hexaxial reference system to the point of intersection of the perpendiculars to find the mean electrical vector? Sources are telling me to drop a perpendicular from the lead with the smallest net QRS amplitude. Thanks in advance. --Mayfare (talk) 04:46, 14 March 2012 (UTC)[reply]

I don't have medical training, so I can only guess, but if I understand correctly:
  • the contractions of different parts of the heart have accompanying electrical signals that move in the same direction as the contraction. Movement towards one of the electrodes of an electrode pair will give a positive or negative signal, while movement perpendicular to that direction would have little influence because it would affect both electrode potentials the same way, increase or decrease.
  • All these movements can be represented by vectors and the mean vector of these is what you're after.
  • For each electrode pair you have measured the positive and negative deflection voltages, the sum of those give you a resulting vector for each electrode pair, and these correspond to the magnitude of the mean vector in each of those directions.
  • If one of these vectors is zero or very small, you know that the mean vector must be perpendicular to that direction, leaving you only one last thing to determine, which way it points.
  • If the two smallest vectors have the same magnitude, then the mean vector will be on one of the angle bisectors. The info I got has lead I at 0° (to the right), lead II at 60° clockwise rotation, lead III at 120°. If I and III are equal in magnitude (don't need the same sign), then the mean vector can be 150° or -30°, but in those cases lead II will be smallest, so the only possibilities left are +60° or -120°, depending on the sign of the lead II result. That's how I understood it, but all the different electrodes made it a bit confusing. So far only arm and leg electrodes seemed involved?? A link to a site with the terminology or examples could help. More people are inclined to have a look if the subject is just a click away instead of having to google first. Hmmm, would there be a correlation between links in question and number of responses... 84.197.178.75 (talk) 19:37, 14 March 2012 (UTC)[reply]

All basically correct! Our article on Electrocardiography has some information about vectorial analysis, but I'm not sure if that's sufficient for you. In the normal heart the mean electrical vector is usually around about 60 degrees (lead II), but anywhere between -30 and +90 is considered normal. The mean vector, of course, will be perpendicular to the direction of the smallest vector, and in the direction of the most positive vector. Mattopaedia Say G'Day! 07:10, 16 March 2012 (UTC)[reply]
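The perpendicular-dropping construction described above can also be done algebraically, since each lead's net QRS amplitude is the projection of the mean vector onto that lead's axis (lead I at 0°, lead III at +120° in the hexaxial system, as in the post above). A small sketch with made-up amplitudes, not real ECG data:

```python
import math

# Sketch of the hexaxial construction: recover the mean QRS axis from the net
# QRS amplitudes of lead I (axis at 0 deg) and lead III (axis at +120 deg).
# The example amplitudes below are invented for illustration.

def mean_axis_degrees(lead_I, lead_III):
    x = lead_I                                        # horizontal component (projection on lead I)
    y = (lead_I + 2.0 * lead_III) / math.sqrt(3.0)    # vertical component, recovered from the lead III projection
    return math.degrees(math.atan2(y, x))

print(mean_axis_degrees(5.0, 5.0))   # ~ +60 deg: I and III equal -> axis along lead II
print(mean_axis_degrees(8.0, 0.0))   # ~ +30 deg: lead III ~ 0 -> axis perpendicular to lead III
print(mean_axis_degrees(5.0, -5.0))  # ~ -30 deg: equal magnitude, opposite sign
```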

Electricity prices

In this Scientific American article, US Energy Secretary Chu says natural gas "is about 6 cents" per kilowatt hour, implying it is the least expensive source of electricity. However Bloomberg's energy quotes say on-peak electricity costs $19.84-26.50 per megawatt hour, depending on location. Why is that so much less? 75.166.205.227 (talk) 09:44, 14 March 2012 (UTC)[reply]

The Bloomberg prices are prices at which energy companies can buy or sell generating capacity in the wholesale commodities markets. An energy company will then add on a mark-up to cover their costs of employing staff, maintaining a distribution network, billing customers etc. plus a profit margin. Our article on electricity pricing says that the average retail price of electricity in the US in 2011 was 11.2 cents per kWh. Gandalf61 (talk) 10:48, 14 March 2012 (UTC)[reply]
(Per the description,) The figures given in the Scientific American article are apparently the Levelised energy cost. This is a complicated calculation and the number depends on several assumptions, and it's not clear to me what market the estimated break-even price is for, although it sounds like it depends on what you count in the costs. Also I'm not sure why the OP is making the assumption natural gas is the least expensive source; I believe coal normally is if you don't care about the pollution. Edit: Actually the source says coal is normally more expensive now, although I don't think this is ignoring pollution. Nil Einne (talk) 11:21, 14 March 2012 (UTC)[reply]
The Bloomberg quote for gas is $2.31/MMBtu, and 1 MMBtu is 293 kWh, so that would be about 0.8 cents per kWh. Maybe he was a factor 10 off? Or he quoted the European consumer gas prices, those are around €0.05 per kWh... 84.197.178.75 (talk) 12:56, 14 March 2012 (UTC)[reply]
The article I linked above suggests the figures are accurate. For example the lowest for natural gas (advanced combined cycle) is given as $63.1/megawatt-hour. The cost for generating electricity from natural gas is obviously going to be a lot higher than just the price of the gas. Nil Einne (talk) 13:24, 14 March 2012 (UTC)[reply]
Oops, you're right of course. I was thinking 10% efficiency was way too low for a power plant, I saw your comment but the penny didn't drop then... 84.197.178.75 (talk) 15:27, 14 March 2012 (UTC)[reply]
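The gap between the raw fuel quote and the levelised figure is mostly plant efficiency plus everything other than fuel. A rough sketch of the arithmetic, assuming a combined-cycle efficiency of about 50% (an assumed typical value, not a number quoted in the thread):

```python
# Rough sketch: raw gas price vs the fuel cost of the electricity made from it.
# The 50% plant efficiency is an assumption (typical combined-cycle), not a quoted figure.

GAS_PRICE_USD_PER_MMBTU = 2.31     # Bloomberg quote above
KWH_THERMAL_PER_MMBTU = 293.0      # unit conversion
PLANT_EFFICIENCY = 0.50            # assumed

fuel_cost_thermal = GAS_PRICE_USD_PER_MMBTU / KWH_THERMAL_PER_MMBTU
fuel_cost_electric = fuel_cost_thermal / PLANT_EFFICIENCY

print(f"{100 * fuel_cost_thermal:.2f} cents per kWh of heat")          # ~0.8
print(f"{100 * fuel_cost_electric:.2f} cents per kWh of electricity")  # ~1.6
# The $63.1/MWh (~6.3 cents/kWh) levelised figure adds capital, O&M, etc. on top of fuel.
```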
Talk of gas-generated electricity being cheaper than coal is puzzling. In the 1970s baseload coal and nuke were cheap electricity, and natural gas was used in low-efficiency fast-start peakers, typically 20 megawatt turbine units, which could be placed online instantly to supplement the cheaper-fueled generators, when there was a loss of generation or to satisfy a short-duration peak load. The peakers might be 10 or 15 percent of total generation for a large utility. The coal generators were 10 times larger (or more) than the gas turbines, and took hours to bring on line. The fuel cost for gas was over 4 times the fuel cost for other fossil fuels, and 18 times as much as for nuclear. The total cost was over 3 times as much for gas as for other fossil fuels and 8 times as much as nuclear. Is gas now being used in large base-load units, 300 to 1000 megawatt scale, to generate steam to run turbines, rather than as direct combustion in turbines? Edison (talk) 15:33, 14 March 2012 (UTC)[reply]
I think so. This 1060 MW plant built in 2002 is very typical of the new gas plants installed from the late 1990s to present. They are small, quiet, relatively clean except for CO2, and have become cookie cutter easy to build anywhere near a pipeline. 75.166.205.227 (talk) 18:25, 14 March 2012 (UTC)[reply]

I still do not understand why a power company would quote a wholesale contract price less than 30% of their levelized price. Even if it was only for other power companies, which I don't see any evidence of, why would they lose so much money when they could simply produce less instead? 75.166.205.227 (talk) 18:25, 14 March 2012 (UTC)[reply]

In some areas (for example, in California), energy companies do not have a choice: they must produce enough electricity to meet demand, even if this means operating the business at a loss. Consider reading: California electricity crisis, which occurred in the early parts of the last decade. During this period, deregulation of the energy market allowed companies (like the now infamous Enron) to simply turn the power-stations off if the sale-price was lower than the cost to produce. Citizens didn't like this. To give a short summary of a very long and complicated situation, the citizens of California fired the governor, shut down Enron, and mandated several new government regulations, and several new engineering enhancements to the energy grid. The economics of power distribution are actually very complicated; I recommend "proceeding with caution" any time anyone quotes a "price" without clearly qualifying what they are describing. Nimur (talk) 18:43, 14 March 2012 (UTC)[reply]
You should remember what levelised cost is. It tries to take into account total cost over the lifespan and includes capital expenditure etc. It's likely a big chunk of the cost is sunk. Generating more will increase expenditure, e.g. fuel and maintenance and perhaps any pollution etc taxes, and may also lower lifespan, but provided your increased revenue is greater than the increased expenditure (i.e. you're increasing profit) then it will still likely make sense to generate more. The fact you're potentially earning less than needed to break even is obviously not a happy picture for your company, but generating less because you're pissed isn't going to help anything, in particular it's not going to help you service your loans (which remember are also part of the levelised cost). It may mean you've screwed up in building the plant although I agree with Nimur, it's complicated and this is a really simplistic analysis (but I still feel it demonstrates why you can't just say it's better if they don't generate more). (Slightly more complicated analysis may consider the risk of glutting the market although realistically, you're likely only a tiny proportion of the market.) You should also perhaps remember that the costs are (as I understand it at least) based on the assumption of building a plant today (which may suggest it makes no sense to build any more plants, but then we get back to the 'it's complicated' part). Nil Einne (talk) 20:13, 14 March 2012 (UTC)[reply]
Ah, I see, so they have already recovered (or are recovering) their sunk and amortized costs from other contracts and/or previous sales, so the price for additional energy is only the marginal cost of fuel and operation. Of course that is it, because as regulated utilities their profits are fixed. That's great! ... except for the Jevons paradox implications. 75.166.205.227 (talk) 22:08, 14 March 2012 (UTC)[reply]
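The sunk-cost point above reduces to a simple dispatch rule: once the plant exists, run it whenever the market price beats its marginal (fuel plus variable O&M) cost, regardless of the levelised figure. A toy illustration with invented numbers:

```python
# Toy dispatch decision. All numbers are invented for illustration; only the
# ~$26.50/MWh on-peak figure echoes the Bloomberg quote earlier in the thread.

LEVELISED_COST = 63.0    # $/MWh, includes recovery of sunk capital
MARGINAL_COST = 25.0     # $/MWh, fuel + variable O&M only (assumed)
market_price = 26.50     # $/MWh

if market_price > MARGINAL_COST:
    margin = market_price - MARGINAL_COST
    print(f"generate: every MWh contributes ${margin:.2f} toward the sunk costs")
else:
    print("idle: each MWh generated would lose money outright")
# Either way the plant may never earn back its full levelised cost at this price,
# but that is a reason to regret building it, not a reason to stop running it.
```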
Natural gas is the cheapest energy source delivered directly to the home, at about 1/3 the cost of electricity per BTU, for those of us lucky enough to have natural gas lines. Sure, our homes explode every now and then, but oh well. StuRat (talk) 22:38, 14 March 2012 (UTC)[reply]
If you extrapolate these numbers for cumulative global installed wind power capacity, you get this 95% prediction confidence interval.
When the natural gas which is affordable (monetarily and/or environmentally) has been burned up by 1000 megawatt power plants, then what heat source will folks use who now have "safe, clean, affordable" natural gas furnaces? I have always thought (and I was not alone) that natural gas should be the preferential home heating mode, rather than electric resistance heat. Heat pumps are expensive and kick over to electric resistance heat when the outside temperature dips extremely low (unless someone has unlimited funds and puts in a heat pump which extracts heat from the ground). Edison (talk) 00:08, 15 March 2012 (UTC)[reply]
I agree that we are rather short-sighted to use up our natural gas reserves to generate electricity. Similarly, I think petroleum should be kept for making plastics, not burned to drive cars. We should find other sources of energy to generate electricity and power our cars, not just to save the environment but also to preserve these precious resources. We will miss them when they are gone. StuRat (talk) 07:42, 15 March 2012 (UTC)[reply]
I completely agree, but honestly think there is nothing to worry about. Wind power is growing so quickly and so steadily that it has the tightest prediction confidence intervals I have ever seen in an extrapolation of economics data. Also, there is plenty of it to serve everyone and it's going to get very much less expensive and higher capacity on the same real estate and vast regions of ocean very soon. Npmay (talk) 22:01, 15 March 2012 (UTC)[reply]
Who did that extrapolation and what are the assumptions? It appears to be based on an exponential growth model—why not logistic growth? -- BenRG (talk) 00:35, 16 March 2012 (UTC)[reply]
I agree a logistic curve would be a better model, but when I tried fitting the sigmoids, they were nearly identical -- within a few percent -- to the exponential model out to 2030, and did not cusp until long enough that the amount of electricity being produced was unrealistic. Npmay (talk) 01:20, 16 March 2012 (UTC)[reply]
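For readers who want to reproduce the comparison being discussed, a sketch of fitting both models is below. The capacity figures in the array are approximate placeholders for the published global totals, not an authoritative series; substitute real data before drawing conclusions. Note that whether the two fits diverge before 2030 depends almost entirely on the fitted ceiling of the logistic curve.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit exponential and logistic models to cumulative installed wind capacity (GW).
# The data below are approximate placeholder values, not an authoritative series.
years = np.arange(2000, 2012)
capacity = np.array([17, 24, 31, 39, 48, 59, 74, 94, 121, 159, 198, 238], float)

def exponential(t, a, k):
    return a * np.exp(k * (t - 2000))

def logistic(t, L, k, t0):
    return L / (1 + np.exp(-k * (t - t0)))

p_exp, _ = curve_fit(exponential, years, capacity, p0=[17, 0.25])
p_log, _ = curve_fit(logistic, years, capacity, p0=[2000, 0.25, 2020], maxfev=20000)

for t in (2015, 2020, 2030):
    print(t, round(float(exponential(t, *p_exp))), round(float(logistic(t, *p_log))))
```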
God, eventually the missionaries of noise will have to travel to the frozen tundra and the Arctic Ocean to prospect for the last pockets of natural sound to ruin with one of their machines. Wnt (talk) 16:13, 18 March 2012 (UTC)[reply]

Polyethylene

Is the dimer for polyethene butane? If not, what? Plasmic Physics (talk) 12:25, 14 March 2012 (UTC)[reply]

Butene? --Colapeninsula (talk) 12:57, 14 March 2012 (UTC)[reply]
Butane is the saturated hydrocarbon C4H10, and cannot be a dimer for polyethene (more commonly known as polyethylene), a saturated hydrocarbon H.(C2H4)n.H. Perhaps you meant "Is the dimer for butane polyethylene?". For the polyethylene with n=2, H.(C2H4)n.H reduces to C4H10, i.e. it IS butane. But for all n other than 2, the ratio of C to H changes, so the answer is still no. To form dimers, you need two identical molecules combined without discarding any atoms. Keit120.145.166.92 (talk) 13:12, 14 March 2012 (UTC)[reply]
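Keit's ratio argument is easy to check: H.(C2H4)n.H has the formula C(2n)H(4n+2), and only n = 2 reproduces butane's 4:10 carbon-to-hydrogen ratio. A trivial sketch:

```python
# Check which oligomer H.(C2H4)n.H matches butane (C4H10) in C:H ratio.
def oligomer(n):
    return 2 * n, 4 * n + 2          # (carbon atoms, hydrogen atoms)

for n in range(1, 6):
    c, h = oligomer(n)
    print(f"n={n}: C{c}H{h}  same C:H ratio as butane: {c * 10 == h * 4}")
# Only n=2 (C4H10, butane itself) matches; as n grows the ratio tends to 1:2.
```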

The only solution to that would be cyclobutane? Plasmic Physics (talk) 22:21, 14 March 2012 (UTC)[reply]

That is very likely not the answer either; the only solution that preserves the elemental ratio is a diradical, and polyethylene is not a diradical. Plasmic Physics (talk) 22:30, 14 March 2012 (UTC)[reply]

@Colapeninsula: Butene would require a middle hydrogen atom to migrate to the opposite end of the chain, not likely - the activation energy would be pretty high. Plasmic Physics (talk) 23:51, 14 March 2012 (UTC)[reply]

Dimerization to form butene is apparently easy if you have the right catalyst to solve the activation energy problem :) Googling (didn't even need to use specialized chemical-reaction databases) finds many literature examples over the past decade or so, giving various yields and relative amounts of the alkene isomers. The cyclic result is a pretty common example in undergrad courses for effects of orbital symmetry: is it face-to-face [2+2] like a Diels–Alder reaction, or crossed (Moebius would be allowed whereas D–A is Huckel forbidden/antiaromatic), or is it even a radical two-step process (activation energy?) or electronically-excited [2+2] (Huckel-allowed) in the presence of UV light? DMacks (talk) 19:29, 17 March 2012 (UTC)[reply]

OK, so both forms are allowed. Which one has the lower ground state? What if it was an icosameric polymer? Plasmic Physics (talk) 22:19, 17 March 2012 (UTC)[reply]

Dissuading bunnies from eating us out of house and home...

... literally. My daughter's two rabbits, when we let them roam the house, gnaw the woodwork, the furniture, our shoes... Is there some simple means by which we can prevent this? Something nontoxic, say, but unpleasant to their noses that we can spray things with?

Ta

Adambrowne666 (talk) 18:36, 14 March 2012 (UTC)[reply]

From some website: The first thing to do is buy your bunny something else to chew on. You can buy bunny safe toys from many online rabbit shops. An untreated wicker basket works well too. They also enjoy chewing on sea grass mats. To deter rabbits from chewing on the naughty things, try putting some double sided sticky tape on the area that is being chewed. Rabbits will not like their whiskers getting stuck on the tape. You can also try putting vinegar in the area too, as rabbits find the smell and taste very very offensive. Bitter substances tend not to deter rabbits as they enjoy eating bitter foods (ever tried eating endive? very bitter.)
Or google "rabbit tabasco" for other ideas .... 84.197.178.75 (talk) 19:59, 14 March 2012 (UTC)[reply]
Why would you let bunnies run free indoors ? Don't they crap all over the place ? Not very hygienic. Maybe put them in the tub and rinse their pellets down after an "outing". StuRat (talk) 22:26, 14 March 2012 (UTC)[reply]
Fricasseeing might help. Supposedly they taste just like chicken. ←Baseball Bugs What's up, Doc? carrots→ 23:38, 14 March 2012 (UTC)[reply]
Rabbits tend to go to the toilet in the same spot and they're pretty easy to litter box train. It's harder, but not impossible, to train some rabbits not to chew things, but we never managed to do it with our two dwarf bunnies. For that reason we just don't leave them in the house unsupervised. We do let them run around the house sometimes and they won't go to the toilet on the floor, but we did catch one of them once chewing on the fridge power cable, which was the final straw for giving them free rein of the house. Vespine (talk) 23:53, 14 March 2012 (UTC)[reply]
As a final point, I do remember reading that some bunny breeds are just more suitable as "house" pets than others. There are plenty of articles on the subject if you google "house rabbit". Vespine (talk) 23:57, 14 March 2012 (UTC)[reply]

Thanks, everyone - yeah, they crap everywhere; we're not masters of the household at all - I don't know how they get away with it: they drop dozens of scats all over the house, but if I do one poo in the living room, people look askance! Will try the doublesided tape and other measures; wish me luck!

Resolved


March 15

feeding plants carbon

What are some ways to feed plants carbon? Could I administer malic acid to CAM plants through the stomata? There's a product out there on the market that supposedly is an artificial carbon source for plants-- what are some possible mechanisms to "help out" plants with carbon fixation? (This is for ornamental plants, where the large-scale boost of organic material is desired.) I can't use ammonium (bi)carbonates because of the ammonium ion's toxicity to fish. Could I administer an organic acid + bicarbonate, or maybe boric acid? 74.65.209.218 (talk) 06:01, 15 March 2012 (UTC)[reply]

Sugar works pretty well. Plasmic Physics (talk) 07:09, 15 March 2012 (UTC)[reply]
Googling the topic of feeding plants sugar, I doubt this is a beneficial solution. It seems to encourage bacterial, rather than plant, growth. 74.65.209.218 (talk) 08:14, 15 March 2012 (UTC)[reply]
What makes you think your plant is deficient in carbon ? Doesn't it get enough from carbon dioxide in the air and/or organic molecules in the soil ? StuRat (talk) 08:17, 15 March 2012 (UTC)[reply]
You could always put an animal in with the plants as they produce carbon dioxide, maybe more fish? Or a hamster. SkyMachine (++) 08:29, 15 March 2012 (UTC)[reply]
I'm trying to speed up carbon fixation. Furthermore, these are aquatic plants in a fish tank, which seem to grow slowly. 74.65.209.218 (talk) 09:10, 15 March 2012 (UTC)[reply]
First, are they growing more slowly than is typical for their species ? Second, what makes you think that carbon is the limiting factor ? Perhaps something else is deficient, such as light. If you don't accurately determine the problem, any "solution" is likely to cause more harm than good. StuRat (talk) 09:59, 15 March 2012 (UTC)[reply]
Did your search just focus on sucrose, or did you include a variety of sugars? Plasmic Physics (talk) 08:32, 15 March 2012 (UTC)[reply]
My search included mentions of glucose. My search shows that glucose is actually an herbicide. 74.65.209.218 (talk) 09:11, 15 March 2012 (UTC)[reply]
I suppose if the sugar was colonized by yeast you would produce carbon dioxide. SkyMachine (++) 08:43, 15 March 2012 (UTC)[reply]
This doesn't help. I can't have too many microbes proliferating in the tank water. Furthermore, I am not sure if roots are meant to absorb carbon dioxide or even sugar. I am looking at more sophisticated ways of adding carbon. Does administering malic acid to CAM plant stomata speed up carbon fixation? 74.65.209.218 (talk) 09:10, 15 March 2012 (UTC)[reply]
You could always use compressed CO2 like you can get at home brew stores or soda stream canisters. Modify a switch to slowly release CO2 to bubble through the tank. SkyMachine (++) 09:29, 15 March 2012 (UTC)[reply]
That's going to produce some carbonic acid in the water and make it more acidic, which might not be good for the fish. StuRat (talk) 09:56, 15 March 2012 (UTC)[reply]
Seems to be a mix of factors; light, CO2, fertiliser.
  • plants need light to grow, but the more light they get, the more CO2 and trace elements they will need.
  • CO2 diffusion in water is much slower than in air. There can be a CO2 depleted layer of water around the plants. CO2 injection is one of the techniques used, with a CO2 tank, valves, regulators and controller, measuring the pH to adjust the CO2 injection. Seems to be a bit expensive.
  • Trace elements, especially iron it seems, may be lacking. Add some trace element mix for water plants.
Air bubblers, biofilters and plants will remove CO2 from the water. Fish add CO2. Yeast generators are a low-cost way of adding CO2.
Adding CO2, when the lighting is adequate, will increase the oxygen in the water due to more photosynthesis from the plants.
It's all a balancing act it seems, check out some forums like forum.aquatic-gardeners.org for more info. 84.197.178.75 (talk) 11:25, 15 March 2012 (UTC)[reply]
What if you combine malic acid with a buffering agent? Plasmic Physics (talk) 11:28, 15 March 2012 (UTC)[reply]


Note about carbon fixation: plants take up CO2 via photosynthesis. That's the ONLY way they take up carbon in significant amounts. Forget about trying to feed them carbon any other way. Do you want to boost carbon fixation because you want the plants to grow or because you want to reduce the amount of CO2 in the water? For the first you want to add CO2 and light and trace elements if needed. For reducing CO2 you would add light and more plants and again trace elements if needed. But from what I understand, faster growing plants by CO2 injection will result in more O2 in the water for the fish, and under 30 ppm, the CO2 does not hurt them. 84.197.178.75 (talk) 12:19, 15 March 2012 (UTC)[reply]


If this is for aquatic plants, they make commercial CO2 injectors specifically intended to introduce extra carbon dioxide into planted tanks. It's a whole category of products on the specialist sites (e.g. here). These get carbon dioxide from pressurized tanks, available either from a welding supply company or from paintball supply companies. You can also put together a DIY system with a homebrew reactor based on sugar and yeast (you don't put the sugar and yeast in the tank, you put it in a separate tank, and pipe the gas that comes off into the tank). (Search /diy co2 aquarium/ or /diy co2 planted tank/ on Google, and you'll get plenty of results, including many step-by-step instructions. Try also /co2 system for aquarium/). While adding the CO2 will depress the pH a little due to the carbonic acid formed, when the plants take in the carbon dioxide, they'll reverse that process, neutralizing the acidity. And the pH drop can be mitigated by making sure your tank has enough buffering capacity (usually referred to as "KH" in the test kits). If you want your plants to really take off once you start adding CO2, you may want to add some additional aquatic plant fertilizer. Try to avoid using regular plant fertilizer, as depending on formulation, it may produce algae blooms. You'll probably also want to invest in a better water chemistry test kit, as keeping acidity/buffering/nitrogen/phosphate/iron/etc. in balance in a planted tank maintained as such, especially with CO2 injection, is more important than for a tank maintained just for the fish. -- 71.217.13.130 (talk) 16:44, 15 March 2012 (UTC)[reply]
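For the pH/KH/CO2 balancing act mentioned above, hobbyists usually rely on a rule-of-thumb relation rather than measuring CO2 directly. The sketch below uses that common approximation; it assumes carbonate is the only buffer in the water, so treat the output as a rough estimate only.

```python
# Common hobbyist approximation: dissolved CO2 (mg/L) from pH and carbonate
# hardness KH (in degrees KH). Only valid if carbonate is the dominant buffer.
def co2_ppm(ph, kh_dkh):
    return 3.0 * kh_dkh * 10.0 ** (7.0 - ph)

print(co2_ppm(7.0, 4))   # ~12 mg/L: typical tank without injection
print(co2_ppm(6.6, 4))   # ~30 mg/L: a common target when injecting CO2
```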

I'm trying to find methods cheaper than CO2 injection. Carbon supplements already exist on the market. Specifically, my relative wants me to find an alternative to this product for him, one that he could mass produce. Would feeding plants 3-phosphoglycerate work, or would it just trigger algal blooms?

Also I know from reading the literature (I'm a chemistry student) that plants appear to fix the bicarbonate they absorb; presumably if they can absorb fulvic acids in organic fertiliser (in mulch for example) then they could absorb sugars or organic acids. Do the organic acids in fertilisers boost the growth of plants directly or do they simply improve the growth of symbiotic fungi and so forth? 74.65.209.218 (talk) 19:38, 16 March 2012 (UTC)[reply]

A quick web search (/Flourish Excel ingredients/) turns up posts claiming that the active ingredient of Flourish Excel is glutaraldehyde (or rather a polymerized version of glutaraldehyde - which is good, as glutaraldehyde by itself is volatile and highly toxic). An MSDS on the Seachem website [2] says it's specifically polycycloglutaracetal, which matches what's written at Glutaraldehyde#Algaecidal_activity. I've seen some forum posts discussing people using aqueous solutions of glutaraldehyde as a DIY replacement for Flourish Excel, but caveat emptor and all that. -- 71.217.13.130 (talk) 03:56, 17 March 2012 (UTC)[reply]

Bullet through the brain

I'm under the impression that shooting a bullet through the brain almost always causes instant death. Is this true, and if so, why? Phineas Gage had a huge tamping iron driven through his skull, yet he remained mostly unaffected. Lobotomies remove the entire prefrontal cortex, yet leaves the patient mostly functional. In literature, I routinely read about studies of what happens when this or that area of the brain is lesioned/damaged. Why would a bullet, which is physically small and unlikely to take out a major portion of any brain structure, be so likely to cause death after penetrating the brain? --140.180.5.239 (talk) 06:49, 15 March 2012 (UTC)[reply]

As long as the brain stem is intact, there is a possibility of survival. If you want to not be brain dead, then you'll have to miss a few more sections. In addition, a bullet doesn't always make a clean wound; sometimes (depending on specs) the bullet liquefies tissue around it. There is a youtube video somewhere of what can happen, although demonstrated on an apple. Plasmic Physics (talk) 07:07, 15 March 2012 (UTC)[reply]
Curiously, all of the good reviews on gunshot wounds to the brain happen to be in journals my library doesn't have a subscription to. But no, this is certainly not true. Without those reviews, I couldn't come up with many numbers. What I was able to glean from abstracts is that over 2000 American soldiers in Vietnam managed to make it to a hospital alive despite taking a bullet through the brain. As for why a bullet causes so much damage, it's fast and spinning. It doesn't simply poke a hole through the tissue in front of it; rather, a bullet effectively pulls and drags the tissue around it, potentially causing catastrophic trauma. See Zapruder film for a famous example of what that means. If the bullet stops in the brain, which is more likely if it's a hollow point bullet designed to slow down after hitting its target, the brain has to absorb all of that kinetic energy very quickly. Finally, getting shot in the head can cause severe bleeding and can easily send a person into respiratory arrest, none of which will typically happen in the controlled setting of a surgical room. Someguy1221 (talk) 07:09, 15 March 2012 (UTC)[reply]
I would guess that most deep penetrating brain injuries result in death... but there are the rare exceptions to that and bullets are no exception. The shooting of Gabrielle Giffords is a salient recent example. In few of these cases, whether that shooting or the case of Gage, or in lobotomies, is there no damage. In fact, the damage is often quite profound. What's remarkable is that the victim doesn't die immediately.
What makes a bullet different from many of the other kinds of head injuries, Gage's probably included, is the sheer velocity of a bullet. A low velocity bullet, say a .45 ACP caliber, moves at almost 1,000 feet per second (about 680 miles per hour, or 1,100 km/h). A bullet from a modern rifle (military or hunting) is about 3x that speed.
Take a look at hydrostatic shock and stopping power. Hydrostatic shock describes why "remote", i.e. not directly to the brain, bullet impacts can incapacitate almost instantly. You don't have to be a scientist to extrapolate those findings to what direct brain injuries do. Also look at terminal ballistics (not a great article, a lot of it looks like one person's production, but should give you some context). The short answer is that while a bullet is small, the shockwave it creates as it enters an object, particularly an object with features like tissue, creates temporary disruptions much larger than the projectile itself. I actually doubt there's too much tumbling in brain tissue, although I could be very wrong about that. But there are a lot of very morbid journal articles (the above articles reference some of them) that talk about how occasionally supposedly more "humane" bullets have counter-intuitive effects. (sidenote: the Hague Convention requires militaries to use full metal jacketed bullets, however there's some debate over the differences between hollow point and full metal jacketed rounds). The brain is particularly sensitive in this respect, which is why these injuries are usually fatal. Shadowjams (talk) 07:26, 15 March 2012 (UTC)[reply]
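The velocity figures above convert as follows; the bullet masses used in the energy comparison are assumed typical weights (230 grains for .45 ACP, 62 grains for a 5.56 mm rifle round), not numbers from the thread, and they illustrate why "about 3x that speed" means far more than 3x the energy.

```python
# Unit conversions and kinetic energy for the figures quoted above.
# Bullet masses are assumed typical values, not taken from the thread.
FT_TO_M = 0.3048
GRAIN_TO_KG = 6.479891e-5

def speed(v_fps):
    v = v_fps * FT_TO_M                      # m/s
    return v * 3.6, v / 0.44704              # (km/h, mph)

def energy_joules(mass_grains, v_fps):
    v = v_fps * FT_TO_M
    return 0.5 * mass_grains * GRAIN_TO_KG * v ** 2

print(speed(1000))                  # ~(1097 km/h, 682 mph): the .45 ACP figure
print(energy_joules(230, 1000))     # ~690 J for a 230-grain bullet at 1,000 ft/s
print(energy_joules(62, 3000))      # ~1700 J for a 62-grain rifle bullet at 3x the speed
```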
A few points:
1) A modern rifled weapon, unlike an ancient one, has a spiral groove inside it, designed to spin the bullet to keep it from tumbling. This reduces air resistance and makes it go faster, farther, and straighter. If it continues like this through the brain, it may cause less damage than a tumbling bullet.
2) A slower bullet may actually cause more damage, by ricocheting around in the brain, rather than just going in and out.
3) As mentioned above, there are hollow-point bullets and other types, designed to rip apart on impact, causing much more damage. Such bullets are often illegal.
4) If you survive the initial trauma of the bullet, then infection becomes a major concern. StuRat (talk) 07:37, 15 March 2012 (UTC)[reply]
Well you're wrong on pretty much every point StuRat. All modern handguns are by definition rifled (this is a pretty elementary point to anyone with a cursory familiarity with firearms)... smoothbore guns are generally considered shotguns or muskets (if black powder)... ever wonder why a handgun that shoots .410 shotgun shells is legal? answer is... because it has a rifled barrel. Btw, none of that has anything to do with my point... if you could get a musketball to do 1000 fps it'd do substantially more damage too. Again, "straight through" if it was slow that might be true, but the high velocity of a bullet has effects on tissue that are disproportionate. As for point 2, there's a huge debate over "energy delivered" and "stopping power" and "hydrostatic shock" and other similar concepts. Many modern militaries have shifted to high velocity, smaller rounds. I doubt there's much chance for "ricochet" inside the skull with most modern rounds. I've heard that mafia tale that a .22 was used for assassinations for this reason, but I get a strong suspicion that's urban legend. Your point 3 is again subject to the intense debate about the effectiveness of particular round types. Point 4, I doubt that's true with modern medicine. I think swelling is probably the greater risk. Shadowjams (talk) 09:07, 15 March 2012 (UTC)[reply]
I've modified my point 1 accordingly, but don't think you've made your case that my points 2-4 are wrong. On point 4, swelling may also be a major concern, but that doesn't mean that infection isn't. StuRat (talk) 09:51, 15 March 2012 (UTC)[reply]
A treatment of last resort for severe cerebral oedema (swelling of the brain) is a decompressive craniectomy, which is basically cutting a hole in the skull. Since the bullet has already done that, swelling may well not be that serious an issue. --Tango (talk) 21:01, 16 March 2012 (UTC)[reply]
I've noticed that the old method of immediately sealing any wound has been replaced by a newer method of leaving it open to allow it to drain, so this would help prevent swelling, but does pose a challenge as far as preventing infection. StuRat (talk) 23:25, 16 March 2012 (UTC)[reply]
This looks like another great opportunity to mention Mike the Headless Chicken on the science desk.--Shantavira|feed me 08:55, 15 March 2012 (UTC)[reply]
Or even Roland the Headless Thompson Gunner. --Jayron32 14:06, 15 March 2012 (UTC)[reply]
Or Carlos Rodriguez. --Itinerant1 (talk) 18:41, 15 March 2012 (UTC)[reply]
You may also be interested in this man, who lost 43% of his brain in the Falklands War and is still with us: Robert Lawrence (British Army officer). --TammyMoet (talk) 09:35, 15 March 2012 (UTC)[reply]
The amazing part is that he's not only with us, but he managed to lead an active life and even to get married after the injury. I'd have expected him to be a vegetable (or at least severely mentally disabled.) --Itinerant1 (talk) 02:30, 16 March 2012 (UTC)[reply]
"Am I serious about her ? I have half a mind to marry that girl !". :-) StuRat (talk) 23:27, 16 March 2012 (UTC) [reply]

Car radio

I asked this question a few years ago on another forum, but didn't get any replies I felt answered it. I used to own a car where the car radio would sometimes go quiet. A quick push on the front panel of the radio would restore the sound level. So far so straightforward. However, I noticed that driving under high-voltage power lines would also sometimes restore the sound level. Any ideas why this would happen? 86.134.43.228 (talk) 20:00, 15 March 2012 (UTC)[reply]

Metal contacts can oxidize, and such an oxide layer can be an insulator, or have semiconductor properties. Pushing the radio may shift the contacts a bit, breaking through the oxide layer. A high voltage can also break through a thin semiconductor layer, and once it does it causes avalanche breakdown: the electrons are accelerated and collide with atoms, which get ionized, creating a chain reaction. That's how zener diodes above 5.5 volts work, see avalanche diode and avalanche breakdown. Usually there's a hysteresis effect, meaning that the voltage at which the conduction stops will be lower than the one where it started. That could be an explanation, with the power lines inducing a higher voltage over the contacts, enough to break through. Also the micro-weld phenomenon seen with coherers could be involved. In general, it would be some thin insulating layer that can withstand the 12 volts over the contacts but breaks down at a higher potential. That's my best guess. 84.197.178.75 (talk) 21:25, 15 March 2012 (UTC)[reply]
The following explanations are much much more likely: The strength of radio waves often changes dramatically near and under high voltage power lines. Usually the signal strength falls under power lines but it also can increase. AM radios incorporate an automatic volume control system (AVC) (the more correct term is automatic gain control) so you don't notice the change as you tune from one station to another, move around, go under power lines etc etc. My guess is that there is a bad solder joint in an area affecting the AVC. When the radio passes under the power lines perhaps the change in signal causes a sufficient change in voltage in the AVC circuit to overcome the oxide layer in the faulty solder joint. FM radios often incorporate a mute circuit, as without it you get a full volume blast of noise when tuning between stations. Maybe the mute circuit is affected by a crook solder joint, making it mute at too high a signal strength, and the change in signal when passing under power lines is overcoming it. Keit124.178.61.156 (talk) 00:40, 16 March 2012 (UTC)[reply]


The question didn't indicate that it specifically affected weaker stations, and it's an unlikely defect for an AVC circuit imo, since these reduce the gain of strong signals, not boost a weak one. A faulty solder joint affecting the mute/squelch circuit is a possibility.
Fixing a bad connection on a PCB with a quick push would only work reliably if that put significant force on the PCB. Pushing the old-fashioned volume or tuner knob would be most effective (good way to damage it too). So yes, it can be caused by a bad solder joint on the board. But I'm thinking that in that case, it would often be triggered by touching the controls, changing volume or station. The way it's described, it didn't sound like something that would happen several times a day. And it's easily fixed. Makes me think it's likely a spring leaf type connector rather than a "force-less" contact. If there's any movement between the contacts due to vibrations, they will cause rapid fretting corrosion of the contacts. You get a buildup of oxide material because the oxide layer gets scratched, exposing fresh metal, increasing the layer thickness and accumulating metal and oxide particles. A "normal" oxide layer on metal conductors will withstand about 0.2 volts. Typical open-circuit voltage somewhere in a radio would be from >1 to several volts I think, so you need a big oxide layer. A loose connection with that much build-up would behave more like a typical coherer, with the high resistance state easily triggered by vibrations, more so than when the contact surface is under pressure. Vibration won't separate the contacts, it takes time to build up the oxide layer and a bit of random chance to get to the high resistance state. And it won't be very stable. Movement or RF noise could return it to the conducting state.
Power lines develop faults with age: the insulators crack or get covered with dirt, causing leakage currents that emit high-amplitude RF noise. And power companies only fix those faults if they have to; to quote "An Important Rule" for technicians resolving power-line RF noise given in an industry publication:
"Perhaps the most difficult hurdle to overcome in this process is to ignore those noises not affecting the customer's equipment. An important rule for efficient and economic RFI troubleshooting is to locate and repair only the source causing the complaint." (Transmission and Distribution World, sept 2004)
But I'm just speculating; anything is possible ;-) 84.197.178.75 (talk) 18:49, 16 March 2012 (UTC)[reply]
You are right - almost anything is possible. It is possible for some types of AVC circuits to go faulty so as to cut off an IF stage rather than leave it full on. Some signal still gets through due to stray capacitance. Not enough to keep the owner happy, but maybe enough under high-signal conditions to joggle the fault. I used to do car radio, stereo, and TV repair. Two things I learnt very solidly: (1) Intermittent faults are the sneakiest and trickiest things. Prodding in one place can make it come and go, but the dry joint is somewhere else. (2) If a customer says their set is faulty, you can (usually) assume it IS faulty, but don't rely on what they say about it. Non-technical people have funny theories and often leave out or confuse vital information. Like saying their TV is crook only on Channel 2, when in fact it is faulty on all channels but they only watch Channel 2. Keit120.145.40.231 (talk) 03:15, 17 March 2012 (UTC)[reply]

Excellent answers, thanks for the replies everyone 86.134.43.228 (talk) 20:10, 18 March 2012 (UTC)[reply]

Hydrogen scattering length density?

I was discussing SANS this morning with another grad student, and realized I don't actually know the answer to this question myself: Does anyone have a simple answer for why hydrogen has a negative scattering length density? The article says "neutrons deflected from hydrogen are 180° out of phase relative to those deflected by the other elements", but that's purely phenomenological. I know it's quantum mechanical in origin, but beyond that I don't have a really good grasp of why this is the case. It's slightly counterintuitive to me that something akin to a scattering cross-section would be negative. I've also looked over the article on neutron cross sections, but it in turn just references back to the scattering length density article. Any thoughts? (+)H3N-Protein\Chemist-CO2(-) 21:55, 15 March 2012 (UTC)[reply]

*bump* (+)H3N-Protein\Chemist-CO2(-) 14:22, 16 March 2012 (UTC)[reply]
??? (+)H3N-Protein\Chemist-CO2(-) 22:42, 18 March 2012 (UTC)[reply]

To what extent is a fermion's position part of its quantum state for the purposes of Pauli exclusion?

Recently this controversy regarding a Brian Cox (physicist) lecture was brought to my attention. Although it is only touched on briefly by the many people objecting to May's interpretation of the Pauli exclusion principle, it is generally agreed that position is part of an electron's quantum state. But to what extent is that so? For example, two electrons orbiting the same helium nucleus are forced into different spins because they are close enough together, and similar things cause Pauli exclusion in much larger molecular orbitals. But how far apart do two electrons need to be before they can otherwise both exist in the same quantum state? Npmay (talk) 22:07, 15 March 2012 (UTC)[reply]

To do this correctly, you need to solve the wave function for interacting electrons, which is very hard. (Why is it hard? Because the potential energy is not constant - much like any non-quantum n-body problem - only, also add the complexity of quantized states.) If you take your ordinary quantum mechanics textbook, they'll walk through the solutions for a single electron around a highly-ionized atomic nucleus; and usually, they'll assume the potential energy function for a stationary, electrostatic potential well. But if you have multiple moving charged particles, you can't do this; the math becomes quite difficult. If you'd actually like to work it out, I can recommend several good texts to walk you through the math - but let's be honest: physics students (who are very smart people) usually spend something like a full year working the basic mathematics that describes the quantum-mechanically correct electron orbit, during the course of a two or three semester advanced physics class, and still do not even solve for two electrons. So, the probability that we can summarize this quickly or easily is very low.
If you're looking for a one-line answer, though, let's phrase it this way: "The farther apart the electrons, the greater the probability that they are non-interacting." Quantized states notwithstanding, electron-electron interactions are modeled by a Coulomb potential, whose strength falls off as the inverse of distance. Nimur (talk) 23:00, 15 March 2012 (UTC)[reply]
So, the extent to which the electrons interact, which is proportional to the strength of the electromagnetic force in accordance with the inverse square law, determines whether they are in the same position for the purposes of being in the same quantum state? That would make some sense. It would also resolve the controversy in that distant electrons only have a tiny but nonzero probability of being subject to Pauli exclusion. Is that good enough to avoid the math details? Npmay (talk) 23:29, 15 March 2012 (UTC)[reply]
Electromagnetic interaction doesn't really have anything to do with it—in everything that I wrote below, it's irrelevant whether the fermions are electrons or neutrinos or (hypothetical) particles that don't interact at all. -- BenRG (talk) 23:59, 15 March 2012 (UTC)[reply]
The key point is that there's one wave function for the whole system, not one per particle. In a system of two spinless identical fermions confined to a line segment, you can think of the wave function as defined on a square whose corners are "both fermions at the far left", "fermion A at the far left and fermion B at the far right", "both fermions at the far right", and "fermion A at the far right and fermion B at the far left". (In fact it's not fair to give the particles labels since they're indistinguishable, but I can ignore that here, so I will.) The exclusion principle says that the wave function is zero at all points that correspond to the fermions being in the same place, which in this case is the diagonal line from "both fermions at the left" to "both fermions at the right". Since the wave function is continuous, it also approaches zero as you approach that diagonal, but there's no particular bound on how large it can be except exactly on the diagonal. The exclusion principle doesn't make any difference when the wave function is zero (or nearly zero) near the diagonal—in other words, when there's no (significant) chance that the fermions are near each other.
For spin ½ particles (like electrons) you can use four copies of the square, one for "both particles spin-up" and so on. The diagonals in the two squares where the particles have the same spin are zero, but the diagonals in the two squares where they have different spins don't have to be zero.
Regarding Cox's lecture, see WP:Reference desk/Archives/Science/2011 December 18#Pauli exclusion principle and speed of light. His words can be interpreted in various ways, but basically he was just wrong. I mostly agree with Sean Carroll's blog post, but even he seems to believe that every quantum object is spread out over the entire universe, an idea which I mocked in my last post to that old Ref Desk thread. -- BenRG (talk) 23:59, 15 March 2012 (UTC)[reply]
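As a toy illustration of the "one wave function on a square" picture above, here is a minimal numerical sketch, assuming two spinless fermions in a 1-D box; the orbitals, grid size, and names are my own choices for illustration:

```python
import numpy as np

# Two single-particle states for a particle in a 1-D box of length 1.
def phi(n, x, L=1.0):
    """n-th particle-in-a-box orbital."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

x = np.linspace(0.0, 1.0, 201)
x1, x2 = np.meshgrid(x, x)   # the "square" of configurations (x1, x2)

# Antisymmetrized two-fermion wave function: it changes sign when the
# particles are swapped, so it is forced to zero wherever x1 == x2.
psi = (phi(1, x1) * phi(2, x2) - phi(1, x2) * phi(2, x1)) / np.sqrt(2.0)

print(np.max(np.abs(np.diag(psi))))   # ~0: vanishes on the diagonal
print(np.max(np.abs(psi)))            # order 1 away from the diagonal
```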
How do you decide which particles to include in the "complete system" wave-function? I own some beachfront electrons in Tucson and I feel that their interactions should be included in the wave-function for your two-particle system. Clearly, there must be some sanity in deciding when a particle is "far enough away" that it no longer matters. If this criterion isn't based on the magnitude of the potential-energy function of the interaction (i.e., the electrostatic potential, for an electron-electron interaction), then what else would it be? Nimur (talk) 00:11, 16 March 2012 (UTC)[reply]
True, you need some criterion to separate system from environment. But electromagnetism has nothing to do with the Pauli exclusion principle, so I don't think it's relevant here. I had two particles in my system because one wouldn't be enough and three would be an unnecessary complication. The two particles are isolated from all outside influence because it's my thought-experiment and I say they are. Electromagnetism is relevant if you're specifically talking about atomic orbitals, but that's complicated enough (as you said) that I couldn't have given anything like the answer I did. -- BenRG (talk) 00:54, 16 March 2012 (UTC)[reply]
That doesn't make sense to me. Two adjacent helium atoms have between them two pairs of electrons, each pair of which is in the same quantum state except for its position. If their electromagnetic interaction determines whether they are in the same quantum position as well when they are near, then what determines whether they are in the same quantum position as well when they are further away? Npmay (talk) 01:11, 16 March 2012 (UTC)[reply]
Now you're getting into some of the really messy Schrödinger's cat territory, and you've basically asked the vital question: the two electrons, two protons, and two neutrons in a single helium atom represent a single "system" which can be analyzed by a single "wave function" which describes, among other things, the relationship between the two electrons around that atom. This requires quantum mechanics to do accurately. Once you introduce the idea of "two adjacent helium atoms" you really need to define "adjacent". If the two atoms interact meaningfully, then what you have is essentially a helium molecule of some sort, which is dealt with quantum-mechanically by molecular orbital theory, and the math is identical in spirit to the math used to calculate the orbitals around a single helium atom, except that it is more complex, as you have 4 electrons and 2 nuclei to deal with. If you're dealing with two atoms sitting in a box together, occasionally colliding, well, now you're into that fuzzy "Schrödinger's cat" area, where QM has a real problem describing the behavior of particles interacting classically. Which is not to say that it cannot be, or is not, done. It's just that, at some level, the quantum mechanical solution to a problem and the classical mechanics solution converge, so there's no need to go through the exhaustive QM mathematics, which is almost impossible, and instead you can just use the Newtonian math to solve it. Two helium atoms bouncing around a box can basically be described in Newtonian terms and get the same result as using QM terms, so there's no need to do the messy bit... --Jayron32 13:20, 16 March 2012 (UTC)[reply]
Thanks for taking the time for that clear explanation. If the Pauli exclusion principle is what makes two helium atoms bounce off each other instead of pass through unaffected, then perhaps the thermodynamic gas compression information inherent in Boyle's law explains how close is close enough to be more in the same system than not. Npmay (talk) 23:05, 16 March 2012 (UTC)[reply]
Real gas#Models and Equation of state are certainly complicated and indeterminate enough to fit the bill. Npmay (talk) 11:10, 17 March 2012 (UTC)[reply]

March 16

Meteorology question

Reading through the Dodge City, Kansas National Weather Service forecast discussion today I came upon something that somewhat confuses me (not something that happens often being a meteorology student). It says (I apologize in advance for the all caps, but that's what NWS products use) "FRIDAY EVENING COULD BRING MORE WIDELY SCATTERED CONVECTION HOWEVER AS THE LEADING EDGE OF A LEFT FRONT QUADRANT JET MAY PRODUCE A THERMALLY INDIRECT VERTICAL CIRCULATION NEAR THE OKLAHOMA LINE IN THE EVENING."[3] The part that confuses me is the part about "the leading edge of a left front quadrant jet may produce a thermally indirect vertical circulation", as this is not a concept I have come across before. I also seem to have seen something related to this on the evening TV weather forecast here (2:30 into the video). First, what does the forecast discussion part mean? Second, what is the meteorology behind it (i.e. how does the part that's confusing me cause the convection mentioned in the first part)? Thanks in advance, Ks0stm (TCGE) 00:09, 16 March 2012 (UTC)[reply]

I am not sure, but I think "jet" here refers to a front from the jet stream mixed in vertically from downward convection. Npmay (talk) 01:15, 16 March 2012 (UTC)[reply]
They appear to have dropped the word streak. See here and here and here. CambridgeBayWeather (talk) 17:17, 16 March 2012 (UTC)[reply]

Can a black hole also be, or contain, a neutron star?

If so, would that be ascertainable? Aside from a certain mass range, what else if anything might give evidence for it? Thanks, Rich Peterson198.189.194.129 (talk) 00:33, 16 March 2012 (UTC)[reply]

Sort of. Many black holes would have been neutron stars if they were less massive. You can not ascertain anything about the contents of a black hole directly, but you can infer quite a bit about its mass and former composition from the remnants of its formation. As for the matter in a black hole which was there upon its formation, most of it is in a frame of reference where it is a very hot and compressed quark-gluon plasma I believe, but I'm not sure, and nobody really knows what the physical state of a singularity is. Everything that falls into the black hole (even a moment) after its formation is, in its own frame of reference, trapped in a state of being continually stretched and heated. Npmay (talk) 01:07, 16 March 2012 (UTC)[reply]
I remember reading that large black holes could exist without being very dense. It was in a popular science magazine (not Popular Science). 198.189.194.129 (talk) 01:10, 16 March 2012 (UTC)[reply]
Supermassive black holes under a string-theoretical interpretation can be less dense than ordinary matter. There is a discussion of this in Fuzzball_(string_theory)#Physical_characteristics. Npmay (talk) 01:13, 16 March 2012 (UTC)[reply]
Neutron stars often spin fast, could that give the black hole containing it an angular momentum that we could observe? Or could "Hawking radiation" be affected by the nature of the stuff inside? Thanks, Richard Peterson198.189.194.129 (talk) 01:21, 16 March 2012 (UTC)[reply]
Yes, the collapse of a spinning neutron star would produce a rotating black hole, and we could in theory, if we got close enough, measure its rotation by frame dragging or the Penrose process. Smurrayinchester 10:09, 16 March 2012 (UTC)[reply]
Just to clarify, a neutron star cannot be in a black hole and still be considered a neutron star. Once it collapses or falls past the event horizon, the material will inevitably fall into the singularity within finite time. Hawking radiation is not affected by the composition of the black hole. Goodbye Galaxy (talk) 18:28, 16 March 2012 (UTC)[reply]

As for the density, the claim that black holes aren't very dense is based on a measure of average density, with the region contained by the event horizon considered the volume. Not only can we not see past the event horizon, we can't sense what's beyond it through any means. That also means the gravity field of the black hole beyond the event horizon is the same no matter the internal distribution of mass. As for whether you'd see evidence in the Hawking radiation, I have no idea, but physicists can't seem to agree on how you'd "read" the radiation anyway. Someguy1221 (talk) 01:26, 16 March 2012 (UTC)[reply]

Could cold dark matter have accumulated in orange dwarf stars?

Thanks again. Richard Peterson198.189.194.129 (talk) 03:16, 16 March 2012 (UTC)[reply]

I don't have an answer for you, but why do you specifically pick that one kind of star? Someguy1221 (talk) 03:21, 16 March 2012 (UTC)[reply]
If your dark matter does not interact with normal matter or itself, then any that falls into a star should just come out the other side and not stop. So it would be difficult to accumulate, as the dark matter would have to lose momentum to stay in the star. Graeme Bartlett (talk) 04:56, 16 March 2012 (UTC)[reply]
Dark matter is still under the influence of gravitation, so why wouldn't it accumulate in a gravitational well? SkyMachine (++) 08:28, 16 March 2012 (UTC)[reply]
As Graeme said, it needs to lose momentum and energy in order to accumulate. Ordinary matter does that via radiation or collisions. Dark matter particles do not radiate and they do not (or at best weakly) interact with each other, so that is not possible. Dark matter requires complicated collective processes that redistribute the energy and allow it to accumulate, so-called violent relaxation (that redirect is a bit useless...). And that (as far as we know) only works on larger scales, say small galaxies and up. --Wrongfilter (talk) 09:04, 16 March 2012 (UTC)[reply]
(ec)Because there's nothing to stop it when it gets there. The most you'd get is dark matter orbiting the well. Now, these are "weakly interacting", rather than "non-interacting", so they will inevitably collide with something given infinite time. Someguy1221 (talk) 09:06, 16 March 2012 (UTC)[reply]
Is this the same situation with regard to black holes? Would the dark matter only orbit around or pass by rather than being trapped? SkyMachine (++) 09:22, 16 March 2012 (UTC)[reply]
No. If a dark matter particle should pass the event horizon of the black hole, it is trapped, just like everything else. Someguy1221 (talk) 09:39, 16 March 2012 (UTC)[reply]
I asked about orange stars because I've read they can be very old, and have had time to accumulate the stuff. I was thinking time to accumulate is an important factor, because once it's there, it won't readily leave, even in a nova? Perhaps an even better sort of object for me to inquire about would be a very ancient white dwarf, which could be just as old as an orange star, but would have stronger gravity...It does seem to me it could be orbiting, but orbiting far inside the star, for a long time, then, if and after any interactions, probably drop to a lower orbit inside the star?--Rich Peterson198.189.194.129 (talk) 17:41, 16 March 2012 (UTC)[reply]
What's the evidence that dark matter doesn't interact with itself? Wnt (talk) 23:54, 16 March 2012 (UTC)[reply]
It could, it very well could. But in the simplest models that allow dark matter to hardly ever interact with normal matter, it also hardly ever interacts with itself. Someguy1221 (talk) 00:41, 17 March 2012 (UTC)[reply]
No one has mentioned it, but the assumption that weakly interacting massive particles might accumulate in stars has been the basis for a variety of experiments. For example, if they accumulate significantly in the sun, and if they can self-annihilate, then one signature could be an excess of very high energy neutrinos coming from the sun. Super-Kamiokande and similar neutrino telescopes have looked for such signals, though so far we don't yet have a definitive detection. The possibility of such dark matter accumulating in stars has been studied in a great deal of detail, though what one expects depends on the properties that are assumed for the unseen dark matter. Dragons flight (talk) 02:39, 17 March 2012 (UTC)[reply]
I am skeptical that dark matter is weakly interacting massive particles at all, because there seems to be no actual evidence for their existence, there is no theory of supermassive black hole formation which does not involve aggregation of smaller primordial black holes over time, none of the gravitational microlensing or wide binary star orbit studies have ruled out massive compact halo object dark matter more than a few hundred stellar masses on average, and a relatively small fluctuation in the rate of spacetime expansion between the inflationary epoch and nucleosynthesis would allow for the additional baryons necessary. Furthermore, all particle dark matter theories are unable to explain the cuspy halo problem and the dwarf galaxy problem. Moreover, dozens of intermediate mass black holes have been confirmed in the past couple years, up from two which were known prior. This is a minority view not held by those who stand to gain research grants from the construction of particle dark matter detectors, but I predict in a few years the black holes will prevail over particles. There is a full account of these issues at Talk:Dark matter#Draft table. And in answer to the original question, if an intermediate mass black hole collided or entered a close enough orbit with a dwarf star, it would siphon its matter, producing x-rays in the accretion disk over a time period depending on how direct the collision happened to be. Npmay (talk) 04:51, 17 March 2012 (UTC)[reply]

Is ac supply used for driver circuits?

usually driver circuits are fed with dc ,but my project is designed using ac ,could any one say why ac is implemented? — Preceding unsigned comment added by Ishusri (talkcontribs) 07:26, 16 March 2012 (UTC)[reply]

This question cannot be answered as insufficient information has been given. Driver of what? What kind of driver? What do you mean by "fed" - power? signal? What is the project about? Keit120.145.44.170 (talk) 10:08, 16 March 2012 (UTC)[reply]
My guess is this is some sort of hobbyist project and the OP is thinking of a power supply module (probably a constant current one although perhaps a constant voltage one) functioning as an LED driver or similar. And the OP is saying many such commercial or hobbyist modules are designed for DC input (which from my limited experience is often true, barring ones designed for mains voltage). And the OP wants to know if you can design one suitable for AC input. There's of course no reason you can't design a module that can take AC input, and some modules are in fact suitable for AC input (as I said, mains voltage ones are an obvious example). Just adding some sort of rectification to a DC design is probably the simplest option. You will of course need to consider whether your design needs smoothing etc. However if you're working with mains voltage or other AC input above extra-low voltage, you should make sure you know what you are doing for safety reasons, which you very likely don't if you didn't think of rectification. Nil Einne (talk) 12:41, 16 March 2012 (UTC)[reply]
I agree. But the OP could be on about lots of other things - for example he/she might be a student doing some sort of mechatronics course, and his/her class has had a lecture on stepper motor drivers, which normally get powered from DC. Sometimes lecturers throw in a project or assignment question about using a standard stepper motor driver integrated circuit to control a multi-phase AC motor instead of a stepper motor. And the answer is usually yes it can, if you connect it up right and program it right - but to do it you need real understanding, not just the ability to regurgitate the textbook & copy datasheet circuits. Homework in other words - if so, show you made an effort first. The OP's English is inconsistent - did he mean "could any one say[explain] how ac is implemented?" - or did he mean "could any one say why DC is normally implemented?" Keit60.230.199.158 (talk) 15:42, 16 March 2012 (UTC)[reply]
The reason I felt a school or university project was unlikely is that it would seem surprising for them to be assigned such a project with so little knowledge of basic electronics as to not think of the possibility of rectification. But you're right, another, perhaps more likely, possibility is that it wasn't that they didn't think of rectification, but rather that they want to know why their project uses AC. (I somehow misread their last sentence as a question of whether it's possible to use AC, but rereading it, it sounds more like a question of why AC is used.) Nil Einne (talk) 17:34, 17 March 2012 (UTC)[reply]
Please don't advise connecting a home project to the mains to a user who may well not understand what they are doing. SpinningSpark 16:43, 17 March 2012 (UTC)[reply]
Actually I clearly stated they should not do so if they don't understand what they are doing, which, as I also said, they almost certainly don't if they needed to ask about it. (I also acknowledged the existence of common products which do use mains voltage as examples, since they are from my experience the most common examples by far. I didn't see this as a problem since, among other things, it doesn't particularly sound like the OP is interested in using commercial modules, and if they were, since the OP hadn't already found them, I thought it unlikely my acknowledging their existence would prompt them to.) But do remember it's easily possible they are dealing with an AC voltage which is generally considered safe (below 24 V), so there's no reason not to answer the general question (although as I noted above, I may have misinterpreted the question anyway) about the possibility of using AC voltage just because one of the more likely AC voltages is mains voltage. Particularly if a clear-cut warning is provided that they should not deal with any dangerous voltages without knowing what they are doing. (If the OP had suggested they were dealing with mains voltage or other dangerous voltages then it may have been a good idea not to answer even the general question, but I think it's difficult to make that case here.) Nil Einne (talk) 17:41, 17 March 2012 (UTC)[reply]
I think it is beyond time the OP came back and clarified what he wants. Keit58.170.182.237 (talk) 04:13, 18 March 2012 (UTC)[reply]

Milk

There were some creamy deposits on the inside of my milk carton, that left solid clumps in my milk. The milk I bought was whole pasteurised and organic. The milk smelt ok and tasted ok but I was sick, what could these deposits have been? — Preceding unsigned comment added by 92.8.72.150 (talk) 09:49, 16 March 2012 (UTC)[reply]

That would be milk fat. Most likely the milk was frozen, which undoes the homogenization process which previously mixed the cream into the milk. The cream naturally "rises to the top", hence that expression, but freezing tends to make the cream stick to the outside of the container.
Separated milk is not dangerous, and this is how people had their milk for most of human history. However, the milk may taste too thin without the fat mixed in, since you're used to it being homogenized. I assume by "I was sick" you mean you found the substance disgusting, not that you literally were sick.
As far as preventing this, is it possible it froze in your refrigerator ? If so, you may need to move it to a different part of the refrigerator or turn the temp up a bit. If it was frozen some time before you bought it, then you might want to buy a different brand or from a different store. Another option is to buy skim milk (nonfat milk), which doesn't contain enough fat to clump up. Of course, if you're not used to it, that stuff tastes like water. StuRat (talk) 10:04, 16 March 2012 (UTC)[reply]
I'm not sure the OP is saying the milk made them sick. I think they're saying that they were already sick, and so their ability to smell or taste might have been hindered (as often happens when one has a cold, for example). --Mr.98 (talk) 13:33, 16 March 2012 (UTC)[reply]
I interpreted "The milk smelt ok and tasted ok but I was sick…" to mean that the milk made the person sick, or caused some kind of sickness. I think some clarification is in order, as Mr.98's understanding of that wording makes sense too. Bus stop (talk) 13:45, 16 March 2012 (UTC)[reply]
The IP is from the UK so they could mean that the milk made them vomit. See here. CambridgeBayWeather (talk) 17:11, 16 March 2012 (UTC)[reply]

Now, I thought that "but I was sick" meant that the op was mentally ill and lacked the mental capacity to discern if the milk was good or not. I have a good idea : maybe you could ASK for clarification. — Preceding unsigned comment added by 165.212.189.187 (talk) 17:31, 16 March 2012 (UTC)[reply]

google satellite

Where do the 1930 satellite images on Google Earth come from? The satellite article says the first satellite was in 1957 109.162.115.155 (talk) 18:35, 16 March 2012 (UTC)[reply]

Which images? Juliancolton (talk) 18:44, 16 March 2012 (UTC)[reply]
They're aerial not satellite. Google Earth already uses lots of modern USGS aerial photos, so it's never just been satellite imagery only. 87.113.82.247 (talk) 18:50, 16 March 2012 (UTC)[reply]
(yes, despite them labelling the overhead imagery button "satellite"). 87.113.82.247 (talk) 18:52, 16 March 2012 (UTC)[reply]

quick chemistry question

I have the mass (g) and volume (mL) of something and I need to find M (molarity) and mol...I can't figure this out so do you know if I'm missing something? — Preceding unsigned comment added by 142.132.6.24 (talk) 19:18, 16 March 2012 (UTC)[reply]

Try molecular weight or molar mass. 74.65.209.218 (talk) 19:47, 16 March 2012 (UTC)[reply]

OP here. How about just using grams and mL to get moles? Is there a way to do that? — Preceding unsigned comment added by 142.132.70.14 (talk) 01:11, 17 March 2012 (UTC)[reply]

You need to know the molar mass. A mole of iron is a lot heavier than a mole of hydrogen. Someguy1221 (talk) 02:00, 17 March 2012 (UTC)[reply]


What is "something"? The information you give is not enough. So if somebody told you, here is something that weighs X and has volume Y, he hasn't given you enough info. If he said, here's 120ml solution of 10g Na3O2H7C14X11P4Be2 in an unknown liquid, then you can calculate molarity. 84.197.178.75 (talk) 14:33, 17 March 2012 (UTC)[reply]
Or if it's a gas at known pressure and temperature, then the mass and volume are sufficient. 22.4 liters = 1 mole at STP 84.197.178.75 (talk) 14:38, 17 March 2012 (UTC)[reply]
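To put numbers on the two cases above, here is a minimal sketch; the 10 g / 120 mL figures echo the example given earlier in this thread, and the choice of NaCl as the solute is purely an assumption for illustration:

```python
# Solution case: mass and volume alone are not enough; you also need the molar mass.
mass_g = 10.0        # hypothetical solute mass
volume_mL = 120.0    # hypothetical solution volume
molar_mass = 58.44   # g/mol, assuming the solute is NaCl

moles = mass_g / molar_mass               # amount of substance, mol
molarity = moles / (volume_mL / 1000.0)   # mol per litre of solution
print(f"{moles:.3f} mol, {molarity:.2f} M")   # 0.171 mol, 1.43 M

# Gas case: n = PV/RT, so at STP (1 atm, 273.15 K) about 22.4 L is one mole.
R = 0.082057   # L·atm/(mol·K)
print(22.414 / (R * 273.15))   # ≈ 1.0 mol
```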

need a non-phosphate buffer system that yields a pH of near-neutral (7 +/- 0.4)

I can't use phosphate buffers for organic acids because they encourage algal blooms. What system is optimal? 74.65.209.218 (talk) 19:31, 16 March 2012 (UTC)[reply]

What exactly is the application of this? It's pretty easy to just search for lists of different buffers. For example, you can read a list of some buffers used in microscopy here [4]. What's best really depends on the application though. Buddy431 (talk) 21:27, 16 March 2012 (UTC)[reply]
He's trying to help his aquarium plants grow without harming his fish: Wikipedia:Reference_desk/Science#feeding_plants_carbon. StuRat (talk) 22:08, 16 March 2012 (UTC)[reply]
Will 200 mM sodium acetate/methanol work? Npmay (talk) 23:08, 16 March 2012 (UTC)[reply]
Is that OK for fish ? StuRat (talk) 23:20, 16 March 2012 (UTC)[reply]
Oh, no, it is not! Sorry I missed that this was for an aquarium. Methanol and sodium acetate are likely to kill and season fish and plants, respectively. Npmay (talk) 05:02, 17 March 2012 (UTC)[reply]
You really can't pick random laboratory buffering systems and expect them to work in an aquarium, at least not on a stable, long-term basis. If you're looking to buffer an aquarium, you really only have two choices, and it's not really a "choice", as which one you use is determined by the type of tank you're keeping. The first option is bicarbonate/carbonate buffering, which basically amounts to throwing limestone chips into the tank (the term for aquarium buffering capacity, KH, derives from the German for "carbonate hardness"). The equilibria are a little more complex than an intro chem titration, due to the multiple species, multiple pKas and precipitation effects, so even though the pKas might not line up, the buffering tends to work out, especially in a CO2-injected tank. The one drawback is that dissolving limestone increases your GH (overall hardness), which doesn't work so well for soft water tanks (note that soft water is not the same as softened water - don't use water that's been through a water softener for aquaria, due to the salt content). If you want to maintain a soft water tank, you need to use humic acid to do your buffering, or as the aquarium wonks call it, "blackwater" (because it tints the water). You can either make it yourself by extracting peat moss, or you can buy "blackwater extract" from well-stocked aquarium supply stores (e.g. [5]). From your call for a pH of 7.0, it looks like you might be trying to maintain a soft water tank - be sure to test the GH of your source water if you go down that path. (Note that commercial companies also sell buffering agents which contain secret ingredients which may not be either of the two above.)
By the way, although they're nice blokes and all, Wikipedia & the RefDesk probably isn't the best place to get your aquarium maintenance information. There are gobs of sites on the web about how to maintain a tank (especially a planted tank), including many knowledgeable specialty forums who would be happy to answer your questions. My first stop suggestion, though, is the Aquaria FAQs at the Krib [6], as well as the assembled usenet posts there (e.g. general plant info, CO2 and water hardness, carbonate buffering). The posts may be a bit old, but they still contain valid information. I'm not really hooked into the current planted tank web forums, but doing a Google search for /planted aquarium forum/ gave at least six options on the front page alone, so it's likely you'll be able to find a friendly and knowledgeable community to help you out. (If one is populated by jerks, feel free to move on to a different one.) -- 71.217.13.130 (talk) 03:37, 17 March 2012 (UTC)[reply]
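For anyone curious how carbonate hardness and dissolved CO2 trade off against pH in a carbonate-buffered tank, here is a rough Henderson-Hasselbalch sketch; the pKa1 value is the commonly quoted ~6.35 for the CO2/bicarbonate pair at 25 °C, and the concentrations are made-up illustrations, not dosing advice:

```python
import math

PKA1 = 6.35   # apparent pKa1 of the CO2(aq)/HCO3- pair at ~25 °C (assumed)

def buffer_ph(hco3_mmol_per_l, co2_mmol_per_l):
    """pH = pKa1 + log10([HCO3-]/[CO2]) for the carbonic acid system."""
    return PKA1 + math.log10(hco3_mmol_per_l / co2_mmol_per_l)

print(round(buffer_ph(2.0, 0.5), 2))   # ~6.95: modest KH with injected CO2
print(round(buffer_ph(2.0, 0.1), 2))   # ~7.65: same KH, less CO2, pH drifts up
```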

March 17

Meaning of ground state chemistry notation

I have a paper which apparently uses the notation 3P to identify oxygen atoms in the ground state, as distinct from 1D and 1S for oxygen atoms in an excited state. The wikipedia page for oxygen says that the (presumably) ground state oxygen electron configuration is 1s22s22p4, as do other web sites (the superscripts are the number of electrons in each mode), such as http://periodictable.com/Elements/008/data.html. How does the notation 3P identify the ground state, or, how does it relate to the notation given in the websites? Wickwack124.182.39.88 (talk) 04:34, 17 March 2012 (UTC)[reply]

That's the term symbol which is used in addition to the electron configuration to indicate the total angular momentum in the particular configuration. The superscript number is the value 2S+1, where "S" is the sum of all ms values for all electrons. Thus, for an oxygen atom in the ground state, you have 1s2 (a +1/2 and a -1/2 spin), 2s2 (a +1/2 and a -1/2 spin) and 2p4 (3x +1/2 and only one -1/2 spin). That gives S=1 (all spin values cancel except in the 2p orbitals, where one +1/2 cancels a -1/2, but there are 2 +1/2 spins left over). So that makes the superscript 2S+1=3. The big letter P is the value "L" in the term symbol, where "L" is the sum of all "ml" values. For s orbitals, ml=0, and for p orbitals ml=+1, 0, -1 for each p orbital. So for oxygen, for all 8 electrons, you have L = 0+0 (1s) + 0+0 (2s) + 1+1+0-1 (2p) = 1, so L = 1, which is P. (Basically, the capital letters are the sums of the individual lowercase values in the term symbol.) The rules for constructing a term symbol for the ground state of an atom are described by Hund's rules. Excited states have electrons in different sets of quantum numbers, so have different term symbols. It is possible for two atoms to have the same notional electron configuration and have different term symbols; for example there are multiple ways that 2p4 could be organized into the three degenerate p orbitals, and only those with the term symbol 3P are considered the "ground state". Arrangements that give different term symbols are considered excited states. --Jayron32 05:19, 17 March 2012 (UTC)[reply]
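A small sketch of that bookkeeping, assuming the Hund's-rule filling described above; only the four 2p electrons are listed, since the filled 1s and 2s shells contribute nothing to either sum:

```python
from fractions import Fraction

# Hund's-rule filling of the 2p subshell for ground-state oxygen (1s2 2s2 2p4):
# one spin-up electron per orbital, then the fourth electron pairs up in ml = +1.
electrons = [(+1, Fraction(1, 2)),
             (0, Fraction(1, 2)),
             (-1, Fraction(1, 2)),
             (+1, Fraction(-1, 2))]   # (ml, ms) pairs

S = sum(ms for _, ms in electrons)   # total spin -> 1
L = sum(ml for ml, _ in electrons)   # total orbital angular momentum -> 1
letters = "SPDFGHIK"
print(f"2S+1 = {2 * S + 1}, L = {L}  ->  term symbol {2 * S + 1}{letters[L]}")
# prints: 2S+1 = 3, L = 1  ->  term symbol 3P
```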

Are the concepts of closed space and sliders real concepts?

So I am a big fan of The Melancholy of Haruhi Suzumiya, which is a science-fiction series. In that anime, there are aliens, time travelers and ESPers. Obviously, aliens are an established concept in science, as is time travel, and ESP is being researched by some people. However, in that series, there is mention of so-called "closed space", where ESPers can travel to, and "sliders", who apparently can switch between dimensions. I think the "closed space" concept is made up by Tanigawa Nagaru, but what about sliders? Have there ever been scientific theories or conspiracies about their existence? Asking this in the Science RefDesk instead of the Entertainment one since I am more interested in Haruhi Suzumiya's scientific basis. Narutolovehinata5 tccsdnew 10:04, 17 March 2012 (UTC)[reply]

Essentially all serious scientific proposals involving spacelike dimensions beyond the three dimensions of everyday life contemplate that such dimensions would be subatomic in size, so no person would ever be directly aware of them. They are also usually closed in the sense that the two-dimensional surface along a very long pipe is closed along the circumference of the pipe's cross sections but not along its length. The reason for those attributes is usually to accommodate the unification of the physical forces, which is motivated by the (possibly merely coincidental) fact that all physical forces appear to be very similar at high energies. However, the prospect of multiple timelike dimensions is sort of an open question which would allow for all kinds of interesting physics. In general though, it is unlikely but there are a number of possibilities around the basic objections. Npmay (talk) 10:35, 17 March 2012 (UTC)[reply]
That's not the answer to my question. My question is, are the terms "closed space" and "sliders" made-up or not, or are there actual scientific theories about them that call them as such. Narutolovehinata5 tccsdnew 11:00, 17 March 2012 (UTC)[reply]
Closed space is an actual mathematical attribute of spatial dimensions which has scientific theories about it calling it such; it is usually called a closed manifold. Sliders are entirely fictional concepts which are certainly not possible without dimensional attributes that are almost never considered seriously in science. Npmay (talk) 11:20, 17 March 2012 (UTC)[reply]
Upon review of the fictional literature cited in the original question, I find the Yuki Nagato character most believable, although the most persuasive evidence of extraterrestrial life on Earth is invariably dismissed by serious scientists - not, in my opinion, for good reasons.
Furthermore, I would say the "closed space" concept is probably less similar to a closed manifold and more similar to a macroscopic brane intersection between multiverses, which is, well, let's just call it fringe science. When branes as scientists theorize them collide, they do things like cause big bangs more often than they open dimensional portals from which one can fight monsters with psionic powers. Npmay (talk) 12:06, 17 March 2012 (UTC)[reply]
I'm tempted to think that "sliders" is a reference to Sliders rather than any real physics. Wnt (talk) 20:03, 17 March 2012 (UTC)[reply]

how do bridge makers model stresses

In general it seems easy enough to imagine things that support compression or pull, and so there must be software where you can just combine rods and suspension elements at any angles you want, specify their attributes, and see if the whole thing collapses or how it acts. So what is it? --80.99.254.208 (talk) 11:33, 17 March 2012 (UTC)[reply]

Physical modeling with finite element analysis CAD software is usually very accurate, but is almost always checked in practice with physical scale models for new structures of nontrivial complexity. Npmay (talk) 11:53, 17 March 2012 (UTC)[reply]
You seem like you know a bit about this subject. Could you explain it? --80.99.254.208 (talk) 12:10, 17 March 2012 (UTC)[reply]
Have a look at that article and this video. Npmay (talk) 12:39, 17 March 2012 (UTC)[reply]
Not quite professional grade, but I recommend anyone interested in learning the basics try out the bridge builder series of games (some are free or free trial). It lets you visualize the stresses, and you will very quickly get better at building bridges :) SemanticMantis (talk) 14:41, 17 March 2012 (UTC)[reply]
I was about to say the same thing, online there's "Bridge Thing" and "Cargo Bridge". Also, sites about building bridges from toothpicks give good info, and there's the bridge design software West Point Bridge Designer, free to download. 84.197.178.75 (talk) 14:49, 17 March 2012 (UTC)[reply]
The basics of bridge building have been known for centuries, but some considerations, like wind-loading, resonance, and metal fatigue, have only been fully understood in the last few decades. See Tacoma Narrows Bridge (1940) for a case where the first two issues caused a collapse. StuRat (talk) 21:10, 17 March 2012 (UTC)[reply]
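As a toy example of the member-force bookkeeping that FEA packages and the bridge-builder games automate, here is a hand-calculation sketch with made-up numbers (not a substitute for real finite element analysis): hang a weight W from the apex of two identical bars sloping down at angle theta to pinned supports; joint equilibrium at the apex gives W / (2 sin theta) of compression in each bar.

```python
import math

def apex_member_force(w_newton, theta_deg):
    """Compression in each bar of a symmetric two-bar truss carrying W at the apex."""
    return w_newton / (2.0 * math.sin(math.radians(theta_deg)))

for theta in (60, 30, 10, 5):
    print(f"{theta:>2} deg -> {apex_member_force(1000.0, theta):7.1f} N per bar")
# Shallower bars carry much more force for the same load: the kind of
# relationship the stress-visualization tools let you see at a glance.
```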

Software to draw schematic diagrams?

Hey, is there any software that can be used to draw schematic diagrams of machines or any device for that matter? And I don't mean CAD software; that's used for engineering drawings only. What I mean is something that can be used to draw stuff like this. See, I'm reduced to uploading my horrible drawings to Wikipedia. Thanks! Lynch7 16:53, 17 March 2012 (UTC)[reply]

If you're just looking for a free package to make diagrams for Wikipedia, then Inkscape is good and it is in Wikipedia's preferred SVG format. SpinningSpark 17:27, 17 March 2012 (UTC)[reply]
If Inkscape has too high a learning curve, LibreOffice Draw might do. Npmay (talk) 21:01, 17 March 2012 (UTC)[reply]
You wouldn't want to blow $5000 and many hours of learning on a professional CAD program like AutoCAD, but purchasing a consumer-grade CAD product like DesignCAD ($200 or so) is well worth it for the ease with which you can make very nice drawings with very little learning effort. With a few hours' practice it becomes quicker than hand sketching. I find it is very nice for doing custom graphs with special axes too. For example I used it to graph viscosity of mineral oil versus temperature - this requires a [log of (log + K)] scale, not possible in Excel. Keit58.170.182.237 (talk) 04:24, 18 March 2012 (UTC)[reply]

crypto rands automatically good for monte carlo?

Is any random number source that's good for crypto automatically good for a Monte Carlo simulation? (i.e. the Monte Carlo converges on whatever you would see in the wild with those natural conditions/percentages/whatever, rather than ever converging on some fluke of the RNG.)

To illustrate what I'm talking about, int rand() % x is not a good way to generate Monte Carlo random integers between 0 inclusive and x exclusive, because lower ones tend to come up more often than higher ones. Likewise it would be a terrible choice for a crypto-secure random integer.

So, my question is about the general case: if something is good enough for cryptography, is it good enough to Monte Carlo nature with? (i.e. all I have to worry about is finding a crypto library of random numbers, and I can go on my merry way assuming each random number is as good as forking to a random universe to continue, with equal distribution in each random universe...) Forgive me if this belongs on the comp sci desk. --80.99.254.208 (talk) 18:55, 17 March 2012 (UTC)[reply]
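To make the modulo-bias point concrete, here is a small sketch assuming a hypothetical 8-bit raw generator; the rejection-sampling helper is my own illustration, not a library function:

```python
import collections
import random

# 256 is not a multiple of 6, so mapping raw 8-bit values with '%' favours
# the low results: residues 0-3 occur 43 times each, 4 and 5 only 42 times.
counts = collections.Counter(v % 6 for v in range(256))
print(counts)

# Rejection sampling removes the bias: discard raw values from the "leftover"
# band at the top before taking the modulus.
def unbiased_randrange(n, bits=8, rng=random.getrandbits):
    limit = (2 ** bits // n) * n      # largest multiple of n that fits
    while True:
        v = rng(bits)
        if v < limit:
            return v % n

print(unbiased_randrange(6))          # an unbiased value in 0..5
```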

Well, cryptography is pretty notorious for not being as good as claimed - it's hard for the user to tell, after all. It's not my field, but I should think that if your Monte Carlo acts aberrantly, you're already well on the way to cracking the encryption. So I'd ask ... would it be newsworthy that the encryption that you're using had just been cracked by accident? Wnt (talk) 20:01, 17 March 2012 (UTC)[reply]


The needs of cryptography, and those of Monte-Carlo methods, are somewhat different. In principle, for example, a bitstream PRNG used for cryptography in certain ways (say, to get initialization vectors) could tolerate some slight bias (say, 50.1% 1s and 49.9% 0s), because bias per se is unlikely to help an attacker much. However, for Monte-Carlo applications, you don't want bias.
That said, as far as I know, cryptographic PRNGs are all unbiased.
The main downside of using a cryptographic PRNG for Monte Carlo is likely to be speed. Monte-Carlo methods typically need random values in huge torrents; getting those from RC4 or something may be a bit slow.
Then there's a different issue: Are you sure you really want pseudo-random numbers at all? Many Monte-Carlo type algos work better when you give them self-avoiding sequences (let's see if subrandom comes up blue), because you don't waste time exploring points of the space that have already been explored. There's a good practical treatment of some of these in Numerical Recipes in C. --Trovatore (talk) 20:12, 17 March 2012 (UTC)[reply]
Well, Numerical Recipes is blue. DMacks (talk) 08:03, 18 March 2012 (UTC)[reply]
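A rough sketch of the two options discussed above, comparing a CSPRNG-backed source (Python's random.SystemRandom, which draws from the OS entropy pool) against a simple van der Corput/Halton "subrandom" sequence, both feeding the same toy Monte Carlo estimate of pi; this is an illustration only, not a benchmark:

```python
import math
import random

sysrand = random.SystemRandom()   # cryptographic-quality randomness from the OS

def pi_estimate(points):
    """Estimate pi from (x, y) points in the unit square."""
    inside = total = 0
    for x, y in points:
        total += 1
        inside += x * x + y * y <= 1.0
    return 4.0 * inside / total

def van_der_corput(i, base):
    """i-th term of the van der Corput low-discrepancy sequence in the given base."""
    f, r = 1.0, 0.0
    while i:
        f /= base
        r += f * (i % base)
        i //= base
    return r

n = 100_000
crypto_pts = ((sysrand.random(), sysrand.random()) for _ in range(n))
subrandom_pts = ((van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, n + 1))

print("CSPRNG   :", pi_estimate(crypto_pts))
print("subrandom:", pi_estimate(subrandom_pts))
print("math.pi  :", math.pi)
```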

Why is the urinalysis called also: "Routine and Microscopy"?

I would like to understand it, thank you. — Preceding unsigned comment added by 176.13.203.54 (talk) 20:21, 17 March 2012 (UTC)[reply]

You don't say where you saw this, but it was likely on a lab slip where someone places a check-mark by the tests they want to order. One might want to order a routine urinalysis, or an examination of the urine under a microscope, or both. If you want both, you'll check off "routine and microscopy". The reason these are separated is that a routine urinalysis consists of chemical tests, and can be done very simply with a urinalysis dipstick, and it can be completely automated, making that part of the test cheaper than a microscopic examination, which requires spinning the urine in a centrifuge and examination by an actual person with a microscope. Our urinalysis article is a little misleading, because "Routine and Microscopy" is not a synonym for urinalysis, but rather an abbreviation of "routine urinalysis and microscopy". A routine urinalysis would include chemical tests for pH, specific gravity, glucose, ketones, protein, nitrite, blood (red blood cells (RBCs)), and leukocyte esterase (from white blood cells (WBCs)) - bilirubin or urobilinogen or other tests may also sometimes be included. But because there can be false-positive chemical tests for blood and WBCs, these would ordinarily be confirmed by microscopic examination, where one can actually see RBCs or WBCs (or other abnormal urine contents) if present. - Nunh-huh 20:51, 17 March 2012 (UTC)[reply]
Thank you for help. 176.13.203.54 (talk) 21:42, 17 March 2012 (UTC)[reply]
You're more than welcome. By the way, the combination of a routine urinalysis with a microscopic examination of the urine is sometimes called a "complete urinalysis". - Nunh-huh 21:58, 17 March 2012 (UTC)[reply]
Thank you. By the way, you're a good explainer and I hope to run into you here a lot down the road. 176.13.203.54 (talk) 23:06, 17 March 2012 (UTC)[reply]
Yes, thanks for your analysis of urinalysis. StuRat (talk) 02:28, 18 March 2012 (UTC) [reply]

Why is the Urine culture called also Diaslide?

I don't understand it. thank you for help. 176.13.203.54 (talk) 20:49, 17 March 2012 (UTC)[reply]

Diaslide is a trademark for a specific brand of urine culture test. Just like "Bic" is the trademark for a specific brand of pen. - Nunh-huh 20:53, 17 March 2012 (UTC)[reply]
Youtube has a video showing it in use.[7]--Aspro (talk) 20:56, 17 March 2012 (UTC)[reply]
And I imagine it's short for "DIAgnostic microscope SLIDE culture". StuRat (talk) 20:59, 17 March 2012 (UTC)[reply]

The relativity of time.

Could time be the big and small force? — Preceding unsigned comment added by 192.148.117.95 (talk) 23:11, 17 March 2012 (UTC)[reply]

vanessa1234394!

tahliabrehm — Preceding unsigned comment added by Tahlia1234 (talkcontribs) 23:22, 17 March 2012 (UTC)[reply]

Dunno. Give up! What’s the answer?--Aspro (talk) 00:05, 18 March 2012 (UTC)[reply]
If you mean the strong nuclear force and weak nuclear force, then no, absolutely not, not any more than time can be the tomato in your salad. StuRat (talk) 01:59, 18 March 2012 (UTC)[reply]
According to Gary Larson, Einstein proved that time is actually money. ←Baseball Bugs What's up, Doc? carrots04:20, 18 March 2012 (UTC)[reply]

March 18

Element

What chemical compound is produced when carbon monofluoride and thulium are combined, if any? 71.146.8.88 (talk) 01:45, 18 March 2012 (UTC)[reply]

I assume that a redox reaction will take place at high temperatures, to form carbon and trifluoridothulium. Plasmic Physics (talk) 02:13, 18 March 2012 (UTC)[reply]
Thulium can also form a carbide. At low temperatures, graphene fluoride or your carbon monofluoride is quite unreactive. Graeme Bartlett (talk) 10:07, 18 March 2012 (UTC)[reply]
Thanks. 71.146.8.88 (talk) 18:43, 18 March 2012 (UTC)[reply]

Physics. Adding vectors.

I am faced with this problem: we have a 200 N (newton) force heading west. We have two 200 N forces heading off at 30 degrees, one in a NE direction, one in a SE direction. The net force is equal to 200 newtons in an east direction. We need to find the missing force which will make the net force equal to 200 N E. I have tried using trig to find the magnitude of the two 200 N forces but still can't find the answer. Paradoxical 0^2 (talk) 02:11, 18 March 2012 (UTC)[reply]

First, ignore all those vectors you started with. Since you know the resultant vector, they don't matter. Now, break down the desired 200N vector in the NE direction to the components headed North and East, then subtract the resultant 200N vector headed East from the East component, to get the final amount you want headed East (it will actually be negative, meaning West). Then combine this West component with the North component to get the missing vector. If you show your work, we will check it for you. If my explanation doesn't make sense, please let me know. StuRat (talk) 02:22, 18 March 2012 (UTC)[reply]
(edit conflict) Since the NE and SE forces are symmetrical about the east-west axis, their north-south components cancel out. Use trigonometry to calculate the east-west components of the NE and SE forces (treat their given values as the hypotenuses of the component triangles). Once you have them, it is simply a case of adding them to or subtracting them from the west force, and calculating the difference with the net force. Remember to assign the correct sign (positive/negative) to each direction. Plasmic Physics (talk) 02:24, 18 March 2012 (UTC)[reply]
Still don't understand; here is all the working out I have:
cos(45)*200N=105.064398N
(I believe this is the right way to get the adj side here)
so we have to have 105.064398+(-200N)+x=200N
this x≈294.936 N
this is clearly not right.
By the way, the answer given is 53.6 N east.

Paradoxical 0^2 (talk) 04:04, 18 March 2012 (UTC)[reply]

Your calculator is set to radians, not degrees. 200×cos(45°) = 141.421356. However, I don't get the given answer. Perhaps I don't understand the problem. In your original post, does "The net force is equal to 200 newtons in an east direction" mean the net force of all 3 original vectors, or only the two vectors at 30 degree angles ? StuRat (talk) 04:11, 18 March 2012 (UTC)[reply]
That is the net force with the vector that we are trying to find; I believe that is called the resultant vector. Paradoxical 0^2 (talk) 04:28, 18 March 2012 (UTC)[reply]
That is, with all the vectors added up we are left with a 200 N final vector. Paradoxical 0^2 (talk) 04:30, 18 March 2012 (UTC)[reply]
Including the vector we are trying to find. Paradoxical 0^2 (talk) 04:32, 18 March 2012 (UTC)[reply]
Also, are those vectors 30 degrees above and below straight East ? If so, where does the 45 degree angle come in ? StuRat (talk) 04:11, 18 March 2012 (UTC)[reply]
whoops, but still the answer aint right Paradoxical 0^2 (talk) 04:37, 18 March 2012 (UTC)[reply]
http://content.jacplus.com.au/secure/ebooks/07314/0731408209/14/image_n/nt0030-y.gif here is the question — Preceding unsigned comment added by Paradoxical 0^2 (talkcontribs) 04:43, 18 March 2012 (UTC)[reply]
That requires an account to view it. Can you do a screen grab and upload it here ? I can help you if you don't know how. StuRat (talk) 04:46, 18 March 2012 (UTC)[reply]
How do you do that? — Preceding unsigned comment added by Paradoxical 0^2 (talkcontribs) 04:49, 18 March 2012 (UTC)[reply]
There are 4 steps:
1) Do the screen grab.
2) Save the image as a JPG (or PNG, GIF, PDF, or TIF).
3) Upload to Wikipedia.
4) Display it here.
Do you need help with all 4 steps ? StuRat (talk) 04:59, 18 March 2012 (UTC)[reply]
[8] Paradoxical 0^2 (talk) 05:00, 18 March 2012 (UTC)[reply]
I made it a link. It's rather blurry and there's no text. Can you zoom in on it before you do the screen grab and include any text ? (The text might be for a whole group of problems, not just this one.) StuRat (talk) 05:05, 18 March 2012 (UTC)[reply]

I understand the problem without needing to see a copy of what's printed in the book. The description of the problem in the first paragraph above seems to me like a perfectly reasonable description of the problem, and indeed has an answer of 53.6 N east. But your math is making three mistakes, only two of which have already been identified above:

  • The problem says the angle involved is 30 degrees, but in your first equation, you're using 45 degrees.
  • You need to calculate the cosine using degrees on your calculator, not radians.
  • There are two vectors each contributing a force of 200 cos 30 N in the easterly direction, but your second equation only includes the easterly contribution of one of those two vectors. Red Act (talk) 05:44, 18 March 2012 (UTC)[reply]
Here's the answer:
Let North and East be positive.
X = 200 N (East) + 200 N (West) - 200 N (North-East) x cos(30°) - 200 N (South-East) x cos(30°)
X = 53.6 N (East)

Plasmic Physics (talk) 05:57, 18 March 2012 (UTC)[reply]

I don't understand how you can add 200 East and 200 West. Shouldn't they cancel each other out ? StuRat (talk) 06:01, 18 March 2012 (UTC)[reply]
No, because they have the opposite sign, and subtracting a negative results in addition. It is a result of rearranging the original equation. Plasmic Physics (talk) 06:04, 18 March 2012 (UTC)[reply]
Thanks very much guys! sorry for being such a pain. Paradoxical 0^2 (talk) 06:10, 18 March 2012 (UTC)[reply]
Here it is with a force diagram. For some reason they omitted vector D, the one we are trying to find, from your diagram:
                      B
                  * 
              *  
A <-------+------->D
              *
                  * 
                      C
Resultant = east component of B + east component of C + D - A
200 = 200(cos(30°)) + 200(cos(30°)) + D - 200
400 = 200(cos(30°)) + 200(cos(30°)) + D
400 = 400(cos(30°)) + D
400 = 400(0.866) + D
400 = 346.4 + D
400 - 346.4 = D
53.6 = D. StuRat (talk) 06:16, 18 March 2012 (UTC)[reply]
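For anyone who wants to check that working numerically, a quick sketch (the angle convention below is my own labelling, measured anticlockwise from due east):

```python
import math

forces = [(200.0, 180.0),   # A: 200 N west
          (200.0,  30.0),   # B: 200 N, 30 deg north of east
          (200.0, -30.0)]   # C: 200 N, 30 deg south of east

east  = sum(m * math.cos(math.radians(a)) for m, a in forces)
north = sum(m * math.sin(math.radians(a)) for m, a in forces)
D = 200.0 - east   # extra east force needed for a 200 N east resultant
print(round(east, 1), round(north, 1), round(D, 1))   # 146.4 0.0 53.6
```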
Resolved

Always being able to feel your heart beating is not normal?

In primary school when taught about how to measure heart rate, I noted that I could actually always feel my heart beating. So, I never have to bother trying to find my pulse. I did realize at the time that this means that most people cannot (always) feel their own heart beating, but I didn't make too much out of this. No doctor has ever found a problem with my heart, and I am extremely fit. However, I've never heard of people who don't have heart disease who can always feel their heart beating. Count Iblis (talk) 02:50, 18 March 2012 (UTC)[reply]

I can't feel it, but can hear it when my blood pressure is high. StuRat (talk) 02:57, 18 March 2012 (UTC)[reply]
For several years I was also able to hear it in one ear when holding my head in a certain position, but that was after I suffered an injury to that ear. Just being able to feel it is actually very handy: I don't need heart rate monitors when exercising, I just need to look at my watch while running! Count Iblis (talk) 03:12, 18 March 2012 (UTC)[reply]
Have you tested this to check whether your reckoning of your heart rate matches the reading of a pulse meter? SkyMachine (++) 06:45, 18 March 2012 (UTC)[reply]
It really amazes me that anyone would not feel it. I mean, I feel each heartbeat at the heart, with a sensation that varies according to diastolic blood pressure; I feel it at the inside of my elbows, with a sensation that changes when systolic blood pressure goes over 140; I typically feel it in my toes, legs, hands, abdominal organs (hard to tell which is which though), and many other places. I would say that even the cerebral arteries have a noticeable pulse sensation, especially if my blood pressure gets up over 150 for some reason. And there's also a small degree of control over all the arteries also, though it's hard to tell how much is manipulation of blood pressure in general. (At one point, trying to experiment with or "improve" cerebral blood flow, I actually managed to give myself a painless but quite disturbing ocular migraine; I've generally discontinued that research... I actually suspect that the "emotion" of shame/embarrassment is mostly a sensation of vasoconstriction somewhere near Broca's area, and ensuing results) But an exception to all this is that for some reason the aorta seems devoid of sensation. Wnt (talk) 15:14, 18 March 2012 (UTC)[reply]

Moving your hand in front of your closed eyes in a pitch-dark room

I noted that I can actually "see" something moving in my field of vision when I do this, even though it should be impossible to see anything at all. Is this caused by the brain always taking into account the way body parts move in front of the field of vision when processing visual information? I can imagine that the algorithm used by the brain is not perfect and if there is nothing to see at all, you could be seeing an artifact of the algorithm... Count Iblis (talk) 03:29, 18 March 2012 (UTC)[reply]

I doubt if your room is really pitch black. Likely some starlight, etc., filters in through the windows, even on moonless nights. This isn't enough light for you to actually see, but is enough for what you describe. To have a room really be completely dark would require no windows, and seals around all the doors. StuRat (talk) 04:42, 18 March 2012 (UTC)[reply]
I'm going to agree with StuRat, considering that as few as nine photons will cause your brain to register having seen something. Someguy1221 (talk) 04:54, 18 March 2012 (UTC)[reply]
Nine photons, and/or a really, really shallow wave of light. --134.255.75.71 (talk) 07:29, 18 March 2012 (UTC)[reply]
To test this you should repeat the same scenario but get someone else to move their hands in front of your closed eyes and see if you can still determine when they are moving as opposed to being at rest. SkyMachine (++) 07:43, 18 March 2012 (UTC)[reply]
And you may still be able to detect this without vision, by feeling the draft, hearing the movement or sensing the changed electrostatic environment. Even blocking the ambient noise can be detected. Graeme Bartlett (talk) 09:28, 18 March 2012 (UTC)[reply]
You can control for these variables if you put some thought into it. The question is: how is the mind perceiving the movement? Is it outside sensory data that is being received, or is the mind constructing the sense of a hand moving because it already knows the hand is moving? The mind might just be predicting where the hand might be in the absence of confirming sensory data from the eyes. SkyMachine (++) 09:39, 18 March 2012 (UTC)[reply]
Not only that, it's ridiculously easy to control for! Inside the dark house, find a door with a window in it (making the rooms on either side "pitch black" by covering any other sources of light into the rooms) and have someone on the other side either do the movement quietly or not move at all when you ask them loudly through the door. Your eye can be right against the window; they can stand back but move their hand close to the window when they hear your muffled request to do so (or they can do nothing, to see if you report a false positive). Tell them what you think, and within short order they'll tell you if you're getting it right. --80.99.254.208 (talk) 12:56, 18 March 2012 (UTC)[reply]
I've noticed too that I can sort of vaguely sense my hand in a totally dark room, and I can assure you it was totally dark. The human eye does have an extremely weak response to infrared, so I tried an experiment with a black-painted, roughly forearm-sized piece of metal warmed to 35 °C. Results were inconclusive, though - I could feel the warmth on my skin if I brought it within 200-300 mm of my face. There is another possibility: most people know that the eye has two kinds of light sensor - the rods and cones. Rods and cones produce the conscious perception of light. But there is a third kind of sensor in the eye - modified ganglion cells. Mammals have these for synchronising the body's circadian rhythm to the night-day cycle. Not a lot is known about these sensors. Maybe they respond to infrared as well as visible light, and maybe there is some crosstalk into conscious perception, as adjacent ganglion cells process the output of the rods and cones. Ratbone124.178.47.224 (talk) 13:19, 18 March 2012 (UTC)[reply]


To really get all scientific, their response should be random, and not influenced by whether you got it right before. They can memorize 10 responses beforehand, for example, after flipping coins to see what they'll do. Or they can do more than 15 if they have something they can feel (a braille sheet?) taped to the wall that tells them their response. The weaker your ability to see them, the more tests you need to establish statistical confidence. --80.99.254.208 (talk) 12:54, 18 March 2012 (UTC)[reply]
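To put a number on that "statistical confidence", a one-sided binomial test against pure guessing (50% chance per trial, if the helper moves or stays still on a coin flip) is all that is needed. A minimal sketch, with the trial counts invented purely for illustration:

  from math import comb

  def p_at_least(correct, trials, chance=0.5):
      # Probability of scoring at least `correct` out of `trials` by guessing alone.
      return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
                 for k in range(correct, trials + 1))

  print(p_at_least(9, 10))    # ~0.011: 9/10 correct is hard to explain by guessing
  print(p_at_least(16, 25))   # ~0.115: a weaker hit rate is still consistent with chance

So a strong effect shows up after a handful of randomized trials, while a weak one (say 16 correct calls out of 25) needs many more trials before guessing can be ruled out, exactly as noted above.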

Also, I should mention that a hidden assumption is that the person on the other side is very good at being stealthy, as in a game. I think as hunters (hide and seek, etc.) we are very good at being stealthy and have fun doing so in a game context. So all they would do is quietly and slowly move their hand in front of the window, or not, and otherwise not 'give away their position'. --80.99.254.208 (talk) 13:12, 18 March 2012 (UTC)[reply]

  • To interpret this I should ask a related question: how common is it for people to start dreaming when in the dark? I used to work in a darkroom where it was inappropriate to use a safe light for what could be over an hour, and it was absolutely routine for me to see/imagine all sorts of things despite concentrating on the task at hand. For me these were 1) blue and occasionally yellow wave patterns; 2) rarely, a photorealistic "rehash" of recent unfamiliar visual stimuli - for example, if playing cards for the first time in a year, the patterns would then show up during the same day; 3) faint, ghostly, indistinct, mostly blue dream images of all varieties. This third type is relevant here - when moving a hand in front of my face, I might see something like that, though then again, the hand might not stay where it was supposed to be, or it might segue into monsters and maidens. Is that the way it works for other people? Wnt (talk) 15:25, 18 March 2012 (UTC)[reply]

Tiger cognition

Hi I would like to learn more about tiger cognition. I read the article on elephant cognition, and I want to know the same type of information, but for tigers (brain size/mass/structure, use of tools, language, self-awareness, etc). The Tiger article itself doesn't address these either. Any help is appreciated. Thanks.--99.179.20.157 (talk) 03:31, 18 March 2012 (UTC)[reply]

One interesting measure of their intelligence is that they only attack from the back, when their prey is the most helpless. They do this by recognizing the face of the animal they are attacking (not each particular face, just that it is a face). We know this because people in tiger-infested areas of India have learned to wear face-masks on the backs of their heads, which protects them from attacks. So apparently tigers aren't smart enough to distinguish a real face from a cheap plastic mask. StuRat (talk) 04:39, 18 March 2012 (UTC)[reply]
Er, citation needed? AndyTheGrump (talk) 04:59, 18 March 2012 (UTC)[reply]
Not sure why you don't believe me, but sure, here you go: [9]. StuRat (talk) 05:09, 18 March 2012 (UTC)[reply]
They're not fooled. They just see how silly the mask looks, and feel really, really sad for you. Someguy1221 (talk) 05:21, 18 March 2012 (UTC)[reply]
Or maybe they can't attack because they are doubled over laughing. StuRat (talk) 05:43, 18 March 2012 (UTC) [reply]
Alright, having actually looked into this now, there is precious little research done on tiger cognition specifically. You may have far more luck asking about cat cognition in general. If I had more time at the moment I'd actually look into that and give you a real answer. Someguy1221 (talk) 05:47, 18 March 2012 (UTC)[reply]
OK, that sounds reasonable. Cats have been given the mirror test, and are smart enough to figure out that their reflection is an illusion, and ignore it thereafter. However, they aren't smart enough to figure out that they are seeing an image of themselves, so they apparently lack a concept of "self", which also implies an inability to empathize with others/see things from another POV. StuRat (talk) 05:55, 18 March 2012 (UTC)[reply]
I'm not sure that a solitary predator has any need to 'empathize with others'. See Ludwig Wittgenstein, and his comments about lions speaking (though come to think of it, lions, as social animals, may have more of a need for empathy than tigers...) AndyTheGrump (talk) 06:23, 18 March 2012 (UTC)[reply]
It could be helpful for a tiger. For example, knowing that the fellow villagers of the person it is about to kill will get very upset and come after it with guns might be a good thing to understand. Or, on a more basic level, knowing that a prey animal will run towards its herd when threatened might enable the tiger to figure out that it needs to position itself between the prey and the herd before it pounces. If you're unable to think about what others are thinking, this type of reasoning becomes more difficult. StuRat (talk) 06:34, 18 March 2012 (UTC)[reply]

which branch of statistics

Which branch of statistics (or other science, but I assume with the tools of modern statistics) deals with which religion is most likely to be true?

(Obviously 'we can't know for sure' 100.000...%, a hundred sigmas, as an almighty Taco could have fabricated all the evidence, implicating some nonexistent God) — Preceding unsigned comment added by 149.200.102.16 (talk) 14:57, 18 March 2012 (UTC)[reply]

I'm not sure what the origin of the story is (maybe they know on Humanities) but there's an old story about the Emperor of a faraway land, who had never been seen by the people. [10] Desiring to make a statue of him, they needed to find the length of his nose, so a vote was held, and the results were averaged to get a figure. (Hmmm, maybe it should have been a median... [11] ;) That's all statistics can do for religion. One can find deep philosophical truths, but striking an average of the responses has nothing to do with it. Wnt (talk) 16:08, 18 March 2012 (UTC)[reply]
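(Purely as an illustrative aside on the mean-versus-median quip, with invented numbers: a single wild guess drags the mean far off while barely moving the median, though of course neither figure tells you anything about the actual nose, which is the point of the story.)

  from statistics import mean, median

  # Invented guesses for the emperor's nose length, in centimetres.
  guesses = [5.0, 5.5, 6.0, 6.2, 6.5, 7.0, 7.5, 95.0]   # one absurd outlier

  print(mean(guesses))     # 17.3375 -- pulled far off by the outlier
  print(median(guesses))   # 6.35    -- barely affected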
Religion is a matter of faith, and is entirely unconnected with how likely it is to be true. Science cannot help with questions of religion; if it could, they would no longer be questions of faith and no longer a matter of religion. SpinningSpark 17:23, 18 March 2012 (UTC)[reply]

why does the chain make a 'triangle' shape between front (pedal) gear, back (wheel) gear, and this arm, on high-quality bikes?

On a high-quality bike like this one: http://www.bike-trend.com/wp-content/uploads/2009/06/gt-force-carbon-pro-2009.jpg why does the chain not run directly between the front and rear gears, both above and below? Instead, below, an arm makes it form a triangle (in case it isn't clear what I'm saying, I drew the triangle here in red: http://imgur.com/cFFIH.jpg -- the green is what I would expect instead of the red triangle. To get the green, you would just take the path the top half of the chain makes going from the back gear to the front pedal and reflect it horizontally so it looks the same below; result: http://imgur.com/xf083 ). So... why the triangle? --79.122.101.84 (talk) 18:36, 18 March 2012 (UTC)[reply]

biology

Can we call a gynandromorph a "partial intersexed creature"? Thanks. 109.64.44.20 (talk) 18:42, 18 March 2012 (UTC)[reply]