Wikipedia:Reference desk/Science: Difference between revisions

From Wikipedia, the free encyclopedia


:::A problem here would be that there are no long term stable orbits around the Moon. All probes orbiting the Moon need to have very frequent course corrections to prevent them from crashing into the Moon due to very strong tidal perturbations from the Earth and the Sun. [[User:Count Iblis|Count Iblis]] ([[User talk:Count Iblis|talk]]) 21:46, 21 January 2013 (UTC)
::::For a [[lunar space elevator]], you need to put the counterweight at the [[Lagrangian point|L1 point]], which is sufficiently stable not to need much station keeping. --[[User:Tango|Tango]] ([[User talk:Tango|talk]]) 23:10, 21 January 2013 (UTC)

:See [[mass driver]]. --[[User:Tango|Tango]] ([[User talk:Tango|talk]]) 23:10, 21 January 2013 (UTC)


== Microphone physically attached on a string ==

Revision as of 23:10, 21 January 2013

Welcome to the science section of the Wikipedia reference desk.

Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

January 17

Nickel replacing magnesium in geoporphyrin

The geoporphyrins - see Abelsonite - have formed from chlorophyll in fossils of plants. However, the Mg ion is replaced by a Ni or V ion. What is the mechanism for Mg replacement with Ni or V? Namely, did this somehow happen after fossilization (how?), or did the original plants have Ni or V ions in their photopigments? Or maybe only the few porphyrin molecules that had a Ni or V ion at the center of the ring survived until the present, and the ones with an Mg (or Fe) ion did not? Our article Porphyrin#Organic geochemistry seems to suggest that Ni- and V-containing porphyrins in oil and oil-shale came from bacteria and not from plants. Indeed, corphin has a Ni ion at the center of its porphyrin ring. On the other hand, this paper suggests that plant chlorophyll is the origin of the porphyrin in oil, and does not explain how Mg was replaced by Ni. So what's the answer? Thanks in advance, --Dr Dima (talk) 00:43, 17 January 2013 (UTC)[reply]

It could be post-depositional ion replacement: if Ni or V ions have radii compatible with the porphyrin ring, then given the right chemical conditions, they will replace the magnesium. This would explain the rarity of the mineral, which would require the coincidence of porphyrin and soluble Ni or V salts. Plasmic Physics (talk) 09:00, 17 January 2013 (UTC)[reply]

Atomic/Ionic Radii

Why is it that the atomic radius of fluorine (64pm) is less than the ionic radius of the sodium ion (98pm in the IB Chemistry data booklet, various values around/over 100pm in Wikipedia articles, but in any case, still far greater than the atomic radius of fluorine)? Applying a Bohr-Rutherford diagram (which I understand is a simplistic/somewhat inaccurate representation), it would appear that the sodium ion has a greater nuclear charge than fluorine, and the same number of electron shells. The nuclear charge, as I've been taught, overcomes the increased electron-electron repulsion, which should therefore lead to a smaller radius (neon illustrates this trend, as its radius, 58pm, is smaller than that of fluorine, and it has the same differences with fluorine as the sodium ion does (although neon does have 1 fewer proton)). I've asked around, and someone suggested it may have something to do with sub-levels (which I have not learned but sort-of understand through my own reading), although I do not see how this explains it. Could someone offer an explanation? (All group 1 ions appear to be larger than group 17 and 18 atoms.) Brambleclawx 02:09, 17 January 2013 (UTC)[reply]

You're likely measuring apples and oranges here. Atomic radius is a fuzzy concept (because atoms are fuzzy objects), and there can be great variance in how radii are measured or defined from one method to another. The Van der Waals radius of an atom, derived from the Van der Waals equation of state for a gas-phase atom, is not going to be a compatible measurement with the ionic radius, which is measured empirically from the crystal lattice via X-ray crystallography. Different measurements measuring different things in different ways cannot be compared quantitatively. --Jayron32 02:59, 17 January 2013 (UTC)[reply]
Thank you. I guess I was just focussing too hard on there being a definitive boundary, hm? Brambleclawx 00:54, 20 January 2013 (UTC)[reply]
You should be measuring the ionic radius and not the atomic radius for F to compare this. Currently you are comparing the radii of F and Na+. Shannon ([1]) gives rion(F)=119 pm and rion(Na+)=116 pm. Double sharp (talk) 15:34, 18 January 2013 (UTC)[reply]
As a matter of fact, I fully intended to compare F and Na+. I am aware that F- is indeed larger, but I was hoping to understand why sodium ion was bigger than fluorine atom, with a basis in what I've learned supposedly affect these things: "nuclear charge", "electron shell shielding", "electron-electron repulsion" and "number of shells". Brambleclawx 00:54, 20 January 2013 (UTC)[reply]
All of those things are real effects; it's just that the methods for measuring these things are not as simple as "fetch a ruler". Hypothetically, if you could measure neutral F atoms and Na+ ions using the exact same method, you would get the results you expect. It's just that there is no simple means to do that. There isn't even a real "ionic radius" of Na+; each ionic lattice that the ion exists in will have a different effective Na+ radius; this is actually explained in the article ionic radius. --Jayron32 02:14, 20 January 2013 (UTC)[reply]
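The apples-and-oranges point in this exchange can be seen by lining the quoted figures up side by side. A minimal sketch in Python (an editor's illustration, not part of the original discussion), using only the values quoted in this thread, in picometres:

```python
# Radii quoted in this thread, in picometres. Values obtained under
# different measurement definitions (atomic radii vs. Shannon ionic
# radii) are not directly comparable -- that is the point made above.
radii_pm = {
    "Ne (atomic)": 58,
    "F (atomic)": 64,
    "Na+ (ionic, IB data booklet)": 98,
    "Na+ (ionic, Shannon)": 116,
    "F- (ionic, Shannon)": 119,
}
for species, r in sorted(radii_pm.items(), key=lambda kv: kv[1]):
    print(f"{species:30s} {r:4d} pm")
```

Within a single consistent scale (Shannon's), F- does come out larger than Na+, as Double sharp notes; the surprising-looking 64 pm vs. 98 pm comparison mixes two different definitions.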

Olive-colored matters in cooked crabs and crayfish

When you peel open the shell of a crab, you'll see some soft olive-colored matter in the shell, mostly on the sides. What is that? Is it supposed to be edible?

When you pull the head of a cooked crayfish off its tail, it'll expose some olive-colored matter, soft and perhaps runny—something that seems to have been in the head. Again, what is that? Is it supposed to be edible? — Preceding unsigned comment added by 71.185.166.208 (talk) 04:33, 17 January 2013 (UTC)[reply]

I think what you are talking about in crab is the hepatopancreas, also known as tomalley. --PlanetEditor (talk) 04:45, 17 January 2013 (UTC)[reply]
And I've eaten them (in lobster, crayfish, and blue crabs) and suffered no ill effects. Eating crayfish has a bit of a ritual associated with it ("suck the head and pinch the tail"). This video shows the procedure; the "suck the head" portion involves extracting the tomalley from the crayfish. Being essentially liver, tomalley/hepatopancreas has the same sorts of health risks associated with eating liver; if you consumed a bowl of the stuff every day for breakfast, it might be unhealthy, but in moderation (which is how most people eat these foods; they aren't everyday staples) you'd be fine. The Wikipedia article on tomalley notes that there have been health warnings against eating the tomalley of specific shellfish at specific times, but this is often associated with red tide; that makes sense as the liver is basically a filter organ, and thus when there are higher-than-normal levels of toxic substances in the water the lobster is living in, there's going to be more of that stuff in the tomalley as well. --Jayron32 05:28, 17 January 2013 (UTC)[reply]

Perception of time between age groups?

I recall watching a tv show where Michio Kaku introduced an experiment that tested the perception of time under duress (does time really slow down during an accident, for example). But I'm wondering if there have been any experiments on the perception of time between different age groups? For example, as a child, I clearly recall getting up, going to school—which dragged on forever—but then in the afternoons, you cram in as much adventure as you can, and you were able to do so. Decades later, I notice I'm getting ready for bed every day, but it feels like I just did that, even though it was 24 hours ago. Adults are always saying how fast time flies. Is it simply due to a difference in, say, stress? I'm very curious, though, if there've been actual (and good) studies on this. – Kerαunoςcopiagalaxies 06:31, 17 January 2013 (UTC)[reply]

Original research: When I was a child, half an hour would seem like forever, but now it doesn't seem like a long time at all. And paradoxically, back then I wasn't able to read the seconds on a digital watch because they'd flicker too fast for my eyes, but now I can do it easily. 24.23.196.85 (talk) 06:45, 17 January 2013 (UTC)[reply]
Wikipedia has an article named Time perception which may provide an interesting launching point for the OP to research the answers to their question, and to other related topics. --Jayron32 06:58, 17 January 2013 (UTC)[reply]

I have observed that the main reason is that any given period of time, say 24 hours, decreases as a percentage of total life experience as the total time of your life experience increases. This has been cited many times here. For example, 24 hours to a 1-day-old baby is the whole of its life experience so far, but to a one-month-old it is only 1/30 of its total life experience.165.212.189.187 (talk) 15:30, 17 January 2013 (UTC)[reply]
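The "proportional" idea in the comment above is easy to make concrete. A short Python sketch (an editor's illustration, not from the thread), computing one day as a fraction of total lived time at various ages:

```python
def day_as_fraction_of_life(age_in_days):
    """One day as a fraction of total life experience at a given age."""
    return 1.0 / age_in_days

# Approximate ages in days; 10 and 40 years ignore leap days for simplicity.
ages = {"1 day old": 1, "1 month old": 30,
        "10 years old": 3650, "40 years old": 14600}
for label, days in ages.items():
    frac = day_as_fraction_of_life(days)
    print(f"{label:>13}: one day is 1/{days} = {frac:.5f} of life so far")
```

On this model, a day for a 40-year-old is a quarter of the subjective "size" it had at age 10, consistent with the intuition that time seems to speed up with age.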

Tangerine peel oil and styrofoam cup

I am drinking tea in a styrofoam cup (I know, I know, shame on me, but in my defense I usually have a mug) and I have a bit of a cold. So I took the peel of my tangerine and ripped it into small pieces over the top of the cup of tea. I could see the oils make a film on the surface of the tea, and a good amount of it got into my tea. A few minutes later I noticed that the styrofoam cup above the water line had been eaten away! What happened? Is this safe to drink? I won't, but I'm just wondering. This is not necessarily a request for medical advice: if the reaction has leached certain chemicals into the tea, then that is just plain fact.165.212.189.187 (talk) 15:22, 17 January 2013 (UTC)[reply]

I recommend you don't drink it but take it to a toxic waste facility for proper disposal. Didn't your mother teach you about mixing chemicals? If you pour it down the sink we may end up with mutant marine animals that may take over the world. (kidding)--Canoe1967 (talk) 15:40, 17 January 2013 (UTC)[reply]
I only know that acid can eat away at aluminium foil. You can't use it to wrap food that carries tomato, lemon, etc. In Catalonia it's traditional to use tomato in all sandwiches. It is still traditional that mothers prepare sandwiches for their kids, and they have to add a layer of paper between the foil and the sandwich, or switch to plastic film. --Enric Naval (talk) 15:42, 17 January 2013 (UTC)[reply]
I think you have probably rediscovered Limonene recycling. Sean.hoyland - talk 15:43, 17 January 2013 (UTC)[reply]
I remember using a styrofoam cup full of petrol to prime a carburetor once. The cup turned to oobleck very quickly. It was probably some type of hydrocarbon in the peel.--Canoe1967 (talk) 15:51, 17 January 2013 (UTC)[reply]
We're not supposed to give medical advice, but I think we should be allowed to say that styrofoam is not a food, even when dissolved in a hydrophobic oil. Wnt (talk) 16:15, 17 January 2013 (UTC)[reply]
To enlarge and clarify what Wnt is saying - it is entirely possible for quite a range of oils and related organic chemicals to dissolve or corrode polystyrene. This has nothing to do with acidity, and everything to do with the propensity of non-polar solutes to dissolve in non-polar solvents. Tea is mostly water, which is very polar, with some emulsified fats, which are mildly polar and also not good solvents. On the other hand, vegetable oils are quite good solvents, and mostly non-polar. So they dissolve polystyrene. And no, the resulting mish-mash is not good to drink. AlexTiefling (talk) 16:30, 17 January 2013 (UTC)[reply]

Boeing 787 manufacturing locations of windshields and fuel tanks

Didn't see those in the summary at Boeing_787_Dreamliner#Manufacturing_and_suppliers 20.137.2.50 (talk) 16:22, 17 January 2013 (UTC)[reply]

The Boeing 787 cockpit windows are made by PPG Aerospace Transparencies.
http://www.ppg.com/coatings/aerospace/transparencies1/B787_tb_v9.pdf
The Boeing 787 fuel tanks are manufactured by Boeing. --Guy Macon (talk) 16:42, 17 January 2013 (UTC)[reply]

Which article is this?

I failed to jot it down. Thanks in advance. [2] 65.88.88.71 (talk) 22:20, 17 January 2013 (UTC)[reply]

That site requires a password, which I do not have. Sorry. If you don't have one either, then WP:REX may be able to help you. --Jayron32 22:26, 17 January 2013 (UTC)[reply]
I'm not sure exactly how I found it, but I think the article is: Melinda Wenner Moyer (November–December 2012). "You are what you eat: food can affect your behavior--even when you just look at it". Psychology Today. 45 (6): 43. Looie496 (talk) 00:04, 18 January 2013 (UTC)[reply]
Thank you! 65.88.88.71 (talk) 21:20, 18 January 2013 (UTC)[reply]


January 18

V-engine with unequal stroke lengths on different banks

The Soviet V-2, a V-12 diesel engine, has a stroke of 180 mm on the left cylinder bank and 186 mm on the right cylinder bank.

  1. Is there a name for this configuration? Googling "unequal stroke length" got me nothing related.
  2. What's its purpose?
  3. Is there any other engine with this setup?

Dncsky (talk) 02:00, 18 January 2013 (UTC)[reply]

It does seem odd. A quick google search found this [3], which states that "The V-2-34 features an aluminum alloy body and is meant to be mounted lengthwise in the vehicle hull. Two cylinder banks with 6 cylinders each were placed in an angle of 60 degrees. The pistons are linked to the central crank shaft by wrist connecting rods, which means that only six rods are directly connected to the crank shaft. This special design also results in a slightly lower stroke in both sides of the engine. The right side has a stroke of 186.7 mm and 180mm in the cylinders on the left bank". From the description, the unequal stroke may be a side-effect of the design, rather than an intended feature. The photo of a cutaway V-2 here [4] seems to show a connecting rod (on the left) which is connected to something other than the crankshaft - from the look of it, to an extension lug on the side of a conventional connecting rod big-end. This would fit the description, and if it works the way it appears, the reduced stroke for the secondary connecting rod may be a simple consequence of the geometry. AndyTheGrump (talk) 02:28, 18 January 2013 (UTC)[reply]
Thank you!Dncsky (talk) 02:34, 18 January 2013 (UTC)[reply]
Resolved
Looking further, it may be the other way round: using Google Translate on one of the sources that the Russian-language article on the engine [5] cites [6], it appears that the secondary connecting rods have the longer stroke - and in consequence, those cylinders have a higher compression ratio. AndyTheGrump (talk) 02:49, 18 January 2013 (UTC)[reply]
That would be a physical impossibility, unless the secondary conrod connecting point was greatly displaced, on a spur off the primary conrod. That would carry a very severe weight and strength penalty - you wouldn't do it. If you visualise a connecting point on the primary conrod, inline with its longitudinal axis, that point must, as the crankshaft rotates, describe a circle just above, and at a smaller diameter than, that of the big end. Hence the stroke produced by the secondary conrod must be less than that produced by the primary conrod. If I were the designer of this engine, and for some reason wanted this conrod arrangement (I can't think why I would, although it does allow a large bearing area), I would doctor the intake valve timing so that the effective compression on both sides was equal. This would enable smooth running while using the greatest commonality of parts. The impact on power output (rather arbitrarily set in a diesel engine) and fuel economy would be negligible.
As for other engines using this configuration, radial aircraft engines have been made with one master conrod and several secondary conrods attached in a ring around the master bigend. In a radial engine, there is not the room for separate bigends for each rod to be on the crankshaft.
Ratbone 124.178.174.189 (talk) 03:05, 18 January 2013 (UTC)[reply]
Looking at the cutaway photo, I'd say that the secondary conrod connecting point was displaced, on a spur off the primary conrod, though it is hard to tell. In any case, even with the secondary conrod connecting to the primary on the centreline of the primary, the thrust from the secondary conrod is going to create a bending force, due to the cylinder V configuration. As for the exact geometry, without further information there is no way to be sure. AndyTheGrump (talk) 04:28, 18 January 2013 (UTC)[reply]
A longer stroke does not imply a higher compression ratio. If you double the stroke and you move the piston down far enough to double the volume at top dead center the compression ratio is unchanged. --Guy Macon (talk) 04:30, 18 January 2013 (UTC)[reply]
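Guy's point is easy to check numerically. A sketch in Python (an editor's illustration): it uses the V-2's 150 mm bore and the two strokes discussed above, but the clearance volume is a made-up illustrative figure, not the real engine's:

```python
import math

def compression_ratio(bore_mm, stroke_mm, clearance_cc):
    """CR = (swept volume + clearance volume) / clearance volume."""
    swept_cc = math.pi * (bore_mm / 2.0) ** 2 * stroke_mm / 1000.0  # mm^3 -> cc
    return (swept_cc + clearance_cc) / clearance_cc

BORE = 150.0       # mm (V-2 engine bore)
CLEARANCE = 240.0  # cc -- illustrative only, not the engine's real figure

cr_short = compression_ratio(BORE, 180.0, CLEARANCE)
cr_long = compression_ratio(BORE, 186.7, CLEARANCE)
# With the SAME clearance volume, the longer stroke does raise the CR...
print(f"same clearance:   {cr_short:.2f} vs {cr_long:.2f}")

# ...but scale the clearance volume in proportion to the stroke and the
# compression ratio comes out identical, which is Guy's point.
cr_long_scaled = compression_ratio(BORE, 186.7, CLEARANCE * 186.7 / 180.0)
print(f"scaled clearance: {cr_short:.2f} vs {cr_long_scaled:.2f}")
```

So a longer stroke implies a higher compression ratio only if the combustion-chamber (clearance) volume stays fixed; the designer is free to change both.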
Good point, Guy. Anyway, I've found a diagram of the engine here (about 20% down the page): [7] - note the way the secondary conrod 'wrist pin' is offset from the centreline of the primary. As for which cylinder has the longer stroke, I'll leave it for someone else to figure out. AndyTheGrump (talk) 04:41, 18 January 2013 (UTC)[reply]
A longer stroke with the same compression ratio means that something else has to change, either the pistons and/or the cylinder head height, remembering that the piston must at top dead centre just about touch the head, or piston squish will not work properly, leading to poor combustion, fuel and combustion products forced past the rings, etc. Cheaper and easier to keep everything the same on both banks, and tweak the intake valve timing as I said. However, all this appears to be not relevant. I printed out the cross-section view linked by AndyTheGrump, and found by careful measurement that:-
  • Pistons, and cylinders on both banks are the same; So are the heads, except for being mirror images of each other;
  • The secondary conrods are indeed attached on a spur on the primary rods;
  • The spur is positioned such that, at left bank TDC, the spur bearing is positioned just slightly to the right of crankshaft centre and just slightly below the point at which a line to the spur bearing centre would make 120° to a line from spur centre to crank centre. This means that the stroke for both banks is the same!
Where did the 180 mm / 186 mm dimensions in the Wikipedia article come from? There is no reference for the dimensions cited - it's probably wrong, even though the references [16] and [17] (Russian Wikipedia) provided by AndyTheGrump clearly state it's the case. [16] clearly states each bank has a different stroke, but not for all versions of the engine, which does not seem likely.
Ratbone 124.178.147.222 (talk) 05:08, 18 January 2013 (UTC)[reply]
This reference - http://translate.google.com/translate?sl=ru&tl=en&js=n&prev=_t&hl=en&ie=UTF-8&eotf=1&u=http%3A%2F%2Fru.wikipedia.org%2Fwiki%2F%25D0%2592-2, states that the secondary rod side has greater stroke and compression (look at the table at the bottom), "due to kinematic reasons". Presumably that means to statically balance the engine - which is weird and makes no sense to me. The spur arrangement would indeed introduce balancing issues, but a difference in stroke will not fix that. Ratbone 124.178.147.222 (talk) —Preceding undated comment added 05:39, 18 January 2013 (UTC)[reply]

Seismic morse code

Let's say I was a Soviet spy in Washington, DC, and I wanted to send an SOS message to a seismometer monitored by my contact in Moscow. How large a bomb would I need to use for each dot or dash, and what sort of transmission rate could I get? --Carnildo (talk) 03:08, 18 January 2013 (UTC)[reply]

You'd need a nuclear weapon for each dot and dash, and your transmission rate would be maybe one character a day (due to the need to travel to a new site and bury the weapon prior to sending each new dot or dash). Of course, with that many nukes, you won't need to send an SOS -- you could just blow up your pursuers (along with anyone else who happens to be within a country mile) and be done with it. (Of course, doing so will probably lead to a nuclear war between the USA and the USSR, and also setting off nukes on US soil without authorization from the Soviet Prime Minister will most likely get you executed as an "enemy of the state".) Now can we PLEASE have some SERIOUS questions? 24.23.196.85 (talk) 03:28, 18 January 2013 (UTC)[reply]
Not necessarily. It is a classic question in information theory, sometimes set by lecturers in the subject. In any communications channel - radio, sound in air, seismic, or whatever - there is a trade-off between information rate, transmitter power, and noise level. Noise level is in this case set by interference at the receiving site from vibrations from vehicles on roads, construction site works, etc. For a given noise level, how much transmitter power (i.e., size of explosion) you need is proportional to how fast you want to send the message. Real communications systems generally require more power than is predicted by the theory - one has to design an encoding system to make the best of the channel. If you wanted to transmit "SOS" and could accept taking months to send it, you could perhaps employ a building site pile driver, using a code that translates the three letters into a vast number of bangs. But if you could accept only seconds, you'd probably need a nuclear bomb. See http://en.wikipedia.org/wiki/Information_theory Ratbone 124.178.174.189 (talk) 03:31, 18 January 2013 (UTC)[reply]
Absolutely not -- the magnitude of the seismic wave drops off as the SQUARE of the distance, so by the time it travels from Washington DC to Moscow, any seismic wave less powerful than that made by a nuclear explosion will have dropped to below detectable levels. This has nothing to do with information theory, this is straight physics. 24.23.196.85 (talk) 03:38, 18 January 2013 (UTC)[reply]
Sorry, but this is no different from any other communication medium. For example, in radio comms, where information theory has a major application, received power also falls as the square of the distance. Info theory tells us why, for example, shortwave broadcast radio requires hundreds of kilowatts of transmitter power to send AM music, but radio hams can communicate around the world using only a few watts of Morse code. The square law makes the transmitter power required much greater than it would be for a linear-with-distance loss system (which never occurs in practice), but the info theory math still applies. You do need a filter mechanism at the receiver tuned to the coding system in use, so that the noise is discriminated against. I suggest you read up on info theory before talking about things you are not aware of. Ratbone 124.178.174.189 (talk) 03:50, 18 January 2013 (UTC)[reply]
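The trade-off Ratbone describes is quantified by the Shannon–Hartley theorem, C = B·log2(1 + S/N): channel capacity falls as signal power falls, but never reaches zero. A quick numerical sketch (an editor's illustration; the 1 Hz bandwidth and unit noise power are arbitrary, not real seismic figures):

```python
import math

def capacity_bits_per_s(bandwidth_hz, snr):
    """Shannon-Hartley channel capacity: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1.0 + snr)

# Hypothetical 1 Hz channel; sweep the signal-to-noise ratio downward.
for snr in (1.0, 1e-3, 1e-6, 1e-9):
    c = capacity_bits_per_s(1.0, snr)
    print(f"S/N = {snr:g}: C = {c:.3e} bit/s  (~{1.0 / c:.3e} s per bit)")
```

Even at S/N of one part per billion the capacity is positive: a weaker signal does not become undetectable in principle, it just takes (absurdly) longer per bit, which is the crux of this argument.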
On the contrary, YOU should read up on physics -- a seismic signal simply WILL NOT make it halfway around the world without dropping to undetectable levels. Do you even know WHY a radio signal can be detected at such a distance? It's NOT because it's so powerful, but because it's focused in just one narrow frequency channel, and because the receiver can efficiently filter out the broadband noise and amplify ONLY the selected frequency -- which is NOT the case with a bomb and a seismometer! Also, like ALL big cities, Moscow has such a high background seismic noise level on ALL frequencies (from construction work, trucks, buses, trains, trolleys, the Metro, etc.) that even if a weak seismic signal does make the distance, it will be drowned out by the noise and undetectable! 24.23.196.85 (talk) 05:33, 18 January 2013 (UTC)[reply]
You've sunk your own argument. Yes, indeed radio works by filtering. But you can filter a mechanical transmission system (eg seismic) too. There is more than one way to filter. You can filter on frequency, as in traditional analogue radio, or you can filter on digital coding (as in cellphones and some types of military comms - termed "spread spectrum", although this is only one type of spread spectrum). Comms systems can, and do, share the same frequency if the digital coding is different, and share the same coding if the frequency is different. A simple example: Say the agent makes a thump every 32 hours for a month to send a morse dash, and every 40 hours for a month to make a morse dot. Moscow could then filter for these thump rates and exclude much noise. (it would not be smart choosing an integer multiple of 24 hours, or a sub-multiple, for obvious reasons) There are much better coding schemes but this will do for a simple example.
Your bit about Moscow being noisy is of no importance, for two additional reasons: a) it just means the transmitter has to be a bit more powerful than otherwise, and b) who said the receiving site has to be in Moscow? They can do the same as is done with embassy radio comms: set up a receiving site at a quiet location, and pass the message on to Moscow via phone or courier etc.
I strongly suggest you read up on the subjects of information theory, coding, messaging systems before you goof again. Electronic and comms engineers spend much time studying this very same field in undergraduate courses.
Ratbone 124.178.147.222 (talk) 05:57, 18 January 2013 (UTC)[reply]
I strongly agree - even a small source is not 100% undetectable - even if the amount of signal that reaches the detector is amazingly tiny. If the pulses are regular enough - and at a known frequency - then you can detect it given enough time. Suppose we have a hypothetical pile-driver that thumps (or does not thump) at PRECISELY one-second intervals and does that (or doesn't do it) continuously to send one bit of information every 32 years. Let's suppose that the effect of one of those thumps is a signal that appears at the detector that is one part per billion of the 'background' vibrations due to distant volcanoes, cars, footsteps of the experimenter, etc. Let's have the receiving seismograph be monitored every half second for all of that time and separate out that data into the odd-numbered half seconds and the even-numbered half seconds. (Technically, you'd need to sample a little faster than that...but let's keep things simple for a moment!)
  • If the transmitter sent a "0" bit (the pile driver didn't thump for 32 years) then the difference between the average of the odd and of the even half-second readings measured over 32 years will be very, very close to zero.
  • If the transmitter sent a "1" bit (by thumping once a second for 32 years) then the average vibration at all of the odd-numbered values (when the pile driver's hammer was being lifted ready for the next thump) over 32 years will be significantly less than the average of the even-numbered half-second values when the pile-driver hit the dirt. The amount of difference in each measurement was one part in a billion - but added up a billion times and divided by a billion - the result should be that the even-numbered average will be measurably greater than the odd-numbered average...easily noticeable...quite distinctive!
The difference between the two is perfectly adequate to receive a one bit signal at a billionth of a Hertz transmission rate. Even if the detector isn't accurate enough to receive that signal, the errors in the detector will average out over enough time - and the signal will show through. Even if the detector has only one bit of precision - showing a '0' for less-than-average amounts of ground displacement and '1' for greater-than-average, the presence of background "noise" around that cutoff point will result in the pile driver producing a statistically greater-than-average number of 1 bits over 32 years...even if the detector is inherently crappy and noisy - the resulting noise produced in the signal will average out eventually. There are entire areas of science and mathematics in "communications theory" that relate signal strength, noise ratios, bit rates, detector precision and so forth.
Of course this might fail if there is interference from other sources at the same frequency. So if you picked once-per-second (1Hz) as your signalling rate and there is a mechanical clock in the room with the detector - then the clock is a transmitter "broadcasting" on the same frequency and that could easily wipe out the signal that you're trying to detect. The frequency of our pile-driver clock would also have to be pretty solid - if it was "off" by a bit, or drifted compared to what the detector is expecting, then that would be difficult. A smart pile-driver-transmitter designer would pick a frequency that's not polluted by noise from man-made sources. You'd want to avoid 1Hz transmission rates for sure! Maybe also avoid rates close to that of the footstep-frequency of a typical adult human...avoid rates similar to the density of road traffic near where the receiver is located.
The practicality of averaging the results from a seismometer for a billion seconds and getting one bit of information every 32 years is going to be problematic...but it's not impossible.
To pick a "real world" example, consider how hard it is to detect a planet orbiting a star a few hundred light-years away by the tiny amount of light that's blocked by the planet as it crosses in front of the star for a few hours once a year. The amount of variation in the brightness of the star as a tiny planet crosses in front of its disk is vanishingly small...but analyse the light over a long enough time and you can pull that tiny variation out from all of the background noise in the telescope. Science does these kinds of amazing tricks routinely. It's just a matter of technique and time. SteveBaker (talk) 15:48, 18 January 2013 (UTC)[reply]
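SteveBaker's odd/even averaging scheme can be simulated directly. A sketch in Python (an editor's illustration): the "thump" here is a made-up signal at 1% of the noise amplitude rather than one part per billion, so the run finishes in seconds rather than decades:

```python
import random

random.seed(42)

N = 1_000_000   # half-second samples
THUMP = 0.01    # signal amplitude, 100x below the noise sigma of 1.0

even_sum = 0.0  # phase in which the (hypothetical) pile driver lands
odd_sum = 0.0   # quiet phase
for i in range(N):
    noise = random.gauss(0.0, 1.0)
    if i % 2 == 0:
        even_sum += noise + THUMP
    else:
        odd_sum += noise

even_avg = even_sum / (N // 2)
odd_avg = odd_sum / (N // 2)
print(f"even-phase mean: {even_avg:+.4f}")
print(f"odd-phase mean:  {odd_avg:+.4f}")
print(f"difference:      {even_avg - odd_avg:+.4f}  (true signal: {THUMP})")
# Every individual sample is dominated by noise 100x larger than the
# signal, yet the difference of the phase averages lands near 0.01:
# the averaging error shrinks as 1/sqrt(N), so a fixed periodic signal
# eventually emerges from any finite noise floor.
```

Pushing the same arithmetic to a part-per-billion signal just multiplies the required number of samples, which is exactly the power-for-time trade discussed above.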
All this still won't help the KGB if the signal is so weak as to be below detectable levels (which will be the case for a Washington DC to Moscow seismic signal that uses anything less than maybe a 100-kT nuclear bomb). 24.23.196.85 (talk) 01:04, 19 January 2013 (UTC)[reply]
Didn't you read anything that Ratbone and SteveBaker wrote? Information theory, as both have outlined, is a well-established branch of science, applying to any communication medium: radio, communication by light beam, mechanical systems, seismic communication, etc. A standard textbook used in many information/communication theory courses for some years is Modern Digital and Analog Communication Systems, B P Lathi, pub: Holt-Saunders. Chapter 8 covers in depth the principles discussed by Ratbone and SteveBaker here. Coding is covered in other chapters. If you have good high-school level math, you can study this or similar books and realise you are wrong - given enough time and a good coding system, there is no lower limit on transmitter power. The subject of information theory, which can be regarded as the science of sending a message as fast as possible in a finite bandwidth in the presence of noise with the least transmitter power, was established in papers by C E Shannon, R V L Hartley, and H Nyquist in the Bell System Technical Journal (Nyquist and Hartley in the 1920s, Shannon in 1948). Keit 121.215.159.209 (talk) 01:44, 19 January 2013 (UTC)[reply]

I have been poached! μηδείς (talk) 12:30, 18 January 2013 (UTC)[reply]

You are wrong. Given enough time and a repetitive signal, the concept of "below detectable levels" does not exist. Repeat it enough times on a regular enough basis and you can send a Washington DC to Moscow seismic signal with the power of a firecracker. Of course you will die of old age long before it is received... I haven't run the numbers but I suspect that it would take longer than the current age of the universe. Go back and read SteveBaker's comment again. --Guy Macon (talk) 01:35, 19 January 2013 (UTC)[reply]
Of course the concept of "below detectable levels" exists -- if the signal is weaker than the seismometer's sensitivity limits, then the instrument won't pick it up no matter how many times it repeats! YOU are the one who's wrong here! 24.23.196.85 (talk) 02:09, 19 January 2013 (UTC)[reply]
I'm transmitting a "1" with a dim LED in a sealed safe 1km underground at .0187534019265 Hz. The receiver is a silver halide camera on the other side of the Earth. :) Sagittarian Milky Way (talk) 03:28, 19 January 2013 (UTC)[reply]
I am totally clueless here, but doesn't transmission require a channel or a medium? The earth will conduct vibrations. But in Sag.'s example, does the 10,000 degree core of the earth count as a channel for the transmission of light? μηδείς (talk) 04:02, 19 January 2013 (UTC)[reply]
There is another issue with Sag's example: Silver halide detection requires the received energy to exceed a threshold - below the threshold for each silver halide crystal nothing happens. However, this is not important wrt the OP's question, as a linear transducer can be used to detect seismic waves. Ratbone 121.215.34.98 (talk) 04:09, 19 January 2013 (UTC)[reply]
The Sun can be seen through thin enough gold leaf. Let's say the sunlight is made at least 10 million times dimmer by passing through the gold leaf (100000 is used for visual filters). The Earth is I don't know a 100 trillion times thicker? If the Sun could be bought, the box would say 35,700,000,000,000,000,000,000,000,000 lumens. Adjust for the inverse-square law of the distances. So at least 260000000000000000000x10000000100000000000000 times dimmer than visual detectability. And if you say the halides won't be activated and there's a bright core in the way, then information theory! (make sure the photos are near the threshold, though) Sagittarian Milky Way (talk) 17:21, 19 January 2013 (UTC)[reply]
And even a linear transducer will have a certain detection threshold below which it will not detect a signal. If nothing else, the Heisenberg uncertainty principle puts an ABSOLUTE limit on the accuracy of ALL measurements and precludes the detection of an "infinitely small" signal. 24.23.196.85 (talk) 07:04, 19 January 2013 (UTC)[reply]
But you don't need to detect individual pulses to detect the message. The signal is superimposed on the background noise. Sometimes the sum of noise and signal pulse will breach the detection threshold, and sometimes it won't. But if the signal pulse is there, this sum will breach the threshold more often than if it isn't. You can collect data over a long enough time to get to an arbitrary certainty for distinguishing the "pulse" from the "no pulse" case, no matter how small the difference. Heisenberg does not affect this. --Stephan Schulz (talk) 09:59, 19 January 2013 (UTC)[reply]
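The threshold argument above can be checked with a quick simulation. The numbers are hypothetical: unit Gaussian noise, a hard detector threshold at three sigma, and a "signal" at one tenth of the noise level, far below the threshold:

```python
import random

random.seed(42)

THRESHOLD = 3.0   # the detector only registers samples above this level
SIGNAL = 0.1      # far below both the noise (sigma = 1) and the threshold
TRIALS = 500_000

def crossing_rate(signal_level):
    """Fraction of noisy samples that exceed the hard threshold."""
    hits = sum(
        1 for _ in range(TRIALS)
        if random.gauss(0.0, 1.0) + signal_level > THRESHOLD
    )
    return hits / TRIALS

rate_without = crossing_rate(0.0)
rate_with = crossing_rate(SIGNAL)
# The sub-threshold signal still raises the crossing rate measurably;
# collect data for long enough and the two cases can be told apart.
```

With these values the crossing rate roughly doubles even though no single sample ever reveals the signal directly, which is the statistical trick being described.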
Honestly, all the discussion of coding theory is largely academic, while the original question was phrased in practical terms. A system that takes 100 years to transmit a signal is of no practical value. So far, no one has really looked at the problem in practical terms. We would need to know what the transmission efficiency of seismic waves from DC to Moscow actually is. And we need a frame of reference for how much seismic energy you can generate from an explosive (or any other method). Until one has those details in hand, you can't figure out if the initial signal to noise ratio is 10:1, 1:1, 1:1000, 1:1000000, 1:10^30 or what. Without such basic details you can't figure out what is needed to build a practical system for seismic communication. Coding theory can dig you out of the noise, but if you are starting many orders of magnitude below the noise floor then it is unlikely the system can ever be built in a practical way. Dragons flight (talk) 04:04, 19 January 2013 (UTC)[reply]
The original question was deliberately posed as an absurd one on the talk page--hence my obscure reference to being poached. Carnildo is having a little joke on us by posting it here. But I do think the question of a medium which follows from subsequent comments is an interesting one. For example, "In space, no one can hear you scream." I take that to mean that ordinary sound waves simply cannot propagate through a vacuum. (Although supernova shockwaves can through near vacuums, if I understand correctly.) Doesn't information theory require a suitable medium, and aren't claims for infinite transmissibility (again, a term I am making up) based on assumptions like the perfect gas that don't hold in reality? μηδείς (talk) 04:19, 19 January 2013 (UTC)[reply]
I reckon I showed that it is not a system you would use in practice in my first post. Using radio is the tried and tested way and is vastly cheaper and more convenient. The agent could also simply phone and use a coded message. We used to phone our family in East Germany from time to time, business calls were made, and who is to know that a certain pre-arranged number (not necessarily in any particular country) will be answered by someone trained to react to a certain secret code word by forwarding it on. However, the OP asked how large a bomb is required for seismic signalling, and what sort of transmission rate could be achieved. That's a good question to ask, and we've answered it - the transmission rate and bomb size need to be traded off against each other, and the trade off will not be of practical use as you say. Ratbone 121.215.34.98 (talk) 04:21, 19 January 2013 (UTC)[reply]
Yes, information theory requires a medium by implication, as the system bandwidth is one of the calculation inputs. For electromagnetic waves (radio, light), the medium can be a vacuum. Sound can travel in anything that is not a vacuum. The medium in this case is the earth propagating seismic waves - essentially low frequency sound travelling in an elastic solid, the same as it does as a result of geologic dislocations, i.e., earthquakes. Radio waves and light travelling in a vacuum form a lossless system - the strength of the signal falls off with the square of the distance purely because of the ever increasing volume per unit energy. In the case of sound (or seismic waves) travelling in a solid, there is a loss above this - because there are frictional losses converting some of the sound energy to heat along the way. That does not mean a total loss after some distance x though - you just need to increase the transmitter power (bomb size) to compensate.
I don't see the question as necessarily absurd or silly - in part because this is a classic assignment question sometimes asked by lecturers in information theory and coding, as I stated in my first post. By specifically asking about bomb size and signalling rate, the OP has displayed some ability to think beyond what many lay people can do (and clearly better than poster 24.23.196.85). Or he might be just being silly and fluked a good question - I don't know the guy, so I cannot know. We shouldn't be too ready to judge a question silly - many of us ask questions of an unusual scenario in order to test or increase our understanding of physics. I've done it myself - it is a very effective technique.
Ratbone 121.215.34.98 (talk) 04:39, 19 January 2013 (UTC)[reply]
The silliness lay in my original talk page suggestion that a spy would send an SOS signal by nuclear bomb, not in the pure physics aspect. As a physics question it could simply be reworded as what TNT equivalent would be necessary for a regular signal to be detected on the other side of the Earth. (I also understand at the antipodes the signal would be focused and hence magnified.) μηδείς (talk) 05:09, 19 January 2013 (UTC)[reply]
I should point out that Moscow and Washington DC are not geographical antipodes (though they can be considered political antipodes) -- in fact, the distance between them is approximately 5000 miles (8000 km) on a great circle route. As for calculating the size of the bomb needed, the most important piece of info is the seismic wave absorption coefficient of oceanic basalt -- this absorption, along with the inverse-square decrease in intensity I mentioned earlier, will account for most of the signal attenuation over this distance. (In practice, even this prediction will be overly optimistic, because the seismic waves will be scattered and partly retroreflected back toward Washington DC every time they encounter a geological fault or a boundary between different rocks -- in particular, there will be major attenuation at the Mid-Atlantic Ridge, which will mean the signal will arrive in Moscow even weaker than what this theoretical model predicts.) 24.23.196.85 (talk) 07:17, 19 January 2013 (UTC)[reply]
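For reference, the great-circle figure quoted above is easy to check with the standard haversine formula (city coordinates rounded to one decimal place):

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula, mean Earth radius."""
    r = 6371.0  # km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Washington DC (38.9 N, 77.0 W) to Moscow (55.8 N, 37.6 E):
distance = great_circle_km(38.9, -77.0, 55.8, 37.6)   # roughly 7800 km
```

The result, a bit under 8000 km, agrees with the "approximately 5000 miles" estimate.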
The same thing happens with any wave propagating through a medium. Electromagnetic waves suffer a degree of attenuation above the square-law-with-distance rule in non-vacuum media due to a degree of conversion into heat, and portions get reflected backwards when any change in media offering different dielectric and/or magnetic properties is encountered. In all cases, the intensity at some distance x is non-zero, and as it is non-zero, it can be used to communicate by means of frequency filtering or pattern filtering. Indeed, in seismic testing to determine geology for earth resistivity calculations, filtering is used to reduce the size of the bang/thump to that of striking a plate with a sledge hammer a few times - the detection equipment knows when you hit the plate and triggers its signal storage and timing from it. Ratbone 121.215.17.194 (talk) 09:54, 19 January 2013 (UTC)[reply]
What kind of seismic testing are you referring to? I'm not aware of seismic being used to determine resistivity, although I guess it's sometimes used to constrain the earth model for an inversion based on well log data. I don't understand "filtering is used to reduce the size of the bang/thump to that of striking a plate with a sledge hammer a few times". A lot of steps are involved in processing seismic, but I don't recognize that one, although I guess you might be referring to compressing the wavelet via decon. If anything, the opposite of reducing the size of the bang is done to try to minimize the effects of absorption and geometric spreading from the data. The recording equipment for marine streamers and onshore/ocean bottom cables certainly needs to know when the airgun arrays/vibrator sweeps happen or else you will be burning through hundreds of thousands of dollars in acquisition cost with nothing to show for it in no time. Sean.hoyland - talk 12:29, 19 January 2013 (UTC)[reply]
Seismic testing using a sledgehammer and plate as the signal source is common in testing for civil engineering purposes. What I was referring to is the use of the same sort of gear for determining the geo-electric conditions in order to design/precalculate earth rod systems for electrical earthing in high voltage and extra high voltage electricity transmission. In any AC electrical transmission system, where overhead wires or buried cables transport electrical energy from one location to another, a portion of the return current flows in the Earth. Under fault conditions this current can be substantial, and the connections to Earth at each end are often substantial engineering projects - they may be multiple metal rods driven 30, 40, 50 metres or more into the ground. The cost is substantial, so it is important that the Engineer gets his calculations as to just what is required at a given site accurate. The return current flowing in the Earth flows centered at a depth known as the Carson Depth (after the USA Engineer who first worked out the math, in the 1920s). The Carson Depth is typically a few hundred meters to a few kilometers down. This means that the Engineer must have an adequately accurate understanding of the electrical conductivity of the various geological strata in the vicinity, say to within a few hundred metres to a few tens of kilometers of the site.
A simple geology is a two-layer geology, e.g., a top layer of dirt having a relatively moderate electrical conductivity and underneath it a water table having high conductivity. Or, a top layer of dirt having moderate conductivity and underneath it at some depth a rock layer having very poor conductivity. In these cases, one can determine the interface depth by measuring the electrical resistivity on the surface at a progressive set of test distances. Plotted on a graph, the resistivity vs distance will be an "s"-shaped curve, and by comparison with standard curves, one can determine both the interface depth and the electrical resistivity of each layer.
A more common geology, especially in the parts of the World where I practice, is a three-layer geology: A top soil layer having high electrical conductivity, underlying sand having poor conductivity to a depth of a few tens to a few hundred meters, and underneath that a rock layer having very poor electrical conductivity. This structure can be evaluated by surface electrical measurements, but it is very inaccurate and unsatisfactory - particularly if the rock layer is irregular in depth. An Engineer can do his tests, decide that x means electrodes to depth y will do, but upon installation there just happened to be some local rock anomaly. Bosses don't appreciate being quoted z dollars and then being told we need 2z dollars after drilling work has supposedly been completed. However, with seismic testing and electrical testing, the Engineer can get a more accurate picture.
Sometimes, the three layer geology is truncated by a geological fault, or a river, or whatever. In these cases electrical testing on its own is pretty much hopeless, but seismic testing will give an answer.
In all these cases, as with geology testing for civil engineering, the range of the tests is a few hundred metres to a few kilometers, and striking a plate on the ground with a sledge hammer is adequate, and avoids the need for a licensed explosives expert. Often, you are testing in built up areas (towns, cities) where detonations are undesirable or not allowed anyway. However, ONE strike with a sledge hammer may not be enough to get the signal above the noise from construction sites, trucks on roads, etc. So, you hit it several times, and the instrumentation integrates the signal from all the hammer blows, and the signal is seen to rise out of the noise.
Ratbone 120.145.189.33 (talk) 04:03, 20 January 2013 (UTC) [reply]
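The hammer-blow integration described above is ordinary stacking: averaging n recordings of the same blow shrinks the incoherent noise by roughly a factor of √n, so the amplitude SNR grows as √n. A toy simulation (the signal and noise levels are illustrative, not field values):

```python
import random

random.seed(0)

def stacked_snr(n_hits, signal=0.2, noise_sigma=1.0, repeats=10_000):
    """Estimate the amplitude SNR after averaging n_hits noisy recordings."""
    sq_err = 0.0
    for _ in range(repeats):
        # Average n_hits recordings of the same hammer blow.
        avg = sum(signal + random.gauss(0.0, noise_sigma)
                  for _ in range(n_hits)) / n_hits
        sq_err += (avg - signal) ** 2
    residual_noise = (sq_err / repeats) ** 0.5
    return signal / residual_noise

snr_single = stacked_snr(1)     # one blow: signal buried in the noise
snr_stacked = stacked_snr(100)  # 100 blows averaged: roughly 10x better
```

With 100 blows the residual noise drops about tenfold, matching the √n rule the instrumentation relies on.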
We have a real-world example of a chemical explosion in Wyoming that registered on seismographs as far away as Europe. --Guy Macon (talk) 09:18, 19 January 2013 (UTC)[reply]
That had "only" about 40 tonnes of (unspecified) explosives - or about one large truck full of the stuff, so it gives us an upper bound. And it wasn't focused to produce a seismic shock, but set off by accident, so much of the blast probably dissipated into the air. One should be able to do better. --Stephan Schulz (talk) 09:52, 19 January 2013 (UTC)[reply]
All right then, I guess a few tons of high explosive will do the trick. Still, that would be completely impractical as a signalling system. 24.23.196.85 (talk) 05:43, 20 January 2013 (UTC)[reply]

Reason for glass being transparent face on but opaque edge on

My understanding of why transparent materials (at least SiO2) are transparent (which is informed by what I remember from a semiconductor physics class I took in college) is that the energy difference between the constituent atoms' base state and next available excited state is larger than the energy that photons (of visible light frequencies encountered in everyday life here on Earth) have, so they pass through the material. But I was just thinking about this as I looked at a piece of glass that was transparent when viewed face-on but green when viewed edge-on. And it's not because of thickness: a piece a quarter-inch wide and a quarter-inch thick is still highly transparent through its face and nearly opaque green through its side. Who can explain in as SIMPLE terms as possible what key features of the crystal structure (though I thought glass was often amorphous) cause high transparency at one angle and high opacity at another? 20.137.2.50 (talk) 14:44, 18 January 2013 (UTC)[reply]

In a word: thickness. Consider that a cube of glass won't have any preferred direction of "transparency": it will be equally transparent in all directions. Differences in transparency only become apparent in big sheets of glass, which are basically large rectangular prisms. The shortest side is dramatically shorter than the two longer sides, so the amount of material that the light has to pass through is significantly more edge-on. If the index of refraction of the glass is such that a light beam traversing the glass in a particular direction gets deflected before it reaches the other side, you can't see clearly in that direction. Picture light going through a piece of glass 1/4 inch thick and 3 feet wide, edge on. Any light that strikes that edge even at a slight angle will be deflected by refraction enough to never reach the opposite side. This doesn't happen when you view the pane of glass from the front. --Jayron32 14:51, 18 January 2013 (UTC)[reply]
I don't have a cube of glass handy to check, but I'm pretty sure it will be equally transparent through all faces, if they are finished the same way. Note that many types of window glass have a different type of finish on the edges, and this can contribute to the thickness effects Jayron describes above. SemanticMantis (talk) 14:59, 18 January 2013 (UTC)[reply]
(ec) The key bit there is 'if they were finished the same way'. If you're looking at pieces cut from flat sheets of glass (like panes of glass used for windows) I can think of (at least) three reasons why the appearance of the glass might be different from the cut sides/edges as it is from the originally-manufactured flat face.
  1. Most plate glass is made using the float glass process. This process produces faces that are extremely smooth and uniform—and the opposing faces will be very nearly perfectly parallel. The other surfaces of the glass cannot be subjected to this process (obviously).
  2. Cutting the edge of the sheet of glass can (will!) introduce microscopic and macroscopic defects, fractures, and other light-scattering features to the cut edge. Even if polished, these surfaces will not be identical to the float-glass faces. Opposing cut faces are also unlikely to be quite as parallel as the opposing faces of the float-glass pane.
  3. Surface coatings may have been applied to the exposed surfaces of the plate glass. Some may be deliberate and permanent, for reducing visible reflections or heat leakage, others may be incidental, such as adhesive residue left behind from the protective paper often applied to glass for shipping.
Any or all three of these factors can and will affect the way that light scatters and reflects (internally and externally) from the surfaces of the piece of glass, and thereby affect the appearance of the light transmitted through the glass. If you were to take a larger lump of glass, and cut and polish it into a cube, I suspect that you would find its properties are pretty isotropic: the same in all directions. TenOfAllTrades(talk) 15:21, 18 January 2013 (UTC)[reply]
Then I would like to know the details of how finish affects transparency. This picture captures my observational rationale. About in the middle horizontally and at the bottom quarter vertically, for instance, you'll see a fairly trapezoidal piece that closes to a triangular point at its right. So though its width is at or less than its thickness, it is still green edge-on but transparent face-on. 20.137.2.50 (talk) 15:09, 18 January 2013 (UTC)[reply]
In that image, the camera is at an angle of (let's guess) 60 degrees to the "horizontal" surface of the glass. Light from the ground beneath is refracted going into the bottom of the glass, then again as it emerges from the top and heads towards the camera. The light is attenuated by maybe a quarter inch of glass. But the "vertical" sides of the glass are more like 30 degrees from the camera. The light that emerges from that angle at the sides has been subject to "total internal reflection" and bounced around perhaps dozens of times within the sheet before emerging towards the camera - the total distance travelled through the glass could be a dozen times more than the relatively straight path taken on the horizontal sides. Hence more attenuation - and it looks green. If you look at some of the smaller chunks (especially the ones just to the left of the stone column), the horizontals and verticals look about the same color. SteveBaker (talk) 15:19, 18 January 2013 (UTC)[reply]
Jayron gave you the right answer. You may like to read up on optical fibre. Optical fibre is used by phone companies to send information via very thin glass strands dozens of kilometers long. The glass used is essentially quite normal silica glass, equally transparent in all directions. However, it is thousands of times more transparent than standard silica glass due to extreme purity, so that the signal can go end to end. Ratbone 120.145.29.226 (talk) 15:31, 18 January 2013 (UTC)[reply]
More reading: See Total internal reflection, which is exactly what occurs when you view a pane of glass edge on. Light simply can't escape in that direction. --Jayron32 15:56, 18 January 2013 (UTC)[reply]
That was nice. Sunny Singh (DAV) (talk) 13:18, 19 January 2013 (UTC)[reply]
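Two numbers make the face-on/edge-on asymmetry concrete: the critical angle for total internal reflection, and the exponential (Beer–Lambert) attenuation over the very different path lengths. The refractive index is the typical textbook value for window glass; the absorption coefficient below is purely illustrative, not a measured value for any real pane:

```python
import math

n_glass = 1.5   # typical refractive index of soda-lime window glass

# Critical angle for total internal reflection at a glass-air boundary:
critical_deg = math.degrees(math.asin(1.0 / n_glass))   # about 41.8 degrees

# Beer-Lambert law: the transmitted fraction falls exponentially with
# the path length travelled inside the glass.
ALPHA_PER_CM = 0.1   # assumed absorption coefficient, illustration only

def transmission(path_cm):
    return math.exp(-ALPHA_PER_CM * path_cm)

through_face = transmission(0.6)     # ~6 mm pane viewed face-on
through_edge = transmission(100.0)   # ~1 m of bounced path, edge-on
```

Light hitting the edge at more than about 42 degrees is trapped and bounces along the sheet, so the effective path is metres rather than millimetres, and even a weak green tint (from iron impurities) becomes dominant.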

12cm

is a man with 12cm wide feet (at the widest point, the ball of foot) a 4e, 5e or 6e wide width shoe?--Shoes15151617 (talk) 17:12, 18 January 2013 (UTC)[reply]

That's impossible to know without knowing the length as well. Per the Wikipedia article on shoe size Shoe widths are relative to the length of the shoe, so a size 10/4E width foot will be a different width than a size 14/4E shoe (assuming U.S. measurements, as the "4E" width designation is usually a U.S. designation). If you have a question, you should visit a shoe store and get measured; most shoe stores have a Brannock Device to measure your foot, and that can give you an idea of your correct size. --Jayron32 17:26, 18 January 2013 (UTC)[reply]
I hate that device, since it doesn't measure height. As a result, they always recommend normal width for me, when I really need wide shoes to accommodate the additional height of my feet. StuRat (talk) 17:39, 18 January 2013 (UTC)[reply]
As with any tool, it just gets you started. You, for example, know that your feet run wide from what the device tells you. So, you're going to tend to find shoes sized somewhat wider than it recommends. The idea is that, if you know absolutely nothing about your shoe size, it gives you a ballpark to start from. You should always try on the shoes in question, and several sizes for each style, because styles and manufacturers will all vary somewhat, so even though you're a 10.5/4E in Nike sneakers, you may find you're an 11/D in Timberlands. The device is useful, but only if you're not unwise in how you use it. Try the shoes on regardless... --Jayron32 18:30, 18 January 2013 (UTC)[reply]
The device doesn't seem to help. Since I need to try them on to see if they fit anyway, and know my approximate size, why bother with the device at all ? StuRat (talk) 19:07, 19 January 2013 (UTC)[reply]

I am a 11 1/2 shoe length with 12cm wide feet. Is that a 4e , 5e or 6e?--Shoes15151617 (talk) 19:52, 18 January 2013 (UTC)[reply]

Neurotransmitters - if someone is being hard with dancing in clubs

This is not a medical advice, just a theoretical case to evaluate.

someone go to lot's of dance clubs, and dance very genteelly, he\she could be with someone who "knocks out" in dancing, he\she could sit and the partner will dance near him - but when he\she dances, it's always have to be clumsy, absent, slow, gentle, he/she being hard with this because it potentially hearts him/her - hearts the experience. the one do fell the need to do dancing but something just blocks it from it. alchaol seems to worsen it (by upgrading neural inhibition?)

i think it have to do with levels on neurotransmitters, what do you guys think of such matter? maybe the individual lacks dopamine?, maybe it needs caffeine/xanthins? — Preceding unsigned comment added by 79.176.113.107 (talk) 17:44, 18 January 2013 (UTC)[reply]

The question is hard to understand because of poor English, so I have to guess at what you mean. If you are asking whether slow and clumsy dancing can be caused by altered neurotransmitter levels, the answer is yes. But there are other possible causes too. Looie496 (talk) 18:02, 18 January 2013 (UTC)[reply]
poor english? excuse me?, what neurotransmitters could play a role?, sure that a neuro-informative level could be another option (the individual doesn't have neural information about how to dance), but, i am interested to hear opinions about the neurotransmitters that has to do with dancing. — Preceding unsigned comment added by 79.176.113.107 (talk) 19:04, 18 January 2013 (UTC)[reply]
Dopamine is the most obvious possibility. A person with Parkinson's disease (caused by loss of dopamine cells) will have great difficulty dancing. But also a person who is depressed (low levels of norepinephrine and serotonin, among other things) will often be uninterested in dancing. Looie496 (talk) 21:44, 18 January 2013 (UTC)[reply]
What do you mean by "knocks out" in dancing? What do you mean by "potentially hearts him/her - hearts the experience"? Do you mean "potentially loves him/her"? Do you mean "potentially gives him/her a heart attack"? It's not that your English is poor, it's that there's too much slang in it for those of us with more experience (OK and who are older) to be able to give you a proper answer. --TammyMoet (talk) 09:53, 19 January 2013 (UTC)[reply]
What Looie was trying to say is that you're English is shit. 78.150.17.65 (talk) 04:53, 21 January 2013 (UTC)[reply]
Do you mean "your"? - Goodbye Galaxy (talk) 17:58, 21 January 2013 (UTC)[reply]
I think Developmental dyspraxia is probably as good as you'll get from Wikipedia about such problems. We can't give medical advice or diagnosis of individual cases. As to neurotransmitters there's a large number of them and people aren't at all certain what they all do. Dmcq (talk) 20:00, 18 January 2013 (UTC)[reply]
Being able to dance requires a sense of rhythm. Our sense of rhythm arises from certain neural structures - timing neurons. It is a brain wiring configuration thing, not just an adequate supply of neurotransmitters. It is also highly variable between individuals. I was always unable to dance. At one time I got involved in electronic music, and from there got interested in playing guitar. That was terribly difficult at first, as a sense of timing/rhythm is required for that as well. But, by persistence, after a while things "clicked" and I was able to play. I was then able to dance as well. So dancing skill can be acquired. Floda 121.221.210.234 (talk) 00:18, 19 January 2013 (UTC)[reply]

Alex Hum

please either consider creating such an article yourself or clarifying your question regarding references, if you have one
The following discussion has been closed. Please do not modify it.

Alex Hum is making revolutionary discoveries in the field of intelligence clothing such as sunglasses that one can watch television. I have found very little on this pioneer, and many around college campuses are calling him the next Steve Jobs. Below is a link from ScienceDirect.com.


Alex Hum, voted as one of the outstanding people of the 20th century by the Cambridge International Biographical Centre, Cambridge, England, has achieved much of his world-wide attention because of his aim and vision of infusing wireless technologies into everyday high-tech devices and, now into, haute-couture. Dr. Hum is managing the international i-Wear consortium (sponsored by Adidas, Energizer, Seikon–Epson, France Telecom, Levi's, Samsonite, Bekintex, Recticel, Philips, Siemens, WL Gore, Courreges, Vasco Data Securities, and more). i-Wear is a multi-disciplinary research and industrial consortium to invent the deep future of intelligent clothing. In addition, he is the Chief Scientist in Starlab Research Laboratories nv/sa in Belgium. He was a scholar and obtained a direct Ph.D. in RF/Microwave Engineering. He was a Member of Technical Staff at the Centre for Wireless Communications, with experience in project management and industrial collaboration with top technology companies. His specialities are RF/microwave circuit and system design, wireless technology, 3G cellular systems, antenna systems, and the design of RFIC and MMIC integrated circuits. A member of IEEE and IEE – Dr. Hum publishes regularly in internationally-refereed journals and frequently addresses international conferences. He holds several patents. Dr. Hum is listed in Who's Who in Science and Technology as well as Who's Who in the World. — Preceding unsigned comment added by 199.120.31.20 (talk) 19:11, 18 January 2013 (UTC)[reply]

mrsa

I heard if someone gets a mrsa infection they carry mrsa for life is that true? Is that also true for non mrsa staph?--Shoes15151617 (talk) 19:54, 18 January 2013 (UTC)[reply]

Do you mean MRSA?--Shantavira|feed me 20:15, 18 January 2013 (UTC)[reply]

yes--Shoes15151617 (talk) 20:53, 18 January 2013 (UTC)[reply]

In general, "going dormant" only to emerge later and cause more problems, is more a behavior I associate with viruses than bacteria. StuRat (talk) 02:45, 19 January 2013 (UTC)[reply]
  • Bacteria can colonize people; for example, MRSA can colonize the nose without causing disease for long periods of time, then under certain circumstances it can cause disease in that same person. Example: PMID 23290578 and PMID 18374690 -- Scray (talk) 03:08, 19 January 2013 (UTC)[reply]
  • Staph aureus is very often found on the skin, and as Scray said, also in the nose. MRSA is just a strain of Staph aureus which is resistant to methicillin. So if someone has had lots of S. aureus infections which have been treated with antibiotics (note that minor issues such as boils and sinusitis can be caused by S. aureus infection) it is likely that there will be some bacterial cells on the skin or in the nose which are methicillin resistant, and they can happily stay there without causing any issues. douts (talk) 13:54, 19 January 2013 (UTC)[reply]
  • Hmmmm... I'm having an unexpected amount of trouble trying to get a straight answer about the risk of reinfection (as opposed to infection rate in those colonized). Some sources like [8][9][10] are interesting. Perhaps the problem is that conceptually a person who is "colonized" is infected, making the question philosophically invalid; yet I feel like there should be a distinction between a few bacteria hiding out intracellularly and a visible, active infection that would make such a number possible to obtain. Wnt (talk) 17:38, 20 January 2013 (UTC)[reply]

heatwave?

can anyone tell me why this http://www.bbc.co.uk/news/world-asia-21072347 says it was a "heatwave" in sydney today of 46C but this http://www.weather.com/weather/today/Sydney+ASXX0112:1:AS says it only reached 24C ? --Shoes15151617 (talk) 20:59, 18 January 2013 (UTC)[reply]

The 46C reading is reiterated by the Sydney Morning Herald here, by the Nine Network here, and by Weather Underground here. -- Finlay McWalterTalk 21:14, 18 January 2013 (UTC)[reply]
The cooler temperature is for the 19th - the BBC story (from the 18th) mentions the expected sharp drop in temperature "However, meteorologists have forecast a dramatic change in weather overnight in Sydney, with thunder storms expected to bring a rapid drop in temperatures". Mikenorton (talk) 21:31, 18 January 2013 (UTC)[reply]
Yes, it's a forecast for the 19th. It says "Today" but that's local time where it's already the 19th. The "Yesterday" link for the 18th says "115°F High". That's 46°C. PrimeHunter (talk) 21:36, 18 January 2013 (UTC)[reply]
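The two reported figures are in fact the same reading in different units, which a quick conversion confirms:

```python
# Sanity check on the two reported temperatures: the "Yesterday" figure of
# 115 °F and the BBC's 46 °C are the same reading in different units.
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

print(round(f_to_c(115), 1))  # 46.1, matching the reported 46 °C
```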
Incidentally, the fire in Victoria that killed one man at Seaton yesterday (the same one that's cut off Licola) was only 20km from where I live, and day was like night here (Maffra) yesterday from all the smoke. The imminent danger has passed but we remain on high alert, because the fire's predicted to spread up into the high country where there's even more abundant dry fuel, gather a great deal more energy, and possibly turn back this way. Australia has been enduring its worst heatwave and bushfire season on record, with almost no part of the country untouched, and temperature records falling like flies. See 2012–13 Australian bushfire season. -- Jack of Oz [Talk] 21:54, 18 January 2013 (UTC)[reply]
And it's not just the heat. An absence of rain for over a month hasn't helped. My vegie garden is very sad. But it's worth noting that fatalities from the fires so far have been very low. Just luck? Not sure. HiLo48 (talk) 21:58, 18 January 2013 (UTC)[reply]
We learnt a lot from the Black Saturday bushfires in 2009, which killed 173 people. -- Jack of Oz [Talk] 22:30, 18 January 2013 (UTC)[reply]
True, but we still have opposing opinions on what's the safest approach when fire is on its way, and whether fuel reduction burns are a good idea, and how and when to conduct them, with the proponents of each view 100% certain that their view is correct. Still, the ongoing debate keeps the issue of fire safety in people's minds, and that can only be a good thing. HiLo48 (talk) 04:13, 19 January 2013 (UTC)[reply]

January 19

About AHL

Is AHL special for each kind of bacterium? — Preceding unsigned comment added by Viacoha (talkcontribs) 02:06, 19 January 2013 (UTC)[reply]

What does the American Hockey League have to do with bacteria? ←Baseball Bugs What's up, Doc? carrots→ 02:12, 19 January 2013 (UTC)[reply]
[11] Gzuckier (talk) 08:50, 19 January 2013 (UTC)[reply]

AHL → acyl-homoserine lactones (enables many gram-negative bacteria to engage in quorum sensing) — Preceding unsigned comment added by Viacoha (talkcontribs) 02:16, 19 January 2013 (UTC)[reply]

Yes, different bacterial species generally have different AHLs. 24.23.196.85 (talk) 02:38, 19 January 2013 (UTC)[reply]

Astronauts talking in space

You know that sound needs a medium to travel. So, if two astronauts touched their helmets together and talked, would they be able to hear each other's voices? 27.62.9.222 (talk) 12:31, 19 January 2013 (UTC)[reply]

Yes, that should work if the helmets touch at points that are made of some hard material (glass, plastic, metal, etc - rubber would likely work poorly). 88.112.41.6 (talk) 14:02, 19 January 2013 (UTC)[reply]
Also, if they only touch at a single point, that might not be enough. That is, the volume level might be too low to hear. StuRat (talk) 19:02, 19 January 2013 (UTC)[reply]
They did that in First Men in the Moon (1964 film). Bubba73 You talkin' to me? 02:59, 20 January 2013 (UTC)[reply]
And in Homeward Bound. Whoop whoop pull up Bitching Betty | Averted crashes 05:07, 20 January 2013 (UTC)[reply]

Aurora

Are auroras harmful? What would happen if a flying object, e.g. a plane, entered one? 27.62.9.222 (talk) 12:42, 19 January 2013 (UTC)[reply]

No. Auroras are caused by energetic charged particles (solar wind) colliding with atoms in the Thermosphere, far above the heights that most planes fly at. Even if the space shuttle were to fly through one, it's very unlikely that any harm or damage would occur, since the interaction is at the atomic level. douts (talk) 14:05, 19 January 2013 (UTC)[reply]

Two bar magnets tied together

Suppose two bar magnets are tied together with a string, one placed over the other, so that the north pole of the first lies above the south pole of the second, and the south pole of the first lies above the north pole of the second. If the magnets stay tied together for a long time, e.g. one month, will their poles interchange, or will they remain as they initially were? Britannica User (talk) 13:13, 19 January 2013 (UTC)[reply]

If not heated they are likely to remain the same. Ruslik_Zero 18:31, 19 January 2013 (UTC)[reply]
For the record, they don't even need to be tied together -- opposite poles attract, so they'll stick together naturally. BTW, in this scenario they'll actually keep their magnetism BETTER than if they are stored separately -- they will form a magnetic circuit, which will reduce the leakage of magnetic energy. (The same thing is often done with horseshoe magnets, by either letting two of them cling together end-to-end, or else by placing a keeper bar across the poles.) 24.23.196.85 (talk) 05:12, 20 January 2013 (UTC)[reply]
If they were tied the other way round, then they might lose a bit of magnetism over a month, especially if dropped, hit with a hammer or heated, but I can't think of any conditions under which they would exchange poles. Dbfirs 17:35, 20 January 2013 (UTC)[reply]

Gas from posterior side of body

Please don't take this question as a funny one. What is the common term for gas released from the posterior side of our body? Why does it have a foul smell? And why does it sometimes produce a sound? Sunny Singh (DAV) (talk) 13:32, 19 January 2013 (UTC)[reply]

See Flatulence. Mikenorton (talk) 13:34, 19 January 2013 (UTC)[reply]
Please don't take this answer as a funny one, but the common term is a fart. This, however, is somewhat rude, so don't go round saying it to people you don't know. A common, more polite term is just "gas", as in "I can't eat chillies, they give me gas". Again, it's not something that people discuss much, unless needed. IBE (talk) 15:14, 19 January 2013 (UTC)[reply]
I think "gas" can either mean farting or burping. That is, how the gas escapes the body isn't specified. StuRat (talk) 18:59, 19 January 2013 (UTC)[reply]
The medical term for gas generated in the digestive tract is flatus. Gandalf61 (talk) 17:32, 19 January 2013 (UTC)[reply]
One of the stranger medical devices is the flatus bag, designed to collect farts: [12]. Apparently the flatus gases are sometimes analyzed [13], while at other times they are just collected to prevent the patient and medical staff from being exposed to unpleasant and potentially toxic levels of flatus gas. StuRat (talk) 19:15, 19 January 2013 (UTC)[reply]
One can find almost anything on the Internet. http://www.fartnames.com/ has a list of euphemisms, including Trouser trumpet, Message from the Interior, Under-thunder, and my favorite, Step on a Duck. --Guy Macon (talk) 18:09, 19 January 2013 (UTC)[reply]

Semiautomatic cook off

As the article on Cooking off mentions, it can potentially happen with semiautomatics that fire from a closed bolt position. If a magazine was loaded with the first cartridge being full of a pyrotechnic such that it heated the chamber intensely, could it then just unload the rest of the rounds as if by fully automatic fire? 210.210.129.92 (talk) 16:29, 19 January 2013 (UTC)[reply]

If the following conditions are met, it could happen:
  • The chamber would have to be hot enough but not the magazine. If, for example, the entire firearm is in a fire, it is likely that the rounds in the magazine will cook off first (less thermal mass, so they get hot first).
  • As you mentioned, the firearm would have to fire from a closed bolt.
  • The action would have to be able to cycle at that temperature despite parts expanding and lubrication burning off.
  • There would be a delay as each new round heats up. This delay would have to be shorter than the time it takes for the chamber to cool down enough so that it doesn't set off a round.
  • The trigger mechanism would have to be such that an unpulled trigger or the safety (if on) only stops the firing pin from engaging. If it is designed so that an unpulled trigger also locks the cycling, the second round would not make it into the chamber. I don't know if any actual firearms are designed this way. See Trigger (firearms). --Guy Macon (talk) 17:29, 19 January 2013 (UTC)[reply]
If the chamber is hot enough it doesn't matter why it's hot enough. But that said, Guy's last point is kind of interesting... I don't know enough to answer that offhand, but I'll pay attention to that point in the future. Shadowjams (talk) 20:58, 19 January 2013 (UTC)[reply]

Is there relation between yank and work ?

Is there relation between yank and work ? Is there any formula involving both yank and work ?Sunny Singh (DAV) (talk) 18:35, 19 January 2013 (UTC)[reply]


Well, "yank" is the derivative of force with respect to time; "work" is force moved through a distance. How "related" is that? Not much - a force can vary over time without producing any motion, so the work done is zero. (Imagine a battery-powered electromagnet stuck to your refrigerator... as the battery runs down, the force changes (a "yank") - but no actual mechanical work is produced.) I'm sure there are plenty of formulae that incorporate both terms - nothing immediately comes to mind though. SteveBaker (talk) 19:12, 19 January 2013 (UTC)[reply]
I'm not doubting Steve's derivative answer above, except to comment that "yank" is not a precisely defined term. In its original (Scottish or mid-nineteenth-century American) meaning, I think it included a connotation of some displacement, and so would involve some work being done. I can't think of any formulae either. You would need to specify some parameters of a "yank" before you could deduce anything at all about the work done by it. A derivative with respect to time has no simple connection with an integral (of force) with respect to displacement, except that it's the same force. The effects of the force depend on how it is applied. A large yank on an immovable object will do no work, but a small yank over a large distance might do much work. Dbfirs 17:30, 20 January 2013 (UTC)[reply]
The definition of "yank" that I use is mentioned in our article "jerk (physics)"...sorry, no reference though. SteveBaker (talk) 15:20, 21 January 2013 (UTC)[reply]
Yes, the "mass times jerk" sense is also mentioned in Wiktionary, but it is a modern sense not mentioned in older dictionaries. Dbfirs 23:04, 21 January 2013 (UTC)[reply]
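The distinction discussed above can be put in numbers (the force profile below is purely illustrative): a force that varies in time has nonzero yank everywhere, yet does zero work if nothing moves, while the same force dragged through a distance does do work.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)   # time, s
F = 5.0 * np.exp(-0.2 * t)         # decaying force, N (the battery running down)

# Yank: derivative of force with respect to time.
yank = np.gradient(F, t)           # N/s -- nonzero throughout

# Work: integral of force over displacement (trapezoid-free sum of F*dx).
x_stationary = np.zeros_like(t)    # the fridge magnet never moves
work_stationary = float(np.sum(F[:-1] * np.diff(x_stationary)))

x_moving = 0.1 * t                 # the same force pulling something along, m
work_moving = float(np.sum(F[:-1] * np.diff(x_moving)))

print(np.abs(yank).max() > 0)  # True: yank is nonzero the whole time
print(work_stationary)         # 0.0: no displacement, no work
print(work_moving > 0)         # True: with displacement, work is done
```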

How are images of galaxies taken?

I want to know how astronomers take images of galaxies. For example, how are images of the Milky Way taken, although we are inside it? It is just like taking an image of a house while sitting in a room inside that house. Parimal Kumar Singh (talk) 19:09, 19 January 2013 (UTC)[reply]

Photographs of other galaxies are taken with cameras (or comparable sensors) attached to telescopes. If you see a "photo" of the Milky Way galaxy that looks complete, like this, it's either a photo of some other galaxy (that hopefully is similar to the Milky Way galaxy) or it's a simulation. -- Finlay McWalterTalk 19:16, 19 January 2013 (UTC)[reply]
Or more accurately, it is like making a world map without a satellite: [14] --140.180.253.61 (talk) 20:40, 19 January 2013 (UTC)[reply]
That's a great example. It wasn't until 1972 that we had an actual photograph of the earth as opposed to smaller photos stitched together. --Guy Macon (talk) 20:56, 19 January 2013 (UTC)[reply]
That is not what our article claims ("The Blue Marble was not the first clear image taken") Rmhermen (talk) 23:40, 19 January 2013 (UTC)[reply]
This photo from 1967 is described as the first one. File:ATSIII_10NOV67_153107.jpg RudolfRed (talk) 00:50, 20 January 2013 (UTC)[reply]

The answer to Parimal Kumar Singh's question is simple. You're right that we are inside the Milky Way, but we can see in all directions from Earth, so basically we can see all the parts of the Milky Way, just not the Milky Way as a whole. With our technology right now, we can easily match them up (just like putting a puzzle together) to create an accurate image of what the Milky Way would look like as a whole. By the way, your analogy of taking a picture inside a house isn't comparable to taking a picture of the Milky Way, because the house has walls to block your view, unlike the Milky Way. 184.97.244.130 (talk) 05:43, 20 January 2013 (UTC)[reply]

Actually, we cannot see all parts of the Milky Way - a large part is obscured by interstellar dust clouds and the galactic center. We only fairly recently found out that we most likely live in a barred spiral galaxy, not a plain spiral. --Stephan Schulz (talk) 11:23, 20 January 2013 (UTC)[reply]

if light is just electromagnetic spectrum, why don't we have electric web cams with equal precision?

What is the exact reason that I can't replace a web cam with an electric version (that works in the electric part of the spectrum) and replace a light source with a source of electrons. Do electrons not bounce off people etc the same as light does?

Basically, other than color (due to the research into it), what's so special about light versus electricity when it comes to cheap 'imaging'? 178.48.114.143 (talk) 21:51, 19 January 2013 (UTC)[reply]

Electrons will interact with the air, not travel through it, which is why tubes like TV tubes are evacuated. If you up the energy enough to get through the air, you make a particle beam weapon. -- Finlay McWalterTalk 21:56, 19 January 2013 (UTC)[reply]
Fair enough. And if we imagine a room without air, then can a web cam work over the electric part of the spectrum just as well as light, as long as we 'illuminate' the room with a source of electrons? Or, in addition to not traveling through air, do they also not bounce off of objects nicely? 178.48.114.143 (talk) 22:23, 19 January 2013 (UTC)[reply]
You mean one of these, only larger? No reason why that wouldn't work. A bit like killing a fly with a bulldozer, but it should work. --Guy Macon (talk) 22:29, 19 January 2013 (UTC)[reply]
For the ordinary things you'll find in a room, the electrons will be absorbed by the surfaces of many of them, yielding no picture. When you see a picture of an object like a fly imaged with an electron microscope, it has probably been plated with a thin layer of gold so that it reflects the incoming electrons - see Electron microscope#Sample preparation. If you did that, you'd still only get a monochrome image. -- Finlay McWalterTalk 22:37, 19 January 2013 (UTC)[reply]
Do you mean I would get a monochrome image because I've just plated everything with gold? (after sucking all the air out). But, yeah, question answered. 178.48.114.143 (talk) 23:10, 19 January 2013 (UTC)[reply]
I think one thing should be clarified first. Light is an electromagnetic wave, or, alternatively, a stream of photons - see Wave-particle duality. Photons are uncharged particles with zero rest mass, and hence travel at the speed of light. They typically only interact with matter by being absorbed or emitted whole. Electrons, on the other hand, are charged particles with a rest mass of roughly 1/1800 u. As a result, electrons can travel at any speed below that of light, are influenced by electric and magnetic fields, and can interact with other charged particles in ways photons cannot. I'm not aware of any material that is transparent to electrons in the same way glass is to visible light. While electron beams are also called beta rays, they are not part of the electromagnetic spectrum, and have quite different properties from electromagnetic waves. In particular, beta rays are ionizing radiation, and hence quite unhealthy. --Stephan Schulz (talk) 23:27, 19 January 2013 (UTC)[reply]
That's interesting. Could you explain where radio itself (like am/fm, wifi, bluetooth, etc) falls here? Is it like light, but goes through stuff? Is it (as the name implies) part of the electromagnetic spectrum? If so, why doesn't light go through stuff the same way? I probably had more this in mind than electrons, I didn't realize that electrons are so different from radio waves - I thought they were similar, thanks for explaining. 178.48.114.143 (talk) 00:26, 20 January 2013 (UTC)[reply]
You need to read electromagnetic spectrum which puts all of the various types of light into relation with each other. Radio is the term for a range of wavelengths of light (colors, if you will) which are significantly lower in energy than light which our eyes are tuned to see. All light interacts with some matter, but different light interacts with different matter. For example, glass is generally transparent to most light in the visible range, but it is opaque to most wavelengths in the ultraviolet range. Also, radio doesn't just "go through stuff". Radio can be blocked if it lacks a straight path to reach the receiver (known as a line-of-sight path). Radio is reflected by parts of the Earth's atmosphere, so it can be reflected around smaller objects and hills and things, but many places exist in a "radio shadow", especially in mountainous areas where large mountains can block effective line-of-sight to radio sources. The article Radio propagation covers some of this. --Jayron32 00:43, 20 January 2013 (UTC)[reply]
I'm sure this is oversimplified (the solid state physics lecture I took for my minor is too many years in the past), but I'll try. As I wrote above, photons can only be absorbed or emitted whole - light is travelling in quanta. So in order to be absorbed, a photon needs to interact with something that can absorb exactly the amount of energy provided by the photon. In a single, isolated atom, electrons occupy shells (or orbitals), which correspond to very precise energy levels. Each shell can only be occupied by a given number of electrons. A photon can only be absorbed by the atom if either it knocks an electron completely off the atom, or if there is an electron in some shell, and an empty spot in a higher shell, and the difference in energy between the two shells corresponds to the energy of the photon. For most materials and visible light, the first case happens rarely, if at all, because it requires a lot of energy. For X-rays and gamma rays, this does happen, hence these are also ionizing radiation - knocking an electron off a neutral atom makes it into an ion. This is, BTW, the mechanism behind the Fraunhofer lines in spectroscopy - the black lines correspond to a valid state transition in some atom, so the energy can be absorbed. In a solid body, neighbouring atoms interact, and as a result, the simple energy levels get spread out into energy bands. But electrons still can only be in one of the bands, and the number of electrons in a band is limited. Bands are normally filled from lower energy levels to higher energy levels. In a conductor, the highest band with electrons is only partially filled, so electrons can absorb very small quanta of energy, and simply move a bit up in the band. That is why metallic conductors are usually completely opaque. In an insulator, the lowest filled level is (nearly) full, and the next higher level is (nearly) empty.
Thus, for an electron to absorb a photon, the photon must provide enough energy to pass the band gap to the next higher band. If the band gap is large enough, visible light photons cannot provide this energy. Thus they cannot be absorbed, and hence the material is transparent. UV light has a higher frequency and hence higher-energy photons, so it is absorbed by material that is transparent to visible light, like e.g. window glass. There is a slight complication: except at absolute zero, not all electrons are in their ground state - some are excited, and hence in a higher band than usual. Thus, they can absorb some low-energy photons - no real material is completely transparent. But the number of excited electrons usually is low enough that this happens rarely. If, on the other hand, the material is completely ionized, we have a plasma, in which free electrons and ions can interact directly with light. As a result, plasmas are opaque. At the time of the big bang, as the universe expanded, it cooled, and at a critical temperature, the hydrogen and helium in the universe turned from plasma to a neutral gas. Only at that time did the universe become (mostly) transparent. We can still see that moment in time - the phenomenon is usually called the surface of last scattering if we talk about how it came about, or the cosmic microwave background if we talk about what we observe today. --Stephan Schulz (talk) 10:10, 20 January 2013 (UTC)[reply]
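The band-gap argument above can be sketched numerically. The ~4 eV gap used here for window glass is an illustrative round number, not a measured property of any particular glass:

```python
# Photon energy E = h*c/wavelength, compared against an assumed band gap.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Energy of a photon of the given wavelength, in eV."""
    return h * c / (wavelength_nm * 1e-9) / eV

glass_gap_ev = 4.0  # assumed band gap for window glass (illustrative)

for name, wl in [("red, 700 nm", 700), ("violet, 400 nm", 400), ("UV, 250 nm", 250)]:
    e = photon_energy_ev(wl)
    verdict = "absorbed" if e > glass_gap_ev else "transmitted"
    print(f"{name}: {e:.2f} eV -> {verdict}")
```

Visible photons (1.8-3.1 eV) fall short of the assumed gap and pass through; the 250 nm UV photon (~5 eV) exceeds it and is absorbed.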
There is no such thing as "the electric part of the spectrum". The whole spectrum, light included, is electromagnetic, but it does not involve electrons. It sounds as if you think the ordinary process we call electricity (i.e. the flow of electric currents) lives somewhere on the electromagnetic spectrum: it doesn't, it's an entirely different process. --ColinFine (talk) 21:34, 20 January 2013 (UTC)[reply]

January 20

What happens to helium in our Sun

I know the Sun fuses hydrogen into helium, and that after billions of years it will eventually run out of hydrogen, start to fuse helium into bigger atoms, and eventually collapse. What happens to the helium in those billions of years? Does it just stay somewhere inside the Sun? Or is it broken down into hydrogen again to restart the fusion cycle? 184.97.244.130 (talk) 00:18, 20 January 2013 (UTC)[reply]

Eventually it will become carbon through the triple alpha process.--Gilderien Chat|List of good deeds 00:25, 20 January 2013 (UTC)[reply]
That's not what I'm asking... What happens to the helium in our Sun right now, while hydrogen is still abundant? 184.97.244.130 (talk) 03:20, 20 January 2013 (UTC)[reply]
Nothing at all. Currently, the Sun is fusing hydrogen into helium. This produces enough energy so that the Sun cannot collapse and become dense enough to fuse helium. Therefore, the Sun is hot enough to fuse hydrogen but not hot enough to fuse helium (yet). Whoop whoop pull up Bitching Betty | Averted crashes 04:46, 20 January 2013 (UTC)[reply]
I am not sure that is correct, Whoop. I'd really like to see some sources and articles quoted. Helium three is being constantly generated and destroyed at this point. See proton-proton chain reaction. There's no a priori reason to assume that certain, perhaps low-rate, nuclear reactions with Helium four are not also going on at this point. Citations are needed. μηδείς (talk) 04:52, 20 January 2013 (UTC)[reply]
While the rate of helium burning isn't strictly zero, it's actually astonishingly close. As noted (and footnoted) in the article already linked by Gilderien, the rate of the triple-alpha process responsible for helium fusion depends on the core temperature to the fortieth power. In a low-to-middling mass star like our Sun, the rate might as well be zero right up until the helium flash. TenOfAllTrades(talk) 05:09, 20 January 2013 (UTC)[reply]
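The fortieth-power dependence quoted above can be made concrete with a couple of ratios (the 0.15 core-temperature ratio is a rough illustrative figure for today's solar core versus helium-flash conditions, not a precise value):

```python
# Rate of the triple-alpha process scales roughly as T**40 near ignition.
def rate_ratio(t_ratio, exponent=40):
    """Relative reaction rate for a given ratio of core temperatures."""
    return t_ratio ** exponent

print(f"{rate_ratio(1.10):.0f}")   # a 10% hotter core burns helium ~45x faster
print(f"{rate_ratio(2.00):.2e}")   # twice the temperature: ~1e12 times faster
print(f"{rate_ratio(0.15):.1e}")   # rough core-vs-flash ratio: effectively zero
```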
So basically helium is just there without any interaction until the Sun is hot enough to fuse them, correct?184.97.244.130 (talk) 05:31, 20 January 2013 (UTC)[reply]
Yes, it is effectively simply accumulating in the sun as an end product of the hydrogen fusion process at the moment. — Quondum 08:08, 20 January 2013 (UTC)[reply]
Is the helium mostly at the centre, or does it spread evenly throughout the mass of the Sun? I know we cannot see the centre of the Sun, but maybe some simulations have given some hints.--Lgriot (talk) 09:00, 21 January 2013 (UTC)[reply]
How well mixed together the hydrogen and helium are depends on the size of a star. For a small red dwarf, it's all mixed together evenly (which is part of the reason they burn for so long - hundreds of billions of years, compared to a mere 10 billion for the Sun). For the Sun, it's more concentrated in the core. Convection zone and Radiation zone are relevant articles, although they aren't very good... --Tango (talk) 11:29, 21 January 2013 (UTC)[reply]
Thx --Lgriot (talk) 13:35, 21 January 2013 (UTC)[reply]

why is this not used in air?

http://en.wikipedia.org/wiki/Modulated_ultrasound

why isn't this used for near-field communications (like bluetooth etc). is it because radio is so much easier? But radio requires licenses and has limited spectrum, not that this doesn't but i would imagine while no one else thinks to use it it does!  :) 178.48.114.143 (talk) 00:22, 20 January 2013 (UTC)[reply]

Lol, I found this: http://www.theregister.co.uk/2012/11/08/ultrasonic_bonking/
But in fact this is just iphone hardware. couldn't specialized hardware be a bit better? 178.48.114.143 (talk) 00:24, 20 January 2013 (UTC)[reply]
Take a look at these:
http://alumni.media.mit.edu/~wiz/ultracom.html
http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA499556
http://www.cs.ou.edu/~antonio/pubs/conf059.pdf
In particular, look at the data rates they have been able to achieve.
--Guy Macon (talk) 00:53, 20 January 2013 (UTC)[reply]
I suspect one problem would be interference. In other words it would work fine if you were the only one using it, but what happens if you were in a crowd with many people using it. There could also be a problem with phase diffraction from multiple sources. (See also Superposition principle and Circular convolution) ~ It would probably be okay if by "short-range" you mean "a few inches". ~E:(talk) 01:17, 20 January 2013 (UTC)[Fixed header:74.60.29.141 (talk) 01:38, 20 January 2013 (UTC)][reply]
It will also drive your dog crazy. And it provides fairly limited bandwidth at plausible frequencies. --Stephan Schulz (talk) 10:14, 20 January 2013 (UTC)[reply]

oligomers and monomers


What are the differences between oligomers and monomers, using equations and mechanisms as a basis for the differentiation? — Preceding unsigned comment added by 41.203.67.133 (talk) 01:08, 20 January 2013 (UTC)[reply]

I'll offer you a hint: Consider what the prefixes "mono-", "oligo-", and "poly-" mean. These are all fairly fundamental concepts in polymer science. Assuming the course these homework assignments belong to is a polymer characterization course you probably want to have these kinds of definitions down cold. (+)H3N-Protein\Chemist-CO2(-) 02:07, 20 January 2013 (UTC)[reply]

polymerisation

What is the difference between the average degree of polymerization and the extent of a reaction, using equations and reaction mechanisms to distinguish between them? — Preceding unsigned comment added by 41.203.67.133 (talk) 01:12, 20 January 2013 (UTC)[reply]

Perhaps you should take a look at our article titled "Degree of polymerization". We're not going to do your homework for you, but if there are any remaining points of confusion we can probably help clarify them.(+)H3N-Protein\Chemist-CO2(-) 01:54, 20 January 2013 (UTC)[reply]

chemistry [polymer chemistry]

What is the difference between number average molecular weight and weight average molecular weight? — Preceding unsigned comment added by 41.203.67.133 (talk) 01:17, 20 January 2013 (UTC)[reply]

You may want to have a look at molar mass distribution. These sorts of definitions are necessary to describe polymers, which are virtually always a mixture of molecular weights. Some methods such as static light scattering are intrinsically weight averaged, since scattering intensity is proportional to molecular weight amongst other things. (+)H3N-Protein\Chemist-CO2(-) 01:43, 20 January 2013 (UTC)[reply]

RF interference

An organization I'm in uses some wireless devices that operate at 915 MHz. They recently replaced a wireless microphone and they say that the new one interferes with the other devices. The mike says it can be set to frequencies between 524 MHz and 865 MHz. Could the mike really be causing interference? (I used them together a few days ago and did not experience any problem.) Bubba73 You talkin' to me? 02:21, 20 January 2013 (UTC)[reply]

The frequency at which it's set isn't necessarily the only frequency at which it broadcasts. See spurious emission. Indeed, devices not made to broadcast RF at all, such as microwave ovens, frequently still do, although at a level that can only cause interference within a few feet (see Radio_transmitter_design#RF_leakage_.28defective_RF_shielding.29). StuRat (talk) 03:41, 20 January 2013 (UTC)[reply]
Microwave ovens are meant to produce RF, though. It's how they cook stuff. And lots of devices use RF at the same or very similar frequencies as microwave ovens. Whoop whoop pull up Bitching Betty | Averted crashes 05:02, 20 January 2013 (UTC)[reply]
They aren't meant to "broadcast" RF, which is what I said. StuRat (talk) 05:06, 20 January 2013 (UTC)[reply]
WP:OR, but my wireless internet in my house goes a little jinky when the microwave is running. --Jayron32 06:03, 20 January 2013 (UTC)[reply]
To Jayron's point, Obligatory xkcd reference. Zunaid 09:02, 20 January 2013 (UTC)[reply]
It could be a harmonic, so we could try some other frequencies. I once lived in a city where you could pick up an AM radio station on two frequencies - one was half or double of their actual frequency. Bubba73 You talkin' to me? 04:23, 20 January 2013 (UTC)[reply]
For it to be a harmonic, wouldn't it have to be set to 457.5 MHz, which is outside the given range? Whoop whoop pull up Bitching Betty | Averted crashes 04:59, 20 January 2013 (UTC)[reply]
For a 2:1 or 1:2 harmonic, yes, but there could be other harmonics, 3:2, etc, I think. Bubba73 You talkin' to me? 05:47, 20 January 2013 (UTC)[reply]
Such devices usually oscillate at the transmit frequency (i.e. there is no internal multiplication), so only integer multiples of the rated frequency are possible - i.e., 1048 MHz, 1572 MHz, etc at the lowest channel setting. Interfering on a sub-harmonic, or in relationships such as 3:2 and 2:3, is not possible. However, radio receiving devices can suffer from blocking - the receive circuits are overloaded and thus de-sensitised by a strong signal close to the frequency they are tuned to. How close in frequency is determined by the design of the receiver and how physically close and strong the interfering signal is. So the answer is yes - the radio mike could well be interfering, without being in a harmonic relationship. Keit124.178.178.83 (talk) 08:45, 20 January 2013 (UTC)[reply]
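The integer-harmonic argument here can be checked directly: for the mike's stated 524-865 MHz tuning range, no channel has a low-order harmonic landing on 915 MHz.

```python
# Is there a mike channel in 524-865 MHz whose nth harmonic hits 915 MHz?
mike_low, mike_high = 524.0, 865.0   # MHz, the mike's stated tuning range
target = 915.0                        # MHz, the interfered-with devices

hits = []
for n in range(2, 6):                 # check 2nd through 5th harmonics
    fundamental = target / n          # channel whose nth harmonic is 915 MHz
    if mike_low <= fundamental <= mike_high:
        hits.append((n, fundamental))

print(hits)   # [] -- 457.5, 305, 228.75, 183 MHz all fall outside the range
```

So if the mike is interfering, blocking (receiver desensitisation) is the more plausible mechanism, as the reply above concludes.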
Thanks. Bubba73 You talkin' to me? 14:51, 20 January 2013 (UTC)[reply]

Altitude / temperature

I've hiked the Grand Canyon a few times, and one thing seems counter-intuitive: it's quite a bit warmer at the bottom than on the rim. Consider the following: a) heat rises. b) air is thinner (less dense) at the top; sunlight should therefore be stronger. c) the bottom is in the shade much of the day, whereas the rim is not. ~ So, ~ why is it warmer at the bottom?    ~:74.60.29.141 (talk) 02:43, 20 January 2013 (UTC)[reply]

My previous home was on a hill and when I was walking, it was warmer at the bottom, in the valley. I figured that at the top the air warmed by the ground got blown away by the wind, but I don't know if that is correct. Bubba73 You talkin' to me? 02:56, 20 January 2013 (UTC)[reply]
Basically the lapse rate for temperature is a function of two things: (1) solar heating occurs almost entirely at ground level; (2) the ability of the atmosphere to retain heat is weaker the higher the altitude. The way those factors trade off for something like the Grand Canyon is not so easy to work out, but you should bear in mind that the south-facing canyon walls are very strongly illuminated, pretty much all the way to the bottom. Looie496 (talk) 04:26, 20 January 2013 (UTC)[reply]
I've always understood that the main function responsible for the lapse rate is the adiabatic expansion / contraction of the circulating atmosphere. Note that adiabatic lapse rate redirects to the lapse rate article you linked. I am a bit confused by the article, though. Is the environmental lapse rate the observed lapse rate at a particular location while the adiabatic lapse rate is the predicted rate due to the modeled adiabatic processes which are the major factor? -- 41.177.85.143 (talk) 07:25, 20 January 2013 (UTC)[reply]
Following up, sources such as this give the following elevations:
South Rim (Bright Angel Trail Trailhead): 6,860 ft
Colorado River (presumably at the Silver Bridge): 2,400 ft
North Rim (Kaibab Trail Trailhead): 8,241 ft
That puts the South & North Rims 4,460 ft and 5,841 ft above the river. An average lapse rate of 3.5°F/1,000 ft would estimate the temperature at South and North Rims to be 15.6°F and 20.4°F above the temperature down near the river. (Using the dry lapse rate of 5.5°F/1,000 ft would give even greater temperature differences of 24.5°F and 32.1°F.) Googling "grand canyon hotter bottom" turns up a lot of pages stating that the temperature at the bottom is 15°F to 20°F warmer than at the rim, suggesting that the adiabatic lapse rate is sufficient explanation. -- 41.177.85.143 (talk) 11:10, 20 January 2013 (UTC)[reply]
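The arithmetic in the post above can be checked with a short script; the elevations are the figures quoted there, and the lapse rates are treated as simple constants (a sketch, not a meteorological model):

```python
# Checking the lapse-rate arithmetic; elevations (ft) are from the post.
river = 2400          # Colorado River, ft
south_rim = 6860      # South Rim trailhead, ft
north_rim = 8241      # North Rim trailhead, ft

avg_rate = 3.5 / 1000   # average lapse rate, degF per ft
dry_rate = 5.5 / 1000   # dry adiabatic lapse rate, degF per ft

for rim, name in [(south_rim, "South Rim"), (north_rim, "North Rim")]:
    rise = rim - river
    print(f"{name}: {rise} ft above river, "
          f"avg {rise * avg_rate:.1f} F cooler, dry {rise * dry_rate:.1f} F cooler")
# South Rim: 4460 ft, 15.6 F (avg) or 24.5 F (dry) cooler
# North Rim: 5841 ft, 20.4 F (avg) or 32.1 F (dry) cooler
```

The average-rate figures match the 15°F to 20°F difference widely reported for the canyon.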
Finally, the questioner should ask themself whether they have ever wondered with equal curiosity why it is cooler on top of mountains, and if not, then why they haven't. It is the same mechanism at work. -- 41.177.85.143 (talk) 11:12, 20 January 2013 (UTC)[reply]
Yes, I realize the same principles apply to mountains, it's just that the difference seems more obvious at the canyon. Plugging numbers into a formula might provide a mathematical description, but not a satisfactory explanation of why higher altitudes are colder than lower ones (given the same approximate latitude). ~:74.60.29.141 (talk) 18:00, 20 January 2013 (UTC)[reply]
The explanation is "expansional cooling" and "compressional warming". When a gas expands it pushes the surrounding air out of the way (so it can expand), and that requires energy, which is taken from the internal thermal energy of the gas, hence the drop in temperature. When a gas is compressed the opposite happens, the energy flows the other way, and the temperature rises. Dauto (talk) 18:19, 20 January 2013 (UTC)[reply]
That explains how an air-conditioner works; but it seems implausible that there would be an energy transference from the rim to the bottom by that means. That would require a pressure differential over time and distance, and the specific air molecules wouldn't necessarily be transferred from one place (low density) to another (high density) where the heat energy is exchanged. ~[Does that make sense?].
I don't mean to sound unappreciative of the answers, it's just that this question has bothered me for quite some time. I'm sure the lapse rate / adiabatsis explains it, but the underlying thermodynamic principles are still unclear to me. Consider the following hypothetical: given a column of air (let's say 1km) - perfectly sealed within a thermally insulated (vertical) container. Leave it alone for a year or two; when you come back the air temperature at the bottom would be higher than that of the top, by an amount consistent with the related equations — right? ~:74.60.29.141 (talk) 21:22, 20 January 2013 (UTC)[reply]
Feeling warmer in a valley is quite common, but not all valleys are warm. All this talk of lapse rate is misleading you. As Looie496 said in his post, air is hotter nearer the ground because the air is heated by the ground, which is heated by incoming solar radiation. The air both absorbs some of the radiated heat (in both directions) and re-radiates it, in both directions. This is the greenhouse effect, resulting in a band of air at altitude that has the minimum temperature - below it the air is warmer as it is close to the warm ground, and above it the air is warmer due to being above most of the air's heat absorption.
Now, think about it. If this was the only factor, and (say) the average ground temperature at some latitude is 20 C, you'd expect the temperature at 1000 m above it to be 13.5 C, applying the standard lapse rate of 6.5 C per 1000 m (note that the lapse rates given in a post above are incorrect). Now, assume you are on level ground in a bloody great hole 1000 m deep. Based on the greenhouse effect, which is what causes the lapse rate, you'd still expect the temperature at the bottom of the hole to be 20 C, and the top of the hole to be 13.5 C.
It can't work that way - the temperature of air at the top of a hole cannot be 6.5 C below the surrounding air, because wind and convection will mix it to equality at the top. Any hot air at the bottom will be less dense and try to rise out of the hole, and be replaced with cooler air.
So what does cause the bottom of some but not all valleys, and the Grand Canyon, to be warmer?
  1. The south facing walls get full sun as Looie496 said.
  2. Some of the heat re-radiated off the south walls gets trapped by the other walls - this is a large scale example of the cavity radiator effect, well known to engineers - to find the approximate temperature of a furnace, drill a small hole in the side - if the furnace is red hot, the hole will appear to be a hotter colour.
Supporting what I've said is the common experience of folk living in the bottom of valleys. Some valleys are warm and some are not. It depends on the alignment of the valley axis vis-a-vis the direction of the sun's radiation, and on whether or not the alignment of the valley axis allows the prevailing winds to penetrate.
Wickwack 120.145.56.251 (talk) 01:11, 21 January 2013 (UTC)[reply]
I'll have to think about this some more (later, it's bedtime)... ~:74.60.29.141 (talk) 04:10, 21 January 2013 (UTC)[reply]
Hey Wickwack, can you clarify "applying the standard lapse rate of 6.5 C per 1000 m (Note that the lapse rates given in a post above are incorrect)."? I assume that you are referring to my 3.5°F/1,000 ft which is the standard value quoted in our article and is equivalent to your 6.5 C per 1000 m. -- 41.177.85.143 (talk) 07:05, 21 January 2013 (UTC)[reply]
I never looked at the Wikipedia article, as they are not intended to be trusted as a data source. They are only intended to guide novice readers to references and sources. I got the standard rate 0.0065 C/m (= 6.5 C per 1000 m) for the altitude of the Grand Canyon from a set of standard tables (Turns & Kraige 2007, page 94) I have. Note that the lapse rate is not constant but varies with altitude - at 4000 m it reduces to 1/10th this value, and at 15,000 m it is zero. At greater altitudes it increases with opposite sign, ultimately reversing sign again. The value you used in your calcs, 3.5 F / 1000 ft, equates to 6.38 C / 1000 m. That is, you used a value 2% low. This is not significant in the context of the question, but I thought that someone would squawk that I used a different value. I should have explained it better - I apologise for the confusion. Wickwack 124.182.14.231 (talk) 07:55, 21 January 2013 (UTC)[reply]
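The 2% discrepancy described above is just a unit conversion, which can be verified directly:

```python
# Converting 3.5 degF per 1000 ft into degC per 1000 m.
f_per_kft = 3.5
c_per_km = (f_per_kft / 1.8) / 0.3048  # degF -> degC, then per-1000-ft -> per-1000-m
print(round(c_per_km, 2))              # 6.38, slightly below the standard 6.5 C per 1000 m
```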
Thanks. I certainly wouldn't quibble over a one to two percent difference in figures which are only given to two significant digits, but the variability of lapse rate with altitude is something which could have a significant impact on my calculations above. -- 41.177.85.143 (talk) 08:18, 21 January 2013 (UTC)[reply]
I agree that local topography, insolation, and winds can have a large effect on the actual, observed lapse rate, but the standard rate falls between the dry and saturated adiabatic lapse rates and successfully predicts the observed rate at the Grand Canyon itself, so I don't understand why additional mechanisms are needed here to explain what appears to be consistent with the thermodynamic model. -- 41.177.85.143 (talk) 07:38, 21 January 2013 (UTC)[reply]
A coincidence is not proof. I have explained why the lapse rate actually is not the full story, as relying on it alone means that the floor of the canyon, around 800 m above sea level, should have a temperature about the same as the area generally, about 2700 m above sea level, and if it did, there would be a pocket of air above the canyon at 2700 m lower in temperature than the air in the area generally, and that cannot be the case, as Looie pointed out. Wickwack 124.182.14.231 (talk) 08:04, 21 January 2013 (UTC)[reply]

The lapse rate in the Grand Canyon is specifically addressed in this (PDF 1.3 MB) 1965 Journal of Applied Meteorology article. -- 41.177.85.143 (talk) 08:46, 21 January 2013 (UTC)[reply]

Note that this paper, which is about the improved horizontal resolution of a then new satellite, talks about an apparent lapse rate, which the author's data indicates was 10 C per 1000 m at the time of measurement, well above the standard value. The article does not use the standard lapse rate to explain why the canyon floor was warmer, it merely says that there appears to be a significant lapse rate and mentions some reasons why it might be thought to occur at the magnitude that it did. The purpose of the paper was to use the Grand Canyon to show how good the horizontal resolution was. Wickwack 120.145.80.46 (talk) 10:38, 21 January 2013 (UTC)[reply]

This PDF link isn't working for me (can't connect to host) but googling "Lawrence E. Stevens The Biogeographic Significance of a Large, Deep Canyon" and looking at the "Quick View" gives a 2012 work which states in section 3.4, Elevation:

Nonetheless, elevation remains an overwhelmingly important ecological state variable due to its strong negative relationship with air temperature and freeze-thaw cycle frequency, and its positive relationship to precipitation and relative humidity. The global adiabatic lapse rate is -6.49 °C/km. Analysis of paired daily minimum and maximum air temperature from 1941-2003 at Phantom Ranch (elevation 735 m) on the floor of GC with the South Rim (2100 m) produces a GC-specific lapse rate of -8.7 °C/km. The >1.3-fold steeper lapse rate in GC is likely a function not only of the dark red and black bedrock color of the inner canyon, but also to aspect. Steep, S-facing slopes in the GCE, particularly those with darker rock color, absorb and re-radiate more heat than do N-facing slopes, which often are shaded from direct sunlight, and are cooler and more humid than S-facing slopes across elevations. Overall, elevation strongly and broadly influences synoptic climate, while aspect exerts strong local control over microclimate and microsite potential evapotranspiration and therefore productivity.

I find it interesting that he does not mention the arid climate as a factor in the lapse rate as the dry adiabatic rate of 9.8 °C/km is greater than that observed in GC. A paper on the subject written by a climatologist would carry more weight than a passing mention by a biologist, but I'd be inclined to pay more attention to either than to anything said here. -- 41.177.85.143 (talk) 11:03, 21 January 2013 (UTC)[reply]

The bottom line:   the primary principle involved would be adiabatic lapse which relates to air density, which relates to barometric pressure (?). Therefore, the barometric pressure at the bottom is consistently higher than at the top. [?]

Localized topography, azimuth orientation (etc.) primarily accounts for the variety of micro-climate conditions.
~Eric the OP:74.60.29.141 (talk) 22:04, 21 January 2013 (UTC)[reply]

Why do people with a fever feel colder?

Seems a bit paradoxical. I had a fever today, and I felt cold; anything I wore gave almost no warmth. Why is that? — Preceding unsigned comment added by 79.176.113.107 (talk) 03:30, 20 January 2013 (UTC)[reply]

Because your body's ability to sense temperature is out of whack. Your body is not an accurate thermometer, and your general sense of warmness and coolness is not really directly connected to the internal or external temperature. Wikipedia has an article on thermal comfort which is a bit bloated, but has some information. Physiologically, your sense of temperature is wrapped up in your Somatosensory system. --Jayron32 03:35, 20 January 2013 (UTC)[reply]
It's common to swing back and forth between feeling hot and cold, when you have a fever. StuRat (talk) 03:37, 20 January 2013 (UTC)[reply]
My understanding is that a fever is a response to infection in which the body tries to raise its temperature in order to increase the activity of the immune system. It does that by raising the homeostatic "set point", and anything below the set point is going to feel cold to you. Looie496 (talk) 04:16, 20 January 2013 (UTC)[reply]
  • It's not really that the body's ability to sense temperature is out of whack, but that the thermostat has been reset either by the immune response or the pathogen itself. See Pyrogen (fever). One theory of fever is that whichever agent induces the fever, the pathogen or the body itself, will function better against its opponent with a raised temperature. Many proteins function best within a set temperature range, and leaving that range can have a huge effect. μηδείς (talk) 04:47, 20 January 2013 (UTC)[reply]

Looie496 is right. Your body has a "normal" temperature. When you get a fever, the "normal" temperature rises. Therefore, what originally was at the "normal" temperature is now colder than the "normal" temperature, and consequently feels cold. Whoop whoop pull up Bitching Betty | Averted crashes 04:52, 20 January 2013 (UTC)[reply]

  • For completeness sake, the homoeostatic set-point for temperature is set by the hypothalamus. Fgf10 (talk) 12:03, 20 January 2013 (UTC)[reply]
  • Fevers occur in reptiles, fish,[15] and even insects.[16] Fever is stimulated by eicosanoids and therefore would seem to be conserved with mammals. [17][18] Amazingly, this means that the fever response dates back almost to the Urbilaterian, maybe further. All this time fever has been a matter of behavior, and only in the most recent times does it control internal temperature directly. What's odd is that Hollywood appears unalterably convinced that people with fever are hot, and must have shown this idiocy in a thousand movies. Don't they get sick in celebrityland? Wnt (talk) 22:46, 20 January 2013 (UTC)[reply]
I think there is a big misunderstanding here. Your body doesn't directly detect the temperature of the air around you.
Let's do an experiment: Touch a piece of metal with one hand and a piece of plastic with the other. Which feels warmer? The plastic - right? But in truth, they are both at the exact same temperature (you can check...use a thermometer!). That's because, what we actually detect is the amount of heat energy that is drawn out of our skin. Metal feels colder than plastic because it conducts heat away from your body very efficiently, where plastic doesn't.
Now, consider Newton's law of cooling: "The rate of heat loss of a body is proportional to the temperature difference between the body and its surroundings.".
That means that when your feverish body is hotter (compared to the surrounding air) than it usually is, you lose heat to the environment more rapidly than you normally would...and because "rate of heat loss" is what our skin actually measures, you feel colder because you're losing more energy to the outside world than usual.
When I have a fever, I feel cold because my body is hotter compared to the surrounding air. Seems backwards - but it's not. SteveBaker (talk) 17:25, 21 January 2013 (UTC)[reply]
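Newton's law of cooling as described in the post above can be sketched in a few lines; the temperatures and transfer coefficient below are illustrative values, not measurements:

```python
# Heat-loss rate is proportional to the (body - surroundings) temperature
# difference, so a hotter (feverish) body loses heat faster to the same room.
def heat_loss_rate(body_c, ambient_c, k=1.0):
    """Relative rate of heat loss; k is an arbitrary transfer coefficient."""
    return k * (body_c - ambient_c)

normal = heat_loss_rate(37.0, 20.0)  # roughly normal body temperature
fever = heat_loss_rate(39.5, 20.0)   # feverish body in the same room
print(fever > normal)                # True: more heat lost, so the skin reports "cold"
```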

Population, Wealth, and the Environment

sock block
The following discussion has been closed. Please do not modify it.

What has more impact on the environment, population or GDP per capita?

Republicanism (talk) 03:38, 20 January 2013 (UTC)[reply]

Your question doesn't seem to have enough information to answer it meaningfully. --Jayron32 03:40, 20 January 2013 (UTC)[reply]
I'd say total population, as pollution is pretty much directly proportional to the number of people, everything else being held equal. GDP per capita isn't directly proportional. An extremely low GDP might mean less pollution, as there's presumably no industry, just subsistence farmers. Higher GDP can then lead to unregulated industry, similar to what China has now, or the developed world had a half century ago, causing massive pollution problems. Even higher GDP can lead to strict environmental laws, as people then feel they can afford the cost of such laws, and that the sacrifice is worth it, as in much of the developed world today. StuRat (talk) 03:52, 20 January 2013 (UTC)[reply]
"pollution is pretty much directly proportional to the number of people". Citation needed - and for the rest of your unsourced assumptions. This is a reference desk, not a forum. AndyTheGrump (talk) 03:56, 20 January 2013 (UTC)[reply]
If the "everything else being held equal" includes the pollution generated per capita, then my statement is correct, by definition. StuRat (talk) 05:04, 20 January 2013 (UTC)[reply]
And even more useless to the question. Nil Einne (talk) 07:47, 20 January 2013 (UTC)[reply]
No, if that assumption is correct, then it directly answers the Q. StuRat (talk) 07:39, 21 January 2013 (UTC)[reply]
Pollution is more proportional to GDP (although not always in the same direction—the graph has a hump, as Stewed Rat said), but total environmental impact is more proportional to the number of people. The more people there are, the more farmers there are to feed them. The more farmers there are, the more land is torn up. The more land is torn up, the greater the environmental impact. Whoop whoop pull up Bitching Betty | Averted crashes 04:56, 20 January 2013 (UTC)[reply]
[citation needed] As noted above, statements are more useful when paired with the evidence to back them up. --Jayron32 05:00, 20 January 2013 (UTC)[reply]
http://tierneylab.blogs.nytimes.com/2009/04/20/the-richer-is-greener-curve/
http://www.cid.harvard.edu/archive/esd/pdfs/iep/643.pdf
--Guy Macon (talk) 11:58, 20 January 2013 (UTC)[reply]
👍 Like. There, that wasn't so hard, was it? --Jayron32 18:06, 20 January 2013 (UTC)[reply]
In general, I dislike the following behaviors:
  • Asking unpaid volunteers questions and then demanding that they answer in a different way.
  • Being rude to unpaid volunteers.
  • Demanding that unpaid volunteers provide evidence rather than looking it up yourself.
  • Snarky, passive aggressive "thank you" comments that imply that there was something wrong with the way the other unpaid volunteers answered.
  • Taking without giving and without showing appreciation.
--Guy Macon (talk) 18:46, 20 January 2013 (UTC)[reply]
Actually, Republicanism never posted here after the initial question. All posters above were also unpaid volunteers, who have every right to demand that you provide references on a reference desk. --140.180.253.61 (talk) 21:22, 20 January 2013 (UTC)[reply]
Nobody said that you don't have a right to make demands. You do, just as others have a right to ignore them. I merely stated what I personally dislike. --Guy Macon (talk)
I personally dislike people who treat the reference desk as a forum, and suggest others should find the reference for them when they are the ones making completely unsupported claims on the reference desk but I've learnt to live with it, simply requesting references when I feel it is needed.... Nil Einne (talk) 02:19, 21 January 2013 (UTC)[reply]

Out of battery detonation

What are the most common causes of out-of-battery detonations in firearms? 24.23.196.85 (talk) 05:45, 20 January 2013 (UTC)[reply]

Human error? ~:74.60.29.141 (talk) 05:48, 20 January 2013 (UTC)[reply]

Electrical conduction

Could somebody please tell me the answer to this, I can't seem to find anything about it in Wikipaedia articles. If I connect an ordinary multi-strand copper wire to a 12volt DC connection, but only get half the strands in the connector, then connect the other end of the cable to a device (like a light for example) but only get the other half of the strands in that connector, how efficient will the power transmission be? Thanks in advance. 124.191.177.1 (talk) 07:21, 20 January 2013 (UTC)[reply]

Assuming that the individual strands of the conductor are not corroded, oxidized or coated in any way, the inter-strand resistance (per metre) between copper strands will be low. Thus, after a short distance (a few diameters of the wire's copper), the current should be effectively uniformly distributed over the cross-section of the copper. Thus, one could say "about as efficient as using all strands, but with a small added length to the cable". For most practical purposes, this means the effect will be insignificant. — Quondum 07:59, 20 January 2013 (UTC)[reply]
(agreed) ... and it's very unusual to get corrosion or significant oxidation in the middle of a wire if the insulation is undamaged. For wires carrying a high current (for their cross-section), it is obviously better to connect as many strands as possible, but even one connected strand at each end (and not the same strand) will usually give efficient power transmission, though I wouldn't recommend the practice because the single strand at each end might get hot, and it will act like a fuse wire, burning out at a certain high current. Dbfirs 16:51, 20 January 2013 (UTC)[reply]
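A rough sketch of the point made above: connecting only half the strands behaves like adding a short length of half-cross-section wire at each end. The cable length, cross-section and junction length below are assumed values for illustration only:

```python
# Compare a full cable's resistance with the extra resistance contributed
# by two short half-area sections where only half the strands are gripped.
RHO_CU = 1.68e-8       # resistivity of copper, ohm*m (handbook value)

def resistance(length_m, area_m2):
    return RHO_CU * length_m / area_m2

area = 1.5e-6                             # assumed 1.5 mm^2 total copper cross-section
full = resistance(2.0, area)              # assumed 2 m cable, all strands carrying
penalty = 2 * resistance(0.01, area / 2)  # assumed ~1 cm of half-area wire per end
print(full, penalty, penalty / full)      # the penalty is a few percent at most
```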

Thank you, the multistrand cable I am using is tinned copper, so I assume this makes no difference to the interstrand resistance.124.191.177.1 (talk) 07:23, 21 January 2013 (UTC)[reply]

Tin is actually an even better conductor than copper, so using tinned copper helps. In addition, the tinning prevents oxidation of the copper, so largely prevents the problem of corrosion mentioned by Quondum above. Dbfirs 10:22, 21 January 2013 (UTC)[reply]
Dbfirs: are you sure that tin is a better conductor than copper? I seem to remember copper is a better conductor, and the table at Electrical_conductivity#Resistivity_of_various_materials seems to support this. – b_jonas 16:03, 21 January 2013 (UTC)[reply]
Sorry, my error. I picked up the conductivities from a table elsewhere on the internet and either it was wrong or I mis-read it (the latter being more probable!) In fact pure tin has only one seventh the conductivity of copper. I've struck my erroneous comment above, leaving only the second part that remains valid and is more significant in the situation being considered. Tin does have a higher conductivity than oxidised copper, and bare copper corrodes easily. Dbfirs 22:45, 21 January 2013 (UTC)[reply]
You're right that tin is a worse conductor - only silver is better than copper. But "tinned copper" doesn't necessarily mean literally "copper coated with tin" - it's probably a lead/tin alloy with other metals involved in the mixture. The practical function is twofold - excluding air and thereby reducing the corrosion of the copper - and (because lead/tin alloy is so soft and because it flows into the gaps between the conductors) improving the contact area between the strands and between strands and terminal block. Those two things taken together greatly improve the quality of the contact. SteveBaker (talk) 17:12, 21 January 2013 (UTC)[reply]
If you think about it, even if you get all of the conductors stuffed into a "screw terminal", only the outer strands are actually in contact with the terminal block - so you're relying on inter-strand conductivity to make the other connections anyway. The big problem with only connecting a few of the conductors is the very short distance between the terminal block and the point where all of the conductors meet. It doesn't much matter how short that distance is - the wires can still overheat at that spot and cause problems. SteveBaker (talk) 16:26, 21 January 2013 (UTC)[reply]

tDCS for insomnia?

According to this article: "I’ve been told that [transcranial direct current stimulation] is handy if you have racing thoughts at bedtime." Have any controlled trials of tDCS confirmed this? If not, with how much confidence can it be inferred given results with TMS? NeonMerlin 08:22, 20 January 2013 (UTC)[reply]

I can't find any details of any such trials. It may be of interest to you to note an article in the Journal Of Psychiatric Research (2013 Jan; Vol. 47 (1), pp. 1-7), which is 'Clinical utility of transcranial direct current stimulation (tDCS) for treating major depression; a systematic review and meta-analysis of randomized, double-blind and sham-controlled trials.' This concludes that there is at this stage no clear evidence for the clinical utility of this method for the treatment of MD, and more extensive trials will be required. ---- nonsense ferret 14:27, 20 January 2013 (UTC)[reply]
See also PMID 23219367. Looie496 (talk) 17:08, 20 January 2013 (UTC)[reply]

Nuclear fusion to produce phosphorus

What temperature would be required for the fusion reaction 28Si + 4He → 31P + 1H (which [jtgnew.sjrdesign.net/stars_fusion.html this site] says is part of the stellar oxygen-fusion chain) by known methods? Could any known fusion reactions produce 31P at a lower temperature from isotopes abundant on Earth? NeonMerlin 13:56, 20 January 2013 (UTC)[reply]

You misunderstood what they said. They mean that the Oxygen-burning process has several possible outcomes:
16O+16O= 31P + 1H or
16O+16O= 28Si + 4He
among them. The required temperature is more than 1 billion K. Ruslik_Zero 17:48, 20 January 2013 (UTC)[reply]

Electric current through gases

  1. At normal pressure, air or any other gas is a nonconductor of electricity, but at low pressure the gas becomes a conductor of electricity. How does this happen?
  2. In a discharge tube, sparking is accompanied by a crackling noise. How is this noise produced? — Preceding unsigned comment added by Want to be Einstein (talkcontribs) 14:02, 20 January 2013 (UTC)
  1. Non-ionised gasses are not conductive at any pressure, low or high. What makes you think they are conductive at low pressure? Conversely, an ionised gas is conductive at all pressures.
  2. When an electrical discharge occurs, there is local heating, which causes an increase in pressure. The over-pressure travels outwards and thus constitutes a sound wave.
Wickwack 120.145.81.211 (talk) 14:48, 20 January 2013 (UTC)[reply]
Some links you might find useful: Plasma (physics)#Generation of artificial plasma, dielectric strength, and for a more detailed explanation of the mechanism, see Paschen's law. — Quondum 15:05, 20 January 2013 (UTC)[reply]

Hydrolysis of Phosphodiester bond in DNA

When the phosphodiester bond in the Phosphate-backbone of the DNA is hydrolysed by DNAses enzymes, which one of the P-O bond is broken? Dnakid (talk) 17:59, 20 January 2013 (UTC)[reply]

Depends on the DNAse. Deoxyribonuclease I leaves 5' phosphates, whereas Deoxyribonuclease II leave 3' phosphates. -- 71.35.98.191 (talk) 20:24, 20 January 2013 (UTC)[reply]

compressing gas into a cylinder

would a very lightweight "pod" floating on a helium balloon with a line into it be able to carry a battery, a light empty canister, and a pump, and descend when it wishes by pumping (compressing) the helium (or enough of it) back into the canister?

Of course, the battery would have to be recharged after a while, but in this way helium could be conserved. The idea is that when it wishes to ascend again, it can refill the balloon from the helium it has just pumped into the canister.

Would this work? Thanks. 178.48.114.143 (talk) 18:28, 20 January 2013 (UTC)[reply]

Yes, in fact there is a (proposed) aircraft which uses a similar principle -very interesting concept- See "Gravity Plane": http://www.youtube.com/watch?v=0QZ1KzveIic (WP won't allow direct link to YouTube) ~E:74.60.29.141 (talk) 18:56, 20 January 2013 (UTC)[reply]
More directly related to what you're thinking about: Aeros Flight Buoyancy Management - "By compressing and decompressing helium, the density in the ship can be varied as a means to control the ship’s static heaviness."
On a much smaller scale (your balloon) the limiting factor would be the weight of your "pod". ~E:74.60.29.141 (talk) 21:52, 20 January 2013 (UTC)[reply]
A very rough rule of thumb is that you need one cubic meter of helium to lift one kg of craft. That has to include the weight of the balloon itself. SteveBaker (talk) 02:36, 21 January 2013 (UTC)[reply]
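That rule of thumb follows from standard sea-level gas densities (handbook values, not figures from the post):

```python
# Net lift per cubic metre is the density difference between the displaced
# air and the helium filling the envelope.
rho_air = 1.225    # kg/m^3, air at sea level, ~15 C
rho_he = 0.169     # kg/m^3, helium at similar conditions
lift_per_m3 = rho_air - rho_he
print(round(lift_per_m3, 2))   # ~1.06 kg of lift per cubic metre
```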
What you are describing is "sort of" an aerial version of a Stab jacket, similar principle anyway. Vespine (talk) 05:25, 21 January 2013 (UTC)[reply]
Except that while gas is added from a cylinder to a stab jacket to increase buoyancy, it is vented to decrease buoyancy. It would be as if there was a pump to move gas from the jacket back to the cylinder when less buoyancy is desired. -- 41.177.85.143 (talk) 07:50, 21 January 2013 (UTC)[reply]
  • In order to link to youtube use this markup in edit: [http://www.youtube.com/watch?v=0QZ1KzveIic gravity plane video] to produce this link: gravity plane video. The video itself is absurd, since the plane is presented as a perpetual motion machine, and its capacity for failure seems unlimited. μηδείς (talk) 07:30, 21 January 2013 (UTC)[reply]
No, that doesn't work. The canister strong enough to contain the helium always weighs more than what that much helium can lift. – b_jonas 15:58, 21 January 2013 (UTC)[reply]
What makes you say that?!? I certainly don't believe it's true. The volume of a sphere is a function of the cube of the radius - the surface area increases as the square of the radius. Since the amount of lift is proportional to the volume - but the weight of the envelope is a function of the surface area. Double the size of the balloon and you can double the thickness of the envelope without changing the pressure it has to withstand. Hence it's always possible (in principle) to build a bigger balloon to make it light enough to fly. (Recall the Mythbusters episode where they made a balloon out of lead foil - and it actually flew!) But the amount of additional pressure required to make the balloon lose altitude doesn't have to be that much - providing you don't need to lose altitude rapidly. From that point of view, it's a plausible machine. SteveBaker (talk) 16:17, 21 January 2013 (UTC)[reply]
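The square-cube argument above can be illustrated numerically; the lift figure uses the ~1 kg/m³ rule of thumb from earlier in the thread, and the envelope areal mass is a deliberately heavy assumed value:

```python
# Lift grows with volume (r^3) while envelope mass grows with surface area
# (r^2), so past some radius lift always wins, even for a heavy skin.
import math

LIFT = 1.0         # kg of lift per m^3 of helium (rule of thumb)
AREAL_MASS = 5.0   # assumed envelope mass, kg per m^2 (deliberately heavy)

def net_lift(radius_m):
    volume = (4 / 3) * math.pi * radius_m ** 3
    area = 4 * math.pi * radius_m ** 2
    return LIFT * volume - AREAL_MASS * area

print(net_lift(10) < 0)   # True: small balloon, envelope too heavy to fly
print(net_lift(20) > 0)   # True: larger balloon, lift exceeds envelope weight
```

With these numbers the break-even radius is 3 × AREAL_MASS / LIFT = 15 m, which is why doubling the balloon's size can rescue even a heavy envelope.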
The plane in the video is (effectively) powered by compressed air. They say that compressed air is used to create thrust - it's dumped as ballast - and it's used to re-compress the helium so the plane can return to earth. What replenishes that compressed air? All of the energy for the flight is contained in that compressed air...but it's not a perpetual motion machine. That said, the video is confusing...why would this plane require wings? Why folding wings? Lighter-than-air craft aren't magical - they are like up-side-down aircraft. With a regular plane, ascent costs lots of energy, but coming back down again is free. With a conserved-gas lighter-than-air craft, it's the reverse...going up is free, but it takes energy to get back down again. The only "free lunch" comes from venting gas at altitude - effectively switching from being lighter-than-air to being heavier-than-air and thereby getting a free ride both up and down. But venting gas is a problem: Helium is expensive, and it's a non-renewable resource that humanity is rapidly running out of - so simply venting the stuff to get you back down again is not a great way to go. That leaves you with other potential lifting gasses - but they are either rare and expensive (helium, neon), potentially dangerous (hydrogen, methane, ammonia), energy-intensive to produce and maintain (hot-air, steam), only marginally light enough (nitrogen, neon) or nasty green-house gasses (methane).
Recompressing helium at altitude (as suggested by our OP) is a good approach - but it still consumes energy.
I like the idea of using a Rozière balloon - which is a hot air balloon with a helium balloon inside it. If you paint the upper envelope so that one side is black and the other silver, then by turning the balloon with the black side facing the sun, the hot air will expand - causing the balloon to rise. Rotate the balloon with the shiny side towards the sun and the hot air can cool. With the majority of the lift coming from the helium, you only need the hot air to provide altitude control. SteveBaker (talk) 16:17, 21 January 2013 (UTC)[reply]
Put thin film solar panels on the blimp and make the battery life infinite. Have they ever tried acid and metal filings-based buoyancy storage? Vent the hydrogen to go down. Sagittarian Milky Way (talk) 18:16, 21 January 2013 (UTC)[reply]

January 21

Hot water in pan

To thaw a hole in the ice on my pond I put a pan of hot water on the ice, and noted that the water in the pan rotated in an anticlockwise motion. Why is this? — Preceding unsigned comment added by Johnmyers00 (talkcontribs) 08:30, 21 January 2013 (UTC)[reply]

The most likely reason is that you filled it from a hot tap that generated the rotation as it filled the pan. The Coriolis effect explanation has been shown to be invalid, I think. Dbfirs 10:11, 21 January 2013 (UTC)[reply]
Yeah, the Coriolis effect works on the scale of hurricanes and other major weather systems. Didn't stop this cafe at the equator in Uganda I was once at from putting bowls 10 m either side of the equator, expecting the water to drain opposite ways... Sigh! I guess the movement of the pan while being carried from the tap to the pond would also affect rotation here. Fgf10 (talk) 10:25, 21 January 2013 (UTC)[reply]
I am more curious about what made it rotate at all. If the bottom was heated I would guess convection currents, but a pan of hot water on ice should stratify. So why did it rotate? --Guy Macon (talk) 12:14, 21 January 2013 (UTC)[reply]
Yes, if the water was completely still without any "curl" when placed on the ice, I can see no reason why it would gain an overall rotation through convection. It can be difficult to see small rotations in clear water, so perhaps the stratification just made a pre-existing rotation visible. Dbfirs 22:22, 21 January 2013 (UTC)[reply]
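Putting a rough number on the Coriolis effect at pan scale backs this up (a sketch; the drift speed and latitude are assumptions):

```python
import math

OMEGA = 7.292e-5  # Earth's rotation rate, rad/s

def coriolis_accel(speed_m_s, latitude_deg):
    """Horizontal Coriolis acceleration (m/s^2) on water moving at the given speed."""
    return 2 * OMEGA * speed_m_s * math.sin(math.radians(latitude_deg))

# Water drifting at 1 cm/s in a pan at 55 degrees N (assumed values):
a = coriolis_accel(0.01, 55)
print(f"Coriolis acceleration: {a:.1e} m/s^2")
```

That's around a millionth of a metre per second squared - any stray swirl left over from filling or carrying the pan will swamp it completely.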

Ah, nothing to do with answering the question, but if the ice was thin enough to melt with a pan of water, would it not be almost too thin to walk on? The Canadian Red Cross says 15 cm (5.9 in) and Environment Canada, no link but I just checked the manual, says 20 cm (7.9 in). Even at 15 cm it would take several pans of hot water to melt through. CambridgeBayWeather (talk) 13:02, 21 January 2013 (UTC)[reply]

Could've been from the shore. Mingmingla (talk) 17:36, 21 January 2013 (UTC)[reply]

Battery paradox

So I'm looking for a new wireless mouse and I come across this. Looks good, but the rechargeable battery only lasts a few days according to most of the consumer reviews. Then I come across this, and it purportedly lasts 2 years on just AA batteries alone (it doesn't work on glass though so that's a deal breaker, but I digress). I was under the impression that lithium ion batteries have much better energy density than ordinary alkaline AA batteries do. So what's the deal? Why does the mouse which runs on AA's last so much longer than the lithium ion one? ScienceApe (talk) 08:36, 21 January 2013 (UTC)[reply]

Two AA cells are quite heavy for a mouse. I suspect the Li-ion battery is much smaller and lighter. Perhaps the manufacturers assume that recharging is not much trouble, so they don't put a large and expensive battery in the first mouse. It's also possible that the AA mouse has a processor that draws less power (and perhaps does less processing?) Perhaps someone can check the specifications? Dbfirs 10:06, 21 January 2013 (UTC)[reply]
Energy density Extended Reference Table lists the energy density of both:
Alkaline battery: 1.15 to 1.43 MJ/L
Lithium ion battery: 0.83 to 3.60 MJ/L
So no more than a 2:1 difference one way or the other.
Mice don't typically specify current draw, and the specified battery life is often an advertising fiction. A 20:1 ratio in current draw between different designs is fairly common. So when you factor in battery size, energy density, current draw, assumptions about use patterns, and stir in a few fibs from the marketing department, that could explain the difference. --Guy Macon (talk) 11:16, 21 January 2013 (UTC)[reply]
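The arithmetic behind that can be sketched in a couple of lines (all the numbers below are illustrative assumptions, not specs for either mouse):

```python
# Battery life is just capacity divided by average current draw.
def battery_life_days(capacity_mah, avg_draw_ma):
    """Runtime in days for a given cell capacity and average current draw."""
    return capacity_mah / avg_draw_ma / 24

# A small rechargeable pack feeding a power-hungry sensor...
print(f"{battery_life_days(500, 5.0):.1f} days")
# ...versus two alkaline AAs feeding a frugal one:
print(f"{battery_life_days(5600, 0.3):.0f} days")
```

With those made-up figures you get about 4 days versus about 2 years - so a 20:1 spread in average draw, combined with a bigger battery, easily produces the gap the OP observed.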
While battery life figures from manufacturers aren't particularly reliable, if consumer reviews say the battery life of the earlier mouse is only a few days then I would suggest the mouse has a rather short battery life. I've used several AA mice in the past, and with rechargeable NiMH cells battery life is in the weeks-to-months range, presuming decent fully charged batteries. I'm of course presuming a more ordinary consumer pattern, not, say, 24/7 FPS gaming. Nil Einne (talk) 11:40, 21 January 2013 (UTC)[reply]
My guess is that the difference has little (if anything) to do with the batteries. The amount of power a mouse needs depends drastically on how (or "if") it shuts down when you're not moving it. The problem is that shutting itself down sufficiently to save power when you stop moving it often makes it take a lot longer to wake up when you start moving it again. So if I had to bet, I'd say that the mouse with the significantly lower current draw would have that annoying lag that you get with some cordless mice when you start using it again after a break. Some mice shut down much sooner than others too - that would make a big difference to power consumption. SteveBaker (talk) 15:35, 21 January 2013 (UTC)[reply]
I don't think so. They are made by the same company and have the same power saver features. We're talking about a difference between 2 years and a couple of days too. ScienceApe (talk) 17:46, 21 January 2013 (UTC)[reply]
Performance Mouse MX M950:
Requires 1 AA NiMH rechargeable battery.
Expected battery life is up to 30 days.
http://logitech-en-amr.custhelp.com/app/answers/detail/a_id/12710
Wireless Mouse M510:
Requires 2 AA alkaline batteries.
Expected battery life is up to 24 months (2 years).
http://logitech-en-amr.custhelp.com/app/answers/detail/a_id/17990/
The M510 has a standard laser.
The M950 has a dual Darkfield Laser.
http://www.logitech.com/images/pdf/briefs/Logitech_Darkfield_Innovation_Brief_2009.pdf
From the above, it looks like the M950 uses about twelve times the power that the M510 uses. --Guy Macon (talk) 19:02, 21 January 2013 (UTC)[reply]
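As a sanity check on that figure, here's the back-of-envelope version (the cell capacities are assumptions - roughly 2000 mAh for a NiMH AA and 2800 mAh for an alkaline AA):

```python
HOURS_PER_DAY = 24

nimh_mah = 2000          # one NiMH AA in the M950 (assumed capacity)
alkaline_mah = 2 * 2800  # two alkaline AAs in the M510 (assumed capacity)

m950_draw_ma = nimh_mah / (30 * HOURS_PER_DAY)       # drained in ~30 days
m510_draw_ma = alkaline_mah / (730 * HOURS_PER_DAY)  # drained in ~24 months

print(f"M950 average draw: {m950_draw_ma:.2f} mA")
print(f"M510 average draw: {m510_draw_ma:.2f} mA")
print(f"ratio: {m950_draw_ma / m510_draw_ma:.1f}x")
```

With these assumed capacities the ratio lands nearer 9x than 12x, but either way the M950 draws roughly an order of magnitude more average power.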
You might think the darkfield laser requires more energy, but I own this mouse. It's Logitech's other darkfield laser mouse (allows it to work on glass and reflective surfaces) and it has worked for months with just two AA batteries. According to consumer reviews, it has worked for at least 6 months. I would just buy another one, but this one is designed for laptops so it's smaller; I want a bigger one for my desktop. ScienceApe (talk) 21:53, 21 January 2013 (UTC)[reply]

Railgun on the moon

Would it make sense to launch probes to the outer solar system with a railgun from the moon? This is meant for a few years (or decades or centuries) from now, after a moon station is built and we have a few people there. I know that it does not make sense to transport all the stuff to the moon, build a railgun there, and then think it is more effective and cheaper than to launch from earth. Is the speed you can reach higher than what chemical propulsion or ion thrusters can provide?--Stone (talk) 13:35, 21 January 2013 (UTC)[reply]

This has been suggested at least since I was a child (the 60's). I remember illustrations showing how material from the moon could be sent to Earth orbit using this method. Zzubnik (talk) 16:15, 21 January 2013 (UTC)[reply]
I think it's possible - but not economically sensible for that purpose. Between the lack of atmosphere and lower gravity, railguns would certainly be much more effective from the Moon than from Earth. But you've still got the problem of transporting the probe from earth to moon and loading it into the railgun - that might cost more fuel and equipment than would be saved by doing a launch direct from earth orbit. But you could also consider placing a railgun like that in earth orbit or at one of the lagrange points - and that would be even more effective than building it on the moon.
The only reason (that I could imagine) for putting this machine on the moon would be the possibility of making the machine from materials commonly found there to avoid having to ship them up from Earth. But that would be a much bigger task. Moon rocks contain iron, aluminium, silicon, magnesium and titanium - but not much copper or silver for making electrical conductors suitable for all of those magnets. Aluminium wires are also possible - but their conductivity would be poor. There ought to be materials for making photocells with which to make solar panels. You'd probably want to use superconducting magnets - and the specialized materials needed for that would be even harder to make there. But before any of those things can be done, you'd need massive mining, refining and manufacturing facilities on the moon - and the cost of those things would dwarf the costs of sending probes to the outer solar system from Earth. We routinely send those kinds of probes out there - and the cost is easily accommodated within NASA's budget - but the cost of even one human mission to the moon would be huge.
So I think the first problem is to find money and motivation to build extensive lunar colonies with the infrastructure to process moon rock in large quantities. Helium-3 mining (as fuel for fusion reactors) is a possible reason to do that - and using a railgun to launch helium canisters back to Earth would be a strong motivation to build a lunar railgun. But if it's aimed to get canisters into low Earth orbit, I don't know whether you'd be able to aim it correctly to get a probe into the outer solar system.
So, possible? Yes. Likely? No.
SteveBaker (talk) 16:52, 21 January 2013 (UTC)[reply]
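For a feel of the numbers: a projectile accelerated uniformly along a track of length L to reach speed v needs a = v²/(2L). A quick sketch using the lunar escape velocity (the track lengths are arbitrary assumptions):

```python
V_ESC_MOON = 2380.0  # m/s, lunar escape velocity
G0 = 9.81            # m/s^2, to express acceleration in Earth g

def required_accel(v_m_s, track_m):
    """Constant acceleration (m/s^2) needed to reach v over the given track."""
    return v_m_s**2 / (2 * track_m)

for km in (1, 10, 100):
    a = required_accel(V_ESC_MOON, km * 1000)
    print(f"{km:>3} km track: {a / G0:6.1f} g")
```

So a probe that has to survive tens of g needs a track tens of kilometres long - and that's before adding the extra speed required to actually reach the outer solar system rather than merely escape the Moon.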
(Multiple ECs) You (the OP) might wish to check out Robert Heinlein's novel The Moon is a Harsh Mistress, in which a railgun – used to transport (metal-jacketed) bulk materials from the Moon to Earth – features prominently. While this is fiction, Heinlein was a trained engineer (he worked on early versions of what eventually evolved into NASA's standard spacesuit, amongst other things) and would have been careful to get the underlying concept and numbers right. {The poster formerly known as 87.81.230.195} 84.21.143.150 (talk) 16:55, 21 January 2013 (UTC)[reply]
Yes, but that book was written almost 50 years ago (1966) - before most of our understanding of practical superconductors and before we had rare-earth magnets. Three years before we had even a single moon-rock to analyze. A modern railgun would be a very different beast to the thing that Heinlein imagined. SteveBaker (talk) 17:03, 21 January 2013 (UTC)[reply]
Really the only hurdles would be financial and technological hurdles. Seems to me that it will be done once we begin colonizing space. ScienceApe (talk) 17:39, 21 January 2013 (UTC)[reply]
A railgun will be built only if it's better than all alternatives, and I'm pretty sure it's not. A space elevator is not yet feasible on Earth, because no known material is strong enough for the cable, but it is possible on the Moon with current technology. --140.180.255.25 (talk) 19:12, 21 January 2013 (UTC)[reply]
Yep, I agree. A space elevator would be an excellent solution for the moon...but then you might want to advance another step and build a Skyhook (structure)...or an Orbital ring...or a Space fountain...or a Lofstrom loop. There are many, many other ideas that become possible without an atmosphere and with sufficiently strong materials. SteveBaker (talk) 21:09, 21 January 2013 (UTC)[reply]
A problem here would be that there are no long term stable orbits around the Moon. All probes orbiting the Moon need to have very frequent course corrections to prevent them from crashing into the Moon due to very strong tidal perturbations from the Earth and the Sun. Count Iblis (talk) 21:46, 21 January 2013 (UTC)[reply]
For a lunar space elevator, you need to put the counterweight at the L1 point, which is sufficiently stable not to need much station keeping. --Tango (talk) 23:10, 21 January 2013 (UTC)[reply]
See mass driver. --Tango (talk) 23:10, 21 January 2013 (UTC)[reply]

Microphone physically attached on a string

suppose a microphone is physically attached to a string. it's tiny, assume it weighs close to 0 grams and vibrates freely with the string - maybe it has its tiny battery and just transmits whatever it "hears", via FM or whatever.

Now. What *DOES* it hear? The same thing as it would hear if it were NOT freely vibrating on the string, but attached to a fixed point right next to it?

91.120.48.242 (talk) 15:42, 21 January 2013 (UTC)[reply]

It'll certainly hear things - the modes of vibration of the string (being a long, thin thing) will be very different from the modes of the microphone pickup - the string will be highly susceptible to vibrations near its natural frequency and to lateral vibrations rather than vertical ones. So for that reason, it doesn't isolate the microphone from all sounds...the string+microphone also has a lot of inertia - so when a high frequency sound hits it, it takes time to get moving. But the super-lightweight microphone transducer is designed to have minimal inertia. So the string might damp out very low frequency vibrations - but not the high frequencies that we think of as "sound". I doubt you'd hear anything very much different compared to a stationary mic. SteveBaker (talk) 16:56, 21 January 2013 (UTC)[reply]
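To see where a string's own resonances sit, the fundamental of an ideal stretched string is f = √(T/μ)/(2L). A sketch with assumed example values for tension, length and linear density:

```python
import math

def string_fundamental(tension_n, length_m, mu_kg_per_m):
    """First natural frequency (Hz) of an ideal string fixed at both ends."""
    return math.sqrt(tension_n / mu_kg_per_m) / (2 * length_m)

# 1 m of light string at 10 N tension and 1 g/m linear density (assumptions):
f = string_fundamental(10.0, 1.0, 0.001)
print(f"{f:.0f} Hz")
```

With those numbers the fundamental lands around 50 Hz, and since quadrupling the tension only doubles the frequency, the string stays a poor receiver for the kHz-range components we hear as speech.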
Thank you, this was the answer. Followup question: does this added weight affect the string much? (I suppose I could check this experimentally, but I'm lazy). 178.48.114.143 (talk) 18:34, 21 January 2013 (UTC)[reply]
Well, it certainly affects it...but "very much" is a value judgement. If you take a 100' length of one of those 3" diameter ropes they use to tie ships to docks - and attach one of those pin-head microphones that spies use...then "not very much" would be a reasonable answer! On the other hand, if you're talking about a 6" length of thin string and one of those gigantic 1920's radio microphones - then you'd be lucky if it didn't snap the string...so "very much" would be a good description! SteveBaker (talk) 21:04, 21 January 2013 (UTC)[reply]

Disposing of Cooking Oil

There are a lot of specific instructions concerning the disposal of used cooking oil. They seem largely unnecessary to me. Couldn't I just pour the oil outside and let the soil microbes decompose it for me? The oil in question is a blend of soy, olive, and canola oils.70.171.28.155 (talk) 17:07, 21 January 2013 (UTC)[reply]

Sure, you can pour cooking oil outside. Ideally you compost it, in a bin or pile with other kitchen and yard waste (coffee grounds, dead leaves, etc.). If you just pour a bunch of oil on the ground, it can start to smell rancid and attract unwanted pests (opossums, raccoons, etc.) SemanticMantis (talk) 17:13, 21 January 2013 (UTC)[reply]
You shouldn't add too much oil to a compost pile; too much fat retards decomposition. Some people claim it can be used to kill weeds in gravel or sidewalk cracks, but I don't know how effective that would be. SemanticMantis (talk) 17:17, 21 January 2013 (UTC)[reply]
You should find out if somebody in your community converts cooking oil to biodiesel; they'd be happy to take it off your hands. -Or- buy a diesel powered vehicle and make your own fuel:[19]:[20] There are pre-fab units (~$1500) that convert 40 gal. at a time. (Or you can DIY):[21] ~:74.60.29.141 (talk) 19:24, 21 January 2013 (UTC) ~ (P.s.: I have used straight (filtered) cooking oil, but this is not recommended for cold climates or when the fuel sits awhile.)~:74.60.29.141 (talk) 19:40, 21 January 2013 (UTC)[reply]

Why not just use as much as you need (which amounts to disposing of it in your toilet a day or so later) ? Count Iblis (talk) 21:42, 21 January 2013 (UTC)[reply]

If you deep-fry correctly, little of the oil soaks into the food, leaving you a fryer-ful of used oil for disposal. DMacks (talk) 21:47, 21 January 2013 (UTC)[reply]
I see! When I visit my parents and eat some snacks they prepared, I see one or two such fryers with used oil. I never bother deep-frying stuff myself, I prefer to know exactly how much oil/fats I'm actually eating. Count Iblis (talk) 21:58, 21 January 2013 (UTC)[reply]
There are recycling centers, like this one, in many local areas that may be able to help you. Richard Avery (talk) 22:53, 21 January 2013 (UTC)[reply]

Quantum mechanical question. in a physical sense of actual reality (what is true) does the mathematical identity function (==) not hold for objects?

Update: this is a quantum mechanical question. Identity means mathematical identity.

In a physical sense - referring to actual reality/truth - does the mathematical identity function (==) not hold: meaning, a physical object does not pass the mathematical identity function (is instead =/= to itself). My reasoning is as follows:

- Physical objects, unobserved, do not have exact locations, but instead are probability waves (per quantum mechanics).

Therefore, when "comparing" any physical object with itself, one would find that the object fails the mathematical test. By whatever means this comparison were to physically occur (per qm):

That is to say, it is not an experimental limit, but a facet of [the physical universe].

Can we, therefore, conclude that the mathematical identity function does not hold for real-life objects in the physical universe? That physical objects cannot be considered to pass the equality test with themselves? 178.48.114.143 (talk) 17:21, 21 January 2013 (UTC)[reply]

I don't think it's meaningful or helpful to apply concepts like the identity function to things-as-they-are, rather than things-as-we-approach-them. If you consider any object, and realise that it is a collection of atoms, with quite fuzzy edges at the microscopic level - and then realise that the atoms are mostly space, and the sub-atomic particles that make them up are defined only by probability functions, and so on - it becomes clear that the identity of an object, or a person, or anything at all is a matter of perceptual and conceptual convenience, rather than an absolute truth. In order to get through life and not go crazy, we behave as though our architectonic approach to the material world is true. I don't stop and worry if my lunch is 'real' or not; I just eat it.
As to whether the self is real - I strongly recommend Douglas Hofstadter's book I am a Strange Loop, which is all about the perception of one's own self, identity and thought processes. AlexTiefling (talk) 17:26, 21 January 2013 (UTC)[reply]
I've updated the question slightly to reflect that I am asking about "physics" (qm in particular) and not philosophy, etc. I'd like a more rigorous answer, though I did not use formalism (equations, etc) in my question, since I don't know it. Thank you. 178.48.114.143 (talk) 18:32, 21 January 2013 (UTC)[reply]
Many people are half bicycle so they don't retain their identity ;-) from Flann O'Brien, The Third Policeman: "The gross and net result of it is that people who spent most of their natural lives riding iron bicycles over the rocky roadsteads of this parish get their personalities mixed up with the personalities of their bicycle as a result of the interchanging of the atoms of each of them and you would be surprised at the number of people in these parts who are nearly half people and half bicycles" Dmcq (talk) 18:43, 21 January 2013 (UTC)[reply]
It would have been useful if you had marked the changes more clearly. I stand by my original answer, though. The term 'identity function' has, to me, only the technical mathematical meaning of a function which maps every term to itself. It has nothing much to do with identity in the sense of individuality. But the human self is not a quantum mechanical thing; it exists, if it exists at all, at a much higher level both of size and abstraction. It is in no way helpful to think of macroscopic entities like people as probability functions. In any case, no person exists unobserved: they observe themselves, even when asleep, by means of their motor functions. You're trying to apply the tools of the sub-microscopic world, and of the abstract mathematical world, to large, chunky things which they do not usefully describe. I am not a probability function; in quantum terms I am a highly deterministic averaging-out of billions of billions of very localised probabilities, indistinguishable from a Newtonian entity. Unless someone subjects me to a Schrodinger's Cat test, the quantum world does not affect me perceptibly - even though it may play some part in the brain functions by which I decide what is and is not perceptible. But I'll say it again: I am not a quantum function; I am a strange loop. AlexTiefling (talk) 19:06, 21 January 2013 (UTC)[reply]

Edit: I've now added the word "mathematical" before every occurrence of "identity" and made substantial and well marked clarifications. I only mean the equals function, and the physical universe. I don't care about any other aspect. 178.48.114.143 (talk) 19:41, 21 January 2013 (UTC)[reply]

I'm sorry, but this still isn't really a physics question. It's pure semantics. The word 'self', and all its compounds, conveys an idea of equality such that 'the set of all things not equal to themselves' is guaranteed to be empty. (I expect someone will be along shortly to prove that rigorously with first order symbolic logic.) So however you have defined any given physical entity, that thing (according to that definition) will be identical to itself. But if you stop thinking of that entity in macroscopic, architectonic terms, and start considering the wave functions of all its component particles, not only are you treating it in a way that is not reasonable for any ordinary person - even a professional physicist - you are also ignoring the implicitly inclusive and fuzzy-matching way we identify macroscopic objects.
If, on the other hand, the physical entity you first chose is, in fact, a subatomic particle, then (thanks to Heisenberg's Uncertainty Principle) your initial identification of what and where it is was only ever a well-informed guess to start with. In that case, you may not be able to say later that another, similarly defined particle is definitely the one you had earlier; but at any given point of time, even an electron is itself.
Bottom line: mathematics is not the real world, and trying to make the real world obey mathematical rules leads to headaches. But equally, the real world at the large scale is not usefully described by quantum theory. AlexTiefling (talk) 19:58, 21 January 2013 (UTC)[reply]
I'm not sure I entirely understand the question - but we have an article "Identical particles" which certainly says that at the level of electrons, atoms and such, there is "absolute equality" between all particles of that type. It says: "Identical particles ... are particles that cannot be distinguished from one another, even in principle. Species of identical particles include elementary particles such as electrons, and with some clauses, composite particles such as atoms and molecules."
Furthermore, John Archibald Wheeler (in a concept prominently reported by Richard Feynman in his Nobel Prize speech) suggested that electrons are not just identical - but that there is actually only one of them in the entire universe(!) - bouncing back and forth through time and appearing as an electron when going forward in time and as a positron as it travels in reverse.
I'm not sure this helps!
SteveBaker (talk) 21:00, 21 January 2013 (UTC)[reply]
The fact that particles don't have an exact location in space is definitely a facet of the universe, but that doesn't mean that the particle is not identical to itself. All it means is that exact location in space is not a property of particles. Dauto (talk) 22:31, 21 January 2013 (UTC)[reply]

common ways to relieve heart palpitations

This question has been removed. Per the reference desk guidelines, the reference desk is not an appropriate place to request medical, legal or other professional advice, including any kind of medical diagnosis, prognosis, or treatment recommendations. For such advice, please see a qualified professional. If you don't believe this is such a request, please explain what you meant to ask, either here or on the Reference Desk's talk page.
--Jayron32 22:26, 21 January 2013 (UTC)[reply]

Why do airships exploit lift so little?

Hindenburgs traveled at least 90 mph; why not make that airspeed do something useful for once? I might've seen an article about balloon-wings to orbit (with rockets and momentum providing the final boost). What's the article for that? Sagittarian Milky Way (talk) 18:42, 21 January 2013 (UTC)[reply]

Read the FAA's Balloon Flying Handbook. This free text-book will introduce you to everything you need to know about basic aeronautical engineering, aerodynamics, and engineering realities of modern lighter-than-air aircraft. Nimur (talk) 18:51, 21 January 2013 (UTC)[reply]
I now realize that the word lift was unclear. I meant airfoil lift. Sagittarian Milky Way (talk) 20:35, 21 January 2013 (UTC)[reply]
Lift doesn't come 'for free'. If you add a wing - and it produces lift, then it will inevitably add drag - and that slows the airship down or requires larger engines and more fuel...which in turn makes it heavier...so yet bigger engines and yet more fuel.
Furthermore, if you need that lift in order to maintain altitude with your regular payload weight, then you have all manner of new problems that true airships manage to avoid. For example, an airship can hover - it needs very little fuel to travel long distances. If it had to use engine power to create motion through the air just in order to avoid losing height - then it wouldn't be able to hover without falling out of the sky. If you need that wing lift to get it to gain altitude - then you'd need a runway for it to take off and land on. True airships can land and take off vertically - which is vital given how big they are!
If you don't need the extra lift - then why complicate the design and add a ton of drag for no particularly good reason?
That said, airships do gain lift - they just don't have wings to generate it. When the airship needs to gain altitude in a hurry, it can use the elevators on the tail and/or ballast shifting to point the nose up in the air - and in that attitude, that huge body does actually generate lift. I don't recall seeing any of the giant world-war-I era craft doing that - but I've definitely seen the Goodyear Blimp do it.
SteveBaker (talk) 20:49, 21 January 2013 (UTC)[reply]
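To put rough numbers on the lift-for-drag trade, using the standard ½ρv²SC formula (the wing area and both coefficients below are pure assumptions):

```python
RHO = 1.225  # kg/m^3, sea-level air density

def aero_force(v_m_s, area_m2, coefficient):
    """Generic aerodynamic force: 0.5 * rho * v^2 * S * C, in newtons."""
    return 0.5 * RHO * v_m_s**2 * area_m2 * coefficient

v = 40.0   # m/s, roughly 90 mph
S = 100.0  # m^2 of hypothetical wing
lift = aero_force(v, S, 0.5)   # assumed lift coefficient
drag = aero_force(v, S, 0.03)  # assumed drag coefficient

print(f"lift: {lift / 1000:.0f} kN (~{lift / 9.81 / 1000:.1f} t of support)")
print(f"drag: {drag / 1000:.2f} kN extra for the engines to overcome")
```

A few tonnes of wing lift sounds useful until you remember that a large airship's gas volume provides on the order of a couple of hundred tonnes - so the wing mostly just adds drag, which is the point made above.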
I take it then that you're talking about a Hybrid airship. Quote: “However, critics of the hybrid approach have labeled it as being the "worst of both worlds" declaring that such craft require a runway for take-off and landing, are difficult to control and protect on the ground, and have relatively poor aerodynamic performance.”--Aspro (talk) 20:56, 21 January 2013 (UTC)[reply]
►"...article about balloon-wings to orbit" → Orbital airship. ~:74.60.29.141 (talk) 21:09, 21 January 2013 (UTC)[reply]

According to Ben Bova, author of "The Great Supersonic Zeppelin Race" (fiction), a Supersonic Zeppelin can be designed so as to not create a sonic boom. The sonic boom limiting the Concorde to over-water flights is considered by many to be a major reason why it failed. --Guy Macon (talk) 23:00, 21 January 2013 (UTC)[reply]