Wikipedia:Reference desk/Science

:I agree it is wrong, though perhaps not as wrong as one might guess. According to [[list of countries by iron production]], global iron mining was 2.3 billion tons / year in 2009. As raw metal, that would have volume of 0.3 km<sup>3</sup>. 3554 Amun is only about 7 times that annual volume. So definitely not 30 times all metal ever, but still a large amount on the scale of iron mining. Also, iron stands out for its very large production volumes. Most other metals we mine are in much smaller quantities (e.g. copper and aluminum are only a few percent of the iron values), so a concentration of those metals would be comparatively more significant if one existed. [[User:Dragons flight|Dragons flight]] ([[User talk:Dragons flight|talk]]) 07:50, 26 November 2010 (UTC)
::Thanks, I would never have imagined those figures... [[User:Sandman30s|Sandman30s]] ([[User talk:Sandman30s|talk]]) 09:48, 26 November 2010 (UTC)

::Hang on a moment. I could not find a web page giving the volume of Amun (I did find multiple sources giving its "diameter", but since small asteroids are not spherical, this gives little idea of its volume). But [[3554 Amun|Wikipedia's page]] shows its mass as 1.6e13 kg, which is 16 billion metric tons. This accords with Dragon's figure of 7 times the 2.3 billion tons of iron mined in 2009, if those are metric tons; if they're short tons, as one might expect from a US source, it would be nearer 8 times. But several other Internet sources give Amun's mass as 30 billion metric tons, which (if iron) would be equivalent to 13 or 14 years' terrestrial production rather than 7 or 8. Still nowhere near 30 times the world's all-time production; perhaps someone slipped a factor of 1,000 in their original calculation. --Anonymous, 02:02 UTC, November 26, 2010.
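A quick numerical check of the figures above (a minimal sketch; the iron density of ~7,870 kg/m³ is an assumed value, the other numbers come from the posts):

```python
# Rough check of the arithmetic discussed above.
iron_mined_2009 = 2.3e12      # kg/year (2.3 billion metric tons)
rho_iron = 7870.0             # kg/m^3, assumed density of raw iron

volume_km3 = iron_mined_2009 / rho_iron / 1e9   # 1 km^3 = 1e9 m^3
print(f"annual mined volume: {volume_km3:.2f} km^3")   # ~0.29 km^3

amun_mass = 1.6e13            # kg, mass of 3554 Amun per the Wikipedia article
print(f"Amun / annual production: {amun_mass / iron_mined_2009:.1f}x")   # ~7x
```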


== How can a solvent be non-polar but be made of polar molecules? ==


Welcome to the science section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

November 21

Full name for invertebrate zoologist named Grube or Grübe, mid C19

Hi all,
A pet peeve of mine is taxonomic authorities without biographies. "Grube" is the binomial authority for Chirocephalus josephinae and Peripatopsis capensis, and, as "Grübe" -- together with one "Oersted" -- for Themiste alutacea, and so on. From the last mentioned, I'm guessing he or she may well be Danish.
Thank you! --Shirt58 (talk) 09:23, 21 November 2010 (UTC)[reply]

This is where the internet fails! I eventually found it by searching for "Grube nineteenth century taxonomy" and found that his initials are A. E. Grube, then Google Scholar-ed that and found this, which says he is Adolph Eduard Grube. The Oersted is Anders Sandøe Ørsted, based on the third hit for a search of "A S Ørsted Themiste" (I tried posting a link but the syntax gets screwed up as it has square brackets in the URL). SmartSE (talk) 12:28, 21 November 2010 (UTC)[reply]
If you replace the [ and ] with %91 and %93, which are the ASCII escaped equivalents, then it should work. CS Miller (talk) 09:08, 22 November 2010 (UTC)[reply]
Uuh, mmm, yeah, I'll try that the noo the morn --Shirt58 (talk) 13:19, 23 November 2010 (UTC)[reply]
Thanks all! Turns out he's Polish. fr.wikipedia has an article about him. The redlink above might turn blue... perhaps. Thanks again.--Shirt58 (talk) 09:09, 26 November 2010 (UTC)[reply]
Actually it should be %5b and %5d which are the hexadecimal values, not decimal. CS Miller (talk) 16:01, 26 November 2010 (UTC)[reply]
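To illustrate the correction (percent-encoding uses the hexadecimal byte values, so '[' = ASCII 91 = 0x5B becomes %5B and ']' = ASCII 93 = 0x5D becomes %5D), here is how a standard library does it; Python's urllib is used purely as an example:

```python
from urllib.parse import quote

# '[' and ']' are not in quote()'s default safe set, so they are
# percent-encoded using their hexadecimal values, %5B and %5D.
print(quote("[example]"))   # prints: %5Bexample%5D
```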

Is autism a human polyphenism similar to those of social insects?

This isn't a topic I know much about, but I was thinking... It is well recognized that locusts switch between a solitary existence and one in which they form vast gregarious migratory swarms. This is dependent on a very simple mechanism involving the foraging gene (a cGMP-dependent protein kinase or PKGI) and increased serotonin levels in the nervous system. This mechanism has a similar function in Drosophila melanogaster, and also controls foraging vs. defender behavior in worker ants.[1]

Now the same sort of PKGI protein also controls serotonin uptake by SERT in mammals.[2] Variants of SERT have been associated with certain autistic traits.[3] Autism can be associated with increased serotonin levels, and various SSRIs have been suggested as treatments.[4] (At first glance the elevated serotonin seems the wrong way around, but one could argue that it is in some way compensatory... I'll leave this hanging for now.)

Now putting together these things, and if I credit the common ancestor of bilateral animals with some sophistication, I am tempted to suppose that there was some primordial mechanism to use PKGI and serotonin to dictate a different "phase" or behavioral phenotype which was more social in nature. This leads to the idea that at least some forms of autism are a pre-existing, adaptive mode of human behavior, rather than a disease. Perhaps it was maintained as a variation in populations of all preceding ancestors, or at least as a working genetic regulatory system that could be called upon at need; or else it might be that the way the nervous system works allows the same genetic changes to recreate similar changes of behavior even hundreds of millions of years apart from the first instance. (Autism could also result from non-adaptive genetic mutations, just as white skin can arise from albinism rather than an ancient variant of MC1r.)

But now to the question. Is it possible for an autistic person to live in a traditional hunter-gatherer society, either Paleolithic or Neolithic (or as good an approximation as can be found), without suffering any selective disadvantage? Apparently autism is still relatively uncharacterized in developing countries, but it does exist.[5] Most of the discussion I find about autism and hunter-gatherer lifestyles focuses more on the hunter-gatherer diet, said by some to help with the disease, but that's not what I'm looking for. In order for autism to be a true human polyphenism, there has to be some environment in which it is not maladaptive. Actually, that last sentence was quite a stupid thing to say, especially in reference to insect polyphenism! What I mean is that the potential to become autistic would have had to be preserved by some means, even if the mechanism is prone to be phrased in terms of the ever-controversial group or kin selection. Wnt (talk) 10:06, 21 November 2010 (UTC)[reply]

I don't accept your implicit assumption that autism would automatically be maladaptive in hunter-gatherer societies, at least not in relatively mild forms. You could argue, for example, that mild autism could be a benefit for someone who is engaging in solitary trapping of small animals. Modern hunter-gatherer societies show a high level of individual specialization, and hence can accommodate a range of personality types. Physchim62 (talk) 12:02, 21 November 2010 (UTC)[reply]
Sorry if I was confusing: that's not my assumption but the question. While I would speculate that it is possible that autistic people could do well in such societies (otherwise the answer to the title question is probably "no"), I'd really like to see if there's evidence. Certainly it is believed in conventional circles that autism is a disease and a disability, and I'd like to actually have a concrete example to point to to show otherwise.
Another reason to look for concrete examples is that the rate of autism in society is apparently highly variable (and currently increasing), and only in a society where autism and non-autism are of equal fitness can one see what the "natural" rate of autism among humans might be. (At least, assuming that the developmental decision process is adaptive) I should point out that if it is a polyphenism rather than a disease - in other words, if people are already "pre-loaded" with autistic developmental software and rely only on environmental cues to determine whether to activate it or not) there may be no upper limit on how common it could become - if there is some lifestyle habit, food, chemical, ultrasonic noise, etc. that affects which behavioral phenotype is expressed, you could find yourself in a situation in a few years where 50% or more of children are autistic. It is possible that some unknown trigger (for example, the concentration of carbon dioxide in the atmosphere) could cross a tipping point to cause this, but still be virtually impossible to identify. It is even possible that such stimuli could have a cumulative epigenetic effect in the parents, so that once the change occurs, it is irreversible for a large portion of society. These are of course very remote possibilities, but it's almost an apocalyptic scenario. It would help to rule out very widespread autism in the future if you can identify a society where autistic people do well and show that even there the rate is very low. Wnt (talk) 13:08, 21 November 2010 (UTC)[reply]
This is a very interesting question, but I don't think you're going to get a concrete answer, and you're probably asking for opinions, as you've evidently researched this in some depth without finding anything yourself. For starters, I think it's important to question whether autism is maladaptive in today's society; in severe cases it obviously is, but at the less severe end of the spectrum you'll find many scientists and mathematicians - [6] by Simon Baron-Cohen, who has worked on this, e.g. [7]. That makes me wonder whether an argument could be made that, as today's world is more technical and abstract than in the past, maybe autism is adaptive. This is obviously guesswork, but maybe the industrial revolution selected for genes linked with autism, and severe cases of autism are a result of individuals having two recessive alleles or something (sorry, I'm not a geneticist!). This discusses how similar points have been raised regarding schizophrenia and creativity, reminding us of the thin line between genius and insanity (maybe they are the same). These ideas are linked to neurodiversity - it's pretty obvious that there is no such thing as "normal" when it comes to the way people think, and this is a good thing, because we are all good at different things. Combined, this has allowed us to go further than if we all thought in the same way, and as a society we are stronger as a result. I don't think that anyone has written about this, but in the same way that a more genetically diverse animal population is more likely to survive changes in the environment, the same may well apply to the way our brains work, but in a slightly different way. For example, if we need to solve a problem, you may need somebody creative (more schizophrenic) to come up with the idea in the first place, but then someone who pays attention to detail (more autistic) to actually design and implement the solution. This links pretty nicely with insect polymorphism, for example in leaf cutter ants, where the many different castes perform the jobs they do best and the colony as a whole benefits from that. So, I haven't really answered your question, but I've hopefully provided an alternative viewpoint. SmartSE (talk) 14:36, 21 November 2010 (UTC)[reply]
I'm starting to believe that being Neurotypical is a real live example of the Green-beard effect --Digrpat (talk) 23:46, 21 November 2010 (UTC)[reply]
I believe in neurodiversity, and more generally, that both genetic diversity and other forms of phenotypic plasticity and diversity are maintained within many species. I suspect that there are many of us here on Wikipedia who are some small distance into the autism spectrum; even so, it's been my impression that the average family returning from the pediatrician with a diagnosis of autism is not expecting a great mathematician or even an average student. When autism is accompanied by retardation, I don't know whether that is because it is one symptom of a more widespread problem, or whether autism interferes with learning as it occurs in our society. I suppose I've been assuming the latter when asking about ancient or primitive cultures; if it were the former, I suppose that Asperger's syndrome or some otherwise defined subset of the autism spectrum might count as the polyphenism in question above, with more severe autism being something of a red herring. But looking at locusts from an anthropomorphic perspective, I tend to doubt that: a locust who won't follow and eat with the swarm surely must seem profoundly developmentally disabled to its peers.
The "extreme male brain" model of autism seems somewhat offensive, since it's based on the idea (I would be prone to say myth) that women can't do math, science, or technical work. I think that there are (wrong) racist arguments that seem more plausible. To me it would seem more likely that men, lacking a second X chromosome, are somehow more prone to autism on a genetic basis, and a greater than expected number of subtle autistic phenotypes have tinged the studies about differences between male and female. There is also an argument regarding the idea that autism and schizophrenia are opposite ends of a spectrum, and schizophrenics aren't "more female".[8]

vortex launcher : pressure amount area

Hi guys,

Recently I have been puzzling over vortex launchers because of the military non-lethal one that has been proposed. It is designed to produce a vortex with enough strength to knock someone down. At first I thought that this was folly, since the law of equal and opposite reactions meant that if it produced enough force going one way to knock over a person, surely the person holding it would be knocked down as well. Then I began to think: the vortex is caused firstly by an explosion in a 3-inch-diameter blast chamber (read the report). The flow is then expanded by the nozzle as it travels up and out, and arrives at the target as a 2 ft diameter vortex. Would that mean that even though the pressure in the blast chamber is greater than the vortex pressure on the target, it is over a much smaller area, so is not as effective at moving the whole system backwards? A similar example would be a ship on water: if I set off an explosion on a small part of a ship, it may destroy a part of the ship, but the whole thing will not move across the water, yet the much smaller pressure of the wind on the sail, which acts over a significantly larger area, can move the whole ship across the water. Do you think the vortex launcher works something like that: more pressure but over a smaller area at the launcher, so it experiences a jolt, but less pressure over more area at the target, so it is moved backwards (knocked over)?

Does that make sense, or have I missed something?

Many thanks. —Preceding unsigned comment added by 62.3.99.14 (talk) 11:11, 21 November 2010 (UTC)[reply]
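The questioner's pressure-versus-area intuition can be made concrete with a little arithmetic (a minimal sketch; the 3-inch and 2 ft diameters come from the post above, and the equal-total-force assumption is only illustrative):

```python
import math

d_chamber = 3 * 0.0254    # 3-inch blast chamber diameter, in metres
d_vortex = 2 * 0.3048     # 2 ft vortex diameter at the target, in metres

area_chamber = math.pi * (d_chamber / 2) ** 2
area_vortex = math.pi * (d_vortex / 2) ** 2

# For the same total force F = P * A, a larger area needs proportionally
# less pressure; here the target area is 64 times the chamber area.
ratio = area_vortex / area_chamber
print(f"area ratio: {ratio:.0f}x")   # prints: area ratio: 64x
```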

It is not necessary that all the momentum comes from the "vortex launcher", since the vortex interacts with the surrounding air. Also, the shooter can brace for the recoil while the target is unprepared. Even given this, I have difficulty seeing how it could be a practical weapon in most situations. --Gr8xoz (talk) 01:07, 22 November 2010 (UTC)[reply]

Is it safe to drive after drinking cough syrup?

I feel euphoric and experience other pleasant sensations. There's something about cough syrup that just alters the way I feel.

But I have this gut feeling about driving that caused me to ask you here: Is it safe to drive after having had cough syrup? Please let me know ASAP, and don't forget to cite sources. --70.179.178.5 (talk) 13:28, 21 November 2010 (UTC)[reply]

Are you talking about a normal dose, or intentional abuse of dextromethorphan, or something else? There are many different cough syrups with different active ingredients. Do you want to specify one in particular? Wnt (talk) 13:33, 21 November 2010 (UTC)[reply]
Edit conflict - It is policy here not to answer medical advice questions. Please read the package insert of your cough syrup to find out if you can safely drive under medication from its contents, which as user:Wnt pointed out we cannot know. Please ask a qualified Pharmacist or Medical Doctor near your location for their advice regarding this matter. --79.219.104.60 (talk) 13:36, 21 November 2010 (UTC)[reply]
I have added back the above response, which was removed by the OP [9] with the claim that it was 'unconstructive'. Nil Einne (talk) 18:11, 21 November 2010 (UTC)
Regardless of that, do you really want to trust some randomers on the internet about this? Maybe ask your doctor! SmartSE (talk) 13:56, 21 November 2010 (UTC)[reply]
SmartSE, my doctor is not in on weekends. Otherwise I would've called him already. --70.179.178.5 (talk) 14:46, 21 November 2010 (UTC)[reply]
Wasn't there a leaflet in the box? If so, read it carefully. If not, and you can't get advice anywhere else, be safe and don't drive. No-one here can help you any further because we don't know what is in that particular cough syrup. Don't take medicines for a sense of euphoria, only for the medical conditions they are meant for. Itsmejudith (talk) 17:20, 21 November 2010 (UTC)[reply]
Words of wisdom: when a question says "please let me know ASAP, and don't forget to cite sources", it should be treated as trolling. Looie496 (talk) 17:21, 21 November 2010 (UTC)[reply]
More so when the OP removes helpful answers. Nil Einne (talk) 18:11, 21 November 2010 (UTC)[reply]
Most UK medicines in this class have the following warning - "Warning. May cause drowsiness. If affected do not drive or operate machinery." Exxolon (talk) 20:00, 23 November 2010 (UTC)[reply]
In the US, the standard warning is "do not drive or operate heavy machinery while using this product". This was amusing when placed on a bottle of children's cough syrup ... so can we assume it's OK for toddlers to drive and operate heavy machinery after they get over their colds ? :-) StuRat (talk) 20:14, 23 November 2010 (UTC)[reply]

Limits on number of channels in MIMO wireless

In my answer to the question Wireless v Fibre on 16/11 [10] I assumed that the maximum number of independent MIMO channels in future wireless internet connections is limited to 10. This question got me thinking about what the physical limits really are and how they depend asymptotically on the size of the antenna arrays. I have not been able to find any work on this, maybe because I do not know the terminology.

In theory it should be possible to place any number N of antennas in an arbitrarily small area on the transmitter and receiver and get N independent channels, but if the antennas are too closely spaced the problem of separating the independent channels will be very ill-conditioned. It will be a hard inverse problem with a very bad signal-to-noise ratio.

So the question is: how will the number of possible independent channels depend on the size of the antenna arrays, given that the separation of the channels should be a reasonably well-conditioned problem?

This should be the same as saying that the power needed at the sender should scale linearly with the number of channels.

In the case of free-space communication, my guess, based on intuition and a calculation of the angular resolution of the antennas, is:

N = C (D<sub>t</sub>D<sub>r</sub> / (λL))<sup>2</sup>

where N is the number of independent channels, C is a small constant depending on how well-conditioned the system should be, D<sub>t</sub> is the diameter of the transmitter array, D<sub>r</sub> is the diameter of the receiver array, λ is the wavelength and L is the distance between the transmitter and the receiver. I think this should be valid for D<sub>t</sub> ≪ L, D<sub>r</sub> ≪ L, and λ ≪ D<sub>t</sub>, D<sub>r</sub>.

Is this correct?

An example with D<sub>t</sub>, D<sub>r</sub> and L chosen so that D<sub>t</sub>D<sub>r</sub>/(λL) = 3, with λ = 3 mm (100 GHz), would give 9C channels.
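A minimal sketch of that formula in code; the particular values of D_t, D_r and L below are hypothetical, chosen only so that D_tD_r/(λL) = 3 as in the example:

```python
def mimo_channels(d_tx, d_rx, wavelength, distance, C=1.0):
    """Guessed diffraction limit from above: N = C*(Dt*Dr/(lambda*L))**2.
    This is the 2D case; drop the square for the 1D case."""
    return C * (d_tx * d_rx / (wavelength * distance)) ** 2

wavelength = 3e-3   # 3 mm, i.e. 100 GHz
print(mimo_channels(3.0, 3.0, wavelength, 1000.0))   # 9.0, i.e. 9C channels
```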

There is much hype around Orbital Angular Momentum (OAM) [11], but I do not think it will make any difference to this limit; is this correct?

In the presence of reflectors the number of channels can be increased, and in the extreme case, when the transmitter and the receiver are at the focal points of a reflective ellipsoidal cover, the limit should be something like:

In a network the capacity can be increased by using more than one base station.

Any ideas of what number of MIMO channels can be used in the future for fixed data links (point to point), for stationary roof-mounted antennas in a cellular system, and in portable devices? --Gr8xoz (talk) 14:19, 21 November 2010 (UTC)[reply]

If you try to pack too many antennas into a small space, the power required will grow exponentially with the number of antennas, totally negating the benefit, as you could get the same gain by having more bits per symbol anyway. You can try to imagine how many antennas fit on a portable device. At the wavelengths used you could get about two, and if you allow diverse polarization as well, perhaps you can get 4 independent signals on your handheld device. For larger implementations, it is going to be mostly 2-dimensional on the surface of the earth. In a stationary application, you can have a tradeoff between independent antennas and getting gain with a directional antenna and increasing the bits per symbol. Remember also that the cost will be proportional to the number of antennas, so having 1000 antennas in a circle to get 1000 times the bit rate may not be economical; it may be better to run an optical fibre. Graeme Bartlett (talk) 09:53, 22 November 2010 (UTC)[reply]
Thank you for the answer. How many antennas are too many in a given space? I assume it is the number of independent channels that is limited; many antennas sending the same signal do not require more power. I also think that about 4 channels on a hand-held device at today's frequencies is around the maximum, but the trend is for frequencies to go up over time, from 415 MHz NMT to 2.6 GHz 4G and 5 GHz WiMAX; the question is where does it stop, and how many independent channels can be used then.
My impression was that as long as you do not pack the antennas too tightly, the power rises linearly with the number of channels and the capacity, while the power rises exponentially with the number of bits/s/Hz/channel. See Shannon–Hartley theorem.
For the same reason, the needed surface area of a directional antenna will increase exponentially with the number of bits/s/Hz/channel, while the needed surface area of an antenna array in a MIMO system should increase only as the square root of the needed capacity.
Antenna arrays aligned with the surface of the earth are of course the way to go for radio astronomy and deep-space communication. I think that in order to get 2D MIMO you will need a projected area perpendicular to the direction of communication; for communication along the surface of the earth it will become a 1D MIMO system. I was thinking more of integrated antennas on printed circuit boards the size of a normal satellite dish, operating at high frequencies (10-300 GHz). The current fastest data link operates at 85 GHz. I do not see the cost of the antennas as important in such an installation, but of course the transceivers and signal processing cost a lot today, though they follow Moore's law.
An optical fibre is of course a good alternative in many cases.--Gr8xoz (talk) 11:48, 22 November 2010 (UTC)[reply]
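To illustrate the linear-versus-exponential point with the Shannon–Hartley theorem cited above (a standard result, nothing specific to this thread): for fixed bandwidth, the SNR, and hence the power, needed grows exponentially with the spectral efficiency per channel, while adding independent channels only multiplies the power.

```python
# Shannon-Hartley: C = B * log2(1 + SNR)  =>  SNR = 2**(C/B) - 1
def snr_required(bits_per_s_per_hz):
    return 2 ** bits_per_s_per_hz - 1

for b in (1, 2, 4, 8):
    print(b, "bit/s/Hz ->", snr_required(b))   # 1, 3, 15, 255: exponential
```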
Your sensible limit will be determined by diffraction limits, and will be about a square half-wavelength per antenna. Also double that to get the alternate polarization. You could print the antenna on a circuit board, but as you say a 1D arrangement is probably the best that you will do for an earth-surface system. If you can scatter bases over buildings or vehicles then you may be able to take advantage of 2D patterns. I was thinking that if you could have an electrically controlled hologram, you could send a coherent beam of light off in different directions to your base stations. The electric control would enable the beams to be steered to keep them on target as your mobile moved. However, you really need a clear line of sight for this to work. At your 300 GHz you could get 4 per mm (vertical and horizontal for two half-wavelengths). So with a metre of antenna length you could have 4000 beams. Graeme Bartlett (talk) 11:51, 24 November 2010 (UTC)[reply]
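The 4000-beams figure follows from half-wavelength element spacing (the spacing assumption is this sketch's, matching the post above):

```python
c = 3e8                                  # speed of light, m/s
wavelength = c / 300e9                   # 1 mm at 300 GHz
elements_per_m = 1 / (wavelength / 2)    # 2000 half-wavelength elements per metre
print(elements_per_m * 2)                # x2 polarizations = 4000 beams per metre
```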
As I understand it, those 4000 beams will go in all directions; only a few will hit the receiver if it is small compared to the distance. That was the idea behind the formula, where I assume that there is a useful path in any direction; this is the 2D case, and you remove the square to get the 1D case. As I understand it, your suggestion is that C = 4.
I was maybe unclear in my first post: the formula is based on the diffraction-limited angular resolution of each antenna. The angular resolution of the transmitter is λ/D<sub>t</sub>, and the receiver subtends an angle of D<sub>r</sub>/L, which means that the transmitter can point out (D<sub>t</sub>D<sub>r</sub>/(λL))<sup>2</sup> points on the receiver. You can of course use the receiver to resolve the transmitting antennas instead if you want to. The same formula applies if you divide both antenna arrays into directional sub-arrays. Let's say the transmitter can resolve 9 by 9 points on the receiver (81 channels); if we divide both the transmitter and receiver into 3 by 3 sub-arrays, each sub-array on the transmitter can now resolve the 3 by 3 sub-arrays on the receiver, and the sub-arrays of the receiver can resolve and separate the signals from the sub-arrays of the transmitter. Each sub-array of the transmitter can send 9 signals, one to each sub-array of the receiver, and there are 9 transmitter sub-arrays, so the total number of channels is still 81.
Is this the right way to think of it, or can the other beams (4000 in your case) be used by some holographic trick to get significantly more channels?
I also think a hologram is a good way to think of a fully controlled antenna array. When you say "light", do you mean visible light (400 to 800 THz) or more generally electromagnetic radiation, including microwaves? Why do I need a clear line of sight for this to work? I would think that this would work even for reflections as long as they are "mirror-like" (at the relevant frequencies), and that multiple "mirrors" should increase the capacity by simulating more receivers/transmitters. Of course, the problem of channel estimation becomes harder in an environment with unknown reflections, such as a city. --Gr8xoz (talk) 14:03, 24 November 2010 (UTC)[reply]

Origin of particle diversity

What is the origin of such a diverse world of elementary particles (excluding the divine one)? As the vacuum state article says, the default vacuum has a zero-point energy, so (if I'm not mistaken) it can support only Higgs boson or photon annihilation at best. How can low-energy Higgs bosons or photons produce so many different particles if they were the original stuff for annihilation? (Leaving aside the Big Bang as the high-energy origin, because the reason for the explosion and the origin of the pre-Bang ingredients are so far unclear.) Twilightchill t 14:33, 21 November 2010 (UTC)[reply]

Nobody knows what the origin of particle diversity is. One of the ambitious goals of string theory and other Theories of Everything is that, if correct, one of these theories will clear up why things are the way they are — it'll turn out that logically they have to be the way they are. (An analogy — in 1913, it was a good question to ask, why are the Bohr atom electron orbits exactly where they are? Doesn't that seem arbitrary? But in 1924 De Broglie showed that their orbits are absolutely necessary according to the idea that the electron was actually a standing wave that could not cancel itself out. What seemed arbitrary at first was really just an imperfect understanding of the nature of the physical universe itself, which really did not allow alternatives.) It's certainly the most ambitious goal of modern physics to figure out why things are as they are, and not some other way. There are some who believe that universes pop into and out of existence all the time (e.g. in the multiverse), and so what we have is a quasi-Darwinian situation where of course the only universe that we happen to have come into existence in happens to be not only stable but contain a mixture of things that are allowable for the formation of suns and planets and water and life (see anthropic principle). For some that's a good enough answer, for others it isn't. --Mr.98 (talk) 14:40, 21 November 2010 (UTC)[reply]
"The default vacuum has a zero-point energy, so it can support only Higgs boson or photon annihilation at best" makes no sense to me, and I think it's simply wrong. In modern physics the vacuum effectively supports everything; we are perturbations of the vacuum. It's also not true that photons and Higgs bosons were the first particles. What determines the particle types is the various ways in which the vacuum can vibrate, so the question is why the vacuum can vibrate in those ways and not others. The various grand unified theories are about finding a simple vacuum "structure" that can vibrate in those ways and not others. For example, in SO(10) grand unification you can imagine there's a 10-dimensional sphere attached to every point in spacetime, and the various Standard Model bosons (except the Higgs) arise as rotations of the spheres, while the different fermions are more or less different spherical harmonics of the spheres. These models predict additional bosons arising from other rotational motions of the spheres, so one has to find a way to explain why those bosons haven't been seen. But that's not too difficult; you can invent a mechanism similar to the one that makes the weak force so weak. Unfortunately, these theories are difficult to test, so we remain ignorant about what's really going on. Hopefully data from the LHC will finally clarify things, but it might not. -- BenRG (talk) 02:22, 22 November 2010 (UTC)[reply]

Electroplating With Lemon Juice?

The other day I decided to clean some extremely corroded and grimy coins that I had lying around my room. One was a United States nickel and the other a US penny. In order to clean the coins, I placed them in a small container of lemon juice in hopes that the acid would help remove some of the grime. After a day, I removed them from the container and, to my surprise, the penny was delightfully shiny, while the nickel had turned copper as well! I assumed that some sort of electroplating had occurred, but the container was never exposed to an electrical field. Could someone please help explain to me what has happened to these coins? Thanks! Stripey the crab (talk) 14:51, 21 November 2010 (UTC)[reply]

There was an electrical field, which was set up by the battery you inadvertently created. See Galvanic cell. Going from metallic nickel and copper ions to metallic copper and nickel ions is chemically favorable (see electromotive series and Standard electrode potential (data page)), like going from metallic zinc and copper ions to metallic copper and zinc ions. It's just that, instead of having two separate electrodes with a wire between, you have different parts of the same coin acting as the two electrodes and the wire. -- 174.24.198.158 (talk) 19:26, 21 November 2010 (UTC)[reply]
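A minimal numerical sketch of why the displacement is favorable, using textbook standard-state reduction potentials (the values are supplied here for illustration, not measured in lemon juice):

```python
# Standard reduction potentials, in volts:
E_Cu = +0.34   # Cu2+ + 2e- -> Cu(s)
E_Ni = -0.26   # Ni2+ + 2e- -> Ni(s)

# Overall displacement: Cu2+ + Ni(s) -> Cu(s) + Ni2+
E_cell = E_Cu - E_Ni
print(f"E_cell = {E_cell:+.2f} V")   # +0.60 V > 0, so the reaction is spontaneous
```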
Waste of lemon juice. --90.219.114.59 (talk) 23:16, 21 November 2010 (UTC)[reply]
I want to repeat this experiment. Will vinegar work for the acid? Will it help to put some table salt in it? Should the coins be touching or not? —Bkell (talk) 05:01, 22 November 2010 (UTC)[reply]
Those are good experiments to try. If we told you what the results would be, they wouldn't be experiments. Cuddlyable3 (talk) 09:08, 22 November 2010 (UTC)[reply]
Heh, okay. :-) —Bkell (talk) 14:23, 22 November 2010 (UTC)[reply]
I don't think that the coins have to touch. The copper(II) oxide and carbonate on the penny coin dissolves in the acid to make copper(II) acetate. The copper(II) acetate reacts with the nickel in the nickel coin to make nickel(II) acetate and copper metal. Sorry for ruining your experiment, do it anyway! --Chemicalinterest (talk) 15:28, 22 November 2010 (UTC)[reply]

Enema washes aren't enough; will taking many laxatives help lose substantial weight?

Hi, the Enema Wash on my BioBidet BB-i3000 ( http://www.BioBidet.com ) isn't doing enough of a job at forcing weight out of my system. It has improved my weight loss, but not by enough in my opinion. I currently weigh 178 lbs. at 5'11.5", but will try to get down to a nice, round 150 lbs. (below 135 is underweight.)

So if I take a lot of laxatives, how much will that help me lose more weight? Also, what ill side-effects might it have?

Besides, what ill side effects will giving myself extra enema-washes from bidet-seats give me? --70.179.178.5 (talk) 14:56, 21 November 2010 (UTC)[reply]

Request for medical advice removed. Please consult an appropriate physician for advice on any aggressive weight loss scheme. 71.228.185.250 (talk) 15:05, 21 November 2010 (UTC)[reply]

Mass of energy?

Something is wrong with this set-up, but I'm not sure what. Let's say I turn a kilogram of mass into (1 kg)*c^2 of energy. If energy is massless, I should be able to lift it up ten metres for free. I then turn it back into matter, run it through a watermill, and generate free energy. Is the problem in my assumption that energy is massless? I'm guessing so, given that black holes can attract light. 142.244.236.20 (talk) 22:06, 21 November 2010 (UTC)[reply]

If you can turn a Kilogram of mass into energy, why are you bothering with watermills? AndyTheGrump (talk) 23:11, 21 November 2010 (UTC)[reply]
Because I don't want my kilogram consumed in the process; it has sentimental value. 142.244.236.20 (talk) 23:13, 21 November 2010 (UTC)[reply]
And because antimatter is really expensive... Googlemeister (talk) 15:08, 22 November 2010 (UTC)[reply]
It turns out that moving your equivalent-to-one-kilogram-of-matter chunk of energy up a gravity gradient costs exactly the same amount of energy as moving the original one-kilogram mass. For example, if you point a flashlight straight up, the photons will get (ever-so-slightly) redder as they gain altitude. (See gravitational redshift for more details.) Your kilogram mass at ground level will be just a little bit lighter once you've lifted it up ten meters, whether you convert it to energy or leave it as matter during the trip. Each time you repeat the cycle, you lose a little bit of mass/energy — that's where the energy coming out of the watermill in your thought experiment is coming from. TenOfAllTrades(talk) 23:30, 21 November 2010 (UTC)[reply]
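A rough numerical check of this point, in the weak-field approximation (this sketch assumes uniform g over the 10 m):

```python
c = 2.998e8    # speed of light, m/s
g = 9.81       # surface gravity, m/s^2
h = 10.0       # metres lifted
m = 1.0        # kg

E = m * c**2             # ~8.99e16 J locked up in the kilogram
shift = g * h / c**2     # fractional redshift over 10 m, ~1.1e-15
print(shift * E)         # ~98 J, i.e. exactly m*g*h: the watermill gains nothing for free
```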
What do you mean when you say my kilogram will be lighter after the trip even if I leave it as matter? Does something with more potential energy have less mass? Where did that mass go, did it lose a very small number of atoms? 142.244.236.20 (talk) 02:11, 22 November 2010 (UTC)[reply]
For light to climb out of a gravity well it cannibalizes some of its own energy to do so, so it ends up massing less (compared to other objects also out of the gravity well). On the other hand, if you compare that mass to something at the ground, you will find no change in the mass, because the gravitational potential energy adds to the mass (so it traded light-energy into gravitational potential energy). Yes, I know it's complicated. If you carry matter up against gravity, you have to add energy externally to do it, since you can't steal energy from the mass itself; this energy you added will, of course, add to the mass of the matter. No atoms are gained or lost; the change in mass (and weight) is entirely due to the change in potential energy. Potential energy from gravity is particularly hairy to calculate, since it's totally relative to what you are measuring against. There is a reason scientists usually deal only with rest mass when possible: calculating the true mass of something is not just hard, it's also not a single number; it depends on what you compare to. Ariel. (talk) 02:52, 22 November 2010 (UTC)[reply]
The fundamental assumptions in the question are flawed: mass is identical to energy. Mass cannot be turned into energy; it is energy, and 1 kg will be 1 kg = 9e16 J regardless of what form of energy it is. Energy is not massless, it is mass, so the OP's suspicion about the flaw is correct. Matter is a category of energy that covers almost everything we normally think of as mass; it is not an entirely well-defined category. --Gr8xoz (talk) 00:58, 22 November 2010 (UTC)[reply]
TenOfAllTrades gave you the correct answer, but I wanted to add that energy is not massless. Not only does it have mass, it also has weight. So if you had the energy in 1kg all as light (photons), somehow stored in a box, that box would actually weigh 1kg just from the energy. Ariel. (talk) 01:47, 22 November 2010 (UTC)[reply]
If photons have mass, how can they travel at the speed of light? 142.244.236.20 (talk) 02:11, 22 November 2010 (UTC)[reply]
Because although a so-called massless particle like a photon has mass, it has a rest mass of zero. See mass in special relativity. Red Act (talk) 02:35, 22 November 2010 (UTC)[reply]
The fundamental flaw in the question is the assumption that matter can be turned into "pure" energy. You can't. Energy is not a substance; it is a quantity that has to be ascribed to a material system depending on its mass and state of motion. Indeed, mass and energy are not the same thing, however often you write down E = mc<sup>2</sup> (that equation is only valid/useful in the rest frame of the mass m; there is the alternative interpretation of "relativistic mass" advocated by Gr8xoz, but that is not useful (because it hides the physical difference between mass and energy) and is not used in physics any more). Radiation is not "pure energy" either; it is better viewed as a substance, i.e. matter in a broad sense. A single photon always has zero mass, but a collection of photons (say, a photon gas confined within a box of negligible mass) can have an effective mass. May sound strange, but that's how special relativity works. --Wrongfilter (talk) 10:06, 22 November 2010 (UTC)[reply]
I'm sorry, but this is incorrect. A single photon has mass, and so does a collection of them. Matter CAN be turned into "pure" (as you call it) energy. For example you can convert matter into kinetic energy (assuming you have some anti-matter, but anti-matter is still matter). Mass and energy are in fact the same thing, it's impossible to make a logically consistent distinction between them. You can distinguish between zero rest mass and non zero, but that's all. Ariel. (talk) 11:40, 22 November 2010 (UTC)[reply]
As I said, there are two conventions. The one that is in use in theoretical physics (have a look at a current textbook on relativity) uses mass for the invariant length of the 4-momentum vector, and energy for its time component (3-momentum gives the three spatial components). The full equation is E<sup>2</sup> = (mc<sup>2</sup>)<sup>2</sup> + (pc)<sup>2</sup>. Energy and mass (and 3-momentum) are intricately linked, of course, but they are not the same. If you annihilate matter and anti-matter you don't get pure energy. You get radiation, which has energy. --Wrongfilter (talk) 12:49, 22 November 2010 (UTC)[reply]
Ummm ... if energy and mass are not the same, then perhaps you can demonstrate this by giving an example of a system that has energy without mass, or mass without energy - or a process that changes a system's mass without changing its energy, or vice versa ? Gandalf61 (talk) 13:49, 22 November 2010 (UTC)[reply]
A single photon with energy E has momentum p = E/c and zero mass. If I switch reference frames, the photon's frequency, and thus its energy and momentum, will change (Doppler effect in SRT), but the mass will remain at 0. A system of two photons (of equal frequency) that are moving in opposite spatial directions has energy 2E and total momentum p = 0, hence mass m = 2E/c<sup>2</sup>. That's why a box containing a gas of photons whose momenta cancel each other (so that the box as a whole stays at rest in my reference frame) will have an effective mass which is (numerically) equal to the sum of their energies divided by c<sup>2</sup>. There is no system with non-zero mass and zero energy; this is precisely the meaning of E = mc<sup>2</sup>. For a system that is in motion as a whole, the energy will always be larger than its mass (multiplied by c<sup>2</sup>). Mass is an intrinsic property of a system which is invariant with respect to changes of reference system; energy is not. The mass of a particle tells you what states of motion are possible for it; the energy (and the momentum) tell you what the actual current state of motion of the particle is. --Wrongfilter (talk) 15:14, 22 November 2010 (UTC)[reply]
Locally, incidentally, any process conserves 4-momentum, hence energy and (ordinary) momentum, hence also mass. That's the shortest answer to OP's question. --Wrongfilter (talk) 15:16, 22 November 2010 (UTC)[reply]
So with that definition of mass, two photons of equal frequency travelling in opposite directions have a joint energy of 2E and a joint mass of 2E/c<sup>2</sup> - but if you consider each photon individually, it has an energy of E but a mass of 0 ? Gandalf61 (talk) 16:55, 22 November 2010 (UTC)[reply]
That's weird, isn't it? I think it has to do with the fact that the concept of "opposite directions" is not invariant. If you go to another reference system the angle between the directions changes. --Wrongfilter (talk) 17:17, 22 November 2010 (UTC)[reply]
See Mass in special relativity for a discussion of the distinction between "invariant mass" and "relativistic mass" which is what this argument comes down to. Which is the true "mass" is a question of semantics. Rckrone (talk) 18:19, 22 November 2010 (UTC)[reply]
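A small numerical illustration of the two-photon example discussed above (a sketch in units with c = 1; the 4-vectors are toy values):

```python
import math

def invariant_mass(particles):
    """Invariant mass of a system from its summed 4-momentum (E, px, py, pz), c = 1."""
    E  = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

right = (1.0,  1.0, 0.0, 0.0)   # photon: E = |p|, so individually massless
left  = (1.0, -1.0, 0.0, 0.0)
print(invariant_mass([right]))         # 0.0
print(invariant_mass([right, left]))   # 2.0, i.e. 2E/c^2 for the pair
```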


This is one of the reasons that I think the concept of relativistic mass deserves a little more respect than it sometimes gets. Intuitively, the mass of the whole should be the sum of the mass of the parts. That's true for relativistic mass, not true for invariant mass. --Trovatore (talk) 18:17, 22 November 2010 (UTC)[reply]

Pre-mitochondrial eukaryote

How did eukaryotes process energy before merging with mitochondria? 142.244.236.20 (talk) 22:08, 21 November 2010 (UTC)[reply]

See Mitochondrion#Origin and Endosymbiotic_theory. The pre-eukaryotic cells would have probably used biochemical pathways like glycolysis and fermentation (biochemistry). --- Medical geneticist (talk) 22:26, 21 November 2010 (UTC)[reply]

Evolution without Cretaceous–Tertiary extinction event

Would there have been major evolutionary disruptions if the Cretaceous–Tertiary extinction hadn't occurred? I think evolution would have taken much more time at best, and man would have had little chance to pop up and survive... —Preceding unsigned comment added by 89.77.158.172 (talk) 23:36, 21 November 2010 (UTC)[reply]

This is very hypothetical. Still, I should point out that mammals filled the ecological niches left by the extinction, for example, by producing megafauna. I think the question here is whether dinosaurs or mammals would have better adapted to the climate changes over the last 65 millions years, such as the ice ages. —Arctic Gnome (talkcontribs) 23:47, 21 November 2010 (UTC)[reply]
Of course life on earth would be very different if the C–T extinction hadn't taken place. Essentially what you are asking is: if life had evolved differently, would it be different?.. How different is impossible to say from our perspective. Every evolutionary biologist would love to have a machine that could replay evolution from a chosen point in earth's history; that kind of machine would answer a lot of questions. I would also just point out that "evolution would have taken much more time" is a meaningless statement. Evolution doesn't have a "vector" or purpose except for fitness; the vectors can only be drawn in post hoc. Whatever Star Trek might have you believe, bipedal tetrapods with big brains are not the ultimate goal of evolution. Vespine (talk) 02:38, 22 November 2010 (UTC)[reply]
How many feet do bipedal tetrapods have? --Lgriot (talk) 12:38, 22 November 2010 (UTC)[reply]
Two. WikiDao(talk) 15:38, 22 November 2010 (UTC)[reply]
It is 100% certain that our species would NOT exist. Cause and effect could not conceivably have led to our species, given such a fundamentally different foundation. It is literally inconceivable. 63.17.93.42 (talk) 04:24, 22 November 2010 (UTC)[reply]
Our article on the extinction (a featured article, no less) mentions that there is not universal support for the exact cause of the extinction, which makes it extremely difficult to figure out how "it" (the cause of the extinction) not happening might affect everything else. In terms of dinosaurs, the article confirms my recollections of the current state of evidence: "The dinosaur fossil record has been interpreted to show both a decline in diversity and no decline in diversity during the last few million years of the Cretaceous..." and "Whether the extinction occurred gradually or very suddenly is debatable, as both views have support in the fossil record." The point being that if non-avian dinosaurs were declining in numbers for whatever reason anyway, their continued existence may not have played much of a part in affecting the survival of birds and mammals that were around at the time. Matt Deres (talk) 14:52, 22 November 2010 (UTC)[reply]


November 22

Human Dissection

Hello. Pretend that I was born yesterday. Why is the identity of a human subject about to go under dissection concealed? Thanks in advance. --Mayfare (talk) 04:21, 22 November 2010 (UTC)[reply]

Well the obvious one is in case someone observing knows the donor. Do you need more reason than that? Vespine (talk) 05:37, 22 November 2010 (UTC)[reply]
There are laws[12] [13] [14] against desecrating corpses. Perhaps the dissector(s) wish to avoid a writ of Habeas corpus.
I congratulate the OP on their first day of speaking English. Cuddlyable3 (talk) 08:57, 22 November 2010 (UTC)[reply]
That's high praise indeed from you, C3! Actually, the rights of the prosector are generally protected by the various Human Tissue Acts against prosecution for desecration of a corpse where the decedent has donated their body. I know C3's comment about habeas corpus represents comedy for the linguist, but I thought I'd just clarify that. The main reason is as per Vespine's response: to preserve confidentiality for the donor and his or her family. Mattopaedia Say G'Day! 10:14, 22 November 2010 (UTC)[reply]

engineering survey

What is the importance of surveying to engineers?

We have articles on Surveying, Civil engineering and Military engineering that may be of interest. Itsmejudith (talk) 13:35, 22 November 2010 (UTC)[reply]
Something interesting: about a year ago a company I used to work for employed a brand-new technology (at least in New Zealand). This technology uses sonar scanning to methodically confirm the homogeneity of newly cast concrete support columns. Homogeneity is important in establishing the strength of concrete. Concrete that is not mixed well has large zones of differing densities. Even if the bulk of a structure is dense and strong, a single weak spot of low density may cause a catastrophic failure. This may not answer your question, but I think that applying sonar technology in such a way is really neat.
Surveying is very important to engineers; no, actually, it is key. Without surveying, engineers would basically be out of a job. Surveying gives crucial information about the terrain, so that engineers know what they're dealing with. It allows them to know how to stop a building from sinking into the ground or toppling over. Plasmic Physics (talk) 13:56, 22 November 2010 (UTC)[reply]
Please note that not all engineers work with buildings. Surveying is not very important to electrical, chemical or aerospace engineers. Googlemeister (talk) 15:06, 22 November 2010 (UTC)[reply]

Tin(IV) sulfate?

From the tin(IV) oxide article:

Similarly, SnO2 dissolves in sulfuric acid to give the sulfate:[3]
SnO2 + 2 H2SO4 → Sn(SO4)2 + 2 H2O

Does tin(IV) sulfate really form like that? If it is so easy to make, it deserves an article. --Chemicalinterest (talk) 15:25, 22 November 2010 (UTC)[reply]

Be WP:BOLD! shoy (reactions) 16:01, 22 November 2010 (UTC)[reply]
But does it exist? Unfortunately, my chemistry experiments have been terminated and I am unable to do some WP:OR to prove that it does or doesn't exist. --Chemicalinterest (talk) 17:49, 22 November 2010 (UTC)[reply]
Stannic sulfate appears to be CAS 19307-28-9. Can also be made by dissolving certain forms of tin metal in concentrated sulfuric acid.(ref: doi:10.1021/ie50259a027) Commercially available from Fluka (maybe as a sulfuric-acid adduct, if you feel like believing their catalog), with a note that it's a reagent used for some standard analytical-chemistry procedure. DMacks (talk) 18:13, 22 November 2010 (UTC)[reply]

Electric heater efficiency

I just got a "Nordik Ceramic Heater" (the kind of electric heater that won't set things on fire if they touch it) and on the box it claims "HIGH EFFICIENCY ceramic elements power consumption / instant settings at High and Low". Am I wrong that (other than radiative losses through windows, sound energy escaping the room, etc.) all electric heaters are ~100% efficient, that pretty much being the definition of a resistive electric load? And that there is essentially no difference in conversion efficiency between a ceramic-element heater and a wire-element heater? And if I'm right, what could be the basis for the claim of "high efficiency"? Thanks! Franamax (talk) 17:44, 22 November 2010 (UTC)[reply]

You're not wrong; the basis might be different meanings of "efficiency". For instance, one could make such a safe heater by wrapping an unsafe heater in a large amount of (non-flammable) insulation, but then it wouldn't heat as quickly (and you would probably have to run the heating element only part of the time to prevent overheating). If they've made a heater with no exposed extreme temperatures but also have some (convective?) system for emitting heat quickly, that could be called "more efficient" in that you wouldn't wait so long for the effects. It might also radiate more rather than conduct; then if you sit by it more of it heats you directly rather than heating a rising plume of air that you're not sitting in. Then you would feel warmer for the same energy spent (or feel as warm for less energy). --Tardis (talk) 18:42, 22 November 2010 (UTC)[reply]
(ec)Yes, that's true from a heat/temperature point of view. But humans don't measure temperature, they measure how fast heat is gained or lost by the skin. So a process that makes the skin feel warm, without actually heating the room can be more than 100% efficient by some measures. That's the idea here - it sends infrared heat to your skin, without (fully) heating the room in between. How effective this is is debatable, because it will only work for the side of you that is facing the machine. Ariel. (talk) 19:33, 22 November 2010 (UTC)[reply]
Sounds a lot like the EdenPURE heaters with highly dubious and misleading claims, hawked by Bob Vila infomercials and direct mail ads, and sold for over 10X what a normal space heater would cost. I agree that all electric space heaters are probably over 99% efficient, with light and sound going through the windows, and chemical changes such as burning dust on the electric wires, being the sources of the <1% inefficiency. So, it's simply not relevant to compare this type of efficiency in an electric heater.
Other types of inefficiency mentioned in the EdenPURE ads were:
1) Heating the ceiling more with other space heaters. This is definitely an issue with convection heaters, not sure I believe their claim that other forced air heaters are worse than them that way.
2) Heating the basement, walls, and duct work. An issue with central heating, but then again, you need to heat those areas somewhat or your pipes will freeze.
3) Heat going up the chimney. Also an issue with central heating, but you need some heat to go up the chimney or you get condensation and poor draft. StuRat (talk) 19:05, 23 November 2010 (UTC)[reply]

Can we really digest enzymes "whole," or are they broken down?

So many natural foods these days tout the amount of enzymes they contain, and there is so much advice about eating raw or less-cooked foods so that we don't "destroy" the enzymes in them.

My question is: don't we digest all proteins into amino acids? And if so, does it matter if we're not consuming enzymes in our newest energy bars or raw foods, so long as we're consuming all the amino acids we need to consume?

I've tried to research this, but the enzyme page doesn't talk about it, and searching "digest enzymes" on Google turns up all the enzymes that are used to digest things. Is there any published information that deals with the question of whether proteins and enzymes are broken down by the digestive system before they are absorbed, and whether it actually makes any difference to your body whether the proteins have been denatured by cooking?

Thanks! — Sam 63.138.152.135 (talk) 17:51, 22 November 2010 (UTC)[reply]

From here, "The enzymes naturally present in food play an important role in digestion by helping to predigest the ingested food in the upper stomach". I used the Google search term "health benefits enzymes in food". The claim sounds plausible, although I'm not sure whether the enzymes in the food would still function in the acid environment of the stomach. Franamax (talk) 17:59, 22 November 2010 (UTC)[reply]
Hmmm, that's interesting, and is likely what people are talking about when they refer to the benefits of eating enzymes. That said, I can't find anything about "Enzyme University," and they don't have any sources, and they seem to be (passively) pushing supplements, so I'm not certain that they're an authoritative source. — Sam 63.138.152.135 (talk) 18:16, 22 November 2010 (UTC)[reply]
Yes, note I said "sounds" plausible, not "is" plausible. The top Google hits are all sites that think they have something that would be good for you if only you would buy it on a regular basis. They seem to support their claims with anecdotal evidence like "people have more energy" rather than double-blind studies or analysis of undigested nutritive content in the stool. And how would you do a double-blind study anyway? people can usually tell the difference between cooked and raw food. It's also possible that people feel more energy because they get tired of chewing the raw food and don't overstuff themselves as on tasty cooked food. The key measure is how active these enzymes are in the time between mastication of the food and saturation in stomach contents at pH 3.5-4.5, and how active the enzymes are at that pH level. You would likely have to find a much more detailed source to get that information. Franamax (talk) 20:51, 22 November 2010 (UTC)[reply]
I located an "Enzyme University" at [15]. Those responsible for the site clearly have a better knowledge of biology than the average quack, they cite patents that they've developed, and they appear to have some common basis with known products such as the lactase in Lactaid (see [16]). That said, I think that some caution is still required, because I don't think that they are marketing these things as drugs — they aren't mentioning safety and efficacy studies in these pages. The enzymes come from various odd sources, such as fungi, which might not ordinarily be eaten.
I would question the advisability, for example, of trying to replace pepsin activity lost by antacid with an outside enzyme that is active throughout the digestive tract [17], because I would worry that if it works, it might digest signalling proteins on the outside of cells in the gut and end up causing cancers. (Note that chronic use of antacid is definitely a problematic treatment in itself, addicting the user by increasing the stomach's acid production...) Human digestive proteins might have special safety features still unknown to science, and the acid requirement that the product circumvents is a basic safety shut-off that prevents them from damaging the duodenum and other portions of the gut. I should note that the Japanese have been the undisputed masters of coming up with various enzymes and artificially digested foods, but they also have such a high rate of stomach cancer that they have to get tested for it routinely like people get colonoscopies in the U.S.
I think that when you take something as a nutritional supplement rather than a drug, you should know for sure that it is something that has been consumed, either as food or as herbal medicine, for hundreds if not thousands of years. Otherwise you should demand clear, modern evidence of safety. Wnt (talk) 22:18, 22 November 2010 (UTC)[reply]
I agree that that site isn't entirely quackery, but: "The enzymes naturally present in food play an important role in digestion by helping to predigest the ingested food in the upper stomach. Cooking and processing destroys the natural enzymes found in foods. This places the full digestive burden on the body, which can cause extra stress on the digestive system, leading to incomplete digestion. As a result, vital nutrients may not be released from the food for assimilation by the body." is pretty flawed reasoning in my opinion. The reason we cook food is to break down plant cell walls and denature proteins so that our digestive enzymes can get at the molecules in food and digest them. Cooking will stop the enzymes naturally present in food from functioning, but those aren't enzymes that break down food as our enzymes do, since self-digesting isn't a good idea! Also, plants have been fighting an evolutionary war against herbivores for millions of years; some of their defences, e.g. tannins and protease inhibitors, decrease the nutritional quality of plants to try and make herbivores eat other plants. It's unlikely that plants contain enzymes that would do the opposite and make themselves more digestible. We've evolved so that we are now fairly dependent on cooking to pre-digest food, and I find it difficult to believe that our bodies need any help in digesting food, since the human gastrointestinal tract seems to do a pretty good job. There are of course far more enzymes in our guts than just the ones we produce, thanks to the gut flora, which digest many compounds which we can't digest ourselves. I'm always extremely sceptical of a company saying their product is useful, especially when what they are selling is basically just purified mould, as they obviously have a conflict of interest. I can't find any independent studies about these supplements, and their own research is based on a mechanical model (!) of the digestive system which also "set out to prove" they were effective - not a good way of doing science. SmartSE (talk) 14:57, 23 November 2010 (UTC)[reply]
I fully agree with you that the "we need enzymes in our food to aid digestion" notion is dubious, but I would hesitate to endorse the claim that the (major) purpose of cooking is to break down foods so that our own digestive enzymes can get at them. It's worth remembering that most cells do contain and can manufacture enzymes designed for partial or total self-digestion (see autophagy, apoptosis), but that their function is generally tightly regulated while the organism is still alive. As well, many foodstuffs fresh from the field are going to come with (often-undesirable) hangers-on: microbial contaminants that very much do want to digest their hosts, and secrete all the necessary enzymes to do so. Cooking, smoking, salting, drying, and pickling are all techniques that we use to denature proteins in order to discourage spoilage and inactivate pathogens that would otherwise very quickly pre-digest our foods — whether we wanted them to or not. (It goes without saying that these preservation techniques all also inhibit or inactivate the pathogens responsible for many diseases, which is another perk of a fully-cooked diet.)
Cooking also has the benefit (in many cases) of improving the taste of food. We have evolved to prefer sweeter, fattier foodstuffs (in order to get the calories we need and avoid starvation); cooking foods allows us to simulate these flavors and satisfy that evolutionary imperative (through caramelization and the Maillard reaction for sweetness, and the liquefaction of animal fats for that fatty mouthfeel). Cooking is (evolutionarily speaking) a very recent development; I would be extremely surprised to find that we were in any way dependent on it. TenOfAllTrades(talk) 15:43, 23 November 2010 (UTC)[reply]
Cooking food with fire is far from a recent development; it seems to be older than Homo sapiens: see Control of fire by early humans. The human gut is shorter because we eat cooked food, and so don't need such long digestion to extract nutrition. I have never heard (in a reliable source) of someone maintaining a raw food diet at a stable weight: in every account I've seen, the person loses weight until they reintroduce non-raw food, or they keep some processed/cooked food in their diet. We are half-cyborg, reliant on our technology :) 86.166.40.102 (talk) 19:20, 23 November 2010 (UTC)[reply]

Niña

Is there a name for a ship like the Niña that has two square-rigged sails in the front and then a lateen sail at the back? --The High Fin Sperm Whale 19:31, 22 November 2010 (UTC)[reply]

A Carrack. --Dr Dima (talk) 20:14, 22 November 2010 (UTC)[reply]
More recently, a barque. Mikenorton (talk) 23:00, 22 November 2010 (UTC)[reply]
Actually, barques have a gaff rigged sail at the back. Thanks anyway. --The High Fin Sperm Whale 23:31, 22 November 2010 (UTC)[reply]
The Nina and the Pinta were Caravels, originally caravela latinas (lateen sails on all three masts), but re-rigged as caravela redondas to follow the trade winds on the outward leg. And our article on the caravel could really use some work! --Stephan Schulz (talk) 07:52, 23 November 2010 (UTC)[reply]

Soda can exploding

I'm sorry for posting what is probably a trivial question, but why does a can of soda "explode" when opened after shaking? I've checked online, but all of the answers seem either very vague or plain wrong.

Correct me if I'm wrong, but as I understand it, carbonated liquid contains dissolved CO2, like a solution. But some of the gas can form microbubbles (of their own accord or with the help of nucleation sites? I don't know...). When an unshaken can is opened, the liquid's pressure drops, and the microbubbles are then able to grow in size and leave the liquid. But why would shaking cause so many more bubbles to form? 70.52.44.192 (talk) 21:16, 22 November 2010 (UTC)[reply]

Part of it could be the same reason why bubbles appear in a shaken container of water, a simple mechanical action. But I do wonder if that's not all.
Because when I've shaken a sealed can of soda, it felt like the can became cooler... Wnt (talk) 21:43, 22 November 2010 (UTC)[reply]
There's an article at livescience. The fizz has to do with pressure equilibrium, in that when you open the bottle, the pressure inside the bottle equalizes with whatever the pressure is outside the bottle. It may get slightly colder since the dissolved carbon dioxide may take some energy to escape. See Solubility, Vapor pressure, partial pressure, Le Chatelier's principle, Pressure, and Boyle's law. Smallman12q (talk) 00:10, 23 November 2010 (UTC)[reply]
That explains the fizz, which I understand, but not why there's a massive increase in bubbles when the soda is shaken (I don't count the webpage's explanation, which says that shaking "adds the zing needed to unleash more tiny bubbles and add real splash to a celebration", as a good explanation).
It's very simple: your soda can is not full to the brim with soda. Rather, to keep the soda carbonated during storage, it contains a quantity of compressed gas. When you open a can of unshaken soda, the gas is at the top of the can and can escape freely. When you open a shaken can, the gas is mixed with the soda, and carries a good deal of the soda with it as it escapes. --Carnildo (talk) 00:25, 23 November 2010 (UTC)[reply]
That's certainly part of it, but it can't be all of it. Try it with a clear coke bottle. Shake the bottle vigorously and then open it as soon as the bubbles on the surface break. (Tap the bottle to speed this process.)
In any case, the volume of soda lost from a shaken can often far exceeds the volume of undissolved air in the can, which would not be possible in your theory. APL (talk) 03:38, 23 November 2010 (UTC)[reply]
Do you happen to know the answer to the question then? 70.52.44.192 (talk) 05:01, 23 November 2010 (UTC)[reply]
There is a link at the bottom of the carbonation article which has the answer you are looking for: Whirlpools in a soda pop Explains why shaken soda bottle will spray soda when opened. Vespine (talk) 05:56, 23 November 2010 (UTC)[reply]
Thanks Nil Einne (talk) 11:56, 23 November 2010 (UTC)[reply]
I forgot to mention, but the proper term for "fizzing" would be effervescence. Smallman12q (talk) 13:03, 23 November 2010 (UTC)[reply]
Your clear-bottle experiment doesn't take into account microbubbles. When you vigorously shake the bottle/can, you break up the large gas bubble into a range of bubble sizes, some of which are very small. They'll re-dissolve/merge with the big bubble eventually, but are stable on a short (minutes) timescale. When you release the cap, the sudden decrease in pressure means that the dissolved carbon dioxide comes out of solution on those small nucleation sites, resulting in rapid bubble expansion. As to your second point, there is a fair quantity of liquid in soda foam. If you pour a glass of soda with a large head of foam, and then watch the soda level, you'll see that it comes up a fair bit as the foam breaks. Having the foam be 50% liquid is not unheard of. In addition, as the microbubbles are distributed throughout the bottle and expansion is rapid, the ones at the bottom can push the soda out of the bottle. -- 140.142.20.229 (talk) 18:36, 23 November 2010 (UTC)[reply]
Does the above explain the fountain or geyser you get when you drop a polo mint into a large bottle of cola? Edit: see Diet Coke and Mentos eruption. 92.15.6.86 (talk) 13:10, 23 November 2010 (UTC)[reply]
The article that you linked contains the explanation and links to further research. TenOfAllTrades(talk) 14:52, 23 November 2010 (UTC)[reply]

Thanks. 70.52.44.192 (talk) 15:11, 23 November 2010 (UTC)[reply]

Flaxseed oil and autism

Is there any real evidence for a connection between taking flaxseed oil, or omega-3 fatty acids, and preventing autism? And does autism have anything to do with dopamine? Thanks in advance. AdbMonkey (talk) 22:23, 22 November 2010 (UTC)[reply]

This is too controversial a subject to attempt to answer. The best and most informative book I have ever come across is this one. [18] Borrow a copy from your local friendly medical library. You will never get a sensible discussion about this subject here. It is too specialised.--Aspro (talk) 23:21, 22 November 2010 (UTC)[reply]

Ohhh. Thank you. AdbMonkey (talk) 05:34, 23 November 2010 (UTC)[reply]

See Autism#Causes for a general discussion. From the article, it looks like environmental factors are suspected by some to play a role, but genetics is definitely a factor. Paul (Stansifer) 13:27, 23 November 2010 (UTC)[reply]
As for the general medical consensus on nutritional supplements, it could be summarized as this:
"If a deficiency of a particular nutrient is present, then a supplement may be useful. However, mega-doses of most nutrients are not helpful, and may be harmful in certain cases".
StuRat (talk) 17:59, 23 November 2010 (UTC)[reply]

Chemicals with humorous formulas

Is it possible for Argon Selenide to exist, and if so, would its chemical formula be ArSe? Also, is it possible to manipulate Arsenic sulfide under laboratory conditions so that its ions have equal charges, thereby changing its symbol to AsS?--99.251.211.17 (talk) 23:24, 22 November 2010 (UTC)[reply]

Would Arsole do? We have a subjective list of names at List of chemical compounds with unusual names...though not formulas. Smallman12q (talk) 23:59, 22 November 2010 (UTC)[reply]
That page has been nominated for deletion five times (some Wikipedians do not like articles of a humorous tone). If one of the AFDs ever succeeds, then the article is likely gone forever. If you feel it is an appropriate article, you might wish to watchlist it. I do not feel that this is canvassing, since there is presently no AFD in process. Edison (talk) 14:49, 23 November 2010 (UTC)[reply]
See this for a pretty comprehensive list, with pictures, structure and discussion. Ariel. (talk) 00:10, 23 November 2010 (UTC)[reply]

November 23

Inserting pipes when drilling for oil etc.

When drilling through soft material like mud, clay etc., presumably some sort of pipe has to be inserted into a bore hole as the drilling progresses to prevent the hole from simply collapsing? As the hole becomes longer and longer, how is that achieved? I would have thought it would soon become impossible to ram in new sections of pipe from the top of the borehole due to increasing friction? 86.173.36.159 (talk) 01:03, 23 November 2010 (UTC)[reply]

You'd be surprised. One thing they do is use a type of wet "mud" to lubricate the pipes, but basically, they just keep feeding pipe section after pipe section into the hole. At the end of the last pipe is the drill bit, which leads the way, but as the drill bit clears out enough rock to put in another pipe length, you just add another one. Wikipedia's article on Boring could use some work, but it is really not a complex process. --Jayron32 01:43, 23 November 2010 (UTC)[reply]
Do you know whether it is pressure from above that forces the pipe down or the weight of the pipe above? Intuitively I feel that as the hole gets deeper the gaps in the pipe form somewhere near the top, as the weight of the pipe forces it downward as the hole deepens. If this is the case you are not pushing the pipe all the way to the bottom but just filling a gap. -- Q Chris (talk) 09:13, 23 November 2010 (UTC)[reply]
To expand on Jayron32's explanation, during the drilling of a soft section, the collapse of the hole is prevented by keeping the pressure of the drilling mud high (although not too high or you may hydraulically fracture the well bore). To prevent later problems it is necessary to line the unstable sections of the wellbore using a metal liner known as a casing, after they've been drilled. Once the liner has been set, the borehole is drilled on using a smaller bit, leading to a hole that gets progressively narrower with depth. This starts at the surface by drilling out (or digging out in the case of onshore drilling) the uppermost tophole section and it is into this that the blowout preventer is set once it has been lined. Mikenorton (talk) 11:00, 23 November 2010 (UTC)[reply]
So, please excuse me for adding a related question or two: when the drill bit needs replacing, does the whole pipe have to be drawn up, leaving the bored hole without any lining? With a long pipe, won't the friction become very high and make it difficult to turn the bit? 92.15.6.86 (talk) 13:18, 23 November 2010 (UTC)[reply]
Yes, a broken drill bit (or any other tool breakage in the hole) would be a "minor catastrophe" and could delay drilling operations anywhere from several hours to many days, depending on how long it takes to physically remove the drill string. If there are shards of metal in the bore, then a special grinding tool might need to be put down the hole to chomp up the metal fragments, increasing the costs and delays. And with a very long pipe, friction is very high - so powerful motors (topdrives) are used at the surface. In very "high-tech" drilling, a drillbit can be hydraulically actuated (meaning that instead of turning a drillstring, energy is conveyed by pumping pressurized fluid down the bore, and that pressure provides energy to a mechanical contraption that turns the bit). Alternatively, electric motors can be used down-hole. Neither of these solutions is standard practice. You might find the Schlumberger Oilfield Glossary a helpful resource - bottomhole assembly has diagrams of the bit and related mechanical parts, and a bunch of links to related terms. Nimur (talk) 14:47, 23 November 2010 (UTC)[reply]
At least for smaller water wells there exist drill bits that expand when turned in one direction and contract enough to fit inside the casing when turned in the other direction for a few turns. This works with an off-centre bearing and a part that swings in or out due to friction. This link shows another method [19]. --Gr8xoz (talk) 12:00, 24 November 2010 (UTC)[reply]
Is there anything equivalent to a stent used in drilling, a collapsed object which can be moved into place, then expanded into its final form using a balloon ? StuRat (talk) 17:54, 23 November 2010 (UTC)[reply]
Yes. Indeed, a component of the infamous MC252 lower marine riser package was a (hydraulically) inflatable "gasket" called an annular preventer. (Actually, there were at least two independently-actuated annular preventers, described in detail in the BP Accident Investigation Report). Both of these were intended to seal the annulus, not the bore. The trouble with annular preventers, like any high-pressure seal or valve, is that they must withstand the gauge pressure of any fluid across the zone they are trying to seal. In the case of the MC252 well, the fluid-pressures in the bore (i.e., gas and oil squeezing up the well annulus) were higher than the hydraulic pressure supply for the annulus - even if the hydraulic system had been functional, the annular preventers still could not have sealed against such high pressures. These "inflatable" seals were at the "top" of the well (at the LMRP), but similar apparatuses exist in the bore-hole near the cement shoe (almost at the bottom of the hole). A float collar can include a hydraulically inflated valve (though a mechanical check valve is usually used instead); it is more common to use controlled injection of drilling fluids with known densities (weights) to seal the bore at great depth - in other words, a fluid seal; or to insert a cement or polymer plug. Obviously, by definition, a blowout will blow out such a seal. Nimur (talk) 19:12, 23 November 2010 (UTC)[reply]

Altering the refractive index of air molecules — without different gases or high temperatures

I have a question about how one can cause air to become "blurry" —AT ROOM TEMPERATURE— by altering the refractive index of the air molecules in a small area.

I know that a similar effect can be engineered through either raising the temperature tremendously (in a small part of a much larger area), or by pumping a gas with a different refractive index into said area (which is usually a sign to leave IMMEDIATELY!)

I'm curious, however, if one can create a similar effect using a certain frequency of radio waves (or microwaves, etc.), or some other extant technology. To wit: to be able to change the refractive index of the air in a small area, so that it would appear "blurry" from farther away.

Does any such technology exist? Thank you for reading this! Pine (talk) 01:17, 23 November 2010 (UTC)[reply]

No, because it would be the equivalent of heating the air. Physchim62 (talk) 01:35, 23 November 2010 (UTC)[reply]
What you would have to do is change the density of air in a local area; the only option you may have, given the other two you have ruled out, is to generate a pressure wave of some sort; unfortunately, in order to be detectable, such a pressure wave would probably be large enough to rip you to shreds; it also could not be localized, so it would travel out causing death and destruction. The problem with all of this is that air is a gas, which means that any perturbations you make to density, locally, are VERY quickly dissipated. You just can't do what you are proposing. --Jayron32 01:39, 23 November 2010 (UTC)[reply]
Increasing the pressure in a local area will also inherently increase the temperature in the same local area. Googlemeister (talk) 16:20, 23 November 2010 (UTC)[reply]
Robert Boyle would like to have a word with you. DMacks (talk) 16:56, 23 November 2010 (UTC)[reply]
Just an off-the-cuff thought, but what about using two beams of microwaves of differing frequency which, intersecting at a localised spot, would yield a beat frequency (of, say, 2.45 GHz) that would have a strong dielectric heating effect on the moisture content of the air? 87.81.230.195 (talk) 12:52, 23 November 2010 (UTC)[reply]
Wouldn't the microwave heating cause the heated air to be at a higher temperature than the rest of the room? I suppose a technically correct but unhelpful response would be to have a room where the temp was way different from 70 F but have the "blurry air" be at exactly 70, by heating or cooling. If some air were compressed, then released through a slot in the viewing area, the pressure drop would cause the air to suddenly cool, perhaps with moisture condensing. This is done in some dew point measurement apparatus. Again, there is a temperature change, but the "delivery temperature" of the air could be the required 70 F. Edison (talk) 14:46, 23 November 2010 (UTC)[reply]
What you see when the apparent refractive index of air changes with temperature is an effect of changing air density. (Denser, cooler air contains more molecules per cubic centimeter and has a higher refractive index than less-dense, warmer air.) As suggested, in principle one could use focused infrared or microwave radiation of an appropriate frequency to locally heat a small volume of air from a distance; as it expanded its refractive index would fall, but it would fail your criterion of maintaining 'room temperature' conditions.
You can temporarily increase the density (and refractive index) of air by passing a shock wave through it. (See, for example, this YouTube video; between roughly 0:02 and 0:04 you can see an expanding ring of visible distortion around the exploding automobile.) In principle, I wonder if you might accomplish something similar (and somewhat less destructive) by setting up some kind of standing acoustic wave. It would be awfully loud, however — sound pressures comparable to atmospheric pressure are up around 195 decibels. TenOfAllTrades(talk) 15:06, 23 November 2010 (UTC)[reply]
Or more destructively, with a nuclear weapon. See [20] for some info. DMacks (talk) 15:32, 23 November 2010 (UTC)[reply]
Shock waves (especially nuclear ones) also cause temperature to rise. Googlemeister (talk) 16:21, 23 November 2010 (UTC)[reply]


"In principle, I wonder if you might accomplish something similar (and somewhat less destructive) by setting up some kind of standing acoustic wave. It would be awfully loud, however — sound pressures comparable to atmospheric pressure are up around 195 decibels."

Could that be done with NO SOUND, though, through active noise cancellation? Pine (talk) 18:50, 23 November 2010 (UTC)[reply]

It could be done at frequencies people can't hear, such as 5 Hz or 100,000 Hz, though a human might still be able to "feel" sound that they cannot hear at that level of intensity. Actually, 195 decibels might be harmful to people, even if they can't hear it. Googlemeister (talk) 20:22, 23 November 2010 (UTC)[reply]
Active noise cancelling would actually cancel the wave. As for the standing wave, I wonder if you could do it using a lot of smaller speakers all around the target producing small waves that add up in the centre. I saw a clip of an amazing wave pool in Japan once which had hundreds of computer-controlled baffles around the outside that could produce precise small waves at the edge, barely more than a ripple to begin with, but they'd line up perfectly just for a moment in the middle making various shapes, like a triangle, or a heart shape. For the "grand finale" all the baffles made a wave that "added up" in the centre and created a wave so violent that it sent a big ball of water shooting up into the air. I wonder if you could do something similar with speakers. Vespine (talk) 22:13, 23 November 2010 (UTC)[reply]
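To put rough numbers on the density effect described in this thread, here is a minimal sketch in Python. It assumes the Gladstone-Dale approximation (n − 1 proportional to gas density), the ideal-gas relation (density proportional to 1/T at constant pressure), and an illustrative value of n − 1 ≈ 2.7e-4 for room air at 1 atm; none of these figures come from the discussion above.

    # Sketch: refractive index of air vs. temperature at constant (1 atm) pressure.
    # Gladstone-Dale: (n - 1) is proportional to density; ideal gas: density ~ 1/T.
    N_MINUS_1_REF = 2.7e-4   # approximate n - 1 for air at ~293 K and 1 atm
    T_REF = 293.0            # reference temperature in kelvin

    def n_air(T_kelvin):
        """Approximate refractive index of air at temperature T_kelvin, 1 atm."""
        return 1.0 + N_MINUS_1_REF * (T_REF / T_kelvin)

    print(n_air(293.0))   # ~1.000270, room-temperature air
    print(n_air(330.0))   # ~1.000240, air over hot asphalt

The shift is only a few parts in 100,000, but integrated along a long, nearly grazing light path it is enough to produce visible shimmer and mirages, which is why "blurry air" is in practice a density (and hence temperature or composition) effect.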

equilibrium constants units

In the formation of ammonia from nitrogen and hydrogen:
      N2 + 3H2 <--> 2NH3
The units of Kc don't cancel. I read about the units of Kc on the net and learned that Kc can have units of the form (mol/L)^+n or (mol/L)^−n.
If that is so, why don't we write the units? And if it isn't wrong to leave the units out, why not?
Is there any other logic behind not writing units for the equilibrium constant, except that it is a ratio of the same kinds of quantities, i.e. concentrations? --Myownid420 (talk) 01:48, 23 November 2010 (UTC)[reply]
Assuming you mean Kc, which is the equilibrium constant (k is the rate constant). The equilibrium constant is ALWAYS a unitless value. When you calculate Kc or Kp, you are using stand-ins for the real value. The equilibrium constant depends on a value called activity, which, by a bit of circular logic, is the value which determines how fast a substance reacts in a chemical reaction (and thus, its effect on rate and equilibrium). Activity is unitless, so the equilibrium constant is also unitless. It is actually an unmeasurable value; however, for most measurements, the ratio of concentrations is close to equal to the ratio of activities, so we use concentration as a stand-in for it. Kc indicates that concentration is used as the stand-in for activity, and works for all solutions, and for all gases in a constant pressure environment. In a constant volume environment, we use Kp instead of Kc, which uses pressure values instead of concentrations (normalized to atmospheres). The article Equilibrium constant discusses most of this. In summary, the equilibrium constant is always unitless, even though you use "concentration" as an approximation to calculate it; thus if you actually do the dimensional analysis on your calculation, you will often get a "unit" for the equilibrium constant. Ignore that. The equilibrium constant is unitless, period. --Jayron32 04:55, 23 November 2010 (UTC)[reply]
The logic behind the fact that the equilibrium constant is unitless comes from the fact that it is related to the Gibbs free energy change for the reaction by ΔG = −RTlnK. You can't take a logarithm of a quantity that has units. Physchim62 (talk) 12:54, 23 November 2010 (UTC)[reply]
No, that's plainly not true. pH is the −log of the hydronium ion concentration in mol/L. Concentration has units of moles/liter, but pH is unitless. You cannot take the log of the unit and get a meaningful second unit, but you certainly can take logs of measurements with units and get useful numbers. --Jayron32 01:49, 24 November 2010 (UTC)[reply]
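For concreteness, the bookkeeping that makes the equilibrium constant dimensionless can be written out explicitly; this is a sketch of the standard convention, in which each concentration is divided by a standard-state value (usually c° = 1 mol/L) before entering the expression:

    K = \prod_i a_i^{\nu_i}, \qquad a_i \approx \frac{c_i}{c^\circ}, \qquad c^\circ = 1\ \mathrm{mol/L}

    \mathrm{N_2 + 3H_2 \rightleftharpoons 2NH_3}: \qquad K = \frac{(c_{\mathrm{NH_3}}/c^\circ)^2}{(c_{\mathrm{N_2}}/c^\circ)\,(c_{\mathrm{H_2}}/c^\circ)^3}, \qquad \Delta G^\circ = -RT\ln K

Every factor here is a pure number, so K is dimensionless by construction and ln K is well defined; computing the same ratio with raw concentrations is what produces the apparent (mol/L)^−2 unit asked about above.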

Human-caused / Life-caused

Non-scientist here, so forgive the obliviousness. I know the term used for human-caused (or man-caused) is "Anthropogenic". Two questions: 1) Is there a term used for something caused collectively by life in general? 2) What are some examples of this? I know early plant life had something to do with the early atmosphere changing to something we can breathe today -- could that be considered "life-caused" (or that might be too specific of an example, so it would be limited to "plant-caused" I guess)? If you can point me to a WP article, I would be appreciative. Rgrds. (dynamic IP, may change) --64.85.215.81 (talk) 07:27, 23 November 2010 (UTC)[reply]

Biogenic means "created by life". So, yeah: Biogenic oxygen, biogenic methane, etc. I don't know of an equivalent for "plant-caused". I was thinking floragenic, but that seems to be a company. Someguy1221 (talk) 07:32, 23 November 2010 (UTC)[reply]
I would probably use "phytogenic" for plant-caused. --Jayron32 07:36, 23 November 2010 (UTC)[reply]
But it's sort of a moot point, since the initial oxygenation of the atmosphere was probably caused by cyanobacteria billions of years before there were any plants. Looie496 (talk) 07:46, 23 November 2010 (UTC)[reply]
That would make it "bacteriogenic", which means "bacteria-caused".
Another word pertaining to things caused by life is "zoogenic", which means "animal-caused". Red Act (talk) 08:17, 23 November 2010 (UTC)[reply]

"Biogenic" appears to be what I was looking for and seems to point me in the direction I was hoping, thanks Someguy. ("Zoogenic" is too specific-- I was looking for an all-inclusive term.) Rgrds. --64.85.215.81 (talk) 08:45, 23 November 2010 (UTC)[reply]

Mass flow rate of Refrigerant

What relation can I use to solve mass flow rate problems when given temperature, pressure and specific volume? —Preceding unsigned comment added by 41.221.209.6 (talk) 10:08, 23 November 2010 (UTC)[reply]

What appropriate relations can I use to determine changes in potential and kinetic energies and enthalpy when given temperature, pressure and mass flow rate? —Preceding unsigned comment added by 41.221.209.6 (talk) 10:13, 23 November 2010 (UTC)[reply]

In the absence of an empirical equation of state, you can use the Ideal Gas Law or its more accurate form, the Van der Waals equation, to model gas volumes and pressures. Use the assumption that the flow is adiabatic, unless you know there is a pump, heat source, or heat sink. Conserve energy, calculate temperature using a specific heat value for the gas, and assume a phase change at the boiling-point temperature. You can apply the enthalpy of vaporization to conserve heat across the phase-transition. You can assume the fluid is incompressible when in liquid form, unless you have parameters for compressibility. Nimur (talk) 15:39, 23 November 2010 (UTC)[reply]
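As a minimal worked sketch of the first question (the refrigerant properties, duct size and flow speed below are made-up illustrative values, and an ideal-gas estimate is only rough for a refrigerant vapor near saturation): if the specific volume v is known, the mass flow rate through a cross-section A at bulk velocity V is simply A·V/v; if only temperature and pressure are given for the vapor phase, the ideal gas law yields an approximate v.

    # Sketch: mass flow rate from T, P and specific volume (illustrative values only).
    R_UNIVERSAL = 8.314   # J/(mol*K), universal gas constant
    MOLAR_MASS = 0.102    # kg/mol, roughly R-134a (an assumed refrigerant)

    def specific_volume_ideal(T_kelvin, P_pascal):
        """Approximate vapor specific volume (m^3/kg) from the ideal gas law."""
        R_specific = R_UNIVERSAL / MOLAR_MASS   # J/(kg*K)
        return R_specific * T_kelvin / P_pascal

    def mass_flow_rate(area_m2, velocity_m_s, v_m3_per_kg):
        """Mass flow rate (kg/s) = volumetric flow rate / specific volume."""
        return area_m2 * velocity_m_s / v_m3_per_kg

    v = specific_volume_ideal(300.0, 500e3)   # ~300 K vapor at 500 kPa: ~0.049 m^3/kg
    print(mass_flow_rate(1e-3, 5.0, v))       # 10 cm^2 duct at 5 m/s: ~0.1 kg/s

For the second question, the steady-flow energy balance per unit mass, Δh + Δ(V²)/2 + gΔz = q − w, is the usual starting point; the kinetic and potential terms follow from the velocities and elevations, while Δh is best read from refrigerant property tables, since ideal-gas enthalpies are unreliable near the saturation dome.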

The Doppler Effect

If one were using the Doppler Effect to determine the speed of an object, how would the angle at which the object is moving in relation to the observer affect the determination? Can the speed still be accurately calculated even if the moving object isn't moving directly away from or at the stationary observer?--160.36.38.212 (talk) 15:41, 23 November 2010 (UTC)[reply]

For the acoustic Doppler effect, only the radial speed matters, so you just apply a factor of cos θ and you're done. For the relativistic Doppler effect, that effect occurs but there is also the transverse Doppler effect (very small at, say, highway speeds). In either case, there's only one value you measure (frequency change), so you can't determine two values (speed and direction). But if you know one of those two, and measure the frequency change, you can get the other. --Tardis (talk) 16:23, 23 November 2010 (UTC)[reply]

What about in the instance of, let's say, determining the speed of a baseball with a radar gun. Since we are trying to determine the speed and can't be sure about the angle of observation, how accurately can the frequency change be used to determine the speed of the baseball?--160.36.38.212 (talk) 16:47, 23 November 2010 (UTC)[reply]

You'll never overestimate its speed (because the error is that if it's moving across your line of fire you'll get 0). The ratio of its true speed to your measurement is the secant of the angle its velocity makes with your line of fire. If you assume that that's, say, no more than 10°, then the true speed is no more than 1.5% above your measurement. But look at the graph of that function to see what happens if the direction isn't known at all. --Tardis (talk) 16:55, 23 November 2010 (UTC)[reply]

I see, not much to worry about as far as error is concerned. Thanks for your help.--160.36.38.212 (talk) 17:03, 23 November 2010 (UTC)[reply]

The "error" will always favour the driver, yes. Physchim62 (talk) 18:33, 23 November 2010 (UTC)[reply]
When you say it will always "favour the driver" I assume you are thinking of police speed enforcement and not an attempt at setting a speed record or similar. This is true as long as you are measuring the speed of a point-like object; if you let the radar gun sweep along a stationary wall you will get the speed at which the point of intersection of the wall and the radar beam moves away from you. You can get similar errors if you don't hold the radar gun steady on a vehicle. --Gr8xoz (talk) 11:29, 24 November 2010 (UTC)[reply]
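A quick numeric check of the cosine/secant point made in this thread (a sketch; the speeds and angles are arbitrary examples): the gun reads only the radial component, reading = true speed × cos θ, so the true speed is the reading times sec θ and the reading can never be too high.

    import math

    # Sketch: radar-gun cosine error for a target moving at angle_deg off the line of sight.
    def radar_reading(true_speed, angle_deg):
        """Radial (measured) component of the target's speed."""
        return true_speed * math.cos(math.radians(angle_deg))

    for angle_deg in (0, 10, 30, 60, 89):
        print(angle_deg, round(radar_reading(100.0, angle_deg), 1))
        # 0 -> 100.0, 10 -> 98.5, 30 -> 86.6, 60 -> 50.0, 89 -> 1.7

At 10° the reading is low by about 1.5%, matching the figure quoted above; as the angle approaches 90° the reading falls toward zero, which is the secant blow-up Tardis pointed to.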

Does the sun drop faster on the equator?

As headline. —Preceding unsigned comment added by 84.12.125.33 (talk) 16:02, 23 November 2010 (UTC)[reply]

Yes. On visiting Texas from the UK I just couldn't get used to not having a twilight: it's light one minute and dark 10 minutes later. The movement of the sun across the sky is also consistent; they can say "15 minutes for each finger-width (at arm's length) the sun is above the horizon". In the UK it varies with the time of year and could be hours at midsummer!
The sun moves at that angular velocity all the time and everywhere. But in what direction? If it's moving "horizontally", it's not dropping as fast even though it's moving as fast. The sun typically moves more vertically near the equator and more horizontally near the poles, so there's the effect you're thinking of. However, it varies over the year and day even at one location; clearly at noon the sun is neither rising nor dropping. At the solstices, the sun crosses the horizon closest to one of its stationary points in altitude, so twilight is exaggerated there. During the midnight sun, of course, the sun never (fully) drops even over many days! --Tardis (talk) 16:20, 23 November 2010 (UTC)[reply]
"The sun moves at that angular velocity all the time and everywhere" -- Are you sure? I'm not convinced that the (apparent) angular velocity of the sun is exactly the same at all times everywhere on earth. How big the variation is is another matter... 86.174.40.11 (talk) 21:24, 23 November 2010 (UTC).[reply]
Well, it does 360 degrees in 24 hr; doesn't matter where on Earth you stand, right? Vespine (talk) 21:58, 23 November 2010 (UTC)[reply]
No, if you’re standing on one of the poles, the sun will have almost no motion over the course of a day. The "15 minutes for each finger-width" rule of thumb (rule of finger?) fails miserably at or near the poles. Red Act (talk) 22:16, 23 November 2010 (UTC)[reply]
No, it really does go through 360 degrees in 24 hours no matter where on Earth you are. Its apparent angular velocity is the same everywhere on Earth. The angle subtended by your outstretched finger is also the same, even at the poles, and so it takes the sun the same amount of time to "travel" that (angular) distance there as the equator. WikiDao(talk) 22:40, 23 November 2010 (UTC)[reply]
You are obviously wrong; the sun may be close to the horizon all day, but it moves around the horizon. (Obviously we are talking about apparent motion on the sky; the sun has a real motion relative to the galaxy of 0.13 AU/day, or relative to the cosmic microwave background 0.21 AU/day.) --Gr8xoz (talk) 22:36, 23 November 2010 (UTC)[reply]
Whether the sun apparently moving approximately 360 degrees per day is accurate depends on how you're measuring the angle. If you're measuring the angle around the Earth's polar axis, then yes, 360 degrees is about right. But if you're measuring the angle along the sun's apparent path, then the rate can be much less than 360 degrees per day. Red Act (talk) 22:41, 23 November 2010 (UTC)[reply]
Hm, you're right -- on a Uranus-like planet with exactly 90° tilt, you need not have the sun move at all (though of course it'll only be truly overhead for an instant per year, but that's practically irrelevant since years are long). But the slowest it can get on Earth (i.e., at the poles on a solstice) is about 92% of its maximum 360°/d speed, so that's a relatively minor effect compared to the 0–100% variation caused by the apparent direction. --Tardis (talk) 22:48, 23 November 2010 (UTC)[reply]
Yeah, you're right, the 23 degrees at the Tropic of Cancer isn't all that big of an angle. I was picturing it as being bigger in my head when thinking about it. Red Act (talk) 23:05, 23 November 2010 (UTC)[reply]
Whoops, yes, my statement that the sun has "almost no" motion over the course of a day at the poles was an overstatement, because the Earth's axis isn't tilted enough. But the apparent motion is less than 360/24 degrees per hour, measuring along the sun’s apparent path. Red Act (talk) 22:54, 23 November 2010 (UTC)[reply]
(e/cx2) Yes, the explanation of the original question has to do with "apparent" path. It is true that the sun "sets" faster at the equator than at higher latitudes.
At a pole, in summertime, the sun at most just dips below the horizon, but travels all around the horizon (360°) over the course of the day. When it does slip below the horizon, it slides along it as it slowly sinks before slowly rising again.
At the equator, when the sun sets, it does so directly downward, ie. with apparent motion perpendicular to the line of the horizon.
At a latitude of say 45°, the sun is apparently moving at the same angular rate, but some of that motion is a bit sideways, along the horizon. The "vertical" component of its motion is therefore less than at the equator, and more than at the poles, so the time it takes to set at that latitude is slower than at the equator and faster than at the poles. WikiDao(talk) 22:54, 23 November 2010 (UTC)[reply]
  • In case it is unclear, when I originally doubted the exact accuracy of "The sun moves at that angular velocity all the time and everywhere", I was talking about the apparent angular velocity along the sun's path, as viewed from a fixed point on Earth. To put it another way, you fix your frame of reference to the horizon, then look at the speed the sun is moving through the sky relative to that, in whatever direction it happens to be moving. I am not convinced that that speed is always the same. 86.174.40.11 (talk) 01:50, 24 November 2010 (UTC).[reply]
Angular velocity is the same for any observer at any location on the surface of Earth. This is because the Earth rotates at a constant rate.
But angular velocity is a vector, so it is useful to consider that velocity as having "vertical" (define as perpendicular to the horizon) and "horizontal" (parallel to the horizon) components. At the equator, there is relatively small (and sometimes no) "horizontal" component, so the sun approaches the horizon and disappears beneath it very rapidly compared to at higher latitudes, where there is an increasingly large "horizontal" component and therefore a correspondingly decreasing "vertical" component. That is, the vertical and horizontal component velocity vectors do change according to latitude, while the angular velocity vector itself remains constant. WikiDao(talk) 03:01, 24 November 2010 (UTC)[reply]
It's not exactly the same angular velocity at all locations and times, because the Earth's axis isn't exactly perpendicular to its orbital plane. Tardis' calculation above looks right to me. The magnitude of the angular velocity at a pole on a solstice is about 8.2% less than it is anywhere on earth on an equinox. But the direction of the sun's path relative to the horizon is a much bigger effect. Red Act (talk) 03:48, 24 November 2010 (UTC)[reply]
Your latitude doesn't matter (except in an extremely small way due to parallax effects), only the Sun's current declination, which is determined by the time of year. The Sun's angular velocity is 8.2% slower (if the calculation is correct, and I think it is) at the solstices than at the equinoxes anywhere on the Earth. Remember that the whole sphere of the sky rotates as a unit (in your frame of reference), one rotation per sidereal day, and everyone sees it the same way; only the Sun's position on that sphere gradually changes with the seasons (progressing along the ecliptic, which is angled at 23.5° to the celestial equator, corresponding to the inclination of the Earth's axis).
Incidentally, another source of minor variation is the fact that the Earth's orbit is not circular and that consequently the Earth's orbital speed is not constant. This causes the Sun's motion to drift alternately ahead of clock time and behind it; see equation of time. But the variation in angular velocity for this reason is only on the order of 0.1%, I would estimate. --Anonymous, 11:03 UTC, November 24, 2010.
I probably should also have made it clear that I am not claiming that a variation in apparent angular velocity is the reason why the length of twilight varies across the earth. I understand the stuff about angle of setting relative to the horizon just fine. My original thinking was that at the equinoxes the polar sun travels right around the horizon in 24 hours, whereas at the summer solstice it travels right around the sky in the same time, but at a higher elevation, so it actually travels a shorter "distance". Therefore the "speed" cannot be the same. This theory seems to be borne out by the calculations, and the 8.2% figure, given above. 86.174.162.130 (talk) 12:33, 24 November 2010 (UTC)[reply]
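A numeric check of the 8.2% figure (a sketch assuming an axial tilt of 23.44°; the exact percentage depends on the tilt used): at a pole on the solstice the sun travels a small circle of the sky at an altitude equal to the tilt, so its angular speed along its own path is cos(tilt) times the 15°/hour equinox rate.

    import math

    # Sketch: the sun's angular speed along its apparent path, solstice pole vs. equinox.
    TILT_DEG = 23.44                 # Earth's axial tilt, approximately
    equinox_rate = 360.0 / 24.0      # 15 degrees per hour along the sun's path

    polar_solstice_rate = equinox_rate * math.cos(math.radians(TILT_DEG))
    print(round(polar_solstice_rate, 2))                             # ~13.76 deg/h
    print(round(100 * (1 - polar_solstice_rate / equinox_rate), 1))  # ~8.3 percent slower

This lands within rounding of the 8.2% quoted above, and it is indeed small next to the latitude-dependent change in the direction of the sun's path relative to the horizon, which dominates the length of twilight.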

Perpetual stew

I have a batch of perpetual stew going (and have for a couple weeks now). My Q is whether there is any concern that some of the foods, by being cooked far more than is typical, might lose their nutritional value or morph into something harmful. I've noticed that everything turns brown after some period of time; is this a sign of chemical changes which may be harmful ?

So far, I have added the following:

Tomato soup
Ham bone
Carrots
Corn (kernels)
Corn (baby cobs)
Green beans
White lima beans
Beets
Broccoli
Tender cactus
Sweet potato
Potato
Cilantro
Hot peppers
White onion

Next, I intend to toss all the leftover turkey from Thanksgiving in there. Is there any reason for concern ?

And, for the inevitable bunch who always claim that any Q is a medical Q and can't be answered, this is a food safety Q, and you don't need to have a medical degree to legally offer advice on food safety, so it's not a problem. StuRat (talk) 17:45, 23 November 2010 (UTC)[reply]

Well, you'll obviously get some vitamin loss, but that's not terribly harmful. As long as you heat it frequently enough to prevent bacterial growth, I can't see anything that would cause problems. Looie496 (talk) 18:27, 23 November 2010 (UTC)[reply]
In my household, such a pot of stew, fitted with a padlock, would not survive until the next day, let alone perpetually. Be that as it may...
I think that these romantic cooking traditions survive partly because they serve as a focus for romantic memes and partly because long periods of slow cooking can produce a stew, pottage, gruel etc., with lots of umami. However, with the exception of saturated fats, all other fatty acids degrade with prolonged exposure to heat. In other words: it encourages the formation of trans fats. The essential fatty acids get destroyed. Vegetables and grains also contain these fatty acids, so a purely vegetarian stew is not going to avoid this degradation. Obviously, prolonged heating will also destroy all the heat-labile vitamins. Prolonged containment in a non-stick pot may also impart unwanted levels of perfluorooctanoic acid. On the plus side, this type of food preparation was popular in England during the period when the average life span vacillated between 25 and 45 years, and thus may have helped keep these emerald isles from suffering over-population. Other than that: bon appétit and enjoy! --Aspro (talk) 19:24, 23 November 2010 (UTC)[reply]
I suspect that the primary cause of the short life span was poor sanitation, and particularly the habit of dumping raw sewage directly into their water supply.
Trans fats are bad all right. But there are some veggies and tubers with virtually no fats, so they should be OK. I may have to rethink the turkey leftovers, though. What's the time frame for this change ? Would a day do it ? A week ? A month ? Let's assume it's at boiling temp half the time, and cooling off at night. StuRat (talk) 19:53, 23 November 2010 (UTC)[reply]

I should also mention that it's an aluminum pot, but not a non-stick surface. Any worries about ingesting too much aluminum ? I heard that aluminum causes senility, but can't recall any details. :-) StuRat (talk) 19:53, 23 November 2010 (UTC)[reply]

I forget where I read it, but I thought elevated levels of aluminum were found in Alzheimer's patients. Googlemeister (talk) 20:14, 23 November 2010 (UTC)[reply]
Good question. With frying, the higher temperatures mean that the fats are degraded while the food is cooked. Simmering - as in a stew - is certainly OK for a single cooking or reheating leftovers. I have never come across any convincing reason why ordinary aluminium oxide from utensils presents a problem (but there is growing concern that other aluminium compounds cause immune problems in laboratory rats). The connection between aluminium and Alzheimer's disease is still debatable, but has never worried me, because ordinary aluminium oxide is so inert and the science has not yet come up with anything convincing. The secret, I think, is to always use fresh unprocessed food. Even those tasty German sausages are processed food, a technology which dates from before the time we had refrigeration. --Aspro (talk) 20:24, 23 November 2010 (UTC)[reply]
The Alzheimer's disease#Prevention section, at least, lists more studies that conclude that there is no connection between aluminum and Alzheimer's than studies that suggest that there might be. Red Act (talk) 22:02, 23 November 2010 (UTC)[reply]
I understand that acrylamide is produced when browning food (although apparently not from oxidative browning such as when you cut an apple) and is a Substance of very high concern. Before it was discovered in food it was thought likely to be a carcinogen: I'm personally sceptical of the reports that all of a sudden say it isn't really. Heterocyclic amines are produced when cooking meat at high temperatures, as are Polycyclic aromatic hydrocarbons - I don't know if stewing for days on end would have similar results. 92.15.13.42 (talk) 20:40, 23 November 2010 (UTC)[reply]

Some good points were made about high temperature changes, but do any of those apply at boiling temp ? Assuming I never let it run dry, I see no reason the temp would ever exceed that inside the pot. StuRat (talk) 21:38, 23 November 2010 (UTC)[reply]

You say "I've noticed that everything turns brown after some period of time" and I pointed out that browning is associated with acrylamide production. I've also noticed: "Let's assume it's at boiling temp half the time, and cooling off at night". A heating/cooling cycle would be the sort of thing that encourages some bacteria to grow. 92.15.13.42 (talk) 21:56, 23 November 2010 (UTC)[reply]
The browning in his stew has nothing to do with acrylamide - that comes from high heat + gases (frying)... assuming he's simmering, as stew should be done, and it's under/in fluid. Apples slow-cooked will brown that way.
⋙–Berean–Hunter—► ((⊕)) 22:35, 23 November 2010 (UTC)[reply]
Gases? What gases? What do you mean please? 92.15.6.122 (talk) 23:29, 24 November 2010 (UTC)[reply]
Not just frying, but baking, barbecuing and even microwaving too. 92.15.15.224 (talk) 12:38, 24 November 2010 (UTC)[reply]
Not all forms of barbecuing have this result. The barbecue that I had in my slow cooker before this thread began didn't have any of this acrylamide production, as it was produced with low heat. As for gases, it is evident that the process, like most other processes, also relies on the gases in air or the foodstuff itself combined with high-temperature heating. Boiling doesn't do it, and there is an apparent reduction of acrylamide when foods are cooked under a vacuum, which seems to substantiate that gaseous compounds play a part.
⋙–Berean–Hunter—► ((⊕)) 13:50, 26 November 2010 (UTC)[reply]
You must be unique in calling cooking in a slow-cooker "barbecuing". That is not the common meaning of barbecue. 92.24.178.149 (talk) 22:00, 26 November 2010 (UTC)[reply]
I hear they slow cook Barbecue in Texas. Will this thread ever end?! WikiDao(talk) 22:26, 26 November 2010 (UTC)[reply]
As for bacteria, yes, I would expect some to grow at night, but for them to be killed by the boiling the next day. And dead bacteria are edible in small quantities. If there was ever any visible growth on the surface, I'd toss the whole batch out. StuRat (talk) 22:58, 23 November 2010 (UTC)[reply]
Dead bacteria I wouldn't worry about. The toxins (some of which may be carcinogenic) that those bacteria and particularly fungi (if any) produced before they died I would worry more about. Also any endospores or spores that may have been produced. (There's a reason people aren't encouraged to simply cook spoilt food well, and usually by the time you actually start to see something it's getting too late.) These problems would seem particularly concerning if you are growing a fresh batch every day at temperatures that encourage rapid replication. Nil Einne (talk) 05:32, 24 November 2010 (UTC)[reply]
One of the fundamental things about food hygiene I was taught long ago on this side of the pond was that you must not re-heat food more than once. There is some reason to do with bacteria going into encysted form; I forget the details. The advice about never reheating food more than once is given many times on this website: http://www.eatwell.gov.uk/keepingfoodsafe/germwatch/ Other government advice says: "If food is allowed to cool down over a long period of time any food poisoning bacteria or bacterial spores that survive the cooking process (bacterial spores are heat resistant so they will be present) will be able to grow and multiply to a level that may cause illness." So if you are doing this repeatedly, the bacteria will build and build, for example Bacillus cereus and others mentioned in the Foodborne illness article and here http://www.eatwell.gov.uk/healthissues/foodpoisoning/abugslife/ . As mentioned elsewhere, the toxins survive cooking. 92.15.15.224 (talk) 12:38, 24 November 2010 (UTC)[reply]
The "reheating" they mention seems to be just warming it to a comfortable eating temperature, and I agree that this is a bad idea; just reheat what you intend to eat. However, bringing it to a boil for a long period will kill off all the bacteria, and start the clock over again, so I don't see the issue there. I also don't quite follow the bit about bacteria spores surviving boiling and regrowing when temps are right. After all, isn't the entire basis of canning that food can be 100% sterilized by boiling ? If not, cans of food would all go bad after a couple weeks sealed in the can. StuRat (talk) 18:19, 24 November 2010 (UTC)[reply]
You've forgotten about the toxins - they make you ill. Boiling something in a saucepan is very different to sterilising it in a can under high pressure. I believe the cans are stored for a while and the improperly sterilised ones explode. The germ spores survive the cooking, as do the toxins, and the spores become active again afterwards. It's the repeated heating and cooling that is dangerous. 92.15.6.122 (talk) 22:40, 24 November 2010 (UTC)[reply]
AFAIK, most authorities recommend you reheat food to above comfortable eating temperature (too hot to eat) and then allow the food to cool down somewhat. If currently you only reheat food to eating temperature, I guess you have other problems with safe food practices. Also, as 92 has said and as our article notes, most sterilisation for canning involves high pressure and temperatures significantly above 100 degrees C. I'm pretty sure commercial canning, at least, uses properly sterilised equipment and fresh food (even if the canning occurs at 100 degrees C), whereas in your case it sounds like you're going to be growing more and more each day, so once some bacteria with heat-resistant spores do get in, there's a fair chance they're going to stay there. Nil Einne (talk) 11:16, 25 November 2010 (UTC)[reply]
Maybe you should get a slow cooker. 92.15.13.42 (talk) 21:51, 23 November 2010 (UTC)[reply]
What does that do for me that a pot on the stove doesn't ? StuRat (talk) 22:56, 23 November 2010 (UTC)[reply]
It allows more careful regulation of temperature in the stew. Stoves heat strictly from the bottom, but a good slow cooker tends to heat from all sides. Also, slow cookers tend to maintain a more constant temperature over time without much maintenance. On a stove top, you often have to cycle the heat up and down to keep the temperature at an ideal point. The slow cooker has an internal thermostat which cycles the heat for you, often more efficiently than you can do it. --Jayron32 01:45, 24 November 2010 (UTC)[reply]
A non-pressurized pot also seems to be self-regulating, as far as temperature goes. The temp increases from the burner, until boiling occurs. At that point the boiling action cools the stew such that it remains at that temp until all the water boils off. With the size of the pot, that would take days, if loosely covered. That gives me plenty of time to add more water. The boiling also evenly distributes heat throughout the pot. StuRat (talk) 03:05, 24 November 2010 (UTC)[reply]
Sort of. Often, the optimum temperature for cooking is just below the boiling temperature, called a bare simmer by cookbooks and the like. There are several problems with cooking at the boiling temperature. First of all, it can actually toughen meat rather than tenderize it; the gelatin-forming reactions that occur at slightly lower temperatures still occur, but at the boiling point the fibers in the meat itself can undergo a sort of crosslinking, creating tough meat. Optimum meat-cooking occurs at temperatures some level below the boiling point, say 180-200 degrees or so. Also, rapid boiling can agitate the food, causing vegetables to break up in ways that may be undesirable in the final product. Slow cookers are optimized to cook at this bare-simmer temperature, say 200 degrees, and maintain it pretty much indefinitely. With stovetop cooking, you are basically stuck at the boiling point itself. Yes, you can maintain lower temperatures, but it requires more watching. --Jayron32 03:16, 24 November 2010 (UTC)[reply]
...and the cooking results are just different. You can't get that pork roast as tender on the stovetop as you can in a slow cooker. You're much less likely to burn your food and there is no open element like a stovetop so you can leave your home or sleep (or edit on the Wiki :)) and not worry about it. I am able to get a pretty good rendition of pit-cooked barbecue with 12-14 hours of slow-roasting that I would never dream of doing on a stovetop...and heating your oven for that long would cost more. The cooker puts off less heat, too.
⋙–Berean–Hunter—► ((⊕)) 02:32, 24 November 2010 (UTC)[reply]
Well, I only make perpetual stew in the winter, when the extra heat and humidity are welcome. I'm not worried about leaving burners on, as there are no kids or pets in the house. Since my burners are gas, which is 1/3rd the cost of electricity, this should cost less than a slow cooker. There is some concern that leaving the gas on for hours may deplete oxygen levels and increase levels of combustion products (other than water vapor) and unburned gas. However, it's on a very low level on one burner only, and not in a part of the house where I hang out, so the risk is minimal. StuRat (talk) 03:02, 24 November 2010 (UTC)[reply]
Given your observation of "extra heat and humidity", it's obvious that your cooking method (pot on burner) is not very efficient. You seem to welcome that inefficiency, but it is an indicator that other methods (like a slow cooker with lid) could result in much more efficient heating if the power sources were equivalent. Whether the difference in the price of gas (vs electric) and the "welcome" nature of the heat loss into the room are offset by the waste and risk involved in heating a pot with an open flame is complex and at least partly subjective. Heating your home with a stove is notoriously inefficient. -- Scray (talk) 04:16, 24 November 2010 (UTC)[reply]
Why would heating a home with a stove be inefficient ? It seems to me that 100% of the heat goes into the room, versus with a central heater, where much of it goes up the chimney or is lost heating the ducts and walls. I certainly wouldn't choose to heat a house that way, but not due to inefficiency, but because that much burning gas would indeed deplete the oxygen and create dangerous levels of combustion products inside the house. Then there's the issue of distributing the heat from the kitchen to the rest of the house. So, in winter, the added heat and humidity is that much less which is required from the furnace and humidifier, so it isn't lost at all, but is 100% reused. That's highly efficient. In summer, that's another matter, requiring extra A/C, so I don't do that. StuRat (talk) 04:29, 24 November 2010 (UTC)[reply]
Because you're heating your home unevenly. Unless your house has very good insulation and airflow, your kitchen will be warmer than the rest of the house even though you probably spend little time there. And since your thermostat is probably not in the kitchen, this extra heat is not actually saving you much on your other heating bill.
This is typically what people mean when they claim that electrical area-heaters are "inefficient". They don't literally mean that energy is disappearing into the aether.
Of course, I have no idea if it works out like that in your home; I've never been. APL (talk) 08:23, 24 November 2010 (UTC)[reply]
The very advantage of electrical space heaters is that they do heat unevenly, so you can heat the room you're in and let the rest get colder, thus only heating a small portion of the house. If my central heat went out and I needed to use the oven for heat, then I would spend my time in the kitchen, until the furnace was fixed.
Something else I've noticed is that, while heat doesn't flow very quickly from one room to another, humidity does. It would be nice to be able to quantify this observation in some way, so if anyone knows of one, I'd be interested. StuRat (talk) 18:27, 24 November 2010 (UTC)[reply]
If you go back 50 years then in many working-class UK homes the kitchen was the centre of the home, at least in winter, for this very reason. The range used to cook meant it was warm and poorer families would not be able to heat other rooms in the house. -- Q Chris (talk) 09:22, 24 November 2010 (UTC)[reply]
Also, many homes have an air vent that lets air exit the home from the kitchen and also from the bathroom, in order not to spread smells through the home. In that case you may be heating the air just before it is vented out. I assume you are not using an Extractor hood that vents out the warm air. --Gr8xoz (talk) 11:06, 24 November 2010 (UTC)[reply]
There was a kitchen vent, but it's blocked off now for insulation reasons. StuRat (talk) 18:14, 24 November 2010 (UTC)[reply]
The other reason not to keep your stew for ever is that it won't be so nice to eat after a while. Suggestion: do your stew again, eat it over two days, maximum three. Then do another batch, varying the ingredients a bit. Soupe au pistou is a nice way to use a ham bone, tomatoes and white beans; online recipes are a better guide than our article. Itsmejudith (talk) 13:15, 24 November 2010 (UTC)[reply]
That approach requires washing the pot frequently, and also necessarily means that some of the detergent residue will be ingested. StuRat (talk) 18:31, 24 November 2010 (UTC)[reply]
Rinse the pot out afterwards with clean water and dry thoroughly. It is this very rapid dehydration of any remaining bacteria which kills them off. --Aspro (talk) 18:55, 24 November 2010 (UTC)[reply]
"Perpetual Stews" are a time honored technique. Some stews are maintained for decades and remain allegedly delicious the whole time. APL (talk) 16:21, 24 November 2010 (UTC)[reply]
I think it would be a very good idea, StuRat, for you to first get to understand a bit of basic food safety. Not all toxins are destroyed by 'boiling', such as those produced by Bacillus cereus, which has already been mentioned. Canning uses temperatures higher than 100 deg C, with an adequate dwell time, because not all bugs and spores are destroyed at the normal atmospheric boiling point. Foods should be cooled quickly, and the larger the pot the longer it will take to cool. I have often read of mass fatalities at Indian wedding parties because this aspect of bigger-than-normal pots has been overlooked. Sounds like you have been lucky up to now. This site has some tips.[21] Aspro (talk) 18:55, 24 November 2010 (UTC)[reply]
If, as many indirect references here suggest, perpetual stews are a time-honored tradition, then Aspro's comment reminds me of the frequent clash between empiricism and extrapolation. Perhaps major illnesses resulting from perpetual stews have been overlooked - this is worth investigating (and I would do so before indulging in this practice); however, I suspect something more interesting is at work - extrapolation from other situations is being applied to a specific, well-tested practice, with the extrapolation predicting major problems that actual experience does not support. I think there's much to be learned from these situations, but I've also learned to trust empirical evidence more than extrapolation (with a pinch of caution, which StuRat has already expressed). -- Scray (talk) 20:18, 24 November 2010 (UTC)[reply]
There is no clash as I see it. The wife of the house, who learnt the tradition from her mother, who in turn learnt it from her mother, etc., was only able to do so because they had learnt a safe technique, enabling them to cheat an early death and so pass the method on to their daughters. Rediscovering a successful way to recreate this gastronomic medley in a cauldron, using trail and error methods with a pinch of ignorance here-and-there, will - I fear - ensure that such a stew will kill off its paddle stirrer and its chances of ever becoming a proper perpetual stew worthy of such a title. Do you see my point? --Aspro (talk) 20:58, 24 November 2010 (UTC)[reply]
LOL @ "trail and error" ... go down one trail until you hit a dead end, then try another ? I don't think it can be quite so dangerous as you suppose, or people would have learned to avoid the practice centuries ago. For example, tomatoes apparently leached out lead from the leaded plates and bowls of the day, and thus poisoned people. They had no idea that this was the problem, but still thought of tomatoes as poisonous and avoided them. Something similar would happen if perpetual stews regularly killed people who got the technique wrong. Perhaps they would develop a theory that slow-moving invisible demons invade the stew if it's left uneaten for too long, which isn't really all that far off. :-) StuRat (talk) 21:45, 24 November 2010 (UTC)[reply]
A bigger point is that I don't see any evidence perpetual stews were cooked the way StuRat cooks them. My guess is they were kept heated for the whole day (not necessarily boiling, but usually enough to reduce problems) rather than StuRat's attempts to grow microbes with heat-resistant spores. Of course humans have managed to learn to deal with a number of potentially dangerous foodstuffs like Cashew nuts and tapioca. Also, the consumption of carcinogenic but not acutely dangerous toxins wasn't generally a big deal. Nil Einne (talk) 11:16, 25 November 2010 (UTC)[reply]
Then how would they have cooked them, if not bringing them to a boil during the day and letting them cool off at night, exactly as I do ? Would they stay up all night to stoke wood under the fire ? StuRat (talk) 17:16, 25 November 2010 (UTC)[reply]
In old times there would be a coal-fired kitchen range, a large iron thing that would stay hot overnight, with the coals probably still smouldering and its big thermal mass taking a long time to cool. In the UK they were normally built into the wall, adding more thermal mass. More importantly, people often died of infectious diseases etc, so some food poisoning wouldn't have been noticed. Goodness knows why you want to play Russian roulette with your food - I'm detecting that North Americans think any food is wholesome (hence the eating of pop tarts without shame or self-loathing) and cannot get their heads round the idea that some foods can be bad for you. 92.28.251.194 (talk) 17:53, 25 November 2010 (UTC)[reply]
I don't think coal-fired stoves were common, say 1000 years ago, when wood would have been the norm for cooking. So, how did they keep their perpetual stew hot then ? StuRat (talk) 05:06, 26 November 2010 (UTC)[reply]
In 1010AD, they probably didn't have your "perpetual stews", little or no idea of hygiene or medicine, and people usually died young for unidentified and unspecified reasons. 92.28.241.63 (talk) 12:15, 26 November 2010 (UTC)[reply]
To get temps higher than 100°C in something that's mostly water, you would need to pressurize it. Yet I don't believe this is normally used in home canning, so how do all the bacteria get killed there ? StuRat (talk) 20:17, 24 November 2010 (UTC)[reply]
Canning often involves hypertonic solutions (e.g. salt and sugar). In addition to providing a higher boiling temperature, such solutions are relatively inhospitable to many food pathogens. -- Scray (talk) 20:22, 24 November 2010 (UTC)[reply]
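For a rough sense of scale of that boiling-point effect, here is a minimal sketch of the standard colligative estimate ΔTb = i·Kb·m; the brine concentration and van 't Hoff factor below are illustrative assumptions, not canning specifications:
<syntaxhighlight lang="python">
# Boiling-point elevation estimate: dTb = i * Kb * m (colligative approximation)
KB_WATER = 0.512  # ebullioscopic constant of water, degC*kg/mol

def boiling_point_elevation(molality, vant_hoff_i):
    """Rise of the boiling point (degC) for a solute dissolved in water."""
    return vant_hoff_i * KB_WATER * molality

# Near-saturated table-salt brine: ~6.1 mol NaCl per kg of water, i ~ 2
print(boiling_point_elevation(6.1, 2))  # ~6.2 degC above 100 degC
</syntaxhighlight>
So even a heavy brine only buys a handful of degrees; it is the inhospitable chemistry, more than the temperature, that does most of the work.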
Home canning has been the cause of many multiple deaths (the whole family can be affected). The article gives some pointers on how to do it safely. Even something as traditional as the sausage has been subjected to ancient laws stipulating how they should be made in order to avoid botulism. The name of this bug itself comes from the Latin for sausage, with which it was so often associated. --Aspro (talk) 21:11, 24 November 2010 (UTC)[reply]
But, of course, there are many deaths from each stage in food production: preparation, distribution, and serving; both at home and on an industrial scale, so the question is whether deaths from home canning are a higher proportion than from those other sources. Myself, I seem to get food poisoning about 1/10th the time I eat out, so that's a rather low target to try to beat. I don't get sick nearly so often from home cooking, and I attribute this to one factor: If I see food that looks suspicious, I won't eat it or serve it to others I know, but the restaurant workers don't know or care about the people they serve, so are likely to just remove the visible mold from the top of the food and serve it anyway. The chances that they personally will be punished for this are low enough that it's not a concern.
I have a specific example from a Wendy's salad bar (do any of them have those any more ?). They apparently were just "refreshing" the salad bar periodically, meaning adding fresh food to the top. I saw no evidence that they ever dumped out the old food. The cucumbers were fresh on top and gradually got worse going down, to where they were all decomposed into something like snot at the bottom of the container. I complained to the management, and they pointed out that there are holes in the bottom of each container where the decomposed cucumber juice can drain out, so they weren't concerned about it. StuRat (talk) 21:32, 24 November 2010 (UTC)[reply]
However do they get any customers? 92.15.6.122 (talk) 22:49, 24 November 2010 (UTC)[reply]
Well, I didn't go back after that, and I suspect the same is true of others, as most (or maybe all ?) Wendy's seem to have discontinued their salad bars. Perhaps they do a better job at what they serve now. StuRat (talk) 01:26, 25 November 2010 (UTC)[reply]
The Botulism article gives some relevant details. 92.15.6.122 (talk) 22:49, 24 November 2010 (UTC)[reply]
The first line in that says that anaerobic conditions are required, but this pot is opened up daily and stirred, while boiling, so plenty of oxygen should be available. StuRat (talk) 17:21, 25 November 2010 (UTC)[reply]
That was just an example. There are many germs that can make you ill or kill you, not just botulism. This List of foodborne illness outbreaks in the United States says there were 76 million illnesses from food in a year - I'm beginning to see why. 92.28.251.194 (talk) 17:53, 25 November 2010 (UTC)[reply]
But why refer me to an article which clearly isn't relevant ? And that last list also doesn't seem relevant, unless there are perpetual stew cases listed in there. The closest thing I saw was the case where botulism in peppers served at the Trini and Carmen restaurant in Pontiac, Michigan caused the largest outbreak of botulism poisonings in the United States. The peppers were canned at home by a former employee. Fifty-nine people were sickened. But, again, that's botulism, so not relevant to perpetual stew. StuRat (talk) 05:03, 26 November 2010 (UTC)[reply]
The botulism article is relevant to canning, which was discussed above, and is one example of food-borne illness. Of the 76 million illnesses, most of them were domestic rather than being those listed. I'm puzzled why you want to persist with a practice that is likely to make you ill or worse, despite all the advice to the contrary. 92.28.241.63 (talk) 12:26, 26 November 2010 (UTC)[reply]
If you have some specific evidence that perpetual stews cause illness, I'd like to see it, but unrelated articles about industrial food contamination aren't helpful, and neither is citing the number of food-caused illnesses, if not broken down to include perpetual stews. StuRat (talk) 18:05, 26 November 2010 (UTC)[reply]
So the fact that it's "perpetual stew" means that it is absolved from all the rules of normal food hygiene? When you start heaving, I hope you can realise why. It's your funeral. 92.24.178.149 (talk) 21:57, 26 November 2010 (UTC)[reply]
If you want to incubate things, try making your own yogurt. 92.15.6.122 (talk) 23:31, 24 November 2010 (UTC)[reply]

Energy to Mass

Ok, it seems that with exotic materials such as antimatter it is pretty easy to grasp how to convert mass to energy, but are there known ways to convert energy to mass? For example, say there are a huge number of photons: do we know a way, even only a theoretical one, to convert them into something recognizable, like say aluminum or carbon? Googlemeister (talk) 20:28, 23 November 2010 (UTC)[reply]

Nucleosynthesis, particularly stellar nucleosynthesis, is the only practical way to assemble any reasonable quantity of heavy atomic nuclei out of smaller "pieces". To fuse together two hydrogen atoms, a proton–proton chain reaction takes place - this is actually exothermic, and it requires "a starting ingredient" - hydrogen. To produce hydrogen out of "pure energy", see Baryogenesis and Big Bang nucleosynthesis - at present, we can not do this in a laboratory. (In other words, we don't have any machine or particle accelerator that can take photons in and spit out hydrogen). At best, we have a theoretical understanding of the mechanism involved. This conversion from large quantities of undifferentiated energy, into distinct particles, is the process we call the "big bang." In the beginning, a hyper-dense region of energy must exist. It begins to expand, which causes the energy density to decrease and symmetry-breaking begins to occur (notably, the separation of fundamental interactions). As energy density decreases, inhomogeneity develops, and different "zones" begin behaving as various elementary particles, exhibiting the first fundamental interactions. We call this stage a "quark plasma." These quarks begin the process of coalescing into light nuclei - protons, mostly; and electrons; and thus is born the first hydrogen. For this process to occur, we need extraordinarily dense energy. It is still not clear whether a homogeneous, isotropic clump of energy will inherently begin to break symmetry (i.e., whether this is "built in" behavior); or if we began with a non-homogeneous or non-isotropic universe. We do not understand the requirements for symmetry-breaking in the universe-at-large, so we don't have a method to reproduce baryogenesis experimentally. Nimur (talk) 20:51, 23 November 2010 (UTC)[reply]
See Gamma ray#Properties#Matter Interaction, Pair production. It is basically the reverse of annihilation. This reaction cannot as yet be finely controlled enough to stop the produced pair from self-annihilating and reverting back to energy. Even if there were a way, electrons alone do not make an atom. It simply needs too much energy, and is too inefficient to be worth attempting just to see if you can. Plasmic Physics (talk) 21:10, 23 November 2010 (UTC)[reply]
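To give a sense of the energies involved: the threshold for pair production is set by the rest mass of the created pair, E = 2m<sub>e</sub>c<sup>2</sup>. A minimal sketch with standard SI constants:
<syntaxhighlight lang="python">
# Threshold photon energy for electron-positron pair production: E_min = 2*m_e*c^2
M_E = 9.109e-31  # electron mass, kg
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

e_min = 2 * M_E * C**2   # joules
print(e_min / EV / 1e6)  # ~1.022 MeV: gamma-ray territory
</syntaxhighlight>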
See also Hawking radiation, which is a peculiar effect of pair production which occurs near black holes. --Jayron32 01:42, 24 November 2010 (UTC)[reply]
So, assuming that the questioner is really asking "can we create macroscopic amounts of matter out of radiation" (so avoiding the whole "energy - mass - aren't they the same thing ?" minefield), then I think the answer is "not with known technology or any feasible extension of it", because (a) we cannot confine a large enough amount of radiation in a small enough space for a long enough time to create enough particles and (b) we have no idea how to prevent the creation of equal amounts of matter and antimatter, or stopping the matter/antimatter pairs turning back into radiation. Gandalf61 (talk) 12:13, 24 November 2010 (UTC)[reply]
If energy such as changing electromagnetic fields were used to accelerate an object, such as a small steel ball, or an ion, wouldn't the ball or ion be more massive, thus demonstrating energy turned to mass? (even if no new protons or electrons or neutrons were created). 67.48.228.68 (talk) 20:35, 25 November 2010 (UTC)[reply]
"...thus demonstrating energy turned to mass" - not really, because energy has mass and mass has energy - see mass–energy equivalence (yes, yes, I know this depends on which definition of mass you use). Despite the title used by the original questioner, I think it is clear that they intended to ask about turning radiant energy into matter, which is quite different. Gandalf61 (talk) 12:05, 26 November 2010 (UTC)[reply]

Remove strong smell from plastic and aluminium?

OK, so this is a weird question, but I still think the science desk is my best bet. I just bought an external Disk enclosure; it's in parts - aluminium heatsink, acrylic/plastic outside, and the circuit board. The heatsink and plastic both smell VERY strongly (with the same smell), and I'm very sensitive to smells. Since they are 100% separated from any electronics, I can clean them easily. The question is: with what? Any advice on how to get rid of the smell, or even a hint as to why it's there in the first place? (Since the materials are different, I don't get why they smell the same.) I'd describe the smell a bit like "burnt plastic". -- Aeluwas (talk) 20:50, 23 November 2010 (UTC)[reply]

I suspect outgassing, which is normal for some new synthetic materials. The only solution I know of is to put it somewhere where it won't bother you while it does this, perhaps a few weeks ? Of course, you could just leave all that stuff off indefinitely, and the hard drive may actually air cool even better without it. However, there is a risk of spilling something nasty into it and also dust accumulation, so perhaps it is best to put the enclosure back on after it stops stinking up the place. I would guess it was made in China, as many such products have flaws like you describe. One warning, don't put it in sunlight, or the plastic portion may be degraded by the UV light.
Now, let's also consider it actually being coated with something smelly. Does a cloth stink after you rub it on the object ? If so, then cleaning may help, after all. I'd just use liquid dish-washing detergent, preferably fragrance-free, if you are sensitive to that, too. Don't use bleach, as that could damage it. If some stink still remains, then go back to the first idea of leaving it where it won't bother you until it stops stinking, or maybe return it and get one that smells better. StuRat (talk) 21:28, 23 November 2010 (UTC)[reply]
"Now, let's also consider it actually being coated with something smelly."
I noticed that this morning. EVERYTHING in the entire package (including the package itself) has the same smell. Antistatic bags, the components, the small plastic bag for the screws, both the USB and FireWire cables, and their plastic bags. All the same smell. Since these are such different materials, it must have been added for some reason... Any ideas why (which of course would lead us to what)?
This is, by the way, the second time ever I can recall being bothered by how computer parts smell (unless you stick your nose right down to a PCB!), the other being when I got my UPS. It also had a weird smell for the first month or two of use, possibly because of something protective burning off the battery, IIRC. (I've had it for two years so I don't recall the details of what I looked at back then.) -- Aeluwas (talk) 08:59, 24 November 2010 (UTC)[reply]
Well, this seems to confirm that it is indeed coated with something smelly. They might coat everything with something to give it a nice sheen and make it more marketable, but your description of it smelling like burning plastic makes me think they had a plastic fire in the factory, causing smoke damage to everything in it, but just packaged it up and sold it anyway. You can wash most of those items with dish-washing detergent, and maybe just toss out the bags, but returning it all and getting a replacement sounds like a good alternative to me. I'd check the batch number to make sure it isn't the same, or, better yet, get a different brand that hopefully doesn't also try to sell smoke damaged items to the public as new ones. StuRat (talk) 18:05, 24 November 2010 (UTC)[reply]
Sounds to me that this smell is from the remaining free, unbound and volatile plasticizer in the plastic. See: Plasticizer#For_plastics. Heat plastic and more gets driven off. It should disperse in time. Can't think off-hand of anything that will help which won't risk making the plastic brittle, other than perhaps an alcohol wipe, which might be effective. Alcohol wipes should be safe to use on plastic surfaces, which is why they are made for electrical equipment servicing. Some types of potting compounds and conformal coatings can also smell a bit strong, with an acrid tint to them. Think this is the real origin of the Skunk Works name, because they used ******** to stick black project planes like the ********* together. I don't believe the Lockheed explanation at all. --Aspro (talk) 19:28, 24 November 2010 (UTC)[reply]
Whatever this stuff is, it won't give in easily. FWIW: the plastic bags etc. are obviously irrelevant, I only brought them up because it all smells the same, as if it's sprayed on or something. Anyway, what smells by far the most is the aluminium heatsink; the plastic is acceptable. By now the heatsink has been in water for about... 22 hours, 10 hours of which with (quite a lot of) dish soap massaged onto it and in the water, and just now I wiped it thoroughly with pure isopropyl alcohol, rinsed with water... and absolutely ZERO difference. What gives? -- Aeluwas (talk) 20:40, 24 November 2010 (UTC)[reply]
Well, anything which can dissolve is usually water-soluble, alcohol-soluble, or oil-soluble (detergent handles the last case). So, this leaves us with something that isn't soluble at all. I'm going back to my theory that it's out-gassing of some substance within the aluminum, which then apparently also deposits on nearby items. So, are you ready to return it, yet ? If not, put it in the garage or somewhere out of nose-shot and it will eventually lose most of its smell. StuRat (talk) 21:16, 24 November 2010 (UTC)[reply]

Hydrated Salts and Coordination Compounds

Hello. If all the water of crystallization in copper(II) sulfate pentahydrate evaporates at 200°C, is CuSO4·5H2O a coordination compound? If so, are the water molecules covalently bonded to the metal centre, Cu2+? (Covalent bonds need higher temperatures to break.) Are most hydrated salts coordination compounds? Thanks in advance. --Mayfare (talk) 23:48, 23 November 2010 (UTC)[reply]

It can be complex. The individual water molecules can be bonded in any number of ways, sometimes multiple ways in the same compound. It can be bonded as a ligand to either the copper(II) ion or the sulfate ion, or it can simply occupy locations in the crystal lattice without being bonded directly to either ion. The article water of crystallization uses copper(II) sulfate as a specific example; I'll let you read it to discover for yourself the particular bonding. --Jayron32 01:40, 24 November 2010 (UTC)[reply]

If the water molecules attract to the metal centre by hydrogen bonding, why is CuSO4·5H2O a coordination compound? Shouldn't ligands be covalently bonded to the metal centre in such compounds? --Mayfare (talk) 02:05, 24 November 2010 (UTC)[reply]

The water molecules are NOT attracted to the metal center by hydrogen bonding. The metal center is bonded to six oxygen atoms, four of them water molecules and two of them sulfate ions. The fifth water molecule is not bonded directly to the metal ion; it is basically free-floating in the interstices of the crystal lattice, weakly hydrogen bonded to all of the various oxygen atoms surrounding it. The oxygen-copper bond is called a "coordinate bond", which generally means that both electrons are donated by the Lewis base (in this case water). However, as Physchim62 implies below, electrons have no memory of which atom they came from. In the end, it just matters that there are oxygen-copper covalent bonds. These bonds can be explained by either hybridization theory or molecular orbital theory, which basically provide different perspectives on bonding, but such an explanation is probably outside of the scope of this reference desk. --Jayron32 02:25, 24 November 2010 (UTC)[reply]
The bonding in such compounds (coordination compounds) is midway between covalent and ionic. You can describe it in many different ways, the electrons don't care! Physchim62 (talk) 02:15, 24 November 2010 (UTC)[reply]

Are coordinate covalent bonds so weak that 200°C is sufficient to evaporate all the water of hydration including the four water molecules bonded to the metal centre? --Mayfare (talk) 19:53, 24 November 2010 (UTC)[reply]

Quite possibly. What you would need to do is to find the equilibrium constant for the reaction CuSO4·5H2O <--> CuSO4 + 5H2O, and convert this to ΔG via the equation ΔG = -RT ln K (alternately, you could calculate ΔG directly). Then, using ΔG = ΔH - TΔS, find the exact temperature at which ΔG changes from positive to negative. The way you usually do this experimentally is to find K at two different temperatures, which lets you find ΔG at two different temperatures. This allows you to solve a system of equations to isolate ΔH and ΔS values. Then substitute in these found values of ΔH, ΔS and set ΔG = 0. The temperature you find would be the minimum temperature to cause the reaction. --96.255.208.104 (talk) 21:46, 24 November 2010 (UTC)[reply]
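That two-temperature procedure can be written out in a few lines. A minimal sketch, assuming ΔH and ΔS are roughly constant over the interval (the van 't Hoff approximation); the two (T, K) pairs below are purely illustrative, not measured data for CuSO4·5H2O:
<syntaxhighlight lang="python">
import math
R = 8.314  # gas constant, J/(mol*K)

def turnover_temperature(K1, T1, K2, T2):
    """Fit ln K = -dH/(R*T) + dS/R through two points; return dH, dS and
    the temperature where dG = dH - T*dS changes sign (i.e. T = dH/dS)."""
    dH = -R * (math.log(K1) - math.log(K2)) / (1/T1 - 1/T2)
    dS = R * math.log(K1) + dH / T1
    return dH, dS, dH / dS

# Illustrative inputs only: K grows with T for an endothermic dehydration
print(turnover_temperature(K1=1e-4, T1=350.0, K2=1e-1, T2=450.0))
</syntaxhighlight>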

November 24

Are there any self-sufficient polities left?

Let's set a minimum population of 100,000 to rule out dinky communes and Pacific Islanders and such - are there any truly self-sufficient groups remaining in the world? or has trade (and other things) created interdependence pretty much everywhere? The Masked Booby (talk) 08:06, 24 November 2010 (UTC)[reply]

You have to distinguish between "can be" and "is". I bet a lot of groups could be self sufficient if they wanted, but use interdependence to make things easier and more efficient. North Korea has an official policy of self-sufficiency, but as far as I know it doesn't work in practice. Ariel. (talk) 09:25, 24 November 2010 (UTC)[reply]
Particularly if you are considering "can be", I would suggest you also have to define "self-sufficiency". At a basic level, this would arguably be water, food, some form of energy and perhaps shelter. While things like the internet are sometimes considered important enough to life in the developed world that they may be considered a basic right, it's questionable if the lack of internet access (and computers or something else to use that access) is really enough to make a society non-self-sufficient. Then there are of course things like large TVs that one wouldn't generally consider a basic necessity at all. On the other hand there are the more iffy things like medical care. Nil Einne (talk) 13:26, 24 November 2010 (UTC)[reply]
You could say the combined human population of the Earth forms a self-sufficient polity of 6 billion people. Apart from that, trade has been improving human living standards since at least the Stone Age, so if you define "self-sufficient" as no trade, then no, no significant subset of the human population is self-sufficient. Physchim62 (talk) 13:32, 24 November 2010 (UTC)[reply]
The last really big group of people to be discovered that had previously had no contact with outsiders was people living in the interior of New Guinea [22]. During the 1930s, mining expeditions and such discovered about a million people who had had no contact with people on the coasts. This population was not a single cohesive unit, but rather a smattering of individual tribes or villages, each one maybe numbering in the thousands. There may still be small groups of people who have not been contacted by the rest of the world (in the Amazon, or similar little explored regions), but the possibility of discovering a largish group of people like this is very small. Buddy431 (talk) 18:23, 24 November 2010 (UTC)[reply]

Fastest acceleration in the world

Which source produces the fastest acceleration in the world or universe? Can such an acceleration kill by dragging all the blood away from some part of the human body? —Preceding unsigned comment added by 89.77.158.172 (talk) 09:00, 24 November 2010 (UTC)[reply]

Well an explosion such as a supernova would certainly take care of the killing bit. Not sure whether that would be the fastest acceleration going on.--Shantavira|feed me 09:25, 24 November 2010 (UTC)[reply]
(ec)A wakefield plasma accelerator is probably the strongest on earth. In the universe there are some amazingly intense accelerators though. Magnetars, Pulsars, Black holes. There are two ways to accelerate something. You can accelerate every atom of the object at once, gravity for example does this. If you do that then you feel nothing - every part of you accelerates at the same rate. The second way to accelerate something is to "press" only on the outside of the object - for example a rocket ship presses on the bottom of your feet. In that case you would feel it. The article on G-force may help as well. Ariel. (talk) 09:42, 24 November 2010 (UTC)[reply]
I was thinking about this the other day by coincidence. What actually happens at the moment of say Beta decay (or other interactions ) in terms of acceleration for the emitted electron and electron antineutrino or other products ? I got nowhere. Is it even meaningful ? Sean.hoyland - talk 09:44, 24 November 2010 (UTC)[reply]
That's a pretty interesting question. I think it has no meaning because of the uncertainty principle. You can't know both Time and Energy at the same time, but when a particle is emitted its energy is very accurately known, this means that time has no meaning, and if time has no meaning neither does acceleration. Ariel. (talk) 10:33, 24 November 2010 (UTC)[reply]
I think some of the fastest accelerations occur when high-energy particles collide in particle accelerators such as the Tevatron and Large Hadron Collider. I do not know how the uncertainty principle affects this. --Gr8xoz (talk) 10:38, 24 November 2010 (UTC)[reply]
They accelerate over a large time and distance so the uncertainty principle should not be a problem. I did some calculations for the g-force article. The LHC accelerates protons at 1.9×10<sup>8</sup> g. A wakefield accelerator is far more stupendous: 8.9×10<sup>20</sup> g. Ariel. (talk) 10:50, 24 November 2010 (UTC)[reply]
You're looking at the wrong side of the reaction. Gr8xoz mentions the acceleration (or colloquially deceleration) that occurs when particle beams collide. When two high-energy particles collide head-on, one will have a problem defining exactly what one means by the acceleration during the collision itself, but there will certainly be a large change in the distribution of momentum afterwards, and hence in some sense it is probably fair to say that such collisions involve very large accelerations. Dragons flight (talk) 11:08, 24 November 2010 (UTC)[reply]
I was, you're right. But I wonder if you can really define acceleration there, at those energies the protons don't really collide and bounce - they melt, and new particles are generated on the spot. What about when two atoms of air collide? They are traveling at about 1000 miles per hour, and turn around in a very short distance. Assuming they turn around in the distance of the size of an atom, then I calculate an acceleration of 6.22×10<sup>14</sup> g! [23] That's absolutely immense. Ariel. (talk) 12:06, 24 November 2010 (UTC)[reply]
Protons in the LHC main loop circulate at a rate of ω ≈ 2πc / 27 km ≈ 70 kHz, and that gives a (relativistic) centrifugal acceleration of γvω ≈ 10<sup>17</sup> m/s², assuming circular motion. In fact the beam path is not a circle and the beams are only bent at certain points, so the maximum acceleration must be higher, at least 10<sup>18</sup> m/s². Presumably an exact figure could be found on the net somewhere. -- BenRG (talk) 01:49, 25 November 2010 (UTC)[reply]
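That γvω estimate is easy to reproduce. A minimal sketch using the published design values (7 TeV protons, 26.7 km ring) and the same circular-motion simplification noted above:
<syntaxhighlight lang="python">
import math
C = 2.998e8          # speed of light, m/s
CIRC = 26_659.0      # LHC ring circumference, m
E_BEAM_GEV = 7000.0  # design proton energy
M_P_GEV = 0.938      # proton rest energy

gamma = E_BEAM_GEV / M_P_GEV    # ~7.5e3
omega = 2 * math.pi * C / CIRC  # ~7.1e4 rad/s, taking v ~ c
print(gamma * C * omega)        # ~1.6e17 m/s^2, matching the 10^17 figure
</syntaxhighlight>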
In living things, I believe the fastest acceleration is accomplished by the ever-cool mantis shrimp, which can whip its claw through sea-water at a mind-boggling 10,400g (102,000 m/s<sup>2</sup>), about as fast as a bullet from a .22 calibre gun. Fast enough to generate a shock wave that can kill prey. And don't bother trying to hide from it - it can see from infrared right through to ultraviolet (and polarized light to boot). Matt Deres (talk) 14:37, 24 November 2010 (UTC)[reply]
A simple TV CRT can produce immense accelerations. It operates at around 1000 V and accelerates electrons across 20 cm, which corresponds to an acceleration of 10<sup>15</sup> m/s<sup>2</sup>. --140.180.14.145 (talk) 19:24, 24 November 2010 (UTC)[reply]
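That figure follows from the uniform-field estimate a = qV/(m·d); a quick check:
<syntaxhighlight lang="python">
Q_E = 1.602e-19  # electron charge, C
M_E = 9.109e-31  # electron mass, kg

def gap_acceleration(volts, gap_m):
    """Uniform-field estimate: a = qE/m = qV/(m*d)."""
    return Q_E * volts / (M_E * gap_m)

print(gap_acceleration(1000, 0.20))  # ~8.8e14 m/s^2, i.e. ~10^15
</syntaxhighlight>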
Those are some good examples. Maybe it's worth mentioning the acceleration article, in particular pointing out the a = F/m equation. This means that acceleration is proportional to force and inversely proportional to mass, so applying a larger force to a smaller mass results in a greater acceleration. However, it stops making a lot of 'intuitive' sense when you start talking about relativistic speeds and zero-rest-mass particles. Vespine (talk) 23:22, 24 November 2010 (UTC)[reply]
Relativistic acceleration makes intuitive sense. It's just curvature of the worldline. -- BenRG (talk) 01:52, 25 November 2010 (UTC)[reply]

Psychology

What is the name used when someone creates traumatic events so that they can then come in and become the hero? E.g., a fireman who sets fires so he can be the savior, or a nurse who doses patients, inducing cardiac arrest, and then administers CPR. —Preceding unsigned comment added by 99.184.223.44 (talk) 14:47, 24 November 2010 (UTC)[reply]

That sounds a bit like Münchausen syndrome by proxy. AndrewWTaylor (talk) 15:51, 24 November 2010 (UTC)[reply]
MSbP typically (if there exists such a thing as a 'typical' case) involves someone who is seeking attention and sympathy through assuming a 'victim' role, rather than a 'hero' role. We have an article Hero Syndrome which is directly on point, but the label as used there seems to be mostly a media creation rather than a formal medical diagnosis. (Google searches on "firefighter arson" and similar keywords pull lots of hits, but very little good medical or scientific literature.) TenOfAllTrades(talk) 16:37, 24 November 2010 (UTC)[reply]
Actually, that's Munchausen syndrome, where the person hurts themself to get attention. Munchausen by proxy means they hurt someone else to get attention, "by proxy" meaning "designating another person". MSbP often occurs when mothers hurt their children, and then bring them to the doctor, or "save" them themselves, bringing attention for the act. --96.255.208.104 (talk) 21:38, 24 November 2010 (UTC)[reply]
Beverley Allitt is an example of this. --TammyMoet (talk) 18:44, 24 November 2010 (UTC)[reply]

No air

When exactly did people first discover that space had no air in it, and how was the discovery made / proven? —Preceding unsigned comment added by 91.1.149.157 (talk) 21:01, 24 November 2010 (UTC)[reply]

Vacuum says Otto von Guericke is the one. Clarityfiend (talk) 21:15, 24 November 2010 (UTC)[reply]
It's a long article, but I don't think it says that - or at least precisely that. Otto von Guericke did work on vacuum pumps, but I don't think he postulated anything about the nature of outer space, which is what I think the questioner is asking about. I may be wrong there; I didn't read the article thoroughly. Matt Deres (talk) 21:33, 24 November 2010 (UTC)[reply]
Outer space#Discovery says that Otto von Guericke concluded that there must be a vacuum between the Earth and the Moon. Red Act (talk) 22:17, 24 November 2010 (UTC)[reply]
Thanks for the correction; I've struck-through my earlier comment. Interestingly, it seems that it wasn't until the early 20th century that consensus regarding the vacuum of outer space managed to overcome the idea of luminiferous aether - centuries afterwards. Matt Deres (talk) 22:36, 24 November 2010 (UTC)[reply]
Peripherally related - long before there were rocket explorations of outer space, radio was used to probe our upper atmosphere. This allows experimental study of the composition of the high atmosphere, which gradually tapers off toward vacuum. The radiosonde article, and the related ionosonde, describe the radio apparatuses that scientifically probe the ionosphere. The realization that the atmosphere becomes heavily ionized as well as very sparse was "mostly accidental." Scientists began to study radiophysics in order to make effective use of the skywave effect and to explain the Luxembourg effect. Nimur (talk) 22:47, 24 November 2010 (UTC)[reply]
I got the name from the outer space article, but somehow typed vacuum. Must be one in my head. Clarityfiend (talk) 01:11, 25 November 2010 (UTC)[reply]
Even in ancient and medieval times, people were of the opinion that "outer space" had no air in it, though they thought that the realm of air did extend to the orbit of the moon. They thought, following Aristotle, that beyond that was aether. (Despite some confusion in the linked article, based on the similarity of name, this has basically no relation to the luminiferous ether referred to by Matt Deres). Deor (talk) 01:35, 25 November 2010 (UTC)[reply]

November 25

light peak

I have some questions about the Light Peak topic: which microcontroller is used in it and what is its use, and how does the data conversion (electrical to optical) take place? —Preceding unsigned comment added by 117.254.115.64 (talk) 00:30, 25 November 2010 (UTC)[reply]

Have you read our article on Light Peak and checked the official Intel Research Light Peak Overview website? Nimur (talk) 00:49, 25 November 2010 (UTC)[reply]

Does phosphorus (P) really look like that? --Chemicalinterest (talk) 00:58, 25 November 2010 (UTC)[reply]

The two standard allotropes of phosphorus are white phosphorus and red phosphorus; this could be something like "violet phosphorus" or "black phosphorus", both of which are more stable than white or red phosphorus, but require high temperatures and/or pressures to form. It could also be some form of phosphate. Phosphate minerals and Phosphorite are important sources of phosphorus, often for use in fertilizers. --Jayron32 01:07, 25 November 2010 (UTC)[reply]
The only time I recall encountering elemental phosphorus, it looked like white phosphorus. More often, I've encountered calcium phosphate, or various mineral forms like apatite. Nimur (talk) 02:06, 25 November 2010 (UTC)[reply]
My money's on that being a phosphate-containing mineral, rather than pure phosphorus. (Similarly, the samples for 'chromium' and 'magnesium' have the look of metal-containing mineral ores rather than the pure metals themselves — though I'm not qualified to say whether or not they're just very badly oxidized.) TenOfAllTrades(talk) 14:48, 25 November 2010 (UTC)[reply]

Brachistochrone problem

Can anyone explain the Brachistochrone problem in a more simple way than "The curve connecting two points displaced from each other laterally, along with which a body, acted upon only by gravity, would fall in the shortest time."? Thanks in advance. Toolssmilezdfgsdffgrdsfrtdfh975243 (talk) 01:20, 25 November 2010 (UTC)[reply]

You have two fixed points. How do you connect a wire between them so that a bead moving under gravity gets from the higher point to the lower point in the least time? Assume the wire is frictionless. --140.180.14.145 (talk) 01:35, 25 November 2010 (UTC)[reply]
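The answer turns out to be an arc of a cycloid. A small numerical sketch makes the "least time" point concrete, using the textbook result that a bead released at the cusp of a half-arch of a cycloid of rolling radius r reaches the bottom in π√(r/g); the straight wire is the uniform-acceleration ramp between the same two points:
<syntaxhighlight lang="python">
import math
G = 9.81  # m/s^2

# Endpoints: (0, 0) down to (pi*r, 2*r), with y measured downward.
r = 1.0
t_cycloid = math.pi * math.sqrt(r / G)           # half-arch descent time
L = math.hypot(math.pi * r, 2 * r)               # straight chord length
t_straight = math.sqrt(2 * L**2 / (G * 2 * r))   # ramp: a = g * (drop / L)

print(t_cycloid, t_straight)  # ~1.00 s vs ~1.19 s: the curve beats the chord
</syntaxhighlight>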
I think the word with is unnecessary, and makes the phrase confusing. 81.131.65.104 (talk) 14:17, 25 November 2010 (UTC)[reply]

Corvus

Why were corvuses so upsetting to a ship's balance? Is it because it is a large, heavy mast sticking vertically in the air? If so, couldn't one sail with it lowered? And I can't imagine that it could weigh any more than the ship's mast, or at least not with the leverage the mast would have at such an enormous height. Thanks in advance for the answers. --T H F S W (T · C · E) 02:53, 25 November 2010 (UTC)[reply]

The illustration in the linked article seems to show it being much longer than a mast, and it had to be sturdy enough to support a column of rushing troops, so it would have been both heavy (they gave the estimate of 1 ton) and have had a high center of gravity. Lowering it along the deck of the ship, or better yet, into the hold, would certainly help, but it looks from the diagram like the only way it could be lowered was off the side of the ship, which would pull the ship over to that side (if it wasn't supported by another ship at the far end). StuRat (talk) 03:24, 25 November 2010 (UTC)[reply]
Well, I wouldn't really consider that diagram as an authority, and the beam supporting it is not a mast. And I think corvuses (or whatever the plural for corvus is) went off the front, rather than the side, most of the time. And I think a mast would have to be stronger; I don't think a large group of soldiers would have nearly as much weight as a couple tons of sail. Plus, whereas soldiers rushing across are spread out and are supported at both ends, a mast has to support the whole sail from the top. --T H F S W (T · C · E) 04:29, 25 November 2010 (UTC)[reply]
On a TV show (don't recall which one, but I think it was on the History Channel), they recreated a corvus and found it to be extremely clumsy and dangerous to its deployers. Clarityfiend (talk) 05:02, 25 November 2010 (UTC)[reply]
One other factor to consider is that wood is better under compression (like a vertical mast) than it is under a shearing or bending load (like a deployed corvus). Thus, much more wood is needed to support the 2nd case. Although, while the weight of the sails and the mast itself compresses the mast, the force on the sails, from the wind, is a shearing force, so the mast must be strong enough to withstand that, too (up until the point where they pull in the sails in high winds). StuRat (talk) 17:10, 25 November 2010 (UTC)[reply]
No. The mast is not taking up most of the shearing force of the sail. That is, instead, transferred via the rigging, in particular the stays, shrouds, and sheets. --Stephan Schulz (talk) 00:16, 26 November 2010 (UTC)[reply]
In that case, my point (about the corvus needing to be bulky to support shearing forces not experienced by the mast) is even stronger. StuRat (talk) 04:58, 26 November 2010 (UTC)[reply]
Yes. And the corvus was also intentionally made heavy. It was supposed to smash down any defenders, and to securely fix itself into the enemy ship with a spike. It's not very effective if your armored legionaries run over the boarding bridge while it slips off the other ship...swimming with 40 pounds of bronze armor may be borderline possible for a good swimmer, but it is definitely not a battle-winning strategy. --Stephan Schulz (talk) 10:12, 26 November 2010 (UTC)[reply]
The reason carrying a corvus in rough seas is problematic is that it raises the ship's center of mass. A stable ship will have a low center of mass. The higher a ship's center of mass, the more likely a slight tilt will cause it to capsize. WikiDao(talk) 15:29, 26 November 2010 (UTC)[reply]
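To put a rough number on that centre-of-mass rise (all figures hypothetical; ancient hull masses are poorly documented), the combined centre of mass is just a mass-weighted average:
<syntaxhighlight lang="python">
# Hypothetical figures for illustration only.
ship_mass = 100_000.0  # kg, a light war galley, order of magnitude
ship_com = 0.0         # m, take the unladen centre of mass as the reference
corvus_mass = 1_000.0  # kg, the ~1 ton estimate quoted above
corvus_com = 6.0       # m, a raised bridge well above the waterline

rise = (ship_mass * ship_com + corvus_mass * corvus_com) / (ship_mass + corvus_mass)
print(rise)  # ~0.06 m: small in absolute terms, but weighed against a
             # narrow oared hull's slim stability margin, not negligible
</syntaxhighlight>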
I believe everyone here knows that; the real Q is why it raises the center of mass more than the mast does, which we seem to have answered fairly well. There is also the point about the center of mass being off center, which has also been addressed. StuRat (talk) 17:58, 26 November 2010 (UTC)[reply]
Mast? I had thought the corvus was deployed on oar-ships. Any heavy weight above the water-line, whether in addition to a mast or not, would have made the ship top-heavy, I'm just trying to make that answer clear. WikiDao(talk) 20:00, 26 November 2010 (UTC)[reply]

How does radiometric dating work?

I understand the principle behind radiocarbon dating, namely that the fraction of C-14 in the atmosphere is maintained at a constant level by the flux of cosmic rays activating nitrogen. Since organisms exchange carbon with their surroundings, the fraction of C-14 in an organism also remains constant during its lifetime. When the organism dies, C-14 is no longer in steady-state and its fraction begins to decrease, allowing its use for dating.

My question is about other forms of dating, like uranium-lead dating. As far as I understand, uranium concentrations are not being held in steady-state like C-14 is, so all uranium created from the supernova (or equivalent) that formed the solar system should decay at the same pace. If this is so, how can uranium be used to find the age of a rock, and what exactly does this age correspond to? The Wikipedia article on the subject goes in depth on the calculations involved, but not this background. —Preceding unsigned comment added by 68.40.57.1 (talk) 05:48, 25 November 2010 (UTC)[reply]

Quoting from the article you linked to:
Uranium-lead dating is usually performed on the mineral zircon (ZrSiO4), though it can be used on other minerals such as monazite, titanite, and baddeleyite. Zircon incorporates uranium and thorium atoms into its crystalline structure, but strongly rejects lead. Therefore we can assume that the entire lead content of the zircon is radiogenic. Where this is not the case, a correction must be applied. Uranium-lead dating techniques have also been applied to other minerals such as calcite/aragonite and other carbonate minerals. These minerals often produce lower precision ages than igneous and metamorphic minerals traditionally used for age dating, but are more common in the geologic record.
From later parts of the article, it can be more complicated than that, as lead can sometimes leach out, but this seems to be the basics.
Nil Einne (talk) 06:59, 25 November 2010 (UTC)[reply]
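In other words, because fresh zircon starts with essentially no lead, the age comes straight from the accumulated daughter/parent ratio. A minimal sketch of the U-238 → Pb-206 age equation (the ratio below is illustrative, not a measurement):
<syntaxhighlight lang="python">
import math

LAMBDA_U238 = 1.55125e-10  # decay constant of U-238, per year

def u_pb_age_years(pb206_per_u238):
    """Age from the radiogenic 206Pb/238U ratio: t = ln(1 + D/P) / lambda."""
    return math.log(1 + pb206_per_u238) / LAMBDA_U238

print(u_pb_age_years(0.5) / 1e9)  # ~2.6 billion years for a ratio of 0.5
</syntaxhighlight>
So the "age" is the time since the crystal formed and began trapping uranium while excluding lead, not the time since the uranium itself was created.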

Sunrise still becoming later after solstice

I wanted to find when sunrise would be in Cincinnati, so I went to sunrisesunset.com, selected Cincinnati at http://www.sunrisesunset.com/custom_srss_calendar.asp, and got a calendar for December. To my surprise, sunrise is latest at the end of the month: yes, it's only three minutes later than at the solstice, but it's still later. Why would sunrise continue to happen later after the solstice? 66.161.250.230 (talk) 12:29, 25 November 2010 (UTC)[reply]

I think we've discussed this before so you may find something in the archives. In any case [24] [25] should get you started Nil Einne (talk) 14:07, 25 November 2010 (UTC)[reply]
If you look at those data for Cincinnati, you will see that both sunrise and sunset are getting later in late December, and the daylight period is getting longer (as you would expect). The reason is that the moment that the Sun is highest in the sky is not exactly midday, even after accounting for time zones. You can get a better explanation of why in our article on the Equation of Time. Physchim62 (talk) 14:15, 25 November 2010 (UTC)[reply]
That happens because Earth's orbit is not a perfect circle. It is slightly elongated. Because of that, some days are actually slightly longer or slightly shorter than 24 hours. That causes a slight shift in the sunrise time that compounds the shift in sunrise time due to the change of seasons. The final result is that the latest sunrise is slightly shifted from the solstice. 76.123.74.93 (talk) 14:13, 25 November 2010 (UTC)[reply]
People have known about this for a long time, even the ancients who, lacking the distractions of the modern entertainment industry, filled their time with astronomical observations and careful calculations of recurrent celestial events. The Analemma was devised as a means of calculating deviations in the actual day from the mean solar day. --Jayron32 14:58, 25 November 2010 (UTC)[reply]
The earliest sunset is in early December, the latest sunrise is in early January. http://www.timeanddate.com/worldclock/sunrise.html The shortest day of the year is at the winter solstice. I do hope that the proposed bill coming before parliament in the UK is successful and we switch to continental European time so that the evenings are lighter and more enjoyable, particularly in the spring and autumn. 92.28.251.194 (talk) 18:34, 25 November 2010 (UTC)[reply]
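The drift of solar noon can be sketched with a common approximation to the equation of time (good to a minute or two; sign conventions vary). Around the December solstice the offset falls by roughly half a minute per day, which outweighs the tiny post-solstice change in morning day length and keeps pushing sunrise later:
<syntaxhighlight lang="python">
import math

def equation_of_time_minutes(day_of_year):
    """Approximate sundial-minus-clock offset, in minutes."""
    b = 2 * math.pi * (day_of_year - 81) / 365
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

# Mid-December through early January: the offset falls steadily, so the
# clock time of solar noon (and hence of sunrise) keeps getting later.
for day in (349, 355, 361, 2):
    print(day, round(equation_of_time_minutes(day), 2))
</syntaxhighlight>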
I raised this same question earlier this year and received some excellent answers. See HERE. Dolphin (t) 01:15, 26 November 2010 (UTC)[reply]

Special relativity

It's been said that special relativity raises the status of measurement - that the value of a quantity is intimately tied to how it can be measured. But where in special relativity does that become important? I learned SR by first having the Lorentz transformations derived, and (almost) all the results followed from these equations. So where does the elevation of measurement come into effect? 70.52.44.192 (talk) 13:02, 25 November 2010 (UTC)[reply]

Are you sure that the statement refers to special relativity, not quantum mechanics? --Wrongfilter (talk) 14:34, 25 November 2010 (UTC)[reply]
How did you derive the Lorentz equations? I've certainly seen them derived starting with a discussion about how time and space intervals are measured e.g assuming distances are measured by sending and receiving light signals. As I recall the term "operational definition of measurement" was thrown about. 129.234.53.175 (talk) 18:19, 25 November 2010 (UTC)[reply]
The Lorentz equations are so basic that the way you derive them depends a lot on which assumptions you want to use. It's kind of like proving 2+2=4.
I don't think I agree that special relativity changes the nature of measurement, versus Newtonian physics. What I do think, though, is that physicists and mathematicians overlooked special relativity for decades because they weren't thinking carefully about measurement. That was the only real contribution of Einstein's 1905 paper. The Lorentz transformations had been derived already (by Lorentz), but people were still stuck on Newton's idea of absolute mathematical time, which they believed was the thing being measured even if physical clocks were affected in such a way that they measured it wrong. Einstein finally threw that away, and people who read his paper finally realized why the Lorentz transformations made sense. It's interesting that the key to understanding quantum measurement turned out to be the same as the key to understanding special relativity: treating the measurement apparatus as a physical system that follows the same rules as the thing being measured. I have the fond hope that whatever is preventing us from understanding quantum gravity will turn out to be equally fundamental... -- BenRG (talk) 21:35, 25 November 2010 (UTC)[reply]

Relative velocity

I think it is more practical to say Lorentz assumed the Lorentz Transformation (LT) and Einstein was the first person to try to derive it. If you look at the time equation of the LT, you will wonder how Lorentz got it. But after you change the spatial equation to x=(x'/γ)+vt, and change the time equation to t=(t'/γ)+(vx/c^2), then put the right part of the latter equation into the t of the previous equation, you will find that the result is x = γ(x'+vt'). Yes, after you combine equations in the LT you get the spatial equation of the inverse LT. Do you know how Lorentz got the time equation? It is so simple: Lorentz just assumed the "hypothesis of ruler contraction" to get x'= γ(x-vt), and x= γ(x'+vt') for the inverse LT, then replaced the x' in the latter equation by γ(x-vt); and ha, there is the time equation of the LT. Any two of the four equations in the LT and inverse LT can derive the other two. That means that no matter how people derive the LT, the equations of the LT and the inverse LT will always coexist. Logically speaking, we may assume S' is moving at velocity v and S is at rest, or we may assume S' is at rest and S is moving at velocity -v; but if we assume both conditions coexist, we should be able to find something very bad in the LT. It is bad: in the LT, the v is always zero. Jh17710 (talk) 04:29, 26 November 2010 (UTC)[reply]

The phrases "is at rest" and "is moving at velocity v" are meaningless, unless it's clear what movement is being measured relative to. If the x and x' axes point in the same direction, then the statements "S' is moving in the x direction relative to S at speed v" and "S is moving in the x' direction relative to S' at speed –v" mean the same thing. There is no inconsistency there, and no problem with the LT and the inverse LT coexisting in the same problem. Red Act (talk) 19:40, 26 November 2010 (UTC)[reply]
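The self-consistency is easy to check symbolically: composing the inverse transformation with the forward one returns the original coordinates identically, for any v with |v| < c, with no step forcing v = 0. A minimal sketch using sympy:
<syntaxhighlight lang="python">
import sympy as sp

x, t, v = sp.symbols('x t v', real=True)
c = sp.symbols('c', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

# Forward LT (S -> S'), then the inverse LT (S' -> S):
xp = gamma * (x - v * t)
tp = gamma * (t - v * x / c**2)
print(sp.simplify(gamma * (xp + v * tp)))         # x
print(sp.simplify(gamma * (tp + v * xp / c**2)))  # t
</syntaxhighlight>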

Platinum arsenide

Does anyone here know how to extract platinum metal from platinum arsenide (in my case it is sperrylite) and dispose of the arsenic safely? I have searched the internet and found no information on this.

Another thing I want to know: when we burn platinum arsenide, does it decompose? I guess it does, because platinum is a noble metal, and I have done an experiment: when I burned it, it turned black and seemed somewhat shrunken. If it does decompose, will it decompose to platinum metal rather than its oxides?

I heard that platinum arsenide is attacked by oxidizing acids like nitric acid. When it is attacked, does it form platinum nitrate and arsenic nitrate, or does the acid just eat the arsenic away, leaving the platinum metal?

Thank you very much! —Preceding unsigned comment added by 124.82.11.255 (talk) 14:08, 25 November 2010 (UTC)[reply]

Chemically extracting metals from their ores is actually usually a very dangerous process. Platinum extraction, as you note, produces nasty arsenic compounds. Gold extraction often involves the use of cyanides, see Gold cyanidation. Metal extraction is almost universally poisonous, environmentally destructive, and/or energetically expensive. See tailings and slag for examples of toxic wastes from various stages of metal extraction processes. --Jayron32 15:02, 25 November 2010 (UTC)[reply]
Aqua regia will convert this to chloroplatinic acid, H2PtCl6, and nitric acid will convert it to Pt(NO3)4. Burning the arsenide should remove the arsenic as an oxide vapour, but this could be polluting. Graeme Bartlett (talk) 21:17, 25 November 2010 (UTC)[reply]

ECG

Performing an ECG on a patient having a metallic pin in his femur. Khuloodm (talk) 16:12, 25 November 2010 (UTC)[reply]

Does a metallic pin in a patient's bone affect ECG readings? Khuloodm (talk) 16:22, 25 November 2010 (UTC)[reply]

I have no idea, having a look at the Electrocardiography article, I can't deduce any plausible reason why a metal pin should affect ECG readings, but if that's a homework question, maybe it's one of those trick questions that only ECG technicians know the answer to. Vespine (talk) 21:44, 25 November 2010 (UTC)[reply]

Human urine

What does human urine taste like ? Obviously I have no interest in trying it out and I'm not asking you to go do some original research; I'm just morbidly curious. I'm more looking for scientific answers in the form: human urine contains these compounds, which are also found in these more commonly ingested substances, so it might taste like this. Thanks. 24.92.78.167 (talk) 17:29, 25 November 2010 (UTC)[reply]

Our article on urophagia discusses this a bit - it seems to vary quite a bit based on what the excreter has recently eaten and drunk. Because salts are concentrated in it (per our article), I assume it is usually salty unless something odd is affecting the flavour/odour. Matt Deres (talk) 18:33, 25 November 2010 (UTC)[reply]
It's definitely salty. It tastes pretty much how it smells: like urine. Urine is generally sterile, and safe enough to drink in small quantities; there's no reason not to try some (other than the grossness factor, which isn't insignificant). Buddy431 (talk) 00:03, 26 November 2010 (UTC)[reply]
It's salty (due to the salts) and has a bitter, ammonia-like taste due to the urea. Beyond that, it very much depends on what else the person has been eating and drinking. Rockpocket 00:13, 26 November 2010 (UTC)[reply]
In the early days of medicine, physicians would taste the urine of any patient suspected of suffering diabetes. If the urine was slightly sweet due to the presence of sugar this confirmed the diabetes. See History. Dolphin (t) 01:05, 26 November 2010 (UTC)[reply]
You're clearly referring to diabetes mellitus, as distinguished from diabetes insipidus (the latter associated with dilute, not sweet, urine). Note that urine contains no sugar until the diabetes mellitus is sufficiently advanced for blood glucose to exceed the renal threshold (about 180 mg/dL or 10 mM), so the "taste test" would not rule out early diabetes mellitus (of course, diabetes mellitus generally isn't symptomatic until the renal threshold of glucose is exceeded, so it was a useful test). -- Scray (talk) 01:44, 26 November 2010 (UTC)[reply]
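As a quick unit check on the threshold quoted above (an editorial aside, using glucose's molar mass of about 180 g/mol):

 180 mg/dL = 1.8 g/L; 1.8 g/L ÷ 180 g/mol ≈ 0.010 mol/L = 10 mM

so the two figures given are the same threshold in different units.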
Mellitus is Latin for "honey-sweet"... that's an intentional correlation. Apparently a number of cultures either tasted the urine themselves or used animals (ants or bees) to determine whether or not a particular individual's urine contained sugar, making it sweet. Shadowjams (talk) 12:25, 26 November 2010 (UTC)[reply]

== cloning plants ==

Were scientists cloning plants before Dolly the Sheep? If so, for how long? —Preceding unsigned comment added by 69.247.48.131 (talk) 20:47, 25 November 2010 (UTC)[reply]

You don't need to be a scientist to clone plants. Every time you take a cutting of a house plant, you are cloning it! 86.162.106.18 (talk) 21:07, 25 November 2010 (UTC)[reply]
King's Holly Mac Davis (talk) 23:16, 25 November 2010 (UTC)[reply]
Bananas are famously propagated by cloning. Bananas have probably been cultivated for the past 7000 years in some areas [26], but it's unclear how long they have been cloned rather than grown from seed. In any case, grafting has been practiced in the Far East for the past 4000 years or so. Buddy431 (talk) 00:12, 26 November 2010 (UTC)[reply]
The Navel Orange is my favorite example, since all of the millions and millions consumed over the last 180+ years have been, essentially, the same orange! The Masked Booby (talk) 03:30, 26 November 2010 (UTC)[reply]
Along these lines, the same can be said for every commercial apple variety. There's only one Granny Smith, only one Red Delicious, etc. The point is that if a certain apple tree makes tasty apples, there is no guarantee that offspring grown from seed (i.e. via sexual reproduction) will taste the same. The only way to ensure this is by grafting (cloning) one tree over and over. SemanticMantis (talk) 15:13, 26 November 2010 (UTC)[reply]
That's not strictly true: as the Red Delicious article points out, there have been a number of mutations over the last hundred years, leading to multiple strains (genetically similar, but not identical) that may truthfully be called Red Delicious apples. In general, though, you are quite correct. Buddy431 (talk) 23:32, 26 November 2010 (UTC)[reply]

== Research subject ==

Is there a way I could get paid to become a psychological/cognitive research subject? I have a lot of strange abilities. I lucid dream every night, and I can depersonalize, derealize, and überrealize very easily and at will. Pseudohallucinations are common, and so are false memories, illusions of precognition, and many other little oddities. None of these things has ever bothered me in life. I enjoy them, and I enjoy learning how to manipulate them. I would like to learn to do more of these things, and to help humanity's understanding of these mental effects. I am also very interested in being injected with psychoactive drugs during the dream state to see how sensations and perceptions are altered. My dream recall can go very deep, and is easily trained and untrained. I would really like to find a situation where I can spend some time delving into my own mind in waking and dream states while free from the responsibilities of civilization. Any ideas? -- Mac Davis (talk) 23:33, 25 November 2010 (UTC)[reply]

It's unlikely anyone is going to inject you with psychoactive drugs, but this site should give you some pointers on how to get involved in dream research. Rockpocket 00:18, 26 November 2010 (UTC)[reply]
Hey! That's a really good site. I read a lot of it. I also found a presentation from DEF CON with some scripts and schematics. I also already have somebody reliable to work with.

You can look on the bulletin boards around your local college or university psychology department to see who needs research subjects; they often pay nominal fees for your time. But please don't specifically seek out experiments involving areas in which you think you may be a statistical outlier. To do so will skew the experiments' results. Ginger Conspiracy (talk) 01:37, 27 November 2010 (UTC)[reply]

== Car turbochargers and torque ==

What causes turbos to produce so much torque (especially smaller twin turbos)? The smaller twin turbos always seem capable of producing so much more torque than horsepower. Why is that? —Preceding unsigned comment added by 76.169.33.234 (talk) 23:38, 25 November 2010 (UTC)[reply]

Your statement that some turbos are capable of producing more torque than horsepower is meaningless: torque and power are two different quantities. Torque is typically measured in newton-metres (N·m), while power is measured in newton-metres per second, i.e. in watts, kilowatts or horsepower.
Consider an engine producing a torque of 1000 N·m. If the speed of this engine is 1000 RPM it is producing power of 104.7 kW, but if the speed is 10,000 RPM it is producing 1047 kW. Dolphin (t) 04:58, 26 November 2010 (UTC)[reply]
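Dolphin's figures are easy to verify with the standard relation P = τω (an editorial sketch in Python, not part of the reply above):

 import math

 def power_kw(torque_nm, rpm):
     """Shaft power in kW from torque (N*m) and engine speed (RPM)."""
     omega = rpm * 2 * math.pi / 60  # convert RPM to angular speed in rad/s
     return torque_nm * omega / 1000

 print(power_kw(1000, 1000))   # ~104.7 kW
 print(power_kw(1000, 10000))  # ~1047 kW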

Let me use a specific example to rephrase the question in the way the OP actually means it, which may help follow-up RefDeskers: why do turbocharged engines such as that in the SEAT Leon Cupra (2L turbo, 177 kW @ 5,700-6,300 rpm, 300 Nm @ 2,200-5,500 rpm) produce so much more torque, so much lower down in the rev range, than naturally aspirated engines of the same power, e.g. the Honda S2000 (2L NA, 177 kW @ 8,300 rpm, 208 Nm @ 7,500 rpm)? The torque curves of turbocharged engines are disproportionately swollen at lower RPMs compared to NA engines. What makes turbochargers so effective at low RPMs? Zunaid 10:05, 26 November 2010 (UTC)[reply]

Thanks for that clarification. It was perfectly clear to me what was being asked; I don't know if Dolphin really didn't understand or was just being difficult. StuRat (talk) 17:51, 26 November 2010 (UTC)[reply]
See: Turbocharger. --Aspro (talk) 17:16, 26 November 2010 (UTC)[reply]
Does that article actually answer the question ? If so, I must have missed it. StuRat (talk) 17:55, 26 November 2010 (UTC)[reply]
Yes. More air drawn into the cylinder on the intake stroke.--Aspro (talk) 18:01, 26 November 2010 (UTC)[reply]
And how does that alter the HP to torque ratio ? StuRat (talk) 23:01, 26 November 2010 (UTC)[reply]
More oxygen availability causes a faster burn with the same amount of fuel, and allows a complete burn with more fuel. Ginger Conspiracy (talk) 01:41, 27 November 2010 (UTC)[reply]

= November 26 =

== Swallowing one's tongue ==

I read the following sentences on Wikipedia: "Teammates Ivica Dragutinović and Andrés Palop immediately ran to his side as he lost consciousness. Moments later, club medical staff and other players followed suit, as Dragutinović stopped Puerta from swallowing his tongue." What is "swallowing one's tongue"? Can you give some explanation? Thank you! —Preceding unsigned comment added by 72.198.195.96 (talk) 05:43, 26 November 2010 (UTC)[reply]

When you lose consciousness, your tongue can lose muscle tone and (especially if you're flat on your back) can fall back in your mouth, occluding the airway. It's not that you actually 'swallow' it in the way that you swallow food, but merely that it can block the airway. That's why one of the first steps in first aid with an unconscious patient is to turn them on their side (along with preventing choking on things like spontaneous regurgitation). --jjron (talk) 08:24, 26 November 2010 (UTC)[reply]

== RFID ==

Can RFID tagging (i.e. of products exposed for sale in a retail store) be used to defeat aluminium-lined bags used by shoplifters? In addition, what do RFID scanners which are to be placed at a store's entrance look like; are they conspicuous enough to deter would-be shoplifters from even trying? Alternatively, can RFID scanners placed at store entrances be made inconspicuous enough to aid in apprehending shoplifters? Rocketshiporion 05:50, 26 November 2010 (UTC)[reply]

Aluminum foil will attenuate RFID signals in the same way it attenuates most security tags. The amount of foil required to prevent detection will depend on the details of the system, but for many RFID uses the signal may be undetectable behind only one to a few layers of foil. Dragons flight (talk) 07:30, 26 November 2010 (UTC)[reply]
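To put a rough number on that, here is a skin-depth estimate for aluminium at the common 13.56 MHz HF tag frequency (an editorial sketch; handbook resistivity, and geometry effects such as gaps in the foil are ignored):

 import math

 rho = 2.65e-8         # resistivity of aluminium at room temperature, ohm*m
 f = 13.56e6           # HF RFID carrier frequency, Hz
 mu0 = 4e-7 * math.pi  # permeability of free space, H/m

 # Skin depth: the depth at which the field falls to 1/e of its surface value
 delta = math.sqrt(2 * rho / (2 * math.pi * f * mu0))
 print(delta * 1e6)    # ~22 micrometres

Household foil is typically 15-25 μm thick, i.e. on the order of one skin depth, which is consistent with a few layers attenuating the signal heavily.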
Note that current security measures try to detect an object in the vicinity of the door, but a better approach would be to continuously monitor the position of every item in the store and raise an alarm when any of them disappears, noting the site of the disappearance. This requires greater range, bandwidth, and computing power to track all those objects, but it is possible right now. However, the cost of the system and the size of the tags make it not yet practical for most items. Perhaps very expensive items, like jewelry, might be the first to get this treatment. StuRat (talk) 17:46, 26 November 2010 (UTC)[reply]
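A toy sketch of that continuous-inventory idea (editorial; read_visible_tags is a hypothetical stand-in for whatever tag-polling interface the reader hardware exposes, and a real system would reconcile disappearances against point-of-sale records):

 import time

 def monitor(read_visible_tags, poll_seconds=1.0):
     """Poll the readers and alarm when a known tag stops responding."""
     known = set(read_visible_tags())
     while True:
         time.sleep(poll_seconds)
         current = set(read_visible_tags())
         for tag in known - current:
             # A real system would first check whether the item was just sold.
             print(f"ALARM: tag {tag} is no longer visible")
         known = current  # newly stocked items are picked up on the next pass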

== mining asteroids ==

Hi, I came across this page where they say that an asteroid (3554 Amun) that is a mile wide contains 30 times as much metal as humans have mined throughout history (ever). Surely this is wrong, or am I understanding it incorrectly? Sandman30s (talk) 06:45, 26 November 2010 (UTC)[reply]

I agree it is wrong, though perhaps not as wrong as one might guess. According to list of countries by iron production, global iron mining was 2.3 billion tons/year in 2009. As raw metal, that would have a volume of 0.3 km³. 3554 Amun is only about 7 times that annual volume. So definitely not 30 times all metal ever, but still a large amount on the scale of iron mining. Also, iron stands out for its very large production volumes. Most other metals we mine are produced in much smaller quantities (e.g. copper and aluminum are only a few percent of the iron figures), so a concentration of those metals would be comparatively more significant if one existed. Dragons flight (talk) 07:50, 26 November 2010 (UTC)[reply]
Thanks, I would never have imagined those figures... Sandman30s (talk) 09:48, 26 November 2010 (UTC)[reply]
Hang on a moment. I could not find a web page giving the volume of Amun (I did find multiple sources giving its "diameter", but since small asteroids are not spherical, this gives little idea of its volume). But Wikipedia's page shows its mass as 1.6e13 kg, which is 16 billion metric tons. This accords with Dragon's figure of 7 times the 2.3 billion tons of iron mined in 2009, if those are metric tons; if they're short tons, as one might expect from a US source, it would be nearer 8 times. But several other Internet sources give Amun's mass as 30 billion metric tons, which (if iron) would be equivalent to 13 or 14 years' terrestrial production rather than 7 or 8. Still nowhere near 30 times the world's all-time production; perhaps someone slipped a factor of 1,000 in their original calculation. --Anonymous, 02:02 UTC, November 26, 2010.
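The arithmetic in this thread is easy to check (an editorial sketch in Python; assumes solid iron at about 7870 kg/m³ and metric tons throughout):

 IRON_DENSITY = 7870       # kg/m^3, solid iron
 annual_kg = 2.3e9 * 1000  # 2.3 billion metric tons mined in 2009

 print(annual_kg / IRON_DENSITY / 1e9)  # annual volume in km^3 -> ~0.29
 print(1.6e13 / annual_kg)              # Amun at 1.6e13 kg -> ~7 years' production
 print(3.0e13 / annual_kg)              # Amun at 3.0e13 kg -> ~13 years' production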

== How can a solvent be non-polar but be made of polar molecules? ==

My organic chemistry textbook makes a distinction between polar molecules and polar solvents. It says that polar molecules are identified by the molecule's high dipole moment (μ), while polar solvents are identified by a high dielectric constant.

It says that all polar solvents are made of polar molecules, but that the opposite is not true, and it provides the example of formic acid vs. acetic acid to demonstrate this.

Both formic acid and acetic acid are polar molecules by virtue of their dipole moment. However, formic acid, with a dielectric constant of 59, is also a polar solvent, while acetic acid, with a dielectric constant of 6.1, is not a polar solvent.

Why is this? Would both solvents be adequate for dissolving ionic compounds such as NaCl? Acceptable (talk) 08:05, 26 November 2010 (UTC)[reply]

Solubility of polar (and, by extension, ionic) substances depends almost solely on dielectric constant. Your textbook is pretty much spot-on on this one. The solubility of something like NaCl depends on the ability of the solvent to solvate the ions; that is, to make bonds to the ions which are stronger than the bonds the ions would make to each other (strictly speaking, it is defined thermodynamically: the substance is soluble if the free energy released in the formation of the solvent-ion bonds is greater than the free energy required to break the ion-ion and solvent-solvent bonds). Dielectric constant takes this ability into account, whereas dipole moment does not. --Jayron32 16:32, 26 November 2010 (UTC)[reply]
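One standard way to see why the dielectric constant, rather than the molecular dipole moment, governs ion solvation is the Born model, which estimates the free energy of transferring an ion of charge ze and radius r from vacuum into a continuum of relative permittivity ε_r (an editorial aside, not part of the reply above):

 \Delta G_{\mathrm{solv}} \approx -\frac{N_A z^2 e^2}{8\pi\varepsilon_0 r}\left(1 - \frac{1}{\varepsilon_r}\right)

The factor (1 − 1/ε_r) grows with ε_r, and because solvation free energies must offset very large lattice energies, even the difference between ε_r ≈ 6 (acetic acid) and ε_r ≈ 59 (formic acid) can tip the balance for a salt like NaCl.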
But why does formic acid have a higher dielectric constant than acetic acid when both have a carboxylic acid functional group? They differ only by the fact that one has a methyl group and the other has a hydrogen. Acceptable (talk) 21:24, 26 November 2010 (UTC)[reply]
The methyl group is MUCH more "electron-donating" than the hydrogen atom is. You can think of this in two ways: either as the methyl group donating electrons to the carbon end of the COOH dipole, or as the methyl group acting like a "positive charge sink", to the same effect. The dipole moment is calculated for the bond, while the dielectric constant is measured across the whole substance; so while the dipole moments will be similar (though not identical, due to the methyl effect described above), the dielectric constant will be much lower for acetic acid than for formic acid. You can see the effect even more strongly in very large molecules: take something like stearic acid. The dipole moment on its COOH will not be much smaller than on acetic acid; indeed, beyond propionic acid, longer carbon chains do not markedly affect the dipole moment in that part of the molecule. However, the dielectric constant continues to fall to nearly nil. I am pretty sure that after 4 or 5 carbons, the bulk substance is considered essentially non-polar, despite the acid group. --Jayron32 21:33, 26 November 2010 (UTC)[reply]
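For a sense of the trend described above, approximate room-temperature dielectric constants from standard handbooks run as follows (an editorial addition; values are approximate and temperature-dependent):

 formic acid ≈ 58, acetic acid ≈ 6.2, propionic acid ≈ 3.4, butyric acid ≈ 3.0

consistent with the bulk substance becoming essentially non-polar a few carbons down the series.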

== Is it possible to vomit in your sleep and die by suffocation without being under the influence of anything? ==

In other words, could a perfectly normal, sober person who hasn't had anything to drink or taken any mind-altering substances vomit and thereby asphyxiate during slumber, or would the body's involuntary control measures trigger countermeasures like gagging, rolling over to one side, etc., before the person awoke?

Or perhaps it's not even possible to vomit in one's sleep? Sign me "curious" The Masked Booby (talk) 09:21, 26 November 2010 (UTC)[reply]

I did a few Google searches on it; most of the results appear to be people asking about it on various forums. One answer on this forum referred to GERD, which is perhaps not vomiting in the sense that you mean, but mildly along the same lines. All other mentions that I've come across thus far (in the last few minutes) seem to point to factors inducing the vomiting while asleep: alcohol, obesity, underlying illness, directly related illness, etc. Darigan (talk) 09:54, 26 November 2010 (UTC)[reply]
I think a better question is whether you can do that and stay asleep, and I doubt it. If I vomited I'd wake up for sure. Same goes ×1000 for not being able to breathe. Ariel. (talk) 11:15, 26 November 2010 (UTC)[reply]
I think the best anyone could say is it is unlikely. The human body is a fantastically unpredictable thing, so I would not be surprised to find, if I looked hard enough, isolated reports of one or two people who have died by choking on their own vomit, in their sleep, without any complicating factors. --Jayron32 15:22, 26 November 2010 (UTC)[reply]
One thing to note, of course, if we are assuming there is no illness or other factor, as some of the above answers are discussing: vomiting itself is not common, asleep or not. Nil Einne (talk) 16:50, 26 November 2010 (UTC)[reply]
I recall a report or two of people suffocating to death from their vomit while very drunk and thus probably unconscious. 92.24.178.149 (talk) 22:05, 26 November 2010 (UTC)[reply]

The suffocation reflex from carbon dioxide buildup in the lungs is very painful if you aren't used to it, and very powerful. Vomit is a fluid which is almost always easy for a conscious person to expel. A conscious person in shock may be effectively paralyzed, however, which is why CPR and related forms of first aid instruct the person administering the aid to check to see that the airway is clear, even when the victim is conscious. Ginger Conspiracy (talk) 01:03, 27 November 2010 (UTC)[reply]

== Copper-containing biomolecules ==

Why did copper-containing biomolecules appear later in evolution than their iron-based analogues that do the same job? —Preceding unsigned comment added by Blackmetalgrandad (talk · contribs) 10:57, 26 November 2010 (UTC)[reply]

I can't answer your question, but you should know that there is something like 100,000 times as much iron on earth as there is copper. Ariel. (talk) 11:22, 26 November 2010 (UTC)[reply]
Oops, I meant why did the copper-based ones appear earlier! 144.32.126.11 (talk) 11:33, 26 November 2010 (UTC)[reply]
It could be that hemocyanin would have been more efficient than iron-based oxygen carriers in the early seas, which were oxygen-poor. Also, it allows (or favours) simpler body organs. --Aspro (talk) 16:44, 26 November 2010 (UTC)[reply]

What is its appearance? Colorless gas? (just a guess) --Chemicalinterest (talk) 15:02, 26 November 2010 (UTC)[reply]

Yes, good guess! It sublimes at 4.8 °C under atmospheric pressure. Physchim62 (talk) 15:12, 26 November 2010 (UTC)[reply]

== Iodine from caliche ==

The article does not state how iodine is extracted from the iodide and iodate in caliche. How is it extracted? All of these iodine questions are because of this. --Chemicalinterest (talk) 15:25, 26 November 2010 (UTC)[reply]

It can be reduced with sodium bisulfite, but there are more advanced multi-step processes which seem to be more popular these days.[27] Ginger Conspiracy (talk) 00:54, 27 November 2010 (UTC)[reply]
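For reference, the classic overall reaction for recovering iodine from iodate with bisulfite is (a standard textbook equation, added editorially):

 2 IO3⁻ + 5 HSO3⁻ → I2 + 5 SO4²⁻ + 3 H⁺ + H2O

In practice this is often run in two stages: part of the iodate is first fully reduced to iodide, and the iodide then comproportionates with the remaining iodate (5 I⁻ + IO3⁻ + 6 H⁺ → 3 I2 + 3 H2O).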

== Why kilogram as a base unit rather than gram? ==

Why was the kilogram chosen as a base unit of the International System of Units rather than the gram? The kg is the only base unit with an SI prefix as part of its name. Our article on the Kilogram says, "Since trade and commerce typically involve items significantly more massive than one gram...[the standard became] one thousand times more massive than the gram—the kilogram." Although this seems like common sense, wouldn't the desire for consistency in the new system be more important? Doesn't having one base unit that already contains a prefix in its name cause confusion when multiplying it by another prefix? It seems that the term "kilokilogram" would be necessary to describe a mass of 1000 kg, which is awkward. --Thomprod (talk) 15:26, 26 November 2010 (UTC)[reply]

(1) "Since trade and commerce typically involve items significantly more massive than one gram...[the standard became] one thousand times more massive than the gram—the kilogram."
(2) No.
(3) No.
--Shantavira|feed me 15:45, 26 November 2010 (UTC)[reply]
The OP is making a common mistake: confusing the metric system with the SI. The SI is a subset of the metric system chosen for convenience in the widest possible applications. The other thing about SI is that its "base" units are used to derive the so-called "derived units"; thus the newton, the joule, the pascal and other units are always expressed as ratios or products of things like kilograms, meters, seconds, etc. There are other systems besides the SI which use different metric units; see the cgs system, which uses units like the erg. In summation: the SI is not the metric system. It is a set of units specifically chosen from the metric system to be convenient for use in certain applications. --Jayron32 15:52, 26 November 2010 (UTC)[reply]
See grave (unit). Basically, a grave (from gravity) was going to be the base unit; a gram was an alias for a milligrave, just as a tonne is an alias for 1000 kg. However, this was just before the French Revolution, and grave is also a French title, similar to the German Graf or the English count. Grave as a title has the same etymology as Graf, which is different from that of gravity. After the revolution it was felt that this would be contrary to égalité, so grave as a unit was dropped. BTW, this came up a few months ago. CS Miller (talk) 16:00, 26 November 2010 (UTC)[reply]
Jayron: So the kilogram was chosen over the gram as the base unit of mass in the SI because the former was (and is) more "convenient" in most applications. I understand that. But didn't the kg stick out to the designers of the SI as the only base unit name to include a prefix? If I were designing a new system (based on mathematics and powers of ten and such) with base units and modifying prefixes (somewhat analogous to nouns and adjectives in grammar), why would I build in a point of possible later confusion by including a modifier in the name of a base unit? There is no color named "lightred", for example.
CS: I understand the politics of not using the word "grave". But, why didn't they just come up with a completely different word to represent 1000 grams, if that amount was thought to be more convenient than one gram? --Thomprod (talk) 17:00, 26 November 2010 (UTC)[reply]
Pass. Perhaps it was just an interim decision to use gramme instead of milligrave, and no-one got around to making a new word for the grave/kilogramme. I made a slight mistake in my previous statement: originally the definition was the gramme, 1 cc of water at 0°C, and the grave was the practical physical object to represent it. I don't know why the definition was moved to the kg/grave; perhaps it was to make the definition and the representation the same. CS Miller (talk) 17:28, 26 November 2010 (UTC)[reply]
Ok, now that makes sense. If the unit called "grave" was practical, but now politically incorrect, it would have made more sense to call it something brand new, rather than the hybrid "kilogram". --Thomprod (talk) 18:35, 26 November 2010 (UTC)[reply]

== Force as gradient of potential ==

We can define the quantity U such that dU = −F⋅dr. But dU = ∇U⋅dr. So F⋅dr = −∇U⋅dr. For this to be true in general, F = −∇U.

If a force can be written as the gradient of a scalar field, then this is taken as a definition that F is conservative. But where in the above derivation was F assumed to be conservative? 70.52.44.192 (talk) 20:54, 26 November 2010 (UTC)[reply]

Please do your own homework.
Welcome to the Wikipedia Reference Desk. Your question appears to be a homework question. I apologize if this is a misinterpretation, but it is our aim here not to do people's homework for them, but to merely aid them in doing it themselves. Letting someone else do your homework does not help you learn nearly as much as doing it yourself. Please attempt to solve the problem or answer the question yourself first. If you need help with a specific part of your homework, feel free to tell us where you are stuck and ask for help. If you need help grasping the concept of a problem, by all means let us know.
Having said that, you might benefit from reviewing force, gradient, and reading Newton's laws of motion#Variable-mass systems carefully. Ginger Conspiracy (talk) 23:52, 26 November 2010 (UTC)[reply]
No, it isn't a homework question, although looking back it does seem that way. The reason I spelled out the F = −∇U derivation was so that someone could point out more clearly where the assumption that F is conservative was made.
I looked at the articles you mentioned. I don't see what variable-mass systems have to do with conservative forces, and the gradient article doesn't mention potential energy. The force article starts by saying "a potential scalar field is defined as that field whose gradient is equal and opposite to the force produced at every point". But why can't such a potential be defined for any force? You might say: well, U isn't a function of just r for all forces, only conservative ones. But if dU is defined as above, then it does seem, incorrectly, that all forces can be constructed to be the gradient of a scalar field. 70.52.44.192 (talk) 01:46, 27 November 2010 (UTC)[reply]
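The gap in the derivation, for anyone puzzling over it (an editorial note, not a reply from the thread): writing dU = −F⋅dr only defines a single-valued function U(r) if the corresponding line integral

 U(\mathbf{r}) = -\int_{\mathbf{r}_0}^{\mathbf{r}} \mathbf{F}\cdot \mathrm{d}\mathbf{r}'

gives the same value for every path from r₀ to r, equivalently ∮ F⋅dr = 0 around every closed loop. That path-independence is precisely the definition of a conservative force, and it is assumed the moment U is treated as a function of position whose gradient can be taken.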

= November 27 =