Wikipedia:Reference desk/Science: Difference between revisions

From Wikipedia, the free encyclopedia


::[https://www.youtube.com/watch?v=jWOxnpmw7Dk Small amounts are fine.] [[User:ScienceApe|ScienceApe]] ([[User talk:ScienceApe|talk]]) 16:56, 24 January 2016 (UTC)

:Thanks. So you're all saying a smooth steel ball bearing with no small bumps or surface defects could be microwaved without sparking? [[Special:Contributions/75.75.42.89|75.75.42.89]] ([[User talk:75.75.42.89|talk]]) 22:06, 24 January 2016 (UTC)


= January 24 =

Revision as of 22:06, 24 January 2016

Welcome to the science section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


January 17

Is one of the following options right?

I found this question on Facebook (chemistry group), but I'm not sure if one of the given options is right.

"salt" in chemistry is:
a) compounds that have ionic bonding
b) compounds that consist of elements of the halogen family, no matter what the type of bonding is (ionic or covalent)
c) compounds that have the Cl element
d) compounds that have no metals
According to our article: "In chemistry, a salt is an ionic compound that results from the neutralization reaction of an acid and a base." But I don't see that option here. Are the options wrong? 92.249.70.153 (talk) 05:08, 17 January 2016 (UTC)[reply]
a. They are compounds with ionic bonding. Not all salts result from the neutralization reaction of an acid and a base. For example, how would you explain the formation of tetrabutylammonium bromide? Yanping Nora Soong (talk) 05:30, 17 January 2016 (UTC)[reply]
Thanks, so if I understand you well the mistake is in the article here. Am I right? 92.249.70.153 (talk) 06:35, 17 January 2016 (UTC)[reply]
No, our article (Salt (chemistry)) provides a definition that is generally considered correct.
Most introductory chemistry textbooks will use a definition very similar to the one you find in our article. There are corner cases and subtleties of definition. Most importantly, if you study more chemistry, you will learn that the definition of acid and base is trickier than it first seems. In introductory chemistry, you will focus on the standard definitions and standard chemical reactions; but as you dive deeper, each successive complication necessitates a refinement of many definitions.
In some sense, we call this style of formal education a "lie-to-children," but that's not entirely fair. If you want a complete and total definition of the word "salt" in chemistry, you'll have to read hundreds of books and thousands of research papers. If you want the definition in one sentence, our article (Salt (chemistry)) does a great job introducing the concept in its lede.
If you strongly feel that the opening definition in that article is incorrect, then:
  • Find multiple reliable, encyclopedic sources to back you up. An internet-quiz on a social forum is not really a reliable source.
  • Engage with the regular contributors at Talk:Salt (chemistry) and discuss your proposal.
  • Reach consensus and make a change.
In this case, I do not recommend making the change first, because most educated chemists and scientists will agree that our article's lede definition is generally correct.
Nimur (talk) 17:53, 17 January 2016 (UTC)[reply]
The generic form of the given definition is:
[H+][B−] + [A+][OH−] → [A+][B−] + HOH
so it's trivially easy to see what one would react with what to give any arbitrary cation/anion result. Just because you happen to know how to make the A+ salt by some route other than neutralization doesn't change the fact that AB can be made starting with AOH. DMacks (talk) 20:52, 17 January 2016 (UTC)[reply]
The Wikipedia definition seems a little weird to me, but the text from ionic compound sheds some light on it: "Ionic compounds containing hydrogen ions (H+) are classified as acids, and those containing basic ions hydroxide (OH−) or oxide (O2−) are classified as bases. Ionic compounds without these ions are also known as salts and can be formed by acid-base reactions." So the definition given is kind of a roundabout way of saying that if you have an ionic compound and you get rid of any H+ and OH− present by reacting them, you get a salt. But the thing is, a salt can result from reacting an acid and a base, but it doesn't have to, if you don't neutralize all equivalents of H+ or OH−. (Also I suppose there must be some cute example where you have an OH− in a cage of carbon or something so it won't directly react with the acid?) I don't see an obvious reason not to adapt the ionic compound definition and say that a salt is an ionic compound that doesn't contain H+ or OH−. This is really semantic, not a matter of true acid or base nature as per Lewis acid, given that AFAIK lithium tetrachloroaluminate is a salt, even though it is hazardous as an acid that will readily react with water. [1] I would assume that mixing lithium hydroxide and hydrogen tetrachloroaluminate would not produce much of a yield of lithium tetrachloroaluminate + water, since the reverse reaction occurs so readily, so its classification under the definition currently used in the salt article seems very iffy. Comments?? Wnt (talk) 14:35, 18 January 2016 (UTC)[reply]
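The classification rule quoted above from the ionic compound article can be sketched in a few lines of Python. The representation of a compound as a set of ion-formula strings, and the function name, are purely illustrative:

```python
# Sketch of the rule quoted from the ionic compound article:
# ionic compounds with H+ are acids, those with OH- or O2- are bases,
# and the rest are salts. Representation is illustrative only.

def classify_ionic(ions):
    """Classify an ionic compound given the set of its ion formulas."""
    if "H+" in ions:
        return "acid"
    if "OH-" in ions or "O2-" in ions:
        return "base"
    return "salt"

print(classify_ionic({"Na+", "Cl-"}))     # NaCl
print(classify_ionic({"H+", "Cl-"}))      # HCl
print(classify_ionic({"Na+", "OH-"}))     # NaOH
print(classify_ionic({"Li+", "AlCl4-"}))  # lithium tetrachloroaluminate
```

Under this rule lithium tetrachloroaluminate comes out as a salt, consistent with the labelling discussed above, even though its Lewis-acid behaviour is not captured at all.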

What are compounds formed between two metals called?

According to the "Chemistry Essentials for Dummies" book I'm reading now (p. 72), ionic compounds occur between a metal and a nonmetal, while covalent compounds occur between two nonmetals. So my question is: what is the compound that occurs between two metals called? 92.249.70.153 (talk) 05:14, 17 January 2016 (UTC)[reply]

Update: I found the answer right there. It's called metallic bonding. 92.249.70.153 (talk) 06:36, 17 January 2016 (UTC)[reply]
Yup. Metallic bonding is one of the major types of chemical bonds. DMacks (talk) 11:43, 17 January 2016 (UTC)[reply]
To be strictly correct here, we're confusing two different terms. A chemical compound is different from a chemical bond. A compound is a bulk material, while a bond is a type of force of attraction between particles. For example, something like sodium sulfate is usually classified as an ionic compound, but it has both ionic bonding (between the sodium ion and the sulfate ion) and covalent bonding (between the sulfur and oxygen atoms within the sulfate polyatomic ion). Metallic bonding, by its very nature, does not really fit into the "compound" framework for many reasons, and we don't often use the term "metallic compound" in the way we use terms like "ionic compound" or "molecular compound". We usually use the term "pure metal" for metallic bonding where all the atoms are the same, and alloy for a material with multiple metallic elements. It has to do with the nature of metallic bonding, the so-called sea of electrons model. Ultimately, alloys exist in the fuzzy boundary between compounds, homogeneous mixtures, solutions, etc. It is far less important, as a student of chemistry, whether one classifies an alloy as a compound or a mixture, and far more important that one understands what is going on at the atomic level. --Jayron32 02:10, 18 January 2016 (UTC)[reply]

Science(Physics?) Question

I have limited math skills and zero physics skills. My question is complicated. To begin: light has no mass, and as objects approach light speed they become more massive (E = mc squared?). However, gravity bends light; gravity has mass, therefore light must have mass to be bent by gravity. Or am I going astray in my understanding of light, mass and gravity? I am 68 and not in school, but interested in astronomy (mostly self-taught). Thank you for your help. ------ Dennis H

There are several ways to understand this. One complicated one is that gravity bends the universe, so light, traveling in a straight line, ends up traveling along a curved path. I personally find it much easier to understand that gravity actually affects ENERGY (for example, an object moving very fast has lots of energy, so it will be affected by gravity more), and since light has energy, obviously it would bend. (A complication: gravity changes the speed of things, but light cannot change speed, so how is it able to be affected?) Ariel. (talk) 06:39, 17 January 2016 (UTC)[reply]
I think when we say light has no mass, that means no rest mass, but there is also a type of "mass" that's due to relative motion. StuRat (talk) 07:03, 17 January 2016 (UTC)[reply]
See Gravitational lens for a fairly non-technical explanation, and Two-body problem in general relativity for something a bit more advanced. On the question of whether light has mass, see Mass in special relativity. Tevildo (talk) 11:58, 17 January 2016 (UTC)[reply]
You've made a good observation. General relativity describes gravity as a warping of spacetime, and consequently, it predicts gravity will affect even things with no rest mass, like photons. This is a significant difference from Newtonian gravity, and observations of light from distant stars being bent by the Sun's gravity were a major piece of evidence that convinced many scientists of the accuracy of general relativity. These videos by PBS Space Time are a really good primer on relativity, and I highly recommend them. --71.119.131.184 (talk) 06:27, 18 January 2016 (UTC)[reply]
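The rest-mass point made in this thread can be put concretely with the relativistic energy-momentum relation E² = (pc)² + (mc²)², covered by the Mass in special relativity article linked above. A small Python sketch (constants rounded; names are my own, not from any article) showing that a photon with zero rest mass still carries energy through its momentum:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def total_energy(p, m):
    """Relativistic energy-momentum relation: E^2 = (p c)^2 + (m c^2)^2."""
    return math.hypot(p * c, m * c**2)

# A photon (m = 0) still has energy, all of it from momentum: E = p c.
p_photon = 1.0e-27            # kg m/s, roughly a visible-light photon
print(total_energy(p_photon, 0.0))

# An electron at rest (p = 0) has only rest energy: E = m c^2.
m_e = 9.109e-31               # electron mass, kg
print(total_energy(0.0, m_e))
```

Since gravity in general relativity couples to energy and momentum rather than to rest mass alone, the first case is enough for light to be deflected.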

1966 Palomares B-52 crash

Did those Mk28-type hydrogen bombs in the 1966 Palomares B-52 crash contain non-nuclear explosives because it was a non-combat mission? Or was there some other reason the nukes weren't armed and didn't explode? Our article doesn't seem to clarify that. --93.174.25.12 (talk) 10:40, 17 January 2016 (UTC)[reply]

All nuclear bombs contain a conventional, non-nuclear explosive as well as the fissile nuclear core. That chemical explosive compresses the core, and that starts the fission explosion. When the article says "the non-nuclear explosives in two of the weapons detonated upon impact with the ground", it means those explosives detonated, but they didn't set off the nuclear cores they were attached to. -- Finlay McWalterTalk 10:53, 17 January 2016 (UTC)[reply]
Ball bearing safety system in a British nuclear weapon
The article doesn't say why the chemical explosives didn't trigger the nuclear physics package. Presumably they have some safety mechanism to prevent inadvertent nuclear detonation - but I can't find out specifics of what that might be for this bomb variant in either the B28 nuclear bomb or Python (nuclear primary) articles. I know some nuclear bombs keep an inert material (in some cases steel ball-bearings) in the void inside the core - these had to be removed to "arm" the bomb, presumably in-flight during an actual nuclear bombing raid. This is the mechanism used in the British Violet Club and related bombs; presumably US weapons had some analogous system. -- Finlay McWalterTalk 11:16, 17 January 2016 (UTC)[reply]
Two things. One, every nuclear bomb contains non-nuclear explosives. The chemical explosives are what assemble the fission core into a critical mass, when they are detonated. See nuclear weapon design. Two, nukes aren't armed unless you're planning to set them off. This accident demonstrates why. Most nuclear weapons contain multiple safety devices to keep them from going off unless you're quite sure you want them to. For instance, nuclear missile warheads include devices that only arm the warhead when they detect the acceleration from being launched. --71.119.131.184 (talk) 11:41, 17 January 2016 (UTC)[reply]
In some designs it's important that the pressure wave from the chemical explosives is symmetrical, otherwise it won't compress the core enough to make it critical. If an impact accidentally sets them off, they'll probably fire first on one side of the sphere, whereas in an intentional detonation they're fired electrically, all at once. I have no expertise in the matter, but it makes sense that this could prevent the nuclear explosion from happening. --76.69.45.64 (talk) 19:43, 17 January 2016 (UTC)[reply]
This is why gun-type fission weapons are inherently far more dangerous than implosion designs - they only have a single chunk of chemical explosive (as opposed to between two and dozens, depending on the specific design, for an implosion-type weapon), and if that goes off, bye bye birdie. Whoop whoop pull up Bitching Betty | Averted crashes 23:55, 23 January 2016 (UTC)[reply]
The deal is that with conventional explosives, the materials are inherently unstable - always on the verge of an explosion. Whack a bomb the wrong way and KABOOM! But with nuclear weapons, it requires considerable finesse to bring the nuclear material together fast enough to get them to critical mass without the increasing temperatures as you approach criticality blowing the bomb apart before it can properly explode. This failure is called a fizzle. Almost any fault in the way the bomb goes off can cause this - so an accidental full-on nuclear explosion due to a damaged bomb is highly unlikely.
That said, a fizzle can be a very dangerous outcome in itself. Although all of that explosive power won't be unleashed, the conventional explosives and the heat of fizzle can cause horribly radioactive material to be spread over a large area resulting in contamination that would be a serious problem to clean up.
But even for a fizzle to happen, the conventional explosives have to explode - and that is no more likely than in a conventional bomb. Probably less so because of the extra care and attention that's paid to the safety of the design and construction of nuclear devices. Conventional explosive bombs with faulty fuses rarely explode spontaneously - even after 50 or more years buried in soil or rubble.
SteveBaker (talk) 20:34, 17 January 2016 (UTC)[reply]
Nitpick: your statement isn't true for all conventional explosives. Some are designed to be very stable. Many plastic explosives can be lit on fire and not explode. --71.119.131.184 (talk) 06:14, 18 January 2016 (UTC)[reply]
Good point - I guess I should have said "the kinds of explosives that 'just blow up' as a result of an accident are inherently unstable"...but that would be something of a tautology! SteveBaker (talk) 20:32, 19 January 2016 (UTC)[reply]
In the case of the Palomares incident, the plutonium from two of the thermonuclear weapons' "primaries," or fission stages, was scattered over a two-square-kilometer area near the fishing village of Palomares after those weapons' conventional explosives detonated (according to our article on the Palomares B-52 crash). No nuclear yield resulted because the conventional explosives didn't go off with the correct timing to compress the plutonium in the primaries to supercriticality - which is needed to release large amounts of nuclear energy from a fissile material.
The section of our article on Nuclear Weapon Design dealing with Warhead Design Safety explains why the thermonuclear weapons accidentally dropped at Palomares didn't detonate with a nuclear yield. In short, they're made not to go off with a nuclear yield unless all the conventional explosive "lenses" (in modern US designs, very often just two are used) go off at exactly the same time. Any other detonation of the conventional explosives should just scatter the fissile material around without a nuclear yield.
The standard which has been defined for nuclear weapons safety in the US nuclear arsenal is "one-point safety," defined by the Department of Defense Nuclear Weapon System Safety Program Manual as
  • (1) The probability of achieving a nuclear yield greater than 4 pounds trinitrotoluene (TNT) equivalent will not exceed 1 in 10 to the 6th power, in the event of a detonation initiated at any one point in the high explosive (HE) system.
  • (2) One-point safety will be inherent in the nuclear system design and will be obtained without the use of a nuclear safing device.
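As a rough illustration of what that per-event ceiling implies in aggregate (this arithmetic is mine, not part of the DoD manual, and assumes independent events):

```python
# The one-point safety standard caps the probability of a nuclear yield
# (> 4 lb TNT equivalent) at 1e-6 per one-point detonation of the HE
# system. Compounding that ceiling over n independent such events:

P_YIELD = 1e-6  # per-event ceiling from the DoD definition

def p_at_least_one(n):
    """P(at least one nuclear yield in n one-point detonations)."""
    return 1.0 - (1.0 - P_YIELD) ** n

for n in (1, 100, 10_000):
    print(n, p_at_least_one(n))
```

Even ten thousand such accidents would keep the cumulative chance of any nuclear yield below about one percent, which is the sense in which the Palomares bombs were "safe" despite their conventional explosives detonating.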
By 1966, initial one-point safety problems with the Mk28 weapons had long been resolved, and the early models of that weapon had been retired starting in 1961. The bombs on the B-52 aircraft that crashed near Palomares, Spain were almost certainly one-point safe by inherent design, according to the Nuclear Weapons Archive's "Complete List of All U.S. Nuclear Weapons".
Hope this answers your question. loupgarous (talk) 22:19, 22 January 2016 (UTC)[reply]

Bigger microclimates

In areas with generally uniform topography (whether flat or consistently hilly), what factors can produce climatological anomalies that are hundreds of square miles in area? Go to File:2012 USDA Plant Hardiness Zone Map (USA).jpg and look at Ohio; there's a big light-blue blob just northeast of Columbus, for reasons that I can't understand. The nearby city of Mansfield is large enough that it generally appears on statewide weather maps (the ones showing current or predicted temperatures for the state's larger cities), and it's routinely the coldest of any such city, despite lying in a region that mixes flat farmland with low-relief wooded hills no closer to major waterbodies than the surrounding terrain. The state's other light-blue areas are part of large zones or are the effects of smaller microclimates (see Milligan, Ohio for the area southeast of Columbus), with nothing comparable to the Mansfield area. Nyttend (talk) 15:35, 17 January 2016 (UTC)[reply]

A topographic map (e.g., here) shows that this area is a few hundred feet (~100 m) higher than the surrounding region. Not exactly the Cascade Range, but enough relief to have a modest climate influence. Shock Brigade Harvester Boris (talk) 15:55, 17 January 2016 (UTC)[reply]
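As a back-of-envelope check of how much ~100 m of extra elevation could matter, the average environmental lapse rate (about 6.5 °C per kilometre) gives a first-order estimate; this ignores cold-air drainage, land cover, and every other local effect, so treat it only as a sanity check:

```python
# First-order estimate: cooling from extra elevation at the average
# environmental lapse rate (~6.5 degrees C per kilometre of altitude).

LAPSE_RATE_C_PER_M = 6.5 / 1000.0

def cooling_from_elevation(delta_m):
    """Approximate temperature drop in deg C for an elevation gain in metres."""
    return delta_m * LAPSE_RATE_C_PER_M

d_c = cooling_from_elevation(100.0)
print(f"{d_c:.2f} C (~{d_c * 9.0 / 5.0:.1f} F)")
```

A drop on the order of 1 °F from elevation alone is small compared with the 10 °F width of a USDA half-zone, which fits the suggestion below that the mapped anomaly may be partly a binning artifact.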
That blue blob is the Tibet of Ohio. 1400-151x feet above sea level! (for comparison the Empire State Building antenna is 1,504 feet above sea level) [2] Sagittarian Milky Way (talk) 16:11, 17 January 2016 (UTC)[reply]
But the region that includes the state's high point, northwest of Columbus a short distance, has a climate similar to the surrounding region; the local ski resort (see File:Mad River Mountain and Valley Hi.jpg) exists because of snow-making machines, not because the area gets additional cold weather. And going to Boris' map — you also don't have a colder zone in Belmont County and areas north of there in the far east, which is the state's largest area of 1300+ feet, even when you get back from the river and its potential warmer microclimate. Nyttend (talk) 16:22, 17 January 2016 (UTC)[reply]
Just a guess but those two regions look steeper than the blue blob (especially the lowest of all three), causing faster drainage of cold air? Also, the highest point in Ohio is in a city park 2 miles from downtown (heat island effect?), and only 29-40 feet higher. Sagittarian Milky Way (talk) 16:45, 17 January 2016 (UTC)[reply]
I strongly doubt that it's a heat-island effect; look at the location, 40°22′13″N 83°43′12″W / 40.37028°N 83.72000°W / 40.37028; -83.72000, and it's easy to find other 1500+ spots out in the township, e.g. 40°22′21″N 83°39′24″W / 40.37250°N 83.65667°W / 40.37250; -83.65667 near the spot marked "New Jerusalem" on the USGS topo map. Meanwhile, only the highest spots in Mansfield are above 1300 feet, and since Mansfield is a good deal larger, it's the more likely of the two to generate a heat island effect, although I doubt the effect is large either way; the final sentence of Urban heat island#Causes says that a 1-million-person city may create a 2-5ºF difference in mean annual temperature, and the two cities are 13K and 47K respectively. Nyttend (talk) 01:17, 18 January 2016 (UTC)[reply]
Just a few thoughts: binning continuous data into discrete chunks can always produce artifacts, e.g. discretization errors. The USDA hardiness zones for 2012 are computed via mean annual minimum temp, 1976-2005. Such temperature information at that resolution is the effect of downscaling, which involves all kinds of mathematical voodoo (which usually works well, but should not be universally blindly trusted, as that can lead to false precision errors in the gridded data).
Now, the good folks at USDA are clever, and I'm not saying the whole thing is an artifact. It probably is a bit cooler there. But perhaps the nature of the data product, combined with the high elevation, may make this anomaly more apparent on the map than it is in reality. I would not be surprised if 75% of the blue region you mention is only 1 F lower in mean annual min than a wide swath of the surrounding green. Finally, you may get a bit more out of looking at older hardiness maps. As you probably know, these zones are changing, and this previous version does not have that feature. Here [3] you can see how they have changed, and also note the weird banding structure in the diffs (I have no idea why those bands show up, but it is almost certainly not anomalous, and illustrates how these things often defy simple intuition - climate science is hard stuff!) SemanticMantis (talk) 15:52, 18 January 2016 (UTC)[reply]

universal basic income

Why do most variations of universal basic income assume that everyone will suddenly become utopians overnight instead of remaining feckless, lazy addicts? The human mind can't take endless free time; a strong work ethic only comes about through the necessity of basic survival. — Preceding unsigned comment added by DannyBIGjohnny (talkcontribs) 18:04, 17 January 2016 (UTC)[reply]

This question, as phrased, does not appear to be a request for scientific reference material. Would you like to rephrase it, or do you need help finding an internet discussion forum on that topic?
Nimur (talk) 18:19, 17 January 2016 (UTC)[reply]


There are a lot of assumptions in your question:
  • "The human mind can't take endless free time" - Firstly, how do you know that? People retire from work all the time - and remain perfectly sane despite having "endless free time". Secondly, what makes you think that people without work have "free time"? Perhaps they are taking care of children or a sick relative...maybe they are using their time to invent The Next Great Thing?
  • "a strong work ethic only comes about through necessity for basic survival" - Again, how do you know that? Plenty of people work harder than necessary for "basic survival" in order to have a better-than-basic life.
  • "remaining feckless, lazy addicts" - Why do you think people who don't get that universal basic income are "feckless", "lazy" or "addicts"? That is also far from true in every case.
To answer the part of the question that seems to matter, read Basic income pilots which lists the outcomes of Basic Income experiments around the world. The three that were tried out in the USA had really good outcomes. The early studies found only 17% less paid work being done among women, 7% among men. The gender difference probably implies that women found themselves able to stay home and look after their children...so "feckless" certainly doesn't seem to have been a significant result. They found that the money was not squandered on drugs and luxury goods...so much for "addicts". There was an increase in school attendance. Another study reported reduced behavioral and emotional disorders among the children, an improved relationship between parents and their children, and a reduction in parental alcohol consumption. Again, contradicting your expectations.
I doubt many people think that a universal basic income would result in a "utopia", but it's fairly clear that we would expect a significant number of benefits to accrue to society as a whole. SteveBaker (talk) 20:17, 17 January 2016 (UTC)[reply]
Social benefits, although not exactly the same thing, are also a testing scenario for the idea. Countries with them, including those with generous cash-in-hand social benefits, did not succumb to all the forms of vice. There is plenty of empirical hard data, beyond ideological worldviews, with which to analyze the effect of introducing a basic income scheme. Denidi (talk) 22:03, 17 January 2016 (UTC)[reply]
In case you are not aware, you have posted this question to a place that exists almost solely because of motivated people who are volunteering their time to a cause they believe in. You are probably less likely to run into people here who believe the "default" human condition is "feckless, lazy addicts". Vespine (talk) 23:21, 17 January 2016 (UTC)[reply]
Although to be fair, not everyone who contributes here is unemployed and using Wikipedia to fill their spare time. It would be interesting to discover whether Wikipedia contributors are either more or less often employed than the general public since that would shed light on some of the issues in question here. SteveBaker (talk) 16:55, 19 January 2016 (UTC)[reply]
  • This problem has been thought about seriously by economists, psychologists, sociologists, etc. Do some reading on the Post-scarcity economy for more information. --Jayron32 01:54, 18 January 2016 (UTC)[reply]
    Noting, of course, that the topic of post-scarcity economics is an interesting one—but it's definitely a much more extreme condition than a simple universal basic income. TenOfAllTrades(talk) 02:09, 18 January 2016 (UTC)[reply]
It's dangerous to interpret works of fiction as having any kind of predictive power. If such a future were to come about without drama or intrigue - it would not be interesting as the scenario for a science fiction novel/movie. Hence authors tend to look on the dark side. SteveBaker (talk) 16:55, 19 January 2016 (UTC)[reply]
I made no such dangerous interpretation; I just assumed that quoting related works of fiction was the best response to the OP's invitation to speculate. He'd already been told to get himself to a chat forum. There's also Charles Alan Murray's (of Human Accomplishment fame) short story "The Social Contract Revisited" on the topic of paying every adult $10,000 per annum as a replacement for all other government subsidies. μηδείς (talk) 22:11, 20 January 2016 (UTC)[reply]
You need money to make money, and if you don't have enough to begin with you might not be able to work your way up. Especially if a means-tested welfare system means working more doesn't actually result in a net increase in wealth. Those problems shouldn't apply in the case of a universal basic income, and the advocates of such would argue that some/most examples of people (apparently) "remaining feckless, lazy addicts" are actually the result of the first two problems mentioned. 62.172.108.24 (talk) 15:49, 18 January 2016 (UTC)[reply]
The OP's question has some silly assumptions, but not as many as a true, monetary UBI, whose academic proponents are basically innumerate and innocent of any understanding of economics. (There have been at least three academic journal issues I know of devoted to debating it). It cannot work for the purposes they intend (without "utopians" - who could make, pretend or play-act that any crazy system whatsoever worked). But it would work quite well toward the aim of some of its (non-innumerate) wealthy proponents who have some grasp of economics: destruction of well-functioning "welfare states", class polarization, resurgent reactionary politics after it collapsed or was debased. John Z (talk) 01:50, 21 January 2016 (UTC)[reply]

How can black holes form?

I know this has probably been asked before or is in a wikipedia article but I can't find the answer.

To an observer it takes an infinitely long time for matter to pass the event horizon of a black hole. So how does the black hole form in a way we can be aware of it or its effects? If it takes an infinite amount of time for matter to get there, how can it 'exist' to us? I've read about the time dilation effect, and I think I understand the basics but how can two black holes collide to form a supermassive black hole when from our perspective that would take an infinite amount of time?

I hope my question makes sense! Thanks 95.146.213.181 (talk) 19:56, 17 January 2016 (UTC)[reply]

For an external observer they never collapse completely, staying in a sort of frozen state with the radius close to the gravitational radius. Ruslik_Zero 20:14, 17 January 2016 (UTC)[reply]
Exactly. The infinities don't come about until the event horizon has formed - and once it has, it's meaningless to talk about what's happening "inside" while still considering events from the perspective of an outside observer. SteveBaker (talk) 20:21, 17 January 2016 (UTC)[reply]
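The "frozen state" described above comes from gravitational time dilation in the Schwarzschild geometry: a static clock at radius r runs slow, relative to a distant observer, by the factor √(1 − rs/r), which goes to zero at the horizon radius rs. A small illustrative Python sketch (constants rounded, example mass arbitrary):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0    # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-rotating mass: rs = 2 G M / c^2."""
    return 2.0 * G * mass_kg / c**2

def time_dilation(r, rs):
    """Rate of a static clock at radius r relative to a distant
    observer in the Schwarzschild metric: sqrt(1 - rs/r)."""
    return math.sqrt(1.0 - rs / r)

rs = schwarzschild_radius(10 * 1.989e30)   # a 10-solar-mass black hole
for r in (2.0 * rs, 1.1 * rs, 1.001 * rs):
    print(f"r = {r / rs:.3f} rs -> clock rate {time_dilation(r, rs):.4f}")
```

The factor collapses toward zero only in the last fraction of the infall, which is why the distant observer sees the final approach, not the whole collapse, stretched out.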

OP here, thanks for the quick replies. I'm not concerned about what's happening inside the event horizon (or do I need to understand that before I understand what happens outside it?), I still don't understand how they can form from our perspective as outside observers. Could you give some links for a layman to understand please? I've read the wikipedia article on black holes and under the growth sub-heading it states 'Once a black hole has formed, it can continue to grow by absorbing additional matter'. How can it do that, if it takes an infinite amount of time?

I'm sorry if I'm not explaining my question clearly (and I realise that much greater minds than mine, or even the ref desk's, know how black holes form). To put it another way: as the mass of a 'proto-black hole' approaches the density of a black hole, to us (and the rest of the universe) matter moves into it at slower and slower speeds. The bit I don't understand is how, from our perspective, matter moving into the proto-black hole can ever get there to form a black hole.

Thanks 95.146.213.181 (talk) 20:53, 17 January 2016 (UTC)[reply]

Leonard Susskind explains this by using the uncertainty principle to show that from outside we cannot tell whether a particle falling into a black hole is still outside the event horizon or not. As something approaches the event horizon, a photon or particle used to probe its position from outside has to become more and more energetic, until the energy required is more than the mass of the infaller, or even of the black hole, and the probe destroys what we are trying to observe. Graeme Bartlett (talk) 21:22, 17 January 2016 (UTC)[reply]
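The scaling behind that argument can be sketched numerically, using the usual order-of-magnitude uncertainty estimate E ~ ħc/Δx for the probe photon (the constants and the electron example are my illustration, not Susskind's):

```python
import math

hbar = 1.055e-34     # reduced Planck constant, J s
c = 299_792_458.0    # speed of light, m/s

def probe_energy(delta_x):
    """Order-of-magnitude photon energy needed to resolve a position
    down to delta_x, from the uncertainty principle: E ~ hbar c / delta_x."""
    return hbar * c / delta_x

# Resolving an electron's position to its reduced Compton wavelength
# already takes a probe photon as energetic as the electron itself.
m_e = 9.109e-31                 # electron mass, kg
compton = hbar / (m_e * c)      # reduced Compton wavelength, ~3.9e-13 m
print(probe_energy(compton) / (m_e * c**2))
```

The ratio is 1 by construction: squeeze Δx any further and the probe outweighs the thing being probed, which is the point where the measurement destroys its target.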
Let me try to give a few different perspectives on this...
  • The event horizon is a surface in spacetime. Spacetime doesn't change, it just is. Event horizons don't form, they just are.
  • It's physically meaningless to say that an event horizon forms at a particular time "relative to an outside observer" because of the relativity of simultaneity. You can draw surfaces in spacetime and decree that they represent the "now" and that everything on the surface happens at the same time, but there's more than one way to do it and they're all meaningless. When people say that the event horizon hasn't formed yet, they're probably thinking of the "now" as a constant t in Schwarzschild-like coordinates. If you instead use Eddington–Finkelstein-like coordinates, then the event horizon does form at some particular time "for you".
  • Independently of whether the event horizon "exists now", it is true that you will never see anything cross the event horizon, because by definition it's the boundary of the region of spacetime you'll never see. But it's rather solipsistic to say that something never happens just because you never see it happen. In an exponentially expanding universe like the one we seem to inhabit, there is a cosmological horizon and we will never see anything beyond it, sort of like a black hole turned inside out. If nothing outside that horizon happens, then the universe is a perfect sphere with us at the exact center. Even in special relativity, if you accelerate uniformly forever, there is an event horizon behind you (called a Rindler horizon) and you will never see what happens beyond it, but you don't have the power to prevent that half of the universe from existing just by accelerating away from it. These event horizons behave just like black hole horizons, even emitting Hawking radiation (in the case of uniform acceleration it's called Unruh radiation).
  • Classical systems can only asymptotically approach a ground state (in this case a perfectly spherical hole with no hair), but quantum systems emit a final photon/graviton/whatever and reach the ground state at a finite time. For black holes formed from collapsing stars, I think the time from seeing an "almost collapsed" star to seeing the final photon/graviton is a small fraction of a second, though I really should have a source for that. After that, you have a black hole as surely as you have a hydrogen atom in the ground state. (This is probably related to Susskind's argument in Graeme Bartlett's reply.)
  • Quantum black holes eventually evaporate. In Hawking's original semiclassical treatment, you see the hole finish forming at the same time as you see it finish evaporating (not because they happen at the same time, but because they happen on the same lightlike surface, and the light all stacks up and reaches you at the same time). I'm not sure that picture is accurate, though, in part because of the previous bullet point. -- BenRG (talk) 21:42, 17 January 2016 (UTC)[reply]
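As an editor's aside: the "frozen at the horizon" picture discussed in the replies above can be made quantitative with the Schwarzschild time-dilation factor sqrt(1 − r_s/r) for a static observer at radius r. A minimal sketch in Python (my own illustration; the sample radii are arbitrary):

```python
import math

def time_dilation_factor(r_over_rs):
    """Ratio of proper time to far-away coordinate time for a static
    observer at radius r = r_over_rs * r_s outside a Schwarzschild hole.
    Goes to zero as the observer approaches the horizon (r_over_rs -> 1)."""
    return math.sqrt(1.0 - 1.0 / r_over_rs)

# Clocks appear to run slower and slower (to a distant observer)
# as they hover closer to the horizon:
for r in [10.0, 2.0, 1.1, 1.01, 1.001]:
    print(f"r = {r:6.3f} r_s : dtau/dt = {time_dilation_factor(r):.4f}")
```

This only covers hovering observers in classical general relativity; as BenRG notes, the quantum story (final photon, evaporation) changes the picture.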
These are good questions. One thing I'd like to point out is we never "see" beyond the event horizon. We can't, with currently accepted physics, meaningfully say anything about what happens beyond the event horizon. We detect black holes by detecting their effects on things outside their event horizons, such as their gravitational effects on other objects. A singularity not "hidden" by an event horizon would be a naked singularity, which is a topic of discussion in theoretical physics, with debate over whether such a thing could actually exist. Also I'll recommend these two videos by PBS Space Time which discuss black holes. You'll need some background knowledge (and there are links to some other videos that may help), but they're intended to be accessible to laypeople. --71.119.131.184 (talk) 06:12, 18 January 2016 (UTC)[reply]

OP here, thanks for all the responses. I still haven't wrapped my head around things, I think I need to read up a lot more to understand your answers! Your answers have been very much appreciated :-) Mike 95.146.213.181 (talk) 18:10, 18 January 2016 (UTC)[reply]

  • This should prolly be hatted, gin we got an article. μηδείς (talk) 23:17, 19 January 2016 (UTC)[reply]
I'm not sure what hatted means, as in close the question? I don't see any point in that as I have my answers now and those answers may help others. And gin had nothing to do with my question :-) 95.148.212.178 (talk) 22:26, 20 January 2016 (UTC)[reply]

How much has human DNA changed over the centuries

Genetically, how different are we from our ancestors of 10,000 years ago? We would look different due to the diet and environment. However, were we DNA-wise essentially the same as today? I suppose if we go 60,000 years back in time, as we left Africa, we would not see Caucasians or Asians, but what else is new? --Denidi (talk) 22:45, 17 January 2016 (UTC)[reply]

Based on the size of the human genome and its mutation rate, one expects roughly ~1 mutation in coding regions and ~60 mutations in non-coding regions (including regulatory sequences) of the human genome per generation. That mutation rate will accumulate noticeable variation over thousands of years. Of course, mutations that prove detrimental will be selected against, so the true number of accumulated mutations may be somewhat lower than a simple count of generations would suggest. Dragons flight (talk) 00:07, 18 January 2016 (UTC)[reply]
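A back-of-the-envelope check on these per-generation figures, ignoring selection (the 25-year generation time is my assumption, not stated above):

```python
# Rough accumulation of new mutations along one lineage, ignoring selection.
CODING_PER_GEN = 1        # ~1 coding mutation per generation (figure from above)
NONCODING_PER_GEN = 60    # ~60 non-coding mutations per generation
GENERATION_YEARS = 25     # assumed average generation time

def accumulated(years):
    """Expected (coding, non-coding) mutations accumulated over a span of years."""
    gens = years / GENERATION_YEARS
    return gens * CODING_PER_GEN, gens * NONCODING_PER_GEN

coding, noncoding = accumulated(10_000)
print(f"10,000 years: ~{coding:.0f} coding and ~{noncoding:.0f} non-coding mutations")
```

So over the 10,000-year span the OP asks about, only a few hundred coding changes are expected per lineage, which is why we would be genetically very similar to those ancestors.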
What mutations prove detrimental has changed. Denidi implicitly mentioned one factor. We would not see Caucasians or Asians, because light skin (via either of the two light-skin genetic changes) was detrimental in the African sun but beneficial in the European or Asian mid-latitude sun. Within the past century, modern medicine has reduced the lethality of various conditions and diseases. Robert McClenon (talk) 00:38, 18 January 2016 (UTC)[reply]
See Human evolution#Recent and current human evolution, which gives the examples of lactase persistence and resistance to diseases carried by domesticated animals. I suspect another example would be the increasing frequency of short-sightedness, which until a few centuries ago would have been a major disadvantage but, since the invention and common availability of spectacles, is no longer selected against.-gadfium 00:52, 18 January 2016 (UTC)[reply]
I disagree as to myopia. After division of labor, it was no longer a major disadvantage. It only dictated what occupational role the person could fill. They couldn't hunt. They could perform crafts. In an early literate society, it was possible that the nearsighted person could become a scribe, and being a scribe was a high-status occupation in early literate societies in which literacy was the exception rather than the rule. However, it does illustrate that, in general, technology changes what are harmful conditions. Nearsightedness wasn't one, in a society with division of labor. Robert McClenon (talk) 01:47, 18 January 2016 (UTC)[reply]
They would likely have more body hair. Head lice are supposed to have developed "30,000–110,000" years ago, as a result of humans having lost body hair in most places, leaving an isolated habitat for lice on the head. So, that puts the transition in the 60,000 years ago range you are interested in. StuRat (talk) 05:06, 18 January 2016 (UTC)[reply]

January 18

ASASSN-15lh

Considering what our sources on ASASSN-15lh say, does it mean that if it was in our galaxy, the light from this hypernova would be seen in the northern sky, even if it exploded in the southern sky? If yes, what would be its rough intensity compared to the southern sky? Brandmeistertalk 09:19, 18 January 2016 (UTC)[reply]

Well, what I saw said brighter than the full moon. The full moon is not apparent on the other side of the world, and a fully set sun can't be seen on the night side of the Earth. Perhaps you could see some odd lighting on the dark part of the moon, but otherwise, if it was below the horizon, you should not see anything. Graeme Bartlett (talk) 09:48, 18 January 2016 (UTC)[reply]
"If it was in our galaxy" is somewhat meaningless, given that our galaxy is 100–180 light years across. If it was nearby we wouldn't have much time in which to enjoy the spectacle.--Shantavira|feed me 10:14, 18 January 2016 (UTC)[reply]
Erm, double check your numbers; 100-180 thousand light years. Fgf10 (talk) 10:35, 18 January 2016 (UTC)[reply]
If we see a supernova in our Galaxy, it would probably be nearer rather than farther, because dust blocks anything that's not close or, astronomically speaking, up in the sticks (page 348). Specifically, that link says most of the dust lies in a layer about 100 parsecs (326 light years) thick, and Earth is inside it. The part of the Milky Way that's visible is actually less densely populated with stars than the dark rift of dust running down the middle. This is why it's easier to see a much more distant galaxy through city light pollution than our own Galaxy. Because we are not in Andromeda's central plane but rather see through it at a glancing angle, we see through more of the dustless upper or lower suburbs before the line of sight is stopped by dust. Sagittarian Milky Way (talk) 12:14, 18 January 2016 (UTC)[reply]
Also you can see some southern stars from the Northern hemisphere. At the north pole you cannot see any, but as you approach the equator most of the southern stars will be above the horizon at one part of the day. I suppose you are asking because most of the Milky Way is in the southern sky. Graeme Bartlett (talk) 11:07, 18 January 2016 (UTC)[reply]
The Milky Way is half in each hemisphere, as is required for a great circle. Anywhere but the Arctic you should see more than half of it reasonably well. The galactic center is in the southern hemisphere (Sagittarius), so more than half of the naked-eye stars or supernovae (as a long-term average) are likely in the southern hemisphere. But if by southern sky you mean "invisible or hard to see at middle latitudes", that might not be the case, as mid-northern latitudes can see the direction of the galactic center. It would require a majority of stars or visible Milky Way supernovae to be south of about -40° to -50° declination, and that might not be possible no matter how amazing the sky that far south is. Sagittarian Milky Way (talk) 12:14, 18 January 2016 (UTC)[reply]
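The declination cutoff mentioned above follows from simple spherical geometry: from latitude φ, only declinations above φ − 90° ever rise. A small sketch (my own illustration; it ignores refraction and horizon obstructions):

```python
def declination_range_visible(latitude_deg):
    """Range of declinations (degrees) that rise above the horizon at some
    point of the day for an observer at the given latitude, ignoring
    refraction and horizon obstructions."""
    if latitude_deg >= 0:
        # Northern observers: everything north of (latitude - 90) rises.
        return (latitude_deg - 90.0, 90.0)
    # Southern observers: everything south of (latitude + 90) rises.
    return (-90.0, latitude_deg + 90.0)

# From 40 degrees N, declinations down to -50 are visible, so the
# galactic center (declination about -29) does clear the horizon:
print(declination_range_visible(40.0))  # -> (-50.0, 90.0)
```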

Cost of Space Missions and Materials

The price of many space missions is available, but not how they have reached that amount of money. How much goes to advertising? How much do the materials cost? If I plan my own mission, how do I know how much it will cost? And on the same note: where can I find information about the materials that spacecraft are made of? (The questions are for hypothetical mission planning) 77.125.0.41 (talk) 14:41, 18 January 2016 (UTC)[reply]

Don't know what you mean by advertising. This is not generally a commercial product or service. Mars One might be an exception, since they appear to spend most of their funds in marketing themselves. For real programs, go to a project article, for example Space Shuttle program and follow the links. --Denidi (talk) 16:28, 18 January 2016 (UTC)[reply]
Our article on Space advertising covers launch and in-flight/in-space advertising, often a way to bring money into a government space program, not an expense. We don't seem to have any information on private space-launch providers advertising in Earthly sources to attract customers. Rmhermen (talk) 17:43, 18 January 2016 (UTC)[reply]
Indeed, there must be some client-hunting. However flying to space is such a niche industry, with so few players, that everyone must be aware of the existence of all other providers/buyers. Some sort of corporate relationship management must exist though, since there is a series of ancillary providers of products and services. — Preceding unsigned comment added by Denidi (talkcontribs) 18:57, 18 January 2016 (UTC)[reply]
Are we sure whether the OP is referring to advertising by companies or sponsorship of government programmes? It sounds to me like the OP is referring to costs, so I'm not so sure. (Of course, costs contractors incur from advertising would generally end up being part of the costs they charge; advertising-wise, though, I think even now the companies probably end up charging less, because whatever small amount they spend on advertising, they surely make more back from the publicity they get from working on the space programme. That's, after all, why the government can also get sponsorship.) Many space programmes do spend money on what is effectively advertising to promote their programme. A cynic would suggest it's to help bring in government funding, but as one of their missions is education, educating people about the mission and getting them interested would also be a core purpose. (Likewise, many people involved would be enthusiastic about educating and getting people interested as an end in itself, because they think it's good for these people to learn, rather than because it promotes the programme.) However, these costs must be tiny; it's not like they run a massive campaign with paid radio, TV, newspaper, online banner ads, Facebook ads, Google ads, etc. It's more a case of getting someone to run their website and social media accounts, write press releases, engage with the media, and so on. Even something like NASA TV or engaging with schools is to some extent partially an advertisement for NASA programmes. Nil Einne (talk) 00:27, 19 January 2016 (UTC)[reply]

CO2 pressure in a soda drink

At what pressure is the CO2 in a carbonated beverage? Does it make sense at all to talk about the CO2 pressure before opening the can/bottle? I ask because the CO2 would be dissolved, and not a gas. --Needadvise (talk) 16:18, 18 January 2016 (UTC)[reply]

It varies depending on the brand and the packaging or dispensing method. Slate notes that 12-week-old poorly stored plastic bottles can lose 15% of their CO2 compared to an aluminum can packed at the same time and pressure.[4] Fountain sodas can have their pressure manually adjusted. Rmhermen (talk) 17:33, 18 January 2016 (UTC)[reply]
Glass soda siphons are filled at 60 PSIG, plastic bottles at 0 PSIG (yes, zero). However, things are not so simple: the plastic bottles are filled with the water cold and fully saturated with CO2, the glass with the water at room temperature. CO2 dissolves better in cold water, so when the cold water warms up, the pressure in the bottle goes up. They fill cold, at zero PSIG, then put the cap on and let it warm up. The glass bottles are filled using a one-way valve, so all the pressure stays inside.
As the CO2 gets absorbed by the water the pressure drops, until the water holds as much CO2 as it can. So if you fill at a certain pressure, then shake the bottle to mix the CO2, you can noticeably feel the pressure drop. Then you fill again, mix, and the pressure drops less the second time. In my experience it takes 5-10 fills to max out the dissolved CO2. (Yes, I make my own seltzer at home.) Ariel. (talk) 08:33, 19 January 2016 (UTC)[reply]
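Ariel's observations match Henry's law: the dissolved CO2 at equilibrium is proportional to the headspace pressure, and cold water holds more. A rough sketch (the Henry's-law constant and its van 't Hoff temperature coefficient below are textbook approximations I'm supplying, not values from this thread):

```python
import math

# Henry's-law sketch for CO2 in water (approximate textbook constants).
KH_298 = 29.4       # Henry's constant, atm*L/mol, near 298 K
VANT_HOFF_C = 2400  # K, temperature-dependence coefficient for CO2

def dissolved_co2(pressure_atm, temp_c):
    """Approximate equilibrium dissolved CO2, in grams per litre of water."""
    t = temp_c + 273.15
    # van 't Hoff correction: Henry's constant drops (solubility rises) in the cold.
    kh = KH_298 * math.exp(-VANT_HOFF_C * (1.0 / t - 1.0 / 298.15))
    mol_per_l = pressure_atm / kh
    return mol_per_l * 44.01  # molar mass of CO2, g/mol

print(f"4 atm at  4 C: {dissolved_co2(4, 4):.1f} g/L")
print(f"4 atm at 25 C: {dissolved_co2(4, 25):.1f} g/L")
```

The cold fill holds roughly twice as much gas at the same pressure, which is why warming a cold-filled bottle raises its pressure, and why repeated fill-and-shake cycles show diminishing pressure drops as the water saturates.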

Asian eyes (epicanthic fold)'s protection from the sun

Is it true that Asian eyes (the presence of the epicanthic fold and single eyelid) provides greater protection from the sun? I've heard this before, but has there ever been a study that indicates a lower incidence of eye related diseases among East Asians? ScienceApe (talk) 16:27, 18 January 2016 (UTC)[reply]

Epicanthic fold states that the reason is unknown. Some features have no use, so it would be no surprise if no explanation exists. We can speculate that their eyes are more 'squinty' to block out more direct rays of sun. But also we could say that they have more fat to protect against cold weather. These theories could be completely wrong, but they seem to at least have some intuitive merit. --Denidi (talk) 16:33, 18 January 2016 (UTC)[reply]
Sure, it could be due to a lot of things, or maybe nothing (e.g. genetic drift). Maybe sexual selection played a role. Especially in human biology and evolution, we should all be wary of a just-so story. SemanticMantis (talk) 17:35, 18 January 2016 (UTC)[reply]
Protection from what regarding the sun? Eastern Asia, which includes China and Japan, is on the same latitude as the Mediterranean Sea where the native population have no natural epicanthic fold. Similarly the United States is on the same latitude and in either place I am unaware of any higher incidence of eye pathology caused by solar radiation. Richard Avery (talk) 07:47, 19 January 2016 (UTC)[reply]
A feature does not have to be an adaptation to the environment where it is common. You can also find white people nowadays in places like the Mediterranean region or the Southern US. The epicanthic fold could have arisen in a different region. But even then, given the feeble evidence, I stand by the position that the reason why the epicanthic fold exists is unknown, and only speculate about a possible evolutionary advantage. --Denidi (talk) 12:52, 19 January 2016 (UTC)[reply]
It is indeed unknown. But the sun sounds pretty absurd to me. If I wanted to *try* to find a guess for this, I'd need to compare the poorest of the poor - people starving, flies crawling over their bodies laying eggs, pus coming out of their eyes. It might be that one morphology or the other provides some sort of protection against some region-specific pathogen. Wnt (talk) 13:33, 19 January 2016 (UTC)[reply]
With respect, you're substituting one absurd, speculative, almost-certain-to-be-untrue hypothesis for another. SemanticMantis has put forth the only explanation that is to any extent consistent with understanding of the evolution of this variety of superficial feature: Sexual selection. Indeed, this is basically exactly the same archaic/urban myth that continues to persist for why certain people have darker skin tone than others; that too was attributed by folk evolutionary theory / racial segregationists to the notion of natural selection. And it remained popular amongst those who know just enough about the origins of human diversity to accept natural selection but don't know enough to really understand how it works. Today, like most other questions of morphology/phenotypic variations between the features of different races, it is understood to be the result of sexual selection, not natural selection. I know of no serious research to suggest the epicanthic fold is any different in this regard than variations in the morphology of the bridge of the nose, or skin tone, or any other feature which tends to vary between the races--be the source sunlight, pathogen, or glowing mutagen. Please avoid searching, speculative answers of the "it might be that" variety, unless you can provide at least some reliable sourcing that lends WP:WEIGHT to such a notion.
For ScienceApe, you might be interested in some light reading on this topic courtesy of Jared Diamond in The Third Chimpanzee, where he debunks many of this class of popular myth about how the races are the result of adaptation to localized environments. He wasn't the first to do so, of course--Darwin himself understood that sexual selection was a more sensible explanation of this divergence than his notions on adaptive natural selection, and he was very cognizant of the fact that those inclined towards racist ideology would be tempted to leverage his work on natural selection to imply that humanity was actually composed of subspecies; that's part of the reason The Descent of Man is twice the size of On the Origin of Species. But, in any event, Diamond sums up the contemporary research rather nicely. If you would like more specific and detailed references to particular features (or just more detail on how sexual selection leads to racial divergence) please feel free to ping me here for them or inquire on my talk page. Snow let's rap 03:24, 22 January 2016 (UTC)[reply]
@Snow Rise: If you're claiming that skin color is the result of sexual selection, that does not agree with the Human skin color article. Any very white kid who has played for a few hours in the Florida sun knows how poorly adapted this phenotype is to lower latitudes! But the article raises points of skin permeability and folate depletion that I didn't even think of ... reminding me that Just So Stories can lead you wrong even when they are right. I don't deny my random suggestion was just one of many conceivable guesses ... nonetheless, I hesitate to attribute everything to sexual selection. I mean, why are epicanthic folds sexually selected for in one place and sexually selected against everywhere else, and how can that be stable for thousands of years? I mean, my own perception of the attractiveness of the trait changed greatly over the course of a decade (much less than that if sexes are considered individually) ... so it scarcely seems like something set in stone that can drive evolution! Wnt (talk) 21:51, 22 January 2016 (UTC)[reply]
That's the very distinction of sexual selection when compared against natural selection; it doesn't require adaptive properties in terms of the same kind of survival mechanisms; it's instead about mate selection and broadcasting (or faking) fitness in order to mate (rather than survive long enough to mate). In terms of research into why these traits get preserved over generations, such that certain populations exhibit increasingly particular traits, it is (in humans and many other social species) governed by the fact that individuals tend to imprint for their notions of attractiveness at a young age. Many studies have shown that if you measure very particular features of an individual's spouse (size and shape of earlobe, relative size of brow, eye and nose shape, etc.) they tend to show a close correlation to those of that same individual's siblings and other close relations, relative to the average for those features in the population to which the individual immediately belongs. Similar studies of other highly social mammals have dyed the fur of parents and siblings a colour which does not occur in nature (hot pink for example) and the offspring from those litters are vastly more likely to select a similarly-coloured mate once entering sexual maturity. Now, for humans, this situation is likely changing and will continue to do so as we become a more global community with more people of mixed ethnicities in our peer groups and immediate families. Nevertheless, this helps explain one of the major mechanisms which answer your inquiry as to how these changes have persisted historically (and often grown more pronounced over time).
There are of course other factors involved in Sexual selection--it can include direct cues as to reproductive or general health, for example--and I recommend that article and the books I linked above for the OP if you're interested in the science involved (I'm also happy to collect together and provide links to more specific research if you'd rather be reading the niche experts directly rather than secondary overview; just let me know).
In any event, while it is your own prerogative to "hesitate to attribute" causality with regard to your own impressionistic views that you choose to adopt on a given matter or concept, for the purposes of the ref desks, please refrain from automatically passing this speculation along unless you know you have sourcing for those assumptions, be they affirmative theories of your own or speculation as to how much weight should be given to the consensus notions of a field of research. As to human skin color, I hesitate to comment as I have not yet reviewed its content, but I will certainly do so to make sure it is consistent with the contemporary scientific understanding of this topic--it is, after all, an immensely important topic that we should have well-vetted coverage on! I will say this much: the notion of skin tone being the result of natural selection amongst humans is not just a just-so story, it's really the just-so story for human morphology, the very one that (arguably more so than any other) helped lead to the widespread understanding of the pitfalls involved in this kind of thinking. If you look at the data for the distribution of the traditional ranges of light- or dark-skinned peoples and then compare it against the amount and intensity of sunlight that people receive in those ranges, this traditional/folklorish theory falls apart instantly. Snow let's rap 23:35, 22 January 2016 (UTC)[reply]
I don't think I was sounding all that dogmatic about my idea, and you're blowin' pretty hard here. Comparing skin color to latitude seems like a strawman to me - it could depend on how much vitamin D in the diet (more = darker) , how much folate in the diet (more = lighter), apparently humidity (more = lighter), not to mention tree cover, clothing customs (and the weather that influences them), even the prevalence of unknown factors that increase or decrease cancer risk. But even today, a black family in Scandinavia or Britain (see rickets) will be warned to be sure to get enough vitamin D in the diet, and a white family in the tropics or the outback will need few reminders to keep the sunscreen and the floppy hats close at hand. To deny non-sexual selection applies in this case at all seems absurd - it's one thing to write a just so story, another to live under the selective pressure. Wnt (talk) 02:08, 23 January 2016 (UTC)[reply]
I don't recall ever saying you were being dogmatic or anything remotely in that vein. I said you were speculating, without sources to back up that wild guesswork, on the reference desk, which, at best muddies the waters for the OP's understanding of the issue he came seeking insight on here, and, at worst convinces him to believe your pet theory (Though I rather suspect ScienceApe probably knows better). You defended that decision and, for good measure, decided to further speculate in minimizing a mechanism that has been thoroughly investigated for its role in creating racial variation, a mechanism which I did reference, however briefly (Jared Diamond, with more sources to follow below).
Further, you seem to have lost the thread on what is being discussed. Of course human skin (and form generally) has adaptations to make the best of the environment, and these traits can vary between populations to some limited degree. The issue in question (after you put the notion forth) is whether the visually-recognizable racial characteristics are the result of natural selection. (i.e. does skin pigmentation vary in populations as a direct result of the level of exposure to sunlight, are epicanthic folds defenses against some speculative parasite, etc.) Correlation≠causation. Yes darker skin protects well against UV light, but you have to remember that this darker skin was the original phenotype of human populations. You go further to imply that human skin tone lightened due to adaptive pressures to allow more UV light into the skin for Vitamin D synthesis. This source helps explain how that isn't so--by the way, you might want to broaden your academic searches if you don't find what you are looking for in PubMed, which is hardly the be-all and end-all of biomedical/genetic reference material--and will go some way to explaining where your inductive reasoning is failing you here, as will this.
If you can put sources which bring this well-established line of research into question, by all means, please do so; I'm always happy to eat my words in exchange for new knowledge. Otherwise, I ask that you remember that this isn't reddit, it's Wikipedia and claims need to be sourced, and every bit as much on the reference desks as elsewhere on the project. And regardless of whether I have managed to convince you of how the mechanics of human skin variation are thought to work, your original epicanthic fold suggestion that got us on to this topic is clearly just a notion that occurred to you, not something supported by research in the slightest.Snow let's rap 07:46, 23 January 2016 (UTC)[reply]
I should add I just checked at PubMed and I'm not seeing the evidence you talk about. There's scarcely anything on skin color and sexual selection, and even that recognizes the role of UV. I found this recent paper that says that even beyond visible skin color, northern Europeans have other mutations to allow more UV in to increase vitamin D production. I feel like you're pushing a fringe view. Wnt (talk) 02:19, 23 January 2016 (UTC)[reply]
Please see above. The question your comments raised isn't whether human skin has adapted to protect against UV, it's whether human variation in skin tone is a result of this factor. Snow let's rap 07:46, 23 January 2016 (UTC)[reply]

Specific gravity of Urine is typically reported in either g/cm3 or kg/m3. Can anyone tell me which unit of measure is correct for the reference ranges provided in the article? — Preceding unsigned comment added by JohnSnyderDTRRD (talkcontribs) 18:46, 18 January 2016 (UTC)[reply]

Technically, specific gravity doesn't have units. Instead, it's the ratio of the density of a substance to that of a reference substance, usually water. That said, the density of pure water is normally taken as 1 g/cm3, so the specific gravity and the density in g/cm3 are usually numerically identical. (The density of water is 1000 kg/m3, so orders of magnitude considerations should probably have led you to rule that out.) - P.S. If you're attempting to use these numbers for anything important, I wouldn't trust the values given on Wikipedia. Get them from a more reliable source, one which you're confident that you can interpret correctly. -- 19:02, 18 January 2016 (UTC) — Preceding unsigned comment added by 160.129.138.186 (talk)
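To illustrate the point that specific gravity is a dimensionless ratio: the same number comes out whichever density unit you start from, as long as both densities use it (the 1.020 value below is just an example figure, not taken from the article):

```python
def specific_gravity(density, water_density):
    """Dimensionless ratio of a substance's density to water's density.
    Both arguments must be expressed in the same unit."""
    return density / water_density

# An example density expressed two ways gives the same specific gravity:
print(specific_gravity(1.020, 1.000))    # g/cm^3 inputs  -> 1.02
print(specific_gravity(1020.0, 1000.0))  # kg/m^3 inputs  -> 1.02
```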

Electric transferring machine

Do we have anything existing that supports transferring electricity from a battery to any motor whatsoever, without a wire (or wires)? -- Mr. Zoot Cig Bunner (talk) 19:20, 18 January 2016 (UTC)[reply]

There is an article about it here: Wireless power. It has many examples of different ways to do this. Ariel. (talk) 19:36, 18 January 2016 (UTC)[reply]
This is not sufficient, you have to keep it touched...
Thanks btw.
Mr. Zoot Cig Bunner (talk) 19:29, 19 January 2016 (UTC)[reply]
If I'm understanding you right, you're asking whether it's possible to point a battery at a motor from across the room and make it turn, without close-contact or near-field-transmission. What you're looking for is described at Wireless power#Far-field or radiative techniques; basically, unless you're fitting both transmitter and receiver with perfectly-aligned microwave dishes, any transmission is going to be staggeringly inefficient and become rapidly less efficient with distance owing to the inverse-square law. (Tesla claimed that the World Wireless System could theoretically transmit power over any distance but general consensus is that even if one could get it to work, a system transmitting enough power to power an electric car at a distance of more than a few meters would (1) get you the kind of electricity bill more normally associated with industrial smelters, (2) kill every living thing in the vicinity, and (3) overheat spectacularly and catastrophically.) ‑ Iridescent 19:45, 19 January 2016 (UTC)[reply]
(adding) There have been some proof-of-concept experiments using high-intensity lasers aimed precisely at photovoltaic cells as a method of transmitting energy over long distances (the particular envisaged application is recharging spacecraft), but this is not something you'd want to try at home, since a laser powerful enough to transmit enough power to run a motor is also a laser powerful enough to blind you instantly. ‑ Iridescent 20:04, 19 January 2016 (UTC)[reply]
Woah...no! Check out a Crookes radiometer (for example) - this is a "motor" that's powered by sunlight - so if you had a laser that produced the same energy per meter-squared as sunlight, it could power the radiometer - and walking through the beam would be no different than crossing a beam of sunlight. Staring right into it might be a bad idea - but no worse than staring at the sun. So long as it was a visible-light laser, I don't think it would be a hazard. Our article on this gizmo says that it can be powered from the heat from your hand...so clearly a very low energy source is sufficient. Also, note that the size and power of the motor is not being asked about here - so we could imagine a tiny, super-lightweight motor that could be powered by light alone from a very dim source. This is far from impossible. SteveBaker (talk) 20:19, 19 January 2016 (UTC)[reply]
Oh, sure, one could power a radiometer with a low-power laser, in the same way one could power a micromotor with a Tesla coil, but I'm assuming that's not what the OP means; given that he says "power any motor", I'm assuming he's talking about something with practical applications, in which case to run a decent-sized motor constantly (as opposed to five minutes of operation followed by five hours charging) would involve lasers or microwaves at ray-gun intensities. ‑ Iridescent 20:27, 19 January 2016 (UTC)[reply]
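The inefficiency of unfocused far-field transmission can be estimated from the inverse-square law: an isotropic transmitter spreads its power over an ever-growing sphere, and the receiver captures only its own area's share. A rough sketch (the receiver size and distance are arbitrary example values):

```python
import math

def received_fraction(receiver_area_m2, distance_m):
    """Fraction of isotropically radiated power intercepted by a receiver of
    the given area at the given distance (far field, no focusing).
    The radiated power spreads over a sphere of area 4*pi*d^2."""
    return receiver_area_m2 / (4.0 * math.pi * distance_m ** 2)

# A 10 cm x 10 cm receiver just 5 m from the transmitter captures ~3e-5
# of the radiated power; doubling the distance quarters that fraction.
frac = received_fraction(0.01, 5.0)
print(f"captured fraction at 5 m: {frac:.2e}")
```

This is why the practical proposals above rely on tightly focused beams (microwave dishes, lasers) rather than broadcasting power in all directions.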
Yeah "any motor whatsoever" is not reasonable. There are some VERY large motors out there! There are some 100,000 horsepower motors out there (being used by Boeing and NASA in their wind tunnels)...that's 74 Mwatts. There are petawatt lasers out there - but they only fire for the briefest time. There are, however, continuous power megawatt-range lasers (Boeing YAL-1, for example) - so it's not too implausible that we could deliver the power from A to B wirelessly. Obviously you'd need some fancy equipment to turn the resulting light beam into electricity...probably you'd need to boil water and have a steam-powered generator or something. It might be possible...but definitely not easy!
SteveBaker (talk) 21:38, 19 January 2016 (UTC)[reply]
I've read thoroughly; both of your statements made it easy to understand.
One thing I wish to say i.e., this is not mandatory, otherwise humans would've found a way to contain it. Thanks guys. -- Mr. Zoot Cig Bunner (talk) 20:20, 20 January 2016 (UTC)[reply]
Pshh, who needs electricity? Attach blades to a shaft, then laser ablate the blades to turn the shaft. Bob's your uncle. --Link (tcm) 21:50, 20 January 2016 (UTC)[reply]
I'll read through, thanks. Regards. -- Mr. Zoot Cig Bunner (talk) 20:28, 21 January 2016 (UTC)[reply]

Syn-Bio-Sys

I recall a prototype suit that feeds your body food. Does it exist? If so, why is it not out yet? -- Mr. Zoot Cig Bunner (talk) 19:20, 18 January 2016 (UTC)[reply]

This is an article about it, and this is the appropriate page on the artists' website. Note that it's designed as performance art rather than a practical method of nutrition. Tevildo (talk) 19:57, 18 January 2016 (UTC)[reply]
Looks disgusting; I was expecting something awesome, Tev.
Just to clarify, you still have to put something in the stomach, right?
Mr. Zoot Cig Bunner (talk) 19:31, 19 January 2016 (UTC)[reply]
Yeah - it's an art concept. I don't see anywhere where they crunched the numbers to see whether adequate calories could be delivered with such a thing (seems highly unlikely) - or whether all of the required nutrients would be available. Meh.
If you consider Spirulina (dietary supplement), 100g of dried algae provides 290 calories - so for a 2000 calorie daily diet, you'd need to consume about 700g of the stuff - and that's dry weight. If you can't dry it out (which would take a lot more equipment and some source of energy beyond what the body could provide), then you'd need to consume around 7 to 14 kg of the filtered wet stuff per day.
This NASA report suggests that with optimal conditions, the amount of Spirulina in an optimum experimental setup with enough CO2, water and sunlight, doubles about twice per day. But wrapped around a human body and getting only whatever sunlight and CO2 is available is FAR from optimal - and the doubling time could easily be more like one doubling every 10 days. So, for example if the temperature in the algae tank/tube is only 25 degC instead of the optimal 38 degC - you'd need to be hauling around 70kg (wet weight) of Spirulina per day to keep yourself in food...add the tubes/tanks and the other equipment to support that vast amount of algae and you're probably hauling around maybe twice your body weight in gear.
On the plus side, spirulina contains almost all of the nutrients your body needs (there is some debate about vitamin B12). However, as NASA helpfully point out: "The most difficult problem in using algae as food is the conversion of algal biomass into products that a space crew could actually eat over a long period of time."
So this "body suit" is either art or fiction. There is no chance of it being real.
SteveBaker (talk) 19:46, 19 January 2016 (UTC)[reply]
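SteveBaker's back-of-the-envelope figures above can be checked in a few lines of Python; the 5-10% solids fraction assumed for the wet slurry is the same assumption behind his 7-14 kg range.

```python
# 290 kcal per 100 g dry weight, per the Spirulina article linked above.
DRY_KCAL_PER_100G = 290.0
DAILY_KCAL = 2000.0

dry_grams = DAILY_KCAL / DRY_KCAL_PER_100G * 100.0
print(round(dry_grams))  # roughly 700 g of dried algae per day

# Wet weight, assuming the filtered slurry is only 5-10% solids:
for solids_fraction in (0.10, 0.05):
    wet_kg = dry_grams / solids_fraction / 1000.0
    print(round(wet_kg, 1))
```

The numbers come out at about 690 g dry, or roughly 7 to 14 kg wet, matching the estimate above.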
The suit seems like a low-tech, very low efficiency way of trying to copy Elysia chlorotica. Looking around I found this cool paper (though I haven't checked the nitty-gritty yet). I don't know if chloroplasts in animals have a chance of providing meaningful amounts of energy, but at least it is attemptable. Wnt (talk) 21:22, 19 January 2016 (UTC)[reply]

I recalled, guys: the one I've read of in the newspaper is called Synthetic Biological Suit/System. It's an armless jacket, like a waistcoat or a waistcoat-lookalike jacket - the description said something like that... -- Mr. Zoot Cig Bunner (talk) 20:21, 20 January 2016 (UTC)[reply]

How to make sea water drinkable

  1. Do we have a step-by-step guide?
  2. How does it work in our world 'naturally' and 'industrially'?

Mr. Zoot Cig Bunner (talk) 19:20, 18 January 2016 (UTC)[reply]

See Solar still and Watermaker for two methods. Ariel. (talk) 19:38, 18 January 2016 (UTC)[reply]
For "naturally", see water cycle. For "industrially", see water desalination. Of course, removing salt from the water is just one step to making it drinkable. See Water purification for the rest. StuRat (talk) 19:40, 18 January 2016 (UTC)[reply]
I'll read through, Thanks guys. -- Mr. Zoot Cig Bunner (talk) 19:32, 19 January 2016 (UTC)[reply]
To put it simply, water is a liquid that boils at a low temperature, while sodium chloride (salt) is a solid that boils at a really high temperature. So whenever salt water evaporates, the salt stays behind. The water in the air then cools and condenses somewhere - the top of a still, or a rain cloud - and that is fresh water produced by distillation. The process of reverse osmosis is used in industry but rarely occurs in nature, mangroves being the exception. Wnt (talk) 21:30, 19 January 2016 (UTC)[reply]
Thank you. -- Mr. Zoot Cig Bunner (talk) 20:21, 20 January 2016 (UTC)[reply]

Time dilation and speed being relative

Speed is relative, correct? There is no "absolute" speed, as Newton pointed out. If you are moving in a car going at 40mph, you are still at rest with respect to the car. So if you have a rocket ship moving at 99%c that ship is moving at 99%c relative to the Earth. And time slows down for the rocket with respect to the Earth. However since speed is relative, you could say that the rocket is at rest and the Earth is moving at 99%c with respect to the rocket. So why does time speed up on Earth with respect to the rocket? As far as the universe is concerned, neither is moving in the absolute sense, so what determines what gets slowed down and what gets sped up? ScienceApe (talk) 20:21, 18 January 2016 (UTC)[reply]

The common belief that "time slows down when you are moving fast" is false. What is true is that each observer "sees" time appear to move more slowly on an object moving at a relative speed. This observation is entirely symmetrical, so the rocket observes time to move more slowly on earth. In fact time is progressing quite normally for both observers in their own reference frame. The Twins paradox is still true because of the change in frames as the rocket accelerates. Dbfirs 20:30, 18 January 2016 (UTC)[reply]
I believe there is another misconception in your question. It is in the bit 'no "absolute" speed as newton pointed out'. According to special relativity, there is one absolute speed: the speed of light.--Denidi (talk) 21:27, 18 January 2016 (UTC)[reply]
Yes, in the sense that your speed relative to light coming from any direction will always be c in free space, and the same fraction of c in any given medium (air, water, etc). Dbfirs 21:35, 18 January 2016 (UTC)[reply]
I believe precisely this question is addressed by the Twin paradox. Vespine (talk) 21:46, 18 January 2016 (UTC)[reply]
Yes, I accidentally put an "s" on the link I gave above, but it still goes there via a redirect. Dbfirs 21:49, 18 January 2016 (UTC)[reply]
I'm still having my 1st coffee of the day, completely missed you already linked it:)Vespine (talk) 21:53, 18 January 2016 (UTC)[reply]
No problem, it's nearly bedtime here. Better to link twice than not at all. Enjoy your day. Dbfirs 22:00, 18 January 2016 (UTC)[reply]
These types of questions about spacetime have analogues in Euclidean geometry. The Euclidean analogue of your question is the question of which of two non-parallel lines is "denser" than the other. If you draw perpendiculars to one line at regular intervals and extend them until they intersect the second line, the second line will be longer between each pair of consecutive perpendiculars (by a factor of √(1+m²), where m is the slope of one line relative to the other), so it's "denser". But if you switch the role of the two lines, the first line is the one that's "denser". How can that be? Well, it just is. There would be nothing surprising about it if I hadn't introduced the confusing concept of "density" of lines, which is nearly identical to "time dilation" in spacetime.
The diagram to the right shows the slightly more complicated case of the Euclidean twin paradox. You can solve this with the perpendicular lines, and get the right answer (as shown). It's not wrong, just stupidly overcomplicated. The simple answer is that the two sides add up to more than the one side because of the triangle inequality, or the fact that a straight line is the shortest distance between two points. It's the same thing in spacetime, except that because of the opposite sign of the spacetime version of the Pythagorean formula, a straight line (inertial motion) is the longest distance (elapsed time) between two points (events), so the stay-at-home twin is older.
See also File:Euclidean analogue of length contraction.svg, File:Euclidean analogue of velocity addition.svg, and File:Euclidean barn-pole paradox.svg. -- BenRG (talk) 22:18, 18 January 2016 (UTC)[reply]
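Both halves of BenRG's analogy can be checked numerically; the speed and travel times below are arbitrary illustrative values, in units with c = 1.

```python
import math

def traveller_proper_time(coord_time, v):
    """Proper time along the travelling twin's bent worldline: two
    legs at speed v, each of Minkowski 'length' dt * sqrt(1 - v^2)."""
    return coord_time * math.sqrt(1 - v ** 2)

def bent_path_length(straight_length, m):
    """Euclidean analogue: a bent path of slope +/-m relative to the
    straight segment is longer by the factor sqrt(1 + m^2)."""
    return straight_length * math.sqrt(1 + m ** 2)

# 10 years of Earth coordinate time, travelling at 0.6c:
print(traveller_proper_time(10.0, 0.6))  # less than 10: traveller ages less
# The Euclidean bent path, by contrast, comes out longer than the straight one:
print(bent_path_length(10.0, 0.75))
```

The opposite signs under the two square roots are exactly the sign flip in the spacetime version of the Pythagorean formula described above: straight worldlines maximize elapsed time, while straight Euclidean lines minimize length.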

Underground depth measuring device

Is there any device that can measure the depth underground from the perspective of a man below, such as in caves, subways, or how deep a man is in catacombs below the ground level? --93.174.25.12 (talk) 21:41, 18 January 2016 (UTC)[reply]

A basic Pressure altimeter should still work as long as the chamber is not sealed airtight. Vespine (talk) 21:50, 18 January 2016 (UTC)[reply]
(ec) Interesting question. I suppose a pressure altimeter could work, although with a few caveats:
1) The cave would have to be open to the air, so air pressure in the cave would not be disconnected from the atmosphere. So, an air pocket in an undersea cave wouldn't work (or at least it would need to be adjusted to reflect the water pressure from the water above it).
2) It would tell you how far above or below sea level you were, and you would need to know how high above or below sea level the surface is, to determine how far below the surface you were.
3) It wouldn't be all that accurate, as air pressure also changes with the weather, etc. Taking an accurate air pressure reading at the entrance of the cave would help to account for that. Air pressure changes in caves also tend to lag the changes outside, so you would need to account for this lag, or better yet, take your readings when the air pressure has been stable for many hours. So, there would be a fair amount of calculation to be done, but this could all easily be automated. If you took one pressure altimeter with you and left one at the entrance, then you could use data from both (after you leave), to calculate the depth at various times in your expedition. StuRat (talk) 21:52, 18 January 2016 (UTC)[reply]
Not a realistic answer, but apparently muons penetrate deep underground, but may be stopped by tens of meters of rock, so in theory....... hmmm, I dunno, but this came up when I searched. Wnt (talk) 13:24, 19 January 2016 (UTC)[reply]
Cave survey has some methods. shoy (reactions) 13:47, 19 January 2016 (UTC)[reply]
For more precision than you can get with a pressure altimeter using air, you can use a tube filled with water that leads (possibly by a circuitous route) to the surface and measure the pressure at the bottom of that. The difference in pressure at the ends of the tube (knowing the density of the water) gives the elevation - and the large values of pressure should make it easier to measure it accurately. The incompressibility of water should eliminate the 'lag' effect you get when there are pressure changes at the surface. However, variations in temperature may make the density estimate inaccurate - but that's true when you do it in air also. This approach is called "Hydrolevelling" (Hmmm...no article on that?!). SteveBaker (talk) 18:44, 19 January 2016 (UTC)[reply]
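The hydrolevelling conversion itself is one line of hydrostatics, ΔP = ρgh; the density value below assumes cold fresh water, which is exactly the temperature sensitivity SteveBaker mentions.

```python
RHO_WATER = 1000.0   # kg/m^3, assumed; varies slightly with temperature
G = 9.81             # m/s^2

def height_difference_m(delta_pressure_pa):
    """Elevation difference between the two ends of a water-filled
    tube, given the pressure difference between them."""
    return delta_pressure_pa / (RHO_WATER * G)

# A full extra atmosphere (~100 kPa) at the lower end corresponds to
# roughly 10 m of water column:
print(height_difference_m(100_000.0))
```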
I thought of suggesting that, but the difficulty in snaking a water line deep into a cave dissuaded me. StuRat (talk) 22:11, 19 January 2016 (UTC)[reply]
Aha! It's mentioned in our Cave_survey article...evidently it is actually used! See: Cave_survey#Hydrolevelling SteveBaker (talk) 03:26, 20 January 2016 (UTC)[reply]
And of course, you can just drop a rope with marks on it down a vertical shaft, or use sonar to measure depth there. Any horizontal chambers or tunnels will have the same depth. For angled tunnels, you can then use the same methods listed previously, but measuring from where the tunnel meets the vertical shaft, instead of from the surface. Temperature variations will hopefully be less there. StuRat (talk) 03:57, 20 January 2016 (UTC)[reply]

January 19

Carbs in Stiegl Radler

How many carbs are there in a 0.5 liter serving of this beverage, described on the can as "grapefruit Salzburger Stiegl Radler beer with fruit soda. Malt beverage speciality. 40% Stiegl-Goldbräu and 60% fruit soda. Product code 8-52527-00028-1." On the internet I see statements of 0 carbs to 44 carbs, and both extremes seem very unlikely, but the higher is more plausible. The beverage can provides no guidance. An answer based on reliable sources is urgently desired. Edison (talk) 03:04, 19 January 2016 (UTC)[reply]

In the United States, the labeling of all malt beverages, regardless of alcohol content ... is regulated by the Alcohol and Tobacco Tax and Trade Bureau (TTB). TTB does not require that the products it regulates bear nutrition labeling. In other words, there are no legal requirements for the producer or retailer to disclose ingredients or calorie information. Labels for beers are not rule-free: there are mandatory labeling requirements; but these do not include nutrition information like carbohydrate content or calorie content.
If you're lucky, you might find some independent food science laboratory who has voluntarily assayed the contents and published the lab results at no cost ... but this isn't very likely.
One major American importer lists 118.8 calories per 11.2oz; you could contact them, or the manufacturer, if you wanted more details.
WP:OR: By my back-of-the-envelope calculations, this data is sufficient to estimate 30 grams sugar per 500 milliliters of the drink ... by solving simultaneously for the mass of ethanol and sugar, constrained by the alcohol-by-volume, the calories-per-mass, adjusting for serving-size, and so on. This is not an accurate methodology for many reasons, but it's probably in the ballpark. This is slightly more sugar, and significantly more alcohol, than a typical serving of "cola" or fruit-soda soft drink.
Nimur (talk) 05:37, 19 January 2016 (UTC)[reply]
Thanks for the calculation. From 2% alcohol by volume one can work out the calories from alcohol; the total calories less the alcohol calories gives the carbohydrate calories (since fat and protein are likely nil), and that divided by 4 yields the grams of carbs. I see the article Nutrition facts label which indicates several countries require detailed nutritional labels for "prepackaged food," but do other countries also give alcoholic drinks a pass from such labelling as the US does? It is hard to see why alcoholic drinks should be exempt from any nutritional labelling other than alcohol by volume and ingredients. Edison (talk) 22:01, 19 January 2016 (UTC)[reply]
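Edison's arithmetic can be written out explicitly; the 118.8 kcal per 11.2 oz figure is Nimur's, the 2% ABV is Edison's, and the rest are standard conversion constants, so the result is only as good as those inputs.

```python
ML_PER_US_FLOZ = 29.5735
ETHANOL_DENSITY = 0.789      # g/ml
KCAL_PER_G_ETHANOL = 7.0
KCAL_PER_G_CARB = 4.0

serving_ml = 500.0
total_kcal = 118.8 * serving_ml / (11.2 * ML_PER_US_FLOZ)

ethanol_g = serving_ml * 0.02 * ETHANOL_DENSITY   # 2% ABV
ethanol_kcal = ethanol_g * KCAL_PER_G_ETHANOL

carb_g = (total_kcal - ethanol_kcal) / KCAL_PER_G_CARB
print(round(carb_g))   # lands close to Nimur's ~30 g estimate
```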
I think the logic is similar to cigarettes. That is, they knew they were unhealthy, so regulating them in this way, which supposedly has the purpose of ensuring that they were healthy, seems out of place. I don't personally agree, though, as there are degrees of healthiness and unhealthiness, and the consumer may well want to know the specifics, so they can make better decisions. And, for alcoholic beverages, they are only unhealthy if abused, so that's another reason to give them proper labels. StuRat (talk) 22:15, 19 January 2016 (UTC)[reply]
AU has slightly stricter requirements than USA see here [5], where it says basically nutritional info is require in some cases. Here [6] should be a table summarizing alcohol labeling laws for different countries, but the IARD site is currently down for maintenance. SemanticMantis (talk) 22:24, 19 January 2016 (UTC)[reply]

what field studies maximum world population carrying capacity?

To answer the question what is the maximum world population carrying capacity, it seems as if you would need to know a lot about mathematics and statistics, biology and genetics, medicine, agriculture, economics and possibly several more fields. So what if you went to universities to find the people who know the most about this topic, what department would they be in?--Captain Breakfast (talk) 19:08, 19 January 2016 (UTC)[reply]

Conceivably demography. Loraof (talk) 19:32, 19 January 2016 (UTC)[reply]
The figures named in this article seem to be biologists. Clearly they would draw on work in other disciplines - they would not need to "know" a lot about the work done in those other fields. Ghmyrtle (talk) 19:40, 19 January 2016 (UTC)[reply]
Indeed - it doesn't take much to find out the area of land that could potentially be farmed, the maximum yield of that land and the food intake requirements to figure out whether food would be the limiting factor (for example). Repeat that for all of the limiting factors you can think of - pick the lowest number - and you're done. With access to the right kinds of basic information (much of which is very easy to find) - it seems like anyone with a reasonable scientific background and a command of basic arithmetic could come up with a plausible number.
The real difficulty is in identifying all of the things that high population pressure could cause to go wrong. We already know (for example) that global climate change has come about because of the product of the size of our population and our per capita energy use. We also have reasonable estimates for how far we could push the warming trend without catastrophically (and suicidally) trashing the planet - but we can't easily predict whether technology can step in (eg with effective fusion reactors producing safe/cheap energy) to ameliorate that - or whether that will remain out of reach and with increasing energy demands, we'll burn through even more fossil fuels - thereby hastening the end.
But what if some other thing that we haven't thought of is the limiting factor? There have been many things over the years (CFC pollution destroying the ozone layer, for example) that were completely unforseen until we started to investigate them.
Being unable to know that kind of thing pushes even the best science into the realms of speculation.
IMHO, the true answer is "fewer people than we have now"...and that highlights another issue. Right now, the earth is carrying 7 billion people - but is that a sustainable total? If the population doubled overnight, we could probably struggle on for a while before systems started to collapse around us - but there are strong signs of those collapses happening right now. So if your question is about "sustainable" population size, it's going to be a very different answer than "peak" population size. What the peak number is would likely be capped by birth rates - what the sustainable number is depends on resource depletion rates, technological advances and so forth.
But that adds another layer of complexity to the mix. We're currently running out of some very fundamental resources - copper and helium, for example, are close to exhaustion. If it turns out that population growth is limited by the amount of copper we have (unlikely - but bear with me!) then starting a super-strict copper recycling system would allow us to have a larger population than if we totally deplete the reserves and mountains of copper stashed in hard-to-recycle landfills. So the final maximum number of people depends a lot on the route we take to get there...which is one of those political/sociological things that we're disastrously bad at predicting.
The problem is more to do with methodology and data capture than specific fields of study/expertise.
SteveBaker (talk) 20:09, 19 January 2016 (UTC)[reply]
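The "pick the lowest number" step SteveBaker describes is trivially mechanical once you have the inputs; every figure in the sketch below is a made-up placeholder, since obtaining defensible real figures is the genuinely hard part.

```python
DAILY_KCAL_PER_PERSON = 2000.0

# Hypothetical total daily kcal each resource could sustainably
# support (placeholder numbers, not real data):
resource_kcal = {
    "farmland": 4.0e13,
    "fresh water": 3.5e13,
    "energy": 5.0e13,
}

# Population ceiling implied by each resource; the binding constraint
# is whichever ceiling is lowest.
ceilings = {name: kcal / DAILY_KCAL_PER_PERSON
            for name, kcal in resource_kcal.items()}
limiting = min(ceilings, key=ceilings.get)
print(limiting, ceilings[limiting])
```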
There's a lot of assumptions and no references here, as well as some big mistakes. A very important error is your claim that "it doesn't take much to find out...the maximum yield". Hundreds of researchers and thousands of peer-reviewed papers say otherwise. Just search google scholar for /potential yield/, /yield limitation/, and various related terms, throw in agriculture, agronomy, or agroecology in as you see fit. You are right that sustainability makes all this even harder, maximum sustainable yield gets over 4k hits on google scholar just since 2012! So lots of people are building their careers on advancing our understanding of maximum yield. This is clearly a very active area of research, not something that is easy to figure out. SemanticMantis (talk) 22:07, 19 January 2016 (UTC)[reply]
For a comprehensive demolition of the "running out of helium (etc)" myth, see the in some respects bonkers, but knowledgeable, Tim Worstall. HenryFlower 05:21, 20 January 2016 (UTC)[reply]
Lots of ecologists estimate carrying capacity of other organisms, but humans are ecosystem engineers, and that complicates things, because then there is no one constant "carrying capacity" like there is for a small closed system, and that number will depend on myriad other factors, some of which will continually change (i.e. changes in technology result in non-stationary distributions for life history statistics and traits).
So this is hard stuff, and there aren't really any named subfields that are centered around researching human carrying capacity. There are indeed many people who talk about it, just not all under one roof. Plenty of people are working on related sub-issues, such as how much food we can make on how much land. To further confuse the issue, bio-sciences departments are structured in very different ways at different universities, so the people who work on this will be housed in departments that have a variety of names. Increasingly, a university professor will have a "home" department, as well as memberships in various "centers" or "groups", and if you look into research on carrying capacity, you can find authors with over five different named affiliations!
This all complicates the answer to your question, but here are some department names you can look at, some research articles, and some names for general fields of research that include carrying capacity. Members of Agroecology departments sometimes write papers on human carrying capacity of earth. Sometimes you might see this research coming out of an evolution/ecology dept, or "integrative biology", or even just "biology." But names are often political more than scientific. Here [7] is a fairly famous and relevant paper, the authors are housed in an "Energy and resources group", and a "Center for conservation biology". Here [8] is another highly cited paper, this one written by someone housed in a geography department. That may seem strange, but increasingly Geography is seen as a "big tent," inclusive field, and the departments will include people who do a lot of demography, ecology, and mathematical modeling. U. Arizona has a "School of Sustainability" [9] that I think touches on human carrying capacity, while QUT has an "institute for future environments" [10]. They have a nice "dashboard" calculator that you can play with to look at results [11] of modeled carrying capacity as a function of several inputs. So, that's a selection illustrating the high variety of names involved. If you want to learn more about the latest thinking on the human carrying capacity of earth, you're better off searching by term than by department or field. If you want to find people at a specific university, I suggest browsing the keywords in faculty lists, perhaps for many different named units. I'd be happy to help further if you want more along these lines, or want to narrow the question. SemanticMantis (talk) 20:39, 19 January 2016 (UTC)[reply]

See urban farming especially vertical farming; hydroponics etc. In theory, you could have orbital solar collectors or deep geothermal energy or fusion reactors powering unlimited food growing space in deeply carved caverns, even as ultra common components of the Earth like silicon and aluminum are used to handle most mechanical/structural needs. Though underground cities seem gloomy, if done well they might provide an opportunity for large parklands to recreate ancestral ecosystems without exotic competitors. I suppose eventually you run out of carbon and nitrogen? The oceans should provide a pretty good supply of most other basics. Wnt (talk) 18:03, 20 January 2016 (UTC)[reply]

On the question inside the OP's question, I recall Joel E. Cohen's 1995 book How Many People Can the Earth Support? as being a good, level-headed source. He's quoted in the article Ghmyrtle linked to above.John Z (talk) 01:11, 21 January 2016 (UTC)[reply]

Why do humans sometimes classify themselves as animals and sometimes not?

Sometimes, they may say, "humans and animals", as if humans are not animals. Sometimes, they are more accurate and say, "humans and other animals" or "humans and nonhuman animals". But a picture book about "baby animals" never seems to include a picture of a human baby, and an "animal hospital" is really just a veterinary hospital. 140.254.77.184 (talk) 20:40, 19 January 2016 (UTC)[reply]

The English language is not always very precise; formal definitions depend on context. We have an article on definition that covers this topic thoroughly in the general case. In the specific case of the word "Animal," the word sometimes means something in a formal scientific context; and sometimes it has a more flexible meaning. Nimur (talk) 20:59, 19 January 2016 (UTC)[reply]
(WP:EC) See exceptionalism generally, and human exceptionalism more specifically. The general populace just isn't that precise in common speech. Scientists are usually more careful when they publish, for example the phrase "Non-human primate" gets over 60k results on google scholar [12], and "non-human animal" gets over 27k. Additionally religion can play a role, e.g. most of the world's popular creation myths, and especially the Genesis creation narrative draw a sharp division between humans and other animals. SemanticMantis (talk) 21:00, 19 January 2016 (UTC)[reply]
If humans had not become as integrated as they are today, and there were still different populations of hominids, would "humans" be the term for one population, with the other hominids classified as "non-human hominids"? 140.254.77.184 (talk) 21:23, 19 January 2016 (UTC)[reply]
I don't know, my WP:CRYSTAL is broken. But personhood is a relevant article for this line of inquiry, including Personhood#Non-human_animals and Great ape personhood. SemanticMantis (talk) 21:53, 19 January 2016 (UTC)[reply]
It's really worse than that - it's not at all uncommon to hear people say "Birds, fish and animals" - or "animals and insects". Even when we are trying to be inclusive, we'll often talk about "plants and animals" as a synonym for "all living things" - completely missing bacteria (which are neither) - and more modern definitions of "plant" which don't include fungi.
As with so many things, language is vague in general use - and only gets tightened up to some degree when speaking formally and/or scientifically. Even amongst people who should know better, and in the formality of scientific papers, you'll come across: "Our drug has just been through animal testing" - with the implication that no human trials have yet been done.
This vagueness is probably acceptable because we know what's intended from the context. But if you get into the arguments about whether a tomato is a fruit or a vegetable - genuine confusion does arise due to sloppy English alone.
The problem here is that when we came up with these words, humans were truly NOT considered to be animals. Now we are (well, for most people at least) - trouble is that we now don't have a handy, catchy, well-known word meaning "non-human animal". Perhaps language will eventually catch up.
If you think you're immune to this - quick test: Is a pickup truck an "automobile"? What about an SUV? SteveBaker (talk) 21:17, 19 January 2016 (UTC)[reply]
The "is a tomato a fruit or a vegetable" thing is more about different meanings of the words. In a culinary context, a tomato is a vegetable, because it's a savory plant food. In a botanical context, a tomato is a fruit, as it's the ovary and seeds of a flowering plant. There's nothing wrong with words having multiple meanings, though some people seem to get bothered by it. --71.119.131.184 (talk) 21:27, 19 January 2016 (UTC)[reply]
Hey, thanks. This response prompted me to link polysemy for OP, and then I found a highly relevant article and term of which I was not aware - Essentially_contested_concept. SemanticMantis (talk) 21:53, 19 January 2016 (UTC)[reply]
That's identical to this case, where humans are animals from a biology POV, but not from a religious or legal POV. StuRat (talk) 21:52, 19 January 2016 (UTC)[reply]
The term "animal" means any living thing that "breathes",[13] hence the distinction from "plant" (plants also "breathe" but not in a way that humans would have understood 500 years ago). I would think only the most extreme religious fanatic might deny that humans are biologically animals. But that's mere physiology. "Human exceptionalism" figures into it strongly. And it doesn't require being religious to see the enormous intellectual gulf between humans and every other creature known to us. ←Baseball Bugs What's up, Doc? carrots→ 21:22, 19 January 2016 (UTC)[reply]
Bugs, with due respect, it sounds like you are promoting an etymological fallacy, "that the present-day meaning of a word or phrase should necessarily be similar to its historical meaning." The term "animal" is not wholly defined by its etymology. Nimur (talk) 21:39, 19 January 2016 (UTC)[reply]
There's no conflict between the original meaning and the way it's used today. It's been fine-tuned a bit, and humans weren't initially considered animals as such - but the concept of a living thing that breathes still works. ←Baseball Bugs What's up, Doc? carrots→ 21:43, 19 January 2016 (UTC)[reply]
So you're saying that (for example) a jellyfish (kingdom Animalia) "breathes" but a tree or a bacterium doesn't? Or perhaps you're denying that a jellyfish is an animal? Jellyfish passively allow oxygen to diffuse through their tissues. I don't think "breathing" is a suitable criterion for being an animal. The first paragraph of Animal has a concise definition - from a scientific perspective. From a common person's perspective, it's a mess. They'd probably say it's an animal because it moves around by itself...but that's a rather 'iffy' definition too. SteveBaker (talk) 02:55, 20 January 2016 (UTC)[reply]
I looked for our relevant article, and surprisingly couldn't find one. Animal-Vegetable-Mineral_Man and Animal, Vegetable, Mineral? are not that relevant, despite both referencing Aristotle's trichotomy. SemanticMantis (talk) 22:35, 19 January 2016 (UTC)[reply]
Oh, my: Cheese 'n' crackers! Here's the Stanford Encyclopedia: diff. 22:58, 19 January 2016 (UTC)
SemanticMantis and the OP, our article Kingdom (biology) describes why those people with a scientific education class the human species Homo sapiens in the kingdom Animalia (Latin for "of the animals") - because humans are not members of the other kingdoms (or domains or empires) into which life has been classified by taxonomists. It also discusses Aristotle's initial classification of the animal kingdom, and Theophrastus's later classification of plant life into its own kingdom, the formation of various kingdoms after the invention of the microscope, and newer proposed taxonomies of life based on conjectures of which forms of life have common ancestors, based on DNA sequences.
To the OP: Other posters, above, have been very thorough in pointing to our articles on various reasons why people choose not to call themselves animals. Most of these reasons are listed in our article on Anthropocentrism, one example of which is Genesis 1:26: "And God said, Let us make man in our image, after our likeness: and let them have dominion over the fish of the sea, and over the fowl of the air, and over the cattle, and over all the earth, and over every creeping thing that creepeth upon the earth." The meaning of the word "dominion" in this verse is subject to controversy among Biblical scholars, but whether it implies fee simple ownership or stewardship, the implication remains "Animals over there, humans over this way". loupgarous (talk) 06:12, 20 January 2016 (UTC)[reply]
The issue here is the use of paraphyletic rather than cladistic terms. For example, to this day there are people around here who will swarm out like flies to correct you if you call a chimpanzee a monkey. "Animal" is used like that in popular speech, to include everything in Animalia except humans. Are apes monkeys, are humans apes, are humans monkeys, are apes animals, are humans animals...? Well, that's really a matter of whose semantics you want to use. My feeling though is that most of the professionals have grown altogether uninterested in policing the application of non-cladistic definitions, though I'm sure somewhere in the world right now there is some lawyer desperately fighting over the definition of a duck or something. Wnt (talk) 13:55, 20 January 2016 (UTC)[reply]

January 20

Strings At Absolute Zero

Hey, I was just wondering if this could be explained to me. I am under the impression that string theory says that all matter is made up of strings, and that the way these strings vibrate gives matter its properties. I am also under the impression that at absolute zero matter has no kinetic energy, and neither do its constituents. If matter lacks any kinetic energy, then the strings would be inert, making them not protons or electrons or neutrons etcetera. If this were the case, and we managed to get (I know this is impossible) say, a hydrogen atom to absolute zero, it would not be a hydrogen atom. In fact, it would not be an atom because it would not be composed of protons, electrons or neutrons. What am I not understanding here? JoshMuirWikipedia (talk) 06:36, 20 January 2016 (UTC)[reply]

I know next to nothing about string theory, so I can't make any intelligent comment on that part of the question.
However, I can say that one of your basic premises is wrong. It is not correct that matter at absolute zero has no kinetic energy. There is a minimum zero-point energy. (Of course, as a practical matter you can't get to absolute zero anyway, but that's a separate issue — you can get very close indeed to absolute zero, but there's still a fixed minimum to the kinetic energy below which you can never drop.) --Trovatore (talk) 06:41, 20 January 2016 (UTC)[reply]
(edit conflict)That's not quite what absolute zero is. Absolute zero, as our article describes, is the lowest possible energy state of any matter. For an ensemble of atoms, which describes most ordinary matter, this means the atoms are not moving at all, and yes, would have no kinetic energy. But that simple definition does not hold when looking at other types of matter. You don't even have to get as exotic as unproven strings - a gas of electrons never has zero kinetic energy, even at absolute zero (described in our article). So yeah, "absolute zero" does not mean "zero energy", but rather "minimum energy". In fact, even in our "ensemble of atoms" example, the "zero kinetic energy" result only holds if you take a very Newtonian view of matter and energy. In reality, the constituent particles within atoms are always in motion. Someguy1221 (talk) 06:45, 20 January 2016 (UTC)[reply]
No, sorry, that's absolutely not true. Ordinary matter does have zero-point energy, and the atoms absolutely are moving at absolute zero. --Trovatore (talk) 07:01, 20 January 2016 (UTC)[reply]
Oh, sorry, I didn't read your whole statement. Your last sentence is correct. But it flatly contradicts your third sentence, and your third sentence is blatantly false! Why did you say it, given that you knew it was false? --Trovatore (talk) 07:04, 20 January 2016 (UTC)[reply]
It's not false, given, as I said, a Newtonian view of matter. A block of wood sitting motionless at absolute zero has no "v" in KE = (1/2)mv². Of course, that's true of any block of anything with zero measured velocity. It was simply meant to be a train of thought showing that what we think of as "kinetic energy" in high school physics is a very incomplete picture that ignores microscopic properties. Someguy1221 (talk) 00:17, 21 January 2016 (UTC)[reply]
It's not false given a Newtonian view of matter? But the Newtonian view of matter is false, so that's meaningless. We should never tell people that matter has no kinetic energy at absolute zero, because it's false. --Trovatore (talk) 00:52, 21 January 2016 (UTC)[reply]
And yeah, I did write it in a stupid way, now that I reread it. Someguy1221 (talk) 00:18, 21 January 2016 (UTC)[reply]
Also, it's not just "the constituent particles" with in the atoms that are always in motion. It's the atoms themselves. --Trovatore (talk) 07:26, 20 January 2016 (UTC)[reply]
You can never get to a zero-energy point with matter, if only because E = mc². Couple that with other quantum effects and you can't localize an atom, which means it can move without consuming energy. --DHeyward (talk) 13:09, 20 January 2016 (UTC)[reply]
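To put a rough number on the zero-point energy mentioned above: for a quantum harmonic oscillator the ground-state energy is E0 = (1/2)hν, which is nonzero even at T = 0. A minimal sketch using the H2 stretch vibration as an example (the 4401 cm⁻¹ wavenumber is a rough literature figure, assumed here purely for illustration):

```python
import math

h = 6.626e-34        # Planck constant, J*s
c = 2.998e10         # speed of light, cm/s
wavenumber = 4401.0  # H2 vibrational wavenumber, cm^-1 (rough literature value)

nu = wavenumber * c      # vibrational frequency, Hz
E0 = 0.5 * h * nu        # zero-point energy, J
E0_eV = E0 / 1.602e-19   # convert J to eV

print(round(E0_eV, 2))   # ≈ 0.27 eV of motion that never goes away
```

About a quarter of an electronvolt of vibrational motion remains in each H2 molecule no matter how cold it gets, which is why "absolute zero" does not mean "no motion".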

Low boiling point solvents have lower enthalpies of vaporization than water, yet they feel cooler to the touch than water.

I used dichloromethane to clean up some sticky tape residue the warehouse staff had left on my second-hand hotplate (the tape itself was no longer there -- someone probably just stuck a sign to it using invisible tape but then didn't wipe it off -- and the hotplate otherwise looked clean until I turned it on to test it, only for bad smells to evolve and the black shadow of polymerized sticky tape residue to emerge). I used a combination of soap and water in one hand, and some dichloromethane-wetted towels in the other. I immediately noticed that the DCM-wetted towel felt very cool to the touch, even through gloves, though it was drying out more quickly than the water towel, while the soap-and-water towel got very warm. After cleaning off the burnt sticky tape residue, I took off my gloves and threw away the paper towels; I noticed the DCM-wetted paper towels were still very cold, despite having far less heat capacity than the soap and water, which AFAIK should not have warmed as quickly.

I also notice that, hotplate issues aside, dichloromethane and ethanol both produce a superior cooling effect to water despite their lower heats of vaporization. This poses a curious optimization problem for a given ambient temperature T, assuming (to simplify the scope of the problem) that the reservoir being cooled is big enough that it doesn't change temperature significantly as it loses heat (our body temperature doesn't drop either). So I note that the power loss for the heat reservoir being cooled by vaporization of the solvent should be the following:

W = number of moles of solvent in contact with the reservoir (n) * equilibrium fraction of solvent in the vapor phase (K) * rate constant (k) * heat of vaporization (H)

but K = e^(-H/RT+S/R)

so W = n * k * e^(S/R) * H * e^(-H/RT)

e^(S/R) is just a constant depending on the entropy of vaporization; call it s

Assuming that k and n are constant and don't vary with H (k and n represent more like the physical engineering issues of surface area and heat flow and whatnot, independent of solvent), and that H doesn't vary significantly with temperature,

  • then dW/dH = -k*n*s*H/RT * e^(-H/RT) + n*k*s*e^(-H/RT) = e^(-H/RT) * n*k*s * (1-H/RT)
  • dW / dH = 0 when H/RT = 1 or when H = RT (I believe this is a maximum)

Thus, the fastest-cooling solvent for a heat reservoir at room temperature should ideally have a heat of vaporization as close to RT ≈ 2.48 kJ/mol as possible, and any lower or higher would result in a lower cooling rate? Yanping Nora Soong (talk) 11:41, 20 January 2016 (UTC)[reply]
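The optimum derived above is easy to check numerically. A minimal sketch (the constants n, k and the entropy factor e^(S/R) are arbitrary placeholders here, since only the dependence on H matters):

```python
import math

R = 8.314  # gas constant, J/(mol*K)
T = 298.0  # room temperature, K

def cooling_power(H, n=1.0, k=1.0, s=1.0):
    """W = n * k * s * H * exp(-H/RT), with s standing in for e^(S/R)."""
    return n * k * s * H * math.exp(-H / (R * T))

H_opt = R * T  # analytic maximum from setting dW/dH = 0
print(H_opt / 1000)  # ≈ 2.48 kJ/mol

# Confirm it is a maximum: W falls off on either side of H_opt.
assert cooling_power(H_opt) > cooling_power(0.5 * H_opt)
assert cooling_power(H_opt) > cooling_power(2.0 * H_opt)
```

The curve H·e^(-H/RT) is fairly flat near the peak, so real solvents with H well above RT (water is ~44 kJ/mol, DCM ~28 kJ/mol) sit far down the falling side, consistent with the observation that the lower-H solvent cools faster.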

There is a partial pressure of water vapor in the atmosphere, whereas organic solvents have virtually no partial pressure in it. High humidity and low humidity affect the cooling efficiency of water but not of solvents, as far as I know. The lower the heat of vaporization, the faster the substance will carry heat away from the source through vaporization.--DHeyward (talk) 13:02, 20 January 2016 (UTC)[reply]
Dichloromethane has a vapor pressure of ~0.5 atm around 24 °C. Yanping Nora Soong (talk) 11:39, 21 January 2016 (UTC)[reply]

Carbon based Zeolite

Hello,

The International Zeolite Association (IZA) states (http://www.iza-structure.org/databases/ModelBuilding/Introduction.pdf) that zeolites can be built with "tetrahedral TO4 frameworks, where T may be Si, Al, P, Ga, B, Be etc.." - Why not carbon, since carbon forms tetrahedral structures as well? Is it (theoretically) possible to form 3-D ether (?) structures, just as crown ethers are 2-D ones?

Thanks for any hint. — Preceding unsigned comment added by 192.54.145.66 (talk) 11:43, 20 January 2016 (UTC)[reply]

The problem is that these solid polymerized carbon oxides would quickly depolymerize to carbon dioxide, due to the very stable nature of the C=O double bond. Si, P, B, etc., by contrast, don't form strong double bonds with oxygen, because of poor orbital overlap. Note that I doubt the empirical formula of these "3D ethers" would be CO4, as that would require the inclusion of dioxygen (nor is SiO4 the formula of silicon dioxide (silica)). I suppose "3D ether structures" could be possible at very high pressures and very low temperatures. Yanping Nora Soong (talk) 11:53, 20 January 2016 (UTC)[reply]
Indeed, the "TO4" is usually written "TO4/2" to reflect both the tetrahedral structure and the fact that oxygen atoms are shared between two "T", thus the global formula is TO2 (like SiO2).192.54.145.66 (talk) 13:18, 20 January 2016 (UTC)[reply]
PMID 22540597 is just the first of many literature references I found for the carbon analog. Note that it indeed only apparently exists under special/extreme conditions. DMacks (talk) 17:33, 20 January 2016 (UTC)[reply]
That's the answer. I should add that "polymeric carbon dioxide" (or CO2) are good search terms. See [14] [15] etc. Wnt (talk) 17:53, 20 January 2016 (UTC)[reply]

CD jitter

I am looking for references on how much CD jitter is needed to be audible to the average person.--178.105.166.117 (talk) 17:58, 20 January 2016 (UTC)[reply]

This is going to be a complicated problem to define. Do you mean the jitter in the buffering of digital audio frames, or jitter in the analog frequency (or phase) during the digital-to-analog conversion, or jitter along some other axis?
One place to start reading is our article on psychoacoustics, the scientific study of sound perception. This field has a tendency to err towards the "subjective," but it can be conducted in a very quantitative fashion, if you read the right books. I highly recommend the online textbooks of Julius Orion Smith; these are available at zero cost.
For example, from Spectral Audio Signal Processing: Further Reading, links to a dozen research publications on acoustic perception as it pertains to various digital models, including a lot of discussion about sensitivity to noise.
Nimur (talk) 18:38, 20 January 2016 (UTC)[reply]
Last time I worked with CDs, the method of decoding audio was through single-bit (sometimes multi-bit) sigma-delta modulators. Phase noise on the oversampled bitstream feeding the analog decimation low-pass filter is, I believe, what you are looking to quantify. Jitter associated with reading the disc and buffering is not a factor. Oversampling tends to push noise out of the baseband. --DHeyward (talk) 20:05, 20 January 2016 (UTC)[reply]

OP: Final D/A clock jitter - no oversampling. Some sources say you can't hear 2 ns pk-pk jitter. Some say it has to be less than 100 ps to be inaudible. Who is right?--178.105.166.117 (talk) 17:33, 21 January 2016 (UTC)[reply]

Such a specification implies that a listener can hear spectral purity well above the actual spectral purity of the true signal. This seems very unlikely: I suspect neither noise specification is accurate. Nimur (talk) 18:30, 21 January 2016 (UTC)[reply]
I don't know enough to comment on the figures, but I do agree with Nimur, and think you should be very careful about anything you hear relating to audio (no pun intended). There are a lot of myths about audio that some self-described audiophiles believe and seem to spread, despite having basically no good evidence for their claims (often based on flawed understandings of the science or the basics of how something works).

I'm not completely sure whether [16] is trustworthy, but the writer does at least mention ABX and double-blind testing (albeit only that the test referred to didn't involve it), and the Audio Engineering Society, who I believe tend to reject such crap. The writer suggests anything less than -100 dBFS, although that's for the whole system including recording. As mentioned there and in [17] + [18], jitter is one of the areas where there seems to be a lot of myths and misinformation.

Theoretically you'd expect those working as sound engineers for music, movies etc, and those who work on mastering, to avoid such myths, but I've read enough stuff like [19] to make me think some of them do seem to accept and even spread such myths. It's particularly surprising how often you hear about such people who seem convinced of something, yet there's no mention of an ABX or similar test (even when it sounds like it would be easy).

The spread of 96 kHz/24-bit or even 192 kHz audio for the end human user (as opposed to the processing stage, where 24-bit is useful) is another sign, despite the fact that there's very little evidence anyone can hear the difference. Well, other than distortions introduced by equipment, which is unlikely to generally be considered beneficial [20], and the possible advantages in processing of such audio for a different target market [21]. And I don't think we can totally blame marketing departments for this either.

Frankly, the very high percentage of the overall bitrate of many video formats devoted to audio thanks to these moves, and also the move to lossless formats, is IMO another sign. Although at least there's often some evidence that someone with super equipment who is looking for the artifacts can detect them at the common bitrates used for the lossy compression schemes with video. Albeit if you increase the bitrate (while staying significantly below whatever your lossless compression would need), you'll probably get transparency for nearly everyone. BTW the Hydrogenaudio forums are always a useful source. [22], while targeted at audio engineers, seems a decent basic introduction to the crap that audiophiles can spread.

Nil Einne (talk) 17:29, 23 January 2016 (UTC)[reply]

This is also related to the mythos surrounding perfect pitch, the reputed ability of certain "musical prodigies" to produce and/or identify frequency-accurate tones. All you need to refute them is a double-blind test with a microphone and a modern digital oscilloscope. Audio signal frequency, in the real world, is probably accurate to within 1 Hz. This works out great, because the bandwidth of a "perfect sinusoidal tone," even from one of the more purely spectral instruments like a flute, is still quite wide. If you look at the timbre of a resonant instrument like a grand piano, it becomes clear that we are hearing very broadband, noisy tones - and it is this quality that we find aesthetically pleasing.
Again, I point to the texts I linked earlier, in which the formal mathematics are worked out and compared against psychological studies of sound perception. We humans really like phase noise to be added to our music. Most electronic instruments have to add phase noise so that they sound more natural. Trying to "perfectly" construct a signal waveform is overkill; humans can't hear accurately enough for bit-perfect, sample-for-sample recreation to actually matter very much below, oh, maybe 10 microseconds (this is, not coincidentally, pretty close to the time-scale that corresponds to the upper frequency range of our hearing).
What we do hear, and find annoying, are spectral noise spurs, shot noise (unwanted impulses and jump discontinuities), and very high amplitude white noise. Technically, all of these noise sources can be mathematically transformed and represented as phase noise (equivalently, "jitter" in the time domain); but if you want to play mathematical games, ... well:
Here's a fantastic white-paper from Maxim Semiconductor on the theory and practice of random noise contributing to timing jitter. Pay close attention to the frequency axis on their Bode plots! Maxim mostly makes high-frequency RF and mixed signal devices: so when we build a 200MHz amplifier or a 20 GHz amplifier, then ... yes, we worry about a few nanoseconds of timing jitter. In audio devices, at this magnitude, phase noise sources are literally and actually quantitatively thousands of times too small for you to notice. They are way below your noise floor. It is a near-certainty that your stereo picks up more noise from the next-door neighbor's refrigerator compressor motor. Any "real" audiophile who says they can hear audio jitter at one gigahertz probably needs to put themselves and their Hi-Fi stereo in an RF-clean remote location in the desert, and while they're at it, they should probably ask the Air Force to shut down the GPS satellites for a few hours so remote radio signals from hundreds of miles away won't interfere with their amplifier noise floor.
Those high voltage power lines a few miles out of China Lake sure do put out a lot of radio-noise in the audible frequency spectrum!
I doubt even a trained musician - even one who claims to have perfect pitch - could detect as huge an error as 10 microseconds of peak-to-peak jitter, provided no samples are dropped and there are no gaps in playback (because a gap would be zero-filled, in other words mathematically equivalent to shot noise, not phase error).
Nimur (talk) 19:17, 24 January 2016 (UTC)[reply]
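One way to sanity-check the two figures quoted earlier (2 ns and 100 ps) is the standard worst-case data-converter bound relating sampling-clock jitter to the achievable SNR of a full-scale sine: SNR ≈ -20·log10(2π·f·t_jitter). A sketch (this is the usual ADC/DAC textbook bound, not a psychoacoustic threshold, and it treats the quoted figures as RMS jitter for simplicity):

```python
import math

def jitter_limited_snr_db(f_hz, t_jitter_s):
    """Worst-case SNR (dB) for a full-scale sine at f_hz with RMS clock jitter t_jitter_s."""
    return -20 * math.log10(2 * math.pi * f_hz * t_jitter_s)

f = 20e3  # worst case: top of the audible band
print(round(jitter_limited_snr_db(f, 2e-9)))    # 2 ns  -> ~72 dB
print(round(jitter_limited_snr_db(f, 100e-12))) # 100 ps -> ~98 dB
```

By this bound, 2 ns of jitter could in principle raise the noise floor above 16-bit CD's ~96 dB quantization floor at the top of the band, while 100 ps sits right at it, which may be where both quoted numbers originate; whether that noise is actually audible is a separate psychoacoustic question.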
Well, I guess that when sampling at 44.1 kHz, only frequencies up to 22.05 kHz can be transmitted exactly, so a square or rectangular wave loses its higher harmonics. It might be the same as displaying a lower, non-exact-half resolution on a TFT monitor: TFTs have a fixed number of pixels, and the picture can be fitted to the screen, but it is not as sharp as at the TFT's physical resolution. Those losses will occur. --Hans Haase (有问题吗) 17:54, 21 January 2016 (UTC)[reply]
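To make the 22.05 kHz Nyquist limit concrete: a square wave consists of its fundamental plus odd harmonics, and any harmonic above half the sampling rate cannot be represented, so a square wave near the top of the band comes back as a pure sine. A toy sketch (the 10 kHz fundamental is an arbitrary choice):

```python
fs = 44100            # CD sampling rate, Hz
nyquist = fs / 2      # 22050 Hz

f0 = 10000            # a 10 kHz "square wave"
harmonics = [f0 * n for n in (1, 3, 5, 7)]  # odd harmonics of a square wave

# Only components below the Nyquist frequency survive sampling.
representable = [f for f in harmonics if f < nyquist]
print(representable)  # [10000] - only the fundamental, i.e. a sine wave
```

Whether this matters audibly is another question, since those discarded harmonics lie above the range of human hearing anyway.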

Natural habitat of birds in the city

If birds (i.e. hawks) live in the busy city, is the city their "natural habitat"? Would it still be the "wilderness" if the raccoon digs into the trash bin or dumpster behind a restaurant? Although the "wilderness" seems to evoke imagery of trees and rocks, does the wilderness have to be trees and rocks and things that Mother Nature made directly? Can the "wilderness" be man-made structures that are abandoned and overtaken by rodents that may dig into trash? 140.254.77.208 (talk) 19:15, 20 January 2016 (UTC)[reply]

  • "Wilderness" is the wrong word here, since it implies lack of human settlement. You want "environment" which can include an urban environment. μηδείς (talk) 20:35, 20 January 2016 (UTC)[reply]
  • (ec)"Wilderness" implies uninhabited by humans, or nearly so.[23] Some would half-joke that NYC is a "wilderness", but there is no question that it has plenty of human habitation. ←Baseball Bugs What's up, Doc? carrots→ 20:37, 20 January 2016 (UTC)[reply]
It does? <g> Coyotes and other "wild animals" have been found in Manhattan - the fact is "wilderness" may apply to any area not under direct use by Man within any reasonable length of time. Cities have been around for a very long time now (likely well over 10,000 years from current reports), meaning that yes - some birds can and do treat urban areas as "natural habitats". Collect (talk) 20:42, 20 January 2016 (UTC)[reply]
  • Well, it's alleged to be, at least. :) Many types of creatures dwell in cities, and it is their "normal" habitat, for sure. It's not "wilderness", though. ←Baseball Bugs What's up, Doc? carrots→ 20:45, 20 January 2016 (UTC)[reply]
I wouldn't call a city their "natural habitat" unless they have actually evolved to adapt to it, and can no longer survive outside the city. StuRat (talk) 04:13, 21 January 2016 (UTC)[reply]
What if they can survive outside the city but survive better in the city? Isn't the city then their natural habitat? Examples include urban rats and urban pigeons. Robert McClenon (talk) 16:17, 21 January 2016 (UTC)[reply]
From the same window in Manhattan, I've seen a peregrine falcon fly by with a pigeon in its talons on two separate occasions. That falcon has adapted to city living just as the cliff-dwelling pigeons did. There is no reason to require that populations evolve specialized traits if they are well suited to the urban environment. The fact that they have developed the ability to find sustainable food supplies and nesting sites to produce new generations indicates that they have colonized the city successfully. BiologicalMe (talk) 17:01, 21 January 2016 (UTC)[reply]
Commensalism and synanthrope are relevant here. E.g. Norway rats and the house mouse are only found at very low densities outside of human habitations. There are many critters whose natural environments are primarily human enclaves. We also have an article on urban wildlife.

How much time can a fish survive outside water?

I asked on IRC but they sent me here. I don't want information about big fish like sharks; I want information about small fish about the size of a human palm. Thank you 188.42.233.35 (talk) 21:33, 20 January 2016 (UTC)[reply]

7 hours, according to the Daily Mail. And there are also lungfishes, which live even longer out of water. --Scicurious (talk) 21:57, 20 January 2016 (UTC)[reply]
Amphibious fish is worth a read if that's what you're interested in, mudskippers can survive for a very long time out of water in the right environment 95.148.212.178 (talk) 22:16, 20 January 2016 (UTC)[reply]
For a fish without lungs that doesn't get lucky, maybe a few minutes. StuRat (talk) 04:10, 21 January 2016 (UTC)[reply]
The problem is not one of lungs, because fish initially have no problem breathing in air, where they can find far more oxygen than they need. Their problem is that outside water the gills eventually dry out and cannot exchange oxygen any more - this is when they slowly suffocate. See Fish gill#Breathing without gills 192.54.145.66 (talk) 14:19, 21 January 2016 (UTC)[reply]
Decades ago when I was (more) interested in fish and angling, I read that (British) hobbyists who raised prize carp, would routinely take them on train trips to Fish Shows in the UK or the nearer Continent (France, Belgium etc.) by wrapping them in wet newspaper and carrying them under their arms. Different species and families of fish differ in their ability to absorb oxygen through their skins, per 192.54's link above, so a general answer may not apply to a particular species. {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 14:44, 21 January 2016 (UTC)[reply]
But didn't the owners experience nerve pain when they passed through a tunnel?98.213.49.221 (talk) 20:03, 21 January 2016 (UTC)[reply]
I had always thought that rather than the gills drying out, when fish are taken out of water, the gills are no longer supported by the water so the gill lamellae collapse on each other causing a dramatic (fatal) decrease in the ability for oxygen exchange.DrChrissy (talk) 17:12, 21 January 2016 (UTC)[reply]

January 21

Is it true a lot of people with down syndrome sound German/Russian when they talk?

I wonder if they do. 208.181.190.136 (talk) 03:35, 21 January 2016 (UTC)[reply]

Sounds like a myth. This review of Down syndrome speech impediments mentions no such thing, though it may be a useful place for you to learn why people with Down syndrome often sound the way they do. Someguy1221 (talk) 05:44, 21 January 2016 (UTC)[reply]
Yes, it is true that a lot of people with Down syndrome sound German or Russian when they speak. The huge majority of them are found living in Germany and Russia. English-speaking people with Down syndrome do not sound German or Russian. In the UK they speak English with whatever accent they have absorbed from their social milieu. This is based on personal research (shock - horror) from meeting dozens of people with Down syndrome. I have often wondered why Canadian people sound like Americans with a Scottish grandfather. Richard Avery (talk) 07:58, 21 January 2016 (UTC)[reply]
My American friend has two Scottish grandfathers, and doesn't sound one bit Canadian. μηδείς (talk) 17:34, 21 January 2016 (UTC)[reply]

Anti-animal armor ?

For zookeepers and others who need to deal with potentially dangerous animals, wouldn't having a suit of armor that could protect them be a good idea ? Seems like that would be preferable to using tranquilizer darts on all the animals in an enclosure, say if the zookeeper needs to go in and retrieve something dropped into it accidentally. So, has this approach ever been tried ? StuRat (talk) 04:03, 21 January 2016 (UTC)[reply]

Tranquilizers are not without risk; zoos generally don't tranquilize animals unless they have a very good reason to, and I doubt something being dropped accidentally into an enclosure qualifies. I think most enclosures would have a "locked" area that the animal can be herded into. Also, I doubt there is a suit of armor that could do much to protect someone from a really powerful animal like a tiger or a rhino. Vespine (talk) 05:33, 21 January 2016 (UTC)[reply]
Also there are those Bite suits that are used for training attack dogs, such as for the police department. Surprised I can't find a wiki article for it. Vespine (talk) 05:36, 21 January 2016 (UTC)[reply]
Such suits have been built, notably by Troy Hurtubise, who won an Ig Nobel Prize for actually testing his invention. I'm going to make a wild guess that most sane people don't want to actually put such suits to the test. Either the thing that's been dropped is not important enough to do anything but wait for the animals to wander into the lockable part of the enclosure, or the thing is so important you have to get it out right away and use tranqs. Someguy1221 (talk) 05:40, 21 January 2016 (UTC)[reply]
And, body armor is really heavy, and limits your movement. Ask anyone who's worn a bulletproof vest. On top of that it still won't do much to protect you against blunt trauma (this is why maces and other bludgeoning weapons were useful), which could easily be inflicted by many large animals. Armor won't stop an elephant from trampling you. --71.119.131.184 (talk) 05:48, 21 January 2016 (UTC)[reply]
Another possibility is the risk of injury to the animal. ←Baseball Bugs What's up, Doc? carrots→ 11:45, 21 January 2016 (UTC)[reply]
There are dog attack suits or bite suits you can find on web search, and of course the shark suit. Wnt (talk) 12:13, 21 January 2016 (UTC)[reply]
Have to agree it doesn't seem that such suits would be particularly useful in the scenario outlined, or really in any likely zoo scenario. As also mentioned, it's unlikely something falling into the cage would be considered urgent in most cases.

If for example there is risk to human life, like a baby or child that's dropped into the cage, or perhaps a live weapon, an attempt would be made to draw the animal into the secure area; meanwhile people with tranquiliser guns and live weapons will go on standby to shoot the animal if necessary and if it can be carried out without risk to human life. (The area around the cage is likely to be cleared as much as possible.) If it's fairly urgent, e.g. something that's fairly poisonous, most likely an attempt will be made to draw the animal to the secure area. Perhaps tranquiliser teams will be put on standby. If it isn't something that really matters, e.g. someone's digital camera, then it'll probably simply be left there and removed later, perhaps when the cage is normally cleaned.

During routine cleaning, animal/s are drawn and locked into (or perhaps they already are in) a separate enclosure if necessary [24] [25] [26]. Often I believe they are fed at the same time, which makes it easier to draw them into the separate enclosure (if needed) and also distracts them during the cleaning. Multiple gates may be used to try and prevent accidents [27]. The people involved should hopefully also have experience and training in how to deal with such animals.

I guess routine use of the suit may provide some emergency advantage if all the measures fail and an animal does end up in the same enclosure. But the reduction in mobility and visibility, combined with the extra weight and heat and unclear protection, means it's probably not particularly useful compared to other methods of reducing the risk of harm. And there would also likely be other issues, like the difficulty & cost of cleaning such suits.

Tranquilisers are only used in genuine emergencies or when an animal needs to be inspected up close or operated on and other methods [28] won't do. They aren't used just because someone happens to drop something into the cage, nor against people in gorilla suits [29].

All the examples above seem to be either cases where you want the animal to bite you (dog training suits), or where you want to be able to get very close for entertainment, study or other purposes and lack the ability to enforce separation (the bear suit and shark suit). In fact, considering the shark suit example, shark-proof cages are used when you intentionally want to get close to a shark.

Nil Einne (talk) 14:40, 21 January 2016 (UTC)[reply]

Surprisingly, zoos have thought about how to manage animals and their enclosures. Larger animals likely to be dangerous to the staff will be housed in accommodation that has at least two separate, lockable areas. This allows those areas to be cleaned or maintained in whatever way in the physical absence of the animals. The other common reason that staff may want to get close to larger dangerous animals is for physical examination or treatment. Apart from the reasons given above about lack of mobility and practicality, no amount of armour or physical protection is going to allow any animal keeper(s) to subdue a warthog, let alone a chimpanzee or gorilla, to the point where physical examination or treatment is possible. For some curious reason large primates, large cats, and the like just won't calm down however many keepers in armour or bite-proof clothes hold them down. Some animals will surprise you though. Richard Avery (talk) 15:00, 21 January 2016 (UTC)[reply]
They do use armour when it is appropriate. E.g. Snake handlers use snake proof boots, and heavy gloves like these [30]. Beekeepers also wear full protective suits. I think you also might be underestimating how many different ways that zookeepers can control their animals. If something valuable got dropped into the tiger pit, they'd not tranquilize the tigers, they'd probably feed them in another enclosure then close the door. If you want to learn more about actual best practices, many zoos have "behind the scenes" tours, and of course many documentaries show how zookeepers manage their flocks. Here's a clip from the Detroit zoo [31]. If you want to learn more about armour, that's known as personal protective equipment in professional contexts, so searching things like /[zoo, animal of interest] PPE/ will show you some selection of what is used. Just searching /animal ppe/ got me the NIH guidelines, which require bite-proof gloves for some situations, but not all [32] We also have a whole article on United_States_environmental_and_occupational_health_in_zoos. SemanticMantis (talk) 15:07, 21 January 2016 (UTC)[reply]
Many large animals (e.g. elephants, rhinos) in zoos require daily inspection and perhaps daily treatment during illness. This is often achieved with positive reinforcement training. The animals are trained to approach an area of their enclosure which is designed for "protected contact" and is almost always "behind the scenes". The animal can be called with voice commands, and so can be summoned if, e.g. something is dropped in the enclosure. This contact area is reinforced with a heavy steel structure which allows the animal to place limbs or areas of the body in such a way they can be examined closely or treated by humans. So, it is a suit of armour of kinds, but it is immobile and not worn.DrChrissy (talk) 17:33, 21 January 2016 (UTC)[reply]
Thanks, "protected contact" is a very good key term. Here's a bunch of photos showing exactly what you describe [33]. SemanticMantis (talk) 17:41, 21 January 2016 (UTC)[reply]
Some great examples there - thanks for those - as usual, a picture speaks a thousand words.DrChrissy (talk) 17:49, 21 January 2016 (UTC)[reply]

Lost mathematics and scientific discoveries

I have heard that a lot of scientific and mathematical data and knowledge is being lost at an unprecedented rate, both online and in print. How can we retrieve lost scientific and mathematical knowledge? — Preceding unsigned comment added by 100.38.74.62 (talk) 13:55, 21 January 2016 (UTC)[reply]

Where have you heard this? Jstor has far more articles online than they did 10 years ago, and they are continually adding more articles from further back in time and more obscure journals. The Royal Society put a huge chunk of their archive (starting from 1665) online and publicly accessible, only in 2011 [34]. SemanticMantis (talk) 15:15, 21 January 2016 (UTC)[reply]
I guess you'll need to show us where you got this claim from...probably the original source for the claim could illuminate your question somewhat.
That said, the data behind an article in a scientific journal might maybe be destroyed, or at least be inaccessible somehow. I suppose that might be what they are referring to.
But if something is "lost" (in the sense of being "misplaced"), rather than being "destroyed" - then maybe all that's needed is better search technology?
Either way - please tell us where you heard this - maybe we can come up with a more concise answer. SteveBaker (talk) 15:32, 21 January 2016 (UTC)[reply]
  • The OP may be referring to data rot and should check out the much more comprehensive digital preservation. Not that the data was scientific, but I just threw out about 200 video tape recordings made in the 80's and 90's which have become physically unplayable or distorted to the point of uselessness. I did take the precaution of backing most of them up to DVD over a decade ago. My cassette tape collection is entirely useless. μηδείς (talk) 17:30, 21 January 2016 (UTC)[reply]
  • Our various links from curation may also be of use. The OP may be thinking of the increasing trend (or at least, increasing public perception) of libraries and other institutions throwing out large sections of their collections to make space. We don't have an article about the topic in particular (that I can see anyway), but there are tons of links out there, mostly either by people who just found out about the practice and are having an apoplectic attack or by the staff who have been discreetly doing it for years trying to talk people off the ledge. Here's a good example. Matt Deres (talk) 20:20, 21 January 2016 (UTC)[reply]
Weeding (library). DMacks (talk) 20:37, 21 January 2016 (UTC)[reply]
I think it is a known fact in informatics that science grows on its skin; in other words, only the most recent articles are referenced in current publications. Most old publications are forgotten, but I believe this is a normal process because, in some way, the correct knowledge is being preserved. Also, I doubt this concerns mathematics, only medicine and the natural sciences. --AboutFace 22 (talk) 23:53, 21 January 2016 (UTC)[reply]


There are cases where knowledge in individuals is lost, but a record still exists of that knowledge somewhere. One example is knowing how to use a slide rule. StuRat (talk) 07:52, 22 January 2016 (UTC)[reply]
I guess the point is that scientists and mathematicians don't have a particular responsibility to also be archivists and historians - so the loss of data that is no longer needed to conclusively prove things that are already well known and understood may not be of huge concern to them. In general, if some old piece of theory is at risk of being overthrown, someone will go and re-do the experiments it was based upon with more modern methods and equipment as a double-check. SteveBaker (talk) 17:40, 22 January 2016 (UTC)[reply]

Could STIs that are tolerable in one human population be deadly in another?

In other words, is there a human population that has been isolated from the rest of the world and may have no immunity or tolerance towards a certain STI, but then somehow intermarriage between the isolated human population and another human population occurs which may cause the formerly isolated population to be susceptible to disease? 140.254.136.149 (talk) 18:12, 21 January 2016 (UTC)[reply]

This is part of the history of syphilis. --TammyMoet (talk) 18:46, 21 January 2016 (UTC)[reply]
I see that syphilis is not from biblical times as I previously thought. I am not sure where I have heard this, but a long time ago, I heard that syphilis and gonorrhea came from biblical times. Now in retrospect, I think "in biblical times" really means "a long time ago". 140.254.136.149 (talk) 19:28, 21 January 2016 (UTC)[reply]
History of syphilis is more in-depth. The complementary article would be yaws. Matt Deres (talk) 20:24, 21 January 2016 (UTC)[reply]

Force of gravity at the center

[I understand that Newtonian gravity, while superseded by relativistic theory, is still accurate for general purposes. Please correct me if my understanding be wrong.]

Newton observed that the force of gravity is inversely related to the square of the object's distance from the center of gravity; if I'm 1,000,000 km away from a body, I experience far less gravitational force (as far as it's concerned) than I do if I'm 1,000 km away from it. But what if you're 0.5 km from the center? Since the denominator for the inverse-square law calculation is 0.25 km², you should experience immense gravitational force (you're quadrupling the force, rather than dividing it as normal), but at the same time, since you're so close to the center of gravity, you're being pulled essentially equally in all directions, so you should experience essentially no gravitational force. Both can't be right; I've mangled something somewhere.

Furthermore, imagine that you're climbing an indefinitely strong ladder to Earth's center (geothermal heat ignored, so you can go all the way down without melting yourself), with indefinite strength and time — you can go either up or down, stopping whenever you wish without your weight breaking the rungs, so that you can feel your weight instead of falling and therefore being weightless. At what point, or in what area, will you weigh the most? Nyttend backup (talk) 20:25, 21 January 2016 (UTC)[reply]


The net gravitational force inside a uniform spherical shell is zero. This is a result derived by Gauss a long time ago. That means that once you are under the surface of a planet, the material above you has no gravitational effect, so in F = G*m1*m2/r^2, m1 reduces as r reduces. I'll leave you to do the (trivial) maths. Greglocock (talk) 21:06, 21 January 2016 (UTC)[reply]
[ec] A unique property of the inverse square law is that, inside a sphere, the gravitational attraction from the mass of the exterior of the sphere cancels out, so the gravitational force is the same as on the surface of an isolated sphere - in your example, a sphere 0.5 km in radius. It's an interesting metaphysical point (or, at least, an element of the weak anthropic principle) that, given three-dimensional space, the universe only works if gravity follows an inverse-square law. For the second question, the gravitational force varies linearly with the distance. In Newton's equation F = GMm/r², M, the effective mass of the planet (the part of it below you), is proportional to the cube of the distance r, so the equation becomes F = G(kr³)m/r², or F = Gkmr. Tevildo (talk) 21:15, 21 January 2016 (UTC)[reply]
You may be interested in the shell theorem. Basically, the "force of gravity is inversely related to...distance from the center of gravity" rule is only valid if the source of gravity is spherically symmetrical and you are measuring the force from outside the sphere. The moment you enter a sphere (like the Earth), you can't use the same simplified equation (GMm/r²) anymore. The shell theorem explains why, as well as what happens when you enter the sphere. In fact, the gravity drops off to zero as you approach the center. Gravity would only approach infinite strength if the source of the gravity were compressed into a single point, as in a gravitational singularity, as this would be the only way to get arbitrarily close to the source of gravity without entering it. Someguy1221 (talk) 21:30, 21 January 2016 (UTC)[reply]
Indeed. To supplement the shell theorem article, we also have an article on Gauss's law for gravity. Nimur (talk) 21:49, 21 January 2016 (UTC)[reply]
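For readers who want to see the shell-theorem result above numerically, here is a short sketch (mine, not from any of the answers) assuming an idealized uniform-density Earth; the real Earth is denser toward the core, so actual gravity peaks below the surface rather than falling off linearly all the way down:

```python
# Gravitational acceleration versus distance from the center of a
# uniform-density sphere, per the shell theorem discussed above.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg
R = 6.371e6     # Earth's mean radius, m

def g(r):
    """Gravitational acceleration (m/s^2) at distance r from the center."""
    if r >= R:
        return G * M / r**2   # outside: ordinary inverse-square law
    return G * M * r / R**3   # inside: only the mass below you counts

print(round(g(R), 2))      # ~9.82 m/s^2 at the surface
print(g(0.0))              # 0.0 at the center
print(round(g(R / 2), 2))  # ~4.91, half the surface value: linear inside
```

So on the indefinitely strong ladder in the question, a uniform-Earth climber would weigh the most at the surface and steadily less on the way down.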

January 22

gravitons

How do they whizz here, there and everywhere at infinite speed to convey the force of gravity between all objects in the universe?--178.105.166.117 (talk) 01:25, 22 January 2016 (UTC)[reply]

The graviton is a hypothetical particle (not discovered yet), which is supposed to travel at the speed of light. Tgeorgescu (talk) 01:41, 22 January 2016 (UTC)[reply]
One thought experiment which I still find fascinating: given that light takes about 8 minutes to travel from the sun to the earth, IF it were somehow possible for the sun to just instantly wink out of existence, we would not have any way of knowing this had happened for 8 minutes. The sun would still be visible in the sky and the earth would keep orbiting the now non-existent sun for another 8 minutes. After those 8 minutes, we would "see" the sun disappear, and instead of following its previously curved orbit, the earth would start to travel in a straight line in the direction it was last traveling when the sun vanished. That is to say, gravitons do not have infinite speed. Vespine (talk) 02:37, 22 January 2016 (UTC)[reply]
The thing is, no two points in spacetime can interact faster than c, so to an observer on Earth the Sun only stops existing when they see it go bye-bye. There's no way the observer could "know" the Sun stopped existing while the Sun's light is still getting to them. --71.119.131.184 (talk) 21:51, 22 January 2016 (UTC)[reply]

We actually have an article on the Speed of gravity. There is no direct measurement of the speed of gravity, though this may be possible in the future with measurement of gravitational waves. The indirect measurements that have been attempted show a speed consistent with the theory that changes in gravitational fields (and perhaps gravitons themselves) propagate at exactly the speed of light. Someguy1221 (talk) 04:06, 22 January 2016 (UTC)[reply]

If things worked any other way, there would be severe causality issues that I strongly suspect would be obvious at the galactic level. SteveBaker (talk) 04:33, 22 January 2016 (UTC)[reply]
(Strictly speaking, I don't think anything would be egregiously broken if gravity propagated slower than the speed of light—not that there's any reason to suspect such a thing.) TenOfAllTrades(talk) 19:28, 23 January 2016 (UTC)[reply]
If space is literally bent near massive objects, then the explosion of one of those objects should alter space in some kind of "ripple effect" in its gravity - extending, as hypothesized, at the speed of light. Unfortunately for us, our ability to witness the sun going nova would be very short-lived. And the earth wouldn't head off in a straight line, as the massive amount of debris from the explosion would, at the very least, start pushing us away from the sun's former location. Or more likely it would incinerate and disintegrate the earth, which would become part of the debris from the sun. On the plus side, this is not likely to happen any time soon. ←Baseball Bugs What's up, Doc? carrots→ 15:22, 22 January 2016 (UTC)[reply]
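The 8-minute figure quoted above is just the mean Earth-Sun distance divided by the speed of light; a quick check with the standard values of those constants:

```python
AU = 1.495978707e11  # mean Earth-Sun distance (1 astronomical unit), meters
c = 2.99792458e8     # speed of light in vacuum, m/s

light_time_minutes = AU / c / 60
print(round(light_time_minutes, 1))  # 8.3 minutes
```

Any change in the sun's gravitational field would reach us on the same timetable, since gravity also propagates at c.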

Parachute jump

Suppose you are dropping paratroopers into a fairly small DZ on a windy day (but without wind shear); if you know the drop altitude AGL and the wind speed, can you calculate how far upwind of the DZ should you drop the paratroopers so they will land near the center of the DZ with a minimum of steering? 2601:646:8E01:9089:F4FA:EFCF:C9DF:C509 (talk) 02:45, 22 January 2016 (UTC)[reply]

I'm certain you can. The military would no doubt take that into consideration when deploying paratroops. Pretty much all modern parachutes are steerable, so it might not be much more than: if it's a light wind, deploy a bit upwind of the DZ; if there's more wind, deploy a bit further. Do that a few times and you'll quickly get the hang of how far away you have to deploy for a particular wind speed with a particular parachute. Vespine (talk) 03:05, 22 January 2016 (UTC)[reply]
Here is how the U.S. Army does it. Shock Brigade Harvester Boris (talk) 03:06, 22 January 2016 (UTC)[reply]
Thanks! So if the wind is blowing at 15 knots and you jump from 1000 feet, you'll drift about 450 meters, right? 2601:646:8E01:9089:F4FA:EFCF:C9DF:C509 (talk) 08:17, 22 January 2016 (UTC)[reply]
It's not quite that simple:
1) Wind speed, and even direction, varies with altitude. So, you'd need to account for the drift at each level.
2) You are assuming sustained winds, but there are also wind gusts to deal with, which are, by their nature, unpredictable.
3) How far they are blown off course depends on their rate of descent. Therefore, wind will have minimal effect until the parachute is opened.
4) There's also a problem in heavy winds that once they land they could be dragged by the wind. (There is a parachute release mechanism, but using that while being dragged over rough terrain might be difficult. If they could release the instant they hit, then they would be OK, although the released parachutes blowing across the field will give away their position.) StuRat (talk) 08:29, 22 January 2016 (UTC)[reply]
Yes, you did calculate the drift D=KAV correctly using a personnel Load Drift Constant K = 3.0 meters per paragraph 6-32 and figure 6-4. But you also need to take the forward throw into account, as discussed in paragraph 6-36 and table 6-9, which is given as 229 meters for personnel out of a C-5, C-130, or C-17, and then you need to combine them vectorially with regard to wind direction and drop heading as discussed in paragraph 6-89 and shown in Figure 6-8. There seems to be a problem with that figure in the PDF linked above by Shock Brigade Harvester Boris, so you may prefer this copy of the complete US Army Field Manual FM 3-21.38, Pathfinder Operations (with chapter 6, Drop Zones).
Note that there does seem to be an error in that manual in the "Determination of Release Point Location" section (starting with paragraph 6-89). Figure 6-8 shows all five steps, but Step 5 is omitted in the discussion, and the example given under the discussion of Step 4 (paragraph 6-93) belongs under the missing discussion of Step 5 as it is the calculation of Throw Distance, which for rotary-wing and STOL aircraft "equals half the aircraft speed (KIAS), expressed in meters." -- ToE 16:09, 23 January 2016 (UTC)[reply]
So, for a DC-3 flying at 100 knots, the paratroopers would be thrown forward 50 meters? 2601:646:8E01:9089:94DA:2520:D95F:848D (talk) 02:45, 24 January 2016 (UTC)[reply]
It's probably greater than that. Clearly the "equals half the aircraft speed (KIAS), expressed in meters" formula for rotary-wing and STOL aircraft does not extend to the higher drop speeds of the fixed wing aircraft in table 6-9 with forward throws of 229 meters, as there is no way that they have drop speeds anywhere near 458 knots. In fact, table E-1 gives a personnel drop speed of 130 - 135 knots. The formula breaks down somewhere, and I suspect that it is a stretch to apply at the 90 knots used in the example problem. If I had to guess, based purely on intuition, I'd be tempted to scale it with the square of the drop speed, and predict (100/135)^2 * 229 m = 125 m. -- ToE 04:36, 24 January 2016 (UTC)[reply]
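For anyone who wants to play with the D = KAV formula discussed above, here is a small sketch. The constant K = 3.0 is the manual's personnel load drift constant; the vector step is my own simplified reading of the paragraph 6-89 method, not the manual's exact plotting procedure:

```python
import math

def wind_drift_m(altitude_ft, wind_knots, k=3.0):
    # D = K * A * V: K = load drift constant (3.0 for personnel),
    # A = drop altitude AGL in hundreds of feet, V = wind speed in knots.
    return k * (altitude_ft / 100.0) * wind_knots

def release_offset(drift_m, wind_from_deg, throw_m, heading_deg):
    # Offset (east, north) in meters from the desired impact point back to
    # the release point: move upwind by the drift, then back along the
    # flight path by the forward throw.
    upwind = math.radians(wind_from_deg)   # direction the wind blows FROM
    along = math.radians(heading_deg)      # aircraft heading
    east = drift_m * math.sin(upwind) - throw_m * math.sin(along)
    north = drift_m * math.cos(upwind) - throw_m * math.cos(along)
    return east, north

print(wind_drift_m(1000, 15))  # 450.0 m, matching the example above
# Jumping into a 15-knot wind from the north while heading north:
print(release_offset(450.0, 0, 229, 0))  # (0.0, 221.0): release 221 m north
```

As StuRat notes above, this ignores wind varying with altitude, gusts, and descent rate, so it is a planning estimate rather than a prediction.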
It's worth noting that they very often drop supplies and weapons alongside the paratroops - and steerable parachutes don't help those things! So there is a degree of importance to making the drop in the right location. SteveBaker (talk) 17:37, 22 January 2016 (UTC)[reply]

N-A=S: what do these letters stand for?

In chemistry there is a formula for finding the number of bonds: N-A=S. What do these letters stand for? 92.249.70.153 (talk) 14:06, 22 January 2016 (UTC)[reply]

S= N-A, where S is the total number of shared electrons, N is the total number of valence shell electrons needed by all the atoms in the molecule or ion to achieve noble gas configurations and A is the total number of electrons available in the valence shells of all the atoms in the structure. — Preceding unsigned comment added by 81.131.178.47 (talk) 15:32, 22 January 2016 (UTC)[reply]

A link to show the context. Mikenorton (talk) 15:35, 22 January 2016 (UTC)[reply]
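A quick sketch of the counting rule described above, for anyone who wants to try it on a molecule. The small element tables here are mine (hydrogen needs a duet; the other atoms shown need an octet):

```python
# S = N - A electron counting, per the convention explained above:
# N = electrons needed for noble-gas configurations, A = electrons available.
NEEDED = {"H": 2}                            # duet for H; default octet of 8
VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6}   # valence electrons available

def shared_electrons(atoms, charge=0):
    n = sum(NEEDED.get(a, 8) for a in atoms)      # N: electrons needed
    a = sum(VALENCE[a] for a in atoms) - charge   # A: electrons available
    return n - a                                  # S: shared electrons

# CO2: N = 24, A = 16, so S = 8 shared electrons = 4 bonds (two C=O doubles)
print(shared_electrons(["C", "O", "O"]))  # 8
# H2O: N = 12, A = 8, so S = 4 shared electrons = 2 single bonds
print(shared_electrons(["H", "H", "O"]))  # 4
```

Dividing S by two gives the number of bonds, since each bond is a shared pair.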

Spanish electric connector

Hi, does anyone know whether this old type of connector found in Spain has an official name or not?

Thanks in advance.--Carnby (talk) 19:13, 22 January 2016 (UTC)[reply]

Tough call. This [35] looks like a very WP:RS. It has official names and specification codes for many, many, plugs/sockets. It simply calls this type "old spanish socket". It doesn't have a picture, but the description matches these photos very well, IMO. oops; sorry :( SemanticMantis (talk) 19:56, 22 January 2016 (UTC)[reply]
That's a mirror of an old version of AC power plugs and sockets. -- Finlay McWalterTalk 20:01, 22 January 2016 (UTC)[reply]
D'oh! Thanks, I will look more carefully at the header next time :) SemanticMantis (talk) 20:05, 22 January 2016 (UTC)[reply]
Maybe you can ask our Spanish friends over at Wikipedia:Café/Archivo/Técnica/Actual or elsewhere on la enciclopedia libre. The Quixotic Potato (talk) 00:52, 23 January 2016 (UTC)[reply]
 Done--Carnby (talk) 12:33, 23 January 2016 (UTC)[reply]

Top British medical Schools?

Can someone give a simple, easily copied list of the most prestigious British medical schools? This inquiry is for a user who has very limited WP access, and who wants me to send a list (category), rather than a full-blown prose article. Thanks. ````` — Preceding unsigned comment added by Medeis (talkcontribs) 21:23, 22 January 2016 (UTC)[reply]

Googling "British medical schools" gives me this. Rojomoke (talk) 22:56, 22 January 2016 (UTC)[reply]
Adding "top" to the search gives me this ranked list. Rojomoke (talk) 23:00, 22 January 2016 (UTC)[reply]
List of medical schools in the United Kingdom is the Wikipedia article for this. There are only 32 medical schools in the UK, so there is no problem with taking a look at all of them. I'm not sure which of them count as top; the Guardian scores linked above could be slightly misleading, as all of them seem to get scores that are not far apart.--Llaanngg (talk) 18:13, 23 January 2016 (UTC)[reply]
@Medeis: - (Position: I've done quite a bit of education consultancy work and advice with private tuition and out-of-school education) - All British medical schools essentially teach the same syllabus with the same entry criteria, so there’s a limit to how different they can be. Furthermore, by not attending a London university or Oxbridge you probably get reduced cost of living, so attending a ‘worse’ university may have dramatic compensations.
Honestly, your friend probably wants to worry about getting the grades to be in a position to make that choice. From tuition work I've done I've been startled to see kids in year 10 (age 15) predicted BCC science GCSEs whose parents say they need help deciding whether they should become a doctor or a dentist. 94.119.64.1 (talk) 16:56, 24 January 2016 (UTC)[reply]
Thanks, I'll pass that on. What's actually going on (and the reason I didn't just google it, although I did send the results above) is not an actual choice of schools, but backstory for a story set in Britain. Had I been asked the same question for the US I would have suggested the University of Pennsylvania, which is top-tier, but not a cliche like Harvard or Columbia. But as an American I have no knowledge of British schools other than the obvious cliches. μηδείς (talk) 17:39, 24 January 2016 (UTC)[reply]
"Prestige" doesn't really have the same meaning WRT British academia as the Ivy League etc does in the US, other than a tendency for Oxbridge to consider themselves superior (sometimes with and sometimes without justification). Since most institutions in England and Wales (Scotland and NI are independent for the purposes of education) are charging the legal maximum tuition fee of £9000 per annum, there's a strong levelling effect since if a place starts getting a bad reputation, people will just go elsewhere. If you want a place with all the rowing-and-rugger English cliches, but avoiding Oxbridge, I'd suggest Barts. ‑ Iridescent 18:01, 24 January 2016 (UTC)[reply]
My friend agrees Barts would have been ideal, if not for Sherlock Holmes, and has settled on UCL, and I have sent the text of that article and some images. Thanks, again, for the assistance. μηδείς (talk) 20:07, 24 January 2016 (UTC)[reply]
Resolved

January 23

Developmental stage of cells and freezing

Why does freezing work when we freeze egg cells, embryos, or sperm, but not at higher levels of development? By "work" I mean you can still thaw (defrost?) it and get a functional human out of it. --Scicurious (talk) 01:36, 23 January 2016 (UTC)[reply]

Probably someone else will provide more details, but our article Cryopreservation and, to a lesser extent, Cryobiology and Cryonics have some details. Note that your idea that freezing just works is largely incorrect. For both Embryo cryopreservation and Oocyte cryopreservation, you have to carry out an effective method of cryopreservation or you're most likely not going to have much success. Notably, as our article says, human oocyte cryopreservation is still a relatively new technique. (As embryo cryopreservation is older, some women have frozen embryos which were fertilised by a former partner, which can lead to problems if the partner does not wish the embryos to be used.) Even Semen cryopreservation generally uses some method, although the large number of sperm in a normal sample means you have a larger margin for error. Nil Einne (talk) 04:15, 23 January 2016 (UTC)[reply]
Yes, I mean "freezing an egg cell works" (somehow works, by proper procedure). I.e. it's possible. This is contrary to "freezing a baby", which never works, no matter what you do. I.e. you won't be able to reactivate it. --Scicurious (talk) 13:51, 23 January 2016 (UTC)[reply]
[citation needed] for your final statement. AFAIK, and this is supported by the articles I linked to, there's no intrinsic reason based on our current understanding to think it's impossible, although it is likely to be very difficult. The reasons why it's so much more difficult with a whole organism (where you have to successfully freeze and revive the vast majority of cells in a complex, large-in-all-dimensions system) are given in the articles, although many of them should be obvious with a bit of understanding of how cryopreservation techniques work, and of biology and physics. And you already see similar issues when comparing sperm with oocytes or embryos. Nil Einne (talk) 14:32, 23 January 2016 (UTC)[reply]
BTW, my comments and questions have a definite purpose. They are important because if you have a fundamental misunderstanding of what is and isn't possible, and of how it is and isn't possible, this may very well be one of the reasons why you're having trouble understanding why whole-organism cryopreservation for organisms with large & complex body structures like humans (or even Drosophila [36]) is very difficult. And it's not like it hasn't been achieved for adult organisms with simpler body structures like various nematodes [37]. P.S. I don't intend to suggest complexity and size are the only factors, although they are big ones. For a variety of reasons, certain things may actually be easier to cryopreserve than you would expect compared to something else. Also, I don't think cryopreservation of vertebrates is an active area of research, and definitely not of dogs or cats or even rats. But cryopreservation of organs is to some extent [38]. This shouldn't be that surprising: if you can't even really cryopreserve a rat heart yet, cryopreserving a whole rat is very unlikely. Nil Einne (talk) 15:42, 23 January 2016 (UTC)
The problem isn't related (trivially) to stage of development, but to size and complexity. We can readily freeze and thaw many types of mature human cells. What we can't (usually) do is freeze large pieces of animal tissue. That is to say, we can freeze, store, and revive primary cardiomyocytes, but not hearts; we can freeze, store, and revive primary hepatocytes, but not livers; and so forth. TenOfAllTrades(talk) 18:41, 23 January 2016 (UTC)[reply]

Why do we stop producing lactase as adults?

I have read that being lactase persistent is an advantage for survival. Only 10% of the world population is lactase persistent as adults, and that 10% is the result of a mutation that occurred thousands of years ago. We are all born lactase persistent as babies, then lose it as adults. My question is: why do we lose it as adults? If it was an advantage, why did natural selection turn it off during adulthood? Why were most people not lactase persistent until the mutation kicked in? All the answers to this "why" that I found online include something like "because we don't need milk as adults anymore". Yes, sure, we don't need milk as much as we do as babies, but it is still considered to be an advantage. Just because we don't need something doesn't mean that thing doesn't have an advantage. That doesn't answer why the gene is turned off in adulthood. Why was natural selection not at work until much later, when the new mutation kicked in? Thank you! (p.s.: I don't care about the mechanisms and any other matters; the answers should focus only on the why aspect). 146.151.96.202 (talk) 06:37, 23 January 2016 (UTC)[reply]

Because producing lactase uses energy, which takes energy away from other stuff? That's the best answer I can give you. 2601:646:8E01:9089:90DA:8B23:BEB4:5241 (talk) 07:55, 23 January 2016 (UTC)[reply]
As is explained in our article Lactase persistence, it was not an advantage in societies that didn't have dairy farming, so it was turned off in adulthood. It was only turned back on in societies that had developed dairy farming. Richerman (talk) 11:23, 23 January 2016 (UTC)[reply]
Okay, even if it wasn't an advantage in societies that didn't have dairy farming, it was not a disadvantage either. Why was there a need to turn the gene off in adults? 146.151.96.202 (talk) 21:04, 23 January 2016 (UTC)[reply]
Infant mammals have only one oral activity, which is suck-swallow-breathe, and this represents a drain on maternal resources until they develop the skills of eating and swallowing solids. Mammals have various ways of encouraging infant skill development as soon as possible, which is an evolutionary advantage in the wild. Lactose intolerance, i.e. the cessation of lactase production, causes the infant animal to experience stomach upset if it persists in breast feeding, and so contributes to the infant seeking independent nutrition.
Are you saying that turning the gene off in adulthood is natural selection's way of encouraging infant independence from the mother? If a lactase persistent population can be independent without the gene being turned off, this argument doesn't seem to hold. The gene does not need to be turned off for babies to seek independence; the mother can teach the babies to be independent. I see no harm in keeping the gene on. Why it was turned off in the first place is what puzzles me. 146.151.96.202 (talk) 21:02, 23 January 2016 (UTC)[reply]
No. "Contribute to" means to help cause or bring about (Oxford Advanced Learner's Dictionary). It attacks a Strawman to say this argues that the cessation of Lactase production is "the essential natural selection way". Do you also need to be given a justification for each developmental change from infant to adult, such as loss of the Umbilical cord, loss of primary teeth, and loss of the Moro reflex? AllBestFaith (talk) 01:57, 24 January 2016 (UTC)[reply]
I thought the usual idea was that, if older children kept nursing, they would compete with their younger siblings, to the disadvantage of the latter.
Of course the mother could prevent that. But would she? The same question applies to the "independence" version, actually. It's not absurd that there might turn out to have been an evolutionary advantage in making the weaning more automatic. --Trovatore (talk) 21:36, 23 January 2016 (UTC)[reply]
Trovatore: "The same question applies to the 'independence' version." Can you explain what you mean by this statement? And why wouldn't the mother prevent that? It makes sense that the mother would make the older siblings seek independence so that she could feed the younger ones with her milk (since the young ones are unable to eat solid food yet). It's entirely possible to become independent without turning off the gene, as happened with cultures that are lactase persistent. Again, I see no clear advantage in turning off this gene. 146.151.96.202 (talk) 20:21, 24 January 2016 (UTC)[reply]
Some human populations have developed Lactase persistence, in which lactase production continues into adulthood. It may have developed as a response to the growing benefits of digesting the milk of farm animals such as cattle. Research reveals lactose intolerance in humans to be more common globally than lactase persistence; the variation has been tied to genetics, but the largest source of variation has been shown to be based on exposure (e.g., cultures that consume dairy). P.S. Sensible consideration of evolutionary factors should identify probable mechanisms, without which answers to "why" are only speculation.
See the article sections about Lactase persistence#Evolutionary advantages and Baby-led weaning in humans. AllBestFaith (talk) 14:00, 23 January 2016 (UTC)[reply]
I don't know where there is something about this but I think I also read somewhere there is an advantage in stopping the transmission of diseases in having children be appreciably different from adults in various ways, so there is a drive to accentuate differences. Dmcq (talk) 15:21, 23 January 2016 (UTC)[reply]

Video and scientific racism

Video in question. Most scientific racism came from whites during the age of scientific racism, but this particular video has it coming from a black man. I know evolution is not a ladder and lifeforms don't become "better" or "degenerate" as they mutate, they merely adapt to the environment, but I was curious if anyone had any direct refutations of what this man is saying. ScienceApe (talk) 17:00, 23 January 2016 (UTC)[reply]

There are plenty of easily found analyses of scientific racism. You appear to be asking this question just to show us that a black man can be a "scientific" racist too. --Llaanngg (talk) 18:15, 23 January 2016 (UTC)[reply]
(edit conflict) In the unlikely event that this is a genuine question and not a piece of trolling, there's an explanation of this particular theory and the arguments around it at Melanin theory. ‑ Iridescent 18:55, 23 January 2016 (UTC)[reply]
What points do you want us to refute? There's no scientific racism involved in the video, the guy is a racist talking garbage with almost no science involved. He claims that white people are a different species to black people and then a few seconds later says that they are the same species. His claims that white people are more prone to depression and violence aren't supported by any research (obviously there are some studies that would support that claim, while other studies would show that black people are more prone to depression and violence). I'm not sure what the 'SAR' gene is that he mentions, but if white people were that prone to a genetic problem it would be common knowledge, not posted by an idiot on youtube. 95.146.213.176 (talk) 00:13, 24 January 2016 (UTC)[reply]
As I said above, this is not really a question, but a means to spread the word about this deluded black racist. --Llaanngg (talk) 02:05, 24 January 2016 (UTC)[reply]
You really think I was trying to spread the word about this guy? I can assure you I wasn't. I was hoping for a direct refutation of the things he was saying. If I was trying to spread his word, I think twitter or facebook would be a better venue don't you think? ScienceApe (talk) 17:26, 24 January 2016 (UTC)[reply]
I know people are having a knee-jerk reaction to this video, which is understandable. I agree, he is an idiot; I was hoping for a refutation of the points he was making, though. He didn't say white people are a different species, he said they are the same species. As for the points I want you to refute, he mentions "SARA" makes white people more violent. Do white people possess this gene at a higher level as he alleges, and what does it do? He also alleges that slc2485 gives them pale skin and alleges that it developed in central asia. At 1:15 he mentions something but I can't make out what he's saying, but he alleges that it gives them straight hair. He goes on to say that 3%-4% of white DNA is from neanderthals. He alleges that they are able to cope with cold weather from the creatine in their skin. He then says at the "demi level" that 4% stretches out to 70% of their skin. These are some of the allegations he's making. Like I said, I'm not saying he's being rational nor am I defending what he's saying. I just want to know if the assertions he's stating are correct or not. ScienceApe (talk) 17:35, 24 January 2016 (UTC)[reply]

Why does powdercoated metal, like the inside walls of a microwave, not spark?

75.75.42.89 (talk) 23:40, 23 January 2016 (UTC)[reply]

Small amounts of metal won't spark. ScienceApe (talk) 23:44, 23 January 2016 (UTC)[reply]

Small amounts of metal certainly will spark - have you never put a cup or plate with a gold leaf rim in a microwave? Whether a piece of metal arcs or not depends on its shape - thin edges or points tend to build up a charge and arc (spark). The side walls are flat with no edges, so the microwaves are reflected back and don't build up a charge. Other metal components, such as the metal rack, are designed without points or sharp edges for the same reason. Richerman (talk) 00:54, 24 January 2016 (UTC)[reply]
(ec) A metal object in the cavity of a microwave oven can produce sparks if it concentrates the electric field sufficiently to cause breakdown of the air. Whether this is the case depends on the size, shape, position and orientation of the object. A metal object with sharp edges is likely to concentrate the field at the edges, particularly if the size of the object is such that the electric currents on it resonate at the frequency of the applied field (~2.45 GHz). The walls of the cavity do not give rise to sparks because they lack extremities with convex edges, rather than because they are coated. --catslash (talk) 01:18, 24 January 2016 (UTC)[reply]
Additionally, the fact that the cavity of a microwave oven is made of metal is essential to its operation. The walls of the oven form a Faraday cage, trapping the microwaves inside the oven. If your microwave has a window, you might have noticed the metal grating over it. This is also part of the Faraday cage. The openings in the grate are smaller than the wavelength of microwaves, so microwaves cannot pass through them. Visible light has a much shorter wavelength, so it passes through just fine. --71.119.131.184 (talk) 01:55, 24 January 2016 (UTC)[reply]
I think ScienceApe meant small in size. Each granule of the powdercoat metal is insulated from the next granule, so there is no large voltage induced. The gold leaf rim will not spark if it is cut into small sections. Dbfirs 09:15, 24 January 2016 (UTC)[reply]
Small amounts are fine. ScienceApe (talk) 16:56, 24 January 2016 (UTC)[reply]
Thanks. So you're all saying a smooth steel ball bearing with no small bumps or surface defects could be microwaved without sparking? 75.75.42.89 (talk) 22:06, 24 January 2016 (UTC)[reply]
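The numbers behind the Faraday-cage and resonance explanations above are easy to check. This is a rough sanity-check sketch: the 2.45 GHz frequency is the standard magnetron frequency mentioned by catslash, while the 1 mm mesh-hole size is an assumed, typical value rather than a measured one.

```python
# Sanity check: why a door mesh blocks 2.45 GHz microwaves but passes light.
c = 299_792_458           # speed of light, m/s
f_oven = 2.45e9           # typical microwave-oven magnetron frequency, Hz
wavelength = c / f_oven   # ~0.122 m

print(f"Microwave wavelength: {wavelength * 100:.1f} cm")     # ~12.2 cm
print(f"Resonant half-wavelength: {wavelength * 50:.1f} cm")  # ~6.1 cm; objects
                                                              # near this size
                                                              # resonate strongly

hole = 1e-3               # assumed mesh-hole size, m (~1 mm)
# The hole is a tiny fraction of a wavelength, so the mesh reflects microwaves,
# while visible light (wavelength ~0.5 micrometres) passes through freely.
print(f"Mesh hole is {hole / wavelength:.1%} of a wavelength")
```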

January 24

"freeform" electromagnetic coils

1. What's the proper name for these "freeform" electromagnetic coils[39]?

2. How are they made? Roughly which one of the following guesses is the closest?

A) wind one layer, spray some adhesives, and then wind another layer on top

B) Adhesives are added continuously as the winding continues

C) The magnet wire is pre-coated with an adhesive. 731Butai (talk) 04:36, 24 January 2016 (UTC)[reply]

If there were a coilform with a central cylindrical core and endpieces, the coil could be wound as shown by rotating the coilform while the supply bobbin moved slowly back and forth via gearing to lay down straight layers. The endpieces could then be removed and the wound coil pushed off the central cylinder. The wire would tend to maintain its form, but clearly it would deform if stressed. It could be dipped in varnish to cement it together into a rigid form. Edison (talk) 05:05, 24 January 2016 (UTC)[reply]

Is the image [40] one that you uploaded yourself, or one that you have information about? It is not obvious from the picture alone that this is intended to be an electromagnetic coil, as it has no terminals, no adhesive can be seen, and the wire could be uninsulated (not magnet wire). A spool of plain wire delivered from a factory could look like this. AllBestFaith (talk) 16:15, 24 January 2016 (UTC)[reply]

Analogue live TV

Before digital "film", how was live television done? My mental image of television, pre-digital, was that the scene was recorded by a videocamera and microphone, the sounds modulated onto electromagnetic waves as with radio, the film developed and then the images somehow modulated onto electromagnetic waves, and chronological conjunction between the two processes is enforced to ensure that the video and sound be synchronised. This doesn't seem to fit with live TV, however, as there's no time to develop anything; how would it be possible to broadcast anything that wasn't a recording? Today, it's easy: you can basically use the same techniques as Skype, but that wasn't possible in the 1950s. Nyttend (talk) 05:40, 24 January 2016 (UTC)[reply]

Television cameras before the 1980s used video camera tubes. Basically they worked like a CRT television in reverse. In a CRT display, one or more tubes scan their beams across the screen to produce the picture. In tube cameras, the camera focuses incoming light onto a target and one or more tubes scan the target. --71.119.131.184 (talk) 05:50, 24 January 2016 (UTC)[reply]
Note also that for quite a long time, a lot of prerecorded TV content went direct to videotape, not film. In fact, sometimes the content may have gone from videotape to film.

And BTW home camcorders weren't that uncommon before everything went digital. America's Funniest Home Videos, for example, predates digital video being particularly common, and some Youtube videos also look like they were probably recorded on analog tape.

Nil Einne (talk) 07:07, 24 January 2016 (UTC)[reply]

Let's put it this way. You (Nyttend) have the mental image "that the scene was recorded by a videocamera". But that's really two things: converting the scene into an electronic signal, and recording the electronic signal. In a live broadcast, the electronic signal would be used directly (more or less) to modulate electromagnetic waves, just as the audio signal is used in radio. (For live color TV it would also be necessary to convert the R/G/B signals from the camera into the applicable encoding, i.e. NTSC, PAL, or SECAM.) --76.69.45.64 (talk) 07:22, 24 January 2016 (UTC)[reply]
Incidentally, the process Nyttend describes is John Logie Baird's "Intermediate Film Technique", used for a few months in 1937 in the UK, but obsolete since then. It introduced a delay of about 1 minute in a live broadcast - see this article. Tevildo (talk) 09:03, 24 January 2016 (UTC)[reply]
Germany transmitted intermediate-film TV earlier. [41] During the 1936 Summer Olympics, experiments were conducted with both an analog electronic camera and with a mobile TV truck. On the roof of the truck was a film camera. The film was developed in the truck and then run through the transmitting apparatus. AllBestFaith (talk) 17:09, 24 January 2016 (UTC)[reply]

Live TV originally went "straight to air", with no intermediate recording and playback steps. The signals from the microphones and camera were combined (usually through vision mixer and audio mixer desks) into a composite signal which was distributed to the transmitter site, modulated onto an RF carrier, and broadcast. All of these were real-time analog processes, with no delay except for that inherent in signal processing and propagation. The Anome (talk) 10:09, 24 January 2016 (UTC)[reply]

The Anome has hit on it. Just like a phone call, there is no need for a TV camera and transmitter system to make a permanent record of anything: if you have an outside broadcast unit you can just turn on a camera, transmit that signal to a control centre and then put it out on the air, without at any stage 'recording' it permanently. Much early TV was broadcast live without any copy of it being kept. Before the days of cheap magnetic recording systems, if you needed a permanent record of it, you'd often literally just film a television set with a film camera. Similarly, most analogue phone calls have never (one assumes) been permanently recorded onto anything. They just go from one phone to another through the wires. Blythwood (talk) 13:02, 24 January 2016 (UTC)[reply]
Thanks to everyone! I had no idea that it was possible for the TV camera to do anything except impress each scene on a separate film still; I didn't know that they used CRTs to send imagery to a transmitter. I'd imagined that the first cameras of any sort that used neither film nor U-matic videotape were digital cameras. Nyttend (talk) 15:26, 24 January 2016 (UTC)[reply]
See also Kinescope. ←Baseball Bugs What's up, Doc? carrots→ 15:19, 24 January 2016 (UTC)[reply]
The modern equipment for conversion from movie film to electronic TV signal (for taping or immediate transmission) is a Telecine. AllBestFaith (talk) 17:21, 24 January 2016 (UTC)[reply]

Considering starting a new project on here

Hello all! As a spare time project, I'm looking to spend some time in the next few months messing around with R, its graphics packages in particular. I'd be interested in combining this with my contributions to Wikipedia (do two obsessions together!) - does anyone have suggestions for any publicly available molecular biosciences data that might be interesting to do something with? Preferably something I can't screw up too badly! Blythwood (talk) 11:43, 24 January 2016 (UTC)[reply]

A while back I got to messing with Module:ImportProtein, which is in Lua (see the talk page for an example), and like so many things... put it aside for "a while". If you're interested, I'm not reserving the copyright. :) I wasn't aware of any direct R integration with Wikipedia, though it would create interesting possibilities! Wnt (talk) 14:25, 24 January 2016 (UTC)[reply]
Some aspects of it might be too close to original research, which is fine for other places, but not welcome on Wikipedia. On the other hand, you could take numerical data already on Wikipedia and plot it in a more visual way. --Scicurious (talk) 15:40, 24 January 2016 (UTC)[reply]

Did the universe start with only neutrons?

If hydrogen fusion created all other atoms from hydrogen, and hydrogen is a proton and an electron, and fission reaction is the decay of a neutron into a proton and electron, then did the universe start with only neutrons? — Preceding unsigned comment added by 86.153.69.165 (talkcontribs)

Our current theories imply that there were subatomic particles before neutrons existed; and it seems like neutrons and protons both started emerging roughly around the same time. Have a read through Chronology of the universe, Quark–gluon plasma, and related articles.
I think your insight is good, in that you're looking for reverse reactions (like beta decay) and trying to conserve charge; but you are missing some important complications that arise when we study sub-nuclear particles in great detail. We now know that there are lots of valid ways that we can break protons and neutrons apart if we use very high energies. Present theories for the early universe imply that our heavy particles were created around the hadron epoch, after protons and neutrons coalesced from quarks. Before that time, the energy density was so high that we barely understand the rules that govern quark combination: what we do know is that there were no protons or neutrons yet.
Nimur (talk) 15:13, 24 January 2016 (UTC)[reply]
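The charge-conservation bookkeeping mentioned above can be made concrete for neutron beta decay, n → p + e⁻ + antineutrino. The tuple layout below is purely my own illustration; the quantum numbers themselves are the standard ones.

```python
# Check that neutron beta decay balances: each particle is tagged with
# (electric charge, baryon number, lepton number).
neutron      = (0, +1, 0)
proton       = (+1, +1, 0)
electron     = (-1, 0, +1)
antineutrino = (0, 0, -1)

# Sum each quantum number over the decay products.
after = tuple(sum(q) for q in zip(proton, electron, antineutrino))
print(neutron, after)  # both (0, 1, 0): charge, baryon and lepton number balance
assert neutron == after
```

This is why the proton-electron picture in the question is self-consistent as far as charge goes; the complications Nimur describes come from the quark-level physics, not from the bookkeeping.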

Reverse polarity Schottky diode

On Digikey in the diode section[42], what does the "schottky, reverse polarity" diode type stand for? I understand what a regular Schottky diode is, but am not sure what a reverse polarity Schottky diode is. Johnson&Johnson&Son (talk) 15:05, 24 January 2016 (UTC)[reply]

Could it be a Schottky diode for creating a reverse polarity protection?--Scicurious (talk) 15:47, 24 January 2016 (UTC)[reply]
The OP has linked to a diode selection guide where one chooses filter(s) to limit the selection. Applying the "Schottky Reverse Polarity" filter reduces the number of manufacturers (to 2) and introduces selection menus for reverse leakage and capacitance when reverse biased. They are all Schottky diodes and this is just the guide designer's way to offer a detailed reverse specification if needed. AllBestFaith (talk) 16:42, 24 January 2016 (UTC)[reply]

Why do they make pipes out of lead?

^Topic ScienceApe (talk) 16:59, 24 January 2016 (UTC)[reply]

"Lead piping was used because of its unique ability to resist pinhole leaks, while being soft enough to form into shapes that deliver water most efficiently." ScienceApe, you constantly asking questions where the answer is the first Google hit on the question is starting to pass over the line separating "good faith curiosity" from "trolling". ‑ Iridescent 17:05, 24 January 2016 (UTC)[reply]
I don't appreciate being accused of trolling. If I wanted to troll, I would ask a bunch of nonsensical questions using multiple sock puppet accounts so you didn't know they were from the same person. ScienceApe (talk) 17:19, 24 January 2016 (UTC)[reply]
Maybe you are doing it but just haven't been caught yet. Anyway, it's sometimes difficult to see a purpose in your questions. Maybe you should perform a simple search for a question before you ask it here. Otherwise you look more like a science ape than a science-curious person. --Scicurious (talk) 17:50, 24 January 2016 (UTC)[reply]
I don't need to prove I'm not a troll, but you're free to believe whatever you like, however I'm not going to stop asking questions if I'm curious about something. However feel free not to respond to my questions. ScienceApe (talk) 20:41, 24 January 2016 (UTC)[reply]
A real-life reference librarian who frequently scolded patrons for not just looking stuff up themselves would soon be fired. If it makes someone that angry when someone asks a question whose answer is easy to find, then the angry librarian should find other areas of Wikipedia in which to work. It is disruptive to scold people who ask questions when it is not clear they are trolling, as it is not clear here. Edison (talk) 20:49, 24 January 2016 (UTC)[reply]
We should compare with some of the alternatives available at the time:
1) Iron pipes: These can rust. While a small amount of added iron in the diet may actually be healthy, in antiquity people didn't know that iron pipes were healthier than lead. Also, the orange or brown water it produces doesn't look or taste good. And eventually the pipes can rust through. (There are water treatment methods to prevent rust, but they wouldn't have had those in antiquity, either.)
2) Ceramic pipes: These can crack, due to seismic activity, freeze-thaw cycles, tree roots, or subsidence of the surrounding ground. Therefore, they tend to be leaky.
3) Copper pipes: These can corrode to produce green sludge and eventually fail from that corrosion. Similar to iron, a bit of added copper in the diet may actually be healthy, but they didn't know that in antiquity.
So, if you didn't know about lead poisoning, lead pipes seemed like a good option (or gold pipes, if you happened to be filthy rich). StuRat (talk) 17:39, 24 January 2016 (UTC)[reply]
Wooden water pipes were popular in London for mains water supply in the 16th to 18th centuries, but generally connected to lead pipes in people's houses. I believe that they were still being replaced in the 1960s. IIRC they were generally made from elm wood, which is resistant to rot when not exposed to air. Alansplodge (talk) 18:02, 24 January 2016 (UTC)[reply]
Yes, I forgot about wood. Bamboo can be used, too, since it's naturally hollow, although some type of sealant may be needed at the joints. StuRat (talk) 18:51, 24 January 2016 (UTC)[reply]
My gut feeling is that this would have to do with metal prices - alas, that article doesn't contain even a current table, let alone historical data. It would be worth updating with information from various sites like this. But my impression is that lead is a cheap metal because it is not usable for very many things, and a pipe buried in the ground is one case where the weight and the softness don't count against it. Alas, even that didn't pan out in the end... Wnt (talk) 20:38, 24 January 2016 (UTC)[reply]

Where can I find the coefficient of friction between nickel and polyethylene? Actually the coefficient of friction between nickel and any common plastic would be fine.

I found this site[43] that has the data for nickel and Teflon, but Teflon is little too difficult for me to get my hands on. Johnson&Johnson&Son (talk) 17:10, 24 January 2016 (UTC)[reply]

The coefficient of friction of plastics is usually measured against polished steel. PTFE (polytetrafluoroethylene, brand name Teflon) has a coefficient of friction of 0.05 to 0.10. Polyethylene can be supplied in various grades, for which this supplier quotes coefficients of friction of 0.18 to 0.22. That is against steel. This table gives some comparison with nickel. AllBestFaith (talk) 17:55, 24 January 2016 (UTC)[reply]
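To show how those coefficients get used, here is a back-of-envelope sketch of F = μN. Only the 0.18-0.22 range for polyethylene on polished steel comes from the supplier data above; the 0.5 kg load is a made-up example.

```python
# Friction force needed to slide a part: F = mu * N, where N is the normal
# force.  The mu range (0.18-0.22, polyethylene on polished steel) is the
# supplier figure quoted above; the 0.5 kg mass is a hypothetical load.
g = 9.81                 # gravitational acceleration, m/s^2
mass = 0.5               # kg (hypothetical part)
normal = mass * g        # normal force, N
for mu in (0.18, 0.22):
    print(f"mu = {mu:.2f}: sliding force ~ {mu * normal:.2f} N")
```

For nickel rather than steel the value will differ somewhat, but for a rough estimate the steel figures are usually taken as a starting point.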