Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia


Welcome to the science section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


January 25

Why do Northeast Megalopolis snowstorm records look like this instead of showing a northern bias?

Boston: 27.6 inches (2003)

New York: 26.9 inches (2006) (27.9" (2016) if the measurement site had moved to the nearest airport in the mid-20th century, as the others did)

Philly: 31.0 inches (1996)

Baltimore: 29.2 inches (2016) (the Baltimore article suggests this might just be the record since the airport opened in 1950)

Washington: 28.0 inches (1922)

Washington Dulles Intl, Virginia: 32.4 inches (2010) despite this weather station only starting in the 1960s.

Does the distribution of water vapor by latitude have anything to do with it?

How sure are scientists that climate change will make single snowstorm records easier to break in the future? Sagittarian Milky Way (talk) 00:15, 25 January 2016 (UTC)[reply]

Let me speak to the relationship between how far north (or south, in the Southern hemisphere) you are and the amount of snow you get. The closer to the poles, the lower the temperatures. At low temperatures, less moisture evaporates from lakes, rivers, and oceans, especially once they freeze over. This makes for less snowfall. Therefore, there is very little snow at the South Pole, but, since it rarely melts, you see thousands of years' worth of snowfall on the ground at once.
Now, this doesn't necessarily affect the snowfall amounts in this particular storm, as many other factors and local conditions matter more, but it is a general trend. In fact, many of the places with the heaviest snowfalls historically are places which get lake effect snow, where air moves over warm water, picking up water vapor, then deposits it once it moves over colder land. Buffalo, New York is one such spot, with Lake Erie providing the (relatively) warm water. StuRat (talk) 00:49, 25 January 2016 (UTC)[reply]
Locations move. Equipment and methods change. And the forecast has uncertainty [1]. There is no reason to believe any of it is related to climate change, as climate change still remains unmeasurable as an observation of weather. --DHeyward (talk) 05:54, 25 January 2016 (UTC)[reply]
One might term this ocean-effect snow, as it involves water vapor being swept off the relatively warm ocean surface (warmer this year because it's a strong El Nino year) and meeting an Arctic air mass over the continent. It's a classic (several authorities are saying textbook) example of explosive cyclogenesis and it's a product of North American geography. The warm water of the Gulf of Mexico and consequent moisture streams and the presence of the Gulf Stream favor large snowfalls relatively far south. However, these snowfalls tend to be intense rather than frequent, so total snowfall over a season will be higher farther north, where it's colder and stays colder for a longer time, but where there is less access to subtropical moisture. Topography helps too - the Appalachian Mountains lift moisture to colder altitudes and it rains or snows out, leaving places like Pittsburgh relatively dry in these kinds of storms. Some of the same geographic elements give rise to Tornado Alley in the spring. Warm moist marine air meets cold dry continental air, and boom. Acroterion (talk) 15:11, 25 January 2016 (UTC)[reply]
Actually we call it a Nor'easter  :) --DHeyward (talk) 16:09, 25 January 2016 (UTC)[reply]
A rare case of the media not emphasizing an unexpected new name for a common thing, making it seem like it's new. Ocean effect snow! (said in a deep, booming, echoing voice) Does this mean that if this (admittedly high sigma) weather pattern had happened 3 weeks ago we could've had even more snow? The sea was warmer then and Manhattan air reached 11°F. Sagittarian Milky Way (talk) 16:40, 25 January 2016 (UTC)[reply]
DHeyward is correct, it's a textbook nor'easter, and the "ocean effect snow" is something I just coined to compare it against lake-effect snow, which happens on a smaller scale without needing a storm system. As for more snow, I devoutly hope not. I've been shoveling three feet of snow for the past two days and finally have it so the cars are free, we can take out the trash, get mail and let the dogs out in the back yard without losing them entirely. I think this storm system turned out to be as efficient as it could be. Normally as a nor'easter forms, the air temperature goes up as the wind starts to come from the ocean (i.e., from the northeast). Often that means that it turns to rain as the storm gets wound up. However, if there is a blocking high over the Canadian Maritimes the cold air can't be eroded by the storm and it stays cold enough to snow. Acroterion (talk) 18:00, 25 January 2016 (UTC)[reply]
  • See here. The prevailing explanation is that increased ocean temperatures cause more moisture to enter the atmosphere, increasing the amount of moisture available for large storm systems (hurricanes, nor'easters, etc.), thus making them more intense, and more frequent. --Jayron32 18:09, 25 January 2016 (UTC)[reply]
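A quick numerical illustration of that moisture argument (my own sketch, not from any source cited above): the Magnus approximation for saturation vapor pressure over water shows the roughly 6-7% per °C increase in how much water vapor warm air can supply.

```python
# Sketch: saturation vapor pressure over liquid water via the Magnus
# approximation (Alduchov-Eskridge coefficients). Illustrative only; the
# temperatures are arbitrary, not measurements from this storm.
import math

def e_sat_hPa(T_c):
    """Saturation vapor pressure (hPa) at air temperature T_c in deg C."""
    return 6.1094 * math.exp(17.625 * T_c / (T_c + 243.04))

for T in (0, 10, 20):
    ratio = e_sat_hPa(T + 1) / e_sat_hPa(T)
    print(f"{T:>2} C: {e_sat_hPa(T):6.2f} hPa; one degree warmer -> x{ratio:.3f}")
```

Each extra degree of ocean-surface warmth multiplies the available vapor pressure by about 1.06-1.07, which is the usual basis for the "warmer ocean, more intense snowstorm" claim.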

Stupid physics question (How can we see things more distant than the age of the universe?)

According to Wikipedia, the age of the universe is 13.8 billion years. The origin of the universe was a single point which resulted in a big bang. The size of the universe is 91 billion light years. Nothing can go faster than the speed of light. In 13.8 billion years the size of the universe should be 13.8 billion light years, right? Brian Everlasting (talk) 00:34, 25 January 2016 (UTC)[reply]

It's actually a very common question. The answer is that the space between large-scale structures in the universe expanded by a process called Inflation (cosmology). Dbfirs 00:38, 25 January 2016 (UTC)[reply]
This is wrong. The boundary of the visible universe is only affected by expansion since the CMBR last scattering time, around 380,000 years after the big bang. It is unrelated to inflation, which ended 10^−something seconds after the big bang. -- BenRG (talk) 01:41, 25 January 2016 (UTC)[reply]
Yes, of course! Light didn't start out until after inflation stopped, so it is entirely Metric expansion of space (and that is speeding up). Dbfirs 09:56, 25 January 2016 (UTC)[reply]
(EC) See Cosmic Inflation. The part that makes it super confusing is that you would be correct IF the universe actually "big banged" INTO pre-existing space, but it didn't; SPACE itself formed along with the big bang. Vespine (talk) 00:40, 25 January 2016 (UTC)[reply]
There's also the complication of Metric expansion of space but this is minor by comparison. Dbfirs 00:44, 25 January 2016 (UTC)[reply]
The key point is that relativity says nothing can travel faster than the speed of light (in a vacuum) through spacetime. It says nothing about how quickly spacetime itself can move. This distinction is crucial for understanding things like inflation, but such nuance tends to be omitted from pop science descriptions, which tend to say almost-true-but-subtly-misleading things like "nothing can travel faster than light". --71.119.131.184 (talk) 00:55, 25 January 2016 (UTC)[reply]
Also in that vein, the size of the observable universe is 91 billion light-years. The size of the universe as a whole may be infinite: see shape of the universe. --71.119.131.184 (talk) 00:57, 25 January 2016 (UTC)[reply]
Someone deleted my time dilation and Theory of Relativity comment, but I don't see any changes in the View History tab. Willminator (talk) 01:08, 25 January 2016 (UTC)[reply]
What I was trying to say in my deleted comment is that time is relative according to the Theory of Relativity. Gravity affects time. For example, if someone were to approach a black hole, from the observer on Earth looking up, it would look like the person has slowed down for thousands of years, but from the person's point of view, only seconds would have passed. The light of a star that's, let's say, 1000 light years away from Earth doesn't necessarily have to travel 1000 years to Earth from the perspective of an observer on Earth. Willminator (talk) 01:24, 25 January 2016 (UTC)[reply]
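A minimal numerical sketch of the gravitational time dilation described above, for an observer hovering outside a non-rotating (Schwarzschild) black hole; the 10-solar-mass figure is an arbitrary assumption for illustration.

```python
# Sketch: Schwarzschild time-dilation factor for a hovering clock.
# dt_far / dt_local = 1 / sqrt(1 - r_s / r); blows up as r -> r_s.
import math

G, c = 6.674e-11, 2.998e8          # SI constants
M = 10 * 1.989e30                  # assumed black-hole mass: 10 suns, kg
r_s = 2 * G * M / c ** 2           # Schwarzschild radius, m

def dilation(r):
    """Distant (Earth) seconds that pass per second on a clock hovering at r."""
    return 1.0 / math.sqrt(1.0 - r_s / r)

for f in (2.0, 1.01, 1.0001):
    print(f"hovering at {f} x r_s: {dilation(f * r_s):8.1f} distant s per local s")
```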
The image on the right shows how this works geometrically. Later times are at the top. The brown line (on the left) is Earth, the yellow line (on the right) is a distant quasar, the diagonal red line is the path of light from the quasar to Earth, and the orange line is the distance to the quasar now. You can verify by counting grid lines (which represent 1 billion (light) years each) that the quasar is 28 billion light years away along the orange line, though the light took only about 13 billion years to reach us. -- BenRG (talk) 01:49, 25 January 2016 (UTC)[reply]
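For anyone who wants to reproduce those two numbers rather than count grid lines, here is a sketch (my own, not BenRG's method) for a flat Lambda-CDM universe; H0 = 67.7 km/s/Mpc, Omega_m = 0.31 and the quasar redshift z = 7 are assumed round values, not taken from the thread.

```python
# Sketch: comoving distance ("distance now") and lookback time in flat
# Lambda-CDM, by numerically integrating over redshift.
import math
from scipy.integrate import quad

H0 = 67.7                      # Hubble constant, km/s/Mpc
c = 299792.458                 # speed of light, km/s
Om, OL = 0.31, 0.69            # matter / dark-energy density parameters

def E(z):                      # dimensionless Hubble rate H(z)/H0
    return math.sqrt(Om * (1.0 + z) ** 3 + OL)

z = 7.0
# Comoving distance (the orange line): (c/H0) * integral dz / E(z)
D_C = (c / H0) * quad(lambda x: 1.0 / E(x), 0.0, z)[0]          # Mpc
# Lookback time: (1/H0) * integral dz / ((1+z) E(z))
MPC_KM, GYR_S = 3.0857e19, 3.1557e16                            # unit factors
t_lb = (MPC_KM / H0 / GYR_S) * quad(lambda x: 1.0 / ((1.0 + x) * E(x)), 0.0, z)[0]

print(f"distance now ~ {D_C * 3.2616e-3:.0f} billion light years")  # ~29
print(f"light travel ~ {t_lb:.1f} billion years")                   # ~13
```

With these assumed parameters the script lands close to the quoted 28 billion light years and 13 billion years.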
It's kind of funny though. I mean, the quasar is expected to be 28 billion ly away, but we don't know it didn't sprout a star drive and is coming on right behind the light ray. And in the frame of reference of the light (or someone arbitrarily close to lightspeed) no time at all has passed, and the distance is zero! (We're all just foreshortened a lot) Of the two, the frame of the lightspeed traveller is at least one we could be in, while the other distance is a spacelike estimate, so surely it is more meaningful to say it is 0 ly away than 28, right?  :) Honestly though, what confuses me greatly with that diagram is what happens if something moves away from us. What exactly does it look like when a galaxy, after space cleared, has simply moved far enough away that by the time we look at it its light is almost infinitely redshifted and unable to reach us at all? (this is related to something else I don't understand, which is why the lines for us and the quasar diverge at such a sharp angle on that figure, rather than each moving down a line of "longitude" on that horn thingy.) Wnt (talk) 15:51, 25 January 2016 (UTC)[reply]
Wnt, if you look more closely, you'll see that the brown and green lines for us and the quasar are each "moving down a line of 'longitude' on that horn thingy". (Don't confuse the diagonal-ish red line of the light from the quasar to us with the brown line on the far left for us.) The "lines of longitude" show static positions in space that are moving apart as time progresses "upwards" only because "space" itself is stretching.
On this scale, only something moving at a substantial fraction of light speed for a long time will show up as moving across rather than "along" the static "longitude" lines.
As for your galaxy that is "almost infinitely red-shifted", this does occur and means the galaxy is close to being beyond the Observable Universe from our point of view (as we are from its). {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 23:04, 25 January 2016 (UTC)[reply]
The boundary of the visible universe is the cosmic microwave background. Its redshift is about 1100, large but still infinitely far from infinity. Any astronomical object we can see will have a redshift smaller than that (unless it's retreating very rapidly relative to the Hubble flow). The CMB is the boundary simply because the earlier universe was opaque to light. But it was transparent to neutrinos and gravitational waves, so I guess "visible universe" is a better name than "observable universe". -- BenRG (talk) 00:44, 27 January 2016 (UTC)[reply]
You're right that the notion of "distance now" is somewhat dubious since we don't know what has happened in the last umpteen billion years, and the spacetime interval to everything we see is zero. But, for better or worse, when astronomers are quoted in the popular press saying that an astronomical object that they just saw is X billion light years away, the orange line is what they mean.
In the diagram the Earth and quasar are both assumed to be stationary relative to the Hubble flow. This is approximately correct for Earth, and it's almost certain to be approximately correct for any distant object that's bright enough for us to see, because its speed is an average of the original speeds of the huge number of particles that make it up. If an object is moving significantly relative to the Hubble flow then its redshift is the special-relativistic redshift/blueshift of the source object relative to the Hubble flow, times the cosmological redshift, times the (small) redshift/blueshift of Earth relative to the Hubble flow. -- BenRG (talk) 00:44, 27 January 2016 (UTC)[reply]

Liquid non-Newtonian fluids?

Which non-Newtonian fluid or fluids would be considered only a liquid – not a plastic solid, and not a colloid of solid particles suspended in a liquid (unless there's a colloid that is considered to be only a liquid)? I have learned that not all fluids are liquids, but that all liquids are fluids. Some examples of non-Newtonian fluids are toothpaste, ice in the case of moving glaciers, ketchup, lava, pitch, and many more. They don't flow easily and consistently like water and other Newtonian liquids do. I read that pitch is said to be the world's most viscous liquid, but it is also considered to be a viscoelastic, solid polymer. What does that mean? Is it always a liquid? Can one look at the molecular structure of non-Newtonian fluids to determine which ones are truly liquids? Willminator (talk) 01:05, 25 January 2016 (UTC)[reply]

Is shampoo enough of a liquid for you? I don't think mine has any solid particles in it, but some might. The Kaye effect is a cool demonstration of the non-Newtonian-ness of shampoos and soaps; check the video refs at the bottom of our article. Also oobleck is indeed a colloid, but you can make it so that it flows nearly as easily as water. At that point, it requires a lot of force to see the shear thickening though. Check out shear thinning and shear thickening if you haven't; they discuss some additional examples and concepts. SemanticMantis (talk) 14:45, 25 January 2016 (UTC)[reply]
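For readers who want the model behind those two terms, here is a minimal sketch of the Ostwald-de Waele power-law fluid, the simplest description of shear thinning and thickening; the K and n values are illustrative assumptions, not measured properties of shampoo or oobleck.

```python
# Sketch: power-law (Ostwald-de Waele) model of apparent viscosity,
# mu = K * (shear rate)^(n - 1). n < 1 thins with shear, n > 1 thickens.
def apparent_viscosity(K, n, shear_rate):
    """Apparent viscosity (Pa*s) at a given shear rate (1/s)."""
    return K * shear_rate ** (n - 1)

cases = {
    "shear thinning, n=0.4 (shampoo-like)":   (10.0, 0.4),
    "Newtonian, n=1.0 (water-like)":          (1e-3, 1.0),
    "shear thickening, n=1.8 (oobleck-like)": (0.5, 1.8),
}
for name, (K, n) in cases.items():
    mus = ", ".join(f"{apparent_viscosity(K, n, g):.3g}" for g in (0.1, 10, 1000))
    print(f"{name}: {mus} Pa*s at shear rates 0.1, 10, 1000 /s")
```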
What about ketchup, mustard, and toothpaste? Also, what does it mean for pitch to be a viscoelastic, solid polymer if it is supposedly the most viscous liquid? Is it always a liquid or not? Willminator (talk) 03:23, 27 January 2016 (UTC)[reply]

Are long underwater submarine cruises harmful to human health?

AFAIK the effect of low gravity on bone density means that being in space is fundamentally harmful to humans, and no matter how fit and well-trained they are, this imposes a limit on how long astronauts can stay in orbit. Is there a similar physiological reason why long underwater cruises on a nuclear submarine would be harmful to the crew, and if so roughly how long could they stay underwater? Or would the food run out before anything else became an issue? I guess a modern sub is able to carry some gym equipment; what about sunbeds to replace exposure to sunlight? 94.12.81.251 (talk) 11:56, 25 January 2016 (UTC)[reply]

This is a review of medical problems for naval personnel in the Royal Navy's Vanguard-class submarines. Their routine patrols are about 3 months in duration. Mikenorton (talk) 12:30, 25 January 2016 (UTC)[reply]
That's an interesting study - but it doesn't really tell us much because it's only run over 3 month patrols. Over 74 patrols, each with a 150 man crew (340,000 man-days) they only had to pull someone out of the boats 5 times - twice were appendicitis, once for a "Chemical eye injury", once for a seizure and once for severe traumatic hand injury. I'd bet that the eye and hand injuries related to the work being done and the seizure and appendicitis cases are probably within the norms for 340,000 man-days of any other human situation.
Looking at the problems that were not sufficient to cause a crewmember to be evacuated - we have lots of other injuries - things like chest pain - and "acute opiate withdrawal". But, again, nothing that looks like problems due to being cooped up in a submarine for three months.
So from a cursory glance, there are no issues that would prevent longer missions (except of course that the submarine can't carry enough food for longer trips).
I think we'd need data from much longer trips. But a lot has to depend on monitoring and initial crew quality. The guys who are going to spend a year on the ISS get studied in minute detail before being launched up there. Submarine crews also get health checks - but I can pretty much guarantee that it's nothing like as careful as with ISS crews. That's evident from the crewmember who suffered from "opiate withdrawal"...I can't imagine that being remotely possible with ISS crews.
Looked at another way - it's hard to imagine how submariners could be worse off than the ISS crews. They don't have the gravity problems - or the lowered atmospheric pressure issues that the ISS has - they have more space to move around in - and the larger crew presumably makes the mental health issues of being cooped up in a small space more manageable. Submariners get plenty of exercise and "real" food (well, more real than the ISS crew get) - and they don't generally suffer from things like solar radiation that the ISS crew have issues with. So you'd expect them to do much better.
I think we'd need longer studies and with more controlled crew selection and pre-processing before we could reasonably conclude an amount of time. SteveBaker (talk) 14:19, 25 January 2016 (UTC)[reply]
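As a back-of-envelope check on the rates quoted above (the figures are from the post; converting to person-years is my own step):

```python
# Sketch: convert the quoted exposure and evacuation counts to a rate.
man_days = 340_000          # quoted above: 74 patrols x 150-man crews
evacuations = 5             # medical evacuations over those patrols

person_years = man_days / 365.25
print(f"~{person_years:.0f} person-years submerged")                      # ~930
print(f"~1 evacuation per {person_years / evacuations:.0f} person-years") # ~190
```

That is a very low serious-incident rate, which supports the reading that nothing in the study flags submarine life itself as the hazard.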
Steve remarks that it is "hard to imagine how submariners could be worse off than the ISS crews..."
Well, there is, of course, combat in submarine warfare. As unfathomable as state-against-state naval warfare may seem in this decade, it is a real threat, and it is one reason that large navies still spend lots of resources to train and maintain crews and prepare for undersea warfare.
In December, I was gifted a non-fiction book, Pig Boats, about submarine warfare during World War II. It details the raw unpleasantries of the war for submarine crews. If you can imagine a way to cause health-harm to a human, the submariners had to deal with it at some time during the war. One advantage the astronauts on International Space Station have is that for the most part, nobody is actively trying to harm or destroy them.
Nimur (talk) 14:51, 25 January 2016 (UTC)[reply]
I also don't think space weather makes the ISS rock and churn on a regular basis, as terrestrial weather does with subs. Last guy I talked to who served on a sub mentioned how some of them hated rising to periscope depth due to the increase in motion sickness it could cause. Crew in a nuclear submarine probably also go longer than ISS crew without seeing the sun. This seems like a pretty serious issue; light levels are carefully studied and controlled on subs [2]. While the ISS crews may have their own problems with light, at least they can often look out the window and see the sun. Here's an interesting ref on disorders in circadian rhythms that mentions submarines [3]. SemanticMantis (talk) 15:25, 25 January 2016 (UTC)[reply]
Our OP is concerned with long underwater cruises on a nuclear submarine - they don't spend much (if any) of that time at periscope depth - so seasickness is not a significant issue. Even if seasickness were a problem - it's a short term, non-life-threatening phenomenon that would not limit the amount of time a person could spend in a submarine - so it's not relevant to answering this question.
Similarly, any likelihood of there being combat missions for these craft has zero impact on the OP's question - which is how long you could live in one of them.
Comparisons with WWII submarines are also pretty irrelevant. A typical nuclear submarine is huge...they are not the cramped, cold, miserable places you'd imagine from seeing WWII craft. SteveBaker (talk) 16:56, 25 January 2016 (UTC)[reply]
I don't care to argue with you. But if you want to actually help OP, you could try supplying references. SemanticMantis (talk) 17:24, 25 January 2016 (UTC)[reply]
As an aside, Steve, I don't think the "acute opiate withdrawal" is what you suspect. (That is, it isn't a situation where someone got addicted to painkillers while landside, nobody noticed when he came aboard, and then he went into withdrawal when his supply ran out at sea.) The footnotes indicate that it was a patient who was prescribed opiate analgesia for pain and who abruptly stopped taking his meds without discussing it with his doctor. Yeah, it's less likely aboard the ISS, but in principle an astronaut could ignore the flight surgeon and stop taking his prescribed meds in orbit, too. TenOfAllTrades(talk) 20:50, 25 January 2016 (UTC)[reply]
Here's a few more scholarly articles on light and circadian rhythms in submarines [4] [5], and one naval report [6]. SemanticMantis (talk) 17:43, 25 January 2016 (UTC)[reply]
Regarding User:SteveBaker's comments above, I also fail to see how the ISS could be less detrimental to your health than a functioning nuclear submarine. Unless you are in something like the Russian submarine Kursk. It might seem counter-intuitive, but crews of nuclear submarines are exposed to even less radiation, according to this source, than people living above the surface. The background radiation is quite low inside a submarine. The health concerns for the ISS are not only the lack of gravity; the crew are also exposed to cosmic rays. Scicurious (talk) 19:46, 25 January 2016 (UTC)[reply]

IR laser line generator.

I have an idea for a commercial application needing a laser line generator. These are tiny little gadgets costing a few bucks that include a low power laser source and a lens to spread the light out like a fan over maybe 60 to 120 degrees, sealed into a cylinder a couple of centimeters long. (You see them in supermarket barcode scanners, for example).

I know that red and green laser line generators in the <5 mW range are considered to be class 1 or 1M laser devices - which means that they're "safe for consumer use". A line-laser is considerably safer than a regular laser pointer because the energy is spread over a wider area, hence class 1 or 1M rather than class 2 like most laser pointers.

But I'm considering switching from a red light laser to an IR line-laser of identical power and beam spread. I realize that IR lasers are invisible - so there is a risk of someone staring into the thing without knowing it's there, since the beam doesn't invoke the blink reflex or make the iris close down.

Trouble is, I can't figure which class these IR devices belong to and there is no indication on the manufacturer's web site to tell me.

Does anyone know the guidelines about these classes of device? Does the class get better if I limit the power to 3mW or even 1mW?

TIA SteveBaker (talk) 17:15, 25 January 2016 (UTC)[reply]

I think you'll need to get a copy of ANSI Z136.1 to rigorously answer this question. It does not seem to be freely available. The Laser Institute of America, the official secretariat of ANSI in this matter, will sell you a print or electronic copy. Here [7] is the TOC and index. Here [8] is a comparison of the 2014 standard to previous versions. SemanticMantis (talk) 17:34, 25 January 2016 (UTC)[reply]
The way I learned it in school, every non-visible laser was automatically treated as if it were a Class IV laser. However, if you dig very deeply into, say, OSHA standards, they do not (for the most part) actually distinguish between different classes of laser when specifying workplace safety requirements. As SemanticMantis correctly pointed out, the ISO, ANSI, and IEC technical specifications that define commonly-used laser classification terminology are neither freely available nor cheap.
Invisible lasers are inherently more dangerous: you won't even notice when they malfunction, or when they reflect specularly off a distant object, and so on.
Nimur (talk) 19:14, 25 January 2016 (UTC)[reply]
Classification based on continuous-wave power at various wavelengths.
I can't speak to the accuracy, but laser safety does have charts regarding safe exposure and classification at near-infrared wavelengths. One of the graphs, reproduced at right, suggests that near-infrared wavelengths would be considered Class 1 ("safe under all conditions of normal use") at power levels less than around 0.5 mW, and class 3R (or worse) at higher power levels, indicating at least some risk of eye injury. If you are definitely going to work with such lasers, I would strongly recommend you verify such safety information with reputable third parties. Dragons flight (talk) 12:27, 26 January 2016 (UTC)[reply]
0.5mW is really low. I've only found cheap 'near' IR line lasers at 1.0mW - but that's spread out over around 90 degrees of 'line'. I wonder whether the classification system would be OK with me physically limiting how close someone could get their eye to the laser such that no more than (say) 1/10th of the entire line could impinge on their eye? Assuming that the light is spread out evenly, that would limit the practical exposure to 0.1mW - which ought to be really safe.
The trouble with "reputable third parties" is that they want to charge a lot of money for their services! I'll certainly go that route before making a final product - but if it's clear that an IR line-laser is a non-starter then I'd rather not fork over the cash! SteveBaker (talk) 15:08, 26 January 2016 (UTC)[reply]
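To make the enclosure argument concrete, here is a sketch of the geometry (my own illustration, not the ANSI/IEC measurement procedure): assuming the power is spread uniformly along a 90° fan and the line is thinner than a 7 mm dark-adapted pupil, the power that can enter the eye drops quickly with distance.

```python
# Sketch: fraction of a fan-line laser's power that can enter a pupil.
# Uniform power along the line and a line narrower than the pupil are
# simplifying assumptions; real classification uses defined apertures
# and distances from the ANSI/IEC standards, not this geometry alone.
import math

def power_into_pupil_mW(P_mW, fan_deg, dist_m, pupil_m=0.007):
    line_len = 2 * dist_m * math.tan(math.radians(fan_deg) / 2)  # line length at dist
    fraction = min(1.0, pupil_m / line_len)                      # pupil's share of it
    return P_mW * fraction

for d_cm in (1, 5, 10, 50):
    p = power_into_pupil_mW(1.0, 90, d_cm / 100)
    print(f"{d_cm:>3} cm from a 1 mW, 90-degree line: ~{p:.3f} mW into the pupil")
```

At 1 cm the eye can still collect about a third of the total power, so a physical standoff (as Steve suggests) is what does the real work.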
One option would be to move to longer wavelengths. I was reading the graph at ~800 nm, but if you can move to short-IR at say 1500 nm, you can apparently go to 10 mW at Class 1. Not sure what wavelengths of lasers are available though. As you say, there is probably a fair argument that a line source is much less dangerous than a point source, but I'm not sure how they officially consider such details. Dragons flight (talk) 15:40, 26 January 2016 (UTC)[reply]
The key to safety with lasers - or any other energetic item - is not whether the device is safe when used correctly. It's about ensuring high confidence that the product is idiot-proof, accident-proof, and so forth. Spreading laser energy using a lens or optic does truthfully reduce the hazard - as long as the optic is correctly operating. If the laser is only eye-safe when the beam is spread into a large angular pattern, then what would happen if the device is dropped or misaligned, and the same laser light energy no longer travels through the optic? Now the device has become unsafe.
This is why lasers are classified for safety, and fully-assembled optical systems are not: the beam-spreader may reduce hazard, but the laser itself is still a Class IV (or whatever).
And when your laser light is invisible - you will not see it when the beam escapes from its designed optical path.
Steve runs a small business - he doesn't have a corporate training department to provide a mandatory Laser Safety class; he doesn't have some weaselly guy from the Health and Safety department running around telling him what he may and may not do; he doesn't have a collared shirt from the legal department telling him how to reduce corporate liability or ensure compliance in every municipality where his business might be construed to operate; and Steve is (as evidenced by his comments) interested in cutting costs. Steve, I really try to avoid interjecting pure, unreference-able advice when I contribute to the reference desk, but here's some free advice from a person who has worked with lasers: err on the side of caution. Do not try to manufacture or sell or use an invisible laser. If you want to play with something safer, try removing the blade-guard from a circular saw, or dumping the charge out of shotgun shells, or generally anything else where you can see and avoid the hazard.
Nimur (talk) 17:31, 26 January 2016 (UTC)[reply]
I think that's a little unfair. My wife and I use a couple of 100 Watt IR lasers every day - in machines that I hand-built from plans (and then heavily modified). So, yeah - I know that lasers are decidedly dangerous, and invisible ones, doubly so and I have a ton of respect for them. Which is why I'm asking the question rather than just building something that might expose someone to danger! My thought processes for this design go like this:
  1. Is it even plausible that an IR laser can be considered "safe" for consumer use?
  2. Is it more plausible that an IR line laser can be considered "safe"?
  3. Does it matter how I enclose it to limit the fraction of the laser line that could impinge on the eye?
  4. If all of those things suggest that this concept is feasibly something that would pass regulatory/safety muster - THEN I can go spend a pile of cash to get an expert to sign off on the design.
  5. If the expert says it's OK - I can make and sell my gizmo.
The point being that steps (1) through (3) need to be considered BEFORE I spend money on step (4) or proceed to step (5). If it's very obvious from available public data that a 1mW IR laser line generator cannot be considered safe for a consumer product - then I can dump the idea and go back to a red laser line generator which I know is class 1M because the manufacturer says so. If it seems likely that an IR laser would also be legal/safe - then I can spend the money to (hopefully) rubber-stamp that answer...with a suitably low probability that I'll be paying money to get a "No!" answer!
16:22, 27 January 2016 (UTC)
Steve, I know you're a smart guy and I trust your judgement... even your step-by-step procedural thought process makes sense. But what I'm trying to say is that the answers to steps 1 through 3 are "no, no, and no;" and if you did have a giant corporate bureaucracy-machine, they'd be the ones telling you "no, no, no, run away screaming."
For perspective, take a long, hard, unbiased, non-fiction look at how a real DVD player protects its laser diode. Disregard fictional videos and "tutorials" from internet enthusiasts who believe that they have "taken out the laser" to play with it.
Can you still go ahead and do it? Sure. Sometimes, great innovation takes place because a smart person pushes beyond the envelope of normal procedure. Maybe once every few decades, a great leap forward occurs in commercial applications for laser optics. But, a lot more often, somebody goes permanently blind, and a giant lawsuit bankrupts everybody involved.
For what it's worth, I definitely did recall that you have a powerful cutting laser, and it's probably among the most dangerous items in your house, or even in your entire neighborhood; so you probably aren't chomping at the bit to encourage members of the public to borrow time on it. It's not a toy. There are lots of things in your house that you wouldn't want people playing with.
Nimur (talk) 16:57, 27 January 2016 (UTC)[reply]
Yeah - improperly handled, a 100W IR laser is a terrifying weapon! It's not for no reason that our two machines are labelled "The Death Ray of Ming the Merciless" and "Illudium Q36" respectively. However, once they are mounted in a nice opaque metal box with magnetic switches to disable the power when the lid is opened, or the smoke extractor isn't pulling air, or the water chiller isn't producing adequate flow rates and appropriate temperature water...they are really pretty idiot-proof machines. Of course, if you do happen to encounter an idiot - then you should expect them to be able to bypass the safeties and do extreme amounts of damage - but that's true of very many other things one has around the house. There is no doubt in my mind that a small family car is by far more dangerous than a suitably enclosed 100W laser.
Anyway - I'm convinced that there isn't enough evidence that a 1mW IR laser line generator is considered (legally) safe (although I'm pretty sure it would actually be safe in practice) - so I guess I'll stick with the red laser this time around. SteveBaker (talk) 23:23, 27 January 2016 (UTC)[reply]

Little bugs in uncooked pasta

On more than one occasion, I have noticed little bugs in boxes of uncooked pasta. I see them as soon as I open an otherwise unopened box. Or when I put the pasta in boiling water, I see the little bugs rise to the top. Needless to say, it's disgusting. Where did they come from? How did they get there? And how do I prevent this in the future? I have done a Google search and got a lot of mixed and contradictory results. Help! Thanks. 2602:252:D13:6D70:6441:F28D:D981:B287 (talk) 18:01, 25 January 2016 (UTC)[reply]

What do they look like? All kinds of pests might infest a pantry, and there may be some slight differences in how to treat them. A common one is the pantry moth AKA Indian mealmoth. Does that look like the right bug? Another common pantry pest is the flour beetle. One thing to look into is whether you are buying contaminated goods or if the pests are getting into your food at your house. If it's the former, buy different things. If it's the latter, there are steps you can take. Here are some reliable sources for how to treat and prevent pantry pests: UC Davis [9], Clemson [10], Utah State Extension [11]. Transferring things like pasta to airtight containers is a good first step. SemanticMantis (talk) 18:08, 25 January 2016 (UTC)[reply]
Hard to describe. They are so tiny, they are about the size of a pencil-point tip. Perhaps it is that pantry moth AKA Indian mealmoth that you linked above? (I assume the photos in that link are magnified many, many times over?) I have read that they are already in the pasta (as eggs? or larvae?) and that they then hatch after they come into the house. In other words, they are already in the box when I buy it at the store. Help! Thanks! 2602:252:D13:6D70:6441:F28D:D981:B287 (talk) 18:19, 25 January 2016 (UTC)[reply]
Doesn't sound like it. You should be able to see the light coloured segmented larvae (grubs) (which may look slightly similar to a maggot) if they were Indian mealmoths. By the time they are fully grown, they may be about the size of a broken-off pencil tip, i.e. perhaps 5 mm-10 mm long, not simply the point. You'll also see their waste (the threads). And if you've been having the problem for a while it's likely you'll see the moths around your house. I think it's more likely to be some sort of Flour beetle from your description. Or maybe a Wheat weevil or some variety of Oryzaephilus or something like that. Nil Einne (talk) 18:47, 25 January 2016 (UTC)[reply]
Yes, it does sound more like the Wheat weevil. Yes. What do I do? 2602:252:D13:6D70:186C:D475:39EF:E0EC (talk) 18:41, 25 January 2016 (UTC)[reply]
(EC) Note that if they are Indian mealmoths, you should transfer even opened containers since they are great at piercing plastic bags. Even then, you may find they make their way into containers which seem airtight. (It's possible the food or container was already infested with eggs but I'm not convinced it always was. There's also the fact that sometimes you see signs of the infestation, but no dead larvae or moths despite the product not being opened in a while.) Freezing generally kills the eggs and some people recommend it even if you're planning to throw out the food, particularly if your rubbish won't be picked up for a while. (If you're home composting you definitely want to freeze. Indian mealmoths won't even help your composting.) All these combined mean even if it is only a localised infestation, it can be quite difficult to get rid of and may require a fair amount of stuff to be thrown out. Nil Einne (talk) 18:28, 25 January 2016 (UTC)[reply]
I don't mind throwing the old boxes out. No problem. But what do I do with the new boxes? The ones that I bring home from the store? Thanks. 2602:252:D13:6D70:186C:D475:39EF:E0EC (talk) 18:45, 25 January 2016 (UTC)[reply]
Did you read my three last links above? They tell you very clearly how to deal with pantry pests. The short version is: buy food without pests, and store food properly. While it can happen that goods are infested before you buy them, things are more likely colonizing the food in your pantry. Invest in some airtight containers, and inspect your food as soon as you get it home. If you detect pests in the food when you first get it, return it to the store and complain. SemanticMantis (talk) 18:49, 25 January 2016 (UTC)[reply]
(EC) I was mostly referring to Indian mealmoths. But the point is to remove the infestation. If you've successfully removed the infestation (it will take a few weeks or months to be sure), there shouldn't really be anything much to do other than taking reasonable precautions like storing food in airtight containers, keeping an eye out for reinfestation and not buying too much food (i.e. using the food fairly fast). As was mentioned by SemanticMantis, if the food is infested at point of purchase, it's probably better to simply buy food that isn't infested. If you really want to treat infested food, I suspect freezing will work for most insect pests (particularly multicyclic freezing). Nil Einne (talk) 18:53, 25 January 2016 (UTC)[reply]
We've dealt with them by putting grain products into metal bins as soon as they come in from the store, and by using pantry moth traps (available at your favorite home improvement store) that use a pheromone lure to a sticky trap. They also help to monitor whether you've got a problem. Use cans and traps for a couple of life cycles and you should be free of them. Acroterion (talk) 18:55, 25 January 2016 (UTC)[reply]
Home-stored product entomology Anna Frodesiak (talk) 19:01, 25 January 2016 (UTC)[reply]
Thanks for the indent and sorry for just dumping the link there. I couldn't reach the keyboard because I was snuggled up under the blanket with only a mouse because it's like -54 Kelvin here. By the way, consider zipping over to the Home-stored product entomology talk page about that article and its big 5 pests (more than that, me thinks!). Anna Frodesiak (talk) 00:08, 26 January 2016 (UTC)[reply]

Thanks. If I buy a box of pasta, can I just throw it in the freezer? If so, would I throw it in the freezer as is (in the original box)? Or put the contents of the box in some other container? And, if I freeze it, what do I need to do when I want to cook it? Thaw it? Defrost it? Or just cook it right from the frozen state? Thanks! 2602:252:D13:6D70:186C:D475:39EF:E0EC (talk) 19:23, 25 January 2016 (UTC)[reply]

I think you can freeze it in the original container and toss it right in the boiling water from there. If there's any left, you might want to seal the closed package in a plastic bag, to prevent freezer odors from being absorbed. You might use the plastic bag right from the beginning, too, as pasta boxes don't seem to be properly sealed to me (which is probably how bugs keep getting in). You might try another grocery store and another brand of pasta, as one or the other obviously has an insect problem. StuRat (talk) 19:51, 25 January 2016 (UTC)[reply]
Yeah, the solution to bugs in your pasta is not freezing all your pasta forever. It's buying pasta with better quality assurance and using proper storage... SemanticMantis (talk) 20:17, 25 January 2016 (UTC)[reply]
I don't see the need to freeze indefinitely. The point of freezing is to kill the eggs. If you don't have an existing infestation and practice decent vigilance, you'll hopefully not get another one if you kill eggs before an infestation can take hold. (There is a risk that freezing won't kill all eggs, particularly if the food has a lot of them and you are buying a lot of food that is infested. It's also likely some species have eggs that are hardy enough to survive freezing, or even cyclic freezing.) I do agree that this doesn't seem the smartest solution as opposed to just avoiding products that are infested. Nil Einne (talk) 13:34, 26 January 2016 (UTC)[reply]

A few people have said that I should buy food that is not infested. That is obvious. But, how do I know that when I am at the store? I don't see or notice these bugs until after I get home and open the box and start to cook. So, at the store, how would I have seen/noticed this? Obviously, one does not open up the box of pasta in the store. It sits at home until the day I decide to use it. 2602:252:D13:6D70:186C:D475:39EF:E0EC (talk) 20:56, 25 January 2016 (UTC)[reply]

I suggest you start opening them when you get home, to determine if they have bugs, because, if they do, those bugs might infest other items in your kitchen, like cereal boxes. You may already have an infestation problem, in which case you would have to take action to clear that up. Another alternative, once you've found the pasta to be clean, is to store it in glass or tin containers, which, unlike those paper boxes, seal tightly enough to keep bugs out. StuRat (talk) 21:04, 25 January 2016 (UTC)[reply]
As already mentioned, you should be storing your food in airtight hard containers if you are having problems. It's ideal if you do this from the get-go since some insects can penetrate plastic bags. If you open your food and find it is already infested, return it to the store. If this happens often, or if they refuse to accept returns made the same (or nearly the same) day when the goods are clearly defective (and, unless you're giving them older items, they must have been defective when purchased), you should do what's already been suggested and shop somewhere else. (If they do accept returns and it happens often and you still want to shop there, I guess they'll probably know they have a problem and so keep letting you make returns despite the fact you're always doing it. If they do start to make problems, I guess you could say you'll open it in front of the staff.) Nil Einne (talk) 13:31, 26 January 2016 (UTC)[reply]
Storing non-perishables in your refrigerator should be a good test. If it still has bugs, they most likely came from the store or the manufacturer, not from your home. ←Baseball Bugs What's up, Doc? carrots→ 16:34, 26 January 2016 (UTC)[reply]

My refs above

Sorry, it looks like I've inserted some references where they shouldn't be, and I can't seem to delete them. Can anybody more experienced with wiki editing remove those 3 links please? Thanks Mike Dhu (talk) 21:02, 25 January 2016 (UTC)[reply]

I changed them to bare urls, so that they don't appear at the bottom of the page - I hope that's what you wanted. Mikenorton (talk) 21:08, 25 January 2016 (UTC)[reply]
Great, thank you, I need to learn more about how to edit wikipedia, I must have selected the wrong tag for the references. Sorry, I'll stick to sandbox for a while :-/ Mike Dhu (talk) 21:16, 25 January 2016 (UTC)[reply]
There's a way to embed references within a section on a talk page, but the exact syntax is not coming to mind just now. ←Baseball Bugs What's up, Doc? carrots→ 21:30, 25 January 2016 (UTC)[reply]
Please see this edit.—Wavelength (talk) 21:49, 25 January 2016 (UTC)[reply]
Yes, one of those previous edits, before Mikenorton kindly corrected my mistake, was my reply to a question where I wanted the references to appear, but they were appearing at the bottom of the ref desk. I used the 'ref' tag instead of square brackets, but realise now they are used for different purposes. I should probably post the rest of this reply as a question on the computing ref desk but I'll ask here first out of courtesy as this is where I made the mistake. I'll be spending a bit of time in sandbox now, but would it be possible to change the tooltips to give more information? Whenever I hover over any wiki markup it just displays a tooltip that I should click on it to insert it, without explaining specifically what the markup is for. I appreciate that for those of you who have spent a bit of time editing wikipedia it's second nature to know what tags to use, but more info on tooltips would make it easier for newcomers. Thanks Mike Dhu (talk) 00:22, 26 January 2016 (UTC)[reply]
There's no need for self-deprecation; in many ways, the problem is with this page and not anything you did "wrong". Considering that we're ostensibly here to supply references, we're not actually well set up to provide them in the same manner that articles do. Matt Deres (talk) 15:50, 26 January 2016 (UTC)[reply]
I didn't think I was being self-deprecating, just acknowledging that I made a mistake while I'm still learning how to edit wikipedia, but thank you. It would be nice to have more info from the tooltips though, so I'll post a question on the computing ref desk about that. Mike Dhu (talk) 22:07, 26 January 2016 (UTC)[reply]

January 26

Atmosphere of Venus / compressing C5O10N2

I was musing over a Venus terraform in a very crude way, considering the partial pressures in its 92 bar atmosphere relative to Earth's total 1 atm (which is why the Venus figures can exceed 100%):

CO2: 8878% vs 0.040%
CO: 0.16% vs ~0
N2: 322% vs 78.084%
Ar: 0.64% vs 0.93%
H2O: 0.18% vs 0.001%-5%
He: 0.011% vs 0.0005% (waaaat?)
Ne: 0.064% vs 0.0018%
SO2: 1.38% vs ~0

I get that in order to make Venus' atmosphere Earthlike, you basically have to dump 364 pounds of C, 970 pounds of O, 36 pounds of N and a mere 0.1 pounds of S onto every square inch of the planet's surface - or beneath it. Ignoring the S, which can clearly be made a solid but seems too small a component to figure into a bulk formula, that's an empirical formula of C5O10N2. Now we saw in a thread above that even bulk CO2 can be pressurized into an extended solid, but is there any way to predict what this composition would turn into, at what pressure? For example, I'm thinking you might get two NO2 groups on a C5O6 extended structure (almost a polyketone, though adjacent ketones are high-energy and disfavored) ... but I certainly don't know that. It seems empirically like a doable experiment, but has anyone done it? (yes, I realize that this is nearly 1 million times harder than fixing global warming, possibly using similar carbon sequestration technology, without a local infrastructure, and so it is not going to be done by any normal means we know of today... and the planet is still extremely dry)

Another question: why so much helium? I thought the conventional wisdom was that light gasses are lost, but if you sequester away the other stuff there's like 200 times more helium on Venus than here - even though the argon level is lower. and there's also 60 times more neon. I thought a noble gas was a noble gas... There might be an answer at [12] but I didn't have access, and can't riddle it out by the abstract. Wnt (talk) 17:27, 26 January 2016 (UTC)[reply]
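A back-of-envelope check of the per-square-inch arithmetic in the post (a sketch using the standard textbook values of 96.5% CO2 / 3.5% N2 by mole and g = 8.87 m/s²; the small differences from the quoted figures come down to rounding and whether one subtracts the Earth-like residue that would be left behind):

```python
# Sketch: column mass of Venus's atmosphere per square inch, split by element.
P, g = 92e5, 8.87                        # surface pressure (Pa), gravity (m/s^2)
col = P / g                              # total column mass, kg/m^2 (~1.04e6)

M_mean = 0.965 * 44.01 + 0.035 * 28.01   # mean molar mass, g/mol (~43.4)
f_co2 = 0.965 * 44.01 / M_mean           # CO2 mass fraction of the column
f_n2 = 0.035 * 28.01 / M_mean            # N2 mass fraction

IN2, LB = 0.0254 ** 2, 2.2046            # m^2 per square inch, lb per kg
for name, kg_m2 in [("C", col * f_co2 * 12.011 / 44.01),
                    ("O", col * f_co2 * 31.998 / 44.01),
                    ("N", col * f_n2)]:
    print(f"{name}: ~{kg_m2 * IN2 * LB:.0f} lb per square inch")  # ~393, ~1048, ~33
```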

The thermosphere of Venus is much colder than that of Earth. Therefore the losses of light gases are smaller. Ruslik_Zero 20:37, 26 January 2016 (UTC)[reply]
Well! I went back, looked at that article and atmosphere of Venus - sure enough, Earth's thermosphere can get up to 2500 C, and Venus can get up to ... 120 C or so. I have no idea why. But I thought the popular wisdom was that all Venus' hydrogen was lost to space. How can that be so if helium doesn't escape nearly as much as on Earth due to the colder thermosphere? I clearly have more shocks in store for me here. Wnt (talk) 14:26, 27 January 2016 (UTC)[reply]
I assume it's cooler because of Venus's albedo. Venus's thick atmosphere reflects most of the incoming sunlight (as demonstrated by the fact you can't see the planet's surface from space). Hydrogen is of course even lighter than helium—only a fourth the atomic weight—so that still might not be enough to retain hydrogen. These are just educated guesses; I'm not a planetary scientist. --71.119.131.184 (talk) 11:11, 28 January 2016 (UTC)[reply]
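To put numbers on Ruslik's point about the cold thermosphere, here is a sketch of how sharply thermal (Jeans) escape depends on exobase temperature; the round-number temperatures and escape speeds are assumptions for illustration, not measured exobase values.

```python
# Sketch: Jeans escape parameter for helium. The escaping fraction scales
# roughly as (1 + lambda) * exp(-lambda), so a colder exobase suppresses
# escape by tens of orders of magnitude.
import math

k = 1.381e-23                        # Boltzmann constant, J/K
m_He = 4.0026 * 1.6605e-27           # helium atomic mass, kg

def jeans_lambda(T, v_esc):
    v_p = math.sqrt(2 * k * T / m_He)    # most probable thermal speed, m/s
    return (v_esc / v_p) ** 2

for body, T, v_esc in [("Earth (hot thermosphere)", 1800.0, 11.2e3),
                       ("Venus (cold thermosphere)", 350.0, 10.4e3)]:
    lam = jeans_lambda(T, v_esc)
    print(f"{body}: lambda = {lam:5.1f}, escape tail ~ {(1 + lam) * math.exp(-lam):.1e}")
```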

Research article query

According to the article found at [13], figure 2 implies that as the hydrogen content in chromium approaches zero, the FCC and HCP crystal structures of chromium become stable at ambient pressure. This is a problem, because it is known for a fact that BCC is the only stable structure at ambient pressure and temperature. How does one reconcile this implied inconsistency? Plasmic Physics (talk) 20:42, 26 January 2016 (UTC)[reply]

I don't know about "known for a fact" (or crystal phases of chromium hydride, at that). In general, science is always preliminary. But in this case, the system is at 150°C, which certainly is not "ambient temperature". --Stephan Schulz (talk) 22:12, 26 January 2016 (UTC)[reply]
Chromium with zero percent hydrogen can hardly be considered as chromium hydride, can it? Looking at the pressure-temperature phase diagram of chromium, it should remain as BCC up to its melting point. So, the system being at 150 degrees should not matter. Plasmic Physics (talk) 22:54, 26 January 2016 (UTC)[reply]

Earliest predictions of man on moon in 60s?

On May 25, 1961, President Kennedy announced the US goal to send men to the moon and return them, 'by the end of the decade'. Are there records of earlier predictions by scientists, policy makers or governments (not looking for the 'Jules Verne' long literary history) that men could land on the moon in the 1960s, or by when? I'm interested in finding the first informed, professional prediction that proved correct - men walking on the moon before the end of the 1960s. Thanks if you can point to a link or citation.

There were extensive predictions, varying in accuracy and credibility, from fringe lunatics to public statements by esteemed scientists, major movers and decision-makers! I have a stack of moon books at home written in the 1940s and 1950s; they make for great historical reading. If you'd like a complete listing, I can provide titles and authors.
Perhaps the first place to start is our article on Wernher von Braun:
"In 1930, von Braun attended a presentation given by Auguste Piccard. After the talk the young student approached the famous pioneer of high-altitude balloon flight, and stated to him: "You know, I plan on traveling to the Moon at some time." Piccard is said to have responded with encouraging words."
By the mid-1950s, the accuracy of the mission-statement was becoming very concrete and there are hundreds of scientific publications that aptly describe how a manned moon mission would probably look.
The reason that Mr. Kennedy's statement was so important was that he had the power to finance the program.
This 1979 documentary by James Burke, The Other Side of the Moon, is spectacular. He overviews the political climate, including interviews with several scientists and program managers. Among the key statements (somewhere probably around half an hour into the documentary) is a description of how they managed to get Mr. Kennedy to make a statement: it had been decided that it would be politically expedient for Vice President Johnson to formally advise the president in writing that a moon mission would be possible, and that it would be politically expedient for the President to proceed to order a study, and eventually make a formal public statement. The discussions that led to that point were quite extensive.
Nimur (talk) 23:34, 26 January 2016 (UTC)[reply]
See Moon in fiction, which lists many previous stories of human landings on the moon. Use your own judgment as to how realistic any of the twentieth-century stories were. Robert McClenon (talk) 23:37, 26 January 2016 (UTC)[reply]
But the question was about "informed, professional predictions", not fiction. --76.69.45.64 (talk) 05:59, 27 January 2016 (UTC)[reply]

I'm the OP: After further research myself, I came across this, which I think is the type of information I was looking for. Can anyone get closer, more specific, earlier in terms of the type of prediction this demonstrates: "Copenhagen, Denmark, Jan 8 (1960) (AP) - A Soviet rocket expert said today that man may set his foot on the moon some time in the 1960s. Stopping over at Kastrup airport en route to an international conference in Nice, France, Lt. Gen. Anatoly A. Blagonravov told newsmen it is still too early to set a date for the firing of a manned moon rocket, 'but I would consider it probable that it may be sent to the moon within a brief period of years, possibly in 10 years.'" - ...I find it interesting that it was a Soviet expert that made this prediction, and that he predates Kennedy's declaration. This guy turns out to be pretty interesting, as he was key in representing the Soviets in all talks on cooperation and joint space activities. Wikipedia has a brief article about him, but amending it to include the information about his prediction is above my skill set, I think. I also wonder if this info was hard to find or not obvious to Wikipedia researchers as the guy was a Soviet instead of an American? Research bias? I'm not accusing, just wondering... — Preceding unsigned comment added by 94.210.130.103 (talk) 10:17, 27 January 2016 (UTC)[reply]

I would note that moon shots weren't really an independent new technology to predict. To this day, moon shots and other orbital activities serve as a sort of respectable face for ICBM development, and I believe on examination many components can be found in common. Since the latter was seen as a very high priority and subject to great planning and anticipation, the former should have been more predictable than if it were done solely by ivory-tower researchers looking to have a space jaunt. Wnt (talk) 15:34, 27 January 2016 (UTC)[reply]


Here's a good starting book: Realities of Space Travel (1957), edited by Leonard Carter. This book details the mechanisms of moon- and interplanetary flight, endorsed by several scientists from the American Institute of Physics, the British Interplanetary Society, and so on. Understand that this book, published in 1957, predates the formal existence of NASA...
The book is full of citations, science, math, technology reviews, and so on.
The introduction to this book walks the reader through an orbital dynamics equation to calculate the necessary energy budget for a manned rocket flight to the moon.
Later chapters detail the state of the art in technology, including rocket design, electronics, biomedical factors, and so on.
Nimur (talk) 17:15, 27 January 2016 (UTC)[reply]
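For flavor, the energy-budget exercise such books walk through can be reproduced in a few lines; this is a minimal sketch (assumed 200 km parking orbit, two-body Earth gravity only, the Moon's own pull and all losses ignored), not the book's actual worked example.

```python
# Sketch: vis-viva delta-v from a low parking orbit onto a transfer
# ellipse whose apogee reaches lunar distance.
import math

mu = 3.986e14                    # Earth's GM, m^3/s^2
r1 = 6.371e6 + 200e3             # assumed 200 km circular parking orbit, m
r2 = 384.4e6                     # mean Earth-Moon distance, m

v_circ = math.sqrt(mu / r1)                    # circular orbit speed
a = (r1 + r2) / 2                              # transfer-ellipse semi-major axis
v_peri = math.sqrt(mu * (2 / r1 - 1 / a))      # vis-viva speed at perigee

print(f"parking orbit speed:        {v_circ / 1e3:.2f} km/s")             # ~7.8
print(f"trans-lunar injection burn: {(v_peri - v_circ) / 1e3:.2f} km/s")  # ~3.1
```

The ~3.1 km/s figure matches what Apollo-era trans-lunar injection burns actually required, which is why 1950s authors could already budget the mission credibly.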

January 27

Can a man's epididymis grow back if *all* of it is removed?

For reference: Epididymis. Futurist110 (talk) 00:05, 27 January 2016 (UTC)[reply]

I don't think so. Organs generally don't regenerate. Semi-exceptions: liver and brain. Why are you asking? Are you thinking of something like a vasectomy spontaneously reversing, which can happen? If so, that involves the vas deferens, not the epididymis. --71.119.131.184 (talk) 00:26, 27 January 2016 (UTC)[reply]
Why exactly can the vas deferens grow back but not the epididymis, though? Futurist110 (talk) 02:34, 27 January 2016 (UTC)[reply]
Indeed, if one tube/duct can grow back, then why exactly can't another tube/duct likewise grow back? Futurist110 (talk) 02:36, 27 January 2016 (UTC)[reply]
Good question. It doesn't really "grow back" in the sense of sprouting a new one from scratch. Exact vasectomy methods can vary a little (see the article), but in general the vas deferens is severed. Sometimes a portion is removed, but sometimes it's just cut, and the cut segments closed off with surgical clips or something along those lines. So, you can get minor tissue growth that winds up reconnecting the segments. Some additional procedures, like forming a tissue barrier between the vas deferens segments, have been tried to reduce the likelihood of spontaneous reversal. --71.119.131.184 (talk) 02:45, 27 January 2016 (UTC)[reply]
OK. Also, though, out of curiosity--can the vas deferens grow back if *all* of it is surgically removed? Futurist110 (talk) 02:58, 27 January 2016 (UTC)[reply]
In general, the less differentiated a tissue is, the easier it is for it to regenerate. The vas deferens is a fairly simple muscular tube, in contrast to the epididymis and testes, which are specialized organs, so it's not surprising that you can get some regrowth of the vas deferens. --71.119.131.184 (talk) 02:45, 27 January 2016 (UTC)[reply]
Is regrowth of the epididymis and testicles (after *complete* removal of the epididymis and testicles, that is) completely impossible or merely unlikely, though?
Also, please pardon my ignorance, but isn't the epididymis a tube just like the vas deferens is? Futurist110 (talk) 02:58, 27 January 2016 (UTC)[reply]
The epididymis is a 'tube', but a longer and more complex one, and if it's removed completely it won't grow back (parts of it may, but not the entire connection). If you snip the tube, then you've got a similar situation to a vasectomy, which can in rare circumstances reverse (repair) itself, but that's a very small step as opposed to regenerating an entire epididymis. The body is good at 'protecting' itself by repairing damage, whether that's by growing new tissue to reverse a vasectomy or by repairing a damaged organ. What we lack is the regenerative capability to re-grow any parts that have been removed/destroyed completely, including the epididymis. I think a quick google search will make it clear that testicles don't grow back. Mike Dhu (talk) 03:21, 27 January 2016 (UTC)[reply]
An aside on skin growth and stretch marks
Another exception is skin, which grows just fine if given enough time (like when you gain weight). But if you try to grow it too quickly, you get stretch marks and scars. StuRat (talk) 00:29, 27 January 2016 (UTC) [reply]
That's not an exception, just wrong. Skin contains elastic fibers that allow it to stretch during weight gain and to recoil or shrink during weight loss. So someone who gains a large amount of weight does not grow new skin. Their skin stretches to accommodate the accumulation of fat tissue. Scicurious (talk) 14:13, 27 January 2016 (UTC)[reply]
The definition of "growth" here may be tricky. According to these two papers ( [14][15] ) the skin consists of epithelial proliferative units (though there may be some equivocation on the details) and each unit has its own stem cell. Given the chance, they clonally expand, but a unit without a stem cell can also be colonized. If you simply look at a section of skin, you're not going to see a lot of gaps where cells no longer contact -- something is taking up the slack. Yet at the same time, the hair follicles don't increase in number, and they have their own stem cells that can provide regeneration in case of injury. So when a baby's scalp becomes a woman's, you can say her skin grew, in the sense that it is probably thicker and stronger and has more cells in it than when she was a baby. But yet, the hair follicles are no more numerous, so the regenerative potential from that source is presumably reduced. I'm not sure exactly what happens to the average EPU size. Wnt (talk) 14:58, 27 January 2016 (UTC)[reply]
Just compare the surface area of a man at 150 lbs to the surface area of the same man at 350 lbs a few years later. The skin "grew" under most any definition: more of it, more cells, new cells, more area, more mass, etc. Here's a nice paper that breaks down relative skin proportion in mice [16]. Unfortunately it's about two different strains of mice rather than weight gain within strains, but the point is that the bigger mice get more skin as they grow, just as humans grow more skin as they grow. SemanticMantis (talk) 15:11, 27 January 2016 (UTC)[reply]
Ahem. Stretch_marks "are often the result of the rapid stretching of the skin associated with rapid growth or rapid weight changes." That sentence is not sourced, but see table 2 here [17], and perhaps add it to the article for your penance :) Now, this may have been avoided had Stu given a reference, and you also are correct that the skin can stretch a lot. Some specific types of stretch marks are less influenced by weight gain, but that's a small detail in an issue unrelated to OP. Please let's endeavor to include references when posting claims to the ref desks. Thanks, SemanticMantis (talk) 15:04, 27 January 2016 (UTC)[reply]

U.S. currency subjected to microwaves

See HERE. Is there any validity to this? It's hard to filter out the nonsense/conspiracies. If so, what is the mechanism of action? 199.19.248.82 (talk) 02:07, 27 January 2016 (UTC)[reply]

I wouldn't say it's that hard. YouTube videos, and anything associated with Alex Jones or with conspiracy-theory sites like godlikeproductions.com and prisonplanet.com, are obvious things to filter out. Of the top results, that will probably leave you with [18], [19] & [20]. A quick read should suggest both the first and second links are useful. In fact, despite the somewhat uncertain URL, the snippet probably visible in Google for the first link does suggest it may be useful, which is confirmed by reading.

A critical eye* on the lower-ranked results will probably find others like [21] which are useful.

I don't think it's possible to be certain how big a factor the metallic ink (or perhaps just the ink) was in the cases when the bills did burn, and how much of it is simply that stacking a bunch of pieces of paper and microwaving them for long enough will generally make them catch fire. Suffice it to say they are probably both factors. It's notable that these stories claiming evil-doing lack controls: they didn't try a stack of similar-sized ordinary paper (obviously it would be very difficult to obtain the paper used for bank notes).

BTW, [22] isn't actually too bad in this instance if you ignore the crazies. It does look like one of the more critical posters there made a mistake. While the idea that some minimum-wage employee is going to be microwaving $1000 worth of $20 bills, or heck, that they would just so happen to have that much cash in their wallet, isn't particularly believable, if you read the original story carefully [23], the min-wage employee was someone else, not the person who had the money. Still, as some of the other sources above point out, there are obvious red flags in the original story, like the fact that they claimed to microwave over $1000 in $20 bills but only show 30 bills (i.e. $600) there. And their claim that the burning is uniform isn't really true. The amount of burning shown varies significantly, and while it's normally in the head, sometimes it seems much more in the left eye than the right.

Critical eye = at a basic level, anyone who seriously believes there are RFID tags in bank notes would be best ignored. And while forum results can have useful info, it often pays to avoid them due to the number of crazies, unless you can't find anything better. Instead, concentrate on pages that sound like whoever is behind them is trustworthy, and check them out by reading. To some extent, anything which sounds like it's claiming to be scientific is potentially useful in this instance, since while there are a lot of sites and people who claim science when they are actually into pseudoscience, this is much more common with stuff like alternative medicine, climate change deniers or anti-evolutionists than it is with conspiracy theories about RFID tags in banknotes.

A lot of this can be assessed without having to even open the sites/links in the Google search results. Some others do depend on existing knowledge, e.g. knowing the URLs for Alex Jones or conspiracy theorist sites. Still, you only need a really brief look to realise godlikeproductions or prisonplanet are not places you should count on for useful info.

Nil Einne (talk) 08:55, 27 January 2016 (UTC)[reply]

Given how extensive known privacy invasions have become, and how obvious the government's motive for spotting large amounts of money is, I don't think condescension is deserved. The people I saw made various hypotheses and tested them. However, some of the assumptions may be questionable. For example, I doubt that RFID is the only way to track a bill by penetrating EM radiation, and I doubt that RFID chips inevitably catch on fire in a microwave. I am very suspicious of the uses of terahertz radiation and lower frequency radio waves - obviously, the higher the frequency/shorter the wavelength, the smaller the receiver can be and the more readily it can dissipate heat to its surroundings. Alternatively, terahertz can simply penetrate the human body, as with airport scanners, and so if someone designed a set of terahertz dyes, probably some conjugated double bond system that goes on a really long but tightly controlled distance, then they can have their damned identifying codes marked out in a way you will see only if you can scan through the terahertz spectrum with a more or less monochromatic emitter and view what is transmitted and reflected. If I see someone do that experiment on a bill, I'll believe it's not being tracked... maybe... otherwise, I should assume it is (it's just a question of whether those interested in you are important enough to have access) Wnt (talk) 15:13, 27 January 2016 (UTC)[reply]
Their tests were very poorly planned (if you genuinely believe that something has an RFID tag, either feel for the tag or look at it under a microscope, as someone else who didn't believe their nonsense did; don't microwave it). And, as I already said, they lacked even the most basic control for even the stupid test they were doing. And they were either incapable of counting, or didn't show all their results. Results which didn't even show what they claimed to show. So condescension is well and truly deserved.

BTW most terahertz radiation can barely penetrate the human body (our article says "penetrate several millimeters of tissue with low water content"). The main purpose of most airport scanners using terahertz radiation is to penetrate clothes not the human body (they may be able to see the body which is quite different from penetrating the body).

Note that in any case, the issue of whether bills are being tracked is unrelated to the question (unless you're claiming the cause of the notes catching fire in microwaves really was RFID chips, which it doesn't seem you are) and wasn't discussed by anyone here before you. I only mentioned that the specific claim made in some sources discussing microwaving money (the presence of RFID chips in money as a cause for them catching fire) was incredibly stupid.

Nil Einne (talk) 19:23, 27 January 2016 (UTC)[reply]

This is a multi-layered question:
  1. Does paper money catch fire in a microwave? (We don't trust YouTube videos!)
  2. Does all paper catch fire in a microwave when stacked in the same manner as the money? (All experiments need a 'control')
  3. If (1) is true and (2) is false - then is the fact that the money's paper is made of cotton rather than wood pulp the cause of this?
  4. If (3) is false - then are metal particles in the anti-counterfeiting ink the cause of this difference?
  5. If (4) is false - then...and so on...
The idea that there are sneaky RFID tags embedded in the money seems really unlikely given the size of antenna you'd need - and the fact that you'd see them if you held the bill up to the light. So that comes in as question #20 or so after we've speculated about the fact that maybe 30% of all paper money has traces of cocaine in it and maybe that's what catches fire.
If you're determined to find a conspiracy, what seems more plausible is that the pattern made by the metal in the ink could cause some kind of unique signature in reflected or transmitted radio waves - and this would somehow make the money detectable...but the behavior of the money in a microwave oven doesn't really prove that either way. In any case, that seems far more plausible than RFID tags.
This image shows that paper money is clearly detectable in X-rays - so for sure the metal inks make it detectable in some manner.
Microwave ovens are a continual source of surprising effects. Cut a grape *almost* in half, leaving a thin shred of skin between the halves - lay the two halves spread apart inside a microwave oven and zap them - and you get an impressive light show of sparks. Does this prove that government can 'track' grapes? No! That's ridiculous! So why would you assume that money catching fire in a microwave would imply that?
We know that putting some kinds of metal into a microwave causes unusual effects - so why not the metal inks or the cotton fibers or some other effect inherent in the structure of paper money?
SteveBaker (talk) 13:58, 28 January 2016 (UTC)[reply]
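As a rough quantitative aside on why thin metallic layers behave so badly in microwave ovens (a sketch with illustrative numbers, assuming the ink behaves like a thin aluminum-like film, which is itself an assumption): at the 2.45 GHz of a household oven, the electromagnetic skin depth in a good conductor is

\delta = \sqrt{\frac{2\rho}{\omega \mu_0}} = \sqrt{\frac{2 \times 2.7\times 10^{-8}\,\Omega\,\mathrm{m}}{(2\pi \times 2.45\times 10^{9}\,\mathrm{s^{-1}})(4\pi\times 10^{-7}\,\mathrm{H\,m^{-1}})}} \approx 1.7\,\mu\mathrm{m},

so a metallic film thinner than a couple of microns can't shield the field; it instead dissipates it resistively. That is consistent with thin metal features (foil edges, decorative gilding, possibly metallic security inks) heating dramatically while a thick metal wall does not.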

Are dogs racist?

Do dogs prefer their own breed for mating, or at least, are they more aggressive towards breeds far removed from their own? --Scicurious (talk) 12:46, 27 January 2016 (UTC)[reply]

Intriguing question. I have absolutely no idea how to answer it though... just picture the kind of laboratory you'd have to set up to try to socialize dogs under highly consistent conditions, then see whether they act differently. I'm tempted just to read anecdotes here, like [24]. Individual people describe dogs with out-of-breed associations, even as others say that you can just tell at a dog show, etc. The existence of the mutt is proof that any breed loyalty is not absolute ... it's also a reminder that the dogs people buy are often not the result of freely assortative (or non-assortative) mating. Wnt (talk) 15:25, 27 January 2016 (UTC)[reply]
The studies will be done more like sociology/ethology, not through controlled exposure experiments. They'll use things like surveys and observations and medical records and lots of relatively fancy statistics. E.g. these [25] people have survey data on dog-dog aggression by breed, but I can't see that they reported it! Even if breed was not a significant factor, they should say so... This paper [26] does have relevant data (tables 2, 3), but the data are sparse and the breed of the other dog is not reported. Here are a few more scholarly papers that look promising [27] [28]. OP can find many more by searching Google Scholar for things like /dog breed intraspecies aggression/. If OP is interested in anecdotes and OR (which is potentially valuable here), I'd suggest asking at a dog forum. SemanticMantis (talk) 15:56, 27 January 2016 (UTC)[reply]
If you go to any large dog park, you'll see dogs of all breeds playing together - even when there are enough of one common breed for them to potentially group together. So it seems rather unlikely that they care very much. The only preferences I think I see are that there seems to be some kind of broad preference for other dogs of similar size. Our lab gets visibly frustrated with very small dogs...but whether that is due to their behavioral tendencies is hard to tell. SteveBaker (talk) 16:04, 27 January 2016 (UTC)[reply]
Steve, I just misread your message and began wondering why your laboratory was getting frustrated with small dogs! Presumably, you mean your labrador! This breed, rather surprisingly, has topped several lists for aggression - particularly when their home territory is "invaded" by people such as postmen. Regarding the OP, I have no references to support this, but I very much doubt there would be a psychological racism about mating amongst dogs. There may be preferences according to size, but just the other day I saw a rather humorous photo of a male Chihuahua perched on the back of a female Great Dane so he could mate with her. Very probably staged though. DrChrissy (talk) 16:24, 27 January 2016 (UTC)[reply]

Excessive inbreeding as practiced by humans on pedigree dogs (controversy) has caused genetic defects that would not survive under natural selection, while there is likely evolutionary survival value to hybrid vigor. Dogs sensibly rely more on their Vomeronasal organ to evaluate the pheromones of a potential mate than on any version of a Kennel club breed checklist. AllBestFaith (talk) 17:17, 27 January 2016 (UTC)[reply]

It's not about mating and it's not about dogs, but rats can be racist; see Through the Wormhole S06E01, "Are We All Bigots". Of course, they can be educated not to be racist. Tgeorgescu (talk) 20:28, 27 January 2016 (UTC)[reply]

I interpreted the headline differently: I once knew a dog who was racist regarding humans. He had been mistreated by Mexican men, and was consequently suspicious of all men, but he got crazy in the presence of Mexicans. — Sebastian 18:10, 29 January 2016 (UTC)[reply]

Why don't some species have a common name?

Many species have a common name. Human. Squirrel. Rat. Dog. Whale. Dolphin. Fern. Some species don't seem to have common names. Entamoeba histolytica. Staphylococcus aureus. Candida albicans. Why don't scientists invent common names for specific parasites, bacteria, and fungi? Instead of Staphylococcus aureus, which can be a mouthful to say, the common name may be Staphaur bacteria. 140.254.70.165 (talk) 12:49, 27 January 2016 (UTC)[reply]

Also, of the above names, only two are single species in common use, H. sapiens and C. familiaris. Robert McClenon (talk) 21:59, 27 January 2016 (UTC)[reply]
My answer as to why scientists don't invent common names is that they don't need to, because scientists refer to the species by its taxonomic name. It is up to non-scientists to invent common names, since the scientists are satisfied with the scientific name. Why journalists and others don't invent common names for every species is described below. Robert McClenon (talk) 21:56, 27 January 2016 (UTC)[reply]
It is "an attempt to make it possible for members of the general public (including such interested parties as fishermen, farmers, etc.) to be able to refer to" them, according to common name. I find it difficult to find an exception, but if common people relate somehow to a species, then a common name exists. Otherwise not. --Scicurious (talk) 12:57, 27 January 2016 (UTC)[reply]
As to fishermen, I will note that often the same common word, such as "trout" or "bass", may be used differently in different English-speaking regions. Fishermen who are aware of regional inconsistencies in naming will often use the unambiguous scientific name to disambiguate. Robert McClenon (talk) 21:56, 27 January 2016 (UTC)[reply]
It could also be that scientists just aren't that creative with names... FrameDrag (talk) 14:46, 27 January 2016 (UTC)[reply]
It's the other way round. Scientists give each known species a name. Common people are not prolific enough to keep up with them. Scicurious (talk) 14:54, 27 January 2016 (UTC) [reply]
Staphylococcus aureus is known just as 'Staph', similarly Streptococcal pharyngitis is known as 'Strep'[29], so those are the common names. Mikenorton (talk) 13:10, 27 January 2016 (UTC)[reply]
Golden staph.
Sleigh (talk) 13:49, 27 January 2016 (UTC)[reply]
Bacteria are a special case, where the usual test for a species, whether it breeds with itself and not with related species, does not apply. This results among other things in so-called species, such as E. coli, that consist of a multitude of so-called varieties that are really so different in their behavior that they are probably multiple species. But the question originally had to do primarily with plants and animals. Robert McClenon (talk) 21:53, 27 January 2016 (UTC)[reply]
Does that refer to the color of the snot ? :-) StuRat (talk) 16:40, 27 January 2016 (UTC) [reply]
The thing about common names is that they need to be popular and commonly used - and it's hard to dictate that. People name things if they need to - and not if they don't. People don't need common names for organisms they'll never encounter or care about. Also, there are far too many organisms out there to have short, simple, memorable names for all of them. We tend to lump together large groups of organisms into one common name. "Rat" (to pick but one from your list) formally contains 64 species...but our Rat article lists over 30 other animals that are commonly called "rat" - but which aren't even a part of the genus Rattus. So allocating these informal names would soon become kinda chaotic and nightmarish. Which is why we have the Latin binomial system in the first place. That said, scientists very often do invent common names for things - so, for example, Homo floresiensis goes by the common name "hobbit" because the team that discovered this extinct species liked the name and it seemed appropriate enough that it's caught on. Whether that kind of common name 'catches on' is a matter of culture. All efforts to get members of the public to understand that there is no difference between a "mushroom" and a "toadstool" and to adopt a single common name fail, because the public believe that there are two distinct groups of fungi even though there is no taxonomic difference between fungi tagged with one or the other of those two terms. Another problem is that common names are (potentially) different in every language...so would you have these scientists invent, document, and propagate around 5,000 common names - one in each modern human language? It's tempting to suggest that the same name would be employed in every language - but pronunciation difficulties and overlaps with names for existing organisms or culturally sensitive terms would make that all but impossible. SteveBaker (talk) 16:00, 27 January 2016 (UTC)[reply]
The OP is overlooking the more obvious Hippopotamus and Rhinoceros, which have local names but in English are known by these Latin-based names - or by "hippo" and "rhino", which mean "horse" and "nose" respectively. ←Baseball Bugs What's up, Doc? carrots→ 17:07, 27 January 2016 (UTC)[reply]
Of course, the full names in Greek being "River Horse" and "Horned Nose". (The names derive originally from Greek rather than Latin, though they arrive in English through Latin transcription. The native Latin word meaning horse is "equus", cf. equine. The native Latin word meaning nose is "nasus", hence "nasal".) Of course, both names are wrong. Hippos are not particularly closely related to horses, and the growths on the faces of rhinos are not true horns. So even in the original Greek, neither name is related to actual biology in any way. Such is language. --Jayron32 20:45, 27 January 2016 (UTC)[reply]
  • The other issue is that the vast majority of species don't have common English names at all, because English speakers don't commonly encounter them. Consider the 400,000 different species of Beetle. Of course, we have names for the beetles English-speaking people run into every day, like ladybugs/ladybirds or junebugs, or Japanese beetles (even these names cover multiple species though, and often mean different unrelated species in different geographies). We have 400,000 different Latin binomial names for these species, because each needs a unique identifier, but seriously, we don't also need 400,000 unique different English names for them, especially for beetles no one runs into in their everyday lives. --Jayron32 20:37, 27 January 2016 (UTC)[reply]
In many cases, the differences between the species may not be significant enough that a non-zoologist recognizes them as different species. The common name may refer to a genus, a family, or an order. Most beetles are just called beetles, unless someone has a reason to identify them more specifically, such as "Japanese beetle" as a garden pest. Even with mammals, and even with large mammals, people don't always see the need for distinctive common names. "Zebra" and "elephant" are not species but groups of species. There usually really isn't a need for a common name for every species. Robert McClenon (talk) 21:50, 27 January 2016 (UTC)[reply]
Yes, but every species and subspecies of zebra also has a common name, like Chapman's zebra and Hartmann's mountain zebra. In the UK, there are common names for hundreds of different types of beetles, for example, those without common names are ones that are really uncommon - beetles nobody refers to in everyday speech. Alansplodge (talk) 22:19, 28 January 2016 (UTC)[reply]

Disadvantages of iontophoresis for administering drugs ?

Iontophoresis#Therapeutic_uses doesn't list the disadvantages, but they must be substantial, or I would have expected this method to have replaced needle injections entirely. So, what are the disadvantages ? I'd like to add them to the article. StuRat (talk) 16:43, 27 January 2016 (UTC)[reply]

This [30] is a very specific study about a specific thing, but it says that in that one case (one problem, one treatment, one drug, etc.) "In contrast, electromotive administration of lidocaine resulted in transient pain relief only" compared to other treatments, which were concluded to be better. Here is a nice literature review [31] that has lots of other good refs. I'm no doctor, but I don't get the idea that it was ever intended to replace needles entirely. For one, it seems much slower. Another is that the permeability of skin is different with regard to different-sized compounds, so some things may be too big to pass through easily. Another potential factor is the stability and reactiveness of the compounds to the electrical field. It's also clearly more expensive and rather new, compared to injections via syringe and hypodermic needle, which are cheap and have been thoroughly studied for efficacy. If you search Google Scholar, you'll see lots of stuff about bladders and chemotherapy, and nothing about using it as a method to deliver morphine or flu vaccine. I'll leave you to speculate why that might be... Also, I think you are vastly underestimating the time scale at which the medical field changes. The key ref from the article [32] is preaching that we should do more research on this, and cites small trials. And it is only from 2012! SemanticMantis (talk) 17:17, 27 January 2016 (UTC)[reply]
I just saw it in a 1982 episode of Quincy, M.E., so it's been around for at least 34 years. If it really could replace all injections, then I would think it would have, by now. StuRat (talk) 17:28, 27 January 2016 (UTC)[reply]
Well, sure, the idea has been around for a while. I'd also suggest a TV show isn't a great record of medical fact. My second ref says "The idea of using electric current to allow transcutaneous drug penetration can probably be attributed to the work done by Veratti in 1745." I agree it can't replace all injections. I agree there must be things it won't work for, and cases where syringes are just better. I'm trying to help you find out what those cases and things are. The references I gave above, and especially the refs within the second, discuss some difficulties and problems, but you'll have to read them to see what they're really talking about and to understand the details. To clarify what I said above, EMDA only works well with an ionized drug. That alone would probably be useful to clarify in the article. As for timing, when I see research articles on EMDA written in the past few years talking about "potential" and "new frontiers," I conclude it is not yet widely used for many things, but it may become more widespread in the future. Maybe someone else wants to find additional references or summarize them for you in more detail, but that's all I've got for now. SemanticMantis (talk) 17:58, 27 January 2016 (UTC)[reply]
I think it's unlikely a TV show would completely make up a medical procedure that didn't exist. StuRat (talk) 21:08, 27 January 2016 (UTC)[reply]
As for flu vaccine, picture running an electrophoresis gel with a mixture of large protein complexes and a small molecule like lidocaine. I'm thinking you wouldn't get one out of the well before the other runs off the bottom. Flu antigen is just not going to go far under the influence of a few plus or minus charges; it's like putting a square sail on a supertanker. Wnt (talk) 18:31, 27 January 2016 (UTC)[reply]
Isn't there a nasal spray flu vaccine ? That implies that it can pass through the skin on the inside of the nose. Is the diff between that skin and regular skin so much that electricity can't overcome it ? StuRat (talk) 21:10, 27 January 2016 (UTC)[reply]
Live attenuated influenza vaccine goes through cells by the usual receptor-mediated process. Even the most delicate mucosa, like the rectum, shouldn't let viruses or other large proteins slip past - HIV actually finds its CD4 receptors on the epithelial cells, as far as I recall. Wnt (talk) 23:15, 27 January 2016 (UTC)[reply]
An obvious limitation is spelled out in the article but is easily missed: the substance needs to be charged. But for chemicals to penetrate where they need to be in cells, often you want them to be neutral. To give kind of a bad example, crack cocaine is a neutral alkaloid, while the cocaine powder is a salt, and clearly the user notices the difference. There are many substances of course which can be charged if the pH is weird enough... but I think that means exposing not only the outside of your skin but the interstices of the cells to the weird pH; otherwise the stuff could end up stuck somewhere in the outer layers of skin with no prospect of moving further. That said, testing my idea, I found [33] which says that lidocaine and fentanyl have been delivered by this route. Fentanyl has a strongest basic pKa of 8.77 [34] so apparently this is not insurmountable. That reference also says it has been used on pilocarpine and ... tap water??? Reffed to this, which says the mechanism is not completely understood (!) but I don't expect to have access. Well, this is biology, a field that is under no obligation to make sense, since the cells can react however they want to an applied current. I should look further... Wnt (talk) 18:25, 27 January 2016 (UTC)[reply]
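To put a rough number on the fentanyl case (an arithmetic sketch using the pKa quoted above and the standard Henderson-Hasselbalch relation for a base, nothing from the cited papers themselves): at physiological pH 7.4, the protonated (charged) fraction is

f_{\mathrm{BH^+}} = \frac{1}{1 + 10^{\,\mathrm{pH} - \mathrm{p}K_a}} = \frac{1}{1 + 10^{\,7.4 - 8.77}} \approx 0.96,

so roughly 96% of the drug carries a positive charge at that pH, which would explain why it can be driven iontophoretically without resorting to an extreme pH.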
Yes, that tap water mention in our article shocked me, too. Is it really safe to inject that, or get it under your skin by any other mechanism ? (I realize some is absorbed through the skin when you take a bath, but even that can cause cell damage given enough time.) StuRat (talk) 21:00, 27 January 2016 (UTC)[reply]
I would call attention again to the question of cost (which SemanticMantis did bring up). I'm pretty sure an iontophoresis machine costs more than a needle and syringe. And even though the main machine is probably reusable, I imagine the part applied to the skin needs to be single-use for hygiene reasons. If the issue is simply the patient disliking injections, there are probably cheaper measures, like applying topical anesthetic before the injection. There's also been increasing attention given to intradermal injections, which require a much smaller needle and thus reduce discomfort. --71.119.131.184 (talk) 05:01, 28 January 2016 (UTC)[reply]
There are lots of issues with injections, beyond discomfort. It is an injury after all, and repeated injuries to the same area cause cumulative damage. They somewhat reduce this problem by changing injection sites, but for people who need constant injections, it's still an issue. StuRat (talk) 05:44, 28 January 2016 (UTC)[reply]

As for it being slower than an injection, that could actually be an advantage. My Dad had iron injections, and there was apparently a problem with too much iron in too small of an area, causing severe cramps. If that could be done more slowly, hopefully the iron would have time to distribute more evenly. They could also do this with a slow IV drip, but then you have the issue of that excess fluid. StuRat (talk) 05:48, 28 January 2016 (UTC)[reply]

Why do humans around the world cover the genitals?

Depending on the culture, humans may or may not cover the breasts or the nipples. However, across most cultures, it seems that humans cover the genitals. Is this a universal human trait? Are there human societies that don't cover the genitals? I remember watching a film adaptation of Romeo and Juliet, and the setting looked as if it took place during the Italian renaissance. The men in the motion picture dressed themselves in long pants that really highlighted their genitals. But they still wore clothing that covered them. 140.254.229.129 (talk) 18:29, 27 January 2016 (UTC)[reply]

We have many good articles that relate to this issue. See modesty, nudity, nudism, taboo, as well ass public morality and mores for starters. No, the trait of hiding one's genitals is not completely universal among humans. If you look through the articles above, you'll see there are exceptions in various places/times/cultures. Nature vs. nurture and enculturation may also be worth looking in to. SemanticMantis (talk) 18:58, 27 January 2016 (UTC)[reply]
"ass public morality" isn't covered in the linked article. Unless you count the links to regulation of sexual matters, prostitution and homosexuality and other articles which may cover ass public morality. Nil Einne (talk) 19:01, 27 January 2016 (UTC)[reply]
Codpiece. Sagittarian Milky Way (talk) 20:10, 27 January 2016 (UTC)[reply]
Merkin too if we're listing such things. SemanticMantis (talk) 20:56, 27 January 2016 (UTC)[reply]
If merkins are outerwear in a Romeo and Juliet film it isn't historically accurate. Sagittarian Milky Way (talk) 22:44, 27 January 2016 (UTC)[reply]
Simplifiable: If merkins are outerwear in a Romeo and Juliet film it isn't historically accurate.  ;-). --Stephan Schulz (talk) 14:05, 28 January 2016 (UTC)[reply]
Aside from moral/sexual issues, there are also practical reasons:
1) Hygiene. Do you really want to sit in a chair after a woman menstruated on it or a man's penile discharge dripped on it ? Of course, exposed anal areas are even more of a hygiene problem, but it's hard to cover one without the other (especially in the case of women).
2) Safety. An exposed penis is a good target for dogs or angry people, as are testicles.
3) Cold. Unless you happen to live in a tropical area, it's likely too cold for exposed genitals a good portion of the year. StuRat (talk) 21:06, 27 January 2016 (UTC)[reply]
On StuRat's #3, note that the Yaghan people of Tierra del Fuego were one of the better-known <insert politically-correct word for "tribes" here> who didn't wear clothes, despite the maximum temperature in the summer in that part of the world being only about 10 C. Tevildo (talk) 22:17, 27 January 2016 (UTC)[reply]
Why did they do that? I've heard that's why it's called Tierra del Fuego (they just stood around fires their whole lives). Sagittarian Milky Way (talk) 22:48, 27 January 2016 (UTC)[reply]
Nay, the article says that they didn't spend *all* their time around the fires ... to the contrary, the women went diving in very cold ocean waters for shellfish. I have no idea, but I wonder if their lifestyle helped with cold adaptation, so these dives wouldn't be fatal?? Wnt (talk) 01:25, 28 January 2016 (UTC)[reply]
The assumption in the question is false. In Ecuador, in some tribes the men tie the penis to a string around the waist. Supposedly this is to keep fish from swimming up it when they bathe in the river. National Geographic in past decades always had photos of naked natives. Edison (talk) 04:46, 28 January 2016 (UTC)[reply]
The candiru is a real thing... though it seems a rare accident to us, I imagine that people out doing survivalism daily have more exposure ... and it sure makes a big impression. Wnt (talk) --- hmmm, reading our article I just linked I'm not so sure it's a real thing. Wnt (talk) 16:18, 28 January 2016 (UTC)[reply]
One idea I've heard: When other primates stand face to face, their naughty bits typically are not readily visible; but we walk around in what amounts to a display posture, which our relic instincts can see as a challenge or invitation (depending on the parties' sexes), creating unnecessary social awkwardness. —Tamfang (talk) 09:27, 28 January 2016 (UTC)[reply]
I hadn't thought of that. Are there any animals with both eyes and genitals or pubes visible from in front instead of behind or on the ground? How close to humans do they get (evolutionarily)? Sagittarian Milky Way (talk) 12:07, 28 January 2016 (UTC)[reply]
Not really. Humans are nearly unique among mammals in that our mostly hairless nature and bipedalism make our genitals fairly visible from the front. But many primates have far more visible genital areas, related to estrus signalling; just google /[primate of choice] estrus signal/. Baboons are particularly notable. But humans don't telegraph ovulation; see links below. SemanticMantis (talk) 15:13, 28 January 2016 (UTC)[reply]
Maybe because humans are sexy all the time it became impractical for most (but not all) societies to continue the nakedness. Other primates might be able to get away with very obvious estrus because losing all attraction for females without the rare red buttocks was rewarded by Darwin. (The promiscuous bonobo seems to have gotten around this by becoming so jaded by sex that they go back to eating etc. in seconds; if a bonobo loses patience and smacks an annoying child, the mother will retaliate, and then they will have sex for like 3 seconds and all is forgiven.) Sagittarian Milky Way (talk) 13:35, 29 January 2016 (UTC)[reply]
Yes, this is the sort of thing Desmond Morris gets in to. Also related to how, even naked, female human genitals are reduced and less visible, compared to many primate analogs. Fair warning though: many current scientists see such views of sociobiology as largely Just-so stories, though there is some slightly more rigorous contemporary research along these lines. Some slightly related info at Concealed_ovulation#Concealed_ovulation_as_a_side_effect_of_bipedalism. Here's a popular article about visibility of human genitals compared to other primates, [35], and the related scholarly article is here [36]. SemanticMantis (talk) 15:13, 28 January 2016 (UTC)[reply]
One of my personal favorites is the traditional penis gourd (koteka) of Papua New Guinea. Natives traditionally wore a dried gourd over the penis (tied to the scrotum and waist) and nothing else, which leaves even the testicles visible. Natives who lost their gourd for whatever reason would nonetheless hide themselves out of an apparent sense of modesty, even though the difference in what was visible with and without a penis gourd would be very small. Dragons flight (talk) 12:29, 28 January 2016 (UTC)[reply]
I don't know if you intended to generalize about PNG, but if you did, don't, and if you didn't, this is for other readers: New Guinea is so mountainous, and its tribes so often isolated from each other, that it has a surprisingly big fraction of the world's living languages; any assertion that's true about one NG tribe is likely to be false about another. —Tamfang (talk) 12:16, 29 January 2016 (UTC)[reply]

Just my own observation, but humans are fascinated with sex far beyond any other species. Culturally, we pay far more attention to sexuality than nearly any species. Any nature TV show on animals always focuses on mating and mating habits. Humans also seem to be unique (or at least a rarity) in that the female, even though she is rate-limited in reproduction, puts a lot of effort into being visually appealing (i.e. see the cosmetics industry, supermodels for clothing, shampoo, etc.), even though males aren't particularly choosy in who they will copulate with (birds are rate-limited too, but it's the male that generally works on appearance). Humans, it seems to me, are very close to bonobos, with a non-stop emphasis on sexual activity (heck, just read section 4.3 to see the over-sexed species called humans writing about an over-sexed chimpanzee - WP even throws in a Great Ape face-to-face sexual encounter gratis). Clothing seems like it is used to control that emphasis, especially concealing arousal but also causing it. I just finished watching a show on dogs/wolves. Since it was made by humans, a large segment was devoted to mating, dominance, hierarchy, and even showing sneaky mating by non-alpha wolves. Then I looked at my dog and it's obvious they don't give a shiat about human mating, but they are very interested in what we eat. I imagine documentaries made by dogs would be 40% food, 40% butts and 10% on mating behavior, regardless of the animal. People seem to make documentaries that are 80% on mating behavior and 20% on how mating behavior affects them. The emphasis on sex and sexuality seems wired in to how humans view the world. If dogs wrote Wikipedia, section 4.3 of the bonobo article would be what they taste like. --DHeyward (talk) 07:23, 29 January 2016 (UTC)[reply]

Scientific description of Anelasmocephalus hadzii (Martens, 1978)

Hello! Anelasmocephalus hadzii is a species of harvestman, described by someone called Martens (don't ask) in 1978, but in what paper did Martens describe this harvestman? I cannot find the answer in any obvious sources, but maybe you will have more luck? Megaraptor12345 (talk) 21:50, 27 January 2016 (UTC)[reply]

Google scholar finds 7 relevant records from Martens in 1978 [37], but I think there are only two publications, and the rest are spurious bibliographical records of the one rather famous work Spinnentiere, Arachnida: Weberknechte, Opiliones. People have been citing it as recently as the last few months (presumably some as a species authority), but I don't read any of the languages of the most recent citing works listed here [38].
This [39] Opiliones wiki says the book is great and describes many European species, and has a photo of the title page, but says the book is hard to find (surprise). Anyway, it seems very very likely that the species is described in the book published by Fischer Verlag, Jena, 1978. Either that, or Martens published a paper describing a harvestman species in 1978 without using the word "Opiliones" (unlikely) or Google doesn't know about it (possible, but still unlikely IMO). This is all just subjective evidence of course, if you need to be sure, I think you'll need to get a hard copy and someone who reads German. I'd imagine most research libraries could get you a copy through interlibrary loan. SemanticMantis (talk) 22:35, 27 January 2016 (UTC)[reply]

January 28

Acquired resistance to diseases such as Zika virus

When a person becomes infected with Zika virus, they get mildly sick, then they recover. Presumably this is because some response to the disease occurred in the body which took away the virus's ability to make the person sick. What is the nature of this immune response? How long does it last? In other words, if one got dengue fever, yellow fever, West Nile, or Zika (all somewhat related viruses per the article), could they catch the same strain of the same disease the next week? Or does the previous infection and recovery provide some immunity for some period of time? If the latter is so, then why can't a vaccine or immunization be devised? Has there been any discussion of women letting themselves get Zika when non-pregnant so that the next year they could have a baby who was not microcephalic despite exposure to mosquitoes carrying the virus? The worry seems to be so great that some Central American governments are advising women not to have babies for some unspecified period of time, during which time the governments pursue the dubious goal of eliminating the mosquito species which are vectors. In the US Midwest, governments have ineffectually spent a lot of money for years trying to get rid of the mosquitoes which transmit the related West Nile. Edison (talk) 04:32, 28 January 2016 (UTC)[reply]

Acquired immunity takes a while after infection to develop. As for your other questions, the general answer is "it depends". Some pathogens tend to not change very much. Smallpox is such a pathogen, which is why we were able to eradicate it: one vaccination and you're immune to all forms of it. Other pathogens vary widely. Influenza is a virus which changes epitopes very frequently, which is why there is no universal vaccine and they make a new vaccine every year. This is simply evolution in action: pathogens are constantly adapting to their hosts so they can reproduce and spread more effectively. Note also that creating vaccines is as much of an art as a science. There is a lot of trial-and-error that goes into vaccine development. And some pathogens are just not good targets for vaccination. Malaria is one example; the malarial parasites "hide" inside liver and blood cells most of the time, which shields them from the immune system. --71.119.131.184 (talk) 04:51, 28 January 2016 (UTC)[reply]
OK, then, to focus in: does Zika change its form like influenza, or is it pretty constant like smallpox? And how long does the acquired immunity last? Have individuals been infected by Zika multiple times? Edison (talk) 05:11, 28 January 2016 (UTC)[reply]
For the most part, no one knows for sure. Zika, though not exactly new, wasn't seen as very significant until recently. Consequently, it hasn't been studied very much. Zika is an RNA virus, like Influenza, and as a class RNA viruses are more prone to mutation than DNA viruses like smallpox. However, being an RNA virus is only part of the story in determining variability and resistance to vaccines. Polio is also caused by an RNA virus, but has been nearly eliminated through vaccination efforts. By contrast, flu vaccines provide only seasonal resistance, and efforts to develop an HIV vaccine have largely failed so far. Zika is a relative of dengue. There is currently no vaccine for dengue, but dengue vaccine development is considered "promising" [40]. Acquired immunity to dengue appears to have long-term persistence; however, there are four major variants of dengue virus, and acquired immunity apparently only provides protection against viruses of the same type. It is generally expected that acquired immunity to Zika will prove persistent, but it is too early to know for sure. Dragons flight (talk) 09:04, 28 January 2016 (UTC)[reply]
I think the reason that Zika hasn't been well-studied is not so much that it's "new"; it's that the recent large outbreak seems to be causing this horrible microcephaly birth defect. Zika outbreaks were noted as far back as 2007 but were not considered important. In the past, that connection to microcephaly either didn't happen or was sufficiently rare that nobody noticed it. Aside from the birth defect, Zika causes a very mild flu-like disease - and it's symptomless in 60% to 80% of those infected. There are literally dozens of similar viruses in those parts of the world that spread in similar ways and produce similar flu-like symptoms. A disease that innocuous simply doesn't gather a ton of research funding. It's still far from certain that it is the cause of the microcephaly outbreak - but now it's suddenly in the spotlight - so only now is science being brought in to deal with the problem. SteveBaker (talk) 13:32, 28 January 2016 (UTC)[reply]
I don't think it's exactly true that flu vaccines are only seasonal. Rather, the virus changes fast enough that no two seasons are identical. There is evidence that those exposed to the pandemic Spanish flu fared better during a recent bird flu outbreak than those who weren't exposed. The mortality data had pre/post Spanish flu step changes that led scientists to believe that older people who survived Spanish flu fared better than younger people who weren't born until after the pandemic. As for Zika, think of West Nile virus, except the transmission route is more difficult, as it is only human-to-human through the mosquito. West Nile virus has both human and avian hosts. --DHeyward (talk) 07:52, 29 January 2016 (UTC)[reply]
Optimal_virulence is highly relevant when considering acquired resistance and viral mutation, and also speaks to why Zika might not have been a big problem in the past. The Red Queen hypothesis also comes into play. One thing that is interesting is that acquired resistance of a mother can be passed on to a child via maternal immunity, AKA passive immunity. For some pathogens/diseases, e.g. measles, a vaccine given to the mother can confer a period of resistance to the foetus/infant child [41]. SemanticMantis (talk) 15:28, 28 January 2016 (UTC)[reply]
For "How long does it last?" specifically, the answer is that it varies widely, depending method of acquirement (e.g. infection or vaccine, type of vaccine, maternal immunity etc.), and on pathogen. Here's a paper that discusses the duration for pertussis [42]. In epidemiology, this refractory period is very important. SIR models make the assumption that immunity lasts forever, but SIRS models close the loop. So searching /SIRS model [disease]/ will get you to estimates of immunity persistence for diseases of interest. SemanticMantis (talk) 15:36, 28 January 2016 (UTC)[reply]
I was going to ask the same question myself a couple of times this week, but I think a lot is just not known. We only recently learned that Ebola can lurk in semen months after we thought the patient was cured, for example - so who would want to go out on TV and tell women to go ahead and catch Zika, get over it, get pregnant, and see if it can hang around and infect the fetus later on? We'd have to see data on women who had Zika and then became pregnant, and I'm not sure it's out yet, especially as there's no commercial test for it. This is just about the state of the art, such as it is, and it pretty much deliberately skirts the issue... part of the reason though is it's just not saying to do anything in particular, except try to figure out what happened for scientific reasons. Wnt (talk) 16:27, 28 January 2016 (UTC)[reply]
This is nearly pure OR (well, I have heard some RS mention immunity) so I haven't mentioned it yet, but one thing which hasn't really been discussed is the question of why the recent epidemics. Not just the South & Central American one, but the Pacific Island one as well. The virus has been present in parts of Africa and SEA for many years. Is it just because of the global advances in healthcare & monitoring of diseases, combined with the links to microcephaly and GBS? Perhaps with some degree of randomness/bad luck.

I'm not certain about Africa, but as far as I know the vectors are fairly widespread in SEA, hence dengue fever outbreaks aren't that uncommon. Has there been a mutation increasing virulence or the effects of infection? Or is it possible that Zika is actually fairly endemic to the parts of Africa and SEA where it's present, and quite a few people get it, perhaps many of them when they are young? But due to the lack of clustering and some degree of herd immunity, you don't get epidemics. And perhaps likewise the side effects are harder to detect, plus, as mentioned, people may be more likely to get it before pregnancy (at least for microcephaly; not sure about GBS). Whereas in the places with recent outbreaks, because the virus is new few people had immunity, so it's now spreading through the population quickly, hence the epidemic.

An interesting point is that, if I understood our articles correctly when I read them a few days ago, it sounds like some researchers think the epidemic in the Americas came from the Pacific Island epidemic. I'm not sure if they did any genetic-evolutionary examination. If they didn't, then while I presume epidemiologists know what they are doing, I wonder how well they've considered the possibility that, despite the lack of any known outbreaks, the frequency in SEA and Africa could be high enough for those regions to be the source.

Nil Einne (talk) 11:09, 29 January 2016 (UTC)[reply]

East Asia and Stroke

How come East Asian countries are more affected by stroke compared to the rest of the world[43]? Is it genetics, or is it diet-related?

Has there been any studies on second-generation, third-generation, etc, East-Asian immigrants to western countries to see if the change in diet reduces the rate of incidence of stroke? Johnson&Johnson&Son (talk) 05:05, 28 January 2016 (UTC)[reply]

One possible reason is if they are less affected by other diseases. Everyone has to die from something, so if they have fewer deaths from cancer, heart disease, and diabetes, that would automatically mean more deaths from everything else, including strokes. Put another way, many of the same people who had strokes would have died earlier, of another disease, had they lived in the US and eaten a US diet, never having lived long enough to have a stroke. StuRat (talk) 05:38, 28 January 2016 (UTC)[reply]
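One piece of that point can be made with simple arithmetic (invented numbers, purely for illustration): suppose two countries have the identical stroke death rate of 50 per 100,000 per year, but country A also loses 200 per 100,000 each to heart disease and cancer plus 50 to other causes, while country B loses only 100 to each plus 50 to other causes. Then stroke accounts for

\frac{50}{50+200+200+50} = 10\% \quad\text{of deaths in A, versus}\quad \frac{50}{50+100+100+50} \approx 17\% \quad\text{in B,}

even though the underlying stroke risk is the same. So a map of the fraction of deaths due to stroke can look very different from a map of age-standardized stroke rates.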
Stroke rates are significantly increased in countries with bad air pollution, such as China and Indonesia. It is still predominantly a disease of older people, but accentuated by the poor air quality in many parts of Asia. India, which has worse air than China, has a younger population with higher rates of death due to diarrhea and other acquired diseases. If people are dying young due to infectious diseases, then they aren't dying of stroke in old age, which is one of the differences. Once one has access to decent health care and sanitation to reduce the rates of acquired diseases, then environmental and lifestyle factors become important. Only after one controls for the large differences in environment and lifestyle is one likely to see the impact of genetics. Dragons flight (talk) 08:45, 28 January 2016 (UTC)[reply]
That explains China, but AFAIK Japan and South Korea have excellent air quality. Johnson&Johnson&Son (talk) 09:14, 28 January 2016 (UTC)[reply]
South Korea also has an air quality problem partly due to wind transport from China, though not as bad as China. Japan does have pretty good air quality, but they may be something of a special case. Their population has the third highest average age in the world, and significantly higher than most of Asia, which is going to make diseases typical of old age more common in Japan. Dragons flight (talk) 09:25, 28 January 2016 (UTC)[reply]

I think this is where methodology and statistics are needed. It would be hard to distinguish ischemic stroke death from heart disease death, since they play off each other. I think the biggest takeaway is that stroke and heart disease are first-world robbers; other diseases are more significant in non-first-world countries. A person with clogged arteries has atrial fibrillation, which leads to a blood clot that travels to the brain, and the person dies - what does that mean for the stat? Heart disease or ischemic stroke? They are too closely related to make a meaningful guess. "He's dead Jim" is as good as any.

What if we could eradicate the bad mosquitoes?

If we could and did eradicate harmful mosquitoes, would we end up regretting it? Do they have useful predators, or do they provide other benefits we need? (I do understand that there are thousands of mosquito species.) Hayttom (talk) 18:40, 28 January 2016 (UTC)[reply]

Skeeters are definitely a part of the food chain, for dragonflies, birds, etc. DDT was tried, and it was effective in its way, but it had bad side effects on the environment. ←Baseball Bugs What's up, Doc? carrots→ 18:45, 28 January 2016 (UTC)[reply]
A more subtle approach is being tried, since the females need blood only after mating. The strain Aedes aegypti OX513A is short-lived and offspring will die before biting age. See the Oxitec solution. Dbfirs 19:02, 28 January 2016 (UTC)[reply]
Being part of the food chain isn't itself a problem. The question is, are they an irreplaceable part of the food chain ? That is, would other insects/animals fill their niche (except for the blood-sucking part) ? Or, even if nothing does, are there any predators of mosquitoes which are solely dependent on them ? If not, then hopefully those predators would survive, although their numbers might decrease. StuRat (talk) 19:10, 28 January 2016 (UTC)[reply]
Actually, the scaremongering about DDT killed way more people, through malaria and other diseases, than the chemical ever could, and the environmental damage is manageable. Fear of DDT and fear of nuclear power are my "bad science coupled with overwhelming PR = dangerous policy" pet peeves. --DHeyward (talk) 08:30, 29 January 2016 (UTC)[reply]
The impacts of such an eradication cannot be predicted with much confidence. But some scientists think that it would be OK to eradicate mosquitoes [44] [45], meaning with very little alteration to non-mosquito biodiversity or to overall ecosystem function and ecosystem services. Here's the related article in Nature [46]. Basically, the thinking goes like this: ecological niches can be filled by other organisms with similar functional types. As competition from mosquitoes went down, other freshwater insect larvae would do better, and the fish that eat little bugs are just as happy to munch on other larvae. There might be more problems in the Arctic, and many plant species would lose their current pollinators (these are both covered in the Nature blurb).
Rather than use broad-spectrum pesticides like DDT, people are currently researching the ethics, legality, and science of various other techniques that are far more selective and in principle can kill only mosquitoes. See Mosquito_control for an overview. Sterile_insect_techniques, combined with gene drives (nice FAQ from Harvard here [47]), could (theoretically, in principle) eradicate mosquitoes and only mosquitoes from Earth. Another possibility is a biological control agent such as a virus that kills mosquitoes but does not infect other insects [48]. As I said at the start, this is still uncertain. Some experts think there wouldn't be too many problems, while other experts would still urge caution. Here's a video of another entomologist [49] on the issue of mosquito eradication. Ecology is a very tough field, and is in some ways still in its infancy, since there's very little we can predict with much confidence about any eradication or introduction of species. SemanticMantis (talk) 19:21, 28 January 2016 (UTC)[reply]
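To make the gene-drive idea concrete: under random mating, if heterozygotes pass on the drive allele with probability d instead of the Mendelian 1/2, the allele frequency p follows p' = p*p + 2p(1-p)d each generation. Here is a minimal Python sketch of that recursion; the 95% transmission rate and 1% starting frequency are assumptions for illustration, not data from any real release.

 # Toy gene-drive spread: random mating, no fitness cost (all numbers assumed).
 def next_freq(p, d=0.95):
     # homozygotes contribute p^2; heterozygotes, 2p(1-p), transmit the
     # drive allele with probability d instead of the Mendelian 1/2
     return p * p + 2 * p * (1 - p) * d

 p = 0.01  # drive allele released at 1% of the gene pool (assumed)
 for gen in range(1, 16):
     p = next_freq(p)
     print(f"generation {gen:2d}: drive allele at {p:.1%}")

With d = 0.5 the recursion reduces to ordinary Mendelian inheritance and p stays put; with d near 1 the allele sweeps toward fixation within a dozen or so generations, which is what makes gene drives plausible as an eradication tool.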
The most harmful mosquitoes are, in fact, sort of "domestic animals". For instance, Aedes aegypti is uniquely adapted to inhabit human settlements and is not found far from them. Its eradication would have no impact on wildlife. See this. Ruslik_Zero 20:13, 28 January 2016 (UTC)[reply]
That's a very good point. Here's a scholarly review article on the domestication/ evolution of human commensalism in A. aegypti [50]. SemanticMantis (talk) 22:39, 28 January 2016 (UTC)[reply]

Here is an interesting article on the subject: Would it be wrong to eradicate mosquitoes? Richerman (talk) 22:40, 28 January 2016 (UTC)[reply]

Physiology

Why are potassium ion channels slow to open and close compared to sodium ion channels? — Preceding unsigned comment added by 117.102.36.38 (talk) 18:55, 28 January 2016 (UTC)[reply]

Are you asking what function that serves or what physical mechanism causes it to happen? Looie496 (talk) 18:58, 28 January 2016 (UTC)[reply]

Mounting with adhesive

When mounting a solid light-weight cube with adhesive, is it more likely to stay attached if adhered to a wall or to the ceiling (assuming the same amount of adhesive either way)? My instinct is that the ceiling might well provide a more durable connection because torquing against the wall might aid in peeling the object off the wall, but I'm not really sure. Dragons flight (talk) 19:08, 28 January 2016 (UTC)[reply]

Wow! This is a tough one to answer without using calculus! I think you're going to have to integrate the normal force at all points of contact to determine the net torque, which is coupled right back in as another force, superposed on top of the normal force, to provide a net normal force at all points. You're going to need a model for the rigidity of the cube, an approximation for its fulcrum (presumably the lower edge, but maybe not!), and a valid model for the net force of adhesion provided by the adhesive, as a function of normal force, contact area, and so on. If you really want to make your life difficult, you'll have to account for shear force, too!
To the extent that you can neglect these horrible details, the adhesive on the wall is working mainly against shear, while in the ceiling case it works directly against the weight in tension. If the adhesive is uniform and behaves isotropically (which is probably a terrible approximation), then you can solve for which scenario gives you better margin against adhesive failure. You need a detailed, robust, and valid model of the adhesive to answer this subset of the question. (That involves a bunch of awful tensor math!)
I bet you're going to have to use an engineering approximation, or empirical data, to get a useful answer. Adhesion is one of those unsung areas of physics where the math is arguably harder than that of general relativity, but it doesn't get popularized in science museums.
Besides, we have an easy alternative: we can simply test the behavior for a specific adhesive!
Nimur (talk) 19:12, 28 January 2016 (UTC)[reply]
I don't need to write a dissertation on the topic. Experimental evidence would be fine. However, it feels like the kind of experiment that has certainly been done before, so I was hoping there might already be a general answer that one or the other orientation is typically more resistant to failure. Ultimately, given the low mass and large contact area, I rather suspect that either orientation would be fine. Though, since I can't know that for sure, I would prefer to go with whichever option gives the greatest likelihood of success. Dragons flight (talk) 06:34, 29 January 2016 (UTC)[reply]
Sure, but the details matter. Cyanoacrylate will probably perform worse than epoxy in shear (e.g. on the wall); but might perform better on the ceiling; and so on. Can you at least name the type of glue or adhesive? Nimur (talk) 15:24, 29 January 2016 (UTC)[reply]
(ec) I agree with your reasoning. Also, mounting on the wall may result in the object slowly sliding down the wall, especially in hot weather, when certain adhesives become more liquid. But really, I wouldn't use adhesive alone for either case. The adhesive might serve a purpose, say to stop rattling during vibrations, but really shouldn't be used alone to support weight as described. StuRat (talk) 19:15, 28 January 2016 (UTC)[reply]
StuRat, you know the composite materials that constitute important control surfaces on a modern commercial airplane like the Boeing 777 are glued on, right? And the outer layer of the space shuttle was also glued on, because welded metal wasn't sturdy enough to withstand ablation during reentry? Adhesive bonds can be stronger than riveted or welded metal, if the engineers and scientists design them correctly. Nimur (talk) 19:30, 28 January 2016 (UTC)[reply]
I doubt if a NASA adhesive is the one being suggested here. Also, even if the adhesive held, whatever it's attached to, like wallpaper, might not hold. There are reasons we don't glue shelves onto our walls and light fixtures onto the ceiling. For any substantial weight, you want nails/screws/bolts into the studs, joists, etc. (Although there are very light duty adhesive patches for hanging light objects on walls.) StuRat (talk) 03:53, 29 January 2016 (UTC)[reply]
Ordinarily I would reach for screws or nails, but the particular wall / ceiling in question is one which I am not allowed to put any holes in. Dragons flight (talk) 06:34, 29 January 2016 (UTC)[reply]
As noted below, adhesives can also damage a wall or ceiling. Instead of attaching it permanently, how about just putting the object on a high shelf? I had a similar issue with a smoke alarm, and ended up putting it on top of a tall halogen torchère I no longer use. StuRat (talk) 18:46, 29 January 2016 (UTC)[reply]
Nimur's right, the math of adhesion is crazy hard stuff. But smart, motivated capitalists have done a lot of the research and empirical testing for you! Check out these 3M adhesive hook thingies [51]; that model is rated for a 3 lb load per hook, but they go up to at least 7.5 lb per hook [52], maybe higher. They are fairly magical IMO. The design routes forces in clever ways, plus the adhesive is very strong in the directions those forces apply --and they remove cleanly when you pull the adhesive in a different direction! So unless "light-weight" means more than ~15-20 lb (in which case you'd be approaching the limits of the wall surface, and should find a stud), I think you can do this with wall-mounted 3M hooks, either using them as brackets or hooking them to eyelets in the cube. SemanticMantis (talk) 19:39, 28 January 2016 (UTC)[reply]

Well, there's one obvious point: depending on the size of the item and how high it's positioned, mounting it on the ceiling may protect it from being bumped into by clumsy people passing by, whereas if it were on the wall they might knock it off. On the other hand, the ceiling may carry more vibrations from the floor or roof above, depending on what people or equipment (if any) are located there, and if so this may contribute to weakening the adhesive. --76.69.45.64 (talk) 23:45, 28 January 2016 (UTC)[reply]

I think you are correct. Instead of figuring out one case, just imagine the cube growing in volume but not mass. In the ceiling case, the bonded surface area increases with the cube's size while the load on the adhesive is just the weight pulling normal to the ceiling. The growing cube on the wall, however, has a moment that increases with size, in addition to the shear load, because the center of mass moves away from the wall. --DHeyward (talk) 08:47, 29 January 2016 (UTC)[reply]
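DHeyward's scaling argument can be put in rough numbers under crude beam-style assumptions: treat the bond as a rigid square patch, loaded in uniform tension on the ceiling, and on the wall in uniform shear plus a linear peel-stress distribution resisting the overturning moment. This is a back-of-envelope sketch only (real adhesives fail in far messier ways, as Nimur says), and the mass and size below are invented.

 # Adhesive loading for a cube of side a and mass m (numbers assumed).
 g = 9.81    # m/s^2
 m = 0.5     # kg, cube mass (assumed)
 a = 0.10    # m, cube side (assumed)

 A = a * a                          # bonded area, m^2
 ceiling_tension = m * g / A        # uniform pull-off stress on the ceiling, Pa

 wall_shear = m * g / A             # average shear stress on the wall, Pa
 M = m * g * (a / 2)                # moment from the offset center of mass, N*m
 I = a**4 / 12                      # second moment of area of the square patch, m^4
 wall_peel_peak = M * (a / 2) / I   # peak peel stress at the top edge, Pa (= 3mg/a^2)

 print(f"ceiling: {ceiling_tension:.0f} Pa tension")
 print(f"wall: {wall_shear:.0f} Pa shear plus {wall_peel_peak:.0f} Pa peak peel")

Under these assumptions the wall's peak peel stress is three times the ceiling's uniform tension, with the shear on top of that, which supports the original instinct that peeling off the wall is the likelier failure mode. The caveat is that many adhesives are comparatively weak in straight tension, so an actual test is still the right call.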
If this is a practical question rather than a theoretical one, could you glue it in the angle between a wall and the ceiling or even in a top corner of the room? Thincat (talk) 09:41, 29 January 2016 (UTC)[reply]
Agreed, doubling or tripling the surface area where adhesive can be used would certainly help. However, this requires that the corner be completely square, and they rarely are. StuRat (talk) 18:50, 29 January 2016 (UTC)[reply]
I had a tenant who used that double-stick foam tape on the wall to attach a small mirror, which wrecked the wall worse than any nail would have. The glue made a mess, and scraping off the foam was hard. As for adhesive: I glued a large 1/4-inch-thick mirror to the wall with 3M glue and it came off, broke into a thousand pieces; I wrote to 3M and got the money for a new mirror. If you glue to the ceiling, at least you can brace the object against the ceiling better than something against the wall. Raquel Baranow (talk) 15:45, 29 January 2016 (UTC)[reply]
I don't understand that last part. How do you "brace the object against the ceiling"? StuRat (talk) 18:42, 29 January 2016 (UTC)[reply]

January 29

Challenger Deep vs. mantle and magma

If you traveled through the Earth's crust horizontally (keeping with the curvature of the Earth) from the bottom of Challenger Deep, would you remain within the oceanic crust, or would you eventually hit magma (outside of a mid-ocean ridge or other volcanic event) or even the mantle layer? — Preceding unsigned comment added by 67.42.179.140 (talk) 09:47, 29 January 2016 (UTC)[reply]

Take a look at this cross section: you will see that you don't have to go very far away from the trench at 10.9 km depth to get below the oceanic crust and into the mantle. Further to the west you would be into the asthenosphere in the back-arc basin, which is not 'magma', just mantle hot enough that it flows. At the spreading centre in the back-arc basin there will be a series of magma chambers that you would almost certainly run into. Mikenorton (talk) 10:18, 29 January 2016 (UTC)[reply]
Thank you sincerely! 67.42.179.140 (talk) 23:38, 29 January 2016 (UTC)[reply]

Dateline as applied to historical dates

I have frequently wondered whether historical dates are given in US time and date, especially for events occurring west of the International Date Line. Example: we are told a time (presumably local) and a date for the bombing of Hiroshima, Japan: August 6, 1945. But being far west of the date line, it was a day later in Japan. So if the bombing occurred on August 6 US date, then Hiroshima was bombed on the morning of August 7, 1945. So many accounts of events in the Pacific do not correct for this obvious modifier of date and time. Why not? And was Hiroshima actually leveled on August 7, when it was still August 6 here? SteveSmS 10:30, 29 January 2016 (UTC) — Preceding unsigned comment added by Smshepard51 (talkcontribs)

It depends on the circumstances, but for most events which are tied to a single country, local time is used (international events sometimes use UTC). The Hiroshima bombing happened on August 6th 8:15 AM Japan time, which would have been August 5th for most of the United States (except Hawaii) - the official transcript of Truman's announcement of the bombing says "On August 6, while returning from the Potsdam Conference aboard the U.S.S. Augusta, the President was handed a message from Secretary Stimson informing him that the bomb had been dropped at 7:15 p.m. on August 5.", which would be Eastern Time. Smurrayinchester 13:01, 29 January 2016 (UTC)[reply]
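As a quick sanity check on that conversion, Python's zoneinfo module (whose tz database includes historical offsets such as US "war time") gives the same answer; mapping Truman's Eastern time onto America/New_York is my assumption.

 from datetime import datetime
 from zoneinfo import ZoneInfo  # Python 3.9+, tz database required

 # 8:15 AM, August 6, 1945 in Hiroshima (Japan Standard Time, UTC+9)
 bombing = datetime(1945, 8, 6, 8, 15, tzinfo=ZoneInfo("Asia/Tokyo"))
 print(bombing.astimezone(ZoneInfo("America/New_York")))
 # prints 1945-08-05 19:15:00-04:00, i.e. 7:15 p.m. Eastern War Time, August 5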
(A weirder example of this practice: Western media talks about Russia's October Revolution, even though according to Western calendars, it happened in November - at that time, Russia's Julian calendar was about two weeks behind the more widely used Gregorian calendar). Smurrayinchester 13:06, 29 January 2016 (UTC)[reply]
Also in World War I, the Armistice took effect at 11 am French and British time, but in Canada it's commemorated in Remembrance Day ceremonies at 11 am local time. It wasn't even 11 am in Germany: the Western Front was effectively a time zone boundary, and it was noon on the other side. Since I learned this, I've liked to imagine German soldiers taking prisoners and telling them to "drop your weapons and set your watches ahead 1 hour!" --76.69.45.64 (talk) 15:16, 29 January 2016 (UTC)[reply]
And, of course, the French Revolutionary calendar, which decimalized the months, was used to mark key events during the Revolution. Pro-monarchy revisionist historians have back-filled our textbooks with historically equivalent Gregorian calendar dates, so we know (for example) when Robespierre was guillotined. As any staunch Republican can tell you, 18 Brumaire was the day that freedom ended, or began, or something. It's very difficult for me to keep dates and Republican ideologies straight.
Equally problematic are those historical events that took place in space: for example, during the Apollo 11 mission, humans first set foot upon the moon either July 20 or July 21, depending on your time zone. The mission used Houston time for many flight purposes, so we often commemorate July 20. On the moon, both July 20 and July 21 occur on the same lunar day - in fact, half of the month of July is the same lunar day - but regrettably, few humans use the lunar day as the defining astronomical event for their calendar. Some great distant time in the future, when errors of a few thousand hours accumulate, I suspect the unix epoch will be defined to begin at the exact instant humans first walked on the moon, with error of just a few thousand hours or so.
Nimur (talk) 16:32, 29 January 2016 (UTC)[reply]
And, of course, the French had the Thermidorian Reaction, which, like the October Revolution, only makes sense on the local calendar. The rest of us would have called it the July Revolution or July Reaction, but historiographically speaking, there's another coup d'état in July, and we call that the July Revolution. --Jayron32 17:13, 29 January 2016 (UTC)[reply]

Gas giants and fast winds

When I watch astronomy documentaries that feature gas giants, I always hear about the extreme wind speeds and how devastating they would be on Earth. To me, this does not seem to be a fair comparison - wind speed is a relative quantity. Doesn't the danger just depend on how rapidly you navigate between different air currents? Could you comfortably fly at minimum true airspeed in a non-turbulent, 800 km per hour air current on Jupiter? Plasmic Physics (talk) 10:30, 29 January 2016 (UTC)[reply]

Well yes, the devastation only happens on Earth because we have (1) a ground that is not moving in sync with the winds and (2) things that are fixed to the ground. If you don't have those, there is nothing to "devastate".--74.101.111.23 (talk) 14:46, 29 January 2016 (UTC)[reply]
Right, and if Jupiter had a surface like Earth, that would slow the winds down dramatically. Similarly, on Earth, you tend to get much higher winds aloft, where there is no friction with the ground. Assuming your spaceship moved with the wind, the only devastation caused by it would be when you hit an eddy between two different bands (or spots), with winds moving in opposite directions. Then you would be in for some major turbulence. I wonder if we could make a ship that could survive that. StuRat (talk) 18:08, 29 January 2016 (UTC)[reply]
Jupiter's atmosphere is not "air" as we know it. If the pressure did not kill you immediately, you would suffocate trying to breathe the 89% hydrogen plus 10% helium mixture, and there is no way that the reported crystals of frozen ammonia hitting at 800 km/h could be good for your flying machine. It is a fair comparison to report that only the severest cyclone gusts recorded on Earth ever exceed 300-400 km/h, making comfortable flying impossible. AllBestFaith (talk) 16:34, 29 January 2016 (UTC)[reply]
I doubt anyone would consider an open cockpit design! Something like a bathyscaphe with robust cooling mechanisms might be feasible, which would have to enter the Jovian atmosphere in such a way as to match the atmospheric motions initially encountered as closely as possible. An existing terrestrial (if you know what I mean) submarine design, however, has been shown by Randall Munroe in his What if? blog to be inappropriate, as it would merely sink, melt and then be crushed. {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 19:10, 29 January 2016 (UTC)[reply]
  • The OP did make it clear that the craft is travelling in a non-turbulent flow along with the atmosphere around it. In that case being "struck" by crystals of ammonia and so forth is irrelevant, since they will be moving along with the craft at the same speed and direction. μηδείς (talk) 04:13, 30 January 2016 (UTC)[reply]

How do I generate radio waves?

The wording looks ambiguous. I do not mean to say how do I use my body to generate radio waves. I mean to say how do I make an apparatus that generates radio waves? 140.254.70.165 (talk) 12:48, 29 January 2016 (UTC)[reply]

Your body is always radiating radio waves, as a result of black-body radiation. A radio transmitter is the human-made apparatus for generating them deliberately. Graeme Bartlett (talk) 13:04, 29 January 2016 (UTC)[reply]
A search on phrases such as build your own radio transmitter will yield instructions for projects ranging from Build a very simple AM radio transmitter (which includes some discussion of theory) to much more complicated ones. -- ToE 14:20, 29 January 2016 (UTC)[reply]
It's entirely easy to generate radio waves - almost everything involving electricity does so as a by-product of whatever it is supposed to do. The trick is to produce the rather exacting kinds of radio waves that are useful for some particular purpose - such as transmitting music to people's car stereos or TV signals to their homes - using them to measure the distance and position of an airplane in flight - or the speed of a car suspected of driving too fast - transmitting them from a bunch of satellites to help people know which streets they are driving along...you name it!
So to provide you with a useful answer, it would help if you could narrow down the purpose of doing this.
SteveBaker (talk) 15:41, 29 January 2016 (UTC)[reply]
To get deeper into the technology we have Radio transmitter design and the Wikibooks Electronics chapter on Transmitter design. The essential physics is that of launching an electromagnetic wave that comprises oscillating electric and magnetic fields. Heinrich Hertz first conclusively proved that these two fields can sustain each other in the form of a radio wave that can travel through a vacuum. Mathematically speaking, a propagating radio wave is a solution to James Clerk Maxwell's set of partial differential equations, his monumental achievement being this Classical field theory that unifies the phenomena of light, radiant heat and radio waves. The article Dipole antenna is invaluable for understanding how to launch a radio wave properly. Less properly, we are surrounded by apparatus that generates radio waves unintentionally: anything that makes an electric spark, including arc welders, car ignition circuits and commutated electric motors, emits untuned radio interference called "EMI" or "RFI". Virtually every modern radio receiver is also a weak radio transmitter due to its local oscillator radiation. Today you may read about but not operate a Spark-gap transmitter. AllBestFaith (talk) 16:03, 29 January 2016 (UTC)[reply]
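As a toy illustration of the sizing arithmetic behind the Dipole antenna article: the element is cut to half the wavelength, λ = c/f. The 100 MHz figure below is an assumed example frequency, chosen only because it sits in the FM broadcast band.

 # Half-wave dipole sizing (example frequency assumed, not advice to transmit!)
 c = 299_792_458   # speed of light, m/s
 f = 100e6         # Hz, 100 MHz example frequency

 wavelength = c / f            # ~3.00 m
 dipole = wavelength / 2       # ideal free-space half-wave length, ~1.50 m
 practical = 0.95 * dipole     # real elements are cut ~5% short (end effect)
 print(f"lambda {wavelength:.2f} m, half-wave {dipole:.2f} m, cut to ~{practical:.2f} m")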
In addition to modern electronic transmitters, other means have been used to create radio waves. These may not comply with present communications regulations. Radio waves have been generated and used for commercial purposes in the distant past with electromechanical induction-coil (spark) transmitters (Marconi and others), with high-speed alternators (E. F. W. Alexanderson) and with electric arcs (Poulsen). An electromechanical doorbell buzzer generates radio waves. Many electrical devices such as light switches, aquarium heaters, and electric blankets generate unintended radio waves when they turn on and off, and the radio waves are capable of causing objectionable radio interference several houses away. Edison (talk) 18:48, 29 January 2016 (UTC)[reply]
Watch it! You need a licence to operate a transmitter, and the transmitter must comply with government-mandated technical standards. Governments operate monitoring stations that may pick you up. In some countries there may be heavy fines or jail. 58.167.247.93 (talk) 23:39, 29 January 2016 (UTC)[reply]
In the U.S., at least, you don't need a license to operate a very low-power, home-built transmitter. Most likely the transmitter described in ToE's link would qualify. Detailed regulations are here. You can also operate certain low-powered commercially available devices such as a walkie-talkie on Family Radio Service frequencies without having a license. Shock Brigade Harvester Boris (talk) 01:46, 30 January 2016 (UTC)[reply]
Some of the advice above is given in good faith by textbook readers who have never built a transmitter and wouldn't know how. I suggest that the OP contact a radio amateur or join his local amateur radio society (in the US, the ARRL); these exist in almost every country. There are important considerations of the law, your entitlement to transmit, and safety involved. Akld guy (talk) 23:52, 29 January 2016 (UTC)[reply]
  • Briefly short-circuit a nine-volt battery with a length of wire. You've just made a radio transmitter. You can pick up the sound of the radio-frequency waves generated by the sparks on any AM radio, at just about any frequency you tune to. --Jayron32 04:05, 30 January 2016 (UTC)[reply]

Family authoritarianism

A person who exhibits a tendency towards obedience in the realm of politics is called an authoritarian. What is the equivalent term for someone who exhibits such a predisposition towards members of their family within the context of psychology? I prefer a medical or psychiatric term. The closest I could come up with is eleutherophobe. 92.10.224.67 (talk) 12:57, 29 January 2016 (UTC)[reply]

In studies of social animals one finds the alpha male. A disparaging term for humans is the pseudo-psychological Control freak. AllBestFaith (talk) 16:15, 29 January 2016 (UTC)[reply]
Note: the term "alpha male" is deprecated in many, if not most scientific contexts. Our article needs serious improvement and updating. Last time the term came up, I explained some of my reasoning and gave several references, anyone interested can check it out here [53]. SemanticMantis (talk) 16:36, 29 January 2016 (UTC)[reply]
The problem is that you can't go to the doctor and seek a diagnosis or ask about symptoms for being a disciplinarian. Hence I tried to stress the medical scope of this predisposition in individuals for whom it is particularly acute. It's also possible that such a word is non-existent. Dunno really. 92.10.224.67 (talk) 17:10, 29 January 2016 (UTC)[reply]
Narcissistic_personality_disorder may be relevant. Of course this is not a term that means someone who expects obedience out of family members, but some people with this disorder do indeed expect obedience from family members. SemanticMantis (talk) 20:54, 29 January 2016 (UTC)[reply]
  • Depending on the expectations and age of the OP, perhaps the term parenting is enough to qualify for the proper terminology. --Jayron32 04:04, 30 January 2016 (UTC)[reply]

Sewing machine oil

What is it made from and what properties does it have for suitability in sewing machines?--213.205.192.13 (talk) 00:48, 30 January 2016 (UTC)[reply]

One special consideration is that it shouldn't stain, in case it gets on the fabric. A highly volatile oil that will dissipate on its own would also be better than one which would require detergent to remove, as some fabrics, like felt, might be too fragile for that. Also, it shouldn't stink, so petroleum-based oils might not be the best choice. All of this would mean it would need to be applied frequently, though. StuRat (talk) 00:59, 30 January 2016 (UTC)[reply]
See here. --Jayron32 04:00, 30 January 2016 (UTC)[reply]

The odds against us being here

Some lifeform billions of years ago had eggs. Most of the eggs died. We are the descendants of the egg that survived, right?

So, is there any way to guess at the odds against that lifeform's DNA surviving to this day? Or the other way around, is there any way to take a ballpark guess at the odds against a person making it this far? Anna Frodesiak (talk) 01:27, 30 January 2016 (UTC)[reply]

Did you ask pretty much the same question sometime in the last year? If so, wherever it's archived might contain some answers. ←Baseball Bugs What's up, Doc? carrots→ 01:32, 30 January 2016 (UTC)[reply]
Hi Baseball Bugs. Did I? I don't remember asking. I do have a terrible memory. I am very sorry if I did. Anna Frodesiak (talk) 03:06, 30 January 2016 (UTC)[reply]
The earliest lifeforms didn't lay eggs. As for the odds of a fragment of DNA surviving lots of generations unchanged, that depends on the importance of that fragment. If it was absolutely critical to survival, then only organisms with it would survive, and would therefore pass it down. On the other hand, if the DNA fragment is of no value whatsoever, then it probably won't last long, with random mutations deleting or altering it (see genetic drift versus natural selection). StuRat (talk) 01:35, 30 January 2016 (UTC)[reply]
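To illustrate the drift-versus-selection point numerically, here is a toy Wright–Fisher simulation of a selectively neutral variant; the population size and starting frequency are made-up illustrative numbers.

 import random

 # Neutral Wright-Fisher model, haploid for simplicity (all numbers assumed).
 def drift(N=500, p=0.5):
     count = int(N * p)   # copies of the variant in the population
     gens = 0
     while 0 < count < N:
         # each of the N offspring copies a uniformly random parent's allele
         count = sum(random.random() < count / N for _ in range(N))
         gens += 1
     return gens, count == N

 gens, fixed = drift()
 print(f"neutral variant {'fixed' if fixed else 'lost'} after {gens} generations")

Run it repeatedly: by chance alone the variant always ends up either lost or fixed, with fixation probability equal to its starting frequency, whereas a variant under strong positive selection would be retained far more reliably.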
Hi StuRat. I don't really mean DNA, but more ancestry, as in parents' parents' parents' all the way back. Anna Frodesiak (talk) 03:56, 30 January 2016 (UTC)[reply]
  • The odds of a fact being the case are exactly 1, and we cannot replay the movie. That being said, replaying the movie from the beginning is the theme of Stephen Jay Gould's Wonderful Life. He interestingly pointed out that if the dog had not been caught, one single Labrador retriever would likely have driven the kiwi extinct. μηδείς (talk) 04:09, 30 January 2016 (UTC)[reply]
Thank you, μηδείς. I will try to get that book. I am sorry I keep asking questions that are so hard to answer. I think the problem is that I don't quite know how to ask them properly. Best, Anna Frodesiak (talk) 04:22, 30 January 2016 (UTC)[reply]
What!?!? Never. As the official bête noire of the RD, I can certify that you have never asked an offending question. (I do still find it hard to believe you are not related to the Frodesiaks of Passyunk Avenue. But that's a side issue.) μηδείς (talk) 04:57, 30 January 2016 (UTC)[reply]
Not being able to "replay the movie" is, of course, a variant of the anthropic principle, in the sense that the universe that produced the current state of things is the only possible universe that could have produced it. Historically, this was also part of the conjectures of Gottfried Leibniz and "Pangloss" in Candide; see Best of all possible worlds: the notion that history could only have turned out the way it did, and couldn't possibly have been better or different. --Jayron32 05:03, 30 January 2016 (UTC)[reply]
I don't quite agree with your interpretation, Jayron32, but I do agree that anyone interested in the topic should be familiar with the items to which you have linked. The bottom line is that any such "replaying" would be imaginary. Physically, there is no way to do it, Doctor Who and The City on the Edge of Forever notwithstanding. μηδείς (talk) 06:09, 30 January 2016 (UTC)[reply]
If one wants to take it to a physics, rather than a metaphysics, issue, concepts such as the arrow of time, causality, entropy, the Second law of thermodynamics, and Minkowski space all require the fundamental concept of "we can't ever change the past so the present is the only possible present we could ever have." Of course, there are some interpretations of quantum mechanics which break principles of causality, and all principles of modern physics fly in the face of the clockwork universe such that all behavior at the fundamental particle level is stochastic and thus entirely unpredictable. So, if you want the physics answer, depending on your perspective, the odds are either "exactly 1 in 1" (that is purely deterministic) or "exactly 1 in infinity" (or never repeatable and thus purely unique). Interestingly, the answer is likely only one or the other (always or never) rather than any odds in between those two. Or in simpler terms, to paraphrase Murray Gell-Mann (among others, probably) "That which is not forbidden is mandatory". --Jayron32 06:39, 30 January 2016 (UTC)[reply]
Thank you so much, μηδείς. I do not mean to be a nuisance here. I promise to get better at phrasing the question. Perhaps this is better:
If a mother had two kids and only one survived, then couldn't the kid say "I had a 50% chance of being the one who did not make it"?
And I'm not asking just for fun. I really want to know. At dinner last night, people were saying it is important to have kids to keep the genetic line going. Someone said that a person's mom and her mom etc. beat the odds time and time again, especially when that mom was a squirrel way back when (bigger litters). Our parents are always the ones who made it. Anna Frodesiak (talk) 06:29, 30 January 2016 (UTC)[reply]
Or, the kid could say "I had a 100% chance of survival, because I didn't get hit by the car that killed my sibling" (or disease, or whatever). Stats are merely personal perspective under the illusion of mathematical indeterminacy. See Lies, damned lies, and statistics. --Jayron32 06:41, 30 January 2016 (UTC)[reply]
It would be a 50% chance only if it were a prerequisite that one sibling would die before reproducing. Just because one died out of two doesn't make it a 50% chance. This is more of a maths question than a science question. Think of a pack of cards and draw one. You have a 100% chance of picking a card, a 25% chance it is a diamond, and about a 2% chance it's the ace of diamonds. So from the point of view of the end result it's 2%, but every card was 2%, so it's not that surprising a card was drawn. Each individual alive now had an almost vanishing probability of being here (think how many sperm a man produces in his lifetime), but if you add all the probabilities up they sum to 1, so your answer depends on your viewpoint. At the start of life, you being you was extremely unlikely, but it was extremely likely that somebody would be here (planet-wide extinction excepted). As for the other way around, that may be workable: how likely was it that Y-chromosomal Adam would turn out to be Y-chromosomal Adam, and not one of the other humans alive at the time? I don't know, but it is maybe answerable. Dja1979 (talk) 06:56, 30 January 2016 (UTC)[reply]
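One can also put toy numbers on "beating the odds time and time again" with a Galton–Watson branching process; the offspring distribution below is invented purely for illustration.

 import random

 # Each individual leaves 0-3 surviving offspring with equal probability
 # (mean 1.5: a made-up, mildly growing population).
 def line_survives(generations=50):
     population = 1
     for _ in range(generations):
         population = sum(random.randint(0, 3) for _ in range(population))
         if population == 0:
             return False                       # the founder's line died out
         population = min(population, 10_000)   # cap to keep the toy cheap
     return True

 trials = 1000
 alive = sum(line_survives() for _ in range(trials))
 print(f"{alive / trials:.0%} of founder lines survive 50 generations")

For this particular distribution, branching-process theory gives an extinction probability of √2 - 1 ≈ 41%, almost all of it incurred in the first few generations; a line that is still here after billions of years has, by construction, cleared that early bottleneck every single time.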
Before you pick, you have a 50% chance of picking either of two cards. After you pick, you have a 100% chance of having picked the card you picked, because it is in your hand right now and the other isn't. Stats are only useful in deciding the likelihood of future events. Events which have happened cannot be undone, and thus have already occurred to a known certainty. Schrödinger's cat is genuinely alive or genuinely dead even if you don't look, regardless of whatever the chance was that the particle would have decayed by then. --Jayron32 07:06, 30 January 2016 (UTC)[reply]
Or as Richard Feynman put it: "You know, the most amazing thing happened to me tonight. I was coming here, on the way to the lecture, and I came in through the parking lot. And you won't believe what happened. I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing!" --71.119.131.184 (talk) 08:32, 30 January 2016 (UTC)[reply]

Is electricity made up of electromagnetic or mechanical waves?

I know the difference between electromagnetic and mechanical waves (see [54], [55]); so since sound is a mechanical wave, and AC power can be heard, that would seem to indicate that electricity is a mechanical wave as well. But I thought that it was electromagnetic. Which is it? Eman235/talk 01:58, 30 January 2016 (UTC)[reply]

It's not a mechanical wave, and the sound isn't directly from the waves. Read your link to see the explanation (for an analogy, light could also create sound, if it warms something up that then expands or contracts). Also note the dual wave/particle model of electricity (as well as light, etc.): the electron is the particle representation of a quantum of electricity. StuRat (talk) 02:04, 30 January 2016 (UTC)[reply]
Oh, I see. Electricity, under some circumstances, can cause a mechanical wave, but it isn't one itself. I'm assuming that AC is electromagnetic waves; but what is DC? Eman235/talk 02:34, 30 January 2016 (UTC)[reply]
See Direct current--213.205.192.13 (talk) 03:52, 30 January 2016 (UTC)[reply]
(edit conflict) AC is not electromagnetic waves, but AC can cause electromagnetic waves at radio frequencies. Read Larmor formula for some background on the mathematics: "When any charged particle (such as an electron, a proton, or an ion) accelerates, it radiates away energy in the form of electromagnetic waves." Alternating current consists of electrons vibrating back and forth in place; to change direction they constantly accelerate and decelerate, thus generating a regular electromagnetic wave. This is the source of what is called mains hum, which is what happens when this wave is picked up by audio equipment. DC doesn't cause waves, but it does generate an electromagnetic field; see Ampère's circuital law, or Faraday's law of induction, which is just the inverse of that. --Jayron32 03:55, 30 January 2016 (UTC)[reply]
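To put a number on that, here is a minimal back-of-envelope evaluation of the Larmor formula, P = q²a²/(6πε₀c³), for a single electron oscillating at mains frequency; the 1 mm amplitude is invented for illustration.

 import math

 e = 1.602e-19      # C, electron charge
 eps0 = 8.854e-12   # F/m, vacuum permittivity
 c = 2.998e8        # m/s, speed of light

 f = 60.0           # Hz, mains frequency
 x0 = 1e-3          # m, assumed oscillation amplitude
 a_peak = x0 * (2 * math.pi * f) ** 2   # peak acceleration, m/s^2

 P = e**2 * a_peak**2 / (6 * math.pi * eps0 * c**3)   # Larmor power, W
 print(f"peak radiated power per electron: {P:.1e} W")   # ~1e-49 W

That is utterly negligible, consistent with mains wiring being a terrible antenna at 50/60 Hz (the wavelength is thousands of kilometres); most hum pickup in audio gear is near-field induction rather than far-field radiation.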
Hmmm Jayron, I would have said that DC in a stationary conductor can only create a magnetic field (not an electromagnetic field). But maybe you are now going to prove me wrong! :)--213.205.192.13 (talk) 04:06, 30 January 2016 (UTC)[reply]
All electric fields are magnetic fields, and vice versa. You can, of course, describe such a field using only the magnetic force, or using only the electric force, but in reality the two forces coexist at right angles to each other. Electromagnetic_field#Dynamics discusses the historical separation of the two "fields" into distinct force fields, and the work of James Clerk Maxwell in marrying the two into a single concept; see also Maxwell's equations, which describe the interrelationship between electric fields and magnetic fields in a single "electromagnetic field". The modern field of quantum electrodynamics preserves this marriage of the two forces, and applies principles of quantum theory to it. --Jayron32 04:55, 30 January 2016 (UTC)[reply]