
Wikipedia:Reference desk/Science

::Yes, that tap water mention in our article shocked me, too. Is it really safe to inject that, or get it under your skin by any other mechanism ? (I realize some is absorbed through the skin when you take a bath, but even that can cause cell damage given enough time.) [[User:StuRat|StuRat]] ([[User talk:StuRat|talk]]) 21:00, 27 January 2016 (UTC)

:I would call attention again to the question of cost (which SemanticMantis did bring up). I'm pretty sure an iontophoresis machine costs more than a needle and syringe. And even though the main machine is probably reusable, I imagine the part applied to the skin needs to be single-use for hygiene reasons. If the issue is simply the patient disliking injections, there are probably cheaper measures, like applying topical anesthetic before the injection. There's also been increasing attention given to [[intradermal injection]]s, which require a much smaller needle and thus reduce discomfort. --[[Special:Contributions/71.119.131.184|71.119.131.184]] ([[User talk:71.119.131.184|talk]]) 05:01, 28 January 2016 (UTC)


== Why do humans around the world cover the genitals? ==


Welcome to the science section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


January 24

"freeform" electromagnetic coils

1. What's the proper name for these "freeform" electromagnetic coils[1]?

2. How are they made? Roughly which one of the following guesses is the closest?

A) wind one layer, spray some adhesives, and then wind another layer on top

B) Adhesives are added continuously as the winding continues

C) The magnet wire is pre-coated with an adhesive. 731Butai (talk) 04:36, 24 January 2016 (UTC)[reply]

If there were a coilform with a central cylindrical core and endpieces, the coil could be wound as shown by rotating the coilform while the supply bobbin moved slowly back and forth via gearing to lay down straight layers. The endpieces could then be removed and the wound coil pushed off the central cylinder. The wire would tend to maintain its form, but clearly it would deform if stressed. It could be dipped in varnish to cement it together into a rigid form. Edison (talk) 05:05, 24 January 2016 (UTC)[reply]

Is the picture[2] one that you uploaded yourself, or one that you have information about? It is not obvious from the picture alone that this is intended to be an electromagnetic coil, as it has no terminals, no adhesive can be seen, and the wire could be uninsulated (not magnet wire). A spool of plain wire delivered from a factory could look like this. AllBestFaith (talk) 16:15, 24 January 2016 (UTC)[reply]

They are called 'self supporting' coils.--178.102.247.97 (talk) 17:41, 26 January 2016 (UTC)[reply]

Analogue live TV

Before digital "film", how was live television done? My mental image of television, pre-digital, was that the scene was recorded by a videocamera and microphone, the sounds modulated onto electromagnetic waves as with radio, the film developed and then the images somehow modulated onto electromagnetic waves, with chronological conjunction between the two processes enforced to ensure that the video and sound were synchronised. This doesn't seem to fit with live TV, however, as there's no time to develop anything; how would it be possible to broadcast anything that wasn't a recording? Today, it's easy: you can basically use the same techniques as Skype, but that wasn't possible in the 1950s. Nyttend (talk) 05:40, 24 January 2016 (UTC)[reply]

Television cameras before the 1980s used video camera tubes. Basically they worked like a CRT television in reverse. In a CRT display, one or more tubes scan their beams across the screen to produce the picture. In tube cameras, the camera focuses incoming light onto a target and one or more tubes scan the target. --71.119.131.184 (talk) 05:50, 24 January 2016 (UTC)[reply]
Note also that for quite a long time, a lot of prerecorded material on TV went directly onto videotape, not film. In fact, sometimes the content may have gone from videotape to film.

And BTW, home camcorders weren't that uncommon before everything went digital. America's Funniest Home Videos, for example, predates digital video being particularly common, and some YouTube videos also look like they were probably recorded on analog tape.

Nil Einne (talk) 07:07, 24 January 2016 (UTC)[reply]

Let's put it this way. You (Nyttend) have the mental image "that the scene was recorded by a videocamera". But that's really two things: converting the scene into an electronic signal, and recording the electronic signal. In a live broadcast, the electronic signal would be used directly (more or less) to modulate electromagnetic waves just as the audio signal is used in radio. (For live color TV it would also be necessary to convert the R/G/B signals from the camera into the applicable encoding, i.e. NTSC, PAL, or SECAM.) --76.69.45.64 (talk) 07:22, 24 January 2016 (UTC)[reply]
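To make that modulation step concrete, here is a toy numpy sketch of a live analog chain's final stage: amplitude-modulating a carrier with the camera's output voltage as it arrives, with no recording stage anywhere. All the numbers are made up for illustration; real analog TV used vestigial-sideband AM for the picture and a separate FM carrier for the sound, but plain AM shows the idea.

```python
import numpy as np

# Toy parameters -- chosen for illustration, not real broadcast values.
fs = 1_000_000       # sample rate, Hz
f_carrier = 100_000  # "RF" carrier frequency, Hz
f_video = 1_000      # stand-in for the camera's varying output, Hz
t = np.arange(0, 0.01, 1.0 / fs)

# The camera tube's output: a continuously varying voltage.
# There is no film and no tape anywhere in this chain.
video = 0.5 * (1.0 + np.sin(2 * np.pi * f_video * t))  # normalized 0..1

# Live broadcast = amplitude-modulate the carrier with that voltage
# as it arrives, sample by sample, in real time.
rf = (0.2 + 0.8 * video) * np.cos(2 * np.pi * f_carrier * t)
```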
Incidentally, the process Nyttend describes is John Logie Baird's "Intermediate Film Technique", used for a few months in 1937 in the UK, but obsolete since then. It introduced a delay of about 1 minute in a live broadcast - see this article. Tevildo (talk) 09:03, 24 January 2016 (UTC)[reply]
Germany transmitted film-intermediate TV earlier. [3] During the 1936 Summer Olympics experiments were conducted with both an analog electronic camera and with a mobile TV truck. On the roof of the truck was a film camera. The film was developed in the truck and then run through the transmitting apparatus. AllBestFaith (talk) 17:09, 24 January 2016 (UTC)[reply]

Live TV originally went "straight to air", with no intermediate recording and playback steps. The signals from the microphones and camera were combined (usually through vision mixer and audio mixer desks) into a composite signal which was distributed to the transmitter site, modulated onto an RF carrier, and broadcast. All of these were real-time analog processes, with no delay except for that inherent in signal processing and propagation. The Anome (talk) 10:09, 24 January 2016 (UTC)[reply]

The Anome has hit on it. Just like a phone call, there is no need for a TV camera and transmitter system to make a permanent record of anything: if you have an outside broadcast unit you can just turn on a camera, transmit that signal to a control centre and then put it out on the air, without at any stage 'recording' it permanently. Much early TV was broadcast live without any copy of it being kept. Before the days of cheap magnetic recording systems, if you needed a permanent record of it, you'd often literally just film a television set with a film camera. Similarly, most analogue phone calls have never (one assumes) been permanently recorded onto anything. They just go from one phone to another through the wires. Blythwood (talk) 13:02, 24 January 2016 (UTC)[reply]
Thanks to everyone! I had no idea that it was possible for the TV camera to do anything except impress each scene on a separate film still; I didn't know that they used CRTs to send imagery to a transmitter. I'd imagined that the first cameras of any sort that used neither film nor U-matic videotape were digital cameras. Nyttend (talk) 15:26, 24 January 2016 (UTC)[reply]
U-Matic was really quite a late development. There's a lengthy history of different types of analog video format, starting, I believe, with two-inch quad, and I believe digital videotape made its first commercially successful appearance with the advent of D-1 recording. -- The Anome (talk) 23:58, 24 January 2016 (UTC)[reply]
I'm fairly confused why the OP is conflating video tape and film. The distinction here is IMO important because while it's easy to think that a camera exposing light onto film would not be able to modulate an EM signal, if your camera is already modulating a signal onto video tape it's harder not to see the possibility of bypassing the video tape step completely. Of course when you think about it more, even with film, you have to modulate the signal somehow. If you are doing this by shining light through the processed film and then using the resulting image to modulate a signal, why can't you just skip the step of going from light to film to light?

In any case, note that beyond broadcast TV, it wasn't that uncommon for CCTV systems to be display only, which was I believe the initial form of CCTV per our article.

Nil Einne (talk) 00:32, 25 January 2016 (UTC)[reply]

See also Kinescope. ←Baseball Bugs What's up, Doc? carrots→ 15:19, 24 January 2016 (UTC)[reply]
The modern equipment for conversion from movie film to electronic TV signal (for taping or immediate transmission) is a Telecine. AllBestFaith (talk) 17:21, 24 January 2016 (UTC)[reply]
@Nyttend: - as a related point, early video recording on magnetic tape was so expensive that often even when TV was recorded down onto tape or film, the tapes were soon wiped and reused - TV executives often seem genuinely to have thought that this junk they were producing was surely of no lasting interest to anybody. The result is that much TV made as late as the mid-1970s, even in prosperous and stable countries and even by state broadcasters with a high opinion of themselves, was erased. So yes, a lot of early TV was sent through the TV broadcast system like a phone call, with no recording device of any kind set up to make a record of it for posterity. Blythwood (talk) 10:52, 25 January 2016 (UTC)[reply]
As an interesting example, when the BBC broadcast the first UK TV play of George Orwell's Nineteen Eighty-four on 12 December and 16 December 1954, the actors (and backing orchestra) had to perform the play live twice, as the first performance had not been recorded (except for small segments, such as outdoor scenes.) The second performance was, however, recorded for archiving, perhaps because its importance had become obvious. {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 15:07, 25 January 2016 (UTC)[reply]
Fans of the BBC Doctor Who series are particularly aware of this. Being obsessive collector-ish people - they'd like to have every episode ever made, nicely recorded. But since the series started in 1963, many of the episodes were recorded on magnetic tape and subsequently wiped. Interestingly, many of the episodes that have been recovered were recorded by people watching the show on broadcast TV who pointed 8mm film cameras at the screen - others were duplicated and shipped to other countries that wished to show the series, and the master copies subsequently erased. Fans of the show periodically find a recording of a long-lost episode by digging around in piles of junk in places as far afield as Nigeria and Dubai! Often the sound track of the tape has been over-dubbed with something else and someone's home audio-cassette bootleg has been synced to video from elsewhere to cobble together a low-quality version. It's hard to imagine the cost of a recording tape being even a tiny fraction of the cost of making a 30 minute TV show - but evidently that was the case, because such a vast number of TV shows from that era are either lost forever - or would require the sheer determination of an army of Dr Who fanatics to track down the remaining scenes and put them back together. SteveBaker (talk) 17:29, 25 January 2016 (UTC)[reply]

Considering starting a new project on here

Hello all! As a spare time project, I'm looking to spend some time in the next few months messing around with R, its graphics packages in particular. I'd be interested in combining this with my contributions to Wikipedia (do two obsessions together!) - does anyone have suggestions for any publicly available molecular biosciences data that might be interesting to do something with? Preferably something I can't screw up too badly! Blythwood (talk) 11:43, 24 January 2016 (UTC)[reply]

A while back I got to messing with Module:ImportProtein, which is in Lua (see the talk page for an example), and like so many things... put it aside for "a while". If you're interested, I'm not reserving the copyright. :) I wasn't aware of any direct R integration with Wikipedia, though it would create interesting possibilities! Wnt (talk) 14:25, 24 January 2016 (UTC)[reply]
Some aspects of it might be too close to original research, which is fine for other places, but not welcome on Wikipedia. On the other hand, you could research numerical data already on Wikipedia and plot it in a more visual way. --Scicurious (talk) 15:40, 24 January 2016 (UTC)[reply]
@Scicurious: - sure, but I'm more interested in creating graphs as example images, and would be very happy using made-up data that's relevant to a real situation. So 'here is a data visualisation of type X to show what it looks like' - I would be happy to put a disclaimer explaining that the data is fictional. Blythwood (talk) 23:39, 26 January 2016 (UTC)[reply]

Did the universe start with only neutrons?

If hydrogen fusion created all other atoms from hydrogen, and hydrogen is a proton and an electron, and a fission reaction is the decay of a neutron into a proton and an electron, then did the universe start with only neutrons? — Preceding unsigned comment added by 86.153.69.165 (talkcontribs)

Our current theories imply that there were subatomic particles before neutrons existed; and it seems like neutrons and protons both started emerging roughly around the same time. Have a read through Chronology of the universe, Quark–gluon plasma, and related articles.
I think your insight is good, in that you're looking for reverse reactions (like beta decay) and trying to conserve charge; but you are missing some important complications that arise when we study sub-nuclear particles in great detail. We now know that there are lots of valid ways that we can break protons and neutrons apart if we use very high energies. Present theories for the early universe imply that our heavy particles were created in the hadron epoch, when protons and neutrons coalesced from quarks. Before that time, the energy density was so high that we barely understand the rules that govern quark combination: what we do know is that there were no protons or neutrons yet.
Nimur (talk) 15:13, 24 January 2016 (UTC)[reply]
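For reference, the charge-balanced reactions the question is reaching for can be written out explicitly. Free-neutron beta decay (mean lifetime roughly 15 minutes) and its reverse, electron capture, are:

```latex
n \;\to\; p + e^- + \bar{\nu}_e
\qquad\qquad
p + e^- \;\to\; n + \nu_e
```

Both conserve charge (0 on the left, +1 − 1 on the right); note the (anti)neutrino, which the question's summary of "decay of a neutron into a proton and electron" omits.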

Reverse polarity Schottky diode

On Digikey in the diode section[4], what does the "schottky, reverse polarity" diode type stand for? I understand what a regular Schottky diode is, but am not sure what a reverse polarity Schottky diode is. Johnson&Johnson&Son (talk) 15:05, 24 January 2016 (UTC)[reply]

Could it be a Schottky diode for creating a reverse polarity protection?--Scicurious (talk) 15:47, 24 January 2016 (UTC)[reply]
The OP has linked to a diode selection guide where one chooses filter(s) to limit the selection. Applying the "Schottky Reverse Polarity" filter reduces the number of manufacturers (to 2) and introduces selection menus for reverse leakage and capacitance when reverse biased. They are all Schottky diodes and this is just the guide designer's way to offer a detailed reverse specification if needed. AllBestFaith (talk) 16:42, 24 January 2016 (UTC)[reply]
The reference is to the packaging. The standard D-67 package has the cathode as the base and the anode as the lug, and the standard DO-4 package has the stud as the anode and the terminal as the cathode. The "reverse polarity" diodes listed have the anode as the base for the D-67 packages, and the cathode as the stud for the DO-4 packages. Tevildo (talk) 18:44, 25 January 2016 (UTC)[reply]

Why do they make pipes out of lead?

^Topic ScienceApe (talk) 16:59, 24 January 2016 (UTC)[reply]

"Lead piping was used because of its unique ability to resist pinhole leaks, while being soft enough to form into shapes that deliver water most efficiently." ScienceApe, you constantly asking questions where the answer is the first Google hit on the question is starting to pass over the line separating "good faith curiosity" from "trolling". ‑ Iridescent 17:05, 24 January 2016 (UTC)[reply]
I don't appreciate being accused of trolling. If I wanted to troll, I would ask a bunch of nonsensical questions using multiple sock puppet accounts so you didn't know they were from the same person. ScienceApe (talk) 17:19, 24 January 2016 (UTC)[reply]
Maybe you are doing it, but have not been caught yet. Anyway, it's sometimes difficult to see a purpose in your questions. Maybe you should perform a simple search for a question before you ask it here. Otherwise you look more like a science ape than a science-curious person. --Scicurious (talk) 17:50, 24 January 2016 (UTC)[reply]
I don't need to prove I'm not a troll, but you're free to believe whatever you like, however I'm not going to stop asking questions if I'm curious about something. However feel free not to respond to my questions. ScienceApe (talk) 20:41, 24 January 2016 (UTC)[reply]
Of course you don't need to prove anything. However, if you are disruptive on wikipedia including the RD by continually asking pointless questions, you should expect to be restricted. Even if you don't reach that level, you should expect to be ignored, even when you ask okay questions if you continue to ask pointless questions. Nil Einne (talk) 00:15, 25 January 2016 (UTC)[reply]
Ok well then you can report me and try to get me blocked, but your threats are not going to dissuade me from asking questions. ScienceApe (talk) 02:19, 25 January 2016 (UTC)[reply]
A real life reference librarian who frequently scolded patrons for not just looking stuff up themselves would soon be fired. If it makes someone that angry when someone asks a question that is easy to find an answer for, then the angry librarian should find other areas of Wikipedia in which to work. It is disruptive to scold people who ask questions when it is not clear they are trolling, as it is not clear here. Edison (talk) 20:49, 24 January 2016 (UTC)[reply]
Thanks I appreciate that. If nothing else, this should have been kept to talk pages and off the reference desk. ScienceApe (talk) 02:19, 25 January 2016 (UTC)[reply]
I agree that you are entirely entitled to ask questions that can be answered by typing the question into google - and calling you a troll is hardly WP:AGF. We've even had questioners here for whom Wikipedia was the only website they were allowed to access! However, it might be nice to try Google first. It's different when someone is a first-time user, and we all occasionally get the wrong form of words and google doesn't help - but as a regular question-asker, it would be a courtesy to the volunteers here to at least give it a shot before posting. SteveBaker (talk) 14:48, 25 January 2016 (UTC)[reply]
We should compare with some of the alternatives available at the time:
1) Iron pipes: These can rust. While a small amount of added iron in the diet may actually be healthy, in antiquity people didn't know that iron pipes were healthier than lead. Also, the orange or brown water it produces doesn't look or taste good. And eventually the pipes can rust through. (There are water treatment methods to prevent rust, but they wouldn't have had those in antiquity, either.)
2) Ceramic pipes: These can crack, due to seismic activity, freeze-thaw cycles, tree roots, or subsiding of the ground around them. Therefore, they tend to be leaky.
3) Copper pipes: These can corrode to produce green sludge and eventually fail from that corrosion. Similar to iron, a bit of added copper in the diet may actually be healthy, but they didn't know that in antiquity.
So, if you didn't know about lead poisoning, lead pipes seemed like a good option (or gold pipes, if you happened to be filthy rich). StuRat (talk) 17:39, 24 January 2016 (UTC)[reply]
Wooden water pipes were popular in London for mains water supply in the 16th to 18th centuries, but generally connected to lead pipes in people's houses. I believe that they were still being replaced in the 1960s. IIRC they were generally made from elm wood, which is resistant to rot when not exposed to air. Alansplodge (talk) 18:02, 24 January 2016 (UTC)[reply]
Yes, I forgot about wood. Bamboo can be used, too, since it's naturally hollow, although some type of sealant may be needed at the joints. StuRat (talk) 18:51, 24 January 2016 (UTC)[reply]
My gut feeling is that this would have to do with metal prices - alas, that article doesn't contain even a current table, let alone historical data. It would be worth updating with information from various sites like this. But my impression is that lead is a cheap metal because it is not usable for very many things, and a pipe buried in the ground is one case where the weight and the softness don't count against it. Alas, even that didn't pan out in the end... Wnt (talk) 20:38, 24 January 2016 (UTC)[reply]
Lead is usable for a lot of things. If only it wasn't poisonous. --71.119.131.184 (talk) 23:51, 24 January 2016 (UTC)[reply]
Lead works reasonably well for large diameter pipes (ratio of surface area to volume of water being smaller) - providing that the water is flowing quickly through the pipes and doesn't contain chemicals that corrode it. It follows that lead pipes are a reasonable solution for the mains supply (large diameter, water constantly moving) - but a terrible choice for houses (small diameter, water standing still for a dozen or more hours at a time).
In the news right now, the children of Flint, MI are suffering the consequences of using corrosive chemicals in lead pipes that had functioned acceptably (without those chemicals) for decades. Their problem was that E. coli in their water supply had to be treated aggressively - and that treatment caused the lead in the pipes to dissolve into the water much more easily.
Obviously, with modern plastics that are cheap, more or less completely inert, and which will probably last for centuries, we have the technology so that we don't have to suffer any of the issues that come with lead pipes anymore. However, the cost of digging up streets to replace them is more than many communities can bear. Flint was desperately short of money - which is why they switched water supplies in the first place - and replacing those old lead pipes was evidently not an option.
SteveBaker (talk) 14:48, 25 January 2016 (UTC)[reply]
They normally put chemicals in the water to coat the pipes to prevent corrosion, but when Flint switched water supplies they stopped adding those critical chemicals, and once the old ones wore off the inside of the pipes, corrosion began. StuRat (talk) 20:57, 25 January 2016 (UTC)[reply]

Where can I find the coefficient of friction between nickel and polyethylene? Actually the coefficient of friction between nickel and any common plastic would be fine.

I found this site[5] that has the data for nickel and Teflon, but Teflon is a little too difficult for me to get my hands on. Johnson&Johnson&Son (talk) 17:10, 24 January 2016 (UTC)[reply]

The coefficient of friction of plastics is usually measured against polished steel. The coefficient of friction of PTFE (polytetrafluoroethylene, brand name Teflon) is 0.05 to 0.10. Polyethylene can be supplied in various grades, for which this supplier quotes coefficients of friction of 0.18 to 0.22. That is for steel. This table gives some comparison with nickel. AllBestFaith (talk) 17:55, 24 January 2016 (UTC)[reply]
Thanks. But the problem is I don't have a steel part; I have a part that's coated in nickel. Is there a way to derive or approximate the nickel-plastic CoF given the steel-plastic CoF? The engineershandbook.com[6] link you gave has nickel-glass, nickel-nickel, and nickel-steel CoF listed, but unfortunately it doesn't have any for nickel-plastic. Johnson&Johnson&Son (talk) 02:40, 25 January 2016 (UTC)[reply]
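As a sanity check on what a coefficient in that range means in practice, here is the standard Coulomb dry-friction model, with a 10 N normal load picked purely for illustration:

```latex
F_f = \mu N
\quad\Rightarrow\quad
F_f \approx (0.18\ \text{to}\ 0.22) \times 10\,\mathrm{N} = 1.8\ \text{to}\ 2.2\,\mathrm{N}
```

Note that μ is an empirical property of a specific material pair and surface finish, which is why a nickel-on-polyethylene value cannot be rigorously derived from the steel-on-polyethylene one; the steel figure is at best a rough starting estimate.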
Look for firearms information. Nickel coated bolt carriers and nickel-teflon triggers are common. --DHeyward (talk) 05:14, 25 January 2016 (UTC)[reply]

January 25

Why do Northeast Megalopolis snowstorm records look like this instead of showing a northern bias?

Boston: 27.6 inches (2003)

New York: 26.9 inches (2006) (27.9" (2016) if the site became the nearest airport in the mid-20th century like the others)

Philly: 31.0 inches (1996)

Baltimore: 29.2 inches (2016) (Baltimore suggests this might just be the record since the airport existed (1950))

Washington: 28.0 inches (1922)

Washington Dulles Intl, Virginia: 32.4 inches (2010) despite this weather station only starting in the 1960s.

Does the distribution of water vapor by latitude have anything to do with it?

How sure are scientists that climate change will make single snowstorm records easier to break in the future? Sagittarian Milky Way (talk) 00:15, 25 January 2016 (UTC)[reply]

Let me speak to the relationship between how far north (or south, in the Southern hemisphere) you are and the amount of snow you get. The closer to the poles, the lower the temperatures. At low temperatures, less moisture evaporates from lakes, rivers, and oceans, especially once they freeze over. This makes for less snowfall. Therefore, there is very little snow at the South Pole, but, since it rarely melts, you see thousands of years worth of snowfall on the ground at once.
Now, this doesn't necessarily affect the snowfall amounts in this particular storm, as many other factors and local conditions matter more, but it is a general trend. In fact, many of the places with the heaviest snowfalls historically are places which get lake effect snow, where air moves over warm water, picking up water vapor, then deposits it once it moves over colder land. Buffalo, New York is one such spot, with Lake Erie providing the (relatively) warm water. StuRat (talk) 00:49, 25 January 2016 (UTC)[reply]
Locations move. Equipment and methods change. And the forecast has uncertainty [7]. There is no reason to believe any of it is related to climate change as climate change has still remained unmeasurable as an observation of weather. --DHeyward (talk) 05:54, 25 January 2016 (UTC)[reply]
One might term this ocean-effect snow, as it involved water vapor being swept off the relatively warm ocean surface (warmer this year because it's a strong El Nino year) and meeting an Arctic air mass over the continent. It's a classic (several authorities are saying textbook) example of explosive cyclogenesis and it's a product of North American geography. The warm water of the Gulf of Mexico and consequent moisture streams and the presence of the Gulf Stream favor large snowfalls relatively far south. However, these snowfalls tend to be intense rather than frequent, so total snowfall over a season will be higher farther north where it's colder and stays colder for a longer time, but where there is less access to subtropical moisture. Topography helps too - the Appalachian Mountains lift moisture to colder altitudes and it rains or snows out, leaving places like Pittsburgh relatively dry in these kinds of storms. Some of the same geographic elements give rise to Tornado Alley in the spring. Warm moist marine air meets cold dry continental air, and boom. Acroterion (talk) 15:11, 25 January 2016 (UTC)[reply]
Actually we call it a Nor'easter  :) --DHeyward (talk) 16:09, 25 January 2016 (UTC)[reply]
A rare case of the media not emphasizing an unexpected new name for a common thing, making it seem like it's new. Ocean effect snow! (said in a deep, booming, echoing voice) Does this mean that if this (admittedly high-sigma) weather pattern had happened 3 weeks ago we could've had even more snow? The sea was warmer then and Manhattan air reached 11°F. Sagittarian Milky Way (talk) 16:40, 25 January 2016 (UTC)[reply]
DHeyward is correct, it's a textbook nor'easter, and the "ocean effect snow" is something I just coined to compare it against lake-effect snow, which happens on a smaller scale without needing a storm system. As for more snow, I devoutly hope not. I've been shoveling three feet of snow for the past two days and finally have it so the cars are free, we can take out the trash, get mail and let the dogs out in the back yard without losing them entirely. I think this storm system turned out to be as efficient as it could be. Normally as a nor'easter forms, the air temperature goes up as the wind starts to come from the ocean (i.e., from the northeast). Often that means that it turns to rain as the storm gets wound up. However, if there is a blocking high over the Canadian Maritimes the cold air can't be eroded by the storm and it stays cold enough to snow. Acroterion (talk) 18:00, 25 January 2016 (UTC)[reply]
  • See here. The prevailing explanation is that increased ocean temperatures cause more moisture to enter the atmosphere, increasing the amount of moisture available for large storm systems (hurricanes, nor'easters etc.), thus making them more intense and more frequent. --Jayron32 18:09, 25 January 2016 (UTC)[reply]

Stupid physics question (How can we see things more distant than the age of the universe?)

According to Wikipedia, the age of the universe is 13.8 billion years. The origin of the universe was a single point which resulted in a big bang. The size of the universe is 91 billion light years. Nothing can go faster than the speed of light. In 13.8 billion years the size of the universe should be 13.8 billion light years, right? Brian Everlasting (talk) 00:34, 25 January 2016 (UTC)[reply]

It's actually a very common question. The answer is that the space between large-scale structures in the universe expanded by a process called Inflation (cosmology). Dbfirs 00:38, 25 January 2016 (UTC)[reply]
This is wrong. The boundary of the visible universe is only affected by expansion since the CMBR last scattering time, around 380,000 years after the big bang. It is unrelated to inflation, which ended 10^−something seconds after the big bang. -- BenRG (talk) 01:41, 25 January 2016 (UTC)[reply]
Yes, of course! Light didn't start out until after inflation stopped, so it is entirely Metric expansion of space (and that is speeding up). Dbfirs 09:56, 25 January 2016 (UTC)[reply]
(EC)see Cosmic Inflation. The part that makes it super confusing is that you would be correct IF the universe actually "big banged" INTO pre-existing space, but it didn't, SPACE it self formed along with the big bang. Vespine (talk) 00:40, 25 January 2016 (UTC)[reply]
There's also the complication of Metric expansion of space but this is minor by comparison. Dbfirs 00:44, 25 January 2016 (UTC)[reply]
The key point is that relativity says nothing can travel faster than the speed of light (in a vacuum) through spacetime. It says nothing about how quickly spacetime itself can move. This distinction is crucial for understanding things like inflation, but such nuance tends to be omitted from pop science descriptions, which tend to say almost-true-but-subtly-misleading things like "nothing can travel faster than light". --71.119.131.184 (talk) 00:55, 25 January 2016 (UTC)[reply]
Also in that vein, the size of the observable universe is 91 billion light-years. The size of the universe as a whole may be infinite: see shape of the universe. --71.119.131.184 (talk) 00:57, 25 January 2016 (UTC)[reply]
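To put the "how can we see 91 billion light years" arithmetic in one formula (standard notation, with the scale factor a(t) normalized so that a(t₀) = 1 today), the present-day proper distance to the farthest light we can see is the comoving distance the light has covered:

```latex
D_{\text{now}} \;=\; c \int_{t_{\text{emit}}}^{t_0} \frac{dt}{a(t)} \;\approx\; 46\ \text{Gly}
```

Because a(t) was much smaller for most of the trip, the integral comes to about 46 billion light years of radius (the ~91 billion light years in the question is the diameter), even though the light itself travelled for less than 13.8 billion years.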
Someone deleted my time dilation and Theory of Relativity comment, but I don't see any changes in the View History tab. Willminator (talk) 01:08, 25 January 2016 (UTC)[reply]
What I was trying to say in my deleted comment is that time is relative according to the Theory of Relativity. Gravity affects time. For example, if someone were to approach a black hole, from the observer on Earth looking up, it would look like the person has slowed down for thousands of years, but from the person's point of view, only seconds would have passed. The light of a star that's let's say, 1000 light years away from Earth doesn't necessarily have to travel 1000 years to Earth from the perspective of an observer on Earth. Willminator (talk) 01:24, 25 January 2016 (UTC)[reply]
The image on the right shows how this works geometrically. Later times are at the top. The brown line (on the left) is Earth, the yellow line (on the right) is a distant quasar, the diagonal red line is the path of light from the quasar to Earth, and the orange line is the distance to the quasar now. You can verify by counting grid lines (which represent 1 billion (light) years each) that the quasar is 28 billion light years away along the orange line, though the light took only about 13 billion years to reach us. -- BenRG (talk) 01:49, 25 January 2016 (UTC)[reply]
It's kind of funny though. I mean, the quasar is expected to be 28 billion ly away, but we don't know it didn't sprout a star drive and is coming on right behind the light ray. And in the frame of reference of the light (or someone arbitrarily close to lightspeed) no time at all has passed, and the distance is zero! (We're all just foreshortened a lot) Of the two, the frame of the lightspeed traveller is at least one we could be in, while the other distance is a spacelike estimate, so surely it is more meaningful to say it is 0 ly away than 28, right?  :) Honestly though, what confuses me greatly with that diagram is what happens if something moves away from us. What exactly does it look like when a galaxy, after space cleared, has simply moved far enough away that by the time we look at it its light is almost infinitely redshifted and unable to reach us at all? (this is related to something else I don't understand, which is why the lines for us and the quasar diverge at such a sharp angle on that figure, rather than each moving down a line of "longitude" on that horn thingy. Wnt (talk) 15:51, 25 January 2016 (UTC)[reply]
Wnt, if you look more closely, you'll see that the brown and green lines for us and the quasar are each "moving down a line of 'longitude' on that horn thingy". (Don't confuse the diagonal-ish red line of the light from the quasar to us with the brown line on the far left for us.) The "lines of longitude" show static positions in space that are moving apart as time progresses "upwards" only because "space" itself is stretching.
On this scale, only something moving at a substantial fraction of light speed for a long time will show up as moving across rather than "along" the static "longitude" lines.
As for your galaxy that is "almost infinitely red-shifted", this does occur and means the galaxy is close to being beyond the Observable Universe from our point of view (as we are from its). {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 23:04, 25 January 2016 (UTC)[reply]
The boundary of the visible universe is the cosmic microwave background. Its redshift is about 1100, large but still infinitely far from infinity. Any astronomical object we can see will have a redshift smaller than that (unless it's retreating very rapidly relative to the Hubble flow). The CMB is the boundary simply because the earlier universe was opaque to light. But it was transparent to neutrinos and gravitational waves, so I guess "visible universe" is a better name than "observable universe". -- BenRG (talk) 00:44, 27 January 2016 (UTC)[reply]
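For the redshift figures in this thread, the conversion to expansion is direct:

```latex
1 + z \;=\; \frac{a(t_{\text{now}})}{a(t_{\text{emit}})}
```

so the CMB's z ≈ 1100 means lengths in the universe have stretched by a factor of about 1100 since last scattering.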
You're right that the notion of "distance now" is somewhat dubious since we don't know what has happened in the last umpteen billion years, and the spacetime interval to everything we see is zero. But, for better or worse, when astronomers are quoted in the popular press saying that an astronomical object that they just saw is X billion light years away, the orange line is what they mean.
In the diagram the Earth and quasar are both assumed to be stationary relative to the Hubble flow. This is approximately correct for Earth, and it's almost certain to be approximately correct for any distant object that's bright enough for us to see, because its speed is an average of the original speeds of the huge number of particles that make it up. If an object is moving significantly relative to the Hubble flow then its redshift is the special-relativistic redshift/blueshift of the source object relative to the Hubble flow, times the cosmological redshift, times the (small) redshift/blueshift of Earth relative to the Hubble flow. -- BenRG (talk) 00:44, 27 January 2016 (UTC)[reply]

Liquid non-Newtonian fluids?

Which non-Newtonian fluid or fluids would be considered only a liquid - not a plastic solid, nor a colloid of liquid mixed with solid particles (unless there's a colloid that is considered to be only a liquid) - nothing in between? I have learned that not all fluids are liquids, but that all liquids are fluids. Some examples of non-Newtonian fluids are toothpaste, ice in the case of moving glaciers, ketchup, lava, pitch, and much more. They don't flow easily and consistently like water and other Newtonian liquids do. I read that pitch is said to be the world's most viscous liquid, but it is also considered to be a viscoelastic, solid polymer. What does that mean? Is it always a liquid? Can one look at the molecular structure of non-Newtonian fluids to determine which ones are truly liquids? Willminator (talk) 01:05, 25 January 2016 (UTC)[reply]

Is shampoo enough of a liquid for you? I don't think mine has any solid particles in it, but some might. The Kaye effect is a cool demonstration of the non-Newtonian behavior of shampoos and soaps; check the video refs at the bottom of our article. Also oobleck is indeed a colloid, but you can make it so that it flows nearly as easily as water. At that point, it requires a lot of force to see the shear thickening though. Check out shear thinning and shear thickening if you haven't; they discuss some additional examples and concepts. SemanticMantis (talk) 14:45, 25 January 2016 (UTC)[reply]
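For a compact way to state the shear-thinning/shear-thickening distinction those articles discuss, the power-law (Ostwald-de Waele) model writes shear stress τ as a function of shear rate γ̇:

```latex
\tau = K\,\dot{\gamma}^{\,n},
\qquad
\begin{cases}
n < 1, & \text{shear-thinning (ketchup, shampoo)}\\
n = 1, & \text{Newtonian, with } K = \mu \text{ (water)}\\
n > 1, & \text{shear-thickening (oobleck)}
\end{cases}
```

Here K is the consistency index. It's only a model: many real materials also show yield stresses or time-dependent behavior, which is where pitch's viscoelasticity comes in.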
What about ketchup, mustard, and toothpaste? Also, what does it mean for pitch to be a viscoelastic, solid polymer if it is supposedly the most viscous liquid? Is it always a liquid or not? Willminator (talk) 03:23, 27 January 2016 (UTC)[reply]

Are long underwater submarine cruises harmful to human health?

AFAIK the effect of low gravity on bone density means that being in space is fundamentally harmful to humans, and no matter how fit and well-trained they are, this imposes a limit on how long astronauts can stay in orbit. Is there a similar physiological reason why long underwater cruises on a nuclear submarine would be harmful to the crew, and if so roughly how long could they stay underwater? Or would the food run out before anything else became an issue? I guess a modern sub is able to carry some gym equipment; what about sunbeds to replace exposure to sunlight? 94.12.81.251 (talk) 11:56, 25 January 2016 (UTC)[reply]

This is a review of medical problems for naval personnel in the Royal Navy's Vanguard-class submarines. Their routine patrols are about 3 months in duration. Mikenorton (talk) 12:30, 25 January 2016 (UTC)[reply]
That's an interesting study - but it doesn't really tell us much because it only covers 3-month patrols. Over 74 patrols, each with a 150 man crew (340,000 man-days), they only had to pull someone out of the boats 5 times - twice for appendicitis, once for a "chemical eye injury", once for a seizure and once for a severe traumatic hand injury. I'd bet that the eye and hand injuries related to the work being done, and the seizure and appendicitis cases are probably within the norms for 340,000 man-days of any other human situation.
Looking at the problems that were not sufficient to cause the crewmember to be evacuated - we have lots of other injuries - things like chest pain - and "acute opiate withdrawal". But, again, nothing that looks like problems due to being cooped up in a submarine for three months.
So from a cursory glance, there are no issues that would prevent longer missions (except of course that the submarine can't carry enough food for longer trips).
I think we'd need data from much longer trips. But a lot has to depend on monitoring and initial crew quality. The guys who are going to spend a year on the ISS get studied in minute detail before being launched up there. Submarine crews also get health checks - but I can pretty much guarantee that it's nothing like as careful as with ISS crews. That's evident from the crewmember who suffered from "opiate withdrawal"...I can't imagine that being remotely possible with ISS crews.
Looked at another way - it's hard to imagine how submariners could be worse off than the ISS crews. They don't have the gravity problems - or the lowered atmospheric pressure issues that ISS have - they have more space to move around in - and the larger crew presumably makes the mental health issues of being cooped up in a small space more manageable. Submariners get plenty of exercise and "real" food (well, more real than the ISS crew get) - and they don't generally suffer from things like solar radiation that the ISS crew have issues with. So you'd expect them to do much better.
I think we'd need longer studies and with more controlled crew selection and pre-processing before we could reasonably conclude an amount of time. SteveBaker (talk) 14:19, 25 January 2016 (UTC)[reply]
Steve remarks that it is "hard to imagine how submariners could be worse off than the ISS crews..."
Well, there is, of course, combat in submarine warfare. As unfathomable as state-against-state naval warfare may seem in this decade, it is a real threat, and it is one reason that large navies still spend lots of resources to train and maintain crews and prepare for undersea warfare.
In December, I was gifted a non-fiction book, Pig Boats, about submarine warfare during World War II. It details the raw unpleasantries of the war for submarine crews. If you can imagine a way to cause health-harm to a human, the submariners had to deal with it at some time during the war. One advantage the astronauts on International Space Station have is that for the most part, nobody is actively trying to harm or destroy them.
Nimur (talk) 14:51, 25 January 2016 (UTC)[reply]
I also don't think space weather makes the ISS rock and churn on a regular basis, as terrestrial weather does with subs. The last guy I talked to who served on a sub mentioned how some of them hated rising to periscope depth due to the increase in motion sickness it could cause. Crew in a nuclear submarine probably also go longer than ISS crew without seeing the sun. This seems like a pretty serious issue; light levels are carefully studied and controlled on subs [8]. While the ISS crews may have their own problems with light, at least they can often look out the window and see the sun. Here's an interesting ref on disorders in circadian rhythms that mentions submarines [9]. SemanticMantis (talk) 15:25, 25 January 2016 (UTC)[reply]
Our OP is concerned with long underwater cruises on a nuclear submarine - they don't spend much (if any) of that time at periscope depth - so seasickness is not a significant issue. Even if seasickness were a problem - it's a short term, non-life-threatening phenomenon that would not limit the amount of time a person could spend in a submarine - so it's not relevant to answering this question.
Similarly, any likelihood of there being combat missions for these craft has zero impact on the OP's question - which is how long you could live in one of them.
Comparisons with WWII submarines is also pretty irrelevant. A typical nuclear submarine is huge...they are not the cramped, cold, miserable places you'd imagine from seeing WWII craft. SteveBaker (talk) 16:56, 25 January 2016 (UTC)[reply]
I don't care to argue with you. But if you want to actually help OP, you could try supplying references. SemanticMantis (talk) 17:24, 25 January 2016 (UTC)[reply]
As aside, Steve, I don't think the "acute opiate withdrawal" is what you suspect. (That is, it isn't a situation where someone who got addicted to painkillers while landside, nobody noticed when he came aboard, and then he went into withdrawal when his supply ran out at sea.) The footnotes indicate that it was a patient who was prescribed opiate analgesia for pain and who abruptly stopped taking his meds without discussing it with his doctor. Yeah, it's less likely aboard the ISS, but in principle an astronaut could ignore the flight surgeon and stop taking his prescribed meds in orbit, too. TenOfAllTrades(talk) 20:50, 25 January 2016 (UTC)[reply]
Here's a few more scholarly articles on light and circadian rhythms in submarines [10] [11], and one naval report [12]. SemanticMantis (talk) 17:43, 25 January 2016 (UTC)[reply]
Regarding User:SteveBaker's comments above, I also fail to see how the ISS could be less detrimental to your health than a functioning nuclear submarine. Unless you are in something like the Russian submarine Kursk. It might seem counter-intuitive, but according to this source crews of nuclear submarines are exposed to even less radiation than people living above the surface. The background radiation is quite low inside a submarine. The health concerns for the ISS are not only the lack of gravity; the crew are also exposed to cosmic rays. Scicurious (talk) 19:46, 25 January 2016 (UTC)[reply]

IR laser line generator.

I have an idea for a commercial application needing a laser line generator. These are tiny little gadgets costing a few bucks that include a low power laser source and a lens to spread the light out like a fan over maybe 60 to 120 degrees, sealed into a cylinder a couple of centimeters long. (You see them in supermarket barcode scanners, for example).

I know that red and green laser line generators in the <5 mW range are considered to be class 1 or 1M laser devices - which means that they're "safe for consumer use". A line-laser is considerably safer than a regular laser pointer because the energy is spread over a wider area, hence class 1 or 1M rather than class 2 like most laser pointers.

But I'm considering switching from a red-light laser to an IR line-laser of identical power and beam spread. I realize that IR lasers are invisible - so there is a risk of someone staring into the thing without knowing it's there; the beam doesn't invoke the blink reflex or make the iris close down.

Trouble is, I can't figure which class these IR devices belong to and there is no indication on the manufacturer's web site to tell me.

Does anyone know the guidelines about these classes of device? Does the class get better if I limit the power to 3mW or even 1mW?

TIA SteveBaker (talk) 17:15, 25 January 2016 (UTC)[reply]

I think you'll need to get a copy of ANSI Z136.1 to rigorously answer this question. It does not seem to be freely available. Laser_Institute_of_America, the official secretariat of ANSI in this matter, will sell you a print or electronic copy. Here [13] is the TOC and index. Here [14] is a comparison of the 2014 standard compared to previous versions. SemanticMantis (talk) 17:34, 25 January 2016 (UTC)[reply]
The way I learned it in school, every non-visible laser was automatically treated as if it were a Class IV laser. However, if you dig very deeply into, say, OSHA standards, they do not (for the most part) actually distinguish between different classes of laser when specifying workplace safety requirements. As SemanticMantis correctly pointed out, the ISO, ANSI, and IEC technical specifications that define commonly-used laser classification terminology are neither free nor zero-cost.
Invisible lasers are inherently more dangerous: you won't even notice when they malfunction, or when they reflect specularly off a distant object, and so on.
Nimur (talk) 19:14, 25 January 2016 (UTC)[reply]
Classification based on continuous-wave power at various wavelengths.
I can't speak to the accuracy, but laser safety does have charts regarding safe exposure and classification at near-infrared wavelengths. One of the graphs, reproduced at right, suggests that near-infrared wavelengths would be considered Class 1 ("safe under all conditions of normal use") at power levels less than around 0.5 mW and class 3R (or worse) at higher power levels indicating at least some risk of eye injury. If you are definitely going to work with such lasers, I would strongly recommend you verify such safety information with reputable third parties. Dragons flight (talk) 12:27, 26 January 2016 (UTC)[reply]
0.5mW is really low. I've only found cheap 'near' IR line lasers at 1.0mW - but that's spread out over around 90 degrees of 'line'. I wonder whether the classification system would be OK with me physically limiting how close someone could get their eye to the laser such that no more than (say) 1/10th of the entire line could impinge on their eye? Assuming that the light is spread out evenly, that would limit the practical exposure to 0.1mW - which ought to be really safe.
The trouble with "reputable third parties" is that they want to charge a lot of money for their services! I'll certainly go that route before making a final product - but if it's clear that an IR line-laser is a non-starter then I'd rather not fork over the cash! SteveBaker (talk) 15:08, 26 January 2016 (UTC)[reply]
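To make that "1/10th of the line" argument concrete, here's a back-of-envelope sketch. Every number in it is an assumption chosen for illustration, and it is no substitute for the actual IEC/ANSI classification procedure, which uses wavelength-dependent limits and defined measurement apertures and distances:

```python
import math

# Illustrative numbers only -- all assumptions, not a safety calculation.
P_total = 1e-3                 # laser output power, W (1 mW)
fan_angle = math.radians(90)   # line spread into a 90-degree fan
pupil_d = 7e-3                 # dark-adapted pupil diameter, m (worst case)
distance = 0.10                # closest eye approach the enclosure allows, m

# The fan spreads the power along an arc of length fan_angle * distance.
# A pupil at that distance intercepts only its own width of that arc.
arc_length = fan_angle * distance          # ~0.157 m here
fraction = min(pupil_d / arc_length, 1.0)  # ~0.045
P_eye = P_total * fraction
print(f"Power through the pupil: {P_eye * 1e3:.3f} mW")  # ~0.045 mW
```

This treats the line as uniform and ignores the beam's narrow axis, which concentrates power; an enclosure that enforces a minimum eye distance does change the numbers, but only a proper classification measurement can say which class results.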
One option would be to move to longer wavelengths. I was reading the graph at ~800 nm, but if you can move to short-IR at say 1500 nm, you can apparently go to 10 mW at Class 1. Not sure what wavelengths of lasers are available though. As you say, there is probably a fair argument that a line source is much less dangerous than a point source, but I'm not sure how they officially consider such details. Dragons flight (talk) 15:40, 26 January 2016 (UTC)[reply]
The key to safety with lasers - or any other energetic item - is not whether the device is safe when used correctly. It's about ensuring high confidence that the product is idiot-proof, accident-proof, and so forth. Spreading laser energy using a lens or optic does truthfully reduce the hazard - as long as the optic is correctly operating. If the laser is only eye-safe when the beam is spread into a large angular pattern, then what would happen if the device is dropped or misaligned, and the same laser light energy no longer travels through the optic? Now the device has become unsafe.
This is why lasers are classified for safety, and fully-assembled optical systems are not: the beam-spreader may reduce hazard, but the laser itself is still a Class IV, (or whatever).
And when your laser light is invisible - you will not see it when the beam escapes from its designed optical path.
Steve runs a small business - he doesn't have a corporate training department to provide a mandatory Laser Safety class; he doesn't have some weaselly guy from the Health and Safety department running around telling him what he may and may not do; he doesn't have a collared shirt from the legal department telling him how to reduce corporate liability or ensure compliance in every municipality where his business might be construed to operate; and Steve is (as evidenced by his comments) interested in cutting costs. Steve, I really try to avoid interjecting pure, unreference-able advice when I contribute to the reference desk, but here's some free advice from a person who has worked with lasers: err on the side of caution. Do not try to manufacture or sell or use an invisible laser. If you want to play with something safer, try removing the blade-guard from a circular saw, or dumping the charge out of shotgun shells, or generally anything else where you can see and avoid the hazard.
Nimur (talk) 17:31, 26 January 2016 (UTC)[reply]
I think that's a little unfair. My wife and I use a couple of 100 Watt IR lasers every day - in machines that I hand-built from plans (and then heavily modified). So, yeah - I know that lasers are decidedly dangerous, and invisible ones, doubly so and I have a ton of respect for them. Which is why I'm asking the question rather than just building something that might expose someone to danger! My thought processes for this design go like this:
  1. Is it even plausible that an IR laser can be considered "safe" for consumer use?
  2. Is it more plausible that an IR line laser can be considered "safe"?
  3. Does it matter how I enclose it to limit the fraction of the laser line that could impinge on the eye?
  4. If all of those things suggest that this concept is feasibly something that would pass regulatory/safety muster - THEN I can go spend a pile of cash to get an expert to sign off on the design.
  5. If the expert says it's OK - I can make and sell my gizmo.
The point being that steps (1) through (3) need to be considered BEFORE I spend money on step (4) or proceed to step (5). If it's very obvious from available public data that a 1mW IR laser line generator cannot be considered safe for a consumer product - then I can dump the idea and go back to a red laser line generator which I know is class 1M because the manufacturer says so. If it seems likely that an IR laser would also be legal/safe - then I can spend the money to (hopefully) rubber-stamp that answer...with a suitably low probability that I'll be paying money to get a "No!" answer!
16:22, 27 January 2016 (UTC)
Steve, I know you're a smart guy and I trust your judgement... even your step-by-step procedural thought process makes sense. But what I'm trying to say is that the answers to steps 1 through 3 are "no, no, and no;" and if you did have a giant corporate bureaucracy-machine, they'd be the ones telling you "no, no, no, run away screaming."
For perspective, take a long, hard, un-biased, non-fiction look at how a real DVD player protects its laser diode. Disregard fictional videos and "tutorials" from internet enthusiasts who believe that they have "taken out the laser" to play with it.
Can you still go ahead and do it? Sure. Sometimes, great innovation takes place because a smart person pushes beyond the envelope of normal procedure. Maybe once every few decades, a great leap forward occurs in commercial applications for laser optics. But, a lot more often, somebody goes permanently blind, and a giant lawsuit bankrupts everybody involved.
For what it's worth, I definitely did recall that you have a powerful cutting laser, and it's probably among the most dangerous items in your house, or even in your entire neighborhood; so you probably aren't chomping at the bit to encourage members of the public to borrow time on it. It's not a toy. There are lots of things in your house that you wouldn't want people playing with.
Nimur (talk) 16:57, 27 January 2016 (UTC)[reply]
Yeah - improperly handled, a 100W IR laser is a terrifying weapon! It's not for no reason that our two machines are labelled "The Death Ray of Ming the Merciless" and "Illudium Q36" respectively. However, once they are mounted in a nice opaque metal box with magnetic switches to disable the power when the lid is opened, or the smoke extractor isn't pulling air, or the water chiller isn't producing adequate flow rates and appropriate temperature water...they are really pretty idiot-proof machines. Of course, if you do happen to encounter an idiot - then you shouldn't expect them not to be able to bypass the safeties and do extreme amounts of damage - but that's true of very many other things one has around the house. There is no doubt in my mind that a small family car is by far more dangerous than a suitably enclosed 100W laser.
Anyway - I'm convinced that there isn't enough evidence that a 1mW IR laser line generator is considered (legally) safe (although I'm pretty sure it would actually be safe in practice) - so I guess I'll stick with the red laser this time around. SteveBaker (talk) 23:23, 27 January 2016 (UTC)[reply]

Little bugs in uncooked pasta

On more than one occasion, I have noticed little bugs in boxes of uncooked pasta. I see them as soon as I open an otherwise unopened box. Or when I put the pasta in boiling water, I see the little bugs rise to the top. Needless to say, it's disgusting. Where did they come from? How did they get there? And how do I prevent this in the future? I have done a Google search and got a lot of mixed and contradictory results. Help! Thanks. 2602:252:D13:6D70:6441:F28D:D981:B287 (talk) 18:01, 25 January 2016 (UTC)[reply]

What do they look like? All kinds of pests might infest a pantry, and there may be some slight differences in how to treat. A common one is the pantry moth AKA Indian mealmoth. Does that look like the right bug? Another common pantry pest is the flour beetle. One thing to look into is whether you are buying contaminated goods or if the pests are getting in to your food at your house. If it's the former, buy different things. If it's the latter, there are steps you can take. Here are some reliable sources for how to treat and prevent pantry pests: UC Davis [15], Clemson [16], Utah State Extension [17]. Transferring things like pasta to airtight containers is a good first step. SemanticMantis (talk) 18:08, 25 January 2016 (UTC)[reply]
Hard to describe. They are so tiny, they are about the size of a pencil-point tip. Perhaps it is that pantry moth AKA Indian mealmoth that you linked above? (I assume the photos in that link are magnified many, many times over?) I have read that they are already in the pasta (as eggs? or larvae?) and that they then hatch after they come into the house. In other words, they are already in the box when I buy it at the store. Help! Thanks! 2602:252:D13:6D70:6441:F28D:D981:B287 (talk) 18:19, 25 January 2016 (UTC)[reply]
Doesn't sound like it. You should be able to see the light-coloured segmented larvae (grubs) (which may look slightly similar to a Maggot) if they were Indian mealmoths. By the time they are fully grown, they may be about the size of a broken-off pencil tip, i.e. perhaps 5 mm-10 mm long, not simply the point. You'll also see their waste (the threads). And if you've been having the problem for a while, it's likely you'll see the moths around your house. I think it's more likely to be some sort of Flour beetle from your description. Or maybe a Wheat weevil or some variety of Oryzaephilus or something like that. Nil Einne (talk) 18:47, 25 January 2016 (UTC)[reply]
Yes, it does sound more like the Wheat weevil. Yes. What do I do? 2602:252:D13:6D70:186C:D475:39EF:E0EC (talk) 18:41, 25 January 2016 (UTC)[reply]
(EC) Note that if they are Indian mealmoths, you should transfer even opened containers, since they are great at piercing plastic bags. Even then, you may find they make their way into containers which seem airtight. (It's possible the food or container was already infested with eggs, but I'm not convinced it always was. There's also the fact that sometimes you see signs of the infestation, but no dead larvae or moths, despite the product not being opened in a while.) Freezing generally kills the eggs, and some people recommend it even if you're planning to throw out the food, particularly if your rubbish won't be picked up for a while. (If you're home composting, you definitely want to freeze; Indian mealmoths are not going to help your composting.) All these combined mean that even if it is only a localised infestation, it can be quite difficult to get rid of and may require a fair amount of stuff to be thrown out. Nil Einne (talk) 18:28, 25 January 2016 (UTC)[reply]
I don't mind throwing the old boxes out. No problem. But what do I do with the new boxes? The ones that I bring home from the store? Thanks. 2602:252:D13:6D70:186C:D475:39EF:E0EC (talk) 18:45, 25 January 2016 (UTC)[reply]
Did you read my three last links above? They tell you very clearly how to deal with pantry pests. The short version is: buy food without pests, and store food properly. While it can happen that goods are infested before you buy them, things are more likely colonizing the food in your pantry. Invest in some airtight containers, and inspect your food as soon as you get it home. If you detect pests in the food when you first get it, return it to the store and complain. SemanticMantis (talk) 18:49, 25 January 2016 (UTC)[reply]
(EC) I was mostly referring to Indian mealmoths. But the point is to remove the infestation. If you've successfully removed the infestation (it will take a few weeks or months to be sure), there shouldn't really be anything much to do other than taking reasonable precautions like storing food in airtight containers, keeping an eye out for reinfestation and not buying too much food (i.e. using the food fairly fast). As was mentioned by SemanticMantis, if the food is infested at point of purchase, it's probably better to simply buy food that isn't infested. If you really want to treat infested food, I suspect freezing will work for most insect pests (particularly multicyclic freezing). Nil Einne (talk) 18:53, 25 January 2016 (UTC)[reply]
We've dealt with them by putting grain products into metal bins as soon as they come in from the store, and by using pantry moth traps (available at your favorite home improvement store) that use a pheromone lure to a sticky trap. They also help to monitor whether you've got a problem. Use cans and traps for a couple of life cycles and you should be free of them. Acroterion (talk) 18:55, 25 January 2016 (UTC)[reply]
Home-stored product entomology Anna Frodesiak (talk) 19:01, 25 January 2016 (UTC)[reply]
Thanks for the indent and sorry for just dumping the link there. I couldn't reach the keyboard because I was snuggled up under the blanket with only a mouse, because it's like -54 Kelvin here. By the way, consider zipping over to the Home-stored product entomology talk page about that article and its big 5 pests (more than that, me thinks!). Anna Frodesiak (talk) 00:08, 26 January 2016 (UTC)

Thanks. If I buy a box of pasta, can I just throw it in the freezer? If so, would I throw it in the freezer as is (in the original box)? Or put the contents of the box in some other container? And, if I freeze it, what do I need to do when I want to cook it? Thaw it? Defrost it? Or just cook it right from the frozen state? Thanks! 2602:252:D13:6D70:186C:D475:39EF:E0EC (talk) 19:23, 25 January 2016 (UTC)[reply]

I think you can freeze it in the original container and toss it right in the boiling water from there. If there's any left, you might want to seal the closed package in a plastic bag, to prevent freezer odors from being absorbed. You might use the plastic bag right from the beginning, too, as pasta boxes don't seem to be properly sealed to me (which is probably how bugs keep getting in). You might try another grocery store and another brand of pasta, as one or the other obviously has an insect problem. StuRat (talk) 19:51, 25 January 2016 (UTC)[reply]
Yeah, the solution to bugs in your pasta is not freezing all your pasta forever. It's buying pasta with better quality assurance and using proper storage... SemanticMantis (talk) 20:17, 25 January 2016 (UTC)[reply]
I don't see the need to freeze indefinitely. The point of freezing is to kill the eggs. If you don't have an existing infestation and practice decent vigilance, you'll hopefully not get another one if you kill the eggs before an infestation can take hold. (There is a risk that freezing won't kill all eggs, particularly if the food has a lot of them and you are buying a lot of food that is infested. It's also likely some species have eggs that are hardy enough to survive freezing, or even cyclic freezing.) I do agree that this doesn't seem the smartest solution as opposed to just avoiding products that are infested. Nil Einne (talk) 13:34, 26 January 2016 (UTC)[reply]

A few people have said that I should buy food that is not infested. That is obvious. But, how do I know that when I am at the store? I don't see or notice these bugs until after I get home and open the box and start to cook. So, at the store, how would I have seen/noticed this? Obviously, one does not open up the box of pasta in the store. It sits at home until the day I decide to use it. 2602:252:D13:6D70:186C:D475:39EF:E0EC (talk) 20:56, 25 January 2016 (UTC)[reply]

I suggest you start opening them when you get home, to determine if they have bugs, because, if they do, those bugs might infest other items in your kitchen, like cereal boxes. You may already have an infestation problem, in which case you would have to take action to clear that up. Another alternative, once you've found the pasta to be clean, is to store it in glass or tin containers, which, unlike those paper boxes, seal tightly enough to keep bugs out. StuRat (talk) 21:04, 25 January 2016 (UTC)[reply]
As already mentioned, you should be storing your food in airtight hard containers if you are having problems. It's ideal if you do this from the get-go, since some insects can penetrate plastic bags. If you open your food and find it is already infested, return it to the store. If this happens often, or they refuse to accept returns of purchases made the same (or nearly the same) day that are clearly defective (and unless you're giving them older items, the goods must have been defective when purchased), you should do what's already been suggested and shop somewhere else. (If they do accept returns and it happens often and you still want to shop there, I guess they'll probably know they have a problem and so keep letting you make returns despite the fact you're always doing it. If they do start to make problems, I guess you could say you'll open it in front of the staff.) Nil Einne (talk) 13:31, 26 January 2016 (UTC)[reply]
Storing non-perishables in your refrigerator should be a good test. If it still has bugs, they most likely came from the store or the manufacturer, not from your home. ←Baseball Bugs What's up, Doc? carrots→ 16:34, 26 January 2016 (UTC)[reply]

My refs above

Sorry, it looks like I've inserted some references where they shouldn't be, and I can't seem to delete them. Can anybody more experienced with wiki editing remove those 3 links please? Thanks Mike Dhu (talk) 21:02, 25 January 2016 (UTC)[reply]

I changed them to bare urls, so that they don't appear at the bottom of the page - I hope that's what you wanted. Mikenorton (talk) 21:08, 25 January 2016 (UTC)[reply]
Great, thank you, I need to learn more about how to edit wikipedia, I must have selected the wrong tag for the references. Sorry, I'll stick to sandbox for a while :-/ Mike Dhu (talk) 21:16, 25 January 2016 (UTC)[reply]
There's a way to embed references within a section on a talk page, but the exact syntax is not coming to mind just now. ←Baseball Bugs What's up, Doc? carrots→ 21:30, 25 January 2016 (UTC)[reply]
Please see this edit.—Wavelength (talk) 21:49, 25 January 2016 (UTC)[reply]
Yes, one of those previous edits, before Mikenorton kindly corrected my mistake, was my reply to a question where I wanted the references to appear, but they were appearing at the bottom of the ref desk. I used the 'ref' tag instead of square brackets, but realise now they are used for different purposes. I should probably post the rest of this reply as a question on the computing ref desk but I'll ask here first out of courtesy as this is where I made the mistake. I'll be spending a bit of time in sandbox now, but would it be possible to change the tooltips to give more information? Whenever I hover over any wiki markup it just displays a tooltip that I should click on it to insert it, without explaining specifically what the markup is for. I appreciate that for those of you who have spent a bit of time editing wikipedia it's second nature to know what tags to use, but more info on tooltips would make it easier for newcomers. Thanks Mike Dhu (talk) 00:22, 26 January 2016 (UTC)[reply]
There's no need for self-deprecation; in many ways, the problem is with this page and not anything you did "wrong". Considering that we're ostensibly here to supply references, we're not actually well set up to provide them in the same manner that articles do. Matt Deres (talk) 15:50, 26 January 2016 (UTC)[reply]
I didn't think I was being self-deprecating, just acknowledging that I made a mistake while I'm still learning how to edit wikipedia, but thank you. It would be nice to have more info from the tooltips though, so I'll post a question on the computing ref desk about that. Mike Dhu (talk) 22:07, 26 January 2016 (UTC)[reply]

January 26

Atmosphere of Venus / compressing C5O10N2

I was musing over a Venus terraform in a very crude way, considering what the prevalences in a 92 bar atmosphere are relative to Earth:

CO2: 8878% vs 0.040%
CO: 0.16% vs ~0
N2: 322% vs 78.084%
Ar: 0.64% vs 0.93%
H2O: 0.18% vs 0.001%-5%
He: 0.011% vs 0.0005% (waaaat?)
Ne: 0.064% vs 0.0018%
SO2: 1.38% vs ~0

I get that in order to make Venus' atmosphere Earthlike, you basically have to dump 364 pounds of C, 970 pounds of O, 36 pounds of N and a mere 0.1 pounds of S onto every square inch of the planet's surface - or beneath it. Ignoring the S, which can clearly be made a solid but seems too small a component to figure into a bulk formula, that's an empirical formula of C5O10N2. Now we saw in a thread above that even bulk CO2 can be pressurized into an extended solid, but is there any way to predict what this composition would turn into, at what pressure? For example, I'm thinking you might get two NO2 groups on a C5O6 extended structure (almost a polyketone, though adjacent ketones are high-energy and disfavored) ... but I certainly don't know that. It seems empirically like a doable experiment, but has anyone done it? (yes, I realize that this is nearly 1 million times harder than fixing global warming, possibly using similar carbon sequestration technology, without a local infrastructure, and so it is not going to be done by any normal means we know of today... and the planet is still extremely dry)

Another question: why so much helium? I thought the conventional wisdom was that light gases are lost, but if you sequester away the other stuff there's like 200 times more helium on Venus than here - even though the argon level is lower, and there's also 60 times more neon. I thought a noble gas was a noble gas... There might be an answer at [18] but I didn't have access, and can't riddle it out from the abstract. Wnt (talk) 17:27, 26 January 2016 (UTC)[reply]
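For anyone checking the arithmetic: the percentages in the list above appear to be partial pressures expressed relative to one Earth atmosphere (so CO2's 8878% is about 88.8 bar), and the quoted per-square-inch masses can be reproduced with simple column arithmetic. A minimal sketch, assuming (as the quoted figures seem to) that the whole 92 bar column is CO2 and that pressure in psi reads directly as pounds of overburden per square inch:
<syntaxhighlight lang="python">
# Back-of-envelope check of the per-square-inch masses quoted above.
# Assumptions (illustrative, not the poster's stated method): the full
# 92 bar column is treated as pure CO2, 1 bar = 14.504 psi, and psi is
# read directly as pounds of overburden per square inch; correcting for
# Venus' 0.904 g surface gravity would raise every figure by ~10%.
PSI_PER_BAR = 14.504

co2_lbs = 92 * PSI_PER_BAR           # ~1334 lbs of CO2 above each square inch
c_lbs = co2_lbs * 12 / 44            # carbon's mass fraction of CO2  -> ~364 lbs
o_lbs = co2_lbs * 32 / 44            # oxygen's mass fraction of CO2  -> ~970 lbs
n_lbs = (3.22 - 0.78) * PSI_PER_BAR  # N2 in excess of Earth's level  -> ~35 lbs

print(round(c_lbs), round(o_lbs), round(n_lbs))  # 364 970 35
</syntaxhighlight>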

The thermosphere of Venus is much colder than that of Earth. Therefore the losses of light gases are smaller. Ruslik_Zero 20:37, 26 January 2016 (UTC)[reply]
Well! I went back, looked at that article and atmosphere of Venus - sure enough, Earth's thermosphere can get up to 2500 C, and Venus can get up to ... 120 C or so. I have no idea why. But I thought the popular wisdom was that all Venus' hydrogen was lost to space. How can that be so if helium doesn't escape nearly as much as on Earth due to the colder thermosphere? I clearly have more shocks in store for me here. Wnt (talk) 14:26, 27 January 2016 (UTC)[reply]
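The thermosphere link can be made quantitative with the Jeans escape parameter, the ratio of a particle's gravitational binding energy to its thermal energy at the exobase: the thermal escape flux scales roughly as (1 + lambda) * e^(-lambda), so a colder exosphere suppresses helium loss exponentially. A rough sketch, with the temperatures taken from the figures quoted above and the exobase radii as order-of-magnitude assumptions:
<syntaxhighlight lang="python">
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23       # Boltzmann constant, J/K
M_HE = 4 * 1.661e-27  # mass of a helium atom, kg

def jeans_lambda(planet_mass_kg, exobase_radius_m, temp_k, particle_mass_kg=M_HE):
    """Jeans escape parameter: gravitational binding energy over thermal
    energy at the exobase. Thermal escape flux scales roughly as
    (1 + lam) * exp(-lam), so a larger lambda means exponentially slower loss."""
    return (G * planet_mass_kg * particle_mass_kg
            / (K_B * temp_k * exobase_radius_m))

# Exobase radii are assumed, illustrative values only.
print(jeans_lambda(5.97e24, 6.9e6, 1000))  # Earth, hot thermosphere: ~28
print(jeans_lambda(4.87e24, 6.3e6, 400))   # Venus, cold thermosphere: ~62
</syntaxhighlight>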

Research article query

According to the article found at [19], figure 2 implies that as the hydrogen content in chromium approaches zero, the FCC and HCP crystal structures of chromium become stable at ambient pressure. This is a problem, because it is known for a fact that BCC is the sole stable structure at ambient pressure and temperature. How does one reconcile this implied inconsistency? Plasmic Physics (talk) 20:42, 26 January 2016 (UTC)[reply]

I don't know about "known as a fact" (or crystal phases of chromium hydride, at that). In general, science is always preliminary. But in this case, the system is at 150°C, which certainly is not "ambient temperature". --Stephan Schulz (talk) 22:12, 26 January 2016 (UTC)[reply]
Chromium with zero percent hydrogen can hardly be considered as chromium hydride, can it? Looking at the pressure-temperature phase diagram of chromium, it should remain as BCC up to its melting point. So, the system being at 150 degrees should not matter. Plasmic Physics (talk) 22:54, 26 January 2016 (UTC)[reply]

Earliest predictions of man on moon in 60s?

May 25, 1961 President Kennedy announced the US goal to send men to the moon and return, 'by the end of the decade'. Are there records of earlier predictions by scientists, policy makers or governments (not looking for the 'Jules Verne' long literary history), that men could land on the moon in the 1960s or by when? I'm interested in finding the first informed, professional prediction that proved correct - men walking on the moon before the end of the 1960s. Thanks if you can point to a link or citation.

There were extensive predictions, varying in accuracy and credibility from fringe lunatics to public statements by esteemed scientists, major movers and decision-makers! I have a stack of moon books at home written in the 1940s and 1950s; they make for great historical reading. If you'd like a complete listing, I can provide titles and authors.
Perhaps the first place to start is our article on Wernher von Braun:
"In 1930, von Braun attended a presentation given by Auguste Piccard. After the talk the young student approached the famous pioneer of high-altitude balloon flight, and stated to him: "You know, I plan on traveling to the Moon at some time." Piccard is said to have responded with encouraging words."
By the mid-1950s, the accuracy of the mission-statement was becoming very concrete and there are hundreds of scientific publications that aptly describe how a manned moon mission would probably look.
The reason that Mr. Kennedy's statement was so important was that he had the power to finance the program.
This 1979 documentary by James Burke, The Other Side of the Moon, is spectacular. He overviews the political climate, including interviews with several scientists and program managers. Among the key statements (somewhere probably around half an hour into the documentary) is a description of how they managed to get Mr. Kennedy to make a statement: it had been decided that it would be politically expedient for Vice President Johnson to formally advise the president in writing that a moon mission would be possible, and that it would be politically expedient for the President to proceed to order a study, and eventually make a formal public statement. The discussions that led to that point were quite extensive.
Nimur (talk) 23:34, 26 January 2016 (UTC)[reply]
See Moon in fiction, which lists many previous stories of human landings on the moon. Use your own judgment as to how realistic any of the twentieth-century stories were. Robert McClenon (talk) 23:37, 26 January 2016 (UTC)[reply]
But the question was about "informed, professional predictions", not fiction. --76.69.45.64 (talk) 05:59, 27 January 2016 (UTC)[reply]

I'm the OP: After further research myself, I came upon this, which I think is the type of information I was looking for. Can anyone get closer, more specific, earlier in terms of the type of prediction this demonstrates: "Copenhagen, Denmark, Jan 8 (1960) (AP) - A Soviet rocket expert said today that man may set his foot on the moon some time in the 1960s. Stopping over at Kastrup airport en route to an international conference in Nice, France, Lt. Gen. Anatoloy A. Blagonravov told newsmen it is still too early to set a date for the firing of a manned moon rocket, 'but I would consider it probable that it may be sent to the moon within a brief period of years, possibly in 10 years.'" - ...I find it interesting that it was a Soviet expert that made this prediction, and that he predates Kennedy's declaration. This guy turns out to be pretty interesting as he was key in representing the Soviets in all talks on cooperation and joint space activities. Wikipedia has a brief article about him, but amending it to include the information about his prediction is above my skill set, I think. I also wonder if this info was hard to find or not obvious to Wikipedia researchers as the guy was a Soviet instead of an American? Research bias? I'm not accusing, just wondering... — Preceding unsigned comment added by 94.210.130.103 (talk) 10:17, 27 January 2016 (UTC)

I would note that moon shots weren't really an independent new technology to predict. To this day, moon shots and other orbital activities serve as a sort of respectable face for ICBM development, and I believe on examination many components can be found in common. Since the latter was seen as a very high priority and subject to great planning and anticipation, the former should have been more predictable than if it were done solely by ivory-tower researchers looking to have a space jaunt. Wnt (talk) 15:34, 27 January 2016 (UTC)[reply]


Here's a good starting book: Realities of Space Travel (1957), edited by Leonard Carter. This book details the mechanisms of lunar and interplanetary flight, endorsed by several scientists from the American Institute of Physics, the British Interplanetary Society, and so on. Understand that this book, published in 1957, predates the formal existence of NASA...
The book is full of citations, science, math, technology reviews, and so on.
The introduction to this book walks the reader through an orbital dynamics equation to calculate the necessary energy budget for a manned rocket flight to the moon.
Later chapters detail the state of the art in technology, including rocket design, electronics, biomedical factors, and so on.
Nimur (talk) 17:15, 27 January 2016 (UTC)[reply]
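The kind of energy-budget calculation such books open with is easy to reproduce in outline: the minimum specific energy to lift a payload from Earth's surface out to lunar distance is nearly the full escape energy. A sketch of that generic two-body estimate (not the book's actual derivation, which is not reproduced here):
<syntaxhighlight lang="python">
G_M_EARTH = 3.986e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # Earth's mean radius, m
D_MOON = 3.844e8      # mean Earth-Moon distance, m

# Minimum energy per kilogram to climb from the surface out to lunar
# distance, ignoring the Moon's own gravity, Earth's rotation, drag,
# and all propulsion losses.
e_climb = G_M_EARTH * (1 / R_EARTH - 1 / D_MOON)
e_escape = G_M_EARTH / R_EARTH  # full escape energy, for comparison

print(e_climb / 1e6, e_escape / 1e6)  # ~61.5 vs ~62.6 MJ per kg
</syntaxhighlight>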

January 27

Can a man's epididymis grow back if *all* of it is removed?

For reference: Epididymis. Futurist110 (talk) 00:05, 27 January 2016 (UTC)[reply]

I don't think so. Organs generally don't regenerate. Semi-exceptions: liver and brain. Why are you asking? Are you thinking of something like a vasectomy spontaneously reversing, which can happen? If so, that involves the vas deferens, not the epididymis. --71.119.131.184 (talk) 00:26, 27 January 2016 (UTC)[reply]
Why exactly can the vas deferens grow back but not the epididymis, though? Futurist110 (talk) 02:34, 27 January 2016 (UTC)[reply]
Indeed, if one tube/duct can grow back, then why exactly can't another tube/duct likewise grow back? Futurist110 (talk) 02:36, 27 January 2016 (UTC)[reply]
Good question. It doesn't really "grow back" in the sense of sprouting a new one from scratch. Exact vasectomy methods can vary a little (see the article), but in general the vas deferens is severed. Sometimes a portion is removed, but sometimes it's just cut, and the cut segments closed off with surgical clips or something along those lines. So, you can get minor tissue growth that winds up reconnecting the segments. Some additional procedures, like forming a tissue barrier between the vas deferens segments, have been tried to reduce the likelihood of spontaneous reversal. --71.119.131.184 (talk) 02:45, 27 January 2016 (UTC)[reply]
OK. Also, though, out of curiosity--can the vas deferens grow back if *all* of it is surgically removed? Futurist110 (talk) 02:58, 27 January 2016 (UTC)[reply]
In general, the less differentiated a tissue is, the easier it is for it to regenerate. The vas deferens are fairly simple muscular tubes, in contrast to the epididymis and testes, which are specialized organs, so it's not surprising that you can get some regrowth of the vas deferens. --71.119.131.184 (talk) 02:45, 27 January 2016 (UTC)[reply]
Is regrowth of the epididymis and testicles (after *complete* removal of the epididymis and testicles, that is) completely impossible or merely unlikely, though?
Also, please pardon my ignorance, but isn't the epididymis a tube just like the vas deferens is? Futurist110 (talk) 02:58, 27 January 2016 (UTC)[reply]
The epididymis is a 'tube' but a longer and more complex one, and if it's removed completely it won't grow back (parts of it may, but not the entire connection). If you snip the tube, then you've got a similar situation to a vasectomy, which can in rare circumstances reverse (repair) itself, but that's a very small step as opposed to regenerating an entire epididymis. The body is good at 'protecting' itself by repairing damage, whether that's by growing new tissue to reverse a vasectomy or repairing a damaged organ. What we lack is the regenerative capability to re-grow any parts that have been removed/destroyed completely, including the epididymis. I think a quick Google search will make it clear that testicles don't grow back. Mike Dhu (talk) 03:21, 27 January 2016 (UTC)[reply]
An aside on skin growth and stretch marks
Another exception is skin, which grows just fine, if given enough time (like when you gain weight). But, if you try to grow it too quickly, you get stretch marks and scars. StuRat (talk) 00:29, 27 January 2016 (UTC) [reply]
That's not an exception, just wrong. Skin contains elastic fibers that allow it to stretch during weight gain or recoil or shrink during weight loss. So someone who gains a large amount of weight does not grow new skin. Their skin stretches to accommodate the accumulation of fat tissue. Scicurious (talk) 14:13, 27 January 2016 (UTC)[reply]
The definition of "growth" here may be tricky. According to these two papers ( [20][21] ) the skin consists of epithelial proliferative units (though there may be some equivocation on the details) and each unit has its own stem cell. Given the chance, they clonally expand, but a unit without a stem cell can also be colonized. If you simply look at a section of skin, you're not going to see a lot of gaps where cells no longer contact -- something is taking up the slack. Yet at the same time, the hair follicles don't increase in number, and they have their own stem cells that can provide regeneration in case of injury. So when a baby's scalp becomes a woman's, you can say her skin grew, in the sense that it is probably thicker and stronger and has more cells in it than when she was a baby. But yet, the hair follicles are no more numerous, so the regenerative potential from that source is presumably reduced. I'm not sure exactly what happens to the average EPU size. Wnt (talk) 14:58, 27 January 2016 (UTC)[reply]
Just compare the surface area of a man at 150 lbs to the surface area of the same man at 350 lbs a few years later. The skin "grew" under most any definition: more of it, more cells, new cells, more area, more mass, etc. Here's a nice paper that breaks down relative skin proportion in mice [22]. Unfortunately it's about two different strains of mice rather than weight gain within strains, but the point is that the bigger mice have more skin as they grow, just as humans grow more skin as they grow. SemanticMantis (talk) 15:11, 27 January 2016 (UTC)[reply]
Ahem. Stretch_marks "are often the result of the rapid stretching of the skin associated with rapid growth or rapid weight changes." That sentence is not sourced, but see table 2 here [23], and perhaps add it to the article for your penance :) Now, this may have been avoided had Stu given a reference, and you also are correct that the skin can stretch a lot. Some specific types of stretch marks are less influenced by weight gain, but that's a small detail in an issue unrelated to OP. Please let's endeavor to include references when posting claims to the ref desks. Thanks, SemanticMantis (talk) 15:04, 27 January 2016 (UTC)[reply]

U.S. currency subjected to microwaves

See HERE Is there any validity to this? Hard to filter out the nonsense/conspiracies. If so what is the mechanism of action? 199.19.248.82 (talk) 02:07, 27 January 2016 (UTC)[reply]

I wouldn't say it's that hard. Youtube videos and anything associated with Alex Jones or with spreading conspiracy theories, like godlikeproductions.com and prisonplanet.com, are obvious stuff to filter out. Of the top results, that will probably leave you with [24], [25] & [26]. A quick read should suggest both the first and second links are useful. In fact, despite the somewhat uncertain URL, the snippet probably visible in Google for the first link does suggest it may be useful, which is confirmed by reading.

A critical eye* on the less top results will probably find others like [27] which are useful.

I don't think it's possible to be certain how big a factor the metallic ink (or perhaps just the ink) was in the cases when the bills did burn, and how much of it is simply that stacking a bunch of pieces of paper and microwaving them for long enough will generally mean they catch fire. Suffice it to say they are probably both factors. It's notable that these stories alleging evil-doing lack controls: they didn't try a stack of similar-sized ordinary paper (obviously it would be very difficult to obtain the actual paper used for bank notes).

BTW, [28] isn't actually too bad in this instance if you ignore the crazies. It does look like one of the more critical posters there made a mistake. While the idea that some minimum wage employee is going to be microwaving $1000 worth of $20 bills, or heck, that they would just so happen to have that much cash in their wallet, isn't particularly believable, if you read the original story carefully [29], the min wage employee was someone else, not the person who had the money. Still, as some of the other sources above point out, there are obvious red flags in the original story, like the fact that they claimed to microwave over $1000 in $20 bills but only show 30 bills (i.e. $600) there. And their claim that the burning is uniform isn't really true: the amount of burning shown varies significantly, and while it's normally in the head, sometimes it seems much more in the left eye than the right.

Critical eye = at a basic level, anyone who seriously believes there are RFID tags in bank notes would be best ignored. And while forum results can have useful info, it often pays to avoid them due to the number of crazies, unless you can't find anything better. Instead concentrate on pages that sound like whoever is behind them is trustworthy, and check them out by reading. To some extent, anything which sounds like it's claiming to be scientific is potentially useful in this instance, since while there are a lot of sites and people who claim science when they are actually into pseudoscience, this is much more common with stuff like alternative medicine, climate change denial or anti-evolutionism than it is with conspiracy theories about RFID tags in banknotes.

A lot of this can be assessed without having to even open the sites/links in the Google search results. Some others do depend on existing knowledge, e.g. knowing the URLs for Alex Jones or conspiracy theorist sites. Still, you only need a really brief look to realise godlikeproductions or prisonplanet are not places you should count on for useful info.

Nil Einne (talk) 08:55, 27 January 2016 (UTC)[reply]

Given how extensive known privacy invasions have become, and how obvious the government's motive for spotting large amounts of money is, I don't think condescension is deserved. The people I saw made various hypotheses and tested them. However, some of the assumptions may be questionable. For example, I doubt that RFID is the only way to track a bill by penetrating EM radiation, and I doubt that RFID chips inevitably catch on fire in a microwave. I am very suspicious of the uses of terahertz radiation and lower frequency radio waves - obviously, the higher the frequency/shorter the wavelength, the smaller the receiver can be and the more readily it can dissipate heat to its surroundings. Alternatively, terahertz can simply penetrate the human body, as with airport scanners, and so if someone designed a set of terahertz dyes, probably some conjugated double bond system that goes on a really long but tightly controlled distance, then they can have their damned identifying codes marked out in a way you will see only if you can scan through the terahertz spectrum with a more or less monochromatic emitter and view what is transmitted and reflected. If I see someone do that experiment on a bill, I'll believe it's not being tracked... maybe... otherwise, I should assume it is (it's just a question of whether those interested in you are important enough to have access) Wnt (talk) 15:13, 27 January 2016 (UTC)[reply]
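On the receiver-size point: antenna dimensions scale with wavelength, and wavelength is just c/f, so terahertz hardware can indeed be orders of magnitude smaller than conventional RFID antennas. A quick illustration (the RFID frequencies are standard published bands, included only for scale):
<syntaxhighlight lang="python">
C = 2.998e8  # speed of light, m/s

# Free-space wavelength for a few frequencies. An efficient antenna is
# typically a sizable fraction of a wavelength, which is why higher
# frequencies permit physically smaller receivers.
for f_hz, label in [(13.56e6, "HF RFID"), (915e6, "UHF RFID"), (1e12, "1 THz")]:
    print(label, C / f_hz, "m")  # ~22 m, ~0.33 m, ~0.0003 m
</syntaxhighlight>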
Their tests were very poorly planned (if you genuinely believe that something has an RFID tag, either feel for it or look at it under a microscope, as someone else who didn't believe their nonsense did; don't microwave it). And, as I already said, they lacked even the most basic control for even the stupid test they were doing. And they were either incapable of even counting, or didn't show all their results - results which didn't even show what they claimed to show. So condescension is well and truly deserved.

BTW most terahertz radiation can barely penetrate the human body (our article says "penetrate several millimeters of tissue with low water content"). The main purpose of most airport scanners using terahertz radiation is to penetrate clothes not the human body (they may be able to see the body which is quite different from penetrating the body).

Note that in any case, the issue of whether bills are being tracked is unrelated to the question (unless you're claiming the cause of the notes catching fire in microwaves really was RFID chips, which it doesn't seem you are) and wasn't discussed by anyone here before you. I only mentioned that the specific claim mentioned in some sources discussing microwaving money (the presence of RFID chips in money as a cause for them catching fire) was incredibly stupid.

Nil Einne (talk) 19:23, 27 January 2016 (UTC)[reply]

Are dogs racist?

Do dogs prefer their own breed for mating, or are they at least more aggressive towards breeds far removed from their own? --Scicurious (talk) 12:46, 27 January 2016 (UTC)[reply]

Intriguing question. I have absolutely no idea how to answer it though... just picture the kind of laboratory you'd have to set up to try to socialize dogs under highly consistent conditions, then see whether they act differently. I'm tempted just to read anecdotes here, like [30]. Individual people describe dogs with out-of-breed associations, even as others say that you can just tell at a dog show, etc. The existence of the mutt is proof that any breed loyalty is not absolute ... it's also a reminder that the dogs people buy are often not the result of freely assortative (or non-assortative) mating. Wnt (talk) 15:25, 27 January 2016 (UTC)[reply]
The studies will be done more like sociology/ethology, not through controlled exposure experiments. They'll use things like surveys and observations and medical records and lots of relatively fancy statistics. E.g. these [31] people have survey data on dog-dog aggression by breed, but I can't see that they reported it! Even if breed was not a significant factor, they should say so... This paper [32] does have relevant data (tables 2, 3), but the data are sparse and breed of the other dog is not reported. Here are a few more scholarly papers that look promising [33] [34]. OP can find many more by searching Google Scholar for things like /dog breed intraspecies aggression/. If OP is interested in anecdotes and OR (which is potentially valuable here), I'd suggest asking at a dog forum. SemanticMantis (talk) 15:56, 27 January 2016 (UTC)[reply]
If you go to any large dog park, you'll see dogs of all breeds playing together - even when there are enough of one common breed for them to potentially group together. So it seems rather unlikely that they care very much. The only preferences I think I see are that there seems to be some kind of broad preference for other dogs of similar size. Our lab gets visibly frustrated with very small dogs...but whether that is due to their behavioral tendencies is hard to tell. SteveBaker (talk) 16:04, 27 January 2016 (UTC)[reply]
Steve, I just misread your message and began wondering why your laboratory was getting frustrated with small dogs! Presumably, you mean your labrador! This breed, rather surprisingly, has topped several lists for aggression - particularly when their home territory is "invaded" by people such as postmen. Regarding the OP, I have no references to support this, but I very much doubt there would be a psychological racism about mating amongst dogs. There may be preferences according to size, but just the other day I saw a rather humorous photo of a male Chihuahua perched on the back of a female Great Dane so he could mate with her. Very probably staged though. DrChrissy (talk) 16:24, 27 January 2016 (UTC)[reply]

Excessive inbreeding as practiced by humans on pedigree dogs (controversy) has caused genetic defects that would not survive under natural selection, while there is likely evolutionary survival value to hybrid vigor. Dogs sensibly rely more on their Vomeronasal organ to evaluate the pheromones of a potential mate than on any version of a Kennel club breed checklist. AllBestFaith (talk) 17:17, 27 January 2016 (UTC)[reply]

It's not about mating and it's not about dogs, but rats can be racist, see Through the Wormhole S06E01 Are We All Bigots. Of course, they can be educated not to be racist. Tgeorgescu (talk) 20:28, 27 January 2016 (UTC)[reply]

Why don't some species have a common name?

Many species have a common name. Human. Squirrel. Rat. Dog. Whale. Dolphin. Fern. Some species don't seem to have common names. Entamoeba histolytica. Staphylococcus aureus. Candida albicans. Why don't scientists invent common names for specific parasites, bacteria, and fungi? Instead of Staphylococcus aureus, which can be a mouthful to say, the common name may be Staphaur bacteria. 140.254.70.165 (talk) 12:49, 27 January 2016 (UTC)[reply]

Also, of the above names, only two are single species in common use, H. sapiens and C. familiaris. Robert McClenon (talk) 21:59, 27 January 2016 (UTC)[reply]
My answer as to why scientists don't invent common names is that they don't need to, because scientists refer to the species by its taxonomic name. It is up to non-scientists to invent common names, since the scientists are satisfied with the scientific name. Why journalists and others don't invent common names for every species is described below. Robert McClenon (talk) 21:56, 27 January 2016 (UTC)[reply]
It is "an attempt to make it possible for members of the general public (including such interested parties as fishermen, farmers, etc.) to be able to refer to" them, according to common name. I find it difficult to find an exception, but if common people relate somehow to a species, then a common name exists. Otherwise not. --Scicurious (talk) 12:57, 27 January 2016 (UTC)[reply]
As to fishermen, I will note that often the same common word, such as "trout" or "bass", may be used differently in different English-speaking regions. Fishermen who are aware of regional inconsistencies in naming will often use the unambiguous scientific name to disambiguate. Robert McClenon (talk) 21:56, 27 January 2016 (UTC)[reply]
It could also be that scientists just aren't that creative with names... FrameDrag (talk) 14:46, 27 January 2016 (UTC)[reply]
It's the other way round. Scientists give each known species a name. Common people are not prolific enough to keep up with them.Scicurious (talk) 14:54, 27 January 2016 (UTC) [reply]
Staphylococcus aureus is known just as 'Staph', similarly Streptococcal pharyngitis is known as 'Strep'[35], so those are the common names. Mikenorton (talk) 13:10, 27 January 2016 (UTC)[reply]
Golden staph.
Sleigh (talk) 13:49, 27 January 2016 (UTC)[reply]
Bacteria are a special case, where the usual test for a species, whether it breeds with itself and not with related species, does not apply. This results among other things in so-called species, such as E. coli, that consist of a multitude of so-called varieties that are really so different in their behavior that they are probably multiple species. But the question originally had to do primarily with plants and animals. Robert McClenon (talk) 21:53, 27 January 2016 (UTC)[reply]
Does that refer to the color of the snot ? :-) StuRat (talk) 16:40, 27 January 2016 (UTC) [reply]
The thing about common names is that they need to be popular and commonly used - and it's hard to dictate that. People name things if they need to - and not if they don't. People don't need common names for organisms they'll never encounter or care about. Also, there are far too many organisms out there to have short, simple, memorable names for all of them. We tend to lump together large groups of organisms into one common name. "Rat" (to pick but one from your list) formally contains 64 species...but our Rat article lists over 30 other animals that are commonly called "Rat" - but which aren't even a part of the genus Rattus. So allocating these informal names would soon become kinda chaotic and nightmarish. Which is why we have the Latin binomial system in the first place. That said, scientists very often do invent common names for things - so, for example, Homo floresiensis goes by the common name "hobbit" because the team that discovered this extinct species liked the name and it seemed appropriate enough that it's caught on. Whether that kind of common name 'catches on' is a matter of culture. All efforts to get members of the public to understand that there is no difference between a "mushroom" and a "toadstool" and to adopt a single common name fail, because the public believe that there are two distinct groups of fungi even though there is no taxonomic difference between fungi tagged with one or other of those two terms. Another problem is that common names are (potentially) different in every language...so would you have these scientists invent, document, and propagate around 5,000 common names - one in each modern human language? It's tempting to suggest that the same name would be employed in every language - but pronunciation difficulties and overlaps with names for existing organisms or culturally sensitive terms would make that all but impossible. SteveBaker (talk) 16:00, 27 January 2016 (UTC)[reply]
The OP is overlooking the more obvious Hippopotamus and Rhinoceros, which have local names but in English are known by these Latin-based names - or by "hippo" and "rhino", which mean "horse" and "nose" respectively. ←Baseball Bugs What's up, Doc? carrots→ 17:07, 27 January 2016 (UTC)[reply]
Of course, the full names in Greek are "River Horse" and "Horned Nose". (The names derive originally from Greek rather than Latin, though they arrive in English through Latin transcription. The native Latin word meaning horse is "equus", cf. equine. The native Latin word meaning nose is "nasus", hence "nasal".) Of course, both names are wrong. Hippos are not particularly closely related to horses, and the growths on the faces of rhinos are not true horns. So even in the original Greek, neither name is related to actual biology in any way. Such is language. --Jayron32 20:45, 27 January 2016 (UTC)[reply]
  • The other issue is that the vast majority of species don't have common English names at all, because English speakers don't commonly encounter them. Consider the 400,000 different species of Beetle. Of course, we have some names for beetles English speaking people run into every day, like ladybugs/ladybirds or junebugs, or japanese beetles (even these names have multiple species they cover though, and often mean different unrelated species in different geographies). We have 400,000 different latin binomial names for these species, because each needs a unique identifier, but seriously, we don't also need 400,000 unique different English names for them, especially where they aren't beetles anyone runs into in their everyday lives. --Jayron32 20:37, 27 January 2016 (UTC)[reply]
In many cases, the differences between the species may not be significant enough that a non-zoologist recognizes them as different species. The common name may refer to a genus, a family, or an order. Most beetles are just called beetles, unless someone has a reason to identify them more specifically, such as "Japanese beetle" as a garden pest. Even with mammals, and even with large mammals, people don't always see the need for distinctive common names. "Zebra" and "elephant" are not species but groups of species. There usually really isn't a need for a common name for every species. Robert McClenon (talk) 21:50, 27 January 2016 (UTC)[reply]

Disadvantages of iontophoresis for administering drugs ?

Iontophoresis#Therapeutic_uses doesn't list the disadvantages, but they must be substantial, or I would have expected this method to have replaced needle injections entirely. So, what are the disadvantages ? I'd like to add them to the article. StuRat (talk) 16:43, 27 January 2016 (UTC)[reply]

This [36] is a very specific study about a specific thing, but it says in that one case (of problem, treatments, drugs, etc.) "In contrast, electromotive administration of lidocaine resulted in transient pain relief only" compared to other treatments, which were concluded to be better. Here is a nice literature review [37] that has lots of other good refs. I'm no doctor, but I don't get the idea that it was ever intended to replace needles entirely. For one, it seems much slower. Another is that the permeability of skin is different with regard to different-sized compounds, so some things may be too big to pass through easily. Another potential factor is the stability and reactiveness of the compounds to the electrical field. It's also clearly more expensive and rather new, compared to injections via syringe and hypodermic needle, which are cheap and have been thoroughly studied for efficacy. If you search Google Scholar, you'll see lots of stuff about bladders and chemotherapy, and nothing about using it as a method to deliver morphine or flu vaccine. I'll leave you to speculate why that might be... Also I think you are vastly underestimating the time scale at which the medical field changes. The key ref from the article [38] is preaching that we should do more research on this, and cites small trials. And it is only from 2012! SemanticMantis (talk) 17:17, 27 January 2016 (UTC)[reply]
I just saw it in a 1982 episode of Quincy, M.E., so it's been around for at least 34 years. If it really could replace all injections, then I would think it would have, by now. StuRat (talk) 17:28, 27 January 2016 (UTC)[reply]
Well, sure, the idea has been around for a while. I'd also suggest a TV show isn't a great record of medical fact. My second ref says "The idea of using electric current to allow transcutaneous drug penetration can probably be attributed to the work done by Veratti in 1745." I agree it can't replace all injections. I agree there must be things it won't work for, and cases where syringes are just better. I'm trying to help you find out what those cases and things are. The references I gave above, and especially the refs within the second, discuss some difficulties and problems, but you'll have to read them to see what they're really talking about and to understand the details. To clarify what I said above, EDMA only works well with an ionized drug. That alone would probably be useful to clarify in the article. As for timing, when I see research articles on EDMA written in the past few years talking about "potential", and "new frontiers," I conclude it is not yet widely used for many things, but it may become more widespread in the future. Maybe someone else wants to find additional references or summarize them for you in more detail, but that's all I've got for now. SemanticMantis (talk) 17:58, 27 January 2016 (UTC)[reply]
I think it's unlikely a TV show would completely make up a medical procedure that didn't exist. StuRat (talk) 21:08, 27 January 2016 (UTC)[reply]
As for flu vaccine, picture running an electrophoresis gel with a mixture of large protein complexes and a small molecule like lidocaine. I'm thinking you wouldn't get one out of the well before the other runs off the bottom. Flu antigen is just not going to go far under the influence of a few plus or minus charges; it's like putting a square sail on a supertanker. Wnt (talk) 18:31, 27 January 2016 (UTC)[reply]
Isn't there a nasal spray flu vaccine ? That implies that it can pass through the skin on the inside of the nose. Is the diff between that skin and regular skin so much that electricity can't overcome it ? StuRat (talk) 21:10, 27 January 2016 (UTC)[reply]
Live attenuated influenza vaccine goes through cells by the usual receptor-mediated process. Even the most delicate mucosa, like the rectum, shouldn't let viruses or other large proteins slip past - HIV actually finds its CD4 receptors on the epithelial cells, as far as I recall. Wnt (talk) 23:15, 27 January 2016 (UTC)[reply]
An obvious limitation is spelled out in the article but is easily missed: the substance needs to be charged. But for chemicals to penetrate where they need to be in cells, often you want them to be neutral. To give kind of a bad example, crack cocaine is a neutral alkaloid, while the cocaine powder is a salt, and clearly the user notices the difference. There are many substances of course which can be charged if the pH is weird enough... but I think that means exposing not only the outside of your skin but the interstices of the cells to the weird pH; otherwise the stuff could end up stuck somewhere in the outer layers of skin with no prospect of moving further. That said, testing my idea, I found [39] which says that lidocaine and fentanyl have been delivered by this route. Fentanyl has a strongest basic pKa of 8.77 [40] so apparently this is not insurmountable. That reference also says it has been used on pilocarpine and ... tap water??? Reffed to this, which says the mechanism is not completely understood (!) but I don't expect to have access. Well, this is biology, a field that is under no obligation to make sense, since the cells can react however they want to an applied current. I should look further... Wnt (talk) 18:25, 27 January 2016 (UTC)[reply]
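The charge requirement can be checked against the pKa figure above using the Henderson-Hasselbalch relation: a weak base with pKa 8.77 is mostly protonated, hence charged, at physiological pH, consistent with fentanyl being deliverable this way. A minimal sketch:
<syntaxhighlight lang="python">
def base_ionized_fraction(pka, ph):
    """Fraction of a weak base that is protonated (positively charged)
    at a given pH, from the Henderson-Hasselbalch relation."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

# Fentanyl, basic pKa ~8.77 as cited above, at physiological pH 7.4
print(base_ionized_fraction(8.77, 7.4))  # ~0.96, i.e. mostly charged
</syntaxhighlight>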
Yes, that tap water mention in our article shocked me, too. Is it really safe to inject that, or get it under your skin by any other mechanism ? (I realize some is absorbed through the skin when you take a bath, but even that can cause cell damage given enough time.) StuRat (talk) 21:00, 27 January 2016 (UTC)[reply]
I would call attention again to the question of cost (which SemanticMantis did bring up). I'm pretty sure an iontophoresis machine costs more than a needle and syringe. And even though the main machine is probably reusable, I imagine the part applied to the skin needs to be single-use for hygiene reasons. If the issue is simply the patient disliking injections, there are probably cheaper measures, like applying topical anesthetic before the injection. There's also been increasing attention given to intradermal injections, which require a much smaller needle and thus reduce discomfort. --71.119.131.184 (talk) 05:01, 28 January 2016 (UTC)[reply]

Why do humans around the world cover the genitals?

Depending on the culture, humans may or may not cover the breasts or the nipples. However, across most cultures, it seems that humans cover the genitals. Is this a universal human trait? Are there human societies that don't cover the genitals? I remember watching a film adaptation of Romeo and Juliet, and the setting looked as if it took place during the Italian renaissance. The men in the motion picture dressed themselves in long pants that really highlighted their genitals. But they still wore clothing that covered them. 140.254.229.129 (talk) 18:29, 27 January 2016 (UTC)[reply]

We have many good articles that relate to this issue. See modesty, nudity, nudism, taboo, as well ass public morality and mores for starters. No, the trait of hiding one's genitals is not completely universal among humans. If you look through the articles above, you'll see there are exceptions in various places/times/cultures. Nature vs. nurture and enculturation may also be worth looking in to. SemanticMantis (talk) 18:58, 27 January 2016 (UTC)[reply]
"ass public morality" isn't covered in the linked article. Unless you count the links to regulation of sexual matters, prostitution and homosexuality and other articles which may cover ass public morality. Nil Einne (talk) 19:01, 27 January 2016 (UTC)[reply]
Codpiece. Sagittarian Milky Way (talk) 20:10, 27 January 2016 (UTC)[reply]
Merkin too if we're listing such things. SemanticMantis (talk) 20:56, 27 January 2016 (UTC)[reply]
If merkins are outerwear in a Romeo and Juliet film it isn't historically accurate. Sagittarian Milky Way (talk) 22:44, 27 January 2016 (UTC)[reply]
Aside from moral/sexual issues, there are also practical reasons:
1) Hygiene. Do you really want to sit in a chair after a woman menstruated on it or a man's penile discharge dripped on it ? Of course, exposed anal areas are even more of a hygiene problem, but it's hard to cover one without the other (especially in the case of women).
2) Safety. An exposed penis is a good target for dogs or angry people, as are testicles.
3) Cold. Unless you happen to live in a tropical area, it's likely too cold for exposed genitals a good portion of the year. StuRat (talk) 21:06, 27 January 2016 (UTC)[reply]
On StuRat's #3, note that the Yaghan people of Tierra del Fuego were one of the better-known <insert politically-correct word for "tribes" here> who didn't wear clothes, despite the maximum temperature in the summer in that part of the world being only about 10 C. Tevildo (talk) 22:17, 27 January 2016 (UTC)[reply]
Why did they do that? I've heard that's why it's called Tierra del Fuego (they just stood around fires their whole lives). Sagittarian Milky Way (talk) 22:48, 27 January 2016 (UTC)[reply]
Nay, the article says that they didn't spend *all* their time around the fires ... to the contrary, the women went diving in very cold ocean waters for shellfish. I have no idea, but I wonder if their lifestyle helped with cold adaptation, so these dives wouldn't be fatal?? Wnt (talk) 01:25, 28 January 2016 (UTC)[reply]
The assumption in the question is false. In Ecuador, in some tribes the men tie the penis to a string around the waist. Supposedly this is to keep fish from swimming up it when they bathe in the river. National Geographic in past decades always had photos of naked natives. Edison (talk) 04:46, 28 January 2016 (UTC)[reply]

Scientific description of Anelasmocephalus hadzii (Martens, 1978)

Hello! Anelasmocephalus hadzii is a species of Harvestman, described by someone called Martens (don't ask) in 1978, but in what paper did Martens describe this Harvestman? I cannot find the answer on any obvious sources, but maybe you will have more luck? Megaraptor12345 (talk) 21:50, 27 January 2016 (UTC)[reply]

Google scholar finds 7 relevant records from Martens in 1978 [41], but I think there are only two publications, and the rest are spurious bibliographical records of the one rather famous work Spinnentiere, Arachnida: Weberknechte, Opiliones. People have been citing it as recently as the last few months (presumably some as a species authority), but I don't read any of the languages of the most recent citing works listed here [42].
This [43] Opiliones wiki says the book is great and describes many European species, and has a photo of the title page, but says the book is hard to find (surprise). Anyway, it seems very very likely that the species is described in the book published by Fischer Verlag, Jena, 1978. Either that, or Martens published a paper describing a harvestman species in 1978 without using the word "Opiliones" (unlikely) or Google doesn't know about it (possible, but still unlikely IMO). This is all just subjective evidence of course, if you need to be sure, I think you'll need to get a hard copy and someone who reads German. I'd imagine most research libraries could get you a copy through interlibrary loan. SemanticMantis (talk) 22:35, 27 January 2016 (UTC)[reply]

January 28

Acquired resistance to diseases such as Zika virus

When a person becomes infected with Zika virus, they get mildly sick, then they recover. Presumably this is because some response to the disease occurred in the body which took away the virus's ability to make the person sick. What is the nature of this immune response? How long does it last? In other words, if one got dengue fever, yellow fever, West Nile, or Zika (all somewhat related viruses per the article), could they catch the same strain of the same disease the next week? Or does the previous infection and recovery provide some immunity for some period of time? If the latter is so, then why can't a vaccine or immunization be devised? Has there been any discussion of women letting themselves get Zika when non-pregnant so that the next year they could have a baby who was not microcephalic despite exposure to mosquitoes carrying the virus? The worry seems to be so great that some Central American governments are advising women not to have babies for some unspecified period of time, during which the governments pursue the dubious goal of eliminating mosquitoes of the sort which are vectors. In the US Midwest, governments have ineffectually spent a lot of money for years trying to get rid of mosquitoes which transmit the related West Nile. Edison (talk) 04:32, 28 January 2016 (UTC)[reply]

Acquired immunity takes a while after infection to develop. As for your other questions, the general answer is "it depends". Some pathogens tend to not change very much. Smallpox is such a pathogen, which is why we were able to eradicate it: one vaccination and you're immune to all forms of it. Other pathogens vary widely. Influenza is a virus which changes epitopes very frequently, which is why there is no universal vaccine and they make a new vaccine every year. This is simply evolution in action: pathogens are constantly adapting to their hosts so they can reproduce and spread more effectively. Note also that creating vaccines is as much of an art as a science. There is a lot of trial-and-error that goes into vaccine development. And some pathogens are just not good targets for vaccination. Malaria is one example; the malarial parasites "hide" inside liver and blood cells most of the time, which shields them from the immune system. --71.119.131.184 (talk) 04:51, 28 January 2016 (UTC)[reply]