Wikipedia:Reference desk/Science
Revision as of 01:19, 15 February 2016
Welcome to the Science section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
February 8
marshy gas from mines
As during mining, marshy gas is evolved. Why does this happen? Please give the scientific reason. — Preceding unsigned comment added by Shahjad ansari (talk • contribs) 02:23, 8 February 2016 (UTC)
- See Methane#Occurrence. AllBestFaith (talk) 10:55, 8 February 2016 (UTC)
- See also firedamp. The methane is produced as coal is heated (due to progressive burial) and some of it is retained in the rock when the coal becomes uplifted sufficiently to mine, where it can be a problem. Mikenorton (talk) 21:42, 8 February 2016 (UTC)
Formula for lens
Give the formula (equation) for a lens in which one longitudinal part is at refractive index n1, the second part is at refractive index n3, and the lens is of refractive index n2. — Preceding unsigned comment added by Shahjad ansari (talk • contribs) 02:32, 8 February 2016 (UTC)
- Sorry, we don't do your homework for you. Check the articles Refraction and Lens (optics) for the info you need. 2601:646:8E01:9089:14B5:216D:30B1:F92 (talk) 10:35, 8 February 2016 (UTC)
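For general reference (and without working the specific homework), the relation both of those articles build toward is the paraxial refraction equation applied at each surface of the lens. A sketch, assuming a thin lens of index n2 with surface radii R1 and R2, medium n1 on the object side and n3 on the image side:

    \[ \frac{n_1}{s_o} + \frac{n_3}{s_i} = \frac{n_2 - n_1}{R_1} + \frac{n_3 - n_2}{R_2} \]

Setting n1 = n3 = 1 (air on both sides) recovers the familiar lensmaker's equation.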
Possible to change taste buds in adulthood?
I'm 20 and hate the taste of vegetables unless they've been thoroughly cooked and/or mixed with other flavours. Could I change that and if so is there a known method? 2.103.13.244 (talk) 02:54, 8 February 2016 (UTC)
- Apparently it's in your genes. Googling "why some people vegetables" throws up some interesting links, including this one which suggests you need "bitter blockers".--Shantavira|feed me 11:08, 8 February 2016 (UTC)
- Technically that's a medical diagnosis, and we aren't supposed to do that. It's certainly possible that there would be some other mechanism in this case besides genetics, which is almost never 100%. Wnt (talk) 12:41, 8 February 2016 (UTC)
- Technically, that isn't a medical diagnosis, it's a biology reference. See User:Kainaw/Kainaw's criterion. Unless we're telling someone that a) they have a disease or b) what the disease is likely to do to them personally or c) how to treat their diseases, there is no problem with providing answers about human biology. --Jayron32 15:09, 8 February 2016 (UTC)
"Apparently it's in your genes" diagnosis "this one which suggests you need "bitter blockers" treatment. μηδείς (talk) 18:55, 8 February 2016 (UTC)
- I think you're a bit too keen to be jumping on the 'medical advice' bandwagon. This isn't a question about a medical complaint, pointing out that it's genetic is not a diagnosis and offering links for the OP to follow up is not prescribing treatment Mike Dhu (talk) 10:12, 9 February 2016 (UTC)
- Have a look at our long, detailed, and well-referenced article taste. It's complicated, and involves taste buds, but also psychology, nutritional needs, evolutionary past, culture, childhood development, exposure, etc. etc. Most people I know enjoy some foods at age 40 that they did not at age 20. Here's a selection of articles that discuss aspects of how taste perception can change with age [1] [2] [3]. Here's a freely accessible article that discusses a bit about how children's diet preferences are shaped by the adults around them, and you might find it interesting background reading [4]. We have some references for treatment of [5] and also Avoidant/restrictive_food_intake_disorder#For_adults, so I would look at the refs there if I wanted to learn more details about methods for expanding my taste preferences. SemanticMantis (talk) 15:40, 8 February 2016 (UTC)
- My experience is that a lot depends on how the food is cooked. Generally (as our OP mentions), brief cooking retains flavor and long cooking destroys it. Generally, short cooking is what people want because they crave the maximum amount of flavor - but I suppose that if you don't like those flavors then the reverse might be the case. Unfortunately, cooking for too long destroys much of the nutritional benefits of eating vegetables - and also destroys any crunchy, textured sensations and reduces them to an unpleasant mush. Honestly, I'd recommend re-visiting the taste of lightly cooked (or even raw) veggies...and if that's still unpleasant, dump them into some kind of sauce that you like. A chili or curry-based sauce will annihilate the taste of almost anything! Also, it's a horrible generalization to say that you don't like "vegetables" - there are hundreds of different kinds out there - and they don't all taste the same. Gone are the days when you had a choice between carrots/broccoli/cabbage/peas/french-beans/corn. Now you can get 'baby' versions of lots of things - there are 50 kinds of beans out there - there are leafy greens of 20 different kinds to choose from - there are things like asparagus (which used to be ruinously expensive - and now isn't), avocado and artichokes to play around with. It would be really surprising if you hated all of them, and even more surprising if you hated all of them no matter how they were prepared. Modern cuisine encourages us to mix weird, contrasting things together - so go ahead and mix jalapeno peppers, a little melted chocolate and peas (yes, really!) - or cook your cabbage in orange juice instead of water (one of my personal favorites!) - or mix nuts and fruit into a green salad. There is no "wrong" answer here.
- I grew up in an environment where veggies were low in variety, and invariably over-cooked. When I married my first wife (who is an excellent French cook) - my eyes were opened to the incredible array of better options out there. SteveBaker (talk) 17:24, 8 February 2016 (UTC)
- My experience changing what I drink may be helpful. In my 20's I drank Mountain Dew (high sugar soft drink). Then I switched to herbal tea, but needed lots of sugar in it to make it palatable. I then gradually reduced the amount of sugar, and now I don't need any. So, I suggest you initially mix just a bit of veggies with something you like, then gradually change the ratio until it's mostly veggies. StuRat (talk) 17:30, 8 February 2016 (UTC)
- Incidentally, I notice that our OP recently asked a question about eating fruit that suggests that (s)he doesn't eat that either. That's a more worrying thing. SteveBaker (talk) 17:41, 8 February 2016 (UTC)
- I think Mouthfeel is something you may want to look at, along with food neophobia and there's also ARFID, an escalated version of picky eating. It's interesting that SteveBaker mentions the texture of food. I wouldn't touch vegetables until my early 30s, even though I had a girlfriend who worked as a chef at The Savoy in London (I'm sure your wife is much better Steve!). I disliked the "flavor" of foods from my childhood until my early 20s and retrospectively I think it was more the texture I didn't like. Mike Dhu (talk) 17:09, 9 February 2016 (UTC)
- The thing with texture is that you can play around with it to an amazing degree. Consider just the potato. You can have creamy mashed potato, mashed potato with deliberate chunks of potato and/or skin in it, you can have french fries, boiled potatoes (with and without skin) and also roasted and baked potato. You can do hash-browns or fry crispy potato skins - or you can make potato chips. That's a MASSIVE variation in texture and crunch with just one vegetable being involved. With creativity, you can do similar transformations with other veggies too. If you don't like (say) peas - rather than just having warm round things - you can cook them, mash them, form them into patties, then fry them ("Peaburgers"!) - or you can blend them into a smoothie or a soup - there are lots of options if you're prepared to be creative and are open to trying new techniques. SteveBaker (talk) 17:27, 9 February 2016 (UTC)
- I totally agree with your points re the texture of food, but my point to the OP was that the texture and the flavor of food may be interlinked. I like the taste of creamy mashed potato (not a vegetable of course), but lumpy mashed potato is something I can't eat, I find the lumps in it unpalatable, not because of the taste per se, but because I don't like the texture of it. Mike Dhu (talk) 19:19, 9 February 2016 (UTC)
- Yeah - you probably don't want to go there. What is a "vegetable" and what isn't is a topic of frequent and prolonged debate around here. Bottom line is that there is a strict scientific definition, a strict culinary definition and a whole messy heap of what-people-think-a-vegetable-is. From the lede of Vegetable:
- "In everyday usage, a vegetable is any part of a plant that is consumed by humans as food as part of a savory meal. The term "vegetable" is somewhat arbitrary, and largely defined through culinary and cultural tradition. It normally excludes other food derived from plants such as fruits, nuts and cereal grains, but includes seeds such as pulses. The original meaning of the word vegetable, still used in biology, was to describe all types of plant, as in the terms "vegetable kingdom" and "vegetable matter"."
- So...um...I claim victory. A potato is a vegetable. <ducks and runs> SteveBaker (talk) 20:57, 9 February 2016 (UTC)
- I can see how that could lead to a very lengthy discussion, and in my mind I always thought of potatoes as a vegetable, in the same way that I think of poultry and fish as meat (although I've just looked at the meat article and see the same situation applies). Anyway, good job you ducked (bad pun, I know!) Mike Dhu (talk) 11:08, 10 February 2016 (UTC)
Falling from a building
If someone fell from the fifth floor of a building, would they die or just be badly hurt? 2607:FB90:1225:2047:A4E6:5421:24F2:7B82 (talk) 03:49, 8 February 2016 (UTC)
- It depends how they land and what they land on. ←Baseball Bugs What's up, Doc? carrots→ 03:59, 8 February 2016 (UTC)
- If they land on concrete? 2607:FB90:1225:2047:A4E6:5421:24F2:7B82 (talk) 04:12, 8 February 2016 (UTC)
- Then it depends on how they land. But their odds are not good. Here is someone's idea for a strategy. ←Baseball Bugs What's up, Doc? carrots→ 04:16, 8 February 2016 (UTC)
- It would be far better to land on a Life net. That's a little article I wrote a few years ago. Cullen328 Let's discuss it 04:20, 8 February 2016 (UTC)
- Obviously. But the OP specified concrete. ←Baseball Bugs What's up, Doc? carrots→ 05:02, 8 February 2016 (UTC)
- On page 17 of this OSHA document [6], figure 6 shows the distribution of workplace fatalities as a function of number of feet fallen. From that, you can see that a small number of people died after falls of less than six feet - and most people in the workplace who die after falling fell less than 40 feet...which is less than 5 floors. So for sure, lots of people die every year from falls of considerably less height than the 5th floor.
- A few other sources I checked suggest that the risk of death starts to go up sharply at falls of around 8 to 10 meters - with about a 50/50 chance of dying if you fall from 15 meters and a near certainty of dying at around 25 meters. A typical building floor height is about 3.5 meters - so 5 floors would be 17.5 meters - and that's about a 75% chance of death. But there really is no 'safe' fall height. People trip and fall and whack their heads against something as they reach ground level and die as a result - so even a fall from zero height can be fatal.
- CONCLUSION: If you fall from the 5th floor - you have roughly a 3 in 4 chance of dying - there is no 'safe' distance.
- SteveBaker (talk) 04:59, 8 February 2016 (UTC)
- Would it be a quick death or a long and agonizing one? 2607:FB90:1225:2047:A4E6:5421:24F2:7B82 (talk) 15:13, 8 February 2016 (UTC)
- I don't see any data on that. One would presume that a head-first impact would be quick - and feet-first much less so - but it's very hard to say, and as skydivers soon discover, bodies rotate during free-fall in ways that can be hard to control. I wouldn't want to make any bets on that one. SteveBaker (talk) 17:07, 8 February 2016 (UTC)
- Quick, call the Mythbusters before they're cancelled! FrameDrag (talk) 20:48, 8 February 2016 (UTC)
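For scale, here is a minimal sketch of the kinematics behind SteveBaker's figures above; it assumes 3.5 m per storey as he does and ignores air resistance, which changes little over this distance:

    import math

    g = 9.81                   # gravitational acceleration, m/s^2
    floor_height = 3.5         # assumed height per storey, m
    h = 5 * floor_height       # fall from the 5th floor: 17.5 m

    v = math.sqrt(2 * g * h)   # impact speed, no drag
    t = math.sqrt(2 * h / g)   # time spent falling

    print(f"{h:.1f} m fall: hits at {v:.1f} m/s ({v * 3.6:.0f} km/h) after {t:.1f} s")

That works out to about 18.5 m/s (67 km/h) after roughly 1.9 seconds - comparable to a serious car crash, which squares with the ~75% fatality estimate above.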
Is it best for a man/woman to see a male/female psychiatrist respectively?
Just curious if it's generally best for a man to see a male or female psychiatrist and for a woman to see a male or female psychiatrist, or if there's no recommendation in the psychology community. 2.103.13.244 (talk) 05:22, 8 February 2016 (UTC)
- Most psychiatrists base their treatment on pills. I hardly see how the gender of the person who prescribes your pills could matter. Psychiatrists are also not necessarily part of the psychology community; they could be psychotherapists too, but primarily they are physicians. I suppose you want to know whether the gender of psychologists, psychotherapists, counselors and the like matters.
- In practice, psychiatrists are mostly male, and the psychology community is mostly female. That reduces your chances of picking a specific gender. Anyway, the role of gender in the quality of psychotherapy seems to be negligible, in the same way that you don't need a therapist of the same age, religion or race as you. I can see that it could even be an advantage to have a certain distance from your therapist, since you both are not supposed to enter a private relationship. --Llaanngg (talk) 11:35, 8 February 2016 (UTC)
- [citation needed] for a lot of this, perhaps most importantly on the first sentences of each paragraph. SemanticMantis (talk) 15:30, 8 February 2016 (UTC)
- SemanticMantis, here they are:
- [7] "Like many of the nation’s 48,000 psychiatrists, Dr. Levin, in large part because of changes in how much insurance will pay, no longer provides talk therapy, the form of psychiatry popularized by Sigmund Freud that dominated the profession for decades. Instead, he prescribes medication, usually after a brief consultation with each patient"
- [8] "Psychiatry, the one male-dominated area of the mental health profession, has increasingly turned to drug treatments."
- [9]: The changing gender composition of psychology.
- And [10] Need Therapy? A Good Man Is Hard to Find. "He decided to seek out a male therapist instead, and found that there were few of them."
- I do admit though that the effect of gender matching with your therapist (or not) is debatable. The debate is still open. I suppose it comes down to the patient's world-view. If it's important for the patient, then probably it can influence outcome. The same probably applies to ethnicity. --Llaanngg (talk) 09:56, 9 February 2016 (UTC)
- [11]"As Carey's timely article notes, there is nothing in the rather limited mainstream scientific literature on gender and treatment outcome suggesting unequivocally that either males or females make better, more effective psychotherapists."
- [12] "a female therapist genuinely is able to help a male client as well as a female client, and a male therapist is truly able to help a male client as well as a female client, the fact is that if a client comes in with a pre-conceived notion about the therapist based on gender, it has the potential to affect treatment if not addressed."
- --Llaanngg (talk) 09:56, 9 February 2016 (UTC)
- User:Llaanngg, thank you. Your claims sounded reasonable, but this is, after all, a reference desk :) SemanticMantis (talk) 14:51, 9 February 2016 (UTC)
- For some people, maybe. A psychiatrist is indeed different than a psychologist, but gender match in medical and therapeutic professions can indeed be a factor in outcomes. Here is a study that specifically looks at effects of gender matching in adolescents [13]. That one is freely accessible, these two studies [14] [15] are not, but they also discuss gender matching in therapeutic contexts. Note that all three also discuss matching of ethnicities as a potential important factor too. SemanticMantis (talk) 15:30, 8 February 2016 (UTC)
Having been treated by half a dozen psychiatrists and therapists, I will say that the race/culture, age and gender of your treatment providers definitely matters in some cases, even for "pill prescribers" because your story may sound different to different doctors. For example, I've been routinely noted to have "poor eye contact" and be diagnosed with borderline personality disorder and bipolar disorder by old white men, but younger psychiatrists are more up to date on neuroscience research and my female psychiatrists (including a South Asian) tend to agree with post-traumatic stress disorder or complex PTSD. Also Asian treatment providers definitely get cross-cultural struggles and Asian cultural values like conflict aversion, whereas white providers often don't, frequently chalking it up to some personality defect or saying that you're "non-assertive". Yanping Nora Soong (talk) 16:06, 8 February 2016 (UTC)
- I'd say that if it's important for you as a patient, then, it is important for the outcome. However, I don't believe it is a general factor per se. Llaanngg (talk) 09:56, 9 February 2016 (UTC)
cramps or a "charley horse" after orgasm
My girlfriend often has serious cramps (or a charley horse) after she has an orgasm. The cramp is usually in her lower left calf. This is not a medical question. I am just curious how an orgasm and a cramp in the lower leg can be connected (given the very different muscles involved). 147.194.17.249 (talk) 05:41, 8 February 2016 (UTC)
- For bemused readers.... Charley horse. Ghmyrtle (talk) 08:49, 8 February 2016 (UTC)
- Orgasm often involves muscular contractions not just in the groin area, but throughout the body -- so in some cases, different muscles can cramp after orgasm. (I know first-hand, I've pulled a leg muscle once or twice during sex.) FWIW 2601:646:8E01:9089:14B5:216D:30B1:F92 (talk) 08:42, 8 February 2016 (UTC)
- Differ love and porn! Porn can be violent. In some cultures sex is a secret and porn is the only “manual” and not a good advice at all. We have wikipedia and it sould give some more reliable information. The next step is You to care what You are doing. But some human are very fragile. When the charley horse is always on the same place You can find the reason. --Hans Haase (有问题吗) 11:37, 8 February 2016 (UTC)
- Does Hans Haase 有问题吗's post above make sense to someone? In this case and in previous cases too I am unable to even guess what he's trying to say. --Llaanngg (talk) 11:45, 8 February 2016 (UTC)
- Yes, I get the basic gist of it, and I usually can with Hans' posts. Then again, I have lots of experience reading/listening to ESL. Respectfully, this is not the best place for such comments and discussion. SemanticMantis (talk) 15:19, 8 February 2016 (UTC)
- Our articles on this are really, really bad. Charley horse confounds multiple conditions and multiple colloquial terms until there's no telling what is what. Cramp does virtually the same - it is hard for me to accept that the usual sort of "charley horse" has anything to do with failure of ATP to loosen muscles, since generally it is a sudden onset of a muscle contraction. We'll have to look this one up from scratch... after which, we might want to rewrite those articles quite nearly from scratch. Wnt (talk) 12:06, 8 February 2016 (UTC)
- I should share the first good reference I found at [16] (I just did a PubMed search for leg cramp and this was one of the first things). Apparently there is a treatment for leg cramps... it involves injecting 5 ml of 1% lidocaine into the "bifurcation of the branches that is located in the distal two-thirds of the interspace between the first and second metatarsals" - this is a nerve block of "the medial branch, which is the distal sensory nerve of the deep peroneal nerve". The site is on the inside of the base of the big toe. The effect was to reduce cramps by 75% over a two-week study period. As part of their discussion they say:
The mechanism(s) of leg cramps are yet to be clarified, but disturbances in the central and peripheral nervous system and skeletal muscle could be involved (Jansen et al. 1990; Jansen et al. 1999; Miller and Layzer 2005). Electrophysiologically, cramps are characterized by repetitive firing of motor unit action potentials at rates of up to 150 per sec. This is more than four times the usual rate in maximum voluntary contraction (Bellemare et al. 1983; Jansen et al. 1990). In a human study, Ross and Thomas indicated a positive-feedback loop between peripheral afferents and alpha motor neurons, and that this loop is mediated by changes in presynaptic input. This loop is considered a possible mechanism underlying the generation of muscle cramps (Ross and Thomas 1995). The frequency of nocturnal leg cramps has also been suggested to result from changes in hydrostatic pressure and ionic shift across the cell membrane in the calf muscles in the recumbent position, inducing hyperexcitability of the motor neurons. Consequently, the pain of the cramps may be caused by an accumulation of metabolites and focal ischemia (Miller and Layzer 2005). The difference in these conditions in each patient may explain the diverse symptomatology of the cramps.
So the thing I'm thinking of is possibly, not certainly, related to some kind of feedback, possibly via the spine only, between sensation of what the body part is doing and a motor response. It seems easy to picture how infrequent activities might somehow jiggle such a sensitive mechanism. Honestly, because this is a regulated phenomenon with different characteristics than usual contraction, I'm not even entirely sure it is pathological - for all I know, the body might be administering it as some sort of health intervention on itself. Note that I definitely cannot and will not diagnose the woman involved here - there are a thousand things she could be experiencing that aren't what I have in mind. Wnt (talk) 12:25, 8 February 2016 (UTC)
- Have the OP and his girlfriend tried different positions? Seriously: I myself often used to (and still occasionally do) get leg cramps when sitting on a hard chair for extended periods – this first arose during long services in a cramped (heh!) school chapel – but avoiding such a position makes them much rarer. It may be that different postures during the act might change the forces on the relevant muscles sufficiently to lessen the problem. {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 15:19, 8 February 2016 (UTC)
Jump cushion
Are jump cushions ever used in firefighting in lieu of life nets? If so, how effective are they? Do they even actually exist, given that they're not on Wikipedia? 2601:646:8E01:9089:14B5:216D:30B1:F92 (talk) 10:31, 8 February 2016 (UTC)
- See [17]. Quoted maximum jump height is 40m. AllBestFaith (talk) 10:49, 8 February 2016 (UTC)
How many defecators?
Is it possible to come up with a reasonable estimate of how many humans are defecating at any given moment? -- Jack of Oz [pleasantries] 11:56, 8 February 2016 (UTC)
- If I were to pull a number out of my ass...50 million. Make a ballpark assumption the average human spends 10 minutes a day pooping, seven billion humans, and there you go. Should be within an order of magnitude of reality. Someguy1221 (talk) 11:59, 8 February 2016 (UTC)
- Thanks for the key, Someguy (I've been away). (I quibble with the assumption of 10 minutes per day per person, but I can adjust the calculation.) -- Jack of Oz [pleasantries] 09:37, 14 February 2016 (UTC)
- Given that there are certain times when defecation is more likely (when you get up in the morning, and perhaps also before bed in the evening), the number doing it at any given time may depend on the population density of the time zones matching those times of day. First thing in the morning in China is likely to see a lot more poopers than the similar time in the mid-Pacific. — Preceding unsigned comment added by 81.131.178.47 (talk) 14:37, 8 February 2016 (UTC)
- Today's SMBC comic [18] is highly relevant to this question [19] . SemanticMantis (talk) 18:29, 8 February 2016 (UTC)
- Which of those two links should I follow? —Tamfang (talk) 08:10, 10 February 2016 (UTC)
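Someguy1221's ballpark above is easy to reproduce; a minimal sketch, where the 10-minutes-per-day figure is his assumption rather than a measured value:

    population = 7e9              # approximate world population, 2016
    minutes_per_day = 10          # assumed time spent defecating, per person

    fraction = minutes_per_day / (24 * 60)   # share of any given moment
    print(f"about {population * fraction / 1e6:.0f} million people right now")   # ~49 million

As 81.131.178.47 notes, time-zone clustering means the true figure would swing above and below this average over the course of a day.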
Perspective machines
What's a perspective machine, or in particular, a railroad perspective machine? The main source for Nester House (Troy, Indiana) says "The building's 1863 design is attributed to J. J. Bengle, the inventor of the railroad perspective machine." Google returns no relevant results for <perspective machine>, and the sole result for <"railroad perspective machine"> is this main source. Nyttend (talk) 15:46, 8 February 2016 (UTC)
- I haven't the foggiest but my guess would be that he invented a machine that helped with making accurate perspective drawings. Architectural drawings showing a building from an angle are normally axonometric projections where parallel lines stay parallel rather than using perspective. A nice perspective drawing helps with selling a design to a client. Dmcq (talk) 16:20, 8 February 2016 (UTC)
- Just had a look around, and a machine like what I was thinking of, the 'perspectograph plotter', was made in 1752 by Johann Heinrich Lambert, see [20], which is before that man's time. So it was either something else or a refinement on that. Dmcq (talk) 16:39, 8 February 2016 (UTC)
- There are several kinds of quasi-realistic perspective - "single point" and "two point" being the most commonly mentioned. I wonder whether the term "railroad perspective" might refer to single-point perspective - alluding to the way that two parallel railroad rails seem to meet at the horizon. This is just a guess though...take it with a pinch of salt! SteveBaker (talk) 17:04, 8 February 2016 (UTC)
- Yes, long parallel straight lines are relatively rare in nature, and in that time frame railroad rails would have been an ideal application for a perspective drawing. StuRat (talk) 17:22, 8 February 2016 (UTC)
- My thoughts exactly. Thinking about a railroad "perspective-machine" didn't get me very far - but thinking in terms of a "railroad-perspective" machine definitely makes me suspect that we're thinking in terms of a single-point projection. Our article on Perspective mentions the word "railroad" three times when discussing this - so I'm starting to believe that this must be what's meant here. SteveBaker (talk) 17:31, 8 February 2016 (UTC)
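For the curious, a sketch of why the rails converge under a single-point projection, assuming an idealized pinhole camera of focal length f aimed along the track (the z axis). A world point (x, y, z) projects to the image point

    \[ (u, v) = \left( \frac{f\,x}{z},\ \frac{f\,y}{z} \right) \]

so two rails at x = ±w/2, y = −h map to u = ±fw/(2z), v = −fh/z, and as z → ∞ both approach the single vanishing point (0, 0).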
- Typeset content describing the building in the cited PDF says "railroad perspective machine" and "Bengle", but the hand-written inscription on the drawing of the building says "railway perspective machine" and spells the name "Begle" (no "n" in it). Googling for "railway perspective" finds tons of hits for the same one-point perspective that SteveBaker suspected. I'm not finding anything in Google's patent database for Begle though ("perspective" is a poor search term, since damn near every object patent includes a perspective drawing of it). DMacks (talk) 20:29, 8 February 2016 (UTC)
- This newspaper article confirms that a "J. J. Bengle" lived in Denison, TX in 1907. I don't know how that ties in with any other known dates and places of residence of the architect. The newspaper article does not give any helpful details - "J. J. Bengle has returned from a trip to Galveston and other points." That's it in its entirety, I'm afraid. Tevildo (talk) 21:01, 8 February 2016 (UTC)
- I often wonder how people would feel, knowing that their only mark on modern history is the fact that they once returned from Galveston. :-( SteveBaker (talk) 15:17, 11 February 2016 (UTC)
- When my father was employed by the State Railways many years ago, as an Inspector of Permanent Way, he showed me a device he used which I recall was called a "perspective sight". It was essentially a modified pair of binoculars. It is critical that railway lines be accurately parallel and straight, but they get out of true over time for various reasons. Bad weather (washouts from exceptionally heavy rain) and extremely hot days can cause the lines to buckle. If you look with the naked eye, you cannot see buckling that will derail a speeding train. Binoculars foreshorten perspective, so if you stand between the two railway lines and look along the track with binoculars, you see the distance reduced, and because of the binoculars' magnification, any buckling becomes easily visible. The binoculars the Railway supplied (the "perspective sight") had an adjustable pair of lines that converge on a point (the vanishing point). You adjusted the lines so that they aligned with the railway lines - giving a minor advantage in seeing any buckling. There were horizontal calibration marks (which have non-linear spacing due to viewing height & perspective) so that the inspector could say to the maintenance crew things like "go forward 320 metres and straighten there." They had a special instrumented carriage for detecting rail misalignment, but the binoculars facilitated a quick response to any problem due to extreme weather, regardless of where the instrument carriage was. 1.122.229.42 (talk) 00:53, 9 February 2016 (UTC)
- As a matter of curiosity, what country's "State Railways" did he work for? --76.69.45.64 (talk) 05:13, 9 February 2016 (UTC)
- The IP geolocates to metro Perth, so I expect that this is a reference to the Western Australian Government Railways. Nyttend (talk) 13:54, 9 February 2016 (UTC)
- That might explain why there was little concern about curved tracks...L-O-N-G stretches of dead straight train tracks there. SteveBaker (talk) 20:52, 9 February 2016 (UTC)
- The South Australian Railways actually. And I'm not within 1000 km of Perth. The poster previously at 1.122.229.42. 58.167.227.199 (talk) 03:11, 11 February 2016 (UTC)
- Nothing quite as long and straight as the Trans-Australian Railway I'd guess though, the curvature of the earth probably matters more there! Dmcq (talk) 16:26, 11 February 2016 (UTC)
- Excellent info ! StuRat (talk) 00:58, 9 February 2016 (UTC)
- Wow! That's a typically ingenious invention for the era. Sadly, these days a couple of visible light laser beams would make a much simpler and more efficient solution. I wonder how they coped with warping around curves and across varying slope though. SteveBaker (talk) 03:38, 9 February 2016 (UTC)
- "Sadly"? What an odd perspective to find a simpler and more efficient solution to be sad. (No insult intended, just an observation.) Deli nk (talk) 14:20, 9 February 2016 (UTC)
- Sadly - because I love the ingenuity of the binocular approach...while recognizing that using a couple of lasers is probably a more efficient way to do it these days. SteveBaker (talk) 20:52, 9 February 2016 (UTC)
- Evenings and mornings was just what I was going to suggest, when you still have enough light to see the tracks, but not so much as to drown out the laser. That would make the inspector crepuscular. StuRat (talk) 03:31, 11 February 2016 (UTC)
Technology for the disabled
What is the current status for:
- Body part less people.
- Blind sighted people. exclude surgery.
Are there any satisfactory mechanisms out there to grant capability?
Apostle (talk) 18:31, 8 February 2016 (UTC)
- Fixed title to be proper English. StuRat (talk) 18:33, 8 February 2016 (UTC)
- 1) I assume you mean people missing body parts. See prosthetics.
- 2) I don't think most causes of blindness can be addressed without surgery, assuming implanting electrodes into the brain is considered to be surgery. I think there was some research on attaching a grid of electrodes (with just tape) on the back, and using those to convey visual images, so that might qualify. StuRat (talk) 18:35, 8 February 2016 (UTC)
- There is an enormous amount of technology for the blind - from talking clocks to software able to scan a printed document and turn it into artificial speech. — Preceding unsigned comment added by 81.131.178.47 (talk) 18:56, 8 February 2016 (UTC)
- Some blind people use a device that helps them to "see" using their tongues [21] [22]. SemanticMantis (talk) 21:16, 8 February 2016 (UTC)
- I'll go through the links... Thank you -- Apostle (talk) 22:36, 8 February 2016 (UTC)
About number 2): BBC was showing a program where this blind woman was viewing through her eyes (black & white) fuzzily. The mechanisms they implanted inside her eyes apparently need repairing every 6 months. There was also an electrical box; her brain was probably connected... - can't recall properly. The technology was very depressing, knowing that it's the 21st century (or something). -- Apostle (talk) 22:36, 8 February 2016 (UTC)
- See visual prosthesis for this particular type of device. Tevildo (talk) 23:10, 8 February 2016 (UTC)
- The technology to interface nerve fibers to electronics is extraordinarily difficult. It's not like there is a red wire labelled "Video In" in the interface between eyes and brain - instead there is a large bundle of unlabelled nerves - all different from one person to another. It's not like each nerve is a "pixel" or anything useful like that. Maybe one of them says "There is a high contrast, vertical line, about a quarter the height of the retina that's moving left to right" - figuring out what to say to each nerve from a camera is beyond what we can currently do...we can try to rely on brain plasticity to cope with whatever incorrect data we're sending - but that's how you end up with fuzzy, low-resolution monochrome - and experimental devices that don't survive long-term implantation. Also there are at least a dozen reasons why someone might be blind - and each one needs a separate, and equally difficult solution. This is an exceedingly difficult problem and it may be decades before we have something that truly works and is actually useful to people. SteveBaker (talk) 03:34, 9 February 2016 (UTC)
- The neural plasticity is exactly what they rely on. The brain has an amazing ability to learn, and this includes learning which nerve corresponds to which pixel. And, for people who have been blind all their life, the mapping would never have been defined in the first place, since that happens as a baby, based on visual feedback. As for how to teach the brain quickly, I would suggest hooking up only the corner pixels in the image frame first, then, once they have been learnt, add more pixels, maybe one at a time, until the full grid has been learned. StuRat (talk) 18:44, 9 February 2016 (UTC)
- My mistake. I recall now that it was a gray-black background instead of black, with white/light colour objects that she had to differentiate; that was the only colour she could see. The image via her eyes looked as if you were turning a TV on and off about every 3-5 milliseconds or something. She did/might have/had a box (unless I'm confusing it with another program).
- Thank you all once again. I'll definitely look into it... Regards. -- Apostle (talk) 22:09, 9 February 2016 (UTC)
StuRat, SemanticMantis, Tevildo, SteveBaker: Just for clarification - Will it ever be possible to create glasses (or any other thing) for the blind people so that they can see, without an operation; given all the above facts still? -- Apostle (talk) 21:09, 11 February 2016 (UTC)
- This isn't a field in which I can confidently speculate, but it might be possible to stimulate the visual cortex with some form of RF or magnetic system fitted to the glasses - see Deep transcranial magnetic stimulation. Whether that will ever be safer than a surgical implant is another matter. Tevildo (talk) 21:40, 11 February 2016 (UTC)
- Thanks Tev, as long as there is a way... I'll definitely look at it thoroughly now, in the near future. Regards -- Apostle (talk) 06:52, 12 February 2016 (UTC)
- In a sense, this already exists - the BrainPort device uses a pair of glasses with a camera - a small electronics box to process the data - and a small paddle that you place on your tongue. The paddle stimulates the tongue (which is really sensitive and has dense nerve-endings). The difficulty is that it takes a lot of time and practice to recognise pictures this way - and it relies on brain plasticity for the user's brain to "see" the image presented by the tongue. But people who stick with it are able to recognize objects and navigate while walking - so they can, in a sense, "see". Similar tricks have been done with arrays of electrodes attached to the skin - or even a grid of pins that mechanically push against the skin.
- However, what you're presumably asking for is full color images, fast updates for motion - a wide field of view and so forth - and that seems much harder. The idea of having a camera that stimulates the optic nerve remotely isn't going to be easy. But even if it were possible, we're expecting to re-use parts of the normal visual system and just to replace the retina. Whether that can be done or not depends critically on the REASON the person is blind. Sure, if their retina has stopped functioning, then 'plugging' the video into the cells behind the eye might work - but if the reason for blindness is brain damage or some problem in the optic nerve - then you'd need to find a different route. People who are recently blinded may still have a fully functional visual cortex - but people who were blind at birth may not develop all of that brain structure, so even if the original cause of blindness can be corrected, it may still be hard for them to recover fully. So expecting to find a single device that works for everyone is almost certainly impossible and we'll need a wide range of solutions in order to fix it.
- There is also a matter of cost and practicality. A surgical approach may well be cheaper and more effective than non-surgical methods. SteveBaker (talk) 16:30, 14 February 2016 (UTC)
- Yes, thanks for the clarification. Plan cancelled due to some other reason... -- Apostle (talk) 20:35, 14 February 2016 (UTC)
Accelerating a particle with light
If I accelerate a tiny speck of dust using light, what max speed could it reach? Let's suppose that hypothetically we can know exactly where this speck of dust is, and that we know how to point a laser at it. --Scicurious (talk) 19:22, 8 February 2016 (UTC)
- Theoretically you could accelerate it to almost the speed of light. StuRat (talk) 19:24, 8 February 2016 (UTC)
- Assuming you find a void in space that (with much luck) presents no molecule of gas to hinder the speck's progress, there is still microwave background radiation defining an approximate cosmic rest frame, which would become blue-shifted ahead of the particle as it accelerates, even as the light source you use becomes red-shifted - also starlight of course, which is similarly in a fairly consistent rest frame all around. As a result, if you assume a constant light intensity in a perfectly focused beam, I think there would be a maximum level that you can use at the beginning to avoid vaporizing the particle, which eventually becomes weaker than the oncoming radiation. On the other hand, if you continue to turn up your light source (or increase its frequency) then I suppose the particle might accelerate without limit, coming arbitrarily close to light speed. Unless, of course, I forgot something else... Wnt (talk) 19:52, 8 February 2016 (UTC)
- Isn't this how solar sails work? Nyttend (talk) 21:10, 8 February 2016 (UTC)
- So, you can approach the speed of light as much as you want, but not reach it ever? --Scicurious (talk) 16:15, 10 February 2016 (UTC)
- Yes, for two reasons.
- 1) Just with conventional Newtonian physics, you could never accelerate one object with mass to match the speed of another by having them hit each other. Even if a star runs into a proton, the mass of the proton + star is now slightly more, meaning the combined speed is slightly less, for it to have the same momentum.
- 2) Relativity prevents objects with mass from being accelerated to the speed of light, although this is tricky as it depends on precisely how "mass" is defined. See rest mass. StuRat (talk) 21:17, 10 February 2016 (UTC)
- The faster something moves, the heavier it becomes (relativistic mass). The kinetic energy of its motion, as viewed from your rest frame, is a kind of energy, and has mass per E=mc2. The more kinetic energy you add to the particle, the more massive it becomes, and the more energy it takes to speed it up. In the extreme case, all of the (relativistic) mass of a photon is energy - you might add more energy to it, but the mass increases in direct proportion, so the speed never changes. I should note that relativistic mass has become unpopular in recent years, but I feel like that's a fad - since ultimately, many kinds of "rest" mass are predicated on the kinetic and potential energy of the substituent particles. Wnt (talk) 16:02, 11 February 2016 (UTC)
Is it possible that it is just a respect for observation? (I am not waiting for answer) ~~~~Like sushi
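In equations, the limit the answers above describe (a sketch; m is the speck's rest mass):

    \[ E = \gamma m c^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \frac{v}{c} = \sqrt{1 - \left(\frac{m c^2}{E}\right)^2} \]

Each joule of absorbed light raises E and pushes v closer to c, but v = c would require infinite energy, so the speck can only ever approach the speed of light.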
Immunity vs resistance
Is there a difference, and if so, what is it? Are they the same but used for different species, or is there a clear but subtle difference? In other words, does "She is immune to the flu" mean the same as "She is resistant to the flu"? What about "This strain is resistant to drug X" and "This strain is immune to drug X"? 140.254.77.216 (talk) 19:51, 8 February 2016 (UTC)
- "Immune" means 100%, unless some qualifier is added like "partially immune". "Resistance" is less than 100%. StuRat (talk) 19:54, 8 February 2016 (UTC)
- The problem here is that you are using a literary definition of immune, StuRat, and that while I agree with you in that way, SemanticMantis and the heretical Wnt much more closely approach the received biological notion. In the school where I got my undergrad biology major (focusing in botany), you had to have four years of chemistry and four years of bio-major "bio" before you could even apply to take Immunology 396. So I would take their comments as read. μηδείς (talk) 02:47, 9 February 2016 (UTC)
- You know, I can see how you'd think that. The problem is that your explanation is completely incorrect in terms of medical and physiological terminology. Immunity_(medical) discusses how the term is used. An easy example sentence: "All vaccines confer immunity, but not all vaccines are 100% effective, and so some people who have acquired immunity from a vaccine may still get infected." My dictionary says "Immune: resistant to a particular infection or toxin..." Wiktionary says "Protected by inoculation", Merriam-Webster says "having a high degree of resistance to a disease <immune to diphtheria>". The only time immune means 100% resistance is in fiction, games, or legal matters. SemanticMantis (talk) 21:28, 8 February 2016 (UTC)
- Active immunity represents a process of natural selection within immune cells of the body (cell mediated immunity or antibody mediated immunity) by which molecules become common that (in some context) interact with a pathogen and allow it to be destroyed. In drug resistance, bacteria produce molecules that neutralize a drug, frequently by enzymatic means, often using plasmids to allow trading of useful resistances within a broader genetic background. So the selection for immunity takes place within an organism, but the selection for resistance occurs between organisms - most bacteria die, a few live and become resistant. So to be "resistant" to something is more of an inborn trait, generally speaking, while "immunity" usually implies past exposure to the agent or a vaccine etc. Exception, sort of: multidrug resistance in cancer occurs within an organism. But if you look at it another way, every cancer cell is out for itself, and (apart from the one that mutates) is either born resistant or not. Another exception, sort of: innate immunity may not require a selective response; the thing is, we rarely hear that someone is innately immune to a pathogen because they never know they might have gotten sick. This reminds me, say, of toxoplasmosis which preferentially affects those of the B blood type. (There was actually a huge outbreak in postwar Japan, and Japanese became known for "blood type personality theory", to this day never having been aware of the role of the protozoan in affecting their minds...) Wnt (talk) 20:05, 8 February 2016 (UTC)
- Wnt I work at a research institution where several groups study Toxoplasma gondii and I don't think I've ever heard of a connection between ABO blood type and susceptibility to infection. For the sake of satisfying my curiosity, could you link me to where you read that, (or maybe I misunderstood what you said up above). Thanks, PiousCorn (talk) 06:03, 9 February 2016 (UTC)
- @PiousCorn: I don't remember which source I originally went by, but [23][24] mention it. On the other hand [25] reports a lack of association with B blood type ... but rather, with Rh negative status! Also [26] says that. I had found the B blood type association in an older source ( [27] ) in a question I asked back in 2010 about it. [28] I think even back then I had lost track of some earlier source specifically about the Japan postwar outbreak... Wnt (talk) 09:22, 9 February 2016 (UTC)
February 9
Synthetic turquoise
Is there such a thing as fully synthetic turquoise (as opposed to imitation turquoise)? If so, how is it synthesized? 2601:646:8E01:9089:14B5:216D:30B1:F92 (talk) 06:02, 9 February 2016 (UTC)
- The second sentence of the lede in our article Turquoise says "In recent times, turquoise, like most other opaque gems, has been devalued by the introduction of treatments, imitations, and synthetics onto the market. - so evidently, there are synthetic stones out there. Geology.com says "A small amount of synthetic turquoise was produced by the Gilson Company in the 1980s...It was a ceramic product with a composition similar to natural turquoise." - so I guess it's arguable that this was not truly a synthesis of a material identical to the real thing. It goes on to say: "Synthetic turquoise, and turquoise simulants have been produced in Russia and China since the 1970s." - but no clue as to the manufacturing methods. SteveBaker (talk) 13:40, 9 February 2016 (UTC)
- I found the Gilson name also - searching brings up a chemical analysis of a different synthetic [29] - seems like this one is not perfect somehow - not sure how to define a yes or no answer about it though. Wnt (talk) 15:59, 9 February 2016 (UTC)
- Whew! So from what I gather, so far nobody made the real thing in the lab? That's good news for me, thanks! 2601:646:8E01:9089:A082:3561:E888:76F (talk) 01:02, 10 February 2016 (UTC)
- Maybe. "The Real Thing" is a little tricky here. Just how close do you have to get before you say it's "real"? SteveBaker (talk) 15:33, 10 February 2016 (UTC)
- A real gem comes from a little yellow idol, or the Cold Lairs, or is waiting for you behind the ranges... DuncanHill (talk) 15:44, 10 February 2016 (UTC)
Weight of paper
What will be the weight in kilograms of 5 reams of 60gsm paper having dimensions 10'x11x1'? Is this paper of A4 size? 223.176.51.205 (talk) 12:09, 9 February 2016 (UTC)
- This looks like your homework question. Wikipedia doesn't do students' homework for them because that would negate the benefits of practicing at home. If there is some part of the question that you don't understand, or you have got stuck part way through, ask a relevant question about the part you don't understand and we will try to point you in the right direction. Dolphin (t) 12:21, 9 February 2016 (UTC)
- Also look up 'ream of paper' as it says how many sheets you have, the dimensions don't tell you that. Dmcq (talk) 12:50, 9 February 2016 (UTC)
No, this is not a homework problem. I am not a paper technologist. I know 1 ream has 500 papers, but I don't understand the basis weight concept. Please tell me what is the weight of 1 ream of paper, or 1 of the 500 papers, or how to calculate the weight, because I cannot make it out from websites. 223.176.51.205 (talk) 12:57, 9 February 2016 (UTC)
- A4 sized paper is .297 metres times .210, so a single sheet of paper has an area of approximately 0.062 square metres. Each square metre weighs 60g (as in 60 gsm: grammes per square metre). Thus 500 sheets weigh 500 x 60 x .210 x .297 = approx 1.87 kg.--Phil Holmes (talk) 13:02, 9 February 2016 (UTC)
- A4 is exactly a sixteenth of a square metre (0.0625) (see ISO 216 for details), so the weight is 500 divided by 16 times 60 g which is exactly 1.875 kg. In practice, Phil Holmes might be more correct because of the slight loss in cutting. Dbfirs 22:10, 9 February 2016 (UTC)
<s>Really? If you're a "paper technologist" then you sure as hell ought to know</s> OK, so you need to know that 'gsm' stands for 'grams per square meter'. You can easily calculate the total area of 500 sheets of paper of whatever size (length x width x number of sheets), convert to square meters. Then multiply by the gsm number to get the weight in grams. Then divide by 1,000 to get kilograms. SteveBaker (talk) 13:31, 9 February 2016 (UTC)
- Steve, the OP said they were not a paper technologist. I'm not a linguist, but I know what "not" means. DuncanHill (talk) 13:36, 9 February 2016 (UTC)
- Ooops! Sorry! My bad! SteveBaker (talk) 13:42, 9 February 2016 (UTC)
- Steve, the OP said they were not a paper technologist. I'm not a linguist, but I know what "not" means. DuncanHill (talk) 13:36, 9 February 2016 (UTC)
FWIW, a "ream" used to be 20 quires - or 480 sheets. Blame the British <g>. Collect (talk) 16:37, 9 February 2016 (UTC)
- NB, by definition a sheet of A4 paper has a surface area of 1/16 m2, or one 16th of a square metre. LongHairedFop (talk) 22:18, 9 February 2016 (UTC)
- Knowing that, don't you just wish they'd put 512 sheets into a ream? SteveBaker (talk) 15:32, 10 February 2016 (UTC)
- I do, it would be 2^5 square meters exactly.--Lgriot (talk) 20:25, 10 February 2016 (UTC)
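A minimal sketch of the calculation described above, using the exact ISO 216 area (a real ream may weigh slightly less because of trimming losses):

    grammage = 60      # basis weight, g/m^2 ("gsm")
    sheets = 500       # sheets in a modern ream
    a4_area = 1 / 16   # m^2 per A4 sheet, by the ISO 216 definition

    weight_kg = sheets * a4_area * grammage / 1000
    print(f"{weight_kg} kg")   # 1.875 kg, matching Dbfirs's exact figure
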
Widely distributed species
Phrynobatrachus ogoensis is a species of frog from western and central Africa. According to the article, which correctly reflects the IUCN Red List source, it's found in a small area of central Gabon and near Robertsport in Grand Cape Mount County, Liberia. How can a species be found in both spots, yet nowhere in between? I understand the concept of a species existing in disconnected locations that were once connected, e.g. the freshwater eel species (can't remember which one) found both in Europe and North America, and a species that's been human-transported from one spot to another, e.g. rats and house sparrows, but I don't imagine people transporting just another frog species in this manner, and what about the climate/topography would prevent the frog from spreading any farther from its current limited habitats in these highly rainforested regions? Nyttend (talk) 14:04, 9 February 2016 (UTC)
- Without knowing the specifics of frog distribution in Africa off the top of my head (man, if I had a dime for every time I said that phrase) there are a variety of elements in play that restrict species' expansion. As you note, the two areas may once have been contiguous and the species just died off in the middle areas. That (and the lack of further outward expansion) could be the result of many things, including direct human action altering waterways, draining marshes, and so on, or by various forms of pollution. Frogs are an indicator species (not in our article yet, so ref), which means that they are particularly susceptible to pollutants. In other words, the area between their current habitats might seem pristine to us and many other animals, but not to the froggies. It would also be interesting to see if there are other frog species that compete directly against ogoensis within the same ecological niche. Matt Deres (talk) 15:18, 9 February 2016 (UTC)
- The obvious answer is that the two locations probably represent two distinct species. The two populations were treated as the same species back in the 40s (before DNA was known) and that conclusion has persisted given the lack of any subsequent scientific effort to confirm or deny whether these two populations are from a single species. IUCN itself says they probably aren't a single species, but that more investigation is needed. Dragons flight (talk) 15:37, 9 February 2016 (UTC)
- It's entirely possible that the range was much broader, but has shrunk. Relict_(biology) describes this case. Think of how we have only small isolated patches left in the USA of old growth forest [30] or Tallgrass_prairie [31]. There are several species that now exist only in those remnants, or that occur at very low density anywhere else.
- I don't know specifically what's up with this one particular frog, but the situation you describe is entirely consistent with how we think about species distributions in a conservation/management context, and it's all too common of a story. While the CA tiger salamander is not so extreme, check out the isolated pockets in the distribution here [32]. Many other redlisted amphibians will have similarly disconnected distributions, as their habitats are degraded and they become extirpated from all but the most remote and inaccessible environs. SemanticMantis (talk) 18:50, 9 February 2016 (UTC)
The extinction of sandboxes
It looks like kids these days do not have access to sandboxes anymore (unless it's a sandboxed browser). When and how did this shift take place? Who decided that they should go? I suppose they were deemed unsafe, but was this move absolutely necessary? --Scicurious (talk) 14:04, 9 February 2016 (UTC)
- I'm sure it frustrated cats in the neighborhood. ←Baseball Bugs What's up, Doc? carrots→ 14:34, 9 February 2016 (UTC)
- This site declares "If there’s one thing that kids love more than slides and swing sets, it’s the sandbox! These can be found in all parks and playgrounds and kids can safely play all kinds of games in there, or build sand castles and other cool thing with the sand." However maintaining the sandbox requires protecting it from rain and from all animals and pets, including insects. Observing a child's play with toy models in a small sandbox is a form of non-directive Play therapy attributed to child psychologist Margaret Lowenfeld. AllBestFaith (talk) 14:54, 9 February 2016 (UTC)
- (EC) 1) Plenty of kids have access to sandboxes. I think you must mean the decline of public sandboxes at children's parks, or perhaps you haven't noticed that small (coverable) backyard sandboxes like this [33] are still fairly common in the USA. 2) Very little is absolutely necessary. 3) Here is a selection of articles that describe some of the safety concerns [34] [35] [36]. I'm not sure about necessity and sandboxes, but exposing kids to Toxoplasma gondii seems like a good thing to cut down on, and that's just one of the more famous pathogens that can linger in sand... SemanticMantis (talk) 15:03, 9 February 2016 (UTC)
- Yes, I mean the public ones, it seems that they are more difficult to protect than a little one in your backyard. Scicurious (talk) 15:36, 9 February 2016 (UTC)
- Well put. The question also implies that this was an organized decision; toys fall in and out of fashion just like anything else. Matt Deres (talk) 15:20, 9 February 2016 (UTC)
- I think it could be a Health Hazards Regulation. They could have been prohibited, in the same way that not wearing a seat belt was banned. Scicurious (talk) 15:43, 9 February 2016 (UTC)
- The OP's premise is patently wrong; nearly every public park in my metro area, including those built or renovated in the past 10 years, has a large open sand play area or sandbox in it. You can still buy sandboxes at Walmart and Target, and they sell large bags of "play sand" at Home Depot and Lowes. So the answer to the OP's "why?" question is "we can't tell you why, because the question makes no sense, because your premise is wrong". Unless the answer is "you aren't looking hard enough".--Jayron32 16:16, 9 February 2016 (UTC)
[Collapsed discussion: an aside on challenging the premise and reference desk conduct, e.g. who is supposed to do what.]
- In regard to the premise, here [37] is a NYT article from 1995 that gives some numbers, and says there were far more sandboxes included in city parks in the past. To wit "Since the 1970's, no new or renovated city playground designs have included sandboxes unless requested and lobbied for by the community, which also must maintain them." If anyone wants to find other stats for other areas, I'm sure they'd be appreciated. It seems as though the prevalence of sandboxes may change throughout time and place, which should really surprise nobody. It is clear that at least in NYC, there has been a precipitous decline in public sandboxes since the 1970s. SemanticMantis (talk) 18:38, 9 February 2016 (UTC)
- The time between when that article was written and the era it calls the halcyon days of sandbox glory is as long as the time between now and when the article was written. An article from 20 years ago saying how awesome life was 40 years ago isn't all that relevant to our discussion today. --Jayron32 01:05, 10 February 2016 (UTC)
- So what? Do you really think there has been some resurgence of sandboxes since 1995? For that matter, OP never gave a timeline, he could be thinking in comparison to 10 years ago, or maybe 50. Here's another article about NYC that says "the number of sandboxes has dwindled from a peak of seven hundred to only fifty or so today" [38]. That article is from 2010, so I don't think it's fair to say the numbers are out of date. I only looked for NYC because it's a big famous city with a large parks dept. I don't disbelieve that your metro area still makes new sandboxes with new parks, but it seems like you're trying very hard to disbelieve the fact that public sandboxes do seem to have declined in many areas. This seems to be coincident with increasing awareness of some health concerns, and in 2008 the national sanitation foundation did an extensive study. That study and others are reported on here [39] in 2015, where parasitic roundworms are also mentioned to have been found in 2/10 daycare sandboxes. It is indeed hard to find good references on numbers of municipal sandboxes. But the references I do have show a decline. They also show an increasing concern from public health officials and doctors. Given these references that I found, along with my personal observations, those of the OP, and those implicit in many of the public safety articles, I conclude that there has been a change in public sandbox incidence in many places in the USA. This does not preclude any new sandboxes being built in your neighborhood this year. SemanticMantis (talk) 16:46, 10 February 2016 (UTC)
- Wikipedia has an article about Playground surfacing and there are a dozen options besides sand. The article does not mention a tendency towards other materials, but sand has many drawbacks, excepting cost, which is low. The Americans with Disabilities Act was passed in 1990, and sand does not comply with its requirements. So, it's clear to me that some communities could choose other materials for their playgrounds. And that's without entering into the Toxoplasmosis issue. Llaanngg (talk) 19:22, 9 February 2016 (UTC)
- Yes, this is the big issue. Sand gets very dirty. Modern playgrounds are more likely to use rubber surfacing or maybe bark chippings. Blythwood (talk) 06:09, 10 February 2016 (UTC)
- Sand isn't just used as a surface in a sandbox, it's used as a building material to build sand castles, etc. StuRat (talk) 20:56, 9 February 2016 (UTC)
- Indeed. We're almost surely talking about sandpits here, not the open areas under/around whole playgrounds of equipment. DMacks (talk) 21:59, 9 February 2016 (UTC)
- I don't know about in the US but, since retiring from my original occupation, I have worked as a relief caretaker in a number of local authority schools in the UK. One of the requirements of nurseries and early years units is that they must have provision for the children to play with sand and with water (usually both together). They often have facilities for this, both inside and outside and it is one thing that drives you mad when you have to clean it all up every evening - would you let your kids play with sand and water in a room with carpets? I have even worked at one nursery school where they had a one ton bag of soil and they asked me to regularly bring in a couple of buckets so the kids could mix it with water and sand and make mud pies - you can imagine the mess that made on the nursery carpets when they came back inside. The outside sandpit was always covered at night to stop cats and birds crapping in it but, obviously, in a public park it would be difficult to keep it covered and of course the public could drop sharp objects in it. However, the premise that children don't get to play in sand anymore certainly doesn't apply in the UK. Richerman (talk) 22:11, 10 February 2016 (UTC)
- Clearly the problem there is not with the requirement that kids get to play with dirt, sand and water - but that they should do it indoors. Why not let them do it outside - when weather permits - and not otherwise? SteveBaker (talk) 19:15, 11 February 2016 (UTC)
- They have automated catboxes that can comb the "lumps" out now. I wonder if a larger version could clean and then seal a sandbox at night. StuRat (talk) 22:24, 10 February 2016 (UTC)
- No doubt such a thing could be devised - it could filter, wash and dry the sand at intervals and return it to the sand box - but the cost of building and running such a contraption would likely be prohibitive. Personally, I doubt that would be a good idea, even if it was plausible. There is undoubtedly a trend in trying to keep children super-clean and far from all bacteria and other such nastiness - but sadly, it starts to look like doing that causes them to fail to gain immunity to a lot of the things they encounter. There are suspicions that this may explain the increase in some diseases such as asthma - which is especially prevalent in children that are kept "too clean". As humans, our children evolved to sit around in dirt, sand, etc - it's dangerous to assume that cutting them off from those situations is a net advantage. SteveBaker (talk) 19:13, 11 February 2016 (UTC)
- For which, see hygiene hypothesis. Matt Deres (talk) 02:10, 12 February 2016 (UTC)
- Yes, some level of exposure to microbes may be healthy, but that doesn't mean we should let our kids play with cat poop. There's an appropriate balance. StuRat (talk) 05:13, 12 February 2016 (UTC)
- Indeed, "moderation in all things" (especially cat pooh) SteveBaker (talk) 03:37, 13 February 2016 (UTC)
Starkiller Base superweapon
In Star Wars: The Force Awakens, General Hux gives the order to fire Starkiller Base's superweapon, which emits an energy beam strong enough to destroy entire planets. When I first saw the film, my suspension of disbelief briefly broke when I thought "there's no way that energy beam can travel light-years in minutes", but then I thought "Hey, I'm watching a film with interstellar spaceships and talking aliens", and kept on with the story.
Now, onto my question. Suppose such an energy beam is possible. Ignore its power; it doesn't have to destroy anything, just get to its destination without getting too spread out and diluted. It can be just a fancy light show. But it has to be visible to the naked eye.
How would the people on the destination planet see it coming? Would it appear as a slowly-moving bright spot in the sky, getting gradually brighter, until it illuminated the whole sky? Or would the people just suddenly find the sky all illuminated? JIP | Talk 20:11, 9 February 2016 (UTC)
- If it travels at the speed of light - they wouldn't see it at all until it arrived. If it travels faster than light then all bets are off because the laws of physics as we know them say that it's impossible - so any "What if..." answers would be nothing better than wild speculation.
- In the real world, even a visible-light laser is invisible as it crosses a vacuum - and unless it has enough power to ionize the air and make it glow, it'll be more or less invisible all the way until it hits its target (maybe it might vaporize a few dust motes or something). If it is powerful enough to make the air glow, it still wouldn't be visible until it hit the air - and it would pop into view as a glowing shaft of light in such a tiny fraction of a second that it would appear to be instantaneous.
- But if it's fictional...it can look like whatever the director and the special effects department can imagine!
- SteveBaker (talk) 20:44, 9 February 2016 (UTC)
- OK, so it would go as I imagined, not as it was actually depicted in the film. I always thought the effect of a beam moving at light speed would have instantaneous effects when it's finally seen. Not like in the film where people can harmlessly watch it slowly approach for a few minutes, until it finally destroys the entire planet in a few seconds. I think the director made it move so slowly for dramatic effect. JIP | Talk 20:50, 9 February 2016 (UTC)
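For a sense of scale, here is a quick back-of-the-envelope Python sketch; the 4-light-year distance and 5-minute crossing are illustrative assumptions, since the film states neither:

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

def required_multiple_of_c(distance_light_years, minutes_taken):
    """How many times the speed of light a beam must travel to cross
    distance_light_years in minutes_taken (at exactly c, the trip
    takes distance_light_years years)."""
    trip_minutes_at_c = distance_light_years * MINUTES_PER_YEAR
    return trip_minutes_at_c / minutes_taken

print(required_multiple_of_c(4.0, 5.0))  # ~420,768x the speed of light
```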
- I haven't seen the film, but the effect sounds totally unlike a laser, and more like a plasma ball, as in Ball lightning, but perhaps containing a Quark–gluon plasma to carry that sort of energy. It would have to cover most of the distance via a created Wormhole. I suspect that the film-makers were more worried about the impression on the viewer than they were about explaining the exact physics. Dbfirs 21:55, 9 February 2016 (UTC)
- Haven't seen the film, but if the region of space the beam passes through glows with ordinary light, and if the beam follows a spacelike path, then the beam would appear to emanate on the planet struck and move up into the sky. One way to see this is that if the beam is "instantaneous", linking the two planets at a single moment in their shared rest frame (assuming they're not moving relative one another) then it really isn't moving from one planet to the other - its appearance is symmetrical as seen from either world.
- However, it is conceivable that the design of the beam would call for it to build up in a large spacelike path while the energy accumulated, but then one end gradually moves at a sublight speed toward the planet until it discharges, etc. As a rule, you can write apologia for the worst sci fi plots if you think them through carefully. Wnt (talk) 22:41, 9 February 2016 (UTC)
- I see what you're saying - if the beam arrives faster than the light it emits along the way, then it's tempting to say that it's first visible where you are - then starts to appear backwards towards the source as the light from its passage catches up with its ultimate effect. But because the laws of physics don't allow for things that go faster than light, all bets are off. We can't make any reasonable statement about the physical reality of the square root of a negative number - and that's what the Lorentz transformation requires: the Lorentz factor is γ = 1/√(1 - v²/c²).
- When v² is greater than c², v²/c² is greater than one, so 1 - v²/c² is negative and we have the square root of a negative quantity. So the mass, length, time-dilation and energy of this superluminal 'effect' are all impossible to calculate. We know that in the real world, we never see the square root of a negative number in an actual result - it always cancels out somewhere else. So there is really very little likelihood of anything physical that can transmit information travelling faster than light...and if it did, the consequences are a mathematical impossibility. Causality itself falls by the wayside. Making any statement whatever about what that might look like is entirely unreasonable in light of what we know.
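To make that concrete, here is a small Python sketch of the Lorentz factor above; it uses Python's complex-math module so that the v > c case returns an imaginary number instead of raising an error:

```python
import cmath

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2); purely imaginary when v > c."""
    return 1 / cmath.sqrt(1 - (v / C) ** 2)

print(lorentz_factor(0.5 * C))   # ~1.155: modest time dilation
print(lorentz_factor(0.99 * C))  # ~7.09: relativistic effects dominate
print(lorentz_factor(2.0 * C))   # ~-0.577j: imaginary, physically meaningless
```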
- Possibly the only reasonable speculation relates to the (not-real) concept of tachyons - which hypothetically might travel beyond the speed of light. The kind of crazy math that results from this is that tachyons would require infinite energy to slow down to the speed of light (a kind of mirror image of regular particles that need infinite energy to reach the speed of light) - and their lowest energy state would be when they were travelling at infinite speed. So even if we take a BIG stretch into the most hypothetical physics, we end up with a weapon whose effects would travel at infinite speed and not take the time that the beam weapon in Star Wars takes.
- All bets are off. This is a fictional thing - and the appearance of it is whatever your imagination (or the plot) needs it to be. SteveBaker (talk) 15:30, 10 February 2016 (UTC)
- Well, the laws of physics allow phenomena to go faster than light, just not information. For example, the owners of this death-ish star might have launched a bunch of probes to line up along the trajectory of the intended attack years in advance, then ceremoniously press the button at the exact time they were all timed to go off ... in which case you would see the closest ones to the planet light up first. (Just ask a 9/11 truther ... they put explosives inside the planet, and the death star firing at it is just a misdirection...) Wnt (talk) 17:58, 10 February 2016 (UTC)
- There is, in fact, a detailed canon explanation for why the beam from Starkiller Base appeared in the manner that it did, but as it is pure science fantasy, I shall not sully the reference desk with such drivel. Anyone curious can look up the weapon's entry on the Star Wars Wikia. Someguy1221 (talk) 22:41, 10 February 2016 (UTC)
- Indeed, it is possible for a phenomenon of some kind to travel faster than light - but not in this case since the beam carries the information that someone on the death star pressed the "DESTROY THE PLANET!" button (and when and in which direction it was aimed and a whole lot more besides). Since information cannot travel faster than light, neither can a functional death ray. So, yes, it is indeed still hogwash. SteveBaker (talk) 15:12, 11 February 2016 (UTC)
February 10
Io photographs
When everyone was all excited about the New Horizons space probe reaching Pluto, I remember seeing photos of one of Jupiter's moons, called Io. Two of these photos stood out to me. They were possibly infrared photos or something similar. They appear to be black and white, like THIS picture. You could see little bright spots/mushroom clouds from the volcanoes erupting. Where did these pictures go?! I can't find them on Google Images, nor can I find them on Wikipedia. Can anyone help me find them? 199.19.248.82 (talk) 00:18, 10 February 2016 (UTC)
- Our article on Io (moon) links to many great resources, including:
- Io at the NASA Jet Propulsion Lab Photo Journal archive
- New Horizons Io photo catalog from Johns Hopkins Applied Physics Lab
- Io photographs from the Galileo mission, archived at University of Arizona's Planetary Image Research Laboratory
- This might be the one you're thinking of, although the quality is not great. The bright spots are indeed volcanoes. Commons:Category:Photos of Jupiter system by New Horizons and commons:Category:Volcanoes of Io have some more images. Smurrayinchester 13:38, 10 February 2016 (UTC)
hops as a preservative
The beer article mentions several times that hops act as a preservative. Which chemical in hops, exactly, provides the preservative effect?
The beer article also says "the acidity of hops is a preservative", so would other acids work as well? Johnson&Johnson&Son (talk) 08:06, 10 February 2016 (UTC)
- I notice that of the 2 references ([61] & [62]) used for that statement, the first no longer leads to relevant material and the second leads to the abstract of a possibly relevant article but does not mention the preservative property explicitly in the abstract (the property of aiding head retention is not quite the same thing).
- From my own informally acquired knowledge of brewing, the preservative effect was the reason for the introduction of hops in the mediaeval period, after which the taste effect became appreciated, but in the modern era – with better control of hygiene in the brewing process – the preservative effect is less relevant and the effects on taste and other factors (e.g. mouthfeel) predominate.
- I have a range of books about brewing at home which might contain the answer re hops, but will not be able to consult them until Thursday at the earliest. As for using "other acids", I'd assume it possible that other non-hop adjuncts formerly used such as sweet gale (Stonehenge Brewery still uses this for one seasonal beer) may have had preservative as well as flavouring effects. If however one was to use non-plant sources, I personally would no longer regard the resulting beverage as "beer" :-) . {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 15:16, 10 February 2016 (UTC)
- There may be two effects at work here, and hops' role may be more in one than the other. The first is that hops may actually act as a preservative; that is, they may chemically prevent spoilage. The second is that hops may mask spoilage with their strong flavor. That is, you taste the hops rather than the spoilage in the beer. This reference, for example, notes that herbal mixtures (such as hops, but also other herbs and spices known as Gruit) "mask unpleasant spoilage notes". One of the characteristics of India pale ales, or IPAs, is their extremely high hop content, which covered the "skunky" or "stale" taste of beer shipped from Britain to India on long overseas voyages. This beer blog notes "High hop levels can preserve a beer's flavor in two ways: they have a limited ability to protect beer from spoilage by some microorganisms, and, more importantly, their bitterness can mask stale flavors." (bold mine). Several other sources about IPAs note the use of larger quantities of hops than normal to mask staleness, spoilage, or undesirable flavors. --Jayron32 15:47, 10 February 2016 (UTC)
- This reference, says that Primary Alpha Acids Humulone, Cohumulone, Adhumulone have an antiseptic effect, especially against Gram positive bacteria. DuncanHill (talk) 15:56, 10 February 2016 (UTC)
- EC: If you scroll down your original link to hops, there's a subheading about chemical composition, which on expansion isn't simply about taste. It's the release of Alpha_acid and Beta_acid in the fermentation process that acts as a preservative. I think other acids could act as a preservative in beer, but then would it still be beer per se, as the hops are an integral part of the process. When fermenting wine Sorbic acid can be added as a preservative, so you could put some of that in fermenting beer as a preservative I guess. I'm not sure how that would affect the rest of the fermentation process or taste though. Mike Dhu (talk) 16:16, 10 February 2016 (UTC)
- As far as other acids working, yes, anything that moves the pH outside the range a particular bacteria likes will act as a preservative to prevent that particular bacteria from growing. However, there are acidophile bacteria that may thrive in those extremes, so keeping those out is also important. Of course, the acid may also kill the yeast, so could only be added after the brewing process is complete, and people won't like extremely acidic beer either, so it would need to be later neutralized. Thus, there are easier ways to preserve it with modern technology. StuRat (talk) 17:29, 10 February 2016 (UTC)
- I will point out that hops is the closest biological and linguistic relative of hemp (i.e., Cannabis), and that the two words are either cognates or very closely related Wanderwörter. See also soma, which seems to be some sort of brewed drink, perhaps from poppies or hemp. μηδείς (talk) 02:56, 11 February 2016 (UTC)
- The berries of the "chequer tree" Sorbus torminalis were widely used in England to flavour beer before the arrival of new-fangled hops (in the 15th century). Our article describes them as "usually too astringent to eat until they are over-ripe". I don't think I've ever seen one, but they're related to the rowan which certainly has acidic-tasting berries. Alansplodge (talk) 17:50, 11 February 2016 (UTC)
Unknown bird
Can anyone help me identify the bird shown? The photo was taken in the Ngorongoro crater, in Tanzania, in January. Thanks.
[photo of the bird] --Phil Holmes (talk) 13:53, 10 February 2016 (UTC)
That's the chap. Thanks for your help. --Phil Holmes (talk) 15:20, 10 February 2016 (UTC)
water temperature and baby bath
If the baby bath water feels at all warm to the touch (hand or elbow) does that necessarily mean that its temperature is above 37C, since the human temperature is (approx) 37C? Or can the water be 32C, still warm to the touch, because there is a difference between how we sense temperature on our skin and our core body temperature? If the water is really 32C (as indicated by the thermometer), will it necessarily feel 'cold' since my core temp is 37C? I'm trying to understand if there is a difference between core body temperature, and our sensation on the skin of warm/cool. Thanks if you can point me to a credible info source. — Preceding unsigned comment added by 94.210.130.103 (talk) 20:25, 10 February 2016 (UTC)
- Our article on thermoregulation covers some of this. Your specific questions about whether or not something will "feel cold" are going to be highly variable from person to person and at different times (as the link above suggests). Broadly speaking, we're not very good at gauging temperatures. Our article at thermoreceptor is not very detailed, but my own experience is that we seem to feel temperature changes rather than actual temperatures. Matt Deres (talk) 21:12, 10 February 2016 (UTC)
- Yes, skin temperature is what is being compared. You can verify this yourself by cooling one hand (just go outside with only one glove, for a bit, in winter), then put both hands in water that you have verified is body temp with a thermometer. The water will feel much hotter on the cold hand. StuRat (talk) 21:11, 10 February 2016 (UTC)
- Notice too that babies are more sensitive to temperature. Bath water should be just above 100 °F (close to the 37 °C you mention) to prevent chilling or burning the baby. In case of doubt, simply use a bath thermometer. --Scicurious (talk) 00:33, 11 February 2016 (UTC)
- Incidentally, the reason we're so poor at determining temperature by touch is that what our senses really detect is the rate at which heat flows out of or into the skin. That's a function of the temperature of the skin, the temperature of the thing you're touching, and the thermal conductivity of that thing. That's why wood feels warmer than metal when both are at the same (below 37 °C) temperature...wood is a poor conductor of heat and metal is a good one, so we are fooled into thinking that the metal is "colder" because heat leaves our skin much faster than it does when touching wood.
- StuRat's example of sensing temperature with a hand which is cold is also caused by this since the rate of heat flow into the cold hand is faster than into the warm hand.
- Bottom line is that we simply don't have a sense that can judge temperature directly...even though we all seem to think that we do.
- SteveBaker (talk) 15:05, 11 February 2016 (UTC)
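The wood-versus-metal effect described above can be roughed out with the standard contact-temperature formula for two bodies pressed together: the interface settles at an average of the two temperatures weighted by thermal effusivity, e = √(k·ρ·c). A Python sketch, with ballpark material constants (illustrative values, not measurements):

```python
import math

def effusivity(k, rho, c):
    """Thermal effusivity: sqrt(conductivity * density * specific heat)."""
    return math.sqrt(k * rho * c)

def contact_temperature(t1, e1, t2, e2):
    """Effusivity-weighted interface temperature of two touching bodies."""
    return (e1 * t1 + e2 * t2) / (e1 + e2)

skin = effusivity(0.37, 1000, 3500)   # ~1100 - rough figure for skin tissue
wood = effusivity(0.17, 750, 2000)    # ~500 - dry wood
metal = effusivity(237, 2700, 900)    # ~24000 - aluminium

# Skin at 33 C touching 20 C objects: the wood barely cools the skin surface,
# while the metal drags it almost all the way down to 20 C.
print(contact_temperature(33, skin, 20, wood))   # ~29 C
print(contact_temperature(33, skin, 20, metal))  # ~20.6 C
```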
What does the X and Y (of chromosomes) stand for?
93.126.95.68 (talk) 20:45, 10 February 2016 (UTC)
- See X chromosome, Y chromosome and XY sex-determination system. --Jayron32 20:47, 10 February 2016 (UTC)
- They don't stand for anything, that's what they actually look like: [40]. Other than the Y chromosome, most healthy human chromosomes look something like an X, with some looking more like a U or V: [41] (see image 5). The Y chromosome, on the other hand, is missing a part, and that makes it look more like a Y. StuRat (talk) 20:54, 10 February 2016 (UTC)
- No, that's incorrect. According to our article on X_chromosome it was so-named because "...Henking was unsure whether it was a different class of object and consequently named it X element, which later became X chromosome after it was established that it was indeed a chromosome. The idea that the X chromosome was named after its similarity to the letter "X" is mistaken. All chromosomes normally appear as an amorphous blob under the microscope and only take on a well defined shape during mitosis." And according to our article on Y chromosome, that name was chosen simply because it came after "X". Matt Deres (talk) 21:04, 10 February 2016 (UTC)
- Interesting, but I bet those temporary names would have soon been replaced, had they not turned out to physically match the appearance of each during mitosis. (To me, the more obvious terms would have been "male" and "female" chromosomes.) StuRat (talk) 21:08, 10 February 2016 (UTC)
- The ZW_sex-determination_system also doesn't have chromosomes that look like letters, and the letters don't stand for anything there either. The other main sex-determination system is X0_sex-determination_system, but I'm not sure if the X looks like an X there or not. SemanticMantis (talk) 22:10, 10 February 2016 (UTC)
- Your bet would be foolish. All chromosomes look like an X (during early mitosis). Why aren't they all called X according to your 'logic' then? Fgf10 (talk) 08:04, 11 February 2016 (UTC)
- For the same reason you don't give all your kids the same name (unless you're George Foreman). Because it would obviously be confusing to call them all the same thing. Of the sex chromosomes, only one type looks like an X and the other resembles a Y. StuRat (talk) 16:21, 11 February 2016 (UTC)
Does DEMKO approve Schuko (CEE 7/7) plugs?
I owned an old washing machine which had a Schuko plug (the "French-German compromise" CEE 7/7): among various certification labels (VDE, CEBEC, ÖVE...) there was also the symbol of DEMKO, even though Denmark did not accept Schuko plugs until very recently. Can someone tell me why there was a DEMKO certification label on that plug?--Carnby (talk) 21:13, 10 February 2016 (UTC)
- Yes. The Schuko plug originates in a patent granted in 1930 to a Bavarian manufacturer, Bayerische Elektrozubehör AG. The company's ambition, now partly realized, was to create a Europe-wide standard. It would be natural to seek individual European national approvals, especially in countries bordering Germany that are markets for German goods, at the earliest opportunity, so that the approval logo could be included on the injection-moulded plug. DEMKO, the national body for testing of electrical products sold in Denmark, existed before the Schuko patent(s) and could issue its D-Mark approval at any time. However, since 1978 electrical products no longer need to carry the D-Mark for sale in Denmark. Safety note: a Schuko plug for a metal-cased washing machine is safe to use with an earthed Schuko wall socket, but it creates a safety hazard if plugged into a different, non-earthed 2-pole socket. AllBestFaith (talk) 13:47, 11 February 2016 (UTC)
Climate averages of Bacău
The page about the Romanian city of Bacău still has no climate averages; could someone please tell me where I can find a reliable source about climate averages for Bacău region?--Carnby (talk) 21:20, 10 February 2016 (UTC)
- Have you tried Weather Underground (weather service)? I think they usually have this information somewhere for many places. It's usually my first stop for weather info. --Jayron32 21:23, 10 February 2016 (UTC)
- AFAIK Wunderground does not show reliable climate averages (WMO recommends at least 30 years of daily record[ing]s)--Carnby (talk) 21:55, 10 February 2016 (UTC)
- The Romanian Wikipedia has this at ro:Bacău, cited to the Administrația Națională de Meteorologie (which would be your best bet for further info):
Evolution of climate elements measured at the Bacău weather station (Evoluția elementelor climatice măsurate la Stația meteorologică Bacău)
Month | Jan. | Feb. | Mar. | Apr. | May | Jun. | Jul. | Aug. | Sep. | Oct. | Nov. | Dec.
---|---|---|---|---|---|---|---|---|---|---|---|---
Minimum temperature (°C) | -4.13 | -4.58 | -0.30 | 5.04 | 10.18 | 13.92 | 15.83 | 15.95 | 10.37 | 5.60 | 0.85 | -1.45
Maximum temperature (°C) | 2.38 | 2.50 | 9.68 | 15.73 | 22.35 | 25.82 | 28.77 | 28.45 | 21.84 | 16.43 | 8.30 | 3.86
For how long can bacteria and viruses live outside of the body (not in laboratory conditions)?
For how long can bacteria and viruses live outside the body, outside laboratory conditions? For example, if someone has influenza or a bacterial disease and sneezes, spreading the bacteria or viruses onto a bed / table / chair etc. (or other places that people tend to touch) - will someone who touches these places be infected? (People say that HIV, for example, is destroyed within seconds of leaving the body. Is that true?) 93.126.95.68 (talk) 21:23, 10 February 2016 (UTC)
- It varies a lot, depending on the pathogen. Since you mentioned HIV specifically, no, that is not true. Or at least it is not generally true that the virus always is destroyed within seconds of leaving a body.
"Hepatitis B virus (HBV), hepatitis C virus (HCV) and human immunodeficiency virus (HIV) can all survive outside the human body for several weeks, with virus survival influenced by virus titer, volume of blood, ambient temperature, exposure to sunlight and humidity."
- From this [42] study published in 2007. Here's a nice overview of virus survival in the environment [43], it discusses several different groups. SemanticMantis (talk) 22:03, 10 February 2016 (UTC)
- Thank you for the information. The study is amazing. 93.126.95.68 (talk) 22:44, 10 February 2016 (UTC)
Does chlorine destroy viruses like it does to bacteria?
Can chlorine destroy viruses like it does to bacteria? If it can, what is the mechanism? 93.126.95.68 (talk) 23:00, 10 February 2016 (UTC)
- This is a substantially complicated subject. The short answer is yes, and it varies. There are LOADS of resources if you google chlorine virus inactivation. Can you make your question more specific? Or at least convince us this isn't a homework question? Vespine (talk) 00:15, 11 February 2016 (UTC)
Yes, it can. Chlorine reacts with double bonds (see Halogenation). Bleach (sodium hypochlorite, NaOCl) works in a somewhat similar way. I won't get into how the reaction works on the molecular level, but viruses, like bacteria, contain double bonds between two carbon atoms. Chlorine reacts with those double bonds. The usual result is a carbon-carbon single bond with a chlorine atom on each carbon. Disruption of the double bonds either destroys the virus's protective coat or its ability to reproduce, or both. Roches (talk) 00:23, 11 February 2016 (UTC)
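Schematically, the halogenation described above adds chlorine across each carbon-carbon double bond (a generic sketch, with R and R' standing for arbitrary groups):

R-CH=CH-R' + Cl2 -> R-CHCl-CHCl-R'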
- Thank you. I asked just to find out whether cleaning an area with chlorine also works against viruses; the question came to mind today while I was cleaning the kitchen work surface. No homework at all. 93.126.95.68 (talk) 01:27, 11 February 2016 (UTC)
- The answer is still "It depends." Some viruses have thick protein coats which require a higher concentration of chlorine to inactivate them. Generally, according to a US Centers for Disease Control study, many of the enteroviruses (among the viruses they cited were Hepatitis A, Poliovirus, the Noroviruses (implicated in outbreaks of food-borne illness), and Rotavirus) are "moderately" resistant to chlorine's disinfectant effects, compared to bacteria. However, sodium hypochlorite-based cleaners such as the "Clorox" brand disinfectants are over ten times more effective than disinfectants using alcohol, phenol, or quaternary ammonium compounds at killing both bacteria and viruses.
- The most resistant micro-organisms to chlorine disinfection, according to this study, are the protozoa, and some of these can cause very nasty diseases - Entamoeba histolytica, Giardia intestinalis, Toxoplasma gondii and Cryptosporidium parvum were cited in particular to be both highly resistant to chlorine disinfection and to be persistent to various degrees in water supplies, with Cryptosporidium parvum being the most troublesome micro-organism found in water supplies - it caused the largest waterborne-disease outbreak ever documented in the United States, making 403,000 people ill in Milwaukee, Wisconsin in 1993. loupgarous (talk) 07:18, 11 February 2016 (UTC)
What is the physiological reason for inappetence?
In a lot of conditions, especially infections, there is inappetence. What is the physiological reason for that? (I know, for example, that fever is produced to destroy the bacteria and viruses.) I thought that the explanation is that the body wants to fight the pathogen and eating disturbs this, because the body needs to invest energy in digestion. Am I right? 93.126.95.68 (talk) 23:10, 10 February 2016 (UTC)
- I don't think "inappetence" is an actual English word, or at least not one commonly used in medicine or biology. The usual technical term is "anorexia".
- Unfortunately, for a lot of people, that word has become synonymous with anorexia nervosa, and in fact anorexia is a link to that article. Our article for what you want is at anorexia (symptom). That's probably where you should start looking. --Trovatore (talk) 23:40, 10 February 2016 (UTC)
- Indeed, inappetence redirects to Anorexia (symptom), which is different from anorexia.--Scicurious (talk) 23:44, 10 February 2016 (UTC)
- I'd just use the common term "lack of appetite". That's clear to everyone, except perhaps a geologist. :-) StuRat (talk) 00:20, 11 February 2016 (UTC)
- If you want to search the technical/medical literature, it's probably good to know the name, which is "anorexia". You can use "-nervosa" to filter out that condition.
- It seems "inappetence" actually is a word, at least according to Wiktionary, but I still think you are not likely to find much in English under that name. --Trovatore (talk) 00:23, 11 February 2016 (UTC)
- Yes, it's in the OED with cites from 1691 to 1887. Dbfirs 11:59, 11 February 2016 (UTC)
- In the case of an intestinal infection, like the flu, the body can't always tell it from food poisoning, so avoiding any more (potentially bad) food until the condition clears is the wise course of action. StuRat (talk) 00:24, 11 February 2016 (UTC)
- Please don't answer science questions in terms of natural teleology. Wisdom is a function of conscious reasoning, not unconscious bodily reactions. --76.69.45.64 (talk) 19:51, 11 February 2016 (UTC)
- I didn't think it was necessary to describe the full evolutionary process, but apparently it is: Those individuals who continued to eat food when they were in intestinal distress were more likely to ingest more of the bad food which gave them the trouble in the first place, and thus die, and therefore pass down fewer genes to offspring. In the case where the intestinal distress was unrelated to the food, the loss of a meal or two (for those whose genes caused them to lose their appetite) was not likely to cause death, so had little evolutionary cost. The net result, then, would be evolutionary pressure to lose one's appetite when one had intestinal distress. Note that there's no reason to expect this evolutionary pressure to be unique to humans, so this reaction may have developed long ago in our evolutionary past. (Scavengers, on the other hand, being largely immune to food poisoning, may not react in the same way.) StuRat (talk) 05:22, 12 February 2016 (UTC)
- Without having any idea of the answer, given I focused in botany with my undergrad Bio major, the OP's question was well formed, and anorexia as a psychological condition has quite a different meaning from mere physiological inappetence due to a temporary infection. I find the above responses vary between irrelevance and rudeness. μηδείς (talk) 02:45, 11 February 2016 (UTC)
- Anorexia nervosa is a psychological condition. Anorexia by itself is lack of appetite. --Trovatore (talk) 03:47, 11 February 2016 (UTC)
- In fact, our article, which was linked above by Scicurious over 3 hours before Medeis's reply (so I guess it is one of the rude or irrelevant replies), includes International Statistical Classification of Diseases and Related Health Problems and Medical Subject Headings links (okay, these are Wikidata, but I'm pretty sure they would have been there before any reply) on the symptom, and several references (I think 4) which discuss anorexia of infection. Nil Einne (talk) 14:26, 11 February 2016 (UTC)
- Purely as a thought experiment, perhaps your body has decided that the costs/dangers of bringing in new food and other possible issues, such as poisons and pathogens, outweigh the short and long term disadvantages of burning the body's reserves. And +1 to Medeis. Greglocock (talk) 02:58, 11 February 2016 (UTC)
- The word "anorexia" literally means lacking appetite, but it's very commonly used as an abbreviation for anorexia nervosa, so its use this way could cause confusion. ←Baseball Bugs What's up, Doc? carrots→ 04:23, 11 February 2016 (UTC)
- Nevertheless I believe it is the usual term in medicine, in English, for lack of appetite. However both words get plenty of hits on Google Scholar, so I can't be sure. --Trovatore (talk) 04:31, 11 February 2016 (UTC)
- While I know counting search hits isn't generally useful when in the thousands, for me, 'anorexia -nervosa' on Google Scholar gets a few hundred k. 'inappetence' gets around 10k but many of these seem to be in animals. You need to include something like 'inappetence patient' or may be 'inappetence human' and that reduces results further. Doing something like 'inappetence -cat -dog -bovine -reindeer -sheep -cattle -porcine -cats -dogs -rabbit -horse -salmon -goat -rats -poultry -pigs -monkey' still seems to manage to find quite a few non human results. Even in animals, 'anorexia cat' seems to find a lot more results than 'inappetence cat' although not all results relate to anorexia in cats. Possibly dog is a better example since you avoid discussions of CAT scans and Cognitive analytic therapy, but I'm not a dog person. Nil Einne (talk) 16:26, 11 February 2016 (UTC)
- Nevertheless I believe it is the usual term in medicine, in English, for lack of appetite. However both words get plenty of hits on Google Scholar, so I can't be sure. --Trovatore (talk) 04:31, 11 February 2016 (UTC)
- Appetite is regulated by a number of factors including leptin, ghrelin, cholecystokinin, and neuropeptide Y. In acute illness, a number of inflammatory cytokines are released. Among these, interleukin-1 alpha (IL1A)[44] and the tumor necrosis factor alpha (TNF alpha)[45] it activates are known to suppress appetite. Interleukin-18 is also known to suppress appetite.[46] As there is a complex interplay of signal molecules, I do not want to make this sound like a complete answer. BiologicalMe (talk) 13:26, 11 February 2016 (UTC)
- I should also add that the references above are the first ones I could find, and probably not the best ones. BiologicalMe (talk) 13:32, 11 February 2016 (UTC)
Thank you very much! The information about the factors is very interesting! 93.126.95.68 (talk) 18:37, 11 February 2016 (UTC)
February 11
Is it normal to never get angry?
I've been annoyed but never angry. Ennyone57 (talk) 03:49, 11 February 2016 (UTC)
- It is for you. GangofOne (talk) 06:25, 11 February 2016 (UTC)
- Emotions, like almost all mental phenomena, are highly subjective and hard to quantify - so it's hard to give any empirical comparison of whether your mental state with regard to anger (or almost any emotion) is atypical. All of that said, human beings clearly vary quite considerably in how they react to vexing or personally offensive stimuli. You may want to take a look at our articles affect (psychology), affect display and blunted affect, though note that each of these focuses more on behaviour than on mental states (again, going back to the deep issues with trying to study the emotions themselves, which many cognitive scientists feel may present some by-nature-insurmountable difficulties). I will say this much - if you feel that you have no problem with the intensity of your other emotional states, I (personally) wouldn't waste any time feeling "abnormal" for a lack of particularly strong anger. Some people just run cool by nature, and the result is often a very positive influence on those around them. That said, if your lack of intensity of emotion in this, or any, context makes you feel uncomfortable, incomplete or confused, a qualified psychiatric professional may be able to help you sort those feelings out. Unfortunately, our policies here prevent us from digging too deep into that topic, since it impacts at least somewhat on our "no medical advice" standard. Snow let's rap 06:36, 11 February 2016 (UTC)
- Macmillan Dictionary defines ANGER as the strong feeling you get when you think someone has treated you badly or unfairly, that makes you want to hurt them or shout at them. That definition may be extended to the case of someone close to you being treated badly. If the OP considers their own reaction to such an event, which in this stressed world is not hard to visualize, then that qualifies as the OP's own anger reaction. It need not present visible symptoms or have to match the anger reactions of other people. AllBestFaith (talk) 13:06, 11 February 2016 (UTC)
- This seems like a good functional definition, though it doesn't actually get at whether the process internally is different for the OP. I wonder if a more meaningful approach wouldn't be to do comparative measurements with fMRI or something. Such studies exist [47][48] though it seems dicey to measure "genuine" rage except in weird scenarios like the first. I mean, as much as in speech we might correlate the feeling you get when you read an article about camps in North Korea to the feeling you would have if you actually caught your wife's rapist between a blind corner and a baseball bat, I don't know if it's really the same emotion at all - how much of it lies in the actual intent to do actual harm? (AFAIR there is an aspect of repression from the frontal lobe in all this, but I'm not sure that "without it" it is "the same thing") Wnt (talk) 13:53, 11 February 2016 (UTC)
- Macmillan Dictionary defines ANGER as the strong feeling you get when you think someone has treated you badly or unfairly, that makes you want to hurt them or shout at them. That definition may be extended to the case of someone close to you being treated badly. If the OP considers their own reaction to such an event, which in this stressed world is not hard to visualize, then that qualifies as the OP's own anger reaction. It need not present visible symptoms or have to match the anger reactions of other people. AllBestFaith (talk) 13:06, 11 February 2016 (UTC)
- I think the amygdala is the main center in charge of emotions like this, the adrenal gland produces the main associated hormones. If you don't feel much anger you might not feel much fear either as in the fight or flight reflex. A bit is good but we don't have to fight or flee saber toothed tigers nowadays. Dmcq (talk) 16:38, 11 February 2016 (UTC)
- I guess I'd have to ask how you know you're not angry? It's a bit like asking whether the color you see as "red" looks the same to me. How would you ever know? It's quite possible that your feeling of annoyance "feels" the same as the feeling I'd describe as "anger" - but that your external manifestations are kept more firmly under control. It's very difficult to compare inner sensations between individuals like that. SteveBaker (talk) 19:02, 11 February 2016 (UTC)
- For anyone wanting to dive more into this topic, the fancy-pants philosophical term for these "inner sensations" is qualia. --71.119.131.184 (talk) 23:33, 11 February 2016 (UTC)
- Yes, indeed. SteveBaker (talk) 02:55, 13 February 2016 (UTC)
- For anyone wanting to dive more into this topic, the fancy-pants philosophical term for these "inner sensations" is qualia. --71.119.131.184 (talk) 23:33, 11 February 2016 (UTC)
- Wikipedia articles exist on Anger and Anger management. Two quotations are:
- "We all experience anger; anger only becomes a serious concern when an individual is angry too frequently, too intensely, and for too long." - Novaco
- "Anyone can become angry, that is easy...but to be angry with the right person, to the right degree, at the right time, for the right purpose, and in the right way...this is not easy." - Aristotle
- AllBestFaith (talk) 14:32, 12 February 2016 (UTC)
Making a slushie without sugar
Slush (beverage)#Sugar states that sugar is needed to act as an antifreeze. My question then, is if some other "edible antifreeze" could be used (excluding salt, because I don't think anyone would want that, even if it worked). StuRat (talk) 16:18, 11 February 2016 (UTC)
- Or you could try using ethanol. DuncanHill (talk) 16:26, 11 February 2016 (UTC)
- Well, of course Stu should try using ethanol, but given that it is a liquid rather than a solid like sugar that dissolves in water, it might defeat the slush goal. I am partial to protein shakes, which I make with skim milk, which does have its own inherent sugar. Perhaps Stu can hint at what his underlying goal is? Oh, and I also love milkshakes made with Breyer's low carb/no sugar added ice cream. μηδείς (talk) 19:24, 11 February 2016 (UTC)
- Two words: margarita, daiquiri. --Trovatore (talk) 19:26, 11 February 2016 (UTC)
- I like the ability of a slushie to cool me down more quickly than just a drink with ice cubes (no doubt because I actually consume the crushed ice in the case of a slushie, rather than waiting for it to melt first). However, I don't want all that sugar. I drink zero calorie iced peppermint "tea", no sugar or artificial sweeteners, so if I could make that into a slushie, that would be ideal. StuRat (talk) 19:51, 11 February 2016 (UTC)
- Sucralose? (Disclaimer, I have never actually tried this.) shoy (reactions) 15:03, 12 February 2016 (UTC)
- I wouldn't want sucralose in anything I intend to consume. StuRat (talk) 19:03, 12 February 2016 (UTC)
- I think the importance of freezing point depression by sugar may be a bit overstated. Have you simply experimented with a no-sugar peppermint slushie? Also, I do think a touch of glycerine may help, as Duncan suggests, and it has a sweet taste too. You can usually find it at larger drug stores, or buy it online. It is not, however, zero calorie [49]. Propylene_glycol is used in many processed foods in small amounts; it's not as easy to come by, though, and some people have concerns about ingesting it, though it is an FDA approved food additive. Some other countries have stricter limits on propylene glycol, hence this recent kerfuffle over flavored whiskey [50]. Ethylene_glycol is of course right out. SemanticMantis (talk) 15:37, 12 February 2016 (UTC)
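For a sense of the numbers: freezing-point depression for a dilute solution is ΔTf = i·Kf·m, with Kf = 1.86 °C·kg/mol for water. A rough Python sketch (the concentrations are illustrative assumptions, not a tested recipe):

```python
KF_WATER = 1.86  # cryoscopic constant of water, degrees C per (mol solute / kg water)

def freezing_point_depression(grams_solute, molar_mass, kg_water, ions=1):
    """Delta-Tf = i * Kf * molality, valid for dilute, near-ideal solutions."""
    molality = (grams_solute / molar_mass) / kg_water
    return ions * KF_WATER * molality

# ~120 g sucrose (molar mass 342 g/mol) per litre of water, a typical sweet drink:
print(freezing_point_depression(120, 342.3, 1.0))  # ~0.65 C of depression

# Glycerol (molar mass 92 g/mol) achieves the same depression with far less mass:
print(freezing_point_depression(32, 92.1, 1.0))    # ~0.65 C
```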
- I do actually have some food-grade propylene glycol. I don't yet have a slushie machine, as I didn't want to buy one unless I could find a good recipe for peppermint tea that doesn't use sugar. I will see if I can find a recipe with propylene glycol. StuRat (talk) 19:07, 12 February 2016 (UTC)
- In Britain you can buy glycerine in most supermarkets and many corner shops, as well as in chemists. DuncanHill (talk) 15:45, 12 February 2016 (UTC)
- Yes it seems more commonly used and sold in UK compared to USA. Not sure why, that might be its own interesting question... SemanticMantis (talk) 18:53, 12 February 2016 (UTC)
- I see you ruled out sucralose, but there's some stevia+erythritol stuff I like to use that is semi-comparable with sugar. Searching erythritol slushie turns up a bunch of references, including a bunch of patents from 2010, since after all, no idea is too obvious to patent; still, courtroom proceedings can be a good thing, provided the defendant shows up nicely manicured and wearing his very finest suicide vest. Wnt (talk) 22:16, 12 February 2016 (UTC)
I don't really want any sweetener at all, if it can be avoided. Now I'm thinking maybe I need to make snow cones out of peppermint tea ice cubes, to get something cold and healthy without involving a chemistry experiment. Snow cones deal with the crystal formation a bit differently, by letting ice crystalize, but just shaving it down so the crystals are small enough to be edible. You do need to eat it with a spoon, though, instead of a straw. StuRat (talk) 22:24, 12 February 2016 (UTC)
Should the mask protect the wearer, or protect others from the wearer?
I saw people in Eastern Europe wearing masks on their faces. My question about this is: does the mask protect the person wearing it, or does it protect the people around them, who worry about sneezing and coughing? This issue is not clear to me. In addition, how can very small viruses and bacteria fail to get through the mask during a sneeze or cough, which are considered powerful? 18:42, 11 February 2016 (UTC) — Preceding unsigned comment added by 93.126.95.68 (talk)
- A mask could either protect the wearer from the world, or the world from the wearer. Our article Surgical mask says that people in Japan who are ill often wear masks to reduce the risk of passing the disease on. Our article has a photo of a situation in the USA where people were not permitted onto public transport unless wearing a mask - and clearly that would be to prevent them from passing on disease rather than for their personal protection from an external source. A simple mask won't prevent all bacteria and viruses from spreading but certainly it would reduce the degree of risk. Our article points out a surprising 'bonus' benefit which is that they "remind wearers not to touch their mouth or nose, which could otherwise transfer viruses and bacteria after having touched a contaminated surface". SteveBaker (talk) 18:54, 11 February 2016 (UTC)
- Thank you for the answer.93.126.95.68 (talk) 13:36, 12 February 2016 (UTC)
- Do you really mean Eastern Europe? I have only seen people in East Asia using them. At least I have never seen them around Poland, Ukraine. --Scicurious (talk) 21:27, 11 February 2016 (UTC)
At what time of year were you there - summer or winter? Of course, I'm not saying that everyone here, or even most people, wear them as you can find in East Asia, but it's not uncommon here, especially at this time of year. 93.126.95.68 (talk) 00:16, 12 February 2016 (UTC)
- I sometimes see people wearing masks here in Auckland, New Zealand, and they are without exception Asians. Akld guy (talk) 00:27, 12 February 2016 (UTC)
- It was surely not winter, but even so, I associate mask-wearing while sick with Asia. I thought it was a kind of respect-for-the-community thing - far from the fuck-you attitude of the West. --Scicurious (talk) 21:15, 12 February 2016 (UTC)
Why do doctors give saline to the patient instead of water?
I know that saline is close to the physiological tonicity of our blood (isotonic), while plain water is hypotonic, but my question is about the administration of liquids through a vein (IV), which is different from taking liquids by mouth. If I understand correctly, water that comes in through the mouth later becomes isotonic. What is the mechanism by which that happens? 93.126.95.68 (talk) 18:48, 11 February 2016 (UTC)
- Great question! Blood contains a balanced mixture of contents from the food you eat (which contains more salt than you need on most modern diets) and from water you drink. In addition, your body gets rid of excess water and/or salt as needed through urine to keep its tonicity at the right level. Blythwood (talk) 19:28, 11 February 2016 (UTC)
- The answer to your headline question is that it's sterile. Doctors don't give saline to drink; they use saline solution in a drip (through the veins, as you put it) in some cases to rehydrate patients (and also to administer medication). I'm not too sure about your question re hypertonic vs hypotonic; as I understand those terms, they are relative to the overall balance of water and salts/sugars in your blood. A biologist will give you a better answer, but in a healthy person I think osmosis will balance the water your cells need, and your internal organs (kidneys in this case) will process and excrete the excess salts Mike Dhu (talk) 19:59, 11 February 2016 (UTC)
- Both the question and the answers above are a bit confusing, but if I'm not mistaken you're asking why it's fine to drink water, but you have to use isotonic saline for intravenous ('through the vein') administration? You are entirely correct that normal saline is isotonic to our blood, i.e. it doesn't disturb the finely regulated levels of solutes in the blood too much. Water is indeed hypotonic to blood, and directly infusing it would rapidly lead to things like hyponatremia or hypokalemia. (Note: there are medical reasons to use hyper- or hypo-tonic saline.) Why doesn't this happen when you drink water? The answer is that it does happen. See water poisoning. The trick is to eat food or another source of salts (oral rehydration therapy) along with the water. This will be digested and absorbed into the blood stream, which, together with the regulatory mechanisms in the kidney, maintains healthy levels in the blood. Of course, with our modern Western diets we generally get too much sodium, leading to problems such as high blood pressure. It's a fine balance. Fgf10 (talk) 20:29, 11 February 2016 (UTC)
- Yes, this is exactly what I meant to ask. Thank you for the answer. But according to what I know (and you can correct me if I'm wrong), normal people can survive on water alone for some days. If, as you say, they need the trick of eating something with the water, how can they survive that long? I thought about two other possible options: 1) The body has a store of salts - as it has for sugar - and when hypotonic water comes in, the body secretes salts. 2) The body takes the water and divides it into parts: it takes the salts it needs for an isotonic liquid, and the rest of the H2O it removes via the urinary system. I would like to get your opinion about this. 93.126.95.68 (talk) 00:12, 12 February 2016 (UTC)
- It's the second. The kidneys regulate the amount of solutes excreted in the urine, to maintain homeostasis. As to why isotonic solutions are used for IVs, it's because if the tonicity of your blood becomes out of whack, it can kill you. Cellular processes will be disrupted, and cells can even rupture. --71.119.131.184 (talk) 00:57, 12 February 2016 (UTC)
- The body can regulate the expulsion of water through the kidneys, and the intake of water through the intestines, to a point. Injecting water intravenously can easily overwhelm your body's sophisticated homeostasis and cause rapid illness, while it normally takes a prolonged period of time to get water poisoning through simply drinking it. That is, your body is equipped to handle large (but not absurd) amounts of water entering through the mouth, but not directly into the bloodstream. And, really, why would it? It's not something that was reasonably possible until somewhat recently in human history. Someguy1221 (talk) 04:53, 12 February 2016 (UTC)
The critical reason for using isotonic saline is to avoid hemolysis. Red cells exposed to hypotonic solutions will lyse, and in sufficient quantity, the hemoglobin can cause kidney injury (hemoglobinuria). There is nothing to prevent the sterilization of deionized or distilled water, and they are used as the diluents for some drugs. Bacteria will grow better in saline solutions (a variety of balanced salt solutions are used in microbiology laboratories) than in distilled water. A 5% (weight/volume) solution of dextrose is also isotonic, and a commonly used solution. Hypertonic solutions may be used when the goal is to replace electrolytes rather than water. The choice of solutions for intravenous therapy is a complex subject. — Preceding unsigned comment added by BiologicalMe (talk • contribs) 18:13, 12 February 2016 (UTC)
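A quick way to see why those particular concentrations are the isotonic ones is to compare their osmolarity with blood plasma (roughly 285-295 mOsm/L). A minimal sketch in Python, using textbook molar masses and assuming idealized complete dissociation (standard values, not figures from the posters above):

    def osmolarity_mosm_per_l(grams_per_l, molar_mass, particles):
        """mOsm/L for one solute, assuming complete dissociation."""
        return grams_per_l / molar_mass * particles * 1000

    # Normal saline: 0.9% w/v NaCl = 9 g/L, ~58.44 g/mol, splits to Na+ and Cl-
    print(osmolarity_mosm_per_l(9.0, 58.44, 2))    # ~308 mOsm/L
    # 5% dextrose: 50 g/L glucose, ~180.16 g/mol, does not dissociate
    print(osmolarity_mosm_per_l(50.0, 180.16, 1))  # ~278 mOsm/L

Both land near the plasma range, which is why both count as isotonic IV fluids; plain water, at 0 mOsm/L, is far outside it.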
February 12
Magnetic hovering spheres
Is it possible to suspend a hollow ferric sphere inside another slightly larger one and have them separated by some sort of magnetic repulsion and have them spin in opposite or even dissimilar directions? — Preceding unsigned comment added by 66.87.83.70 (talk) 02:51, 12 February 2016 (UTC)
I'm going to take a punt on this and suggest no, because I don't see how the internal shell could be supported by repulsion in a stable configuration, or by external fields at all. The second part is easier: yes, if you can suspend the inner sphere, then I'm sure you'll be able to motor it with respect to the outer one. Greglocock (talk) 03:50, 12 February 2016 (UTC)
- Earnshaw's Theorem says there can be no stable configuration of stationary magnetized (or electrically charged) objects. Perhaps it's possible to do what you want if (and only if) the spheres are rotating (similar to what the Levitron does) but I'm not sure about this. It would also be possible if there were some kind of feedback that adjusts the strength of the magnets based on the position of the spheres. Mnudelman (talk) 04:15, 12 February 2016 (UTC)
- If you could suspend one sphere inside another by repulsion alone (which the word "hovering" seems to suggest) I can't see any way of imparting a spin to it as it would be completely enclosed. Richerman (talk) 07:24, 12 February 2016 (UTC)
- You could start spinning the outer sphere, leaving the inside sphere stationary. StuRat (talk) 07:47, 12 February 2016 (UTC)
- Then they aren't spinning in "opposite or dissimilar directions". Richerman (talk) 10:27, 13 February 2016 (UTC)
- From the frame of reference of either sphere they are. StuRat (talk) 17:36, 14 February 2016 (UTC)
- There are several different levitation methods that circumvent Earnshaw's Theorem that are listed in the article on magnetic levitation. Also, since the term ferric simply means "iron containing", a sufficiently cooled outer sphere consisting of an iron-based superconductor can levitate or suspend an inner ferromagnetic sphere via the Meissner effect. --Modocc (talk) 08:47, 12 February 2016 (UTC)
- Earnshaw's Theorem is really a corollary of the n-body problem, and levitation examples represent special cases with constraints on the objects to establish a metastable equilibrium; i.e. it holds stable for long enough on the time frames humans care to stare at it, but it is not perfectly stable over long enough time frames. --Jayron32 15:32, 12 February 2016 (UTC)
- Just so; I think this is the only known method by which the OP's scenario might be theoretically feasible. This is not exactly the scenario envisaged, but it will give an impression of the kind of superconductor/ferromagnet principles involved, for those unfamiliar with the Meissner effect and its recent headline-grabbing "quantum levitation" applications. Snow let's rap 15:03, 12 February 2016 (UTC)
I'm going to ask the obvious question here: How would you know if the inner sphere was rotating? shoy (reactions) 15:05, 12 February 2016 (UTC)
- I think the same way you can tell that a gyroscope is spinning, even when it's encased in an opaque sphere (outer sphere stationary and inner sphere rotating with respect to that frame fits my interpretation of OP's words). The crux of this question is the time frame Jayron hints at. Certainly the OP's situation is possible, if only lasting for a few nanoseconds. Certainly such a configuration will not be stable for billions of years. Beyond that I think someone has to do some nontrivial physics and math. SemanticMantis (talk) 16:20, 12 February 2016 (UTC)
- The thing I don't get here is that Earnshaw's theorem is only about one kind of force at once. But the sphere has weight also! Is there really no way to, I dunno, rig a hollow bar magnet so that the field is strong around the rim and a little weaker inside, and use it to repel one magnetic pole of a sphere that is weighted to keep it from inverting? Wnt (talk) 17:03, 12 February 2016 (UTC)
- (Newtonian) gravity is just another inverse-square force (like electrostatics), so it adds nothing beyond what the theorem already covers. --Tardis (talk) 17:42, 12 February 2016 (UTC)
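The fact Tardis is using - that any inverse-square potential is harmonic, which is what makes Earnshaw's theorem work - can be checked symbolically. A sketch in Python with sympy; 1/r stands in for any point charge, magnetic pole or Newtonian mass (an illustration, not anything from the thread):

    import sympy as sp

    x, y, z = sp.symbols('x y z', real=True)
    r = sp.sqrt(x**2 + y**2 + z**2)
    V = 1 / r  # generic inverse-square-force potential, units dropped

    # Laplacian: the sum of the three second partial derivatives
    laplacian = sp.diff(V, x, 2) + sp.diff(V, y, 2) + sp.diff(V, z, 2)
    print(sp.simplify(laplacian))  # 0, everywhere away from the origin

A potential with zero Laplacian can have no interior minimum, so no static arrangement of such sources can hold the inner sphere in stable equilibrium - hence the loopholes discussed above (rotation, superconductors, feedback).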
- Ok. Forget the magnetic part for now; just replace that with high-efficiency lube and/or some strategically placed ball bearings. The point is to have as little friction as possible. What physics implications, if any, are present with this system? Does this change the weight/density of the system? Any other observations that a lay person might not consider? 66.87.81.187 (talk) 19:54, 12 February 2016 (UTC)
- Think about the two dimensional equivalent, say two flywheels on the same axle, one larger than the other. They really aren't a very interesting system to investigate. Greglocock (talk) 22:51, 12 February 2016 (UTC)
- I recently saw a show in which a guy lifted a 200 lb. or some such extraordinary weight completely above his head with one hand for at least 3 seconds. He used a drill gun to get a giant flywheel spinning really fast, and that enabled him to lift the very heavy wheel. He was not able to control it, largely (I believe) because it was spinning in one direction. If there were another wheel of equal mass inside it, spinning to counteract the imbalance, could he hold it longer than 3 seconds? 66.87.81.187 (talk) 02:57, 13 February 2016 (UTC)
- As far as I know, the only way a flywheel would help you hold up the weight (assuming it wasn't actually acting as a propeller, of course) is that it helps keep it stable. Probably the guy was just uncommonly strong, and he could hold 200 lb above his head with one hand for three seconds — that's not unbelievable for a serious weight lifter.
- If you cancelled the gyroscopic effect as you suggest, then I don't see how you would get any benefit at all from it. It would cancel out the stabilization, and be as hard to hold as if the flywheels were stopped. --Trovatore (talk) 04:13, 13 February 2016 (UTC)
- He couldn't lift it with two hands when it was not spinning, let alone with one hand, well over his head, for an extended time. I wish I could locate the clip. 66.87.81.187 (talk) 06:50, 13 February 2016 (UTC)
- Word of warning: Video clips showing amazing "sciency" things like that are very, very frequently faked - sometimes cleverly, other times not so cleverly. Easily more than half of these things on YouTube are faked. Gyroscopes seem almost magical - but their effect is only on the resistance of the system to rotation - physically moving (technically 'translating') any arrangement of such contraptions is no easier or harder than if they aren't spinning. If what you think you saw was possible, we'd be using them as propulsion systems in spacecraft rather than just for attitude control. SteveBaker (talk) 17:40, 13 February 2016 (UTC)
- 'preciate the concern. But this was on a reputable science show on TV. It was rather new episode so it will no doubt be on again. 66.87.80.61 (talk) 22:05, 13 February 2016 (UTC)
- THIS appears to be an example of that kind of thing. It's not a 200 lb flywheel, it's only 40 lb - but I think the effect is as we'd expect. Specifically, note that he's not holding the apparatus stationary above his head - it's on a trajectory that he appears to have little control over. Still, it's not exactly trivial to explain what's going on...sadly, that's rather typical of gyroscopes. The math ain't pretty! SteveBaker (talk) 04:11, 14 February 2016 (UTC)
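For a feel for the sizes involved, gyroscopic precession is easy to estimate: the precession rate is the gravity torque divided by the spin angular momentum. A back-of-the-envelope sketch in Python; the 40 lb mass matches the video above, but the radius, grip distance and spin rate are invented for illustration:

    import math

    m = 18.0      # kg, ~40 lb flywheel
    r = 0.30      # m, disc radius (assumed)
    d = 0.30      # m, hands-to-hub distance (assumed)
    rpm = 3000.0  # spin rate a drill might manage (assumed)

    I = 0.5 * m * r**2              # moment of inertia of a solid disc
    omega = rpm * 2 * math.pi / 60  # spin, rad/s
    precession = (m * 9.81 * d) / (I * omega)  # torque / angular momentum
    print(precession)               # ~0.2 rad/s, i.e. ~2 rev/min

The spin never makes the wheel lighter - the holder still supports its full weight - it just trades a fast topple for a slow precession, which is why the apparatus wanders along a trajectory rather than being held still.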
Any disadvantage of graphite graphene batteries?
Is the whole hype around the graphite graphene battery warranted? Is there any known - and maybe insurmountable - disadvantage to it? --Scicurious (talk) 19:30, 12 February 2016 (UTC)
- Maybe that it does not exist? Ruslik_Zero 20:38, 12 February 2016 (UTC)
- I think OP may be talking about Dual_carbon_battery, in which case the cited references are a decent place to start. SemanticMantis (talk) 20:47, 12 February 2016 (UTC)
- I actually meant graphene, and graphene batteries. --Scicurious (talk) 21:12, 12 February 2016 (UTC)
- The cynic in me is saying: you could do anything with graphene, if you could do anything with graphene. (It used to be 'nanotubes'...) Wnt (talk) 22:07, 12 February 2016 (UTC)
- Sure, but there is a debunking reaction when a technology is not living up to its expectations. I am not seeing this happening for graphete (yet). --Scicurious (talk) 22:52, 12 February 2016 (UTC)
- What's graphete? Anyway in terms of graphene, this source says "Graphene-based solutions have so far been notoriously difficult to manufacture on a large scale, thanks in part to the difficulty of isolating high-quality graphene" [51].
From what I can tell, there isn't really such a thing as a graphene battery (or cell). There are multiple proposals to use graphene in chemical cells, but these are usually just to coat the anode or cathode, perhaps to enable the use of different anodes or cathodes [52] [53] [54] [55] [56] [57] [58], most commonly for some variant of lithium cells, although there are also other proposals like sodium, and some use graphene in supercapacitors.
In the real world, any commercial usage of graphene in batteries is probably several years away at a minimum (e.g. [59]). Most of the sources seem to be in the form of "this battery may be better than what we have currently, if we can produce it commercially at a decent price, resolve any problems, and after doing all that still have something better than the current state of the art - which includes doing it all before someone finds something better". There are a few in the form of "perhaps we can use this to produce such batteries commercially" [60].
It's not so easy to "debunk" something which is actually a lot of different proposals, where all that most of them are saying is "we may be able to do this one day if we overcome all the obstacles". You may be able to find some informed discussion about the chances of success, but really it can be quite difficult to know at such an early stage. Actually, a lot of these sorts of things are never really debunked, as debunking doesn't make much sense if the published studies are real and accurate but what they report is still quite far from a commercial product. If you're lucky - and it seems there is enough hype and talk about graphene that this may happen - there will be a fair amount of future analysis of why something never really achieved success (which is different from debunking). But there are a lot of technologies where all that ever happens is a bit of hype and talk; they are never able to overcome the obstacles to commercial production or usage, and you never really hear about them again.
There is this review [61] although it's a bit old now.
Of course it's difficult to know for sure what's going on in the various commercial/private labs, and there is one company talking of a graphene battery or supercapacitor or fuel cell or something [62] [63] [64] [65], but they don't seem to have a commercial product, and frankly such hype gets boring. (And it's also fairly difficult to debunk when they have provided so little info and no products to test, except for possible flaws or lies in their demonstrations.)
Hobbyking claim to be selling Turnigy graphene batteries, but I don't think anyone really believes there's any significant use of graphene in them (which I guess you could call debunking) [66] [67]. While Hobbyking and Turnigy aren't quite as bad as many other Chinese sellers, and tend to sell batteries with capacities that normally aren't far from advertised, they can still do bullshit marketing.
[68] actually says pretty much the same thing as Wnt: graphene seems more similar to carbon nanotubes (i.e. much hype but not actually used for much) than to silicon (i.e. one of the most important materials of the modern age). It seems someone in China was also claiming to produce a smartphone with several major components, including the battery, using graphane [69] last year, but whatever happened to this, it didn't seem to receive much attention.
- What is graphane? Anyway, thanks for your answer.Scicurious (talk) 13:20, 13 February 2016 (UTC)
- See our articles: Graphane and Graphene - it's not just a typo. Graphene is a mono-atomic sheet of pure carbon - Graphane has hydrogen atoms in the two-dimensional lattice. (Oh - and there is also a theoretical material called Graphyne...similar deal). SteveBaker (talk) 17:21, 13 February 2016 (UTC)
- OK, graphane exists and it could also store energy in the form of hydrogen. But this is only a potential use and not a reality (yet). Scicurious (talk) 21:34, 13 February 2016 (UTC)
- Although in this case it was just a typo, sorry for any confusion. Nil Einne (talk) 14:19, 14 February 2016 (UTC)
Elemental composition and electrons in a white dwarf star, etc
What is the elemental composition of a white dwarf? I figure there is at least some hydrogen left. How fast are electrons traveling in a white dwarf star? How greatly does their mass increase due to relativistic effects? Does time dilation cause any interesting things? (I have heard that electrons are believed absolutely stable, so time dilation wouldn't affect decay rates, but I wonder if minor or subtle things can happen that are interesting.) In a white dwarf, could electrons be under so much pressure that they occupy shells inside a nucleus, instead of occupying electron shells that "orbit" the nucleus? Are interesting chemical compounds or crystals formed due to the increased electron mass? 155.97.8.168 (talk) 23:50, 12 February 2016 (UTC)
- Did you read White dwarf, which tells a bit about the elemental composition? The material is electron-degenerate matter. This is a gas or plasma, and you would not expect molecules or crystal structures to be present. The surface, which is what we see, is not under such pressure, and atomic gases are what are observed there. Graeme Bartlett (talk) 01:47, 13 February 2016 (UTC)
- Yes I did read it, it tells a bit. I was hoping to learn more.155.97.8.169 (talk) 03:15, 13 February 2016 (UTC)
- A white dwarf is the leftover core of a star that was more massive than a red dwarf but not massive enough to go supernova. So, the composition is that of the star's core, which is the "ash" produced by the fusion reactions during the star's lifetime. See Stellar evolution#White and black dwarfs for details. You were on to something with the speculation about electrons being "forced" into the nucleus. This does happen in even more massive stars. But when it does, the electrons and nucleons react, in electron capture. This is a "bad thing" for the star, because the electrons are helping to support the star against collapse. Stars that wind up as white dwarfs aren't massive enough for their gravity to cause electron capture, and so you wind up with a big ball of plasma supported by electron degeneracy pressure. More massive stars overcome this, and you get either a neutron star or black hole. You might find Crash Course Astronomy informative. --71.119.131.184 (talk) 11:55, 14 February 2016 (UTC)
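To put a number on "how fast are electrons traveling", one can estimate the Fermi momentum from the density. A rough sketch in Python, assuming a typical white-dwarf density of about 10^9 kg/m^3 and a carbon/oxygen composition (~2 nucleons per electron) - both assumptions for illustration, not figures from the thread:

    import math

    hbar = 1.055e-34  # J s
    m_e = 9.109e-31   # kg, electron mass
    m_u = 1.661e-27   # kg, atomic mass unit
    c = 2.998e8       # m/s

    rho = 1e9              # kg/m^3, assumed density
    n_e = rho / (2 * m_u)  # electrons per cubic metre
    p_f = hbar * (3 * math.pi**2 * n_e) ** (1 / 3)  # Fermi momentum

    x = p_f / (m_e * c)                # relativity parameter, ~0.8
    v_f = c * x / math.sqrt(1 + x**2)  # speed at the Fermi surface
    print(v_f / c)                     # ~0.6

So the fastest electrons are already mildly relativistic in an ordinary white dwarf, and it is this relativistic softening of the degeneracy pressure that limits how massive a white dwarf can be (the Chandrasekhar limit).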
Energy from black holes merging
News stories say that a gravitational wave detector, LIGO, picked up gravitational waves when 2 black holes merged 1.3 billion light years away. The merged black hole was reported to have less mass than the sum of the 2 original black holes, to the extent of three solar masses, which was released as energy in the form of gravity waves. If the Sun's mass is about 2 × 10^30 kilograms, then per E=mc^2 this would represent a release of about 5.4 × 10^47 joules, or about 1.3 × 10^32 megatons of TNT, per a site which does the calculation for lazy types like me. This was said to cause a barely detectable vibration in ultrasensitive detectors here, but what would it have looked like/felt like for an observer who was much closer to the disturbance? That is a hell of a lot of energy, but electromagnetic waves like light or heat would seem to have trouble escaping the combined gravitational field. (I understand that there is electromagnetic radiation from stuff falling into a black hole without there being a merger, but I'm more interested in the gravity wave's effects.) The observed waves caused a change of about 1 part in 10^20 at a distance of 1.3 billion light years, so by the inverse square law it would seem that at 13 light years distance, the effect should be about 1 part in 10 (please check the math). What would that feel like? If one's head and feet were changing their distance by one tenth many times per second, would the observer be smushed into jelly? Would the wave be doing work and transferring energy to everything around it? Or, since a measuring stick checking the distance from head to feet would also be changing its dimensions, would there be no work done and no energy transferred? Edison (talk) 23:57, 12 February 2016 (UTC)
- Note : edited above to say 3 solar masses rather than 3 earth masses. Edison (talk) 00:07, 13 February 2016 (UTC)
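For anyone checking the arithmetic, a minimal sketch in Python (rounded standard constants; one megaton of TNT taken as 4.184 × 10^15 J):

    c = 2.998e8       # m/s, speed of light
    m_sun = 1.989e30  # kg, solar mass
    E = 3 * m_sun * c**2  # energy equivalent of the ~3 solar masses radiated
    print(E)              # ~5.4e47 J
    print(E / 4.184e15)   # ~1.3e32 megatons of TNT

which matches the figures in the question.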
- I don't know if they would be turned into jelly, but it seems to me that the molecules aren't moving because of it; it is space itself that moves (gets rippled). Tgeorgescu (talk) 00:36, 13 February 2016 (UTC)
I'm also not sure that the gravity wave itself would have done that. It's warping all of space - so everything that's not massless shouldn't really notice. My feeling is that you'd see an abrupt red-shift and then blue-shift of light - then nothing special...and even that would be so brief, I'm not sure we'd notice it. SteveBaker (talk) 02:52, 13 February 2016 (UTC)
- I believe you are way off on this. Changes in the curvature of spacetime are felt as tidal forces, so my first instinct is to suggest that a sufficiently close observer would be ripped to shreds, like a high-frequency version of spaghettification. Though I'm no physics expert, I'm not sure if Edison's estimate holds for this phenomenon, though it's the same calculation that occurred to me when I read the paper. Someguy1221 (talk) 05:25, 13 February 2016 (UTC)
- The Sticky bead argument confirms that energy is transferred during the process - and energy generally results in damage - so I believe you are correct. (In my defense, I did say I wasn't sure!) SteveBaker (talk) 17:17, 13 February 2016 (UTC)
- Here is the press release from Caltech, and here is the paper in Physical Review Letters: Observation of Gravitational Waves from a Binary Black Hole Merger. These are the authoritative primary sources of information on the event. I'm not sure the answer to User:Edison's question is actually known to the scientists who published the finding. They did not discuss the effects of the event on objects close to the source. Probably the best answers will be found in The Astrophysical implications... , which is cited as the best discussion of the astrophysical implications of the source of the event, consistent with the best numerical models available. Nimur (talk) 03:16, 13 February 2016 (UTC)
- The merged black hole does not lose mass compared to the rest mass of the two merging black holes. The energy that is radiated comes from gravitational potential energy which just before they merge will have been turned mostly into kinetic energy with them whizzing round each other at a very high speed. Energy and mass are equivalent by Einstein's famous equation. Dmcq (talk) 10:43, 13 February 2016 (UTC)
- The power flux coming from the merger (as measured in joules per second per square metre) falls off according to the inverse square law, as it has to for conservation of energy. This quantity however is proportional to the square of the amplitude of the wave. Amplitude is what is measured (in dimensionless units) and reported as 10^-21, and falls off inversely proportional to distance. At 1 million kilometres from the source, amplitude is still only 10^-5. That would feel like sitting on a large speaker box. At 1000 km, the amplitude is about 1%, which may be enough to break your bones. Static tidal acceleration (which goes as da/dr = 2GM/r^3) at this distance would be about 16,000 s^-2, or 1,600 g per metre, which will rip you apart. An ant might survive. Anyway, X-rays from gas falling into the black hole(s) would have killed you before reaching that point.
- Molecules sitting stationary in space, without any acceleration, would vibrate relative to each other when a gravitational wave passes. They wouldn't feel any acceleration directly, but if the molecules are part of a measuring rod, they will feel the resulting elastic forces in the measuring rod. PiusImpavidus (talk) 13:20, 13 February 2016 (UTC)
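Those figures can be reproduced by scaling the observed strain as 1/r and computing the static tidal gradient 2GM/r^3. A sketch in Python, assuming the reported peak strain of ~10^-21 at 1.3 billion light years and the roughly 62 solar mass final black hole:

    G = 6.674e-11      # m^3 kg^-1 s^-2
    M = 62 * 1.989e30  # kg, final black hole mass
    ly = 9.461e15      # metres per light year
    h0, r0 = 1e-21, 1.3e9 * ly  # observed strain and source distance

    for r in (1e9, 1e6):          # 1 million km and 1000 km, in metres
        h = h0 * r0 / r           # amplitude falls off as 1/r
        tidal = 2 * G * M / r**3  # s^-2, differential acceleration per metre
        print(f"r={r:.0e} m  h={h:.0e}  tidal={tidal:.0e} s^-2")
    # gives h ~ 1e-5 at a million km, and h ~ 1e-2 with tidal ~ 2e4 s^-2
    # at 1000 km, matching the numbers above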
- Apparently, between the 1920s and the 1950s there was disagreement among physicists about whether gravitational waves actually transfer energy to physical objects. The sticky bead argument by Feynman finally convinced almost everyone that this does actually occur. Mnudelman (talk) 15:35, 13 February 2016 (UTC)
- A working astrophysicist has published the answer I was seeking in a Forbes article. He says it is based on a back-of-the-envelope calculation. Unfortunately, at the time I asked the question, all envelopes within my reach had their reverses totally covered in dense calculations. (I am not kidding). See "Could Gravitational Waves Ever Be Strong Enough To Feel?" by Brian Koberlein, astrophysicist and Senior Lecturer of Physics and Astronomy at the Rochester Institute of Technology. He says that if the observer were 1.3 light years from the event, "The entire Earth would shift in diameter by about a hundredth of a millimeter", with modest effects. He says that at 10,000 kilometers from the event, an observer would experience a variation of one part in 1000. He does not mention the deadly static tidal acceleration, spaghettification, or radiation described by some above. Power being the square of the amplitude, the amplitude varies inversely with distance rather than as the inverse square, which makes sense - just as electrical power varies as the square of voltage. Edison (talk) 02:57, 14 February 2016 (UTC)
- It starts to sound like the other dangers of being a few thousand kilometers away from a pair of spinning/converging black holes means that your last concern is going to be the gravity waves! SteveBaker (talk) 03:37, 14 February 2016 (UTC)
- Before anyone says "AAAH! If there is a black hole, THE TIDAL FORCES WOULD TOTALLY SPAGHETTIFY YOU!", they should specify the distance from the black hole and calculate the relative gravitational acceleration of the observer's head and feet. If the observer is distant enough that the head and feet accelerate about the same, then there would be little "spaghettification". Edison (talk) 04:27, 14 February 2016 (UTC)
- It's not just the tidal forces - consider also the radiation. SteveBaker (talk) 16:07, 14 February 2016 (UTC)
- PiusImpavidus did calculate it above and got 1,600 gees/meter of static tidal force at 1000 km, which is around the distance where the gravitational wave might start to damage a human body, so it's clear that the gravitational wave is not a big concern for this merger. But since the gravitational wave amplitude falls off as 1/r, the ionizing radiation as 1/r^2, and the tidal force as 1/r^3, it seems that for a sufficiently large (maybe impossibly large) black hole merger, there might be a distance range where the gravitational wave would kill you and the others wouldn't.
- PiusImpavidus and Koberlein didn't mention that the effect of the gravitational wave is not simply to stretch/squash you to some percentage of your original length. The value h ~ 10^-21 that LIGO pretends to measure is actually unphysical, as pointed out in a thread below this one. It's more accurate to say that you're stretched by h''(t), which has units of s^-2 = (m/s^2)/m like the static tidal force. If your natural frequency is much lower than the gravitational wave frequency, which is certainly true of LIGO, then it makes little difference and you might as well say you feel h. But the speed of sound in water is ~1500 m/s (and even higher in bone), while the maximum frequency of this chirp was only 150 Hz, so I think the effect on a human body would be smaller than you'd guess by just looking at h_max. -- BenRG (talk) 07:50, 14 February 2016 (UTC)
- Another published popular science piece by an astrophysicist on the physical effects: at Gizmodo, Dr. Amber Stuver of the LIGO Livingston Observatory in Louisiana says "...assume that we are 2 m (~6.5 ft) tall and floating outside the black holes at a distance equal to the Earth's distance to the Sun. I estimate that you would feel alternately squished and stretched by about 165 nm (your height changes by more than this through the course of the day due to your vertebrae compressing while you are upright). This is more than survivable." Edison (talk) 13:35, 14 February 2016 (UTC)
- If someone is so inclined, it might be interesting to consider the gravitational wave impact of merging supermassive black holes, i.e. 10^5 times larger masses than the black hole merger currently observed. If you want to imagine feeling a gravitational wave, that is probably the best case for it, though I don't know if there is a distance at which a person would be close enough to feel the passing wave but far enough away to survive all the other impacts of being near a supermassive black hole. Dragons flight (talk) 21:13, 14 February 2016 (UTC)
February 13
Under what conditions will a Grignard or sodium hydroxide polymerize an epoxide?
Nucleophiles like Grignards and organolithium reagents open up the epoxy ring, and so do alkoxides. But when the epoxy ring opens, a new alkoxide is generated, which can then open another epoxy ring ... I'm aware epoxides are used as plastics but apparently this is done under conditions of "curing" and isn't as simple as adding base. Which makes me wonder -- what prevents the newly-formed alkoxy species (especially in aprotic solvent) from attacking another epoxide, which then will create a new alkoxy species to attack yet another epoxide? For example, if my intent is to react ethylene oxide with n-butyllithium, what's to prevent polyethylene glycol polymerization as a side reaction? Yanping Nora Soong (talk) 02:20, 13 February 2016 (UTC)
- Anionic polymerization of epoxides to make -CH2CH2O- polyether chains (with various substituents in place of the various H) is well known. One can consider concentration, reactant-ratio, competing nucleophilicity, and counterion effects. DMacks (talk) 22:04, 13 February 2016 (UTC)
Are PUFAs in cooking oils harmful?
Is the consumption of too much PUFA harmful to health? The Wikipedia article polyunsaturated fat has no mention of any negative health effects. But I found some references, none of them in reputable journals, which say too much PUFA is harmful. According to this article, PUFAs can cause free radical damage, excessive skin pigmentation, damage to pancreas cells, impaired protein digestion, and liver damage. --IEditEncyclopedia (talk) 03:08, 13 February 2016 (UTC)
- Too much of anything is harmful for health. The source you cite is only the tiniest step up from NaturalNews - notice the author's list of recommended health books is a head-spinning catalogue of woo. Shock Brigade Harvester Boris (talk) 03:19, 13 February 2016 (UTC)
- There are 14 reviews and meta-analyses of the effects of PUFA on human health published in mainstream journals, some of which are free to read online. None of them find any dangers to PUFA consumption, even in babies. Now as Boris said, the poison is in the dose. Eat enough of anything and you'll probably get sick. And I'll note that the author of that article you linked got all of his information from the Weston A. Price Foundation, which is well known to advocate dietary advice that is completely divorced from science. Someguy1221 (talk) 05:43, 13 February 2016 (UTC)
Cabin pressure and humidity in private jets
In airliners, the cabin pressure is kept slightly lower than normal ground pressure for cost-saving reasons. This causes barely noticeable discomfort for passengers.
Is the same thing done on private jets? In this case, the passengers are presumably rich enough to pay for the comfort of 100% ground pressure.
The same question also applies to humidity. Airliner cabin air is pretty dry due to, again, cost-cutting measures. What about private jets? Johnson&Johnson&Son (talk) 06:18, 13 February 2016 (UTC)
- According to [70], cabin pressure on private jets can be kept much closer to sea level than is typically done on commercial jets. However, maintaining standard atmospheric pressure is not always possible, as cabin pressure is fundamentally limited by the power output of the plane's engines. Someguy1221 (talk) 06:58, 13 February 2016 (UTC)
- I didn't realize that maintaining cabin pressure, whether for an airliner or an ultra-comfy private jet, took that much engine power. I thought aircraft cabins were pressure vessels, so maintaining a particular pressure (within design limits) would be "free" in the energy sense? Johnson&Johnson&Son (talk) 07:08, 13 February 2016 (UTC)
- (I realise it's not "free" in the financial sense since stronger pressure vessel => more weight => more lift required => larger wings and/or bigger engines => higher costs.) Johnson&Johnson&Son (talk) 07:10, 13 February 2016 (UTC)
- "maintaining standard atmospheric pressure is not always possible, as cabin pressure is fundamentally limited by the power output of the plane's engines" is absolute bollocks in context. The power to pressurise the cabin is small compared with the power output of the engines. Greglocock (talk) 11:36, 13 February 2016 (UTC)
- If you're an expert, provide a better context or a better source. Someguy1221 (talk) 22:34, 13 February 2016 (UTC)
- Actually, let me clarify this myself, given that it was stated in poor context. The cabin pressure is fundamentally limited by the electrical power output of the plane's engines, the efficiency of the cabin pressurization system, the air-tightness of the cabin, and the ability of the fuselage to withstand the stress of pressure differentials. Someguy1221 (talk) 08:50, 14 February 2016 (UTC)
- (edit conflict) Stress is a big part of it - reducing pressure relaxes stress on the fuselage. See Cabin pressurization. Our article on bleed air (the air pumped out of the engines to power other systems) says that running cabin pressurization from the engines reduces their efficiency (although it doesn't say by how much), but that many modern aircraft use electric pumps instead (which of course still need to get their power from the engine, but apparently this air is easier to treat, i.e. to get to the right temperature and pressure). Smurrayinchester 11:46, 13 February 2016 (UTC)
- Why not have a look at some promotional literature for several common types of small jet?
- The Gulfstream 650ER is arguably the best that money can buy, (unless you're a serious super-billionaire who can afford a private Boeing). "At a cruise altitude of 45,000 feet/13,716 meters, a G650ER cabin is pressurized to an altitude of 4,060 feet/1,237 meters. That cabin altitude is almost two times lower than commercial airlines and significantly better than any non-Gulfstream aircraft in the large-cabin class." Forbes Magazine believes the sale price of this aircraft to be around $65,000,000, although as a general rule, if you want to get an accurate figure, you'd have to fly to the sales office, presumably in a credible aircraft.
- Somewhere down the line is the Cessna Citation CJ4. "The CJ4 features separate temperature-control zones for the cockpit and cabin, and digitally managed pressurization that maintains a sea-level cabin up to an altitude of 21,067 feet."
- A very nice, slightly more affordable aircraft is the Learjet 75. "The aircraft’s pressurization system provides a sea-level cabin up to 25,700 feet (7,833 m) and a maximum 8,000-foot (2,438-m) cabin altitude at 51,000 feet (15,545 m)."
- The KingAir is not a "jet," but is a turboprop and uses Jet-A; pilots of the KingAir can log turbine time. Textron includes this helpful documentary, Transitioning from Piston to Jet, on their sales page. The KingAir cabin pressurization controls are a little different from those of the more expensive jets: this article from Flying magazine explains: "One reason is that with five psi maximum cabin pressure, the cabin altitude is near 12,500 feet when flying at 30,000 feet. That's legal, but not comfortable for every pilot." (A worked version of that psi-to-cabin-altitude conversion appears after this list.)
- About the smallest pressurized cabin you can buy is a Mooney. An M20R is decidedly not a jet, but it will set you back a bit more than half a million dollars, brand new. I found a very nice 1969 Mustang for sale on Trade-A-Plane - N7727M, $135,000 - that has a pressurization switch and an automatic isobaric valve to keep your plane from exploding if you try to pressurize above 5 psig (roughly 7,500 feet cabin pressure altitude at the aircraft's service ceiling). It is incredible what a little money can buy.
- Nimur (talk) 16:38, 13 February 2016 (UTC)
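As a sanity check on the KingAir numbers quoted above, the conversion from a pressure differential to a cabin altitude follows from the standard-atmosphere model. A sketch in Python (ISA troposphere constants; the 5 psi differential comes from the Flying article, the rest is textbook):

    P0 = 101325.0  # Pa, sea-level standard pressure

    def isa_pressure(alt_ft):
        """Outside air pressure at a given altitude, ISA troposphere."""
        h = alt_ft * 0.3048  # feet to metres
        return P0 * (1 - 2.25577e-5 * h) ** 5.25588

    def pressure_altitude_ft(p):
        """Invert the model: a pressure back to its equivalent altitude."""
        return (1 - (p / P0) ** (1 / 5.25588)) / 2.25577e-5 / 0.3048

    cabin = isa_pressure(30000) + 5 * 6894.76  # outside air plus 5 psi
    print(pressure_altitude_ft(cabin))         # ~12,000 ft cabin altitude

which is close to the "near 12,500 feet" the magazine quotes.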
- Gulfstream's marketing appears to be outdated or misleading. The Boeing 787 Dreamliner is capable of 6,000 feet/1,830 m at 43,000 feet/13,006 m (service ceiling) [71] [72] [73]. I'm not sure what it would be at 45,000 feet, but it's not likely to be almost two times 1,237 m. The 787 is definitely in commercial airline service now, and the ref didn't say anything about most. The Airbus A350 XWB is similar [74], although there are fewer in commercial service at the moment. Interestingly enough, Cabin pressurisation suggests the median pressurisation of both the 747 and A380 is actually below the 6,000 feet level that Boeing claims is sufficient (as higher gives diminishing returns), although that probably means a fair few were higher, at least for the A380. Some sources like [75] claim 5,000 feet for the A380, but that is contradicted by the study, and I can't find any advertising from Airbus about it, which suggests to me it's probably wrong. One interesting thing that the source does mention is the SyberJet SJ30, which is supposed to have a sea-level cabin to 41,000 feet/12,497 m, although I'm not sure how easy it is to buy one (well, unless you trust that they will deliver the new ones). I guess you could become friends with Morgan Freeman [76] [77] [78]. BTW, some of those sources and [79] do discuss how passenger comfort is complicated and you shouldn't just focus on pressurisation. (The sources also discuss pressurisation levels for various airplanes, but you may want to check this info, per earlier.) Nil Einne (talk) 19:35, 13 February 2016 (UTC)
- Indeed, fair points all around, Nil Einne. One hopes that the buyers of expensive jet aircraft will do a little independent research with multiple sources, and fact-check all the advertisement claims. It is very true that new airliners like the Boeing 787 and the Airbus A380 do provide higher cabin pressurization; this was a major marketing advantage for these new aircraft. Here are some details: Boeing 787 from the ground up, and the 787 No-Bleed System Architecture, both articles from Boeing's corporate communications magazine. Here's an article from airline Lufthansa: Cabin Air Circulation. I can't emphasize enough that the 787 is only a few hundred million dollars more expensive than the G650-series; so it's bound to have a few enhanced systems. My point, really, is to express that aircraft passenger comfort systems have an incredible range from the very low-end to very high-end. In principle, nothing prevents you from operating an A380 or a 787 or similar large passenger airliner as a private jet; well, it helps if you're some kind of Amir. A handful of the most extravagant movie stars and business-people might fall onto that price-point, too. Most ordinary billionaires would find a smaller jet suitable for their needs, and a lot cheaper to operate. I suspect Gulfstream isn't expecting its clientele to be comparing their aircraft against performance specifications of a wide-body superjumbo. Nimur (talk) 22:08, 13 February 2016 (UTC)
- If you think the discomfort from the change in air pressure is "barely noticeable", try sitting near a screaming baby when its ears are popping due to the pressure change. ←Baseball Bugs What's up, Doc? carrots→ 17:22, 13 February 2016 (UTC)
- On that topic - there is a great deal of misinformation regarding the required use of a safety restraint or seatbelt for a child when flying on a "private jet" - rather, when the child is a passenger on a flight conducted under Part 91 (or Part 135). Subsequent to one particularly severe fatal accident in 2009, FAA has clarified their seat belt guidance. This is a frequently-asked question in general aviation discussions. 14 C.F.R. §91.107 was amended in 2014; this is a good reminder to always check a current copy of the FARs, especially if you are going to be operating a private jet with small babies on board. Small children under two, like sport parachutists, are among the very few individuals who are especially called out as special exceptions to the normal rules for safety restraint; these individuals aren't impervious to accident, but federal regulations are a little bit more lenient regarding what safety equipment they require.
- If the small babies are also sport parachutists, they may use the floor of the aircraft as a seat. In such a case, the operator of the aircraft would be wise to consult an aviation attorney for a professional opinion clarifying the applicable rules, among other reasons.
- Nimur (talk) 17:59, 13 February 2016 (UTC)
- It's not my fault that you can only afford shitty economy seats.Johnson&Johnson&Son (talk) 02:07, 14 February 2016 (UTC)
Dissolving Low-density polyethylene
So I have a project. I am making a tea infuser out of polyethylene (LDPE, to be exact). I've made the overall shape, and now I want to give it a matte finish. The model is too complex to be sanded efficiently, so I am looking for an alternative way to make it less glossy. I think it might be possible to apply some chemicals that will dissolve the top glossy layer of plastic and thus make it matte. Unfortunately I have no idea what those chemicals might be. So I am stuck with two questions:
Firstly, I would like to know whether you think it is even possible to give a matte finish to a polyethylene part by rubbing it with, or dipping it into, some chemicals. What chemicals could those be? Maybe strong acids, or even (possibly heated) paint thinner?
The second question is whether my tea infuser would still be food-safe after I apply said chemicals. If I understand correctly, to dissolve polyethylene I have to subject it to a chemical reaction. Does this mean I will end up having a part made of something other than pure LDPE? Can all the leftover chemicals be washed off after I get the desired finish? 46.138.235.28 (talk) 11:32, 13 February 2016 (UTC)
- LDPE is pretty tough, and most solvents won't have an effect on it. That said, my experience (with vapour smoothing 3D printed parts, which means PLA and ABS) is that using solvents makes plastic look glossier, not matte. Rough sanding makes things go cloudy because it's a very random process - some parts get rubbed by the grains, some don't, so there isn't a flat surface to reflect light. Using solvents dissolves the surface evenly (and makes it flow a bit) so you get a glassy finish. Smurrayinchester 11:59, 13 February 2016 (UTC)
- Another way to think of how that works (and a kind of spherical cow idealization) is to imagine a surface that's already perfectly flat and smooth with tiny little cube-shaped bumps on it. Let's mentally divide the surface into a grid that's the same size as the little cubical 'bumps'. (If you ever played Minecraft, you should have a mental model for what I'm saying.) If a chemical that dissolves the surface is applied, then the grid cells that are already flat will get dissolved away at the exact same rate everywhere - so the surface will stay flat. But a grid cell containing a bump will be dissolved away simultaneously on all five exposed faces of the cube. So the material in that cell is removed five times faster than for the already smoothed cells...and that results in bumps flattening out much faster than the general surface is eaten away. The net result is that the surface ends up smoother.
- Now suppose that there are cube-shaped holes in the material. The base of the hole only gets lowered slightly - but the four adjacent grid cells get attacked from above and from the side - so they dissolve away faster than the surrounding area, and that abrupt hole gets smoothed out laterally...and again, the surface ends up smoother.
- So where the surface is initially smooth, it stays smooth - but where there are abrupt changes in surface shape, a uniform erosion process will tend to make them flatter.
- A uniform erosion process clearly doesn't make the surface rougher - so rapidly heating the surface until it flows would have much the same effect. You need some source of randomness - which is what sanding or sand blasting does. I wonder if, in your case, applying an active chemical, mixed into a paste with some inert material (like sand) would produce the desired degree of randomness? SteveBaker (talk) 17:03, 13 February 2016 (UTC)
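That cube-bump argument is easy to turn into a toy simulation. A sketch in Python (a 1-D height map; each cell erodes in proportion to its exposed area, one unit for the top face plus any side wall standing above a neighbour; all numbers invented for illustration):

    heights = [5, 5, 9, 5, 5, 1, 5, 5]  # flat surface with a bump and a hole
    rate = 0.05

    for _ in range(40):
        new = []
        for i, h in enumerate(heights):
            left = heights[i - 1] if i > 0 else h
            right = heights[i + 1] if i < len(heights) - 1 else h
            exposed = 1 + max(h - left, 0) + max(h - right, 0)
            new.append(h - rate * exposed)
        heights = new

    print([round(h, 1) for h in heights])
    # the bump erodes fastest and the hole's rim rounds off sideways,
    # while flat cells just sink uniformly - uniform attack smooths

It also shows why a uniform chemical attack cannot roughen a surface: with no randomness there is nothing to create new relief, which is the point of mixing in an abrasive.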
- A tea infuser is placed in a cup of boiling water. Water boils at 100 °C. LDPE withstands temperatures of 80 °C continuously and 95 °C for a short time. Making a tea infuser out of LDPE is courting disaster. AllBestFaith (talk) 18:16, 13 February 2016 (UTC)
- Indeed, I would avoid putting LDPE in contact with boiling water. Although I can find a few sources that say LDPE withstands boiling water, WP:OR - it will rapidly deform and/or melt when exposed to even these moderate temperatures near 100°C. Here is a source, Sterling Plastics of Minnesota, that cites several standard metrics, including a Heat deflection temperature of 120°F - much lower than boiling water or even warm tea. The plastic will melt at just above the boiling point of water. However, the thing about plastics is that their quality and material properties vary widely - you can't be "very very" certain that your material is guaranteed not to melt at 212°F - or even 180°F. This Material Safety Data Sheet for ExxonMobil's formulation of LDPE resin, hosted by SUNY Stony Brook, says it's insoluble in water and not particularly toxic... but I still wouldn't want to drink tea that "might" have some melted LDPE residue in it. The key takeaway is that you can't be sure exactly what goop your plastic sample contains, and you can't be sure what is going to dissolve in your hot water. Nimur (talk) 18:34, 13 February 2016 (UTC)
- If the solvent dissolves the LDPE, then the LDPE dissolves the solvent. So only use a solvent you feel comfortable drinking with your tea. I didn't get to the bottom of the question of what plasticizers are there already and what effect they have - [80] makes LDPE sound like a safe alternative, but [81] says that estrogenic activity can be detected when LDPE is "stressed". I suspect melting, dissolving, and near-destroying over boiling water count as stress. Wnt (talk) 20:34, 13 February 2016 (UTC)
- Two careers ago I was developing manufacturing processes for bonding or glueing plastics. There are a variety of solvents available and they are almost without exception stupid things to drink. I recommend sand blasting with sugar. Greglocock (talk) 20:59, 13 February 2016 (UTC)
- Though I doubted as much, it occurs to me that there are environmentally friendly options like supercritical CO2 - looking this up, I find some articles like [82] that seem to suggest it dissolves LDPE, though almost without exception they're locked behind paywalls in obscure journals; this is one of those topics that We're Just Not Allowed To Know About. But of course, it's essentially impossible for a hobbyist project to use CO2 for this, so the grapes were sour anyway. Wnt (talk) 13:42, 14 February 2016 (UTC)
Damaging effects of gravitational waves
It seemed to me, though I don't really know anything about the subject, that if gravitational waves actually stretch space itself, then even intense waves would not cause any damage to physical objects as they passed through. However, at [83], it says that sufficiently strong waves would "rip you apart". Is that actually correct? 81.132.196.131 (talk) 20:34, 13 February 2016 (UTC)
- Spaghettification, due to a non-homogeneous gravitational field near a black hole, may be relevant. StuRat (talk) 20:41, 13 February 2016 (UTC)
- If the frequency of the wave is low, then your physical size will be preserved by the same forces that usually preserve it (electromagnetism and electron degeneracy pressure), but if the frequency and amplitude are high enough that they can't react in time, then you will rip/squish instead. Analogy: a boat can survive an ocean wave of any amplitude if the frequency is low enough (because any ocean height is equivalent to any other for floating purposes), but higher frequencies will damage it. -- BenRG (talk) 22:47, 13 February 2016 (UTC)
- It's a big theme in Star Trek: Enterprise (season 3) and Star Trek: Enterprise (season 4) but, as with many ideas in sci-fi, real physicists would describe this as preposterous imagination contradicting the laws of their science. Additionally, you could approach such an idea from logic: if that were real, why isn't the whole universe already ripped apart into dust by all the waves that must have occurred in the past, given that such waves would be frequent, at least on astronomical timescales? --Kharon (talk) 07:29, 14 February 2016 (UTC)
Nutrients needed by cuttings in water, specifically Begonia
I've been growing Begonias from cuttings for a few years. Last year I took some leggy cuttings from red and pink plants, rooted them in water, and potted them in the early spring. They look just like the plant to the left, although that is not one of my plants, nor my upload.
This year I decided to trim the longest stems from the potted plant that looks like the one at right, and to root them in a clear vase. Given that the temperature is 6°F right now, I have taken my plants out of the window. These cuttings are doing well, about 6-8 inches long, with 2-3 inch roots. But I don't want to pot them until mid-April, when it will be warm enough for them to stay out all night.
My question is, other than three (smaller-than-peppercorn) balls of NPK fertilizer, do I need to add any other nutrients? Thanks. μηδείς (talk) 20:53, 13 February 2016 (UTC)
- Just wondering, how do you deal with the water going bad ? I would think you would need to frequently toss out the smelly water and replace it (hopefully with rainwater or melted snow or at least tap water that's been left out long enough to lose the chlorine compounds). But then that means you would lose the fertilizer every time you replace the water. StuRat (talk) 21:03, 13 February 2016 (UTC)
- My wife grows a lot of plants like this, and her recommendation is to simply pot them on into compost, but keep the plants indoors until it's safe to put them out.--Phil Holmes (talk) 10:35, 14 February 2016 (UTC)
- Why do you call them American begonias when they are pelargoniums? Richard Avery (talk) 13:44, 14 February 2016 (UTC)
PDF Beiträge zur Araneologie (Beitr. Araneol.), vol. 2: Fossil spiders in Dominican amber?
Is there a PDF of Jörg Wunderlich's book Beiträge zur Araneologie (Beitr. Araneol.), vol. 2: Fossil spiders in Dominican amber? If so, how can I download it? Very short question, sorry, but help would be greatly appreciated. Thanks, Megaraptor12345 (talk) 22:21, 13 February 2016 (UTC)
- I don't remember seeing it, although I might have; I'd have to use a search engine. Maybe you could try yourself? Search engines use something called a "web spider", something you seem to have an interest in. Try https://duckduckgo.com GangofOne (talk) 23:03, 13 February 2016 (UTC)
- Thanks guys, I found it. Megaraptor12345 (talk) 10:28, 14 February 2016 (UTC)
- What about respecting the copyright and buying the book? --Scicurious (talk) 00:49, 14 February 2016 (UTC)
February 14
Teaspoons vs mL
Nothing vitally important here - just a weirdness that I can't get out of my head:
My wife and I have both been sick for a while - we went to the doctor together, he examined us together and prescribed the exact same two medicines for each of us. We walked out of the office with four printed prescription forms, my wife turned them in at a local CVS pharmacy counter - and brought back matching pairs of identical quantities of identical drugs.
Later that day, it's time to take one of them (which is in liquid form and comes with a syringe to measure out the quantity) - so I look at the label of the bottle labelled for me and it says "Take 5 or 10ml twice per day"...OK...so I ask my wife whether she's going to take 5 or 10ml - and she points out that on the label of her bottle, it says "Take 1 or 2 teaspoons twice per day". Eh? Google helpfully tells me that 1 teaspoon is 4.92892159 mL - so we both have the same dosage range. The syringes we got with the medication are identical - and both have scales in teaspoons and mL.
Sadly, we don't have the original printed prescriptions to hand - so I can't tell whether this happened at the doctors' office or at the pharmacy.
Why on earth did two prescriptions typed by the same doctor on the same day, issued by the same pharmacist using the same software to print the labels out - within 30 seconds of one another - wind up with different units?!
The only kinda/sorta possibility is that I have a distinct English accent and my wife is American - could it be that the doctor concluded that I'd better understand SI units? Maybe women are expected to understand cookery instructions and in some horrific act of gender discrimination can't be expected to understand milliliters?
Neither of those seems likely - does anyone have enough understanding of how medicine quantities get labelled to shed light on this weirdness? SteveBaker (talk) 04:37, 14 February 2016 (UTC)
- Why don't you just ask the pharmacist or the doctor? ←Baseball Bugs What's up, Doc? carrots→ 04:45, 14 February 2016 (UTC)
- Specifically, ask the pharmacist first. They should have the original prescriptions on record. If they're different, then you can ask the doctor when you see him again. --69.159.9.222 (talk) 05:44, 14 February 2016 (UTC)
- Because of the medical-advice prohibition I'm not going to speculate about what your doctor or pharmacist intended, but I will point out that there are two different teaspoon sizes in the US, one of ≈4.93 mL and the other of exactly 5 mL: see United States customary units#Cooking measures. The latter is mandated by the FDA for nutrition labels, but I can't find any comparable requirement for drug dosages.
- My non-medical advice is to lodge a written complaint about this. Mixing unit systems is a disaster waiting to happen, and teaspoons are especially bad because they're easy to confuse with tablespoons, and a lot of household "teaspoons" hold nowhere near 5 mL. Not that you would make those mistakes, but the sort of people they're trying to help by using teaspoons probably will. -- BenRG (talk) 06:35, 14 February 2016 (UTC)
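For the curious, the gap between the two definitions is easy to compute. A minimal Python sketch (the 1-2 teaspoon dose range is the one from the label above; everything else is just the published unit definitions):
<syntaxhighlight lang="python">
# Two US "teaspoons": the customary one (1/6 US fluid ounce) and the
# 5 mL metric teaspoon used on FDA nutrition labels.
US_FLUID_OUNCE_ML = 29.5735295625          # exact, by definition
customary_tsp_ml = US_FLUID_OUNCE_ML / 6   # ~4.9289 mL, Google's figure
metric_tsp_ml = 5.0

for teaspoons in (1, 2):                   # the label's dose range
    gap = teaspoons * (metric_tsp_ml - customary_tsp_ml)
    print(f"{teaspoons} tsp = {teaspoons * customary_tsp_ml:.4f} mL customary, "
          f"{teaspoons * metric_tsp_ml:.1f} mL metric (difference {gap:.3f} mL)")
</syntaxhighlight>
The difference works out to about 1.4% - far smaller than the 2x range the doctor allowed, which is why the two labels are effectively equivalent.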
- Interesting! So in fact, the dosages were intended to be absolutely identical rather than identical to within ~1%. That's really not critical in this case because the doctor specified a 5mL..10mL range (he said something like "start with 5mL - but you can take up to 10mL if it doesn't seem to be doing much good"). Each bottle did come with a syringe marked in both mL and teaspoons - so it would take an unusually stupid person to screw up - but I agree that using teaspoons is a disaster waiting to happen...I'm shocked when I see it in cooking recipes, let alone in drug prescriptions! SteveBaker (talk) 15:39, 14 February 2016 (UTC)
- You're British, aren't you? I think it's pretty well known in North American kitchens that a teaspoon is a specific unit of measure that you should use a measuring spoon for, rather than an actual teaspoon. Likewise it's probably reasonably well known that it's equal to 1/6 of a fluid ounce. (Less well known is that "fluid ounce" has different meanings in US and Imperial measure! But they only differ by about 4%, not enough to matter for culinary purposes. As noted already, the medical usage that it means 5 ml is different again, but again only by a few percent.) --69.159.9.222 (talk) 18:47, 14 February 2016 (UTC)
- According to an information sheet from the U.S. Food and Drug Administration, that agency and several professional pharmaceutical organizations recommend against prescribing liquid medications in teaspoon dosages. The chance of confusion with tablespoons is too high, as is the risk of errors caused by using inaccurate household kitchen spoons. In my experience, CVS is a company that tries to do the right thing, so an effort to bring this to their attention at the corporate level may be worthwhile. Cullen328 Let's discuss it 07:19, 14 February 2016 (UTC)
- Again, they provided each of us with a syringe to do the measurement...so they did try. SteveBaker (talk) 15:39, 14 February 2016 (UTC)
- Maybe the prescription was filled by two different pharmacists? Maybe when they fill it they have to click a checkbox on the computer on what to print, and just randomly picked a different one? Ariel. (talk) 07:28, 14 February 2016 (UTC)
- I kinda doubt that...but I guess it's not impossible - but all four sheets of paper were handed to one person who disappeared off to prepare them. I doubt they would have split the task across two people. SteveBaker (talk) 15:39, 14 February 2016 (UTC)
- It seems very unlikely that the pharmacy's software would print labels with different units. It is much more likely that the manufacturers' own labels could differ, if you have been given bottles from different batches which may originally have been intended for export to different countries. Something intended for sale in countries which only use the metric system would not have been labelled with doses in teaspoons, while medicines intended for sale in the US or the UK might well. — Preceding unsigned comment added by 81.131.178.47 (talk) 11:37, 14 February 2016 (UTC)
- I don't think that's the case. The manufacturer isn't directly involved. The doctor sets the dosage level on the prescription form and the pharmacist transfers that information onto the label, which is printed along with the patient's name and the phone number of the doctors' office. I don't see how the manufacturer had much to do with it. I suppose it's remotely possible that they are from a different batch but they look absolutely identical apart from the label that the pharmacist printed. Usually, we can tell our doctor where we'll be getting the prescription filled - and he sends the prescription to the pharmacist directly so the medication is ready to pick up...but one of the two medications contained Codeine - and evidently that's a controlled substance, so the forms had to be printed out at his office. I have no clue why a hard copy printout is considered more secure. I really wish I had looked at those forms because then we'd know whether this discrepancy happened at the doctor's office or at the pharmacy. SteveBaker (talk) 15:39, 14 February 2016 (UTC)
- I did call CVS this morning - but had a hard time getting my question understood. All I could get out of the person on the phone was variations on the theme of: "Don't worry - 5ml is the same amount as a teaspoon - just take the amount it says." - but I think I was having a hard time getting my point across, and they (quite reasonably) assumed I was confused about the dosage rather than curious about the difference. SteveBaker (talk) 15:39, 14 February 2016 (UTC)
- Maybe call them again and say, "I'm just curious why one said teaspoon and the other said milliliters. Which would you normally use?" And go on from there. Anecdotally, the only medications I take are in pill form, and they are always expressed in milligrams, as opposed to "grains" or whatever it would be. ←Baseball Bugs What's up, Doc? carrots→ 17:25, 14 February 2016 (UTC)
- Yeah, but that's not the same thing. The standard measurement that you get for pills is "X pills, Y times per day" - the unit is "pills" - not grams or grains. The doctor and pharmacist need to care about how much drug is in each pill - but the consumer doesn't need to get involved in the units they use. SteveBaker (talk) 22:34, 14 February 2016 (UTC)
- No, it's not "X pills, Y times per day", it's "X nnnMG pills, Y times per day." ←Baseball Bugs What's up, Doc? carrots→ 00:25, 15 February 2016 (UTC)
- I take an elixir containing a controlled substance which is alternatively prescribed as 5ml t.i.d. or one teaspoon three times a day. I asked the CVS pharmacist about this, and she says that the label reflects what the doctor wrote, but that the pharmacy fills it in milliliters, and according to the US FDA they are considered equivalent. (BTW, always ask for the pharmacist for actual science questions, or you're liable to get a clerk.) I suppose that means I'm technically being cheated, but I never use a full month's worth (it's as needed, for bellyaches, see the talk page) so it don't befront me. μηδείς (talk) 01:04, 15 February 2016 (UTC)
Fuel vs oxidizer on Falcon 9
Which weighs more on the Falcon 9 first stage, the fuel or the liquid oxygen? Johnson&Johnson&Son (talk) 06:51, 14 February 2016 (UTC)
- Falcon 9 burns RP-1 and LOX, which is basically kerosene and liquid oxygen. RP-1 is a mixture of alkanes with about twice as many hydrogen atoms as carbon atoms. To burn this one needs three oxygen atoms for every CH2 in the fuel to get water vapour and carbon dioxide. Given the atomic weights of these elements, the mass ratio of fuel:oxidizer is about 1:3.4. Most rocket engines (American ones at least) run slightly fuel rich to prevent engine damage, but still the liquid oxygen will far outweigh the fuel. PiusImpavidus (talk) 11:25, 14 February 2016 (UTC)
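To make that arithmetic explicit, here is a minimal Python sketch using the same CH2 approximation of RP-1 described above (a simplification of the real kerosene blend):
<syntaxhighlight lang="python">
# Stoichiometric fuel:oxidizer mass ratio for "CH2" burning to CO2 and H2O:
# each CH2 unit needs 1.5 O2, i.e. three oxygen atoms.
M_C, M_H, M_O = 12.011, 1.008, 15.999   # atomic masses, g/mol

m_fuel = M_C + 2 * M_H                  # one CH2 unit: ~14.03 g/mol
m_ox = 3 * M_O                          # three O atoms: ~48.00 g/mol

print(m_ox / m_fuel)                    # ~3.42, i.e. the ~1:3.4 ratio above
</syntaxhighlight>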
Killing yourself on Valentine's Day
Is there a statistically significant increase in male suicides on Valentine's Day? DonaldsTroosers8888 (talk) 12:49, 14 February 2016 (UTC)
- I suppose there is no effect of holidays on the suicide rate. The link between Christmas and suicide has been debunked by many sources. I can't find sources criticizing a claimed link between Valentine's Day and suicide either, but then there are also not many people claiming such a link exists.
- Although suicides are not equally distributed across the year (see seasonal effects on suicide rates), the same article also highlights that while seasons play a role in suicide frequency, the peak is not necessarily in the winter months.
- Suicides have several interrelated aspects. These aspects are so complex that there's even a branch of science, Suicidology, to study them. It will be difficult to find a single factor (like holidays) that tips the number of suicides in one direction or the other. --Scicurious (talk) 13:37, 14 February 2016 (UTC)
- (E/C) I'm having a hard time finding anything reliable. There's this, but I was really thinking a site like this would have something spelled out - but they don't seem to (although the stats are broken down in almost every other way, so I might have simply missed it). We have a related article at Seasonal effects on suicide rates, but it also doesn't answer your question, though it does support the point that springtime is generally the time of the year with the highest rate. Closest thing I can find is the chart on Epidemiology of suicide that shows the changes month by month, but I don't know if it's fine-grained enough to account for blips on a particular day. File is here. Matt Deres (talk) 13:46, 14 February 2016 (UTC)
- (EC) [84] In some places (in particular Birmingham in the UK) there appeared to be a statistically significant increase in parasuicides. This probably includes male parasuicides (particularly since male suicide rates tend to be higher), but I'm not sure, as they only looked at age (adolescents seemed to be the worst affected), not gender. In other places (in particular the US) [85] there was no significant increase in completed suicides (nor in homicides). However, this was before 1990. Also, it will likely depend on what you're comparing it to, since suicide rates do vary depending on the time of year and, I think, even the day. It sounds like February is a particularly bad time in the US (possibly in most of the Anglophone Northern hemisphere?). Anecdotally, Valentine's Day does lead to a spike in calls to helplines in a number of places, although again most of these didn't specify the gender of the caller and I'm assuming it's not always known [86] [87] [88] [89] [90] [91] (n.b. a number of these sources are actually just repeating what one of the other sources said, but I've included them in case there was a source I missed). A spike in calls to helplines doesn't definitely mean there will be a spike in suicides. Likewise, a spike in parasuicides (or attempted suicides in general) doesn't definitely mean there will be a spike in completed suicides.
Nil Einne (talk) 13:59, 14 February 2016 (UTC)
- You could probably repeat the analysis for the US with more recent stats using a similar methodology. The most recent stats I found are here [92] and include 2007 and earlier. From a very quick look, it didn't look like there was a statistically significant increase. You could go further and compare rates for different age groups using data from here I think [93]. I didn't however see data which would allow a simple analysis for rates based on gender. Nil Einne (talk) 14:12, 14 February 2016 (UTC)
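The simplest version of such an analysis is a one-day test against a local baseline, treating daily counts as Poisson-distributed. A minimal sketch with made-up counts (not real CDC numbers):
<syntaxhighlight lang="python">
# Is the Feb 14 count higher than chance would allow, given the
# surrounding baseline rate?
from scipy.stats import poisson

feb14_count = 130        # hypothetical: total Feb 14 deaths pooled over years
baseline_mean = 115.0    # hypothetical: mean daily count in early February

# One-sided p-value: P(count >= feb14_count) under the baseline rate
p_value = poisson.sf(feb14_count - 1, baseline_mean)
print(f"p = {p_value:.3f}")   # ~0.09 here, i.e. no significant excess
</syntaxhighlight>
Real data would also need the seasonal corrections discussed above, since a February baseline already differs from the annual mean.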
- One thing to additionally keep in mind is that you need to correct for the fact that Valentine's Day is not universal. Even assuming perfect statistics (unlikely!), worldwide the rate probably would show no change simply because Valentine's Day doesn't exist for huge portions of the population. So, specifying a country may be necessary. Matt Deres (talk) 13:53, 14 February 2016 (UTC)
- Also, our OP should consider the possibility that, while a few men feel suicidal when gilted by their one true love, a number who were already feeling suicidal for some other reason may be convinced not to act on that feeling following a gesture from someone who loves them on Valentine's Day. Given that, there is no particular reason to assume an increase in the male suicide rate - it could be a decrease - or the two effects I describe might neatly cancel out.
- Another issue is whether the effect would be immediate enough to be detectable. Suppose there is indeed some disappointment on the actual day - it might take days or weeks for that to turn into actual action - so the statistics might become blurred over so much time as to be undetectable against the background suicide rate.
- SteveBaker (talk) 15:46, 14 February 2016 (UTC)
- If I died after being gilted, it would be a murder not suicide. Matt Deres (talk) 22:54, 14 February 2016 (UTC)
- I'd have cited Goldfinger. —Tamfang (talk) 00:59, 15 February 2016 (UTC)
Urine processing in the ISS
How does the recycling of pee work in the ISS? Do all astronauts pee into a common container and drink out of it (after processing, obviously)? Or does each astronaut have his own pee container and drink only the water extracted from his own pee? --Scicurious (talk) 13:39, 14 February 2016 (UTC)
- Our article on the International Space Station says that "Liquid waste is evacuated by a hose connected to the front of the toilet, with anatomically correct "urine funnel adapters" attached to the tube so both men and women can use the same toilet. Waste is collected and transferred to the Water Recovery System, where it is recycled back into drinking water." So, they pee into a collective container. It has a reference, but the link is broken. Matt Deres (talk) 13:50, 14 February 2016 (UTC)
- It looks like they all drink each other's, but not until it's all been distilled and filtered and whatever. You may find this interesting: good old Chris Hadfield, the go-to guy for any ISS questions. Richard Avery (talk) 13:54, 14 February 2016 (UTC)
- A better WP article is at ISS ECLSS (International Space Station Environmental Control and Life Support System). Matt Deres (talk) 13:56, 14 February 2016 (UTC)
- It's worth mentioning that it doesn't only recycle pee - also sweat and water used in washing and left over from cooking, etc. Also, because it recovers about 97% of what you put into it, your pee gets recycled and re-pee'd about 30 times! It's interesting that it works by boiling the water and condensing the resulting steam - but because there is no gravity, they have to spin the thing like a centrifuge to get the steam out of the boiling water. SteveBaker (talk) 15:58, 14 February 2016 (UTC)
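The "about 30 times" figure follows directly from the quoted recovery fraction - a minimal sketch of that arithmetic:
<syntaxhighlight lang="python">
# With recovery fraction r, the expected number of passes a given water
# molecule makes through the recycler before being lost is the
# geometric-series sum 1/(1 - r).
r = 0.97             # recovery fraction quoted above
print(1 / (1 - r))   # ~33 passes, i.e. "about 30 times"
</syntaxhighlight>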
- Water is water. And if anyone is squeamish about it, it's well to keep in mind that all or most of the water molecules we consume have probably spent time in countless bladders of other creatures over millions and millions of years. ←Baseball Bugs What's up, Doc? carrots→ 17:21, 14 February 2016 (UTC)
- Indeed water is just water. But it's interesting to look at that old idea that there are at least a few molecules of water from any historical figure you care to name inside your body. There are roughly 10<sup>28</sup> water molecules in a human being, and since we each hold (very roughly) 50kg of water in our bodies but drink (and pee) 2kg per day - we probably cycle through all of it every 25 days - but let's keep it simple and say "around 15 times a year", so over a 70 year lifespan, we get through maybe 10<sup>31</sup> molecules. There are 4x10<sup>47</sup> in all of the oceans, lakes and rivers of the world - so it's pretty clear that even with perfect mixing, there is only a one in 10<sup>16</sup> chance that any particular water molecule came from Napoleon Bonaparte - but with 10<sup>28</sup> molecules in your body right now - there could easily be 10<sup>12</sup> molecules that he peed out coursing through your veins right now. Of course, from the point of view of people spending 6 months in the ISS, all of their body water will have gone through the recycler 7 or 8 times during their stay - and will have been well mixed with that of all of the other astronauts many times over.
- Of course the idea of perfect mixing is untrue - and doubtless there are water molecules in our bodies that don't get flushed out and replaced continually...and perfect mixing of the oceans is far from true (there is relatively little mixing between the deep waters of the world and the surface). But a back-of-envelope calculation definitely makes it clear that we have nothing to be squeamish about when it comes to water recycling. SteveBaker (talk) 22:31, 14 February 2016 (UTC)
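For anyone who wants to fiddle with the assumptions, here is the back-of-envelope calculation above as a minimal Python sketch (all inputs are the rough order-of-magnitude estimates from the post, not measured data):
<syntaxhighlight lang="python">
body_molecules = 1e28        # water molecules in one human body
lifetime_molecules = 1e31    # molecules cycled through in ~70 years
ocean_molecules = 4e47       # molecules in the oceans, lakes and rivers

# Chance that one particular molecule in you once passed through Napoleon,
# assuming his lifetime throughput mixed perfectly into the world's water:
p = lifetime_molecules / ocean_molecules
print(p)                     # 2.5e-17, i.e. one in ~4x10^16

# Expected number of ex-Napoleon molecules in your body right now:
print(p * body_molecules)    # 2.5e+11, a few hundred billion (~10^12)
</syntaxhighlight>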
How was the distance to the source of gravitational waves measured?
If such a question has already been asked, please delete. I wonder how they measured the distance to the two black holes (1.3Bn light years; I don't think triangulation is possible in this case), their respective masses (30 solar masses each), and the energy released. One paper (either the WSJ or Financial Times) said that the amount of energy released was larger than the energy output of all the stars in the Universe! That sounds fishy to say the least. Thanks. --AboutFace 22 (talk) 16:55, 14 February 2016 (UTC)
- I don't know in this specific case, but one common method is to compare absolute magnitude (real brightness) with apparent magnitude (observed brightness). (Note that "brightness" isn't necessarily visible light, it can be any EM radiation or even gravity waves.) That is, if you know how bright something really is, you can tell how far away it is by how bright it appears to be at your location. StuRat (talk) 17:30, 14 February 2016 (UTC)
- Here is the press release from Caltech, and here is the paper in Physical Review Letters: Observation of Gravitational Waves from a Binary Black Hole Merger. These are the authoritative primary sources of information on the event. Paraphrasing the paper, the distance to the source is estimated by comparing the measured data against numerical models of the proposed source event. Our article, numerical relativity, introduces this methodology from a very high level. From these models, we can parameterize the luminosity distance and the redshift - both of which are measures of the "distance" from the event to the Earth. The distance is determined and validated using a variety of standard statistical data-fitting algorithms, and the authors place confidence in this method above 5σ. Nimur (talk) 17:47, 14 February 2016 (UTC)
- I wouldn't think red-shift would be as accurate, since it's not only due to the expansion of the universe, but also due to relative local motion, which may be unknown. StuRat (talk) 17:51, 14 February 2016 (UTC)
- The authors published their detailed calculations on arXiv as Properties of the binary black hole merger GW150914, cited from their main paper. In case you wish to follow some twenty pages of their horrible equations, they present the calculations that lead to confidence in this specific luminosity distance D<sub>L</sub> by way of a data fit, around page 7.
- I won't pretend to follow their work - nor to second-guess it - without spending at least a few hours to study it; but take a look at the extensive author-listing, spanning many many pages, to see how thoroughly this publication has been peer reviewed. The authors explicitly publish the error bounds on all of their model-parameters.
- Nimur (talk) 17:56, 14 February 2016 (UTC)
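As a concrete illustration of the luminosity-distance/redshift relationship under standard cosmology (the Planck-like parameter values here are my assumptions for the example, not the LIGO pipeline itself):
<syntaxhighlight lang="python">
# Convert the GW150914 redshift estimate into a luminosity distance
# under a flat Lambda-CDM cosmology, using astropy.
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=67.9 * u.km / u.s / u.Mpc, Om0=0.306)

z = 0.09                              # median redshift reported for the event
d_L = cosmo.luminosity_distance(z)
print(d_L)                            # ~420 Mpc; the paper quotes ~410 Mpc
print(d_L.to(u.lyr))                  # ~1.4e9 lyr, the "1.3 billion ly" figure
</syntaxhighlight>
The small discrepancies are well within the ~40% error bars mentioned below.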
- The uncertainty on distance was ~40%. I don't think local motion was the biggest problem there. Dragons flight (talk) 17:58, 14 February 2016 (UTC)
- There are many places where the paper appeals to "standard cosmology" parameters when making some conversion; and cites, e.g., additional detailed publication of cosmological parameters specifically when converting from luminosity into redshift. These astrophysicists are professionals! They absolutely did think of all these difficult problems, and published extensive answers in the form of many many hundreds of supplemental papers. Nimur (talk) 18:03, 14 February 2016 (UTC)
- Quite the appeal to authority there. Yes, they are professionals, but professionals also make mistakes, like the spacecraft that was lost due to a failed conversion to metric units. (See Mars Climate Orbiter#Cause of failure.) However, I have no reason to think they made any mistakes here, as the error I mentioned likely falls well within the 40% error mentioned by Dragon's Flight. StuRat (talk) 23:08, 14 February 2016 (UTC)
It is very interesting. Thank you for references and the posts. Any estimate of potential frequency of such events in the future? --AboutFace 22 (talk) 21:12, 14 February 2016 (UTC)
I've also found by chance that Dr. Saul A. Teukolsky from Cornell has been involved in modeling black holes but his name is not among the authors of the paper in Physical Review Letters. --AboutFace 22 (talk) 21:29, 14 February 2016 (UTC)
Ingredients in drugs
Why do some medications contain lye and sulphuric acid? For example, this migraine drug:
Each unit dose spray contains sumatriptan (base) as the hemisulfate salt 5 mg in an aqueous buffered solution. Nonmedicinal ingredients: anhydrous dibasic sodium phosphate, monobasic potassium phosphate, purified water, sodium hydroxide, and sulfuric acid.
Th4n3r (talk) 18:31, 14 February 2016 (UTC)
- Since those are a strong alkali and acid, I would assume it's to neutralize an active ingredient which is itself the opposite. Note that while either would be harmful in higher doses, hopefully the tiny amount they include isn't enough to do so. StuRat (talk) 18:35, 14 February 2016 (UTC)
- Those are the last-minute adjustments to make the desired pH for the product. As it says, it's a buffer solution, so the chemicals aren't actually making a result with an extreme acidity or basicity. As StuRat says, it's to bring it back to neutral (and the buffer to help keep it there). DMacks (talk) 21:24, 14 February 2016 (UTC)
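As a rough illustration of how the listed phosphates set the pH (illustrative concentrations, not the actual formulation):
<syntaxhighlight lang="python">
# Henderson-Hasselbalch for the phosphate buffer pair listed above:
# H2PO4- (acid form) <-> HPO4^2- (base form), pKa2 ~ 7.21.
import math

PKA2 = 7.21

def buffer_ph(base_molar: float, acid_molar: float) -> float:
    """pH of a phosphate buffer from [HPO4^2-] and [H2PO4^-] in mol/L."""
    return PKA2 + math.log10(base_molar / acid_molar)

print(buffer_ph(0.05, 0.05))   # equal amounts -> pH = pKa2 = 7.21
print(buffer_ph(0.03, 0.07))   # more acid form -> ~6.84
</syntaxhighlight>
The tiny additions of NaOH or H2SO4 just nudge that acid/base ratio to hit the target pH, and the buffer then resists drifting away from it.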
Did people in the dark ages know that the world had gone to shit?
Did people in the dark ages know that the world had gone to shit or were they blissfully unaware? Could we be in a dark age right now in 2016 and not know it? BrustyOlfIrl (talk) 18:40, 14 February 2016 (UTC)
- The dark ages were characterized by localization and loss of information. Currently it's the opposite: global connectivity and global sharing of information. ←Baseball Bugs What's up, Doc? carrots→ 18:59, 14 February 2016 (UTC)
- On the other hand, there's the worry that many of our records will be lost because digital media are (probably) less durable than paper and because their standards keep changing. —Tamfang (talk) 00:53, 15 February 2016 (UTC)
- Another way to look at it is as a movement away from the science of the Greeks towards explaining everything as the direct action of God. In that context, the US does seem to be sliding back into the dark ages, at least in some places.
- And, from the POV of the people in those situations, they think they are right and their predecessors were wrong. In the case of the Middle Ages, the commoners, if they knew the ancient Greeks had calculated the diameter of the Earth, would have thought they were idiots since obviously the Earth is flat. In the case of conservative US regions, they would think that all those "scientists" who say the Earth is over 4 billion years old are all just some weird liberal cult, since obviously the Earth is only a few thousand years old. StuRat (talk) 19:01, 14 February 2016 (UTC)
- There is always a risk of backsliding. Although it's useful to keep in mind this quote from historical satirist Will Cuppy: "It was called the dark ages because people then weren't very bright. They've been getting brighter and brighter ever since, until they're like they are now." ←Baseball Bugs What's up, Doc? carrots→ 19:11, 14 February 2016 (UTC)
It's not true that the world "went to shit" in the Dark ages - read our article and you will find that historians don't really believe that any more. It's more that there is a lack of written information about what was happening. You may also be interested in the following newspaper story from the UK - Church of England primary school headteacher sparks online ridicule after claiming evolution is only a theory Richerman (talk) 20:03, 14 February 2016 (UTC)
- That's an interesting choice of example. In the UK, the teaching of religion in schools is not only permitted - it's on the curriculum in state-run education. When one lone teacher proclaims Darwin's theory isn't fact - everyone is outraged and it makes headline news. But in the US, where the teaching of religion in public schools is not only illegal, but unconstitutional, entire states are able to pass laws making it a requirement to teach that Darwin's theory isn't fact - and only a small minority of people are outraged. I'm not sure what this says about a slide into the dark ages - but if it is the case that there is a slide back into ignorance and superstition, it's not happening universally. SteveBaker (talk) 21:54, 14 February 2016 (UTC)
- In continental Europe possibly not everything "went to shit", but in the UK it definitely did.
- See, for example, the end of chapter 1 and then chapter 2 "Life among the Ruins" of Robin Fleming's Britain after Rome (2010) for a pretty thorough summation of the material evidence as we currently have it from archaeology. By the late 300s -- even before the Romans had left -- the economy in Britain was already in terminal decline, even before the final complete collapse of the monetary economy which followed when the Romans stopped providing low-value bronze token coinage. By the middle of the 4th century iron production in Kent had already fallen to a mere 25% of its former level; by the year 410 it had completely ceased. At this same time -- mid 300s -- smaller villas begin to fail. Initially there is a wealth concentration, and some lavish building by the very richest both in the country and the towns. But soon, by the 360s and 370s, even the grandest villas are starting to fail. Damage is not repaired, principal rooms are converted into barns for animals and stores for corn. As the economy fails, so do manufacturing and craft skills -- airtight pottery, glass, iron nails, etc., all become unavailable. As Fleming puts it (p.20) "Nails, for example, seem such trivial things, but once they were gone Britain became a harder place. They grew scarce in the 370s, and by the 390s nails for coffins and hobnailed boots [previously very widespread] were simply no longer available, so the British slipped in the mud and buried the people they loved directly in the cold, hard ground". It is 200 years before the knowledge to make mortar and build buildings out of stone is reintroduced from Europe.
- The suburbs around towns start to become depopulated from the mid 300s onwards; after about 370 "both coin finds and pottery sherds almost disappear from these areas". This is where much craft manufacturing had been based. Towns became less and less well maintained. (p. 28) "Nevertheless, urban life persisted to the end of the century in most places and in some for a decade or two longer." However, "at some point in the early fifth century, though, urban life died completely, and all of Britain's towns, public and small, simply ceased to exist" ... "York, for example, reverted back to a marshland" ... (p. 29) "By 420 Britain's villas had been abandoned. Its towns were mostly empty, its organized industries dead, its connections with the larger Roman world severed: and all with hardly an Angle or a Saxon in sight."
- (p. 31) "There were no longer organized and interlinked markets. There was no tax, no money economy, no mass production of goods. [Production] surpluses ... had fewer uses and became increasingly difficult both to create and to store." ... (p.32) "Roman sites, particularly those of the fourth century, are littered with the remains of substantial buildings, coins, and broken and discarded manufactured goods, and excavators find scatters of everyday objects lying in broad swaths around every farmstead and villa. Fifth-century settlements, on the other hand, are practically invisible, so rare had ceramics and metalwork become, and so inconsequential their buildings."
- In parts of the West Country, some degree of organised administration did evidently continue, centred on reoccupied iron-age hillforts. (p.33) "But compared to fourth-century settlements in the neighbourhood, the first fifth-century inhabitants of Cadbury Congresbury had little. Most of the pots, the glass and the dressed stone were being used there in the second half of the fifth century, but they had been produced fifty or even a hundred years earlier. Some things unearthed at the hillfort -- the glass and some of the brooches for example -- may have been cherished family heirlooms or prized personal possessions, their longevity guaranteed by sentiment. But other objects look as if they had been looted from abandoned sites... the ruins of local villas." ... "Some of the pottery at Cadbury Congresbury, however, came from another source: it was probably salvaged from nearby third-century cemeteries, places where cremation burials lay, and where pots could be dug up, emptied of their human ash and then used for cooking or boiling water. The presence of such material at Cadbury Congresbury and other resettled hillforts points to people clinging to the material culture of the forebears no matter how grim the undertaking, no matter how great the humiliations of scavenging". (The site did subsequently pick itself up a bit, to what passed for an early medieval elite site; but most of the Roman material comforts were lost for good). Jheald (talk) 21:45, 14 February 2016 (UTC)
- Indeed, the Dark Ages is a fairly Euro-centric, and Anglo-centric, historiographical term. The same time period featured the Islamic Golden Age, the almost complete revival of the Mediterranean-encompassing Roman Empire under Justinian I, the Carolingian Renaissance, the Tang Dynasty, which may have been the height of Chinese civilization (see Pax Sinica), the Kamakura period in Japan, etc. Economically, even the late Middle ages were arguably worse for Europe than the so-called Dark Ages, what with the twin problems of the Mongol invasion of Europe, the Black Death, etc. --Jayron32 00:29, 15 February 2016 (UTC)
- No, it's not just Anglocentric. Although some reject it, the concept of a "Byzantine Dark Ages" is commonly held; see a short discussion of it on pages 265-266 of Greek Rhetoric Under Christian Emperors, which relies on George Ostrogorsky's influential History of the Byzantine State in noting the widespread decline of cities, general population declines, and barbarian conquests. See also "The Disappearance and Revival of Cities" chapter in Cyril Mango's Byzantium, the Empire of New Rome, which uses the term in examining the widespread regression of culture and economy in the seventh and eighth centuries (the period of Justinian II, for example), with the decline beginning as early as the sixth century, and continuing until the ascension of the Macedonians in the third quarter of the ninth century. Nyttend (talk) 00:58, 15 February 2016 (UTC)
Spectral response of cones and rods
I am doing some research on the spectral response of cones and rods for a law enforcement client who wants to know how badly night vision is impaired by LED vs incandescent/filtered red/blue lights on police cars.
I find that I can get exact emission curves for the lights, but when I try to figure out the response of the human eye, different websites and scientific papers give me somewhat different curves.[94][95][96][97][98][99][100]
I then found the following on Wikipedia:
Besides the obvious differences in the curves, are spectral absorption curves different from spectral response curves? If so, I think the pages that use the spectral absorption curves should use spectral response curves instead. --Guy Macon (talk) 19:57, 14 February 2016 (UTC)
- Guy, this is not really my area, but I wonder if the difference is that the first (top) graph uses normalised data for the Y-axis whereas the other 2 do not. The way the absorbances do not drop off to zero in the top graph looks a little strange and I am wondering whether this is an artifact of attempting to normalise the data. Just a thought. DrChrissy (talk) 20:27, 14 February 2016 (UTC)
- The first diagram is from direct measurements of retinal cells in vitro, the other two are from experiments on living subjects. I think the difference is largely from absorption of (ultra)violet light in the lens. You can find raw data for those diagrams here. -- BenRG (talk) 21:25, 14 February 2016 (UTC)
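For the actual application described above, the usual approach is to weight each lamp's emission spectrum by the scotopic (rod) sensitivity curve. A minimal sketch with stand-in data - the Gaussian here is only a crude approximation of the CIE 1951 V'(λ) curve, and the two spectra are hypothetical, so real work should substitute the tabulated CIE data and the measured lamp curves:
<syntaxhighlight lang="python">
import numpy as np

wl = np.arange(400.0, 701.0, 10.0)      # wavelength grid, nm
step = 10.0                             # grid spacing, nm

# Crude Gaussian stand-in for scotopic sensitivity V'(lambda), peak ~507 nm
v_prime = np.exp(-0.5 * ((wl - 507.0) / 40.0) ** 2)

# Hypothetical relative emission spectra
blue_led = np.exp(-0.5 * ((wl - 465.0) / 12.0) ** 2)    # narrow blue LED
red_filtered = np.where(wl > 610.0, 1.0, 0.0)           # red-filtered lamp

def scotopic_weight(spectrum):
    """Relative rod stimulation: integral of spectrum x V'(lambda)."""
    return float((spectrum * v_prime).sum() * step)

print(scotopic_weight(blue_led))       # large: blue strongly drives the rods
print(scotopic_weight(red_filtered))   # tiny: red barely touches night vision
</syntaxhighlight>
This kind of weighted integral is exactly where the choice between absorption and response curves matters, per BenRG's in vitro vs. in vivo point above.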
Identification of a spider from Sydney
Hello, everyone! I remember that in 2007 I was living in Sydney, Australia, and I was in a park when I noticed a Eucalyptus tree. I stopped to examine the bark, and noticed a large spider staring back at me from lower down the tree. At the time I had been told it was best to avoid spiders, as some in Australia are dangerous, so I beat a hasty retreat.
A braver friend of mine, who was with me, took a stick and gently touched the spider with it. Immediately, the spider raced up the tree at a very fast pace. It stopped further up. I obviously took no pictures, but it was, from head to abdomen, about 9cm long. It was a mygalomorph and a little hairy, the overall colour being silvery-brown. Any ideas what it might be? Just the family will do, but any other details would be very much appreciated. And remember, I do not want a very definite answer, so just give me a common species that is commonly arboreal and found in the New South Wales region. Megaraptor12345 (talk) 20:36, 14 February 2016 (UTC)