
Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia


Welcome to the science section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


November 10

Infrastructure vs building engineering

Why is it that engineers and construction specialists working on buildings tend to stick to buildings, and those working on infrastructure tend to stick to infrastructure? I see very few who cross between the two. 00:23, 10 November 2017 (UTC) — Preceding unsigned comment added by 94.10.251.123 (talk)

Different organisations do different things. A person working for a municipality or city local government may design roads and bridges, but would not be involved with buildings. Also someone working for a large building contractor would just get buildings to design. An independent engineering consultant would build up a reputation in one industry, and then get work from that industry. Graeme Bartlett (talk) 05:56, 10 November 2017 (UTC)[reply]
The outcomes and detailing are very different. Buildings are inhabited spaces with a particular level of finish, space conditioning and unique functional requirements suited for continuous use and habitation by people. Infrastructure (bridges, tunnels, dams, water systems, waste treatment, highways, power systems) is generally not inhabited, at least not to the same degree, and involves a different kind of detailing suited to function, with habitation a secondary concern, or no concern at all. Habitable environments are subject to building codes, life safety codes, energy conservation codes and the like, and are closely regulated by building departments. Infrastructure is mostly governed by engineering industry standards for function and durability. While there is certainly overlap, the two involve differing skillsets. As an architect, I've worked with both, and infrastructure (or "heavy construction") requires a different kind of information presentation and detailing than general construction. If you're not set up for it, it's hard to do efficiently or well. The same applies to contractors, who have to organize on different scales with different trades and equipment.
In the building area, designers and builders tend to specialize in general construction (large commercial or institutional structures) or light construction (small residential or retail spaces) for the same reasons. Light construction is subject to a lesser degree of regulation and scrutiny and uses different or less complex construction techniques. Acroterion (talk) 13:04, 10 November 2017 (UTC)[reply]

Civil engineering --Hans Haase (有问题吗) 19:52, 11 November 2017 (UTC)[reply]

"Beam me up, Scotty!"

OK, I think everyone knows what the fundamental problem with teleportation is: because of the Heisenberg uncertainty principle, Scotty would come out the other side looking like scrambled eggs (in the literal sense!) However, suppose that instead of actually teleporting stuff, the machine worked by somehow temporarily creating a wormhole which could then be used as a shortcut through space-time -- would that work more-or-less like teleportation? 2601:646:8E01:7E0B:0:0:0:EA04 (talk) 03:04, 10 November 2017 (UTC)[reply]

Yep! And you could use it to build a galaxy-wide empire of fear and oppression until somebody else with starship technology destroyed your home-world from orbit. You'd be better off sticking with warp drives.
Nimur (talk) 05:24, 10 November 2017 (UTC)[reply]
Teleport has two common meanings, and probably a hell of a lot more uncommon ones. In one meaning, it means instant movement from one location to another. In most magical instances, that is how teleport works: objects instantly vanish and reappear elsewhere. In the other, it means to move something from one place to another without physically moving it - the "instant" is lost. Star Trek uses the second meaning. An object is turned into energy, transmitted (at the speed of light, not instantly) to another location, and assembled again. Depending on the meaning of teleport you want to use, the answer to your question could be yes or no. You are physically moving an object from one place to another - just taking a shortcut. That isn't teleporting in the Star Trek sense. But you have the ability to instantly move an object from one place to another. That is teleporting in the other (magical) popular sense. 209.149.113.5 (talk) 15:25, 10 November 2017 (UTC)[reply]
Wormhole models involving black holes have to deal with spaghettification and infinite values at singularities. The standard "follow the money" argument applies otherwise. If telepathy or precognition were possible, psychics would be rich, not hanging their shingles in the cheaper part of town. If teleportation works, "where are they?". μηδείς (talk) 22:49, 10 November 2017 (UTC)[reply]
That's not actually a disproof, because the precog would probably remember losing his last lone dollar in a last desperate trip to the casino, and then of course, that's what would happen. Wnt (talk) 15:08, 11 November 2017 (UTC)[reply]
I am quite aware that you can't prove a negative, but to quote Christopher Hitchens, "What can be asserted without evidence can be dismissed without evidence." μηδείς (talk) 21:38, 11 November 2017 (UTC)[reply]
We're nowhere near proving or disproving this kind of future tech, including direct teleportation without a wormhole. Look into some of W. G. Unruh's recent work - especially there are some papers by Qingdi Wang that seem absolutely mind-blowing if only I could understand a bit of them. [1] The nature of spacetime is much more ... fluid ... than we typically conceptualize, and right now it seems to take as much theorizing to explain why things don't teleport as why they could. Wnt (talk) 15:14, 11 November 2017 (UTC)[reply]
This is how they transfer people in The Culture series of novels. LongHairedFop (talk) 17:42, 11 November 2017 (UTC)[reply]

Septic shock

Are there any known cases of patients surviving and recovering from septic shock with no treatment? 193.240.153.130 (talk) 12:41, 10 November 2017 (UTC)[reply]

The article Septic shock reports that the mortality rate from septic shock is approximately 25–50%. It also states that sepsis has a worldwide incidence of more than 20 million cases a year, with mortality due to septic shock reaching up to 50 percent even in industrialized countries. There has been an increase in the rate of septic shock deaths in recent decades. Blooteuth (talk) 15:14, 10 November 2017 (UTC)[reply]
To be complete... There has been an increase in attributing deaths to septic shock in recent decades. Because septic shock often leads to stroke, heart failure, or respiratory failure, it is reasonable to attribute death to the result of septic shock rather than septic shock itself. 209.149.113.5 (talk) 15:28, 10 November 2017 (UTC)[reply]
Is the given mortality rate for treated or untreated septic shock, or is that difference not distinguished in those statistics? My understanding was that septic shock was pretty much always fatal if untreated, but that's a layman's vague memory. μηδείς (talk) 22:44, 10 November 2017 (UTC)[reply]
I cannot imagine a scenario where a case of septic shock would be known to medical authorities and not be treated. If a case were to occur and not be brought to medical attention, I have difficulty imagining how it would be recorded as a survival. Richard Avery (talk) 10:56, 11 November 2017 (UTC)[reply]
@Richard Avery: Seek and ye shall find: [2][3][4] I'm not quite sure it's the accepted standard of care, but there are a lot more stories like this. Wnt (talk) 15:23, 11 November 2017 (UTC)[reply]
  • I find no direct address to the issue of total non-treatment, but Septic_shock#Epidemiology says that survivability without treatment goes down 4% per hour, and that septic shock is otherwise normally fatal within seven days or less. I was looking for a source for treatment before the advent of antibiotics, but it seems the disease was poorly understood until the recent discovery that it is more a problem of immune response than bacterial toxins. μηδείς (talk) 21:33, 11 November 2017 (UTC)[reply]

Physical Photo Stenography

This animation shows how numbers are hidden in the Ishihara test of Color blindness within a circle of dots appearing randomized in color and size. Blooteuth (talk) 15:24, 10 November 2017 (UTC)[reply]
The same image viewed by white, blue, green and red lights reveals different hidden numbers. Blooteuth (talk) 13:04, 15 November 2017 (UTC)[reply]

I want to recreate something I've seen. I want to have three photos - actual photographs. I want to have a blue, green, and red plastic filter - just little squares of colored plastic. I want to see a number appear in the photo when I place the filter over the photo. I've seen this as a child. It is also used in color-blindness tests. However, when I change the color in any area of a photo, it is obvious with or without the filter. What is the trick to making it hard to see the number without the filter, but obvious with the filter? 209.149.113.5 (talk) 15:11, 10 November 2017 (UTC)[reply]

I see the animation. It doesn't appear to be helpful for what I want. I want a physical photo that I can sit in a frame on my desk. I don't see a number in it. Then, I place a blue sheet of plastic over it and I can clearly see a number. I'm looking into the red-blue 3-D effects to see if I can make it work. I've tried a lot of ideas, but either I can clearly see the number with or without the filter or I can't see the number at all either way. 209.149.113.5 (talk) 15:31, 10 November 2017 (UTC)[reply]
"The Trick"? There are lots of tricks involved in making a high-quality optical illusion like the one described here - but if I had to name the single most important trick, it would be understanding that the color filter selectively attenuates chroma noise while equally attenuating luma noise. For your illusion to work, you must hide a very low-power signal in the chroma-channel and bury it among powerful luminance noise. Pick a color-space that lets you do that - HSV or YUV - for your noise-generator; then, blend with your original image.
First, let's correct a typo: you probably mean to say steganography, rather than stenography.
There are two parts to your task: (1) to create a special image that appears noise-like, or invisible, when viewed in normal conditions, and appears to contain your numeral when viewed only in one color channel; and (2) to combine that image with your original photograph.
Task 1 is construction of the "stenographic image" (sic). This is the image that contains the secret message you want to convey, but only when viewed through the color filter. You can try to make the data appear noise-like by capitalizing on human visual-perception biases pertaining to contrast, illumination, edges and contours, and so on: these can inform the choice of your noise-generator. There's a lot of art to this: it's actually much more subjective than any other part of your task. Bear in mind that you are creating a synthetic data set that is going to be combined with an image in later processing: knowing this, you have a lot of options to choose from when you represent your noise. For example, you can choose to make your noise zero-mean: that entails representing each pixel as a signed data type, which is not a common way to represent a finished image product. Our article on steganography tools lists several existing pre-packaged software options - very few of them are any good. The sort of task you describe tends to be so customized that it would require a custom software process designed just for your needs.
Task 2 is digital compositing; it is the actual steganography, or the hiding of the previous "noise-like signal" inside another photograph. You can use a wide variety of methods to blend these images. You can also use the special knowledge about how your image will be composited to help you craft the noise-like data in the first task. Compositing is, in itself, an entire field of study: you can add the images; you can multiply them; you can mix them according to a nonlinear relationship. Once again, this is as much art as science. The classical paper is Porter/Duff (1984). It gives you a fantastic overview of what your options are - I dare say, it is "mathematically complete" (you have no other options besides what they describe in that paper). In the last decades, academic research, commercial productization, and practical experience have developed compositing into one of the most elaborate areas of image processing. Artists and software designers spend years studying how to do it so that it looks good. In your case - intentional injection of a noise-like signal - you have the extra difficult job of preserving a noise-like character without perceptually damaging the final image.
From a practical point of view, some of the tools you may wish to use include a layer-capable image editor, like GIMP or Adobe Photoshop; and a programmable mathematical tool like MATLAB or the python programming language to synthesize noise and arrange it in the form of a raster image. If you can afford the commercial tools like MATLAB and its image processing toolbox, you will have great advantages, especially in terms of the ability to rapidly iterate your efforts and get immediate visual feedback.
Your task is not trivial; there are no easy automatic ways to do it. You will also need great familiarity with your tools, and the ability to carefully control them (typically this means writing program-code). You must be aware of all the manual- and automatic- image processing steps that your software tools will perform on your intermediate and final products to ensure that your steganographic work is not lost, for example, by automatic post-processing, image compression, or exported image- or file-format changes at the last step.
Nimur (talk) 18:17, 10 November 2017 (UTC)[reply]
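To make the two tasks concrete, here is a minimal sketch of the general idea (not Nimur's exact recipe - the numeral "42", the amplitudes, and the flat white stand-in for a photograph are all arbitrary choices for illustration), assuming numpy and Pillow are installed:

```python
# Hide a weak red-channel message under strong green/blue "camouflage" noise.
import numpy as np
from PIL import Image, ImageDraw

H, W = 256, 256
rng = np.random.default_rng(0)

img = np.full((H, W, 3), 255, dtype=np.int16)   # white stand-in "photo"

# Camouflage: heavy noise in green and blue only. A red filter (or keeping
# just the R channel) removes all of it, since R is untouched here.
img[..., 1] -= rng.integers(0, 200, (H, W))
img[..., 2] -= rng.integers(0, 200, (H, W))

# Message: a shallow dip in the red channel only, visually buried in the
# colored noise but high-contrast once the noise channels are filtered out.
msg = Image.new("L", (W, H), 0)
ImageDraw.Draw(msg).text((100, 115), "42", fill=1)   # hypothetical numeral
img[..., 0] -= 90 * np.asarray(msg, dtype=np.int16)

out = np.clip(img, 0, 255).astype(np.uint8)
Image.fromarray(out).save("hidden.png")                      # noisy color mess
Image.fromarray(out[..., 0]).save("through_red_filter.png")  # numeral visible
```

Viewed normally, hidden.png is a mess of colored noise; keeping only the red channel (the digital analogue of the red plastic filter) wipes out the camouflage and leaves the numeral. The PNG format matters: lossy JPEG compression would smear the channel separation that the effect depends on.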

Follow up question on dark matter and stars

Perhaps most stars everywhere are sited at dense wisps of dark matter to give the contraction a head start? After all, there is considerably more dark matter than ordinary matter, so it could be the determining factor.144.35.114.188 (talk) 15:45, 10 November 2017 (UTC)[reply]

Perhaps; perhaps not. What possible further response is there, given that no one knows anything about dark matter other than that it appears from indirect evidence to exist? As star formation was thought to be reasonably explicable before dark matter was conceived of, there would seem to be no need of your hypothesis (with apologies to Laplace). {The poster formerly known as 87.81.230.195} 90.200.138.27 (talk) 21:29, 10 November 2017 (UTC)[reply]
Yes but calculations based on no dark matter wisps vs. many dark matter wisps could indicate respective rates of formation. It has always seemed to me the current theory of star formation has been thought of as necessary rather than satisfactory.144.35.45.72 (talk) 21:56, 10 November 2017 (UTC)[reply]
This is not a forum or the place for speculation. Questions about articles or sources are more relevant than anyone's personal pet theories. That being said, this article The Bullet Cluster Proves Dark Matter Exists, But Not For The Reason Most Physicists Think was an interesting read, and seems to imply that dark matter may only play a very indirect role in the collision of interstellar gas. μηδείς (talk) 22:41, 10 November 2017 (UTC)[reply]
medeis. Along with your good contributions, you have made several disgusting, offensive, irrelevant comments here, made silly power trips over the years like the one you have just made, and also made false accusations. You are scarcely one to advise on what is appropriate in this venue. Anyway, I had good reason to ask my question here: I asked my so-called "pet theory" question for valid reasons: there is a good chance that some astrophysicist who volunteers here might know whether the suggestion is reasonable or know how to refute it, and because it has almost certainly been considered, perhaps briefly, in the astrophysics literature, and a reference could be provided to me... It is a shame that you are often so difficult. 144.35.45.72 (talk) 00:31, 11 November 2017 (UTC)[reply]
This is not a forum or the place for speculation or personal attacks. Questions about articles or sources are more relevant than anyone's personal pet theories (pet not being a "disgusting" or "offensive" term). That being said, this article The Bullet Cluster Proves Dark Matter Exists, But Not For The Reason Most Physicists Think was an interesting read, and seems to imply that dark matter may only play a very indirect role in the collision of interstellar gas. Please read that article, as it is the only source anyone has given, and even in good faith, so far. μηδείς (talk) 03:44, 11 November 2017 (UTC)[reply]

okay, thanks I guess.144.35.114.29 (talk) 20:42, 17 November 2017 (UTC)[reply]

Maybe the stars make dark matter 71.181.116.118 (talk) 23:37, 10 November 2017 (UTC)[reply]

That's not a bad idea! If that were true, and if we also supposed stars were the SOLE way of making dark matter, then since currently there's more dark matter than ordinary matter like stars etc., it would imply that long ago there was much, much more ordinary matter than there is now, and it has since mainly converted to dark matter. That might be testable (and already thought of and tested). 144.35.114.29 (talk) 20:41, 17 November 2017 (UTC)[reply]
Please feel welcome to speculate when asking questions; that's what we (should be) here for. You are apparently not the first to wonder this about dark matter: Dark star (dark matter) is more than just a fun movie. See [5] which says that axions and maybe WIMPs are not suitable for this sort of thing, but neutralinos might be... anyway, that article is confusing, and is intentionally untrackable back to a real source, but if you look at something like [6] you can see references back to a bunch of Paolo Gondolo papers in JCAP and one of them from 2010 supposedly should be the one that touches on this. I don't know what he's toking but it's laced with a whole lot of heavy mathematics, so you're in for a ride... Wnt (talk) 01:01, 13 November 2017 (UTC)[reply]
Thank you.144.35.114.29 (talk) 20:41, 17 November 2017 (UTC)[reply]

November 11

Salicylic acid and acetylsalicylic acid

What difference would it make if we took the first, instead of the second, for a headache?--Hofhof (talk) 01:06, 11 November 2017 (UTC)[reply]

And yet the ancient Greeks used it (in the form of willow bark) for headaches with no apparent ill effects. 2601:646:8E01:7E0B:0:0:0:EA04 (talk) 10:12, 11 November 2017 (UTC)[reply]
Sola dosis facit venenum! Rather a difference between concentrated solutions and small amounts in bark. Fgf10 (talk) 11:47, 11 November 2017 (UTC)[reply]
It's workable (and has a long history) to use salicylates from willows, but it's slightly unpleasant on the stomach. It was usually taken as a tea, made directly from the willow bark. Any more concentrated form of salicylic acid (see above) is wart remover and isn't consumed internally.
Nor is it a question of dose, it's a different compound. In fact, salicylic acid is so harmful that it wasn't taken directly (there was a brief Victorian period when it was, as it was much cheaper). The compound prepared from the tree is a sugar called salicin, and that is oxidised within the body to the acid form. However it's hard to produce large quantities of salicin cheaply. Salicylic acid was only taken as a drug for a brief period after the development of industrial pharmacy, when salicylic acid could be synthesised cheaply, without willows, and before the mighty Bayer of Germany invented Aspirin as a more acceptable form of it.
Charles Frédéric Gerhardt, one of the illustrious group of self-poisoning chemists, had first set out to synthesise a more acceptable form of salicylic acid and found that acetylsalicylic acid was suitable. However his synthesis wasn't very good and he gave up, thinking that the compound was wrong, rather than it just being his process. It was some decades before his original work was really proven to be right, by Felix Hoffmann at Bayer.
Note that WP's claim, "Aspirin, in the form of leaves from the willow tree, has been used for its health effects for at least 2,400 years." is just wrong (and a complete misunderstanding of a childishly simple and correct ref). But that's GA review and MEDRS for you - form over accuracy, every time. Andy Dingley (talk) 12:06, 11 November 2017 (UTC)[reply]
That claim was not supported by the source, so I've changed it. Dbfirs 12:54, 11 November 2017 (UTC)[reply]
First, salicylates have been used for at least 5000 years, as evidenced by Ur III, a tablet from Ur of the Chaldees from, if I recall correctly, roughly 300 years before the birth of the biblical patriarch Abraham. Here's one source. [7] Salicylate was available in other forms - beaver testicles notably concentrate it in a way that they would apply to toothaches and such, and were used by native Americans; there was even a myth in Europe from classical times that the beaver would castrate itself when hunters grew near to avoid capture. (see Tractatus de Herbis)
Second, the willow bark salicylates used 5000 years ago were far safer than the only ones allowed to be sold over the counter by responsible medical authorities today. [8] This is because salicin comes as a glycoconjugate that is not taken apart until after it passes through the stomach. By contrast, aspirin was invented a century ago by industrialists who noticed that the salicylates they sold were causing stomach injury, and who figured it was due to the acid, so (after first trying to "buffer" the acid, e.g. Bufferin) they put a simple acetyl group over the acid hoping to stop the damage. Same folks who brought you heroin as the non-addictive alternative to morphine (a racket that works to this day). Wnt (talk) 16:50, 11 November 2017 (UTC)[reply]
Thank you for that earlier history. I've added a brief mention to the article. Dbfirs 18:33, 11 November 2017 (UTC)[reply]
As willow bark has been mentioned, I think I will have a go at a bit of clarification. Pharmaceutical companies love to synthesize the most active component of any proven natural remedy and market it. Willow bark contains salicylic acid, but the 'therapeutic' dose of willow bark has far less salicylic acid per dose, so it does not cause the same problems as a therapeutic dose of pure salicylic acid or acetylsalicylic acid. The reason for this is that willow bark also contains other compounds that work synergistically, enhancing the therapeutic effect of the little salicylic acid that willow bark has per dose. This is why many users of willow bark swear blind that it is more effective than drug-store-bought acetylsalicylic acid pain killers. Placebo? Doctors take it rather than become dependent on anything stronger and addictive. Also many doctors are closet alcoholics. So even though willow bark is far from completely safe, for habitual drinkers it is better than acetylsalicylic acid, acetaminophen, ibuprofen, naproxen, etc. These drugs do the kidneys in quicker. Yet, since one's HCP can't earn money from writing 'scripts for willow bark – one ends up being prescribed a synthetic. And why not? Your doctor is running a business and he too, has to earn enough to put his kids through collage – and possibly be the first in the street to own a Tesla etc. Aspro (talk) 23:15, 11 November 2017 (UTC)[reply]
I wonder whether there are collage colleges. Akld guy (talk) 06:14, 12 November 2017 (UTC)[reply]
  • Rather than take the acid, why not consider the conjugate base of salicylic acid, in the form of something like Sodium salicylate? That avoids the rather obvious acid problem. Our current Sodium salicylate article does suggest a set of therapeutic uses. Klbrain (talk) 01:27, 13 November 2017 (UTC)[reply]
That was the idea behind "Bufferin", but it was generally incorrect. I think it is more accurate to say that cyclooxygenase enzymes (especially COX-1) in the stomach are needed to prevent injury, and if salicylate is absorbed there it will inhibit those enzymes. Wnt (talk) 22:37, 14 November 2017 (UTC)[reply]

where can I find literature on anhydrous acid-base equilibria?

It is really frustrating to me as a tutor of organic chemistry that everyone assumes that acid-base reactions always take place in water. I need more information about how to estimate the pKa of an organic compound in, say, ethanol or glacial acetic acid, given a pKa in water and the pKb of a conjugate base (and vice versa), as well as the autoionization constant of the target solvent. Also, how would I calculate the change in pKas for polar aprotic solvents? 98.14.205.209 (talk) 15:41, 11 November 2017 (UTC)[reply]

It might not be a solvable problem at this point. Acid_dissociation_constant is our main article, and its "Acidity in nonaqueous solutions" section notes:
These facts are obscured by the omission of the solvent from the expression that is normally used to define pKa, but pKa values obtained in a given mixed solvent can be compared to each other, giving relative acid strengths. The same is true of pKa values obtained in a particular non-aqueous solvent such as DMSO.
As of 2008, a universal, solvent-independent, scale for acid dissociation constants has not been developed, since there is no known way to compare the standard states of two different solvents.
The following ref (cited in that article section) has some information about comparing pKa in different solvents, especially with respect to different structural classes:
  • Kaljurand, I.; Kütt, A.; Sooväli, L.; Rodima, T.; Mäemets, V.; Leito, I; Koppel, I.A. (2005). "Extension of the Self-Consistent Spectrophotometric Basicity Scale in Acetonitrile to a Full Span of 28 pKa Units: Unification of Different Basicity Scales". J. Org. Chem. 70 (3): 1019–1028. doi:10.1021/jo048252w. PMID 15675863.
DMacks (talk) 16:31, 11 November 2017 (UTC)[reply]


(ec) It appears that despite the deceptively simple-looking equilibrium, pKa depends on both the solvent [9][10] and the ionic strength [11]. That IUPAC source mentions the Davies equation, Debye–Hückel theory, the Pitzer equation, and Specific Interaction Theory. The pKa also depends on temperature in a way that varies based on the class of compound, yet follows some empirical rules within them. [12] Certainly I don't know this topic, but I should put these up to get started. Wnt (talk) 16:32, 11 November 2017 (UTC)[reply]
OK, thank you. I am trying to define the scope of problems that I can cover with my students, and in many cases I have to know much more than my students would need to know (to ace their exams), because their education (and mine), I realize, sometimes seems to side-step certain problems with dogmatic assumptions. 98.14.205.209 (talk) 16:38, 11 November 2017 (UTC)[reply]

Systems of acid and non-conjugate base (e.g. ammonium bicarbonate, pyridinium dihydrogen phosphate, boric acid - acetate)

Why aren't systems like these covered as extensively in online pages? Almost every web page seems to stop at adding a strong base to a weak acid or a strong acid to a weak base, which is *really frustrating*. I suddenly realize that we didn't really cover many non-conjugate buffers in undergrad (the most we did was ammonium acetate, which to be honest is CHEATING, since pKa + pKb = 14 and it is really just a hidden version of the acid / conjugate-base problem). Basically we have a weak acid and a weak base whose pKas and pKbs do not add up to 14. Surely there must be a better way than having to brute-force it through a system of equations? 98.14.205.209 (talk) 16:34, 11 November 2017 (UTC)[reply]

The reason that things are covered less is that no one wrote about them. However, that may be because the topic is not WP:Notable in itself. For example, if I look for "pyridinium dihydrogen phosphate", nearly all hits are derivatives. The one that was not was an error. That suggests that it is not useful compared to anything else already known. Ammonium bicarbonate however is used as a buffer and there are numerous references as to its use, e.g. https://www.nestgrp.com/protocols/trng/buffer.shtml and a buffer calculator at https://www.liverpool.ac.uk/buffers/buffercalc.html Graeme Bartlett (talk) 22:00, 11 November 2017 (UTC)[reply]
Boric acetate is used as a buffer in this patent. SciFinder has about 10 hits for "pyridium phosphate", the result-set of which is annotated as being an uncertain ratio, and seem to have been studied as corrosion inhibitors. DMacks (talk) 22:21, 11 November 2017 (UTC)[reply]
TBE buffer is quite common in molecular biology - TAE buffer less so, but not unheard of. (The EDTA is just a preservative; these are buffers between Tris (pH 8) and borate or acetate.) Wnt (talk) 11:46, 12 November 2017 (UTC)[reply]
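On the "brute force" point: for a non-conjugate pair the system of equations collapses to a single charge-balance equation in [H+], which a one-dimensional root-finder solves directly. A sketch for 0.1 M ammonium bicarbonate (textbook pKa values; activity corrections are ignored, and the concentration is an arbitrary choice):

```python
# pH of 0.1 M ammonium bicarbonate by solving the charge balance
# [H+] + [NH4+] = [OH-] + [HCO3-] + 2[CO3^2-] numerically.
import math
from scipy.optimize import brentq

C = 0.10                           # mol/L of both total N and total carbonate
Kw = 1e-14
Ka_N = 10**-9.25                   # NH4+ <-> NH3 + H+
Ka1, Ka2 = 10**-6.35, 10**-10.33   # H2CO3 <-> HCO3- <-> CO3^2-

def charge_balance(h):
    oh = Kw / h
    nh4 = C * h / (h + Ka_N)              # ammonium speciation
    d = h*h + Ka1*h + Ka1*Ka2             # carbonate speciation denominator
    hco3 = C * Ka1 * h / d
    co3 = C * Ka1 * Ka2 / d
    return h + nh4 - (oh + hco3 + 2*co3)  # zero at equilibrium

h = brentq(charge_balance, 1e-14, 1.0)    # bracket guarantees a sign change
print("pH =", round(-math.log10(h), 2))   # -> about 7.8
```

The result, pH ≈ 7.8, matches the usual shortcut pH ≈ (pKa1 + pKa,NH4+)/2 = (6.35 + 9.25)/2 for an amphoteric salt of this kind, which is about as close to a closed form as these systems get.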

AXLE FLUX GENERATOR OUTPUT.

[Image: Rotor from a claw pole alternator]

The permanent magnets used in constructing Axle Flux Generators are always arranged to have alternate poles such as: N-S-N-S-N-S etc. What would be the effect on the output waveform if I used similar poles such as: N-N-N-N-N etc. — Preceding unsigned comment added by Adenola87 (talkcontribs) 16:59, 11 November 2017 (UTC)[reply]

An axial flux generator? The usual source for building advice on these (small scale wind turbines) is Hugh Piggott's books or website. You need to alternate the magnets, so that there is a changing flux through the coils. If there is no change of flux, then there's no output.
A long-established design is the claw pole alternator. This uses a single field coil (so the flux in the armature is always in the same direction) and has sets of interleaved pole pieces from each end, so that it has the effect of a reversing field. Andy Dingley (talk) 19:10, 11 November 2017 (UTC)[reply]
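To see the waveform difference concretely, here is an idealized toy sketch (the sinusoidal per-magnet flux and the ripple depth between like poles are assumptions, not measurements):

```python
# Toy model: flux through one stator coil vs. rotor angle, for alternating
# N-S poles and for all-N poles.
import numpy as np

theta = np.linspace(0, 2*np.pi, 2000)   # one rotor revolution
p = 8                                   # number of rotor magnets
depth = 0.2                             # assumed flux dip between like poles

flux_alt  = np.cos((p/2) * theta)                      # reverses sign each magnet
flux_same = 1 - depth * 0.5 * (1 - np.cos(p * theta))  # pulses, never reverses

# EMF is proportional to -dPhi/dt; at constant speed, -dPhi/dtheta.
emf_alt  = -np.gradient(flux_alt,  theta)
emf_same = -np.gradient(flux_same, theta)

print(np.abs(emf_alt).max(), np.abs(emf_same).max())   # ~4.0 vs ~0.8 here
```

With alternating poles the flux reverses sign every magnet, so the coil sees a full AC waveform; with identical poles the flux never reverses, and as the like-pole fields merge (depth -> 0) the output vanishes, which is the point above about needing a changing flux.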
I assume you mean axial flux. There's not much to be had with novel configurations compared with the state of the art; you can buy off-the-shelf axial flux motor kits for a couple of thousand dollars that are 97% efficient. http://www.ata.org.au/wp-content/uploads/marand_high_efficiency_motor.pdf The CAD drawing on P9 of that presentation was my original package layout from 1994/5. Greglocock (talk) 19:23, 11 November 2017 (UTC)[reply]


Rifles with horizontal magazines

Unlike most guns with box-like magazines, some have cartridges kept in horizontal pipe-shaped magazines fitted just below their barrels (mostly "lever-action" or "pump-action" rifles). Such an arrangement may be all right for shotguns, which always use cartridges with flat front ends, but in rifles the cartridge's front end is never flat: it may not be sharp (like an AK-47's), and is rounded to some extent, but is still narrow enough to act as a firing pin against the cartridge in front of it, whose most sensitive part (the primer cap) rests against the bullet tip of the cartridge behind. Is this arrangement not considered risky? Besides, the gun may also receive some unexpected jerk, etc.?  Jon Ascton  (talk) 17:25, 11 November 2017 (UTC)[reply]

It's called a tube magazine, and it's used on lever-action rifles like the Winchester Model 94. There are a lot of videos of people fooling around with this configuration trying to set off a chain reaction in the tube. Conventional wisdom is that pointy bullets are dangerous in tube magazines, and that all rounds for such rifles should use blunt-headed shapes and soft alloys. Hornady makes a plastic-capped pointy round that's supposed to be safe, but most opinions seem to be that the added ballistic performance isn't worth the cost of the ammunition - lever-action rifles aren't really made for long-range fire, so the blunt ballistics make no real difference at ranges for which such guns are normally used. Acroterion (talk) 18:10, 11 November 2017 (UTC)[reply]
Cartridges that are detonated by pressure to their rim, such as common .22 caliber (0.22 in = 5.6 mm) varieties, are safer under pressure to their rear center from another cartridge's tip in a tube magazine than Centerfire ammunition would be. Blooteuth (talk) 00:09, 12 November 2017 (UTC)[reply]
Rifle primers are designed to allow for a light strike. For example, an M16 rifle has a free-floating firing pin. When the bolt is retracted and closed as part of the cycle, the firing pin can strike the primer, so the spec for the primer must allow for a light strike. The hammer spring must be sufficiently strong to project the firing pin into the primer. Pistol primers, however, are much more sensitive. Firing pins for pistols have many more safeguards to prevent any strike. --DHeyward (talk) 06:24, 12 November 2017 (UTC)[reply]
Primers are constructed to ignite only from a specific, fast and powerful impact of a firing pin with a distinctive shape, made out of high-alloy steel. See here a video of some gentlemen trying hard to ignite primers with pointy bullets in a tube. --Kharon (talk) 06:38, 12 November 2017 (UTC)[reply]
Tube magazines were quite common on early military repeaters as well, such as the Jarmann M1884, the Mauser 1871/84, the Krag–Petersson, the Kropatschek, the Murata Model 22, the Lebel Model 1886, the various Vetterli, and so on and so forth. While quite a few of those used blunt or rounded bullets and/or rimfire ammunition, some used spitzer bullets and centerfire ammunition with no major issues. WegianWarrior (talk) 07:02, 12 November 2017 (UTC)[reply]
Also see these three videos on YouTube. WegianWarrior (talk) 15:55, 12 November 2017 (UTC)[reply]

November 12

Etymology of the word 'male'

This has been moved to Wikipedia:Reference desk/Language. ←Baseball Bugs What's up, Doc? carrots→ 12:44, 12 November 2017 (UTC)[reply]

Made that a link to the specific section. --69.159.60.147 (talk) 23:03, 12 November 2017 (UTC)[reply]

November 13

Planet Venus and Jupiter conjunction.

How far apart are these two planets as we see them today? Their apparent closeness looks amazing, but what's the truth? Richard Avery (talk) 08:52, 13 November 2017 (UTC)[reply]

Stellarium tells me that the separation right now is about 15 arcminutes, about half the angular diameter of the moon. --Wrongfilter (talk) 09:10, 13 November 2017 (UTC)[reply]
About 833,500,000 kilometres: they have lined up because Venus is between Earth and the Sun, and Jupiter is roughly on the opposite side of the Sun to both Venus and Earth. [13]. They are not really close - just in line with each other. Wymspen (talk) 09:32, 13 November 2017 (UTC)[reply]
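For scale, converting that quoted figure to astronomical units:

```latex
\frac{8.335\times 10^{8}\ \mathrm{km}}{1.496\times 10^{8}\ \mathrm{km/AU}} \approx 5.6\ \mathrm{AU}
```

That is roughly five and a half times the Earth–Sun distance, even though the two planets sit within a quarter of a degree of each other on the sky.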
Thank you Wymspen, exactly what I wanted. Sorry Wrongfilter I did not make my question quite clear enough. Richard Avery (talk) 15:08, 13 November 2017 (UTC)[reply]
The Daily Telegraph feature Night Sky in November mentioned the distance "half the diameter of the moon" but didn't mention the arcminutes - but then the moon is half a degree wide and it moves through a distance equal to its own diameter every hour. It was a cloudless morning, and the moon was up for comparison. I have a wide uninterrupted view down to the eastern horizon and I was out an hour before sunrise - but I forgot to look. As far as separations go, every so often there is an occultation of a bright star by the moon (occultations of planets are far rarer). The latest was of Aldebaran between 02:30 and 03:21 (GMT) on 6 November. The distance here is many light years. It happened to be cloudy, but even if it hadn't been I would have missed it because although the feature covers the whole month it is published on the first Monday. 82.13.208.70 (talk) 15:31, 13 November 2017 (UTC)[reply]
One of the most spectacular sights I have seen was a conjunction some 10+ years ago of Venus, Jupiter, and the crescent Moon shortly after sunset. Since you knew the focus of the crescent moon was the sun, by definition, and that Venus, the Earth, and Jupiter were all in the plane of the Zodiac ecliptic you could actually see the solar system in three full dimensions, rather than just as dots on a field. It was awe-inspiring. μηδείς (talk) 22:44, 13 November 2017 (UTC)[reply]
Sorry to nitpick but technically the zodiac's the band where the planets can be (+/-8+°) and the plane's the ecliptic. Sagittarian Milky Way (talk) 03:46, 14 November 2017 (UTC)[reply]
Not at all, that's an important and highly relevant correction, thanks. μηδείς (talk) 03:55, 14 November 2017 (UTC)[reply]
Ha! I see Jupiter and Venus rose at six o'clock, which explains why I didn't see anything. Small nitpick: Medeis is referring to this [14]. An added bonus is that the moon passed in front of Venus at 16:15 (GMT) on the Monday night, 8 December 2008. Now, if the three luminaries really had all been on the ecliptic they would have been in a straight line. The moon is only on the ecliptic when eclipses occur (hence the name). You can see from List of solar eclipses in the 21st century that she wasn't there on that occasion. 82.13.208.70 (talk) 11:35, 14 November 2017 (UTC)[reply]
I did also see the 2008 conjunction, but the crescent moon was not between and above the two on that occasion. The conjunction I am thinking of was in 2003 or 2004 IIRC and definitely not in December, and the moon was visible between but above the two planets, i.e., above the ecliptic. μηδείς (talk) 16:36, 14 November 2017 (UTC)[reply]
The moon is only on the ecliptic when eclipses occur (hence the name). Not quite. The ecliptic is where eclipses can happen. The moon crosses the ecliptic twice a month; an eclipse happens if that crossing coincides with a syzygy. —Tamfang (talk) 08:35, 15 November 2017 (UTC)[reply]
What's so special about planetary "conjunctions"? They have already happened a million times and they are not even interesting for a Hohmann transfer. The only "profession" that finds interest in them is astrology, aka pseudoscience. --Kharon (talk) 13:35, 14 November 2017 (UTC)[reply]
They are beautiful. ←Baseball Bugs What's up, Doc? carrots→ 15:11, 14 November 2017 (UTC)[reply]
[ec] μηδείς has already explained that to you. HenryFlower 15:12, 14 November 2017 (UTC)[reply]
Humans have emotions and an appreciation of aesthetic concerns. That's why they are special. --Jayron32 15:27, 14 November 2017 (UTC)[reply]
Humans have emotions - [citation needed] TigraanClick here to contact me 15:32, 14 November 2017 (UTC)[reply]
Of course I have seen conjunctions many times, and they are not beautiful in the sense of a flower or a colorized Hubble image, but this one was spectacular. I am a particularly visual thinker, and given that the Earth, Venus, Jupiter, and the sun (which had set, but whose position was obvious) defined a plane, and the moon was above that plane, instead of seeing some dots on the flat sky it was immediately clear to me that I was seeing a portion of the solar system in three dimensions, which is not normally obvious, and is a very rare event. μηδείς (talk) 16:36, 14 November 2017 (UTC)[reply]

November 14

USGS measurements

How is the United States Geological Survey able to measure earthquake magnitudes around the world? That is, do they have their own stations across the world, or do they measure indirectly at home, deducing the magnitude from available data? Thanks.--212.180.235.46 (talk) 09:00, 14 November 2017 (UTC)[reply]

Instruments in the US, others around the world, and international agreements to share data - see National Earthquake Information Center Wymspen (talk) 09:51, 14 November 2017 (UTC)[reply]

Greenhouse effect back-radiation

I looked at this article and the talk, but didn't get the answer I want. A quite basic calculation (*) shows that, if the greenhouse effect were absolutely perfect - the atmosphere absorbing each and every parcel of energy from the surface (it doesn't matter whether it is absorbed through conduction, convection, phase transition, radiation or whatever) - then back-radiation (let's call it B) peaks at a maximum A + C, where A = absorbed by the atmosphere (77.1 according to the picture in the article) and C = absorbed by the surface (163.3, same source), giving A + C = 240.4. BUT B is supposed to be 340.3 (same source), 100 higher than the calculated maximum.

Well, I don't expect NASA to be that wrong, and I think any error would have been long corrected, so I have to suppose that somehow back-radiation is currently HIGHER than in a perfect greenhouse-effect world. My question is: how?


(*) We are looking for a steady-state, at-equilibrium, stable (things get back there if some noise disturbs the system) solution. I leave you the easy calculation to get there; it yields the only solution -- nothing else works.

  • The surface receives C directly and A+C from back-radiation, for a total of A+2C, which is then all sent up, so the surface is at equilibrium.
  • The atmosphere gets A directly, plus those A+2C from the surface, for a total of 2A+2C; half of it (A+C) goes down (the same as the back-radiation used just above, sanity check OK), and half of it (A+C) goes up (which is just as much as was absorbed, sanity check OK).

185.24.186.192 (talk) 11:42, 14 November 2017 (UTC)[reply]
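In symbols (a restatement of the calculation above, with U the total flow up from the surface, B the back-radiation, and E the atmosphere's emission to space):

```latex
\text{surface: } C + B = U, \qquad \text{atmosphere: } A + U = B + E
```

Together these give E = A + C, as they must (what escapes to space equals what came in). The hidden extra assumption behind the A + C maximum is that the single layer radiates equally up and down, B = E, which forces B = A + C = 240.4. The replies below turn on why a real atmosphere does not satisfy B = E.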

[Schematic: The greenhouse effect]
Does the simplified schematic from greenhouse effect help? The greenhouse effect is based on a circular flow of energy trapped in the system (i.e. heat). If you look at the schematic, the total energy entering each level is equal to the total energy leaving each level, which corresponds to an equilibrium. (There is actually a slight imbalance these days due to global warming.) However, it is not the case that the back-radiation must equal the total radiation from the sun. The amount of back-radiation depends on the temperature of the atmosphere. Similarly, the amount of energy transfer from the surface depends on the temperature of the surface. The surface and atmosphere will warm up until they reach a temperature where the energy flows out equal those coming in. The warm temperatures at the surface are maintained, in part, by a circular flow of energy which we know as the greenhouse effect. The energy flows from surface to atmosphere and back again happen to be larger than those from the sun, but that isn't a problem as long as we are talking about a closed loop. Dragons flight (talk) 11:58, 14 November 2017 (UTC)[reply]
Thanks, but no, it doesn't help at all: the figures are only slightly different (67 + 168 = 235 vs. 324 BR, instead of 77 + 163 = 240 vs. 340), but share the same issue.
There is equilibrium in each level indeed, and you would have the same equilibrium at each level by adding just any value, positive or negative, to both back radiation and upward radiation. Subtract 324 from back radiation (putting it at zero), and also 324 from upward radiation (down from 452 to 128), and it still works. Add another 324 to back radiation (putting it at 648) and also 324 to upward radiation (up from 452 to 776), and it also works. Well, no, it doesn't. The system is then, in both cases, out of equilibrium (even though each level is at equilibrium). A zero back radiation would also mean zero up radiation from the atmosphere, so it would warm up and emit more and more back radiation, until reaching the equilibrium value. Similarly, a 648 back radiation is way too much, meaning huge loss to space, cooling down the atmosphere and lowering back-radiation, until the equilibrium is reached.
The point is, basic (too basic?) calculation puts the said equilibrium at a maximum of 240 (or 235, depending on the schematic) in the perfect GHE case, while each schematic says that in a NON-perfect GHE case, back-radiation is much higher, when it should be lower (nothing can beat the perfect GHE scenario).
185.24.186.192 (talk) 13:39, 14 November 2017 (UTC)[reply]
It's just a very simplified model representation, and you added elements which are not in that simple model. One result of that is of course that the numbers in the model no longer add up, because you changed the "formula" that model is using (to result in equilibrium). Find another model that contains your elements, or "manufacture" a model yourself (which you already kind of tried, incorrectly, with your question). --Kharon (talk) 14:01, 14 November 2017 (UTC)[reply]
The elements I added ARE in that simple model - they are taken from the Wikipedia article or from the schematic provided on the talk page.
I may be wrong; indeed I asked "how", so your answer "you are wrong" is just not an answer...
185.24.186.192 (talk) 21:40, 14 November 2017 (UTC)[reply]
Perhaps it is unclear, but the radiation from the surface and the atmosphere is determined by the temperature of each component, not the flux. So, you can't just put in random values without also changing those temperatures (flux emitted is roughly proportional to T^4). Why do you believe 240 is the maximum? It's not. Let's consider a different analogy. Consider an oven. It consists of a heating element, some food you want to cook, and an insulated box. If you want to maintain a constant temperature, then the heat being put into the heating element must equal the heat leaking out of the insulated box. If the insulation is pretty good, then not much energy is leaking, so the flux necessary to maintain a constant temperature is low. However, the flux of energy being radiated between the food and the box and back will be much higher. That's because the inside of the box can get much hotter than the outside. If the insulation were nearly perfect, you could imagine the oven getting ridiculously hot and the internal energy fluxes between the food and the box getting arbitrarily large. This is true even if the heating element is only providing a relative trickle of new energy, since the heat can build inside until an equilibrium is achieved. It's the same with the greenhouse effect in planetary atmospheres. The sun provides new energy, which at equilibrium counters the losses, but the internal transfers of energy can become much larger than the source flux depending on the characteristics of the atmosphere. For a thin atmosphere (like Mars) nearly all surface radiation escapes directly to space, the back-radiation is very low, and the temperature enhancement is negligible. For a thick atmosphere (like Venus), essentially all surface radiation is captured by the atmosphere, the back-radiation is enormous, and the temperature enhancement is huge. Earth happens to lie in between these extremes. Dragons flight (talk) 16:27, 14 November 2017 (UTC)[reply]
More food for thought here, thanks.
The radiation from the surface and the atmosphere is determined by the temperature of each component, but the flux determines the temperature: a higher flux in or out respectively warms or cools the element until flux in and out balance again.
Your oven analogy is perfect. Even a perfect insulation box radiates energy out because of its own temperature, and this temperature will increase until radiation out perfectly matches the radiation received by the insulation box from inside. And you can even calculate it, and that is just what I did:
The heating element brings C, heating the insulating box until its temperature rises to the appropriate level to radiate out C, no more, no less; A is zero (no direct heating of the insulating box, neither from the outside nor from the heating element inside); the insulating box also radiates C back into the oven (back-radiation B = C), because otherwise it would either cool or warm (if it were more or less), so the food actually gets B+C=2C heating (C from the heating element + B=C back-radiation), which it also sends back to the insulating box (so the box receives 2C, sends C out and C back in: balance respected), and everything balances perfectly, and stays so because this is a stable equilibrium. So the oven doesn't get ridiculously hot inside, the maximum heating being A+2C, as calculated above, with A=0 in your oven case.
And that's why I believe 240 is the maximum back-radiation: because calculation shows it to be. It is not a "random value". It is the absolute maximum in the most perfect insulation case (unless something is wrong here, but what?).
Now, I understand your point that, the surface temperature being more or less known, the surface upward radiation cannot be very different from 452, and so the back-radiation must be whatever is needed to balance things out, and that's 324 from your schematic. Higher than 235.
Well, the only sensible conclusion is that the atmosphere is better than a simple insulation layer: a heat pump. Heat pumps exist, we build some, so why not nature? But I don't see how this works nor where it would pump heat from, and it is not explained in Wikipedia, if it were so. Back to the start: how is this possible?
185.24.186.192 (talk) 21:58, 14 November 2017 (UTC)[reply]
The insulating box doesn't radiate at the same rate inwards and outwards. 93.136.80.194 (talk) 08:20, 15 November 2017 (UTC)[reply]
I think you are right, but this doesn't explain why, and this actually is just another way to put my initial question: why would the insulating box (a perfectly absorbing atmosphere, chock-full of GHG) radiate at different rates inwards and outwards?
185.24.186.192 (talk) 11:58, 15 November 2017 (UTC)[reply]
Imagine a box made of two thin shells. Each shell is perfectly absorbing and radiates at the same rate inwards and outwards. When the inner shell receives 1 unit of energy, 0.5 is backradiated and 0.5 is sent to the outer shell. Of the latter 0.5, 0.25 is radiated out and 0.25 is backradiated onto the inner shell. Of that 0.25, 0.125 is radiated inside (total for inside is 0.625 now), and 0.125 is backradiated onto the outer shell, and so on. In the end, 2/3 of the energy is backradiated and 1/3 is let through outside. If you add more shells, you can make the fraction radiated out as small as you want.
If this box has reached equilibrium, the amount of heat radiated to the outside is equal to the amount being received by the system. But to get to that point, the box contents might have received far more energy than it could radiate for a long time, and this would have caused an arbitrarily large buildup of energy. The system may receive 1 W and radiate 1 W, but that doesn't preclude that there's 200 W bouncing off the box's inner walls (and that doesn't necessarily imply that the box has been heated to its capacity as an insulator and will start to disintegrate and radiate out much more than its usual fraction). 93.136.80.194 (talk) 19:13, 15 November 2017 (UTC)[reply]
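A quick numerical check of that two-shell bookkeeping (a sketch; the half-and-half split is the idealized assumption that each shell radiates equally inward and outward):

```python
# Follow one unit of energy bouncing between two perfectly absorbing shells,
# each re-radiating half of whatever it absorbs inward and half outward.
inward_total = 0.0    # returned onto the box contents ("back-radiation")
outward_total = 0.0   # escapes past the outer shell
packet = 1.0          # energy currently being absorbed by the inner shell
for _ in range(100):             # far more bounces than needed to converge
    inward_total += packet / 2   # inner shell: half goes back inside...
    up = packet / 2              # ...half goes on to the outer shell
    outward_total += up / 2      # outer shell: half escapes to space...
    packet = up / 2              # ...half returns to the inner shell
print(inward_total, outward_total)   # -> 0.666..., 0.333...
```

With N such shells the escaping fraction drops to 1/(N+1); combined with the contents re-emitting whatever comes back, the circulating flux builds up until equilibrium, which is how steady-state back-radiation can dwarf the original input.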
(indent out)
I see, but, as you point out, this requires 2 (or more) PERFECT boxes, not a single perfect one.
If the two boxes are not perfect, but rather 2 imperfect ones, each of them absorbing half of the incoming energy from innerward, so that the multilayer system is still perfect, what happens? Is there any multiplicative effect?
For "ground": initial heating: C; back-radiation from the inner layer to the bottom: C; total emission: 2C, of which C goes to the inner layer and C to the outer layer.
For the outer layer: directly from ground: C; sent downward: C; radiated outward: C; received from the inner layer: C.
For the inner layer: directly from ground: C; radiated downward: C; sent to the outer layer: C; received from the outer layer: C.
No multiplicative effect. A perfect box is a perfect box, whether it is single-layered or multilayered to achieve perfection. You can change the number of layers to infinity, change the ratio received by each layer; no matter what, you cannot beat perfection.
Well, you can, but you need some sort of heat pump, pumping energy from the outer layer(s) to the inner layer(s).
However, you made me think of a real engine able to power such a heat pump, and it is gravity, powering the lapse rate. The lapse rate allows the top of the atmosphere to be at a lower temperature than the bottom, so it allows higher emission downward than upward. It is starting to make better sense.
It is already stated in the relevant article that GHE is a misnomer; I now know it is a double misnomer: the lapse rate is involved, despite not being mentioned (methinks it should be, but I guess fixing the article is not that easy).
Thanks; consider the question answered.
185.24.186.192 (talk) 11:21, 16 November 2017 (UTC)[reply]
The "perfect" multilayered box you describe does not exist, because radiation cannot "skip" layers. At each layer it is absorbed and dissipated in all directions, including back, so naturally less energy reaches the outer layers. Besides, what you're talking about wouldn't describe the Earth's atmosphere, because it simply wouldn't be an insulator; the Earth's atmosphere's lapse rate proves that it does insulate. 93.136.10.152 (talk) 20:35, 16 November 2017 (UTC)[reply]

November 15

Positronium diameter

In the book "Parallel Worlds" Michio Kaku writes that in the Dark era of the universe intelligent life might survive by being based on positronium atoms which would be 10^12 parsecs in diameter. How come these atoms would be so huge when Wikipedia says that nowadays they're the size of an ordinary hydrogen atom? 93.136.80.194 (talk) 08:13, 15 November 2017 (UTC)[reply]

When positronium is in an excited state it becomes bigger. It does decay, but the higher the state, the longer its lifetime. It does not have to be so big to last a long time. This would be termed a Rydberg atom. Some may combine together to form Rydberg matter. A solid chunk of positronium matter based on this would be less dense than air. Graeme Bartlett (talk) 12:31, 15 November 2017 (UTC)[reply]
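For scale, a rough back-of-envelope using that Rydberg scaling (positronium's reduced mass is half the electron mass, so its ground state is about twice the Bohr radius, and the size of state n grows as n²):

```latex
r_n \sim n^2 \cdot 2a_0, \qquad
n \sim \sqrt{\frac{10^{12}\ \mathrm{pc}}{2a_0}}
  \approx \sqrt{\frac{3.1\times 10^{28}\ \mathrm{m}}{1.06\times 10^{-10}\ \mathrm{m}}}
  \approx 1.7\times 10^{19}
```

So Kaku's figure corresponds to a principal quantum number of order 10^19; ground-state positronium is indeed hydrogen-sized, as the Wikipedia article says.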
Let me try to understand your question: In a <book of fiction> the author writes about <some concept they made up by plucking an existing scientific-sounding word out of the air> and now you want us to explain it? You'd have to ask the author. It's their imagination. Explaining the fictional scientific concepts in real science terms is always a futile exercise. --Jayron32 13:32, 15 November 2017 (UTC) Sorry for the misunderstanding. Carry on. --Jayron32 16:06, 15 November 2017 (UTC)[reply]
FYI, Parallel Worlds is intended as a work of popular science non-fiction. That said, I don't know the answer to the IP's question or whether he is accurately describing what is presented in the book. Dragons flight (talk) 14:46, 15 November 2017 (UTC)[reply]
The book is on archive.org (apparently legally); search for 'positronium'. Positronium#Natural occurrence also mentions this, with a link to a paper. Basically, they are talking about the distant future when the density of matter in the Universe is extremely low and after nucleons (protons and neutrons) have decayed away (if protons do decay). In such an environment huge positronium "atoms" can be stable over a long time scale (small positronium atoms would annihilate quickly) and seem to be the only thing that is still around, if this scenario is correct. --Wrongfilter (talk) 15:56, 15 November 2017 (UTC)[reply]
So arbitrarily large atoms can be created? Why 10^12 pc then? 93.136.80.194 (talk) 19:52, 15 November 2017 (UTC)[reply]
"Atom" is a funny word here, and it depends on what you mean by an "atom". Positronium has some properties like an atom, in that it is metastable enough at current conditions to be studied, it forms chemical bonds with other atoms, etc. Indeed positronium hydride has been created long enough to be studied; the half-life of positronium being longer than some transuranium isotopes. But it isn't really an "atom", if you mean "A group of nucleons surrounded by an electron cloud". What it is is an electron and positron with enough quantum pressure to keep them in the same general area long enough to have consistent properties. The question being asked (and answered) by the 10^12 parsecs answer is something akin to "at what distance will a bound electron-positron pair be such that the quantum pressure keeping them apart would be sufficient to prevent them from collapsing together and annihilating?" and apparently that answer is "a trillion parsecs" I don't know the specifics of the math here, but that's how I interpret the result. Now, since this thing would only really be able to exist in a state that large if there were literally nothing else left to interact with it in ways that may disrupt its stability, that would be a very empty universe indeed. But I think that's the point, the author is looking for some sort of matter which would still exist. As long as you have matter, you can store information, and if you can store information, you're not yet at the end of time. --Jayron32 20:16, 15 November 2017 (UTC)[reply]
I see, thanks. 93.136.80.194 (talk) 20:33, 15 November 2017 (UTC)[reply]

Baked Beans

Question posed by a blocked user. ←Baseball Bugs What's up, Doc? carrots→ 19:29, 15 November 2017 (UTC)[reply]
The following discussion has been closed. Please do not modify it.

It is well known that baked beans can cause flatulence. According to the article this is "due to the fermentation of polysaccharides (specifically oligosaccharides) by gut flora, specifically Methanobrevibacter smithii. The oligosaccharides pass through the small intestine largely unchanged; when they reach the large intestine, bacteria feast on them, producing copious amounts of flatus."

1) Of the carbohydrate content of baked beans, what percentage is actually polysaccharides? For example, this can from Heinz says 11.4g of carbohydrate per 100g. How much of that is polysaccharides?

2) When the polysaccharides are feasted on by bacteria, how much of it gets absorbed by the human body or wasted?

Thanks 91.47.17.210 (talk) 10:09, 15 November 2017 (UTC)[reply]

See "Polysaccharide from Dry Navy Beans, Phaseolus vulgaris: Its Isolation and Stimulation of Clostridium perfringens", [15], a wonderful research paper that discusses both the polysaccharide content of a few bean varieties, and also gives measurements for how much gas is produced. What a world! SemanticMantis (talk) 17:34, 15 November 2017 (UTC)[reply]

Baked beans and polysaccharides

I read something recently that made me wonder about baked beans and flatulence. It is well known that baked beans can cause flatulence. According to the article this is "due to the fermentation of polysaccharides (specifically oligosaccharides) by gut flora, specifically Methanobrevibacter smithii. The oligosaccharides pass through the small intestine largely unchanged; when they reach the large intestine, bacteria feast on them, producing copious amounts of flatus."

The questions are:

1) Of the carbohydrate content of baked beans, what percentage is actually polysaccharides? For example, this can from Heinz says 11.4g of carbohydrate per 100g. How much of that is polysaccharides?

2) When the polysaccharides are feasted on by bacteria, how much of it gets absorbed by the human body or wasted?

Thanks, SemanticMantis (talk) 19:34, 15 November 2017 (UTC)[reply]

I have found a suitable reference on the topic, but I'm curious to see what anyone else can dig up on question 2). "Polysaccharide from Dry Navy Beans, Phaseolus vulgaris: Its Isolation and Stimulation of Clostridium perfringens", [16], a wonderful research paper that discusses both the polysaccharide content of a few bean varieties, and also gives measurements for how much gas is produced. What a world! SemanticMantis (talk) 19:34, 15 November 2017 (UTC)[reply]
  • WP:POINTY "When one becomes frustrated with the way a policy or guideline is being applied, it may be tempting to try to discredit the rule or interpretation thereof by, in one's view, applying it consistently. Sometimes, this is done simply to prove a point in a local dispute. In other cases, one might try to enforce a rule in a generally unpopular way, with the aim of getting it changed.
"Such behavior, wherever it occurs, is highly disruptive and can lead to a block or ban. If you feel that a policy is problematic, the policy's talk page is the proper place to raise your concerns. If you simply disagree with someone's actions in an article, discuss it on the article talk page or related pages. If mere discussion fails to resolve a problem, look into dispute resolution.
The user for whom you have made yourself a proxy can always post under his real identity 60 days from the time he was blocked. Hopefully that addresses your problem, SemanticMantis. μηδείς (talk) 04:03, 16 November 2017 (UTC)[reply]
I had figured it was the banned user Light Current. ←Baseball Bugs What's up, Doc? carrots→ 04:13, 16 November 2017 (UTC)[reply]

Off topic shouting. Take it to the talk page if you must SemanticMantis (talk) 13:41, 16 November 2017 (UTC)[reply]

It's already on the talk page, and it is on-topic. ←Baseball Bugs What's up, Doc? carrots→ 15:31, 16 November 2017 (UTC)[reply]

November 16

"After"-Talk

Is it just a myth, or a fact, that certain types of bugs can make it possible to listen to what was said in a room up to an hour after the talk has ended? I mean the device wasn't there when the conversation took place and was installed, say, many minutes after the talkers had left the premises.  Jon Ascton  (talk) 05:13, 16 November 2017 (UTC)[reply]

Just a myth. Perhaps someone can link to a reference that debunks this fanciful notion? Dbfirs 08:43, 16 November 2017 (UTC)[reply]
(edit conflict) It is hard to debunk the general concept of an after-talk listener - there is no rock-solid physical principle that says you cannot, as there would be for a claim that you can listen before the talk happens. But any specific implementation I can imagine is easily debunked.
For instance, "picking up the attenuated sound waves bouncing off the walls with a very sensitive microphone" is next to impossible: (1) since the sentence spoken at time t is still bouncing around when the sentence at t+Δt is spoken, recovery would need a whole lot of deconvolution that may or may not be possible and would surely worsen the signal-to-noise ratio; (2) except at the resonant frequencies of the room, sound attenuates quite fast, i.e. the Q factor is low (test: shout at the top of your lungs and listen for anything a few seconds after you stop; you hear nothing, which means the decibels drop fairly quickly); (3) microphones are not much more sensitive than the human ear and are far less sophisticated as far as signal processing goes (see e.g. [17], [18]), so if you cannot hear something, it is usually a good guess that a microphone next to you cannot either. (I remember someone saying that the acoustic noise generated by air at room temperature is not far below the threshold of human hearing and some people with Hyperacusis can hear it, but I could not track down a source to that effect - can anyone else, or is that just another myth?) TigraanClick here to contact me 09:07, 16 November 2017 (UTC)[reply]
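To put rough numbers on point (2) (a sketch of my own; the RT60 figure is a made-up but generous assumption, not a measured value): reverberation time RT60 is the time it takes sound in a room to decay by 60 dB, so even a very "live" room with RT60 = 2 s loses 30 dB every second:

```python
# Reverberant sound decays roughly exponentially; RT60 is the time
# for a 60 dB drop. Assume a generously echoey room: RT60 = 2 s.
rt60_s = 2.0
decay_db_per_s = 60.0 / rt60_s

start_db = 90.0          # a very loud conversation
for t in (1, 10, 60):    # seconds after the talkers stop
    level = start_db - decay_db_per_s * t
    print(f"after {t:>2} s: {level:7.0f} dB")
# After a minute the residual 'signal' is ~1800 dB below where it
# started - absurdly far beneath any conceivable microphone noise floor.
```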
Methinks it would require some kind of Echo chamber. But unless it could reverberate for a very long time, the OP's concept wouldn't work. Also, you'd likely have echoes of different parts of the conversation going on all at once, and it would require some tedious work to separate it out. ←Baseball Bugs What's up, Doc? carrots→ 09:26, 16 November 2017 (UTC)[reply]
The events were recorded when they occurred (and could then have been relayed later). There are many ways to record people of which they are not necessarily aware... The belief, however, reminds me of "wall memory", a spiritualist belief that objects, walls and houses could hold memories which they might echo back later in particular circumstances, used to explain paranormal encounters etc. —PaleoNeonate – 10:55, 16 November 2017 (UTC)[reply]
The device doesn't have to be inside the room to hear what's going on. See laser microphone. 82.13.208.70 (talk) 11:16, 16 November 2017 (UTC)[reply]
Perhaps the users of such devices spread rumours about recording with a device installed long after the event, just to hide how sensitive their real-time devices really are. Dbfirs 12:27, 16 November 2017 (UTC)[reply]
Oh, it's a myth, but it would be useful if you could provide a source/link for the original claim. It would be much easier for us to provide sources to debunk a particular and specific assertion, rather than just throwing open the field to try to prove a general negative.
For a thought experiment, though, consider that the speed of sound is about 300 meters per second, and a good-sized meeting room might be 10 meters across. In one second, a sound originating from a point in the room will have bounced back and forth off the walls of the room 30 times. (It's even worse if you remember that rooms have ceilings about 3 meters high; that's 100 floor-ceiling bounces per second.) A minute later, a single short, clear sound will have bounced off at least a couple of thousand surfaces, getting spread out, attenuated, and jumbled into a bit of molecular jiggling indistinguishable from heat. A hard surface like concrete, drywall, or glass might reflect as much as 98% ([19]) of the sound energy that hits it back into the room - an echo. If we do the math for 2000 ideal 98% bounces, we get... 2×10^-18 times the original intensity. And that's your best case, because it doesn't account for the presence of soft sound-absorbing objects in the room, like chair cushions, drapes, or people, and it doesn't account for the miserable nuisance of multiple paths interfering with each other.
If I fire a pistol in an office with a closed door, and then open the door a minute later, the guys out in the hall don't hear a 'bang' when the door opens. Forget trying to get information about something like a conversation. TenOfAllTrades(talk) 13:48, 16 November 2017 (UTC)[reply]
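The 2×10^-18 figure is easy to reproduce (a minimal sketch of the arithmetic in the comment above; the 98% reflectivity and the bounce count come straight from it):

```python
# A sound bouncing off ~98%-reflective hard surfaces, roughly two
# thousand times over the course of a minute in a 10 m room.
reflectivity = 0.98
bounces = 2000

remaining = reflectivity ** bounces
print(f"fraction of original intensity left: {remaining:.1e}")
# -> ~2.8e-18, matching the estimate above
```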

How far from the center has a bound electron of the observable universe ever reached?

Out of the zillions of atoms, one must have had an electron reach the greatest distance (in picometers) from its atom's center since the Big Bang. This distance should be estimable, right? Maybe it's "more wrong" to think of electrons this way than as clouds, but you're only probabilistically estimating, not observing and changing actual electrons. Sagittarian Milky Way (talk) 08:57, 16 November 2017 (UTC)[reply]

Even before you get into quantum weirdness, your question is poorly defined. Say there are only 2 protons and 2 electrons in the universe. If the two electrons are both closer to proton A than to proton B, do you have two hydrogen atoms, or a positive hydrogen ion and a negative hydrogen ion? i.e. when an electron is far enough away from the atom, it ceases to be meaningful to define it as an atom (and that's before you get to issues regarding all electrons being interchangeable, and not having defined positions unless measured). MChesterMC (talk) 09:35, 16 November 2017 (UTC)[reply]
The greater issue then is that what you really have is just a set of data describing the location of charge centers in relation to each other and their relative movement. Concepts like "electron" and "proton" and "ion" and "atom" are human-created categorizations to make communicating about complex data like this meaningful to us. We define the difference between an atom and an ion based on our own (arbitrary but useful) distinctions. What makes something a thing is that we set the parameters for that thing. There is no universal definition for that thing outside of human discussion. --Jayron32 11:50, 16 November 2017 (UTC)[reply]
Also there is no such thing as the center of the Universe.--Shantavira|feed me 11:14, 16 November 2017 (UTC)[reply]
True, but I read it as being from the centre of the atom. My quantum mechanics isn't up to the task, but it should be possible to estimate a probable maximum distance just by multiplying the probability density function from the Schrödinger equation by the number of atoms being considered, perhaps just for hydrogen atoms. Whether such a distance could ever be measured in practice is questionable, but the mathematics based on a simple model should provide a very theoretical answer. Do we have a quantum mechanic able to offer an estimate? Dbfirs 12:22, 16 November 2017 (UTC)[reply]
You have to define your probability limits. The maximum distance is infinite if you don't define a limit, like 90% probability, or 99% probability, or 99.99% probability. If you set the probability to 100%, you get a literally infinitely large atom. --Jayron32 12:36, 16 November 2017 (UTC)[reply]
Yes, of course, that's what the probability density function gives, but if you find the distance at which the probability is ten to the power of minus eighty, then we have a theoretical figure for the maximum expected distance since there are about ten to the power of eighty hydrogen atoms in the observable universe. Statisticians might be able to refine this estimate, and I agree that it might bear little relevance to the real universe. Dbfirs 12:50, 16 November 2017 (UTC)[reply]
In the ground state, the distance you are asking about is ~100 times the Bohr radius of a hydrogen atom. However, in principle there exist an infinite number of potential excited states with progressively increasing orbital sizes. Very large orbitals involve energies very close to but slightly below the ionization energy of the atom. In that case the electron is only very weakly bound. Aside from the general problem that the universe is full of stuff that will interfere, there is no theoretical reason why one couldn't construct a very lightly bound state with an arbitrarily large size. Dragons flight (talk) 14:12, 16 November 2017 (UTC)[reply]
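Dbfirs's suggestion above can actually be carried out for the ground state (a sketch of my own using the textbook hydrogen ground-state wavefunction; it ignores excited states and everything else in a real universe). The probability of finding the electron beyond radius R is P(r > R) = e^(-x)(1 + x + x²/2) with x = 2R/a0; solving P = 10^-80 numerically:

```python
import math

def tail_probability(x: float) -> float:
    """P(r > R) for ground-state hydrogen, where x = 2R / a0."""
    return math.exp(-x) * (1 + x + x * x / 2)

# Bisect for the x at which the tail probability falls to 1e-80,
# i.e. one atom out of the ~1e80 hydrogen atoms in the observable universe.
lo, hi = 1.0, 1000.0
for _ in range(200):
    mid = (lo + hi) / 2
    if tail_probability(mid) > 1e-80:
        lo = mid
    else:
        hi = mid

print(f"x = 2R/a0 ~ {lo:.1f}, so R ~ {lo / 2:.0f} Bohr radii")
# -> R ~ 97 a0, consistent with the "~100 times the Bohr radius" above
```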
The important thing to remember here is that energies are quantized but distances are not. This is what the uncertainty principle is about. You can't handwave some "yeahbuts" around it; the position of an electron with a well-defined momentum is fundamentally unknowable, which means that the chance of finding that electron at any arbitrary point in the universe is not zero. In a single-hydrogen-atom universe, we can construct a hydrogen atom of any arbitrarily large radius by asymptotically approaching the ionization energy of the atom (this is akin to the escape velocity in a gravitationally bound system). As the energy level of an electron approaches asymptotically close to the ionization energy, the radius of that atom tends towards infinity. Well, sort of: the radius itself is not a well-defined thing, but any given radius definition (such as the Van der Waals radius) will tend towards arbitrarily large values as one approaches ridiculously high energy levels. There are an infinite number of energy levels below the ionization energy, so you can get arbitrarily close to it without passing it. That's what DF is on about. In a real universe with other atoms, a highly excited electron can absorb enough energy from a stray photon to excite it past the ionization energy, so in practical terms there are limits to the size of atoms, but those are imposed by factors external to the atom, not by internal forces within the atom. Purely as a system unto itself, there is no limit to the distance over which an atom can remain bound - only an energy limit. --Jayron32 15:15, 16 November 2017 (UTC)[reply]
Is there no way to answer "there's a 50% chance that no bound electron has ever been more than x from its atom's center", even in some way not really applicable to the real universe? For instance: if you were to measure the position of every electron once to good accuracy (clearly not possible, for many reasons - sentient life postdated atoms, for one), there should be some x such that, with 50% probability, one of the bound electrons is found x away. The most likely electron speed is y (which must have an answer, since superheavy elements show relativistic effects from their electrons moving that fast); it takes z time for an electron at the 50th-percentile distance, moving at that speed, to travel a reasonable distance from where it was (say 1 radian - yes, I know they don't really orbit); and there have been w of these time periods during the era when the fraction of non-ionized hydrogen atoms was similar to now (does that even cover most of the time since stars?). Those w periods could be taken as w more chances for an electron to get far, so you could then calculate the distance with w times more atoms, each measured once. So if (numbers for example, not accurate) there were 10^80 atoms and there have been 10^34 periods of length z so far, you'd find the 50%-probability maximum for 10^114 atoms each measured once (since good positional accuracy would screw with trying to measure the real universe's electrons' positions quadrillions of times per second). If cosmological weirdness has made the amount of (normal) matter within the observable boundary vary a lot, I wouldn't mind if that was ignored to make the math easier. Sagittarian Milky Way (talk) 19:27, 16 November 2017 (UTC)[reply]
There's some fun stuff that happens with low-probability statistics and indistinguishable particles.
The probability that an electron is measured at a distance of a thousand light-years radially from the proton it orbits is very low but non-zero.
But - if you set up a detector, and you register one "count", can you prove that the electron you observed is the one you were trying to measure?
No, you cannot. Your detector might have measured noise - in other words, it might have been lit up by a different electron whose interaction with your detector was more or less likely than the interaction with your super-distant electron. Actually, the probabilities and statistics don't matter at all, because we have a sample size of exactly one event. Isn't quantization fun?
In order to prove that it was the electron you were hoping to measure, you need to repeat the experiment and detect a coincidence. The probability that you will measure this is very low, but non-zero, squared - in other words, it won't be measurable within the lifetime of the universe.
Here's what the Stanford Encyclopedia of Philosophy has to say on this topic: Quantum Physics and the Identity of Indiscernibles.
Take it from an individual who has spent a lot of time counting single electrons - even with the most sophisticated measurement equipment, my electron looks exactly like your electron, and I can't prove which electron was the one that hit my detector.
Nimur (talk) 19:53, 16 November 2017 (UTC)[reply]

ethanol fermentation

A gram of sugar has 4 calories and a gram of alcohol has 7 calories. Would anyone be able to tell me what the approximate conversion rate of sugar to alcohol in ethanol fermentation is, in calories? E.g., if you put 400 calories of sugar into the reaction, how many calories of ethanol do you get out? I've tried reading the article but all the equations are way over my head, sorry. Thanks for your time. — Preceding unsigned comment added by Oionosi (talkcontribs) 10:21, 16 November 2017 (UTC)[reply]

C6H12O6 → 2 C2H5OH + 2 CO2
translates into
180 g sugar → 2 × 46 g alcohol + 2 × 44 g CO2,
(those weight values can be found at respective article of glucose, ethanol and CO2)
4 × 180 calories sugar (= 720) → 7 × 2 × 46 calories alcohol (= 644) + 76 calories lost
As you see, this was not over your head at all; you underestimate yourself. — Preceding unsigned comment added by 185.24.186.192 (talk) 11:36, 16 November 2017 (UTC)
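The same bookkeeping in runnable form (a sketch; the 4 and 7 kcal/g figures are the rounded values from the question):

```python
# Energy bookkeeping for C6H12O6 -> 2 C2H5OH + 2 CO2, using the
# rounded 4 kcal/g (sugar) and 7 kcal/g (ethanol) from the question.
glucose_g = 180.0        # molar mass of glucose, g/mol
ethanol_g = 2 * 46.0     # two ethanol molecules per glucose

calories_in = 4 * glucose_g      # 720 kcal per mole of glucose
calories_out = 7 * ethanol_g     # 644 kcal retained in the ethanol

efficiency = calories_out / calories_in
print(f"{calories_in:.0f} kcal in -> {calories_out:.0f} kcal out "
      f"({efficiency:.0%} of the food energy ends up in the alcohol)")
```

So for the 400-calorie example in the question, roughly 360 calories' worth of ethanol would come out, with the rest lost with the CO2 and as heat.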

Telephone lines

Can someone explain how telephone lines "work", i.e. how it is that they can carry multiple conversations simultaneously, rather than being busy for all the people on the exchange whenever any subscriber is on the phone? I looked at telephone and telephone line without seeing anything, and Google wasn't helpful either. I would imagine that the line would carry electrical pulses just one at a time (as on a party line), and multiple conversations would cancel each other out, but obviously our whole telephone system wouldn't work properly if that were the case. Nyttend (talk) 12:23, 16 November 2017 (UTC)[reply]

This has a pretty good explanation, getting down to how signals are encoded for travelling down the wire. --Jayron32 12:34, 16 November 2017 (UTC)[reply]
Hm, so nowadays it's electronic when going from exchange to exchange; not surprised, but I wasn't aware of that. And I didn't realise that there was a completely separate wire from every subscriber to the exchange, or I wouldn't have wondered. But before electronics, how was it possible for two subscribers from the same exchange to talk simultaneously with two subscribers from the other exchange, rather than one person taking it up and preventing other callers? Does Myrtle have multiple individual wires going to every nearby town's exchange, and a massive number of individual wires going to some nationwide center in order to allow Fibber to phone someone halfway across the country? Nyttend (talk) 12:49, 16 November 2017 (UTC)[reply]
Andy gives a good summary below. In the early days of telephone systems there really was a direct electrical connection that had to be made between each pairs of callers, and no one else could use the same wires at the same time, so each hub on the network had to have many separate wires available that could be variously connected to make one complete circuit for each call. However, we have long since abandoned that approach. Nowadays everything is generally digitized and travels over packet switched networks. Depending on where you live, and who provides the phone line, the digitization may happen inside your home or at some regional or central exchange. Dragons flight (talk) 14:25, 16 November 2017 (UTC)[reply]
  • There are several methods used historically.
  • Party line systems were connected simply in parallel. Only one could be used at a time.
  • Underground cables used a vast number of conductor pairs, one for each circuit. 100-pair cables were common - far more than could ever be carried overhead, where each visible cable was mostly a single pair (the first telegraph signals used a single copper conductor for each visible wire, so a telephone circuit might need two wires and a pair of china insulators). Cables for the 'local loop' from the exchange to the telephone used a single pair for each phone. Cables between exchanges were of course circuit-switched, so they only needed enough pairs for the calls in progress (not the number of phones), and many calls would be local, within the same exchange.
  • Analogue multiplexing[20] was used from the 1930s (rarely) to the 1980s. This was a broadband system that packed multiple separate signals down the same cable by multiplexing them. Frequency-division multiplexing was used, as in AM radio. Each telephone signal needed only a narrow bandwidth of 3 kHz: 300 Hz to 3.3 kHz. This meant that the largest trunk lines could carry signals of several MHz over a coaxial copper tube conductor, several tubes to a cable, and each could carry thousands of voice phone calls - or a single TV signal, in the early days of national TV in the 1950s-1960s. (A rough capacity sketch follows this list.)
  • In the 1980s, PCM (pulse code modulation) came into use, where analogue phone signals were digitised, then distributed as circuit-switched digital signals. Usually this was done in the telephone exchange, but commercial switchboards[21] began to operate digitally internally and so these could be connected directly to the digital network, through ISDN connections (64kbps or 2Mbps). There was some movement to placing concentrators in cabinets in villages, where the local phones were digitised and then connected to a local town's exchange via such a connection (all phones had a digital channel to the exchange). This allowed simpler cables (such as a single optical fibre) to the village, but was less complex than an exchange in the village.
  • In the 1990s, the Internet became more important and packet switching began to replace circuit switching for digital connections between exchanges and commercial sites. The domestic telephone was still an analogue connection, rarely ISDN, and anyone with home internet access used a modem over this.
  • By 2000 the analogue telephone was no longer as important as the digital traffic. Also IP protocols from the computer networking industry replaced the mix of digital protocols (ATM, Frame Relay) from the telephone industry. Analogue phones became something carried over an IP network, rather than digital traffic being carried by analogue modems. BT in the UK began to implement the 21CN, as a total reworking of their legacy network. Andy Dingley (talk) 13:27, 16 November 2017 (UTC)[reply]
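As promised in the multiplexing bullet above, a rough capacity sketch (my own illustration; the 4 kHz channel spacing reflects standard FDM practice of leaving guard bands around each 3 kHz voice signal, and the 12 MHz trunk bandwidth is a made-up round number):

```python
# Frequency-division multiplexing: each 300 Hz - 3.3 kHz voice signal
# is shifted onto its own carrier, with channels spaced every 4 kHz
# to leave guard bands between them.
channel_spacing_hz = 4_000
trunk_bandwidth_hz = 12_000_000   # hypothetical multi-MHz coaxial trunk

channels = trunk_bandwidth_hz // channel_spacing_hz
print(f"voice channels on one trunk: {channels}")  # -> 3000, i.e. "thousands"
```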
Thank you; this really helps. I don't much understand how radio works, but the idea of broadcasting at different frequencies I understand, so using a different frequency for each telephone conversation makes sense. Could you add some of this information to telephone line and/or telephony, since the first is small and the second mostly talks about digital? Nyttend backup (talk) 15:06, 16 November 2017 (UTC)[reply]
What he said about 100 circuits is the source for the old "all circuits are busy" message. ←Baseball Bugs What's up, Doc? carrots→ 15:30, 16 November 2017 (UTC)[reply]
Very rarely. It was exchange equipment that ran out first, not cables.
Telephone exchanges are obviously complex, but for a long time and over several generations of technology pre-1980 (before the introduction of stored-program exchanges, i.e. single central control computers) they consisted of line circuits, junctors and a switching matrix between them. Line circuits were provided for each local loop (i.e. each customer phone). Obviously the amount of equipment per customer was kept to an absolute minimum, and as much as possible was shared between several subscribers. Typically[22] a rack of subscribers' uniselectors was provided, each one handling 25 lines, and several were provided, so each group of 25 subscribers might share 5, or even 10 on a busy exchange. When a subscriber picked up their phone, the next free uniselector would switch to their line (and only then was the dialling tone turned on). So no more than 1 in 5 people could make a call at the same time - any more than that and you didn't get dial tone (and maybe got a busy tone or message instead).
Exchanges are connected together by cables, and the switching circuit for these is called a junctor (Junctor is a useless article). Again, these are expensive, so the equipment is shared and multiple sets are provided, but not enough to handle a call over every cable pair at once. Traffic planning and the Erlang were important topics for telephone network design. For a pair of exchanges (called "Royston" and "Vasey") where all of their traffic is between the two exchanges and they don't talk to people from outside, enough junctors might be provided to meet the full capacity of that one cable. Usually, though, enough equipment was provided to meet the "planned" capacity for a cable and the "planned" capacity for the exchange, and the equipment racks (the expensive and more flexible aspect) would be the ones to run out first. Only in exceptional cases would all the traffic land on a single cable such that the cable maxed out.
One aspect of more recent and packet switched systems, rather than circuit switched, is that they become more efficient at load sharing, thus "equipment busy" becomes rarer. Also we demand more, and the hardware gets cheaper, so it's easier to meet this demand. Andy Dingley (talk) 17:27, 16 November 2017 (UTC)[reply]
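Since the Erlang came up: the classic Erlang B formula gives the probability that an arriving call finds all circuits busy, given offered traffic E (in erlangs) and m circuits. A minimal sketch (my own illustration; the 100-circuit cable and the 80-erlang load are assumed numbers for the example):

```python
def erlang_b(traffic_erlangs: float, circuits: int) -> float:
    """Blocking probability for offered traffic E over m circuits."""
    # Standard numerically stable recurrence: B(E, 0) = 1,
    # B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1)).
    b = 1.0
    for m in range(1, circuits + 1):
        b = (traffic_erlangs * b) / (m + traffic_erlangs * b)
    return b

# e.g. a 100-pair inter-exchange cable offered 80 erlangs of traffic:
print(f"blocking probability: {erlang_b(80.0, 100):.4f}")
# -> roughly 0.4%, i.e. about 1 call attempt in 250 finds all circuits busy
```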
Thanks Andy, great stuff! I agree that junctor is a fairly poor article, and I'm interested to learn more about them. If anyone has any references on those please post them here, maybe we can use them to improve our article :) SemanticMantis (talk) 02:00, 17 November 2017 (UTC)[reply]
See also the phantom circuit, later reused in a patented method for supplying power, known as Power over Ethernet (PoE). --Hans Haase (有问题吗) 11:11, 17 November 2017 (UTC)[reply]
Many years ago I went to an "Open Day" at the local telephone exchange. There was a historical talk and we saw lots of metal rods with ratchets which moved up and down to make connections, making a clicking noise. I accepted an offer from the presenter to call my house - he didn't get through, but when I mentioned this later my mother said that the phone had been ringing intermittently all evening. 82.13.208.70 (talk) 16:36, 17 November 2017 (UTC)[reply]
See Strowger exchange Andy Dingley (talk) 17:23, 17 November 2017 (UTC)[reply]

What is a 'double domed appearance'?

"Double domed" cranium

Relating to an animal's (in this case, an elephant's) head. 109.64.135.116 (talk) 19:49, 16 November 2017 (UTC)[reply]

That's simply the way an Asian elephant looks. HenryFlower 19:54, 16 November 2017 (UTC)[reply]
A picture is worth several words →
2606:A000:4C0C:E200:C9A:4B44:2E28:1611 (talk) 05:48, 17 November 2017 (UTC)[reply]

Sun's helium flash

Judging by the article about the helium flash, our Sun will apparently exhibit this when it starts fusing helium. Do we know how bright this flash will be? Will it affect life on Earth? Is this the part where the Sun engulfs Mercury and Venus? Also, I suppose there will be a process of dimming once the hydrogen supply is exhausted while the Sun is collapsing. Is this near-instantaneous, or will it take minutes/days/millennia? 93.136.10.152 (talk) 20:40, 16 November 2017 (UTC)[reply]

You can read Sun#After_core_hydrogen_exhaustion. Ruslik_Zero 20:50, 16 November 2017 (UTC)[reply]
Thanks, didn't think to look there. So the compression of the core is more or less gradual as Sun reaches the end of the red giant branch, until the moment of the helium flash? 93.136.10.152 (talk) 21:15, 16 November 2017 (UTC)[reply]
There's also a carbon flash, with three heliums kung-powing into carbon and so on, in stars with enough mass (more than the Sun's). If the star is massive enough it can reach a million times the Sun's power with only tens of solar masses, and build up central iron ash until it loses structural integrity (since stars can't run on heavy elements). The core collapses until it reaches some 200 billion degrees Fahrenheit, bounces, and the star explodes with the light of billions of Suns (and up to about a trillion Suns' worth of neutrino radiation). When the center reaches the density of a supertanker squeezed into a pinhead, it becomes extremely resistant to further collapse (but not invincible), since only so many neutrons can fit in a given space - unless gravity can force a black hole (or possibly crush them into smaller particles). Sagittarian Milky Way (talk) 00:01, 17 November 2017 (UTC)[reply]
Will it affect life on earth? No, because by that time all life on earth will have died off. 2601:646:8E01:7E0B:5917:3E80:D859:DF69 (talk) 06:43, 17 November 2017 (UTC)[reply]

November 17

An odd ball

Odd ball

What is it?

It's not a wasp hive. It seems pretty solid when you break it open. It is heavy and hard. The one pictured is around 25cm high. They are all over the place in Hainan. Ants seem to like crawling around that one.

What is it?

Anna Frodesiak (talk) 06:09, 17 November 2017 (UTC)[reply]

Could it be a bird nest of some sort? Swallows build nests of mud, though usually on cliff faces or under building eaves. --Jayron32 11:57, 17 November 2017 (UTC)[reply]
The acrobat ant builds nests like this (looks similar to me). Alansplodge (talk) 12:24, 17 November 2017 (UTC)[reply]
Looks like an ant colony to me too. I was a bit surprised to not see any ants in the photo, but OP says ants are all around, so that's also pretty good evidence. Tropical termites can also build roughly similar nests, but I think they usually also build covered galleries. Further destructive sampling would probably clear this up rather quickly, at least to determine ant or non-ant. SemanticMantis (talk) 15:06, 17 November 2017 (UTC)[reply]
Just a note - it's not wise to mess with these things. Sometimes insects swarm out and sting you to death - or if you're rock climbing you fall to your death. 82.13.208.70 (talk) 15:54, 17 November 2017 (UTC)[reply]

Evolution of anticoagulants

Why did some blood-sucking animals evolve anticoagulants if they suck in a manner where the blood doesn't have enough time to coagulate? E.g. mosquitoes and leeches pierce the skin and suck directly, so the blood flows steadily into them, similar to an injection needle, which prevents coagulation. My guess is that anticoagulants were present before they evolved the necessary organs, allowing them to feed on spilled blood. Brandmeistertalk 16:30, 17 November 2017 (UTC)[reply]

I'm not sure I'm following your question. Does it help to know that ordinary saliva contains enzymes like amylase and lipase that start the pre-digestion of food? In other words, ordinary saliva is already pretty decent at breaking things up. Now, something like a vampire bat has even more advanced enzymes working for it (at least, according to our citation), but it is clearly a case where evolution took something that was already good at a certain activity and then enhanced it (to some extent, a form of exaptation). Matt Deres (talk) 18:26, 17 November 2017 (UTC)[reply]
As I understand it, digestive enzymes and anticoagulants are different things here. Digestive enzymes may be needed to break down blood, but anticoagulants look redundant if the sucked blood goes straight to digestion anyway. Per clotting time, blood starts to coagulate in about 8 minutes, far more time than a mosquito needs to feed and fly away. Brandmeistertalk 18:45, 17 November 2017 (UTC)[reply]

A question about g-force

How much g-force will a pilot flying an airplane at 510 knots endure? 37.142.17.66 (talk) 18:09, 17 November 2017 (UTC)[reply]

If they fly straight and level, then zero.
They only feel g-forces if they manoeuvre, usually by pulling upwards, vertically relative to the airframe (the wings can generate far more lift than any yaw or roll forces can) - so loops or tight turns. Andy Dingley (talk) 18:37, 17 November 2017 (UTC)[reply]
I'm particularly talking about United Airlines Flight 175. 37.142.17.66 (talk) 18:49, 17 November 2017 (UTC)[reply]
I don't recall any particularly hard manoeuvring on that day. Also these were airliners, which just aren't built to pull many g. Andy Dingley (talk) 20:06, 17 November 2017 (UTC)[reply]
I am confused. Do you mean how much G-force was generated by the abrupt stop (crash)? --Lgriot (talk) 20:08, 17 November 2017 (UTC)[reply]
No, before the crash. According to the National Transportation Safety Board, the pilot nosedived more than 15,000 feet in two and a half minutes. So how much g-force was on the pilot, if he really flew at 510 knots? 37.142.17.66 (talk) 20:44, 17 November 2017 (UTC)[reply]

[e/c]

  • Also, it depends on along which axis (Gx, Gy or Gz) the g-force is applied before G-LOC (unconsciousness due to cerebral hypoxia) occurs. 2606:A000:4C0C:E200:C9A:4B44:2E28:1611 (talk)
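To attach rough numbers to Andy's point that speed alone produces no g-load (a sketch of my own; the manoeuvre radii are made-up assumptions, not NTSB figures): in a pull-up or tight turn of radius r at speed v, the pilot feels roughly 1 + v²/(g·r):

```python
# Load factor in a vertical pull-up: n ~ 1 + v^2 / (g * r).
# Straight and level flight (r -> infinity) gives n = 1, i.e. no
# extra g at all, no matter how fast the aircraft is going.
G = 9.81          # m/s^2
KNOT = 0.514444   # m/s per knot

v = 510 * KNOT    # ~262 m/s
for radius_m in (2_000, 5_000, 10_000):   # hypothetical manoeuvre radii
    n = 1 + v**2 / (G * radius_m)
    print(f"r = {radius_m:>6} m  ->  about {n:.1f} g")
# The tighter the manoeuvre at a given speed, the higher the g-load;
# the 510-knot figure by itself tells you nothing about the g-force.
```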