
Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 93.136.80.194 (talk) at 19:52, 15 November 2017 (→‎Positronium diameter). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the science section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

November 8

Earth-temperature brown dwarf

How massive would a brown dwarf have to be to have Earth surface temperature? Also, what would its radius be then? Would this object be dense enough for a spacecraft to land or float on it (maybe with the aid of a balloon)? I'm thinking of something like WISE 0855-0714 but warmer. 93.136.44.140 (talk) 02:04, 8 November 2017 (UTC)[reply]

Temperature is a bit independent of size, and the older it is, the cooler it will be. Also at what point do you measure the temperature? If it is all gas the temperature may appear to radiate in the infrared at 300 K, but deep down it would be hotter. See Y-type star and Sub-brown dwarf for what we have. Graeme Bartlett (talk) 03:56, 8 November 2017 (UTC)[reply]
Well the core creates some heat (esp. if you need to fuse deuterium to get >300 K) so the temperature should over time level off asymptotically towards some limit, so let's say the object is free-floating, pretty old and at/near that temperature, and it's ~300 K. I suppose I'd want that temperature to be at the cloud layer at 1 bar pressure, but I'll settle for blackbody temperature if we don't know enough about sub-brown dwarfs to estimate in that detail. Can we infer mass & radius from this or will they depend heavily on metallicity and such? I'm wondering about this because I was intrigued by the fact that all brown dwarfs down to planets of Jupiter's mass have roughly the same radius due to matter degeneracy. 93.136.44.140 (talk) 04:38, 8 November 2017 (UTC)[reply]
Easy: gas giants don't have surfaces! μηδείς (talk) 16:48, 8 November 2017 (UTC)[reply]
Let's call the surface the place where atmospheric pressure is 1 bar, as in Jupiter#Atmosphere. 93.139.45.186 (talk) 17:35, 8 November 2017 (UTC) (original poster)[reply]
As the first reply says they get cooler with time as they don't do any fusion, or at least only the larger ones do and that stops after a while. The heat they generate is from gravity as they formed and it dissipates with time. There's not a great deal of difference between them and a gas giant like Jupiter except mass - they don't do anything too much different. They are not stars. To float you need something that is less dense than what it is floating in, the atmosphere can contain lots more chemicals than the sun but is still mostly hydrogen and helium so unless you can have a strong ball containing a lower pressure than outside like a submarine but much much lighter that would be pretty much impossible except with a large hydrogen balloon. Dmcq (talk) 17:59, 8 November 2017 (UTC)[reply]
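Dmcq's point about floating can be made quantitative with an ideal-gas sketch (the temperatures and composition below are rough, illustrative values for a Jupiter-like 1-bar level, not measurements): the lift per cubic metre of balloon is just the density difference between the ambient hydrogen/helium mix and the heated hydrogen inside.

```python
# Ideal-gas sketch of balloon lift in a hydrogen/helium atmosphere.
# Density rho = P * mu * m_u / (k_B * T); lift per cubic metre is the
# density difference between the ambient gas and the heated gas inside.
K_B = 1.380649e-23    # Boltzmann constant, J/K
M_U = 1.66053907e-27  # atomic mass unit, kg

def density(pressure_pa, mean_mol_mass_u, temp_k):
    """Ideal-gas density in kg/m^3."""
    return pressure_pa * mean_mol_mass_u * M_U / (K_B * temp_k)

P = 1.0e5                                # 1 bar
rho_ambient = density(P, 2.22, 165.0)    # H2/He mix, Jupiter-like 1-bar level
rho_hot_h2  = density(P, 2.016, 400.0)   # pure hydrogen heated to 400 K
lift = rho_ambient - rho_hot_h2          # buoyancy per m^3 of balloon, kg
print(f"ambient {rho_ambient:.3f} kg/m^3, hot H2 {rho_hot_h2:.3f} kg/m^3, "
      f"lift {lift:.3f} kg/m^3")
```

The lift works out to roughly 0.1 kg per cubic metre, an order of magnitude less than a hydrogen balloon gets in Earth's nitrogen/oxygen air, which is why Dmcq's "large hydrogen balloon" caveat matters.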
Ok, I see I had some wrong assumptions. How about this: suppose there's a planet 5 billion years old and of Jupiter's composition, which evolved like Jupiter up to, say, 1 billion years after its "birth", when it was ejected by deus ex machina from its star system and ended up floating in interstellar space. Aliens descend into its atmosphere and measure that at the altitude where pressure is 1 bar, the temperature is 300 K. Is this enough data to estimate the planet's mass and radius, or do brown dwarfs vary too much in characteristics, or do we simply not know enough about this yet? 93.139.45.186 (talk) 19:51, 8 November 2017 (UTC)[reply]
Aliens in orbit around this body could estimate the mass directly from the gravitational force on their ship, and the radius by direct observation of occultations. The temperature and pressure would be irrelevant to such observations. μηδείς (talk) 22:43, 8 November 2017 (UTC)[reply]
Yeah sure, but can we? Is there a model that describes the relationship between temperature, radius, etc.? I'm interested in how to calculate that stuff. 93.139.45.186 (talk) 23:00, 8 November 2017 (UTC)[reply]
The speed of a body in orbit about a primary is independent of the orbiting body's mass, except that when we can calculate a barycenter of a system we can calculate the relative masses. If your brown dwarf has satellites of known distance and orbital velocity, then its mass could be determined. But if all we have are distant images, then we are making guesses based on albedo and presumed composition. But a comparison of Saturn, large, cold and less dense than water, with Jupiter, not too much larger but hotter and denser, shows that there are many presumptions made when we don't have the full body of information needed, and are going on guesses based on brightness, size, and albedo. You can check the history of observations of Ceres and Pluto to see how difficult accurate measurements are without the needed observations. Of course your aliens would have those observations once they were in orbit, as they would know their own mass and orbital velocity and the actual albedo and specifically the radius of your brown dwarf. μηδείς (talk) 02:32, 9 November 2017 (UTC)[reply]
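The satellite method described above reduces to Kepler's third law: for a circular orbit, the primary's mass follows from the orbit radius and speed alone. A minimal sketch (the Io-like numbers are approximate, for illustration only):

```python
# Estimate a primary's mass from a satellite's circular orbit: M = v^2 * r / G.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def primary_mass(orbit_radius_m, orbital_speed_ms):
    """Mass of the central body implied by a circular orbit (kg)."""
    return orbital_speed_ms**2 * orbit_radius_m / G

# Illustrative check with Io's orbit around Jupiter (approximate values):
r_io = 4.217e8   # orbit radius, m
v_io = 1.734e4   # orbital speed, m/s
m_jupiter = primary_mass(r_io, v_io)
print(f"Implied Jupiter mass: {m_jupiter:.3e} kg")  # ~1.9e27 kg
```

Note the temperature and pressure never enter, which is Medeis's point: orbital dynamics pins down the mass regardless of what the atmosphere is doing.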
Ah yes, that makes perfect sense. I completely forgot how different Saturn is from Jupiter. I was hoping that one could make some nice relation like M-sigma or Stefan-Boltzmann law, given that there is a real and not very high limit on planet radius at low temperatures, but yes there would be too much variety in initial conditions to estimate anything. Thanks for the help folks. 93.139.45.186 (talk) 05:26, 9 November 2017 (UTC)[reply]
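The near-constant radius the original poster mentions can be illustrated with two toy scalings (this is a back-of-envelope sketch anchored at Jupiter, not a real interior model): an incompressible body has R ∝ M^(1/3), while a fully electron-degenerate one has R ∝ M^(-1/3); real objects between one and tens of Jupiter masses sit between the two regimes, so radius barely moves.

```python
# Toy mass-radius scalings anchored at Jupiter's radius, illustrating why
# radius stays nearly flat from Jupiter-mass planets up through brown dwarfs:
# constant density gives R ~ M^(1/3); full degeneracy gives R ~ M^(-1/3);
# real objects fall in between the two curves.
R_JUP = 6.9911e7  # Jupiter's radius, m

def radius_constant_density(m_jup):
    """Radius (m) if the body kept Jupiter's density at mass m_jup Jupiters."""
    return R_JUP * m_jup ** (1.0 / 3.0)

def radius_degenerate(m_jup):
    """Radius (m) for a fully degenerate body of mass m_jup Jupiters."""
    return R_JUP * m_jup ** (-1.0 / 3.0)

for m in (1, 10, 70):
    print(f"{m:2d} M_jup: {radius_constant_density(m)/R_JUP:.2f} R_jup "
          f"(incompressible) vs {radius_degenerate(m)/R_JUP:.2f} R_jup (degenerate)")
```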
To be clear, let's remember that the atmosphere of Saturn and the atmosphere of Jupiter each contain layers of water vapor clouds. Jupiter's is at a depth of only 3 atm (about 300 kPa) -- see [1]. Saturn's is about ten times denser but still plausibly human-breathable, if supplemented with a trace of oxygen and if the humans are modified to survive an industrial-grade stink bomb and strongly alkaline pH. On Saturn the main issue is keeping afloat, though the winds suggest the presence of extractable energy that can be used by sufficiently clever gliding organisms. With Jupiter the main issue is the 3g gravity, rather than the normal Earthlike gravity on Saturn. The problem for colonizing a brown dwarf is that I would assume the heat generated from within scales at least linearly with the mass, probably more than that, and of course so does the gravity. On the other hand it may not have a sun, which makes it ever so slightly cooler. It seems like a tall order, though this talk is no substitute for a measurement (after all, the internal circulation of heat is also a factor, and I won't even guess at that!) Wnt (talk) 15:04, 11 November 2017 (UTC)[reply]
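How far "down" the 3-atm water-cloud level sits can be estimated from the isothermal scale height H = k_B·T / (μ·m_u·g): pressure grows by a factor of e for every H of depth, so the 3-bar level lies roughly H·ln(3) below the 1-bar level. A sketch with rough 1-bar temperatures and gravities (illustrative values, not measurements):

```python
import math

# Isothermal scale height H = k_B * T / (mu * m_u * g). Pressure grows by e
# per scale height of depth, so the 3-bar water-cloud level sits about
# H * ln(3) below the 1-bar reference level.
K_B = 1.380649e-23    # Boltzmann constant, J/K
M_U = 1.66053907e-27  # atomic mass unit, kg

def scale_height_km(temp_k, mean_mol_mass_u, g):
    """Atmospheric scale height in km for an isothermal ideal-gas layer."""
    return K_B * temp_k / (mean_mol_mass_u * M_U * g) / 1000.0

h_jup = scale_height_km(165.0, 2.22, 24.8)  # Jupiter: ~25 km
h_sat = scale_height_km(134.0, 2.22, 10.4)  # Saturn:  ~48 km
print(f"Jupiter: H ~ {h_jup:.0f} km, 3-bar level ~ {h_jup*math.log(3):.0f} km down")
print(f"Saturn:  H ~ {h_sat:.0f} km, 3-bar level ~ {h_sat*math.log(3):.0f} km down")
```

Saturn's weaker gravity roughly doubles the scale height, which is part of why its cloud decks are more spread out in altitude than Jupiter's.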
Your point being that if humans weren't humans, but levitating hydrothermal vent bacteria, they could live in levitating hydrothermal vent conditions? μηδείς (talk) 21:46, 11 November 2017 (UTC)[reply]
Well, flamingoes survive severely alkaline conditions somehow, so I don't think the evolution has to go quite that far afield. Hopefully the levitation part can be provided by organisms/ecosystems native to Saturn already... Wnt (talk) 02:33, 12 November 2017 (UTC)[reply]
Of course, but these would be levitating bacteria, not humans who were levitating bacteria. Raymond Luxury Yacht (Throat-warbler Mangrove) 21:18, 12 November 2017 (UTC)[reply]

Name of fallacy: mixing up unrelated variables?

What do you call it when you lump together not-quite-analogous variables (for example, unemployed astrologers and unemployed astronomers) and calculate the average (or whatever)?--B8-tome (talk) 13:46, 8 November 2017 (UTC)[reply]

To mix apples and oranges? By the way you might like James–Stein estimator#Interpretation where they mix up the speed of light, tea consumption in Taiwan, and hog weight in Montana all together ;-) Dmcq (talk) 14:39, 8 November 2017 (UTC)[reply]
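The James–Stein phenomenon Dmcq alludes to can be shown in a few lines of Monte Carlo (the five "true means" below are arbitrary stand-ins for unrelated quantities): when estimating three or more unrelated means from one noisy observation each, shrinking every estimate toward zero beats the raw observations in total squared error, even though the quantities have nothing to do with each other.

```python
import numpy as np

# Monte Carlo sketch of the James-Stein paradox: for p >= 3 unrelated means
# observed once each with unit-variance Gaussian noise, the shrunken
# estimator has lower total mean squared error than the raw observations.
rng = np.random.default_rng(0)
theta = np.array([3.0, -1.5, 7.0, 0.5, -4.0])  # five unrelated "true" means
p, sigma2, trials = len(theta), 1.0, 20000

x = theta + rng.standard_normal((trials, p))          # one observation each
shrink = 1.0 - (p - 2) * sigma2 / (x**2).sum(axis=1)  # James-Stein factor
js = shrink[:, None] * x                              # shrunken estimates

mse_raw = ((x - theta)**2).sum(axis=1).mean()
mse_js = ((js - theta)**2).sum(axis=1).mean()
print(f"raw MSE {mse_raw:.3f}  vs  James-Stein MSE {mse_js:.3f}")
```

The improvement is in the *combined* error only; any individual mean can be estimated worse, which is exactly why mixing the speed of light with hog weights feels so wrong yet "works".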
The corresponding article, Apples and oranges, has some further information under the 'See also' section. --Hofhof (talk) 20:36, 8 November 2017 (UTC)[reply]
Except there's no prohibition against combining apples and oranges or categorizing them together, only comparing them as if they were interchangeable. Comparing unemployed astrologers to unemployed astronomers could be a useful metric to figure out how gullible a society is. However, adding them together because you think they're similar fields - as in the OP's scenario - would be an error. I have a feeling there is a term from statistics that describes this situation, but I can't seem to put my finger on it. Broadly, it's an error in classification. Matt Deres (talk) 13:46, 9 November 2017 (UTC)[reply]
It isn't a fallacy, but in statistics this sort of situation is known as heteroscedasticity. Looie496 (talk) 01:12, 9 November 2017 (UTC)[reply]
Yes, and along the lines that Matt points out, it's not even always problematic. Certainly this is not a formal fallacy. Formal logical fallacies are about using logic faultily, not about making bad decisions. If anything, this may (sometimes) be classed as an informal fallacy, because the problem is with the assumptions, not the logic of any statement or argument. SemanticMantis (talk) 16:53, 9 November 2017 (UTC)[reply]

Industrial production of methane

What makes production of methane in an industrial scale so difficult? Can't you just use any biomass mixed with the proper bacteria to obtain it? Wouldn't the CO2 balance of burning this methane be neutral? --Hofhof (talk) 20:38, 8 November 2017 (UTC)[reply]

But methane is produced on an industrial scale in landfills by bacteria from any biomass. So, it is not very difficult. See landfill gas. Ruslik_Zero 20:45, 8 November 2017 (UTC)[reply]
That's not industrial production, that's just production. It only counts as an industry if you catch it and use it. Our article on that is at Landfill_gas_utilization. A perhaps more relevant article on general biomass production of methane is at Biogas. SemanticMantis (talk) 22:07, 8 November 2017 (UTC)[reply]
The question of carbon neutrality depends on the time scale in question. If, for example, all the biomass was produced by plants and animals that were grown in a given year, then yes, the burning of methane produced by them would only release CO2 into the atmosphere that had already been in the atmosphere at the beginning of the year. However, you also have to account for all the energy inputs to the hypothetical biogas plant (does it use any electricity, do people drive there to work, etc.). Not to mention the carbon footprint of the biomass in question (e.g. corn has a massive carbon footprint, due to all the electricity (and thus usually fossil fuels) needed to make the fertilizer for it via the Haber process). All these considerations are part of why Life-cycle_assessment of energy and food production is very difficult to do properly. SemanticMantis (talk) 22:07, 8 November 2017 (UTC)[reply]

November 9

Forensic analysis

Are there forensic techniques which can determine, by analyzing microscopic scratches, markings, etc., whether a large rock had made an impact against a painted metal surface, in the absence of damage visible to the naked eye (provided, of course, that the surface had not been repainted afterward)? In other words, suppose someone threw a large rock against a car, but it bounced off without causing any visible damage -- is it possible to prove that this happened? 2601:646:8E01:7E0B:0:0:0:EA04 (talk) 09:25, 9 November 2017 (UTC)[reply]

I doubt much can be told if nothing is straightforwardly visible. The paint on modern cars is a flexible cross-linked polymer, not an enamel, so there won't be small cracks. It is amazing what forensics can do, but even if they found something, I can't see how they could tell when it happened; cars are always being hit by small stones like this as they are going along. Dmcq (talk) 10:35, 9 November 2017 (UTC)[reply]
Without answering the specific question, which is probably outside of the realm of this desk, I can direct the reader to concepts like trace evidence analysis which is a HUGE sub-discipline within forensic science. Our article is short, but the answer to your question is part of trace evidence analysis, and if you want to know what that field can do, that is a phrase to punch into google to research more. --Jayron32 12:18, 9 November 2017 (UTC)[reply]
Thanks for the bad news. 2601:646:8E01:7E0B:0:0:0:EA04 (talk) 02:58, 10 November 2017 (UTC)[reply]
I don't see why this is a slam dunk. My parents' neighbours' son got drunk one night and hit 11 parked cars (including my father's) over some distance before returning home. The insurer was able to tell that the damage to my dad's car was done by the neighbours' car, by analyzing microscopic bits of paint on each vehicle. They refused to pay, and the driver had to be charged with "theft" of the vehicle by his parents, or they would have been liable for the damage, so he ended up being arrested based on that not-visible to the eye forensic evidence. This was in the 80's, so I suspect the forensic science is better now. μηδείς (talk) 22:57, 10 November 2017 (UTC)[reply]
As a car owner, I'm struggling to think how a large rock could be thrown against a car's painted exterior and not cause any damage visible to the naked eye. If there were no such damage, how would one know that the event had happened at all, unless it had been witnessed? I presume also that to prove such an impact, one would have to identify traces on both the car and the rock (which ought to be possible if any damage did occur and the rock is available for analysis). {The poster formerly known as 87.81.230.195} 90.200.138.27 (talk) 00:32, 11 November 2017 (UTC)[reply]
The relevant forensic idea IP and Medeis are discussing is Locard's exchange principle. DMacks (talk) 21:46, 11 November 2017 (UTC)[reply]
Sounds like the story in The Third Policeman that cyclists are part bicycle and bicycles are part human. When cyclists get to be more than half bicycle they spend most of their time leaning on walls or propped up by one foot on a kerb. ;-) Dmcq (talk) 22:13, 11 November 2017 (UTC)[reply]

iPTF14hls, cause of survival of explosions?

Could iPTF14hls contain a huge amount of weakly interacting dark matter? Because the dark matter is weakly interacting, it wouldn't get expelled by explosions, and its strong gravity could suck more fusionable matter in to fuel a later explosion. Thanks. 144.35.114.29 (talk) 19:47, 9 November 2017 (UTC)[reply]

Stellar objects do not contain dark matter as there is no plausible mechanism that can explain how dark matter is captured into a stellar object in the first place - it is weakly interacting after all. As to iPTF14hls it is likely not a supernova at all. It just looks similar. Ruslik_Zero 20:13, 9 November 2017 (UTC)[reply]
First of all, I'm sure you meant to say don't contain rather than do contain. Second, I didn't claim it was a supernova, and whether it is or isn't a supernova might not be relevant to my question. Third, weakly interacting dark matter still interacts via gravitation, and a dense plume of dark matter could plausibly have been the birth site of the star. There are other plausible scenarios. 144.35.45.45 (talk) 00:25, 10 November 2017 (UTC)[reply]
Yes, I meant "don't". Ruslik_Zero 19:28, 10 November 2017 (UTC)[reply]

Site engineers

If site engineers are entry level roles, why are there trainee and assistant site engineer jobs and why do nearly all site engineer jobs require previous experience of it? 94.10.251.123 (talk) 23:56, 9 November 2017 (UTC)[reply]

The problem seems to be with the words following "if". Who claims this? Our article Construction engineering mentions some entry-level jobs suitable for a graduate in engineering. Some of these will be described as "trainee" or "assistant" depending on the complexity of the operation. Experience is highly valued in many professions. Dbfirs 07:43, 10 November 2017 (UTC)[reply]
I thought a site engineer was an entry-level role. Is it not? 94.10.251.123 (talk) 08:35, 10 November 2017 (UTC)[reply]
Where did you see that? ←Baseball Bugs What's up, Doc? carrots→ 08:51, 10 November 2017 (UTC)[reply]
Terminology in this field is not standardized, and much depends on the scale of the project. Some companies have this as an entry-level job; others make it an intermediate level of management and administration. Most project engineers have construction experience and have a role equivalent to an assistant project superintendent, at least so they can provide supervisory coverage when the superintendent or assistant is absent. Acroterion (talk) 17:53, 10 November 2017 (UTC)[reply]

November 10

Infrastructure vs building engineering

Why is it that engineers and construction specialists working on buildings tend to stick to buildings, and those working on infrastructure tend to stick to infrastructure? I see very few who cross between the 2. 00:23, 10 November 2017 (UTC) — Preceding unsigned comment added by 94.10.251.123 (talk)

Different organisations do different things. A person working for a municipality or city local government may design roads and bridges, but would not be involved with buildings. Also someone working for a large building contractor would just get buildings to design. An independent engineering consultant would build up a reputation in one industry, and then get work from that industry. Graeme Bartlett (talk) 05:56, 10 November 2017 (UTC)[reply]
The outcomes and detailing are very different. Buildings are inhabited spaces with a particular level of finish, space conditioning and unique functional requirements suited for continuous use and habitation by people. Infrastructure (bridges, tunnels, dams, water systems, waste treatment, highways, power systems) is generally not inhabited, at least not to the same degree, and involves a different kind of detailing suited to function, with habitation a secondary concern, or no concern at all. Habitable environments are subject to building codes, life safety codes, energy conservation codes and the like and are closely regulated by building departments. Infrastructure is mostly governed by engineering industry standards for function and durability. While there is certainly overlap, the two involve differing skillsets. As an architect, I've worked with both, and infrastructure (or "heavy construction") requires a different kind of information presentation and detailing than general construction. If you're not set up for it, it's hard to do efficiently or well. The same applies to contractors, who have to organize on different scales with different trades and equipment.
In the building area, designers and builders tend to specialize in general construction (large commercial or institutional structures) or light construction (small residential or retail spaces) for the same reasons. Light construction is subject to a lesser degree of regulation and scrutiny and uses different or less complex construction techniques. Acroterion (talk) 13:04, 10 November 2017 (UTC)[reply]

Civil engineering --Hans Haase (有问题吗) 19:52, 11 November 2017 (UTC)[reply]

"Beam me up, Scotty!"

OK, I think everyone knows what the fundamental problem with teleportation is -- because of the Heisenberg uncertainty principle, Scotty would come out the other side looking like scrambled eggs (in the literal sense!) However, suppose that instead of actually teleporting stuff, the machine worked by somehow temporarily creating a wormhole which could then be used as a shortcut through space-time -- would that work more-or-less like teleportation? 2601:646:8E01:7E0B:0:0:0:EA04 (talk) 03:04, 10 November 2017 (UTC)[reply]

Yep! And you could use it to build a galaxy-wide empire of fear and oppression until somebody else with starship technology destroyed your home-world from orbit. You'd be better off sticking with warp drives.
Nimur (talk) 05:24, 10 November 2017 (UTC)[reply]
Teleport has two common meanings, and probably a hell of a lot more uncommon ones. In one meaning, it means instant movement from one location to another. In most magical instances, that is how teleport works. Objects instantly vanish and reappear elsewhere. In another, it means to move something from one place to another without physically moving it - the "instant" is lost. Star Trek uses the second meaning. An object is turned into energy, transmitted (at the speed of light, not instantly) to another location, and assembled again. Depending on the meaning of teleport you want to use, the answer to your question could be yes or no. You are physically moving an object from one place to another - just taking a shortcut. That isn't teleporting in the Star Trek sense. But, you have the ability to instantly move an object from one place to another. That is teleporting in the other (magical) popular sense. 209.149.113.5 (talk) 15:25, 10 November 2017 (UTC)[reply]
Wormhole models involving black holes have to deal with spaghettification and infinite values at singularities. The standard "follow the money" argument applies otherwise. If telepathy or precognition were possible, psychics would be rich, not hanging their shingles in the cheaper part of town. If teleportation works, "where are they?". μηδείς (talk) 22:49, 10 November 2017 (UTC)[reply]
That's not actually a disproof, because the psychic would probably remember losing his last lone dollar in a last desperate trip to the casino, and then of course, that's what would happen. Wnt (talk) 15:08, 11 November 2017 (UTC)[reply]
I am quite aware that you can't prove a negative, but to quote Christopher Hitchens, "What can be asserted without evidence can be dismissed without evidence." μηδείς (talk) 21:38, 11 November 2017 (UTC)[reply]
We're nowhere near proving or disproving this kind of future tech, including direct teleportation without a wormhole. Look into some of W. G. Unruh's recent work - especially there are some papers by Qingdi Wang that seem absolutely mind-blowing if only I could understand a bit of them. [2] The nature of spacetime is much more ... fluid ... than we typically conceptualize, and right now it seems to take as much theorizing to explain why things don't teleport as why they could. Wnt (talk) 15:14, 11 November 2017 (UTC)[reply]
This is how they transfer people in The Culture series of novels. LongHairedFop (talk) 17:42, 11 November 2017 (UTC)[reply]

Septic shock

Are there any known cases of patients surviving and recovering from septic shock with no treatment? 193.240.153.130 (talk) 12:41, 10 November 2017 (UTC)[reply]

The article Septic shock reports that the mortality rate from septic shock is approximately 25–50%. It also states that sepsis has a worldwide incidence of more than 20 million cases a year, with mortality due to septic shock reaching up to 50 percent even in industrialized countries. There has been an increase in the rate of septic shock deaths in recent decades. Blooteuth (talk) 15:14, 10 November 2017 (UTC)[reply]
To be complete... There has been an increase in attributing deaths to septic shock in recent decades. Because septic shock often leads to stroke, heart failure, or respiratory failure, it is reasonable to attribute death to the result of septic shock rather than septic shock itself. 209.149.113.5 (talk) 15:28, 10 November 2017 (UTC)[reply]
Is the given mortality rate for treated or untreated septic shock, or is that difference not distinguished in those statistics? My understanding was that septic shock was pretty much always fatal if untreated, but that's a layman's vague memory. μηδείς (talk) 22:44, 10 November 2017 (UTC)[reply]
I cannot imagine a scenario where a case of septic shock would be known to medical authorities and not be treated. If a case were to occur and not be brought to medical attention, I have difficulty imagining how it would be recorded as a survival. Richard Avery (talk) 10:56, 11 November 2017 (UTC)[reply]
@Richard Avery: Seek and ye shall find: [3][4][5] I'm not quite sure it's the accepted standard of care, but there are a lot more stories like this. Wnt (talk) 15:23, 11 November 2017 (UTC)[reply]
  • I find no direct address to the issue of total non-treatment, but Septic_shock#Epidemiology says that survivability without treatment goes down 4% per hour, and that septic shock is otherwise normally fatal within seven days or less. I was looking for a source for treatment before the advent of antibiotics, but it seems the disease was poorly understood until the recent discovery that it is more a problem of immune response than bacterial toxins. μηδείς (talk) 21:33, 11 November 2017 (UTC)[reply]
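Taken at face value, the "4% per hour" figure compounds quickly. A sketch, reading it as a multiplicative decline in survivability per untreated hour (one plausible interpretation of the statistic, not the source's stated model):

```python
import math

# If survivability falls 4% per hour without treatment (multiplicative
# reading), the odds of survival halve roughly every 17 hours.
rate = 0.96  # fraction of survivability retained each hour

for hours in (6, 12, 24, 48):
    print(f"{hours:2d} h: {rate**hours:.1%} of initial survivability")

halving = math.log(0.5) / math.log(rate)
print(f"survivability halves every {halving:.1f} hours")
```

On this reading less than 15% of initial survivability remains after two days, consistent with the article's statement that untreated septic shock is normally fatal within seven days or less.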

Physical Photo Stenography

This animation shows how numbers are hidden in the Ishihara test of Color blindness within a circle of dots appearing randomized in color and size. Blooteuth (talk) 15:24, 10 November 2017 (UTC)[reply]
The same image viewed by white, blue, green and red lights reveals different hidden numbers. Blooteuth (talk) 13:04, 15 November 2017 (UTC)[reply]

I want to recreate something I've seen. I want to have three photos - actual photographs. I want to have a blue, green, and red plastic filter - just little squares of colored plastic. I want to see a number appear in the photo when I place the filter over the photo. I've seen this as a child. It is also used in color-blindness tests. However, when I change the color in any area of a photo, it is obvious with or without the filter. What is the trick to making it hard to see the number without the filter, but obvious with the filter? 209.149.113.5 (talk) 15:11, 10 November 2017 (UTC)[reply]

I see the animation. It doesn't appear to be helpful for what I want. I want a physical photo that I can sit in a frame on my desk. I don't see a number in it. Then, I place a blue sheet of plastic over it and I can clearly see a number. I'm looking into the red-blue 3-D effects to see if I can make it work. I've tried a lot of ideas, but either I can clearly see the number with or without the filter or I can't see the number at all either way. 209.149.113.5 (talk) 15:31, 10 November 2017 (UTC)[reply]
"The Trick"? There are lots of tricks involved in making a high-quality optical illusion like the one described here - but if I had to name the single most important trick, it would be understanding that the color filter selectively attenuates chroma noise while equally attenuating luma noise. For your illusion to work, you must hide a very low-power signal in the chroma-channel and bury it among powerful luminance noise. Pick a color-space that lets you do that - HSV or YUV - for your noise-generator; then, blend with your original image.
First, let's correct a typo: you probably mean to say steganography, rather than stenography.
There are two parts to your task: (1) to create a special image that appears noise-like, or invisible, when viewed in normal conditions, and appears to contain your numeral when viewed only in one color channel; and (2) to combine that image with your original photograph.
Task 1 is construction of the "stenographic image" (sic). This is the image that contains the secret message you want to convey, but only when viewed through the color filter. You can rely on certain facts of visual perception, and try to make the data appear noise-like by capitalizing on human visual perception biases pertaining to contrast, illumination, edges and contours, and so on: these can inform the choice of your noise-generator. There's a lot of art to this: it's actually much more subjective than any other part of your task. Bear in mind that you are creating a synthetic data set that is going to be combined with an image in later processing: knowing this, you have a lot of options to choose from when you represent your noise. For example, you can choose to make your noise zero-mean: that entails representing each pixel as a signed data type, which is not a common way to represent a finished image product. Our article on steganography tools lists several existing pre-packaged software options - very few of them are any good. The sort of task you describe tends to be so customized that it would require a custom software process designed just for your needs.
Task 2 is digital compositing; it is the actual steganography, or the hiding of the previous "noise-like signal" inside another photograph. You can use a wide variety of methods to blend these images. You can also use the special knowledge about how your image will be composited to help you craft the noise-like data in the first task. Compositing is, in itself, an entire field of study: you can add the images; you can multiply them; you can mix them according to a nonlinear relationship. Once again, this is as much art as science. The classical paper is Porter/Duff (1984). It gives you a fantastic overview of what your options are - I dare say, it is "mathematically-complete" (you have no other options besides what they describe in that paper). In the last decades, academic research, commercial productization, and practical experience have developed compositing into one of the most elaborate areas of image processing. Artists and software designers spend years studying how to do it so that it looks good. In your case - intentional injection of a noise-like signal - you have the extra difficult job of preserving a noise-like character without perceptually damaging the final image.
From a practical point of view, some of the tools you may wish to use include a layer-capable image editor, like GIMP or Adobe Photoshop; and a programmable mathematical tool like MATLAB or the python programming language to synthesize noise and arrange it in the form of a raster image. If you can afford the commercial tools like MATLAB and its image processing toolbox, you will have great advantages, especially in terms of the ability to rapidly iterate your efforts and get immediate visual feedback.
Your task is not trivial; there are no easy automatic ways to do it. You will also need great familiarity with your tools, and the ability to carefully control them (typically this means writing program-code). You must be aware of all the manual- and automatic- image processing steps that your software tools will perform on your intermediate and final products to ensure that your steganographic work is not lost, for example, by automatic post-processing, image compression, or exported image- or file-format changes at the last step.
Nimur (talk) 18:17, 10 November 2017 (UTC)[reply]
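The chroma-hiding idea above can be demonstrated in a few lines of NumPy. This is a deliberately idealized sketch: the 8x8 glyph, the noise levels, and the perfectly channel-correlated "luminance" noise are all stand-ins chosen so the effect is exact; in a real photograph the noise would only be approximately shared between channels and the recovery would be statistical rather than perfect.

```python
import numpy as np

# Bury a weak blue-channel offset (the "message") under strong luminance
# noise that hits all three channels equally. Viewed in full colour the
# message is swamped by the noise; isolating the blue channel relative to
# another channel (what a colour filter approximates) reveals it.
rng = np.random.default_rng(42)

h = w = 8
mask = np.zeros((h, w))
mask[2:6, 3:5] = 1.0               # crude glyph standing in for a numeral

base = np.full((h, w, 3), 128.0)   # flat grey "photo"
luma_noise = rng.normal(0.0, 30.0, size=(h, w, 1))  # identical in R, G, B
img = base + luma_noise            # noise broadcast across all channels
img[:, :, 2] += 12.0 * mask        # weak message, blue channel only

# Without the filter: per-pixel luminance swings of ~30 hide a 12-unit bump.
# With the filter: the shared noise cancels between channels and the glyph
# pops out of the blue-minus-red difference.
blue_minus_red = img[:, :, 2] - img[:, :, 0]
recovered = blue_minus_red > 6.0
print(recovered.astype(int))
```

In practice, as Nimur says, you would shape the embedded signal in a chroma-aware colour space (HSV/YUV) and tune its amplitude against the image's own noise floor rather than against synthetic noise.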

Follow up question on dark matter and stars

Perhaps most stars everywhere are sited at dense wisps of dark matter to give the contraction a head start? After all, there is considerably more dark matter than ordinary matter, so it could be the determining factor.144.35.114.188 (talk) 15:45, 10 November 2017 (UTC)[reply]

Perhaps; perhaps not. What possible further response is there, given that no one knows anything about dark matter other than that it appears from indirect evidence to exist? As star formation was thought to be reasonably explicable before dark matter was conceived of, there would seem to be no need of your hypothesis (with apologies to Laplace). {The poster formerly known as 87.81.230.195} 90.200.138.27 (talk) 21:29, 10 November 2017 (UTC)[reply]
Yes but calculations based on no dark matter wisps vs. many dark matter wisps could indicate respective rates of formation. It has always seemed to me the current theory of star formation has been thought of as necessary rather than satisfactory.144.35.45.72 (talk) 21:56, 10 November 2017 (UTC)[reply]
This is not a forum or the place for speculation. Questions about articles or sources are more relevant than anyone's personal pet theories. That being said, this article The Bullet Cluster Proves Dark Matter Exists, But Not For The Reason Most Physicists Think was an interesting read, and seems to imply that dark matter may only play a very indirect role in the collision of interstellar gas. μηδείς (talk) 22:41, 10 November 2017 (UTC)[reply]
medeis. Along with your good contributions, you have made several disgusting, offensive, irrelevant comments here, and gone on silly power trips like the one you have just made, over the years, and also made false accusations. You are scarcely one to advise on what is appropriate in this venue. Anyway, I had good reason to ask my question here: I asked my so-called "pet theory" question for valid reasons: there is a good chance that some astrophysicist who volunteers might know whether the suggestion is reasonable or know how to refute it, and because it has almost certainly been considered, perhaps briefly, in the astrophysics literature, and a reference could be provided to me... It is a shame that you are often so difficult.144.35.45.72 (talk) 00:31, 11 November 2017 (UTC)[reply]
This is not a forum or the place for speculation or personal attacks. Questions about articles or sources are more relevant than anyone's personal pet theories (pet not being a "disgusting" or "offensive" term). That being said, this article The Bullet Cluster Proves Dark Matter Exists, But Not For The Reason Most Physicists Think was an interesting read, and seems to imply that dark matter may only play a very indirect role in the collision of interstellar gas. Please read that article, as it is the only source anyone has given, and even in good faith, so far. μηδείς (talk) 03:44, 11 November 2017 (UTC)[reply]

Maybe the stars make dark matter 71.181.116.118 (talk) 23:37, 10 November 2017 (UTC)[reply]

Please feel welcome to speculate when asking questions; that's what we (should be) here for. You are apparently not the first to wonder this about dark matter: Dark star (dark matter) is more than just a fun movie. See [6] which says that axions and maybe WIMPs are not suitable for this sort of thing, but neutralinos might be... anyway, that article is confusing, and is intentionally untrackable back to a real source, but if you look at something like [7] you can see references back to a bunch of Paolo Gondolo papers in JCAP and one of them from 2010 supposedly should be the one that touches on this. I don't know what he's toking but it's laced with a whole lot of heavy mathematics, so you're in for a ride... Wnt (talk) 01:01, 13 November 2017 (UTC)[reply]

November 11

Salicylic acid and acetylsalicylic acid

What difference would it make if we took the first, instead of the second, for a headache?--Hofhof (talk) 01:06, 11 November 2017 (UTC)[reply]

And yet the ancient Greeks used it (in the form of willow bark) for headaches with no apparent ill effects. 2601:646:8E01:7E0B:0:0:0:EA04 (talk) 10:12, 11 November 2017 (UTC)[reply]
Sola dosis facit venenum! Rather a difference between concentrated solutions and small amounts in bark. Fgf10 (talk) 11:47, 11 November 2017 (UTC)[reply]
It's workable (and has a long history) to use salicylates from willows, but it's slightly unpleasant on the stomach. It was usually taken as a tea, made directly from the willow bark. Any more concentrated form of salicylic acid (see above) is wart remover and isn't consumed internally.
Nor is it a question of dose, it's a different compound. In fact, salicylic acid is so harmful that it wasn't taken directly (there was a brief Victorian period when it was, as it was much cheaper). The compound prepared from the tree is a sugar called salicin, and that is oxidised within the body to the acid form. However it's hard to produce large quantities of salicin cheaply. Salicylic acid was only taken as a drug for a brief period after the development of industrial pharmacy, when salicylic acid could be synthesised cheaply, without willows, and before the mighty Bayer of Germany invented Aspirin as a more acceptable form of it.
Charles Frédéric Gerhardt, one of the illustrious group of self-poisoning chemists, had first set out to synthesise a more acceptable form of salicylic acid and found that acetylsalicylic acid was suitable. However his synthesis wasn't very good and he gave up, thinking that the compound was wrong, rather than it just being his process. It was some decades before his original work was really proven to be right, by Felix Hoffman at Bayer.
Note that WP's claim, "Aspirin, in the form of leaves from the willow tree, has been used for its health effects for at least 2,400 years." is just wrong (and a complete misunderstanding of a childishly simple and correct ref). But that's GA review and MEDRS for you - form over accuracy, every time. Andy Dingley (talk) 12:06, 11 November 2017 (UTC)[reply]
That claim was not supported by the source, so I've changed it. Dbfirs 12:54, 11 November 2017 (UTC)[reply]
First, salicylates have been used for at least 5000 years, as evidenced by Ur III, a tablet from Ur of the Chaldees from, if I recall correctly, roughly 300 years before the birth of the biblical patriarch Abraham. Here's one source. [8] Salicylate was available in other forms - beaver testicles notably concentrate it in a way that they would apply to toothaches and such, and were used by native Americans; there was even a myth in Europe from classical times that the beaver would castrate itself when hunters grew near to avoid capture. (see Tractatus de Herbis)
Second, the willow bark salicylates used 5000 years ago were far safer than the only ones allowed to be sold over the counter by responsible medical authorities today. [9] This is because salicin comes as a glycoconjugate that is not taken apart until after it passes through the stomach. By contrast, aspirin was invented a century ago by industrialists who noticed that the salicylates they sold were causing stomach injury, and who figured it was due to the acid, so (after first trying to "buffer" the acid, e.g. Bufferin) they put a simple acetyl group over the acid hoping to stop the damage. Same folks who brought you heroin as the non-addictive alternative to morphine (a racket that works to this day). Wnt (talk) 16:50, 11 November 2017 (UTC)[reply]
Thank you for that earlier history. I've added a brief mention to the article. Dbfirs 18:33, 11 November 2017 (UTC)[reply]
As willow bark has been mentioned, I think I will have a go at a bit of clarification. Pharmaceutical companies love to synthesize the most active component of any proven natural remedy and market it. Willow Bark contains salicylic acid, but the 'therapeutic' dose of Willow Bark has far less salicylic acid per dose, so it does not cause the same problems as a therapeutic dose of pure salicylic acid or acetylsalicylic acid. The reason for this is that Willow Bark also contains other compounds that work synergistically, which enhance the therapeutic effect of the little salicylic acid that Willow Bark has per dose. This is why many users of Willow Bark swear blind that it is more effective than drug-store-bought acetylsalicylic acid pain killers. Placebo? Doctors take it rather than become dependent on anything stronger and addictive. Also, many doctors are closet alcoholics. So even though Willow Bark is far from completely safe, for a habitual drinker it is better than acetylsalicylic acid, acetaminophen, ibuprofen, naproxen, etc. These drugs do the kidneys in quicker. Yet, since one's HCP can't earn money from writing 'scripts for Willow Bark, one ends up being prescribed a synthetic. And why not? Your Doctor is running a business and he too has to earn enough to put his kids through collage, and possibly be the first in the street to own a Tesla etc. Aspro (talk) 23:15, 11 November 2017 (UTC)[reply]
I wonder whether there are collage colleges. Akld guy (talk) 06:14, 12 November 2017 (UTC)[reply]
That was the idea behind "Bufferin", but it was generally incorrect. I think it is more accurate to say that cyclooxygenase enzymes (especially COX-1) in the stomach are needed to prevent injury, and if salicylate is absorbed there it will inhibit those enzymes. Wnt (talk) 22:37, 14 November 2017 (UTC)[reply]

where can I find literature on anhydrous acid-base equilibria?

It is really frustrating to me as a tutor of organic chemistry that everyone assumes that acid base reactions always take place in water. I need more information about how to estimate pKa of an organic compound in say, ethanol or glacial acetic acid, given a pKa in water and pKb of a conjugate base (and vice versa), as well as the autoionization constant of the target solvent. Also, how would I calculate the change in pKas for polar aprotic solvents? 98.14.205.209 (talk) 15:41, 11 November 2017 (UTC)[reply]

It might not be a solvable problem at this point. Acid_dissociation_constant is our main article, and its "Acidity in nonaqueous solutions" section notes:
These facts are obscured by the omission of the solvent from the expression that is normally used to define pKa, but pKa values obtained in a given mixed solvent can be compared to each other, giving relative acid strengths. The same is true of pKa values obtained in a particular non-aqueous solvent such as DMSO.
As of 2008, a universal, solvent-independent, scale for acid dissociation constants has not been developed, since there is no known way to compare the standard states of two different solvents.
The following ref (cited in that article section) has some information about comparing pKa in different solvents, especially with respect to different structural classes:
  • Kaljurand, I.; Kütt, A.; Sooväli, L.; Rodima, T.; Mäemets, V.; Leito, I; Koppel, I.A. (2005). "Extension of the Self-Consistent Spectrophotometric Basicity Scale in Acetonitrile to a Full Span of 28 pKa Units: Unification of Different Basicity Scales". J. Org. Chem. 70 (3): 1019–1028. doi:10.1021/jo048252w. PMID 15675863.
DMacks (talk) 16:31, 11 November 2017 (UTC)[reply]


(ec)It appears that despite the deceptively simple looking equilibrium, pKa depends on both the solvent [10][11] and the ionic strength [12]. That IUPAC source mentions Davies equation, Debye-Huckel theory, Pitzer equation, Specific Interaction Theory. The pKa also depends on temperature in a way that varies based on the class of compound, yet follows some empirical rules within them. [13] Certainly I don't know this topic, but I should put these up to get started. Wnt (talk) 16:32, 11 November 2017 (UTC)[reply]
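Of the corrections mentioned above, the Davies equation is simple enough to sketch directly (python; the 0.509 constant assumes water at 25 °C, and the equation is only trusted up to roughly I = 0.5 mol/kg):

```python
import math

A_DH = 0.509   # Debye-Hueckel "A" constant for water at 25 degC

def davies_log10_gamma(charge, ionic_strength):
    """Davies equation: log10 of the activity coefficient of an ion.
    Illustrates why apparent pKa values shift with ionic strength."""
    sqrt_i = math.sqrt(ionic_strength)
    return -A_DH * charge ** 2 * (sqrt_i / (1 + sqrt_i) - 0.3 * ionic_strength)

# A singly charged ion at I = 0.1 mol/kg:
gamma = 10 ** davies_log10_gamma(1, 0.1)
print(round(gamma, 3))
```

Since the thermodynamic Ka is written in activities, an activity coefficient of ~0.8 already moves a measured pKa by about 0.1 unit at quite ordinary ionic strengths.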
OK thank you, because I am trying to define the scope of problems that I can cover with my students, and in many cases I have to know much more than my students would need to know (to ace their exams) because their education (and mine) I realize sometimes seem to side-step certain problems with dogmatic assumptions. 98.14.205.209 (talk) 16:38, 11 November 2017 (UTC)[reply]

Systems of acid and non-conjugate base (e.g. ammonium bicarbonate, pyridinium dihydrogen phosphate, boric acid - acetate)

Why aren't systems like these covered as extensively in online pages? Almost every web page seems to stop at adding a strong base to a weak acid or strong acid to weak base which is *really frustrating*. I suddenly realize that we didn't really cover many non-conjugate buffers in undergrad (the most we did was ammonium acetate, which to be honest is CHEATING since pKa + pKb = 14 and is really just a hidden version of the acid / conjugate-base problem). Basically we have a weak acid and a weak base whose pKas and pKbs do not add up to 14. Surely there must be a better way than having to brute force it through a system of equations? 98.14.205.209 (talk) 16:34, 11 November 2017 (UTC)[reply]

The reason that things are covered less is that no one has written about them. However, that may be because the topic is not WP:Notable in itself. For example, if I look for "pyridinium dihydrogen phosphate", nearly all hits are derivatives. The one that was not was an error. That suggests that it is not useful compared to anything else already known. Ammonium bicarbonate, however, is used as a buffer, and there are numerous references as to its use, e.g. https://www.nestgrp.com/protocols/trng/buffer.shtml and a buffer calculator at https://www.liverpool.ac.uk/buffers/buffercalc.html Graeme Bartlett (talk) 22:00, 11 November 2017 (UTC)[reply]
Boric acetate is used as a buffer in this patent. SciFinder has about 10 hits for "pyridium phosphate", the result-set of which is annotated as being an uncertain ratio, and seem to have been studied as corrosion inhibitors. DMacks (talk) 22:21, 11 November 2017 (UTC)[reply]
TBE buffer is quite common in molecular biology - TAE buffer less so, but not unheard of. (The EDTA is just a preservative; these are between Tris (pH 8) and borate or acetate. Wnt (talk) 11:46, 12 November 2017 (UTC)[reply]
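For what it's worth, the "brute force" for a weak acid plus a non-conjugate weak base can be quite small: the charge balance is a single monotonic equation in [H+], so it can be bisected without solving the full symbolic system. A hedged python sketch (my own; the monoprotic model and the concentrations/pK values below are illustrative, not taken from any source above):

```python
import math

KW = 1e-14  # water autoionization constant at 25 degC

def ph_weak_acid_weak_base(c_acid, pka_acid, c_base, pkb_base):
    """pH of a mixture of a weak monoprotic acid HA and a non-conjugate
    weak base B, by bisecting the charge balance h + [BH+] = [A-] + Kw/h."""
    ka = 10.0 ** -pka_acid
    ka_bh = KW / 10.0 ** -pkb_base          # Ka of the conjugate acid BH+

    def charge_imbalance(h):
        a_minus = c_acid * ka / (ka + h)     # [A-] from the acid's mass balance
        bh_plus = c_base * h / (h + ka_bh)   # [BH+] from the base's mass balance
        return h + bh_plus - a_minus - KW / h  # positive when h guessed too high

    lo, hi = 1e-14, 1.0                      # bracket for [H+] in mol/L
    for _ in range(200):
        mid = math.sqrt(lo * hi)             # bisect on a log scale
        if charge_imbalance(mid) > 0:
            hi = mid
        else:
            lo = mid
    return -math.log10(math.sqrt(lo * hi))

# Equimolar acetic acid (pKa 4.76) and pyridine (pKb 8.77), 0.10 M each:
print(round(ph_weak_acid_weak_base(0.10, 4.76, 0.10, 8.77), 2))
```

Polyprotic species extend the same charge-balance function with their extra ionization fractions; the bracketing monotonic solve stays unchanged.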

AXLE FLUX GENERATOR OUTPUT.

Rotor from a claw pole alternator

The permanent magnets used in constructing Axle Flux Generators are always arranged to have alternate poles such as: N-S-N-S-N-S etc. What would be the effect on the output waveform if I used similar poles such as: N-N-N-N-N etc. — Preceding unsigned comment added by Adenola87 (talkcontribs) 16:59, 11 November 2017 (UTC)[reply]

An axial flux generator? The usual source for building advice on these (small scale wind turbines) is Hugh Piggott's books or website. You need to alternate the magnets, so that there is a changing flux through the coils. If there is no change of flux, then there's no output.
A long-established design is the claw pole alternator. This uses a single field coil (so the flux in the armature is always in the same direction) and has sets of interleaved pole pieces from each end, so that it has the effect of a reversing field. Andy Dingley (talk) 19:10, 11 November 2017 (UTC)[reply]
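The "no change of flux, no output" point can be illustrated numerically with a toy python model (my own simplification, not from Piggott; the cos² coupling factor is an arbitrary smooth stand-in for the real magnet-coil overlap, in arbitrary units):

```python
import math

def coil_flux(pole_pattern, theta):
    """Flux linked by one stator coil at rotor angle theta, for a rotor
    with the given pole pattern (+1 = N, -1 = S).  Purely illustrative."""
    n = len(pole_pattern)
    return sum(p * math.cos(theta - 2 * math.pi * i / n) ** 2
               for i, p in enumerate(pole_pattern))

def peak_emf(pole_pattern, steps=3600):
    """Peak |dPhi/dtheta| over one revolution, by finite differences;
    the EMF is proportional to this at a given rotor speed."""
    d = 2 * math.pi / steps
    return max(abs(coil_flux(pole_pattern, (k + 1) * d) -
                   coil_flux(pole_pattern, k * d)) / d
               for k in range(steps))

print(peak_emf([1, -1, 1, -1]))   # alternating poles: clearly nonzero
print(peak_emf([1, 1, 1, 1]))     # identical poles: numerically ~zero
```

With alternating poles the linked flux swings as the rotor turns, so dΦ/dθ (and hence the output waveform) is large; with identical, evenly spaced poles the contributions sum to a constant flux and the derivative vanishes.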
I assume you mean axial flux. There's not much to be had with novel configurations compared with state of the art, you can buy off the shelf axial flux motor kits for a couple of thousand dollars that are 97% efficient. http://www.ata.org.au/wp-content/uploads/marand_high_efficiency_motor.pdf The CAD drawing on P9 of that presentation was my original package layout from 1994/5. Greglocock (talk) 19:23, 11 November 2017 (UTC)[reply]


Rifles with horizontal magazines

Unlike most guns with box-like magazines, some keep their cartridges in horizontal pipe-shaped magazines fitted just below their barrels (mostly "lever-action" or "pump-action" rifles). Such an arrangement may be all right for shotguns, which always use cartridges with flat front ends, but in rifles the cartridge's front end is never flat; it may not be as sharp as an AK-47's, and is rounded to some extent, but is still narrow enough to act as a firing pin against the cartridge in front of it, whose most sensitive part (the primer cap) rests directly against the bullet tip of the neighbour behind. Is this arrangement not considered risky? Especially since the gun may also receive some unexpected jerk?  Jon Ascton  (talk) 17:25, 11 November 2017 (UTC)[reply]

It's called a tube magazine, and it's used on lever-action rifles like the Winchester Model 94. There are a lot of videos of people fooling around with this configuration trying to set off a chain reaction in the tube. Conventional wisdom is that pointy bullets are dangerous in tube magazines, and that all rounds for such rifles should use blunt-headed shapes and soft alloys. Hornady makes a plastic-capped pointy round that's supposed to be safe, but most opinions seem to be that the added ballistic performance isn't worth the cost of the ammunition - lever-action rifles aren't really made for long-range fire, so the blunt ballistics make no real difference at ranges for which such guns are normally used. Acroterion (talk) 18:10, 11 November 2017 (UTC)[reply]
Cartridges that are detonated by pressure to their rim such as common .22 caliber (0.22 in = 5.6 mm) varieties are safer under pressure to their rear center from another cartridge tip in a tube magazine than Centerfire ammunition would be. Blooteuth (talk) 00:09, 12 November 2017 (UTC)[reply]
Rifle primers are designed to allow for light strike. For example, an M16 rifle has a free floating firing pin. When the bolt is retracted and closed as part of the cycle, the firing pin can strike the rifle primer and spec for the primer must allow for a light strike. The hammer spring must be sufficiently strong to project the firing pin into the primer. Pistol primers, however, are much more sensitive. Firing pins for pistols have many more safeguards to prevent any strike. --DHeyward (talk) 06:24, 12 November 2017 (UTC)[reply]
Primers need a specific, fast and powerful impact from a firing pin with a distinctive shape, made of high-alloy steel, in order to ignite. See here a video of some gentlemen trying hard to ignite primers with pointy bullets in a tube. --Kharon (talk) 06:38, 12 November 2017 (UTC)[reply]
Tube magazines were quite common on early military repeaters as well, such as the Jarmann M1884, the Mauser 1871/84, the Krag–Petersson, the Kropatschek, the Murata Model 22, the Lebel Model 1886, the various Vetterli, and so on and so forth. While quite a few of those used blunt or rounded bullets and/or rimfire ammunition, some used spitzer bullets and centerfire ammunition with no major issues. WegianWarrior (talk) 07:02, 12 November 2017 (UTC)[reply]
Also see these three videos on YouTube. WegianWarrior (talk) 15:55, 12 November 2017 (UTC)[reply]

November 12

Etymology of the word 'male'

This has been moved to Wikipedia:Reference desk/Language. ←Baseball Bugs What's up, Doc? carrots12:44, 12 November 2017 (UTC)[reply]

Made that a link to the specific section. --69.159.60.147 (talk) 23:03, 12 November 2017 (UTC)[reply]

November 13

Planet Venus and Jupiter conjunction.

How far apart are these two planets as we see them today. Their apparent closeness looks amazing but what's the truth. Richard Avery (talk) 08:52, 13 November 2017 (UTC)[reply]

Stellarium tells me that the separation right now is about 15 arcminutes, about half the angular diameter of the moon. --Wrongfilter (talk) 09:10, 13 November 2017 (UTC)[reply]
About 833,500,000 kilometres: they have lined up because Venus is between Earth and the Sun, and Jupiter is roughly on the opposite side of the Sun to both Venus and Earth. [14]. They are not really close - just in line with each other. Wymspen (talk) 09:32, 13 November 2017 (UTC)[reply]
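That figure can be sanity-checked on the back of an envelope, using mean orbital radii and assuming the three bodies are roughly collinear (which on that date they were only approximately, so this gives the right order of magnitude, not the exact value):

```python
AU_KM = 149_597_870.7            # kilometres per astronomical unit

# Rough collinear geometry for this morning conjunction:
# Venus between Earth and the Sun, Jupiter beyond the Sun.
earth_venus   = 1.00 - 0.72      # AU (Venus mean orbital radius 0.72 AU)
earth_jupiter = 1.00 + 5.20      # AU (Jupiter mean orbital radius 5.2 AU)
separation_km = (earth_jupiter - earth_venus) * AU_KM
print(f"{separation_km:.3e} km")   # same order as the quoted 8.335e8 km
```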
Thank you Wymspen, exactly what I wanted. Sorry Wrongfilter I did not make my question quite clear enough. Richard Avery (talk) 15:08, 13 November 2017 (UTC)[reply]
The Daily Telegraph feature Night Sky in November mentioned the distance "half the diameter of the moon" but didn't mention the arcminutes - but then the moon is half a degree wide and it moves through a distance equal to its own diameter every hour. It was a cloudless morning, and the moon was up for comparison. I have a wide uninterrupted view down to the eastern horizon and I was out an hour before sunrise - but I forgot to look. As far as separations go, every so often there is an occultation of a bright star by the moon (occultations of planets are far rarer). The latest was of Aldebaran between 02:30 and 03:21 (GMT) on 6 November. The distance here is many light years. It happened to be cloudy, but even if it hadn't been I would have missed it because although the feature covers the whole month it is published on the first Monday. 82.13.208.70 (talk) 15:31, 13 November 2017 (UTC)[reply]
One of the most spectacular sights I have seen was a conjunction some 10+ years ago of Venus, Jupiter, and the crescent Moon shortly after sunset. Since you knew the focus of the crescent moon was the sun, by definition, and that Venus, the Earth, and Jupiter were all in the plane of the Zodiac ecliptic you could actually see the solar system in three full dimensions, rather than just as dots on a field. It was awe-inspiring. μηδείς (talk) 22:44, 13 November 2017 (UTC)[reply]
Sorry to nitpick but technically the zodiac's the band where the planets can be (+/-8+°) and the plane's the ecliptic. Sagittarian Milky Way (talk) 03:46, 14 November 2017 (UTC)[reply]
Not at all, that's an important and highly relevant correction, thanks. μηδείς (talk) 03:55, 14 November 2017 (UTC)[reply]
Ha! I see Jupiter and Venus rose at six o'clock, which explains why I didn't see anything. Small nitpick: Medeis is referring to this [15]. An added bonus is that the moon passed in front of Venus at 16:15 (GMT) on the Monday night, 8 December 2008. Now, if the three luminaries really had all been on the ecliptic they would have been in a straight line. The moon is only on the ecliptic when eclipses occur (hence the name). You can see from List of solar eclipses in the 21st century that she wasn't there on that occasion. 82.13.208.70 (talk) 11:35, 14 November 2017 (UTC)[reply]
I did also see the 2008 conjunction, but the crescent moon was not between and above the two on that occasion. The conjunction I am thinking of was in 2003 or 2004 IIRC, and definitely not in December, and the moon was visible between but above the two planets, i.e., above the ecliptic. μηδείς (talk) 16:36, 14 November 2017 (UTC)[reply]
The moon is only on the ecliptic when eclipses occur (hence the name). Not quite. The ecliptic is where eclipses can happen. The moon crosses the ecliptic twice a month; an eclipse happens if that crossing coincides with a syzygy. —Tamfang (talk) 08:35, 15 November 2017 (UTC)[reply]
What's so special about planetary "conjunctions"? They have already happened a million times, and they are not even interesting for a Hohmann Transfer. The only "profession" that finds interest in them is Astrology aka Pseudoscience. --Kharon (talk) 13:35, 14 November 2017 (UTC)[reply]
They are beautiful. ←Baseball Bugs What's up, Doc? carrots15:11, 14 November 2017 (UTC)[reply]
[ec] μηδείς has already explained that to you. HenryFlower 15:12, 14 November 2017 (UTC)[reply]
Humans have emotions and an appreciation of aesthetic concerns. That's why they are special. --Jayron32 15:27, 14 November 2017 (UTC)[reply]
Humans have emotions - [citation needed] TigraanClick here to contact me 15:32, 14 November 2017 (UTC)[reply]
Of course I have seen conjunctions many times, and they are not beautiful in the sense of a flower or a colorized Hubble image, but this one was spectacular. I am a particularly visual thinker, and given that the Earth, Venus, Jupiter and the sun (which had set, but whose position was obvious) defined a plane, with the moon above that plane, instead of seeing some dots on the flat sky it was immediately clear to me that I was seeing a portion of the solar system in three dimensions, which is not normally obvious, and is a very rare event. μηδείς (talk) 16:36, 14 November 2017 (UTC)[reply]

November 14

USGS measurements

How is the United States Geological Survey able to measure earthquake magnitudes around the world? That is, do they have their own stations across the world, or do they measure indirectly at home, deducing the magnitude from available data? Thanks.--212.180.235.46 (talk) 09:00, 14 November 2017 (UTC)[reply]

Instruments in the US, others around the world, and international agreements to share data - see National Earthquake Information Center Wymspen (talk) 09:51, 14 November 2017 (UTC)[reply]

I looked at this article and the talk, but didn't get the answer I want. A quite basic calculation (*) shows that, if the greenhouse effect were absolutely perfect, the atmosphere absorbing each and every parcel of energy from the surface (it doesn't matter whether it is absorbed through conduction, convection, phase transition, radiation or whatever), then back-radiation (let's call it B) peaks at a maximum A + C, where:

  • A: absorbed by the atmosphere (77.1 according to the picture in the article)
  • C: absorbed by the surface (163.3, same source)
  • A + C: 240.4

BUT B is supposed to be 340.3 (same source), 100 higher than the calculated maximum.

Well, I don't expect NASA to be that wrong, and I think any error would have been corrected long ago, so I have to suppose that somehow back-radiation is currently HIGHER than in a perfect greenhouse-effect world. My question is: how?


(*) we are looking for a steady-state, equilibrium, stable solution (things get back there if some noise disturbs the system). I leave you the easy calculation to get there; it gives you the only solution -- nothing else works.

  • the surface receives C directly and A+C from back-radiation, for a total of A+2C, which is then all sent up, so the surface is at equilibrium.
  • the atmosphere gets A directly, plus those A+2C from the surface, for a total of 2A+2C; half of it (A+C) goes down (the same as the back-radiation used just above, sanity check OK), half of it (A+C) goes up (which is just as much as absorbed, sanity check OK)

185.24.186.192 (talk) 11:42, 14 November 2017 (UTC)[reply]

The greenhouse effect
Does the simplified schematic from greenhouse effect help? The greenhouse effect is based on a circular flow of energy trapped in the system (i.e. heat). If you look at the schematic, the total energy entering each level is equal to the total energy leaving each level, which corresponds to an equilibrium. (There is actually a slight imbalance these days due to global warming.) However, it is not the case that the back-radiation must equal the total radiation from the sun. The amount of back-radiation depends on the temperature of the atmosphere. Similarly, the amount of energy transfer from the surface depends on the temperature of the surface. The surface and atmosphere will warm up until they reach a temperature where the energy flows out equal those coming in. The warm temperatures at the surface are maintained, in part, by a circular flow of energy which we know as the greenhouse effect. The energy flows from surface to atmosphere and back again happen to be larger than those from the sun, but that isn't a problem as long as we are talking about a closed loop. Dragons flight (talk) 11:58, 14 November 2017 (UTC)[reply]
Thanks, but no, it doesn't help at all: the figures are only slightly different (67 + 168 = 235, vs 324 BR, instead of 77 + 163 = 240, vs 340), but share the same issue.
There is equilibrium in each level indeed, and you would have the same equilibrium at each level by adding just any value, positive or negative, to both back radiation and upward radiation. Subtract 324 from back radiation (putting it at zero), and also 324 from upward radiation (down from 452 to 128), and it still works. Add another 324 to back radiation (putting it at 648) and also 324 to upward radiation (up from 452 to 776), and it also works. Well, no, it doesn't. The system is then, in both cases, out of equilibrium (even though each level is at equilibrium). A zero back radiation would also mean zero up radiation from the atmosphere, so it would warm up and emit more and more back radiation, until reaching the equilibrium value. Similarly, a 648 back radiation is way too much, meaning huge losses to space, cooling down the atmosphere and lowering back-radiation, until the equilibrium is reached.
The point is, basic (too basic?) calculation puts the said equilibrium at a maximum of 240 (or 235, depending on the schematic) in the perfect GHE case. While each schematic says that in a NON-perfect GHE case, back-radiation is much higher, when it should be lower (nothing can beat the perfect GHE scenario).
185.24.186.192 (talk) 13:39, 14 November 2017 (UTC)[reply]
It's just a very simplified model representation, and you added elements which are not in that simple model. One result of that is of course that the numbers in the model no longer add up, because you changed the "formula" that model is using (to result in equilibrium). Find another model that contains your elements, or "manufacture" a model yourself (which you already kinda tried (wrongly) with your question). --Kharon (talk) 14:01, 14 November 2017 (UTC)[reply]
I added no elements which are not in that simple model: everything is taken from the Wikipedia article or the schematic provided above.
I may be wrong, indeed I asked "how", so your answer "you are wrong" is just not an answer...
185.24.186.192 (talk) 21:40, 14 November 2017 (UTC)[reply]
Perhaps it is unclear, but the radiation from the surface and the atmosphere is determined by the temperature of each component, not the flux. So, you can't just put in random values without also changing those temperatures (flux emitted is roughly proportional to T⁴). Why do you believe 240 is the maximum? It's not. Let's consider a different analogy. Consider an oven. It consists of a heating element, some food you want to cook, and an insulated box. If you want to maintain a constant temperature, then the heat being put into the heating element must equal the heat leaking out of the insulated box. If the insulation is pretty good, then hopefully not much energy is leaking, so the flux necessary to maintain a constant temperature is low. However, the flux of energy being radiated between the food and the box and back will be much higher. That's because the inside of the box can get much hotter than the outside. If the insulation were nearly perfect, you could imagine the oven being able to get ridiculously hot and the internal energy fluxes between the food and the box getting arbitrarily large. This is true even if the heating element is only providing a relative trickle of new energy, since the heat can build inside until an equilibrium is achieved. It's the same with the greenhouse effect in planetary atmospheres. The sun provides new energy, which at equilibrium counters the losses, but the internal transfers of energy can become much larger than the source flux depending on the characteristics of the atmosphere. For a thin atmosphere (like Mars) nearly all surface radiation escapes directly to space, the back-radiation is very low, and the temperature enhancement is negligible. For a thick atmosphere (like Venus), essentially all surface radiation is captured by the atmosphere, the back-radiation is enormous, and the temperature enhancement is huge. Earth happens to lie in between these extremes. Dragons flight (talk) 16:27, 14 November 2017 (UTC)[reply]
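One way to see numerically how back-radiation can exceed the absorbed sunlight: replace the single slab with a stack of layers, as in the textbook "glass slab" toy model (this python sketch is my own illustration, not taken from the schematic; each layer absorbs everything that hits it and re-emits equally up and down):

```python
def layered_fluxes(n_layers, solar=240.0, iterations=20000):
    """Toy N-layer atmosphere: relax the flux balances to equilibrium and
    return (surface emission, back-radiation onto the surface) in W/m^2."""
    surface = solar
    layer = [0.0] * n_layers          # per-side emission of each layer
    for _ in range(iterations):
        # Surface balance: absorbs solar plus back-radiation from layer 0.
        surface = solar + (layer[0] if n_layers else 0.0)
        for i in range(n_layers):
            from_below = surface if i == 0 else layer[i - 1]
            from_above = layer[i + 1] if i + 1 < n_layers else 0.0
            layer[i] = (from_below + from_above) / 2.0  # half up, half down
    surface = solar + (layer[0] if n_layers else 0.0)
    return surface, layer[0] if n_layers else 0.0

for n in (0, 1, 2, 5):
    print(n, layered_fluxes(n))
```

With one perfectly absorbing layer the back-radiation exactly equals the 240 W/m² input, which is the A + C ceiling of the calculation above; with two or more layers (that is, an atmosphere whose lower levels are warmer than its upper levels, so the "insulating box" radiates more strongly inwards than outwards) the back-radiation onto the surface exceeds the input, with no heat pump required.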
more food for the though here, thanks.
the radiation from the surface and the atmosphere is determined by the temperature of each component not the flux, but the flux determines the temperature:higher flux in or out respectivly warms or cool the element until flux in and out balance again.
Your oven analogy is perfect. Even a perfect insulation box radiates energy out because of its own temperature, and this temperature will increase until radiation out perfectly match radiation received by the insulation box from inseide. And you can even calculate it, and that is just what i did:
the heating element brings C, heating the insulating box until its temperature rises to the appropriate level to radiate out C, no more, no less; A is zero (no direct heating of the insulating box, neither from the outside nor from the heating element inside); the insulating box also radiates C back into the oven (back-radiation B = C), because otherwise it would either cool or warm (if it were less or more), so the food actually gets B + C = 2C of heating (C from the heating element + B = C of back-radiation), which it also sends back to the insulating box (so the box receives 2C, sends C out and C back in: balance respected), and everything balances perfectly, and stays so because this is a stable equilibrium. So it doesn't get ridiculously hot inside the oven, the maximum heating being A + 2C, as calculated above, with A = 0 in your oven case.
And that's why I believe 240 is the maximum back-radiation: because the calculation shows it to be. It is not a "random value". It is the absolute maximum in the most perfect insulation case (unless something is wrong here, but what?).
Now, I understand your point that, the surface temperature being more or less known, the surface upward radiation cannot be very different from 452, and so the back-radiation must be whatever is needed to balance things out, and that's 324 from your schematic. Higher than 235.
Well, the only sensible conclusion is that the atmosphere is better than a simple insulation layer: a heat pump. Heat pumps exist, we build some, so why not nature? But I don't see how this would work, nor where it would pump heat from, and it is not explained on Wikipedia, if it were so. Back to the start: how is this possible?
185.24.186.192 (talk) 21:58, 14 November 2017 (UTC)[reply]
The insulating box doesn't radiate at the same rate inwards and outwards. 93.136.80.194 (talk) 08:20, 15 November 2017 (UTC)[reply]
I think you are right, but this doesn't explain why, and this actually is just another way to put my initial question: why would the insulating box (a perfectly absorbing atmosphere, chock-full of GHGs) radiate at different rates inwards and outwards?
185.24.186.192 (talk) 11:58, 15 November 2017 (UTC) — Preceding unsigned comment added by 88.168.175.234 (talk) [reply]
Imagine a box made of two thin shells. Each shell is perfectly absorbing and radiates at the same rate inwards and outwards. When the inner shell receives 1 unit of energy, 0.5 is backradiated and 0.5 is sent to the outer shell. Of the latter 0.5, 0.25 is radiated out and 0.25 is backradiated onto the inner shell. Of that 0.25, 0.125 is radiated inside (total for inside is 0.625 now), and 0.125 is backradiated onto the outer shell, and so on. In the end, 2/3 of the energy is backradiated and 1/3 is let through outside. If you add more shells, you can make the fraction radiated out as small as you want.
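The shell-by-shell bookkeeping above can be checked with a short loop (a sketch under the stated assumptions: each shell absorbs everything it receives and re-emits half inward, half outward):

```python
# Track 1 unit of energy that hits the inner of two shells from inside,
# assuming each shell absorbs all it receives and re-emits half in each
# direction. The bounces form a geometric series.
def split_fractions(bounces: int = 60) -> tuple[float, float]:
    back_in = 0.0   # total returned to the interior
    out = 0.0       # total escaping past the outer shell
    at_inner = 1.0  # energy currently arriving at the inner shell
    for _ in range(bounces):
        back_in += at_inner / 2   # inner shell: half goes back inside
        to_outer = at_inner / 2   # ... half continues to the outer shell
        out += to_outer / 2       # outer shell: half escapes outward
        at_inner = to_outer / 2   # ... half returns to the inner shell
    return back_in, out

back_in, out = split_fractions()
print(back_in, out)  # converges to 2/3 and 1/3
```

Sixty bounces is far more than needed; each round trip multiplies the remaining energy by 1/4, so the series converges almost immediately to the 2/3 vs. 1/3 split described above.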
If this box has reached equilibrium, the amount of heat radiated to the outside is equal to the amount being received by the system. But to get to that point, the box's contents might have received far more energy than they could radiate for a long time, and this would have caused an arbitrarily large buildup of energy. The system may receive 1 W and radiate 1 W, but that doesn't preclude there being 200 W bouncing off the box's inner walls (and that doesn't necessarily imply that the box has been heated to its capacity as an insulator and will start to disintegrate and radiate out much more than its usual fraction). 93.136.80.194 (talk) 19:13, 15 November 2017 (UTC)[reply]
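The buildup described above can be illustrated with a toy model (hypothetical numbers, assuming the box leaks a fixed 0.5% of its internal flux per step while receiving 1 W): the circulating flux grows until input and leak balance, ending up roughly 200 times the throughput.

```python
# Toy model (hypothetical numbers): a box receives 1 W per step and
# leaks a fixed 0.5% of the flux circulating inside it each step.
leak_fraction = 0.005

internal = 0.0  # flux bouncing around inside the box
for _ in range(5000):
    internal += 1.0                          # new input each step
    radiated_out = internal * leak_fraction  # small leak to the outside
    internal -= radiated_out

# At equilibrium the box still radiates only ~1 W outward,
# while ~200 W circulates inside.
print(f"internal flux: {internal:.1f} W, radiated out: {radiated_out:.3f} W")
```

This is the 1 W in / 1 W out / 200 W bouncing situation in numbers: the steady-state interior flux is set by the inverse of the leak fraction, not by the input alone.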

November 15

Positronium diameter

In the book "Parallel Worlds" Michio Kaku writes that in the Dark era of the universe intelligent life might survive by being based on positronium atoms which would be 10^12 parsecs in diameter. How come these atoms would be so huge when Wikipedia says that nowadays they're the size of an ordinary hydrogen atom? 93.136.80.194 (talk) 08:13, 15 November 2017 (UTC)[reply]

When positronium is in an excited state it becomes bigger. It does decay, but the higher the state, the longer its lifetime. It does not have to be so big to last a long time. This would be termed a Rydberg atom. Some may combine together to form Rydberg matter. A solid positronium chunk of matter based on this would be less dense than air. Graeme Bartlett (talk) 12:31, 15 November 2017 (UTC)[reply]
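As a rough back-of-envelope check (an estimate added here, not a figure from the book): a Rydberg state's radius grows as n², and positronium's ground state is about twice the Bohr radius because its reduced mass is half the electron mass, so one can solve for the principal quantum number n that would give a 10^12-parsec atom:

```python
# Estimate the principal quantum number n for a positronium atom of
# radius 10^12 parsecs, using r ~ 2 * n^2 * a0 (the factor 2 comes
# from positronium's reduced mass being half the electron mass).
from math import sqrt

BOHR_RADIUS = 5.29177e-11  # m
PARSEC = 3.0857e16         # m

target_radius = 1e12 * PARSEC  # 10^12 pc in metres
n = sqrt(target_radius / (2 * BOHR_RADIUS))
print(f"principal quantum number n ~ {n:.1e}")
```

This lands around n ~ 10^19, which gives a sense of just how highly excited such far-future Rydberg positronium would have to be.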
Let me try to understand your question: In a <book of fiction> the author writes about <some concept they made up by plucking an existing scientific-sounding word out of the air> and now you want us to explain it? You'd have to ask the author. It's their imagination. Explaining the fictional scientific concepts in real science terms is always a futile exercise. --Jayron32 13:32, 15 November 2017 (UTC) Sorry for the misunderstanding. Carry on. --Jayron32 16:06, 15 November 2017 (UTC)[reply]
FYI, Parallel Worlds is intended as a work of popular science non-fiction. That said, I don't know the answer to the IP's question or whether he is accurately describing what is presented in the book. Dragons flight (talk) 14:46, 15 November 2017 (UTC)[reply]
The book is on archive.org (apparently legally), search for 'positronium'. Positronium#Natural occurrence also mentions this, with a link to a paper. Basically, they are talking about the distant future when the density of matter in the Universe is extremely low and after nucleons (protons and neutrons) have decayed away (if protons do decay). In such an environment huge positronium "atoms" can be stable over a long time scale (small positronium atoms would annihilate quickly) and seem to be the only thing that is still around if this scenario is correct. --Wrongfilter (talk) 15:56, 15 November 2017 (UTC)[reply]
So arbitrarily large atoms can be created? Why 10^12 pc then? 93.136.80.194 (talk) 19:52, 15 November 2017 (UTC)[reply]

Baked Beans

Question posed by a blocked user. ←Baseball Bugs What's up, Doc? carrots 19:29, 15 November 2017 (UTC)[reply]
The following discussion has been closed. Please do not modify it.

It is well known that baked beans can cause flatulence. According to the article this is "due to the fermentation of polysaccharides (specifically oligosaccharides) by gut flora, specifically Methanobrevibacter smithii. The oligosaccharides pass through the small intestine largely unchanged; when they reach the large intestine, bacteria feast on them, producing copious amounts of flatus."

1) Of the carbohydrate content of baked beans, what percentage is actually polysaccharides? For example, this can from Heinz says 11.4g of carbohydrate per 100g. How much of that is polysaccharides?

2) When the polysaccharides are feasted on by bacteria, how much of it gets absorbed by the human body or wasted?

Thanks 91.47.17.210 (talk) 10:09, 15 November 2017 (UTC)[reply]

See "Polysaccharide from Dry Navy Beans, Phaseolus vulgaris: Its Isolation and Stimulation of Clostridium perfringens", [16], a wonderful research paper that discusses both the polysaccharide content of a few bean varieties, and also gives measurements for how much gas is produced. What a world! SemanticMantis (talk) 17:34, 15 November 2017 (UTC)[reply]

Baked beans and polysaccharides

I read something recently that made me wonder about baked beans and flatulence. It is well known that baked beans can cause flatulence. According to the article this is "due to the fermentation of polysaccharides (specifically oligosaccharides) by gut flora, specifically Methanobrevibacter smithii. The oligosaccharides pass through the small intestine largely unchanged; when they reach the large intestine, bacteria feast on them, producing copious amounts of flatus."

The questions are:

1) Of the carbohydrate content of baked beans, what percentage is actually polysaccharides? For example, this can from Heinz says 11.4g of carbohydrate per 100g. How much of that is polysaccharides?

2) When the polysaccharides are feasted on by bacteria, how much of it gets absorbed by the human body or wasted?

Thanks, SemanticMantis (talk) 19:34, 15 November 2017 (UTC)[reply]

I have found a suitable reference on the topic, but I'm curious to see what anyone else can dig up on question 2). "Polysaccharide from Dry Navy Beans, Phaseolus vulgaris: Its Isolation and Stimulation of Clostridium perfringens", [17], a wonderful research paper that discusses both the polysaccharide content of a few bean varieties, and also gives measurements for how much gas is produced. What a world! SemanticMantis (talk) 19:34, 15 November 2017 (UTC)[reply]