Wikipedia:Reference desk/Science: Difference between revisions

From Wikipedia, the free encyclopedia
: As to the underlying dynamics of the situation, that's too much to describe here. But there aren't shock waves (there can be, but they're avoided); multiple accidental ignition points ("hot spots") are avoided, as their timing is uncontrollable; and most importantly, there's significant energy transfer by optical means. If this causes a pre-ignition ahead of the designed flame front, that's a bad thing and the cause of "knock" (WP has no article on knock, as it confuses several unrelated effects). [[User:Andy Dingley|Andy Dingley]] ([[User talk:Andy Dingley|talk]]) 12:45, 13 December 2017 (UTC)
::I am aware that it is a difficult topic to handle seriously. I still wish to see the result of the simple, grad-student-level computation that I described above, in spite of its unrealistic assumptions (adiabatic walls, 0D model with two homogeneous areas, no radiative transfer...) that limit its applicability to real-world scenarios. This seems simple enough that it has already been done, but hard enough that I will screw up performing the thermodynamics myself. [[User:Tigraan|<span style="font-family:Tahoma;color:#008000;">Tigraan</span>]]<sup>[[User talk:Tigraan|<span title="Send me a silicium letter!" style="color:">Click here to contact me</span>]]</sup> 17:08, 13 December 2017 (UTC)

== Looking for youtube video that showed a cartoon train in scenarios near the speed of light? ==

I remember seeing a series of 5-10 videos that helped explain relativistic speeds using a train, showing how two observers would witness seemingly paradoxical behavior of the train (such as passing through a tunnel that was '''''shorter''''' than the train's length, while to one observer the train would appear '''''longer''''' than the tunnel). It then took this paradox to further extremes, such as having two gates that would "close" for a millisecond while the train was inside the tunnel; to one observer (but not the other) the gates would close at different times, even though they appeared simultaneous to the other, due to relativistic effects. [[Special:Contributions/67.233.34.199|67.233.34.199]] ([[User talk:67.233.34.199|talk]]) 17:22, 13 December 2017 (UTC)
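That is the ladder paradox told with a train and a tunnel. The length numbers are easy to reproduce with the Lorentz factor; here is a sketch with illustrative figures (the 120 m train, 100 m tunnel, and 0.8c speed are made up for the example, not taken from the videos):

```python
import math

c = 299_792_458.0                      # speed of light, m/s

def gamma(v):
    """Lorentz factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

train_rest_length = 120.0              # m (illustrative)
tunnel_rest_length = 100.0             # m (illustrative)
v = 0.8 * c                            # gamma = 5/3 at this speed

# In the tunnel's frame the moving train is length-contracted:
train_in_tunnel_frame = train_rest_length / gamma(v)     # 72 m: fits inside
# In the train's frame it is the tunnel that contracts:
tunnel_in_train_frame = tunnel_rest_length / gamma(v)    # 60 m: train can't fit

print(train_in_tunnel_frame, tunnel_in_train_frame)
```

Both observers are right in their own frames; the resolution, as in the videos described, is that "both gates closed at the same moment" is frame-dependent (relativity of simultaneity).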

Revision as of 17:22, 13 December 2017

Welcome to the science section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


December 7

Effect of CO2 emissions on the metabolic energy obtained from glucose

One of the basic biology factoids is that glycolysis + the Krebs cycle + oxidative phosphorylation gets "up to" 38 ATPs out of a glucose molecule.

It occurs to me that, just as the temperature of cooling water affects the efficiency of a power plant, the concentration of CO2 should affect the efficiency of respiration. Searching I found many articles on plants, but I wanted to look at the same from an animal point of view.

Back of the envelope calculation: carbon dioxide in Earth's atmosphere has gone from 280 to 407 ppm. Gibbs free energy includes a term RT ln(407/280) = 8.314 J/(mol·K) * 300 K * 0.374, which gets me 0.93 kJ per, I think, 1 mol of CO2 that crosses a membrane between these two concentrations. Since there are 6 CO2s per glucose, that gives 5.6 kJ of energy difference between the two conditions per starting glucose! If so, well, adenosine triphosphate cites a change in free energy of 3.4 kJ/mol, so that would be 1.64 fewer ATPs per starting glucose than in pre-industrial times. (Note this goes by ln, so it takes larger and larger increases in CO2 to reduce the ATP count further; a 1,000,000 ppm concentration would, by this calculation, reduce it by about ... 38 ATPs).
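For anyone wanting to check the arithmetic, here is the back-of-the-envelope estimate as a short script (using R = 8.314 J/(mol·K), T = 300 K, and the 3.4 kJ/mol ATP figure quoted above; this only reproduces the calculation, it does not validate its physical assumptions):

```python
import math

R = 8.314                        # gas constant, J/(mol·K)
T = 300.0                        # assumed temperature, K
c_old, c_new = 280e-6, 407e-6    # CO2 mole fractions, pre-industrial vs. current

# Free-energy term per mole of CO2 crossing between the two concentrations
dG_per_CO2 = R * T * math.log(c_new / c_old)     # J/mol, ~933 J
dG_per_glucose = 6 * dG_per_CO2                  # six CO2 per glucose, ~5.6 kJ

# Convert to "ATP equivalents" using the 3.4 kJ/mol figure from the post
atp_equiv = dG_per_glucose / 3400.0              # ~1.65 ATPs

print(f"{dG_per_CO2:.0f} J/mol CO2, {dG_per_glucose/1000:.2f} kJ/mol glucose, "
      f"{atp_equiv:.2f} ATP equivalents")
```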

First, did I do something stupid in the calculation? I haven't used these chemical concepts this way before.

Next, is there a way to call shenanigans on it for external reasons? For example, the CO2 concentration in the lungs will be much higher on exhalation. However, I'm thinking that this might go up somehow in proportion to the air concentration ... actually, I have no idea one way or the other.

Could a decrease in ATP production tend to turn everyone into couch potatoes, make us feel like aerobic exercise is too hard, cause overeating and so on? Just woolgathering here. Wnt (talk) 02:34, 7 December 2017 (UTC)[reply]

This appears to say that the first effects noticeable without equipment don't start till 1000 ppm. Sagittarian Milky Way (talk) 03:11, 7 December 2017 (UTC)[reply]
I think that type of value is determined by acute exposure rather than lifetime exposure. Also, note that it is possible to waste significant amounts of cellular energy (e.g. uncoupling agent) with relatively little perceived effect. Something like 2,4-Dinitrophenol could be marketed as a "dieting aid" (indicating substantial loss of energy), though it causes symptoms up to and including death at sufficient levels. If the math above is correct it describes less than a 5% reduction in energy from carbohydrates (less from fats because the H2O is not affected), which is scarcely noticeable for dieting purposes. Wnt (talk) 14:06, 7 December 2017 (UTC)[reply]
It'd be interesting to do an analysis of the rate of world-record breaking in as many probably steroid-free, more-individual (team sports have too many variables) outdoor aerobic sports as possible. I'm not sure if you could see a signal in the noise of numerous other factors, i.e. once-a-generation+ X players causing more kids to want to be pro X players; and X players with sudden big improvements on the previous best are still appearing in major sports (a man can be many inches too tall (Usain) and still pulverize 100m records, a man can shoot well while being bumped when no one else could (LeBron), and that huge Australian high school football player could possibly not be too slow to play NFL at that size). Are the more individual outdoor aerobic sports (i.e. marathoning) also still capable of having something like that happen in the near future, screwing up attempts to use their world-record progression to see if something more than the usual slowdown of record-breaking is happening? If your math is correct it's probably been thought of by now and you'd probably hear about it at least as much as ocean acidification or runaway clathrate warming, but it'd be nice to see research on someone kept at CO2 levels between 280 and 1000 ppm for a long time. I now wonder if any important sports events had enough building-related CO2 or oxygen abnormality to affect the performances lol. Or if it'd be allowed to mess with the air of sports venues to increase your team's or event's performance, or even favor the home team. Do the Denver Nuggets want to try installing airlocks and O2 injection during games? Does Tokyo want more world records in their games? Make them play in 40% O2 at 60% pressure! Sagittarian Milky Way (talk) 18:40, 7 December 2017 (UTC)[reply]
Your problems are: (i) you do not take into account the limited efficiency of the Krebs cycle. 38 ATPs contain much less energy than is released during oxidation of 1 molecule of glucose; the remaining energy turns into heat. So when you calculate the balance of Gibbs (free) energy you need to take this into account. (ii) In addition, you seem to confuse Gibbs (free) energy with energy itself. The free energy is only useful when you want to determine the direction of a reaction: will glucose oxidize, or will it form instead? On the other hand, the true energy balance does not depend much on the external CO2 concentration. The Krebs cycle consists of a fixed number of steps, and each step gives a fixed number of ATPs. This will not change as the CO2 concentration increases. The changes in the Gibbs energy that you refer to are essentially changes in entropy multiplied by temperature. Ruslik_Zero 20:47, 7 December 2017 (UTC)[reply]
@Ruslik0: The Krebs cycle involves a fixed amount of reduction and phosphorylation producing 10 NADH, 2 FADH2, and 4 ATP, it is true. But oxidative phosphorylation is less certain and is said to produce "up to" 34 ATP (3 per NADH and 2 per FADH2). There are costs in moving precursors and products (e.g. NADH, ATP) to keep them on hand to make the reaction go forward. Additionally, alternate mechanisms exist that generally have a lower yield. The result is that estimates tend to say things like "30 to 38 ATP", though it looks like eukaryotes can't hit the top figure.
Whenever there is a choice of regulated mechanisms, this means that the final energy figure has a chance to talk back to the reaction. For example, if ATP is produced abundantly in the mitochondrion I think it might be valuable to let it diffuse out through a channel, though I don't know that. But if anemic amounts are produced, a proton from the gradient might be spent wringing out ATPs from a less rich storehouse inside the organelle. Similarly, one of the alternate biochemical mechanisms might serve to add some extra free energy to the reaction to keep the process going. My memory of metabolism is limited, but it is surely not a rigid reaction that can be allowed to sputter to a halt based on a small change in free energy. Note that although a typical figure lists a metabolic efficiency under 40%, it is more like 50% in cells due to free energy considerations. [1] In other words, if ATP and ADP had exactly the same amount of energy in their chemical bonds, the cell has 10 times more ATP, so any reaction ATP + X -> ADP + Y would still be pushed forward, which means that that 10% of the energy was never actually lost to begin with. Now to be sure, yes, even if it is 50% I can say a 5% difference in ATP production is a 2.5% difference in efficiency, but it seems just as relevant no matter what I compare it to. Wnt (talk) 22:36, 7 December 2017 (UTC)[reply]
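The ATP/ADP-ratio point above can be made concrete with the standard relation ΔG = ΔG°′ + RT ln([ADP][Pi]/[ATP]): at physiological mass-action ratios the actual free energy of ATP hydrolysis is substantially more negative than the standard −30.5 kJ/mol, which is where the "more like 50%" efficiency figure comes from. A sketch (the concentrations below are illustrative assumptions for the example, not measured values):

```python
import math

R, T = 8.314, 310.0        # gas constant J/(mol·K), body temperature K
dG0 = -30500.0             # standard free energy of ATP hydrolysis, J/mol

def dG_hydrolysis(atp, adp, pi):
    """Actual free energy of ATP -> ADP + Pi at the given molar concentrations."""
    return dG0 + R * T * math.log((adp * pi) / atp)

# Illustrative cytosolic concentrations: ATP kept ~10x above ADP
dG = dG_hydrolysis(atp=5e-3, adp=0.5e-3, pi=5e-3)
print(f"{dG/1000:.1f} kJ/mol")   # roughly -50 kJ/mol, well below the standard value
```

The extra ~20 kJ/mol relative to the standard value is exactly the free energy stored in maintaining a high ATP/ADP ratio, as argued in the comment above.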
The Krebs cycle efficiency can vary, of course, but it does not depend on the ambient CO2 partial pressure unless this pressure is extremely high. This is my point. Ruslik_Zero 20:26, 8 December 2017 (UTC)[reply]
If the free energy change was sufficient to extract nearly two extra ATP from the cycle when CO2 concentrations are low ... why didn't it evolve that way? The difference in free energy has to be reflected somehow, whether it is in the number of ATP produced or the maintenance of a high ATP/ADP ratio (which stores a large fraction of the output free energy as explained above) or some other form of chemical energy. Wnt (talk) 03:43, 11 December 2017 (UTC)[reply]

Natural satellites of planets

I glanced through the Wikipedia material on the satellites (or moons) of planets. It gives very specific numbers, all of them in the order of one to a few dozens, for the known satellites of Jupiter, Saturn, Uranus and Neptune. But it also says that there's no agreed-upon threshold for size that objects have to pass so they can be considered satellites. And there doesn't seem to be any specification that a satellite mustn't be located in the planet's ring system. So why don't we say instead that these planets have billions and billions of satellites, all but a handful of which are tiny ones found in their ring systems? --Qnowledge (talk) 07:33, 7 December 2017 (UTC)[reply]

You are right to include the qualifier "natural". If you look at space debris you will see the term "satellite" is usually reserved for larger objects. 92.27.49.50 (talk) 11:06, 7 December 2017 (UTC)[reply]
Also, see Natural satellite#Definition of a moon. Dolphin (t) 12:54, 7 December 2017 (UTC)[reply]
I'm not sure, but I think some of that has to do with the Roche limit. A ring system like Saturn's contains particles that AFAIK would continue to break up under tidal forces were they not held together by chemical forces, i.e. electromagnetism rather than gravity. I think such assemblages are reasonably disqualified from being moons. (I'm not sure at the moment what distinguishes ring particles from shepherd moons, should look up later if someone doesn't explain) Wnt (talk) 14:11, 7 December 2017 (UTC) (withdrawn - see below)[reply]
Probably being big enough to be seen and to have a significant shepherding effect. Ring particles in this solar system are only the size of a house or smaller. Since shepherd moons are big enough to be seen by space probes, the Roche limit isn't absolute, and natural satellites big enough to be seen by them (and thus named and numbered) can have enough structural integrity to avoid being broken up by tidal forces. Sagittarian Milky Way (talk) 17:44, 7 December 2017 (UTC)[reply]
A ring system can be regarded as one, or sometimes even multiple, satellite entities: it may originate from a satellite that broke up, and in perspective it may eventually bake together into a new satellite, just as all the other planets and satellites evolved. It's a placeholder for a potential satellite, if you like, and it also has a mass and motion fitting Kepler's laws of planetary motion. It's an ongoing exploration and discussion. You can read about it in Kuiper belt and Oort cloud, which can be regarded as satellites of our Sun, or as orbits with multiple satellites, maybe even planets, without one body dominating that region. There you can also find the distinction between what is generally counted as a planet, satellite or meteorite of a star or planetary system and what is not: the origin of its matter and its orbit. --Kharon (talk) 21:30, 7 December 2017 (UTC)[reply]
It appears my confidence in the Roche limit was misplaced. According to [2], "If you broke up all the satellites within the Roche limit of Neptune, you'd get a ring system that would not look too terribly different from Saturn's." So this clearly is not a criterion to say they aren't moons. It is a good reason to watch your step on Larissa (moon)! (N.B. [3] says Larissa is inside the Roche limit while our article says it will break up someday when it passes it. Since Roche limit depends on density I'm not sure good sources don't disagree, but can someone confirm an error?) Wnt (talk) 22:00, 7 December 2017 (UTC)[reply]
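On the Larissa question: disagreement between sources is unsurprising, because the Roche limit depends both on the assumed satellite density and on whether the rigid-body or fluid-body formula is used. A sketch using the fluid-body formula d = 2.44 R (ρ_planet/ρ_moon)^(1/3), with two illustrative guesses for Larissa's unknown density (the densities are assumptions for the example; radius, bulk density, and orbit are standard published round numbers):

```python
R_nep = 24622e3          # Neptune's mean radius, m
rho_nep = 1638.0         # Neptune's bulk density, kg/m^3
a_larissa = 73548e3      # Larissa's orbital semi-major axis, m

def fluid_roche(rho_moon):
    """Fluid-body Roche limit d = 2.44 * R * (rho_planet / rho_moon)^(1/3), in m."""
    return 2.44 * R_nep * (rho_nep / rho_moon) ** (1 / 3)

# Whether Larissa sits inside the limit flips with its assumed density:
for rho in (1200.0, 800.0):              # kg/m^3, illustrative guesses
    d = fluid_roche(rho)
    status = "inside" if a_larissa < d else "outside"
    print(f"rho_moon={rho:.0f}: limit {d/1e3:.0f} km -> Larissa {status}")
```

With the denser guess the limit falls near 67,000 km (Larissa outside); with the lighter one it rises past 76,000 km (Larissa inside), so both statements can be found in good sources without either being an outright error.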
Astronomy is a very bad science that is completely unable or unwilling to keep theory and facts separate. Just some weeks ago they found a supermassive black hole so old that its assumed formation process cannot be squared with the age of our universe according to the Big Bang theory. So maybe the Big Bang theory is bollocks, but no one dares to say it out loud!! It's a very mainstream-centered science, like economics, where you are either neoliberal or outcast (to put it slightly exaggerated). So don't expect too much! --Kharon (talk) 22:22, 7 December 2017 (UTC)[reply]
Well, if it's a choice between bad science in astronomy and bad science in our current Wikipedia draft I know which I find a priori most likely. ;) Wnt (talk) 22:37, 7 December 2017 (UTC)[reply]
The Big Bang is not bollocks; it's how things got from ~380,000 years after the Big Bang to galaxies that's not well understood. Sagittarian Milky Way (talk) 22:49, 7 December 2017 (UTC)[reply]
That's a new record in dogmatism. Now even whole (known) galaxies must be wrong if they don't fit the mainstream. Btw, the mainstream's core argument is not the background radiation but the redshift. --Kharon (talk) 23:16, 7 December 2017 (UTC)[reply]
I suggest you work on your reading comprehension and check again what Sag has written. He comments on our incomplete state of knowledge, not any galaxies that "must be wrong". And while the redshift was one piece of evidence for an expanding universe, there were competing models - see Steady State theory. The CMB, on the other hand, is very well explained by the big bang theory (it's the red-shifted image of the surface of last scattering), but does not fit into e.g. Steady State. --Stephan Schulz (talk) 00:52, 8 December 2017 (UTC)[reply]
This is way off topic, but I should note [4] describes the black hole and links the original paper (I didn't check Sci-Hub though). It's 10% younger than the previous record holder. There's something there about "episodic hyper-Eddington accretion". I have no idea, but my gut feeling is it seems odd to say that a region of twisted space 800 million times the mass of the sun forming in 800 million years would be perfectly logical, but 690 million years buggers all belief. Wnt (talk) 00:11, 8 December 2017 (UTC)[reply]

Modified Atmosphere Article -Clarification Questions & Suggestions

Hi, I am no expert in this subject, but thought I would give some feedback in the hope of making the article easier to read and understand. I was unable, by clicking the Talk button, to access anything other than my account page User talk:Mpe123

"severity of preparation" (give an example to make the meaning clearer): does it mean something like processing, such as washing salad or grating carrots?

A paragraph or comparison chart outlining the difference between EMAP and MAP could be helpful; to me they sounded pretty similar (respiring product, permeability, "an equilibrium modified atmosphere will be established in the package and the shelf-life of the product will increase").

When gas flushing and compensated vacuum are first mentioned (in the paragraph above the scientific terms), it would be good to have a note mentioning that more details are given further on in the article.

Isn't a potato a vegetable? I checked 2 dictionary definitions and they say that potatoes are vegetables: "An example of a gas mixture used for non-vegetable packaged food (such as crisps)"

"breathable" films called EMAP are mentioned at the beginning of the article, but later(packaging films section) are referred to as "MA/MH films" are they the same?

Thanks in advance! — Preceding unsigned comment added by Mpe123 (talkcontribs) 17:58, 7 December 2017 (UTC)[reply]

The article is Modified_atmosphere and, as someone has already explained, Mpe123 has hit the wrong Talk button. Just one thing: potatoes are vegetables (living things which emit and produce gases); crisps are definitely not potatoes and even less vegetables. 194.174.76.21 (talk) 18:24, 11 December 2017 (UTC) Marco Pagliero Berlin[reply]

Why isn't bitcoin mining causing the price of bitcoin to level off?

Bitcoin mining has now become hugely profitable at a bitcoin price of nearly $20,000. You could already make a modest profit at a bitcoin price of around $3000. So, why is the price going up and why isn't the bitcoin boom being accompanied by a boom in the sales of fast computer processors? Count Iblis (talk) 21:08, 7 December 2017 (UTC)[reply]

Your central assumption is wrong because the reward for mining is adaptive. There is no fixed rate where you would always get, say, 1 Bitcoin for solving/verifying a block. With the current high price the reward is most likely very, very low now, because there is only a limited number of blocks to be verified and many computers and mining pools try to solve each of them. Also, adding more computers and pools only means more competition: more supply but not more demand, which in the end essentially bites its own tail according to the rules of supply and demand. --Kharon (talk) 21:58, 7 December 2017 (UTC)[reply]
Are you replying to me or to the Count? The reward for mining does change, but on a long-term predefined schedule - the reward is cut in half every 210,000 "mined" blocks (roughly once every four years). What is "adaptive" is the difficulty of creating a correct block (you need to find a nonce that will produce a hash with a given number of leading zeros, and that required number is adjusted to keep the rate of block creation roughly constant). --Stephan Schulz (talk) 22:11, 7 December 2017 (UTC)[reply]
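The "find a nonce" mechanism described above can be sketched in a few lines. This is a toy illustration only: real Bitcoin double-SHA-256-hashes an 80-byte block header and compares the result against a numeric target, rather than checking a hex-zero prefix, but the brute-force search is the same idea:

```python
import hashlib
from itertools import count

def mine(block_data: bytes, zeros: int) -> int:
    """Toy proof-of-work: find a nonce such that SHA-256(data + nonce)
    starts with `zeros` hexadecimal zeros. Each extra zero multiplies the
    expected work by ~16, which is how difficulty is tuned."""
    prefix = "0" * zeros
    for nonce in count():
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce

nonce = mine(b"example block", 4)    # a few tens of thousands of hashes on average
print(nonce)
```

Verifying a found nonce takes a single hash, while finding one takes many; that asymmetry is what makes the scheme usable as proof of work.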
No, I just saw an edit conflict and I didn't want to rework it, partly for fear of running into the next edit conflict and thus getting trapped in an adapting loop. --Kharon (talk) 22:31, 7 December 2017 (UTC)[reply]
  • One must be aware that the mining of Bitcoin gets you two kinds of money: the "Bitcoin mining" part, where you create new Bitcoins that you get to keep, and the "transaction fee" part, where people pay miners to validate their transactions in priority. I think I had read somewhere that the latter is what really gets you the money these days, but the only semi-serious source I could find is [5], whose numbers do not allow one to compare the recent mining/commission parts in a meaningful way. TigraanClick here to contact me 10:19, 8 December 2017 (UTC)[reply]
    While true, transaction fees don't increase the Bitcoin supply - they only move Bitcoin from one market participant to another. It may cause people to keep mining even if the built-in rewards are no longer cost-effective, but it has no direct effect on the supply/demand situation. --Stephan Schulz (talk) 11:27, 8 December 2017 (UTC)[reply]
    Well, it does impact the supply and the demand of Bitcoin mining even if the impact on the Bitcoin possession market is limited, and the OP is explicitly about mining. (Maybe my indentation choice was questionable, since I was not really replying to your post after all.) TigraanClick here to contact me 12:22, 8 December 2017 (UTC)[reply]
Alternatively, many economists make the case that the price is overwhelmingly speculative, and is therefore detached from supply-and-demand economics. The actual availability of the "resource" - constrained by mathematical details, or otherwise - is no longer a contributing factor to the market price at which people are buying and selling.
Furthermore, fully 100% of the "bitcoin-to-dollar" conversion price is a snapshot of a secondary market - not "some" or "most," but fully all of that price is sustained on such a market. And it is an entirely unregulated secondary market! So this means that price arbitrage can occur with catastrophically enormous price spreads - ratios that would be orders of magnitude larger than in any other conventional marketplace.
In my opinion, I think I have composed my explanation using certain technical terms that are more ... shall we say, precise than the word "scam," but to the informed investor, these descriptions ought to carry equal weight.
For even more verbosity on the topic, here's Susan Athey, an economist specializing in internet commerce: Bitcoin Pricing, Adoption, and Usage: Theory and Evidence (2016). She's written several well-researched commentaries on bitcoin over the years. She couches her statements in even more jargon: given "the presence of frictions arising from exchange rate uncertainty," ... "the idea of bubbles seems salient for Bitcoin..."
Again, the language is florid but, in my reading, the implications are equally lurid. ...Scam.
To put it more bluntly: if you want to invest in bitcoin, just try to put a non-trivial amount of money (let's say, U.S. Dollars) into an exchange on some proverbial Monday; wait for the price to vary by some non-trivial amount; and try to get your money back out on the proverbial Friday.
See, in a regulated market, they have to give you your money. In fact, as of right now, in the United States, starting in 2017, they have to give it to you within two business days: this is called T+2 and it dramatically changed the financial marketplace this year - even though it got almost no media coverage outside of specialist investment and economics publications! But Bitcoin exchanges follow no such regulatory oversight. They can arbitrate your withdrawal, at any price, on any schedule. You won't be able to withdraw your proverbial investment return of 5%, or 50%, or 50000%, because the exchange maker sets the schedule for paying you.
The exchanges that convert bitcoin to hard currency are Ponzi schemes. What you will find is that you might be able to pull a few hundred dollars of "earnings" out of them, at a massively inflated price (so that they can sucker in the next guy with unrealistic inflated growth statistics). But macroeconomics does not work via "a few hundred dollars." Even a 50x growth on an investment of a few hundred dollars still won't buy you a private jet! As soon as you attempt to invest any nontrivial amount of money, and try to reap your well-invested earnings, you will find your arbiter mysteriously goes bust in a bank run. This has already happened multiple times, but new suckers keep buying!
Nimur (talk) 22:36, 7 December 2017 (UTC)[reply]
The "boom"-part is nothing specific to bitcoin or other cryptocurrencies. There was a similar development in the Shadow banking system and we all read about how some companies that serve the tax havens lost their pants lately and what became visible. Behind all of this is a world financial system which contains more wealth than the whole world industrial economy can craft in 100 years.[6] One obvious side effect of this is a flood of "investors" desperately trying to put their wealth somewhere "save". Even a 10-year German government bond with negative interest rates was sold out in hours. There is going to be a huge financial "bloodbath" somewhere again soon and bitcoin looks like build to survive it almost as save as German government bonds. --Kharon (talk) 03:15, 8 December 2017 (UTC)[reply]
A few exchanges, including Coinbase, one of the largest, are regulated. Of course, I have no idea what kinds of standards are enforced by the regulators or how strictly they are enforced. Also, although it's true the vast majority of Bitcoin holders hold Bitcoins through a broker, Bitcoin intentionally doesn't require this. You can run Bitcoin wallet software on your computer and transact directly with others. Of course, then you are taking on the settlement risk yourself. --47.157.122.192 (talk) 03:19, 8 December 2017 (UTC)[reply]
Purely addressing the computing aspects of this: it's impossible to make a profit anymore mining Bitcoin on general-purpose CPUs, assuming those are what you mean by "fast computer processors". All "serious" Bitcoin mining today is done with custom hardware based on ASICs designed for the Bitcoin algorithm. Have you checked the price of those? --47.157.122.192 (talk) 03:19, 8 December 2017 (UTC)[reply]
A sharp rise in the price of tin has resulted in the re-opening of a mine in Cornwall. However, one mine which isn't going to be opened is this one: [7]. 82.13.208.70 (talk) 11:38, 8 December 2017 (UTC)[reply]

Some of our regular readers know that I've spent a portion of my infamous career indulging in the art, science, and business of mining and prospecting, and oh boy, do miners have a reputation for selling lodes! If you've never been involved in a dig, it's time to familiarize yourself with the long con, also known as the "solid business plan," in all its glory and wonder. There is no better resource to refer you to than the nonfictional account of Roughing It, in which comic masterpiece we acquaint ourselves with the original goldminers: metal prospectors in the Sierra Nevada. "What could a man say who had an opportunity to simply stretch forth his hand and take possession of a fortune without risk of any kind and without wronging any one or attaching the least taint of dishonor to his name?" Or, translated into 21st-Century-ese: "I've got a guaranteed stash of bitcoin buried in a landfill, and I just need some investors to help me with the up-front capital costs to start digging. If possible, I'd like to expense my jet travel, and a mule, too."
The deliberate choice of "mining" terminology to describe the software processing of these digital currencies greatly contributes to the amusement-factor - at least, for anyone who understands the in-joke!
Nimur (talk) 05:36, 11 December 2017 (UTC)[reply]

I'm calling a major [citation needed] on your claim that the surge in bitcoin pricing is primarily because people are looking for a safe haven. Sure, it may have originally been a factor, but while I guess there may be a few who agree with you, most experts on economics question whether it's a significant factor in the current insane rises, generally suggesting, as Nimur said, that it shows all the signs of a bubble. Let's not forget other safe havens have not experienced anything even close.

At a basic level, it's not that hard to understand either. If something has risen 10x in a few months, it's easy to see the attraction. If you'd only put $10k in it not long ago, you'd now have $100k. You've already missed part of the boat; you don't want to miss the whole thing. And even those predicting doom are generally reluctant to say the value isn't going to go 10x more before there's a correction.

While Nimur had some points about the state of the market, I haven't seen strong evidence that you really can't get in or out, as long as things stay as they are and you're not talking about too much money. (E.g. I'm not saying the Winklevoss twins could really easily get out their $1 billion or whatever it is now.) Sure, you may lose a silly amount compared to what you feel you should get, but if you spent $10,000 and are now getting $80k, that's still a great deal. You're fine as long as there are enough suckers, er, other people, who want to get in. When the shit hits the fan is if the value does collapse. You could easily find your $10k (or whatever) nearly gone.

But even then, this doesn't mean that it's always a bad idea. The evidence suggests plenty of people who do think it's a bubble, including fairly experienced investors, are getting in, treating it similarly to other high-risk, high-reward things like VC. If you have enough money you can afford to put some in with the hope you'll get lucky and get out before it collapses. You have several different investments of the sort and you just need one to pay off for it to have been worthwhile. The issue is those who don't really understand the risks and are putting in money they can't actually afford to lose (or that would otherwise be better placed in less risky investments).

Nil Einne (talk) 12:06, 9 December 2017 (UTC)[reply]

The answer to the original question: bitcoin "mining" is actually on an approximately predetermined schedule of one block every 10 minutes (on average), and whoever solves the block first gets the mining reward (12.5 BTC). Throwing more computing at the problem mines faster only very temporarily, because every 2016 blocks the mining difficulty is adjusted so that blocks again take approximately 10 minutes. So, mining is an evolutionary arms race.
But this won't supply more bitcoin to drive the price down, because it's a somewhat-regular release of bitcoins - David Gerard (talk) 20:44, 8 December 2017 (UTC)[reply]
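The retargeting rule David Gerard describes can be sketched numerically. This is a simplified illustrative model, not the actual Bitcoin consensus code; the real protocol works on integer targets, though it does clamp each adjustment to a factor of 4 as shown here:

```python
# Simplified sketch of Bitcoin's difficulty retargeting (not consensus code).
# Every 2016 blocks, difficulty is rescaled so blocks again take ~10 minutes
# on average, so extra hash power only speeds up mining temporarily.

TARGET_SECONDS = 2016 * 600  # 2016 blocks at 10 minutes each

def retarget(difficulty, actual_seconds):
    """Rescale difficulty by how fast the last window was mined."""
    ratio = TARGET_SECONDS / actual_seconds
    ratio = max(0.25, min(4.0, ratio))  # the real protocol clamps to [1/4, 4]
    return difficulty * ratio

# Miners double their total hash rate: the 2016-block window takes half as
# long, so difficulty doubles, restoring the 10-minute average.
print(retarget(1.0, TARGET_SECONDS // 2))  # 2.0
```

So more hardware buys each miner a larger *share* of the fixed block schedule, not more blocks overall, which is why extra mining doesn't increase supply and drive the price down.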
So is it more likely to mine a bitcoin at certain times like northern hemisphere summer (since the cost of computer cooling's higher there) and the 4th of July (since that's when the most populous rich country has its only major summer holiday)? (are there any miners that *don't* run 24/7/365? since they're pro now and have big hardware investments to recoup) Sagittarian Milky Way (talk) 21:12, 8 December 2017 (UTC)[reply]
Cooling does not add much cost. For example, common mining/3D graphics cards draw 250 watts, while only 5-10 watts are needed for their cooling fans. The main factor is the price of electrical power, which is why the biggest farms work in cooperation with a local power plant and why they are located in countries like China and Iceland, where electrical power is very cheap. That is also why most amateurs don't stand a chance in the long run unless they can produce their own power very cheaply. --Kharon (talk) 05:47, 9 December 2017 (UTC)[reply]
While you're right that the power usage of the components tends to be significantly higher than the cooling costs, cooling definitely can be a factor. That's why you get things like [8] [9] [10] [11] [12] (the last one deals in particular with bitcoin). Just because you can get away with leaving your desktop computer (which is probably idle most of the time anyway) in your house with just the HSF doesn't mean it works at a large scale. Of course, if your house uses AC you are probably paying for it anyway, although it's likely almost lost in the noise if you just have one or two computers which are idle most of the time. And as others have noted, no one uses GPUs for serious bitcoin mining now anyway. Nil Einne (talk) 11:47, 9 December 2017 (UTC)[reply]
The price of electrical power for heavy industries is below 20 USD/MWh in Iceland.[13] [14]. 75% renewable energy there - mostly geothermic. --Kharon (talk) 15:24, 9 December 2017 (UTC)[reply]
What does that have to do with anything? The only source I linked to which talks about Iceland mentions that the price of power, the use of renewable energy and the cool climate are all factors. All the rest do not deal specifically with Iceland, including the one which talks specifically about Bitcoin. It's true that a number of them are dual-use cases, i.e. by reusing the heat you could reduce the cost by getting someone to pay for the heating, although it's clearly not happening in some cases, e.g. in the Norwegian one it's mentioned there's no charge to the people with the devices other than an initial setup cost. Maybe a per-use cost is/was the long-term plan, but as some of the other sources demonstrate, the cost of cooling is often a factor in large data centres, so simply getting rid of the heat without having to pay can be useful. You haven't shown any sources saying that the cost of cooling isn't a factor, or that it only adds 5-10W for cooling fans in large-scale use, as you implied. As I said, you seem to have made the mistaken assumption that what works with a computer at your home (which is probably idle most of the time anyway) works when you have a lot of devices in a small area under very high constant use, but it doesn't. And as I also said, even in the home case it's not necessarily true that the only cost comes from running the fans; it probably isn't if you have an AC. Nil Einne (talk) 07:11, 10 December 2017 (UTC)[reply]
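To put the power-price point in perspective, here is a back-of-the-envelope comparison. Only the roughly $20/MWh Icelandic industrial rate comes from the links above; the farm size and the retail rate are round numbers picked purely for illustration:

```python
# Rough electricity-cost comparison for a mining farm, illustrating why
# cheap power dominates the economics. The ~$20/MWh industrial rate for
# Iceland is from the sources above; the 10 MW farm size and $120/MWh
# retail rate are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365

def annual_power_cost(load_mw, price_usd_per_mwh):
    """Yearly electricity bill for a constant load at a flat price."""
    return load_mw * HOURS_PER_YEAR * price_usd_per_mwh

farm_mw = 10  # hypothetical 10 MW farm running 24/7

iceland = annual_power_cost(farm_mw, 20)     # ~$1.75M/year
retail = annual_power_cost(farm_mw, 120)     # ~$10.5M/year

# The same hardware costs 6x as much to run on typical retail power.
print(retail / iceland)  # 6.0
```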

December 8

Please identify a weapon

What is this weapon mounted on a Ugandan Army Casspir APC? I suspect it is a type of light mortar or grenade launcher. Roger (Dodger67) (talk) 09:00, 8 December 2017 (UTC)[reply]

My best guess would be the QLZ-87 grenade launcher. Mũeller (talk) 09:20, 8 December 2017 (UTC)[reply]
Perfect match, thanks Mũeller, and I see Uganda is known to be a user. Roger (Dodger67) (talk) 12:05, 8 December 2017 (UTC)[reply]

From Cheese Curd:

   Most varieties, as in Ontario, Quebec, Nova Scotia, Vermont, or New York State, are naturally uncolored. The American variety is usually yellow or orange, like most American Cheddar cheese, but it does not require the artificial coloring.

So what's making American cheese curd yellow or orange, if no artificial coloring is added? Is it because of some difference between Canadian milk and American milk? Or some other factor? Mũeller (talk) 09:17, 8 December 2017 (UTC)[reply]

Annatto.--Jayron32 09:21, 8 December 2017 (UTC)[reply]
Missing the point that Annatto is a plant pigment, not synthetic, so a rewrite is required. Roger (Dodger67) (talk) 12:07, 8 December 2017 (UTC)[reply]

Square signal as only clock signal?

We have so many signals like triangular, ramp, unit step, impulse, square, etc. Why do we use only square signal for the clock signal in digital electronics? Sunnynitb (talk) 13:59, 8 December 2017 (UTC)[reply]

For the same reason that the time "pips" on the radio are short and clear rather than fading in from no sound over a 30 second period. There is a definite sharp step that can be used to trigger events. -- Q Chris (talk) 14:28, 8 December 2017 (UTC)[reply]
Thank you for your reply, but I didn't get it. Please, if possible, explain it a little bit or provide me some link regarding this. Sunnynitb (talk) 15:27, 8 December 2017 (UTC)[reply]
The purpose of a clock signal is to ensure that several events occur at exactly the same time. A number of different electronic components receive the clock signal, and each of them performs an action at a certain point in time based on the clock signal. Suppose the clock signal were a triangular wave, and the components were supposed to perform their action at the peak of the clock wave. It would be difficult for the component to tell exactly when the clock signal was at its peak, because the voltage a short time before the peak or a short time after the peak isn't much different than it is exactly at the peak. So one component might perform its action a significant amount of time before or after another one. A square wave has the desirable property that the signal changes very quickly at the point where the wave rises and at the point where it falls. So if all the components are watching for when the clock signal changes, they will all detect that event at nearly the same point in time. CodeTalker (talk) 17:44, 8 December 2017 (UTC)[reply]
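CodeTalker's point can be put in numbers: the timing error in detecting a clock event is roughly the voltage noise divided by the signal's slew rate at the detection point, so a fast edge gives far less jitter than the gentle slope near a triangle wave's peak. The figures below are illustrative, not taken from any datasheet:

```python
# Toy model of clock-edge timing jitter: voltage noise on the line shifts
# the moment a threshold crossing is detected by noise / slew_rate.
# All numbers are illustrative assumptions, not datasheet values.

def timing_jitter(noise_volts, slew_rate_v_per_s):
    """Approximate shift of a threshold-crossing time caused by noise."""
    return noise_volts / slew_rate_v_per_s

noise = 0.05  # assume 50 mV of noise on the clock line

# Square-wave edge: 3.3 V swing in ~1 ns -> slew rate ~3.3e9 V/s.
square = timing_jitter(noise, 3.3e9)

# Triangle wave at 100 MHz: 3.3 V over a 5 ns half-period -> ~6.6e8 V/s,
# and the slope approaches zero right at the peak, so it's worse still.
triangle = timing_jitter(noise, 6.6e8)

print(square < triangle)  # True: the sharp edge localizes the event better
```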
And square waves enable this at double data rate, unlike other waves with only one sharp edge per cycle, such as sawtooth waves. This has been used, for instance, to make RAM since c. 2000 twice as fast as it would've been otherwise. Sagittarian Milky Way (talk) 18:05, 8 December 2017 (UTC)[reply]
adding to the answers above, many components are allergic to slowly-changing signals. they can latch up in some indeterminate state (metastability) or begin to oscillate. read any datasheet for a digital IC, it will specify maximum rise and fall times. 78.53.108.2 (talk) 23:35, 8 December 2017 (UTC)[reply]
Digital logic doesn't like inputs that are in an intermediate state - an intermediate voltage tends to turn on two transistors at once, one pulling the next stage high and the other pulling the same stage low. This increases power usage and can overheat the device. In addition, most logic circuits use the rising (or falling) edge of the clock signal, rather than the actual high (or low) period, to trigger the event. LongHairedFop (talk) 11:49, 9 December 2017 (UTC)[reply]
The square signal is a theoretical ideal; the physical reality differs. The square signal rises and falls as fast as possible so as to synchronize as well as possible. An impulse is a non-periodic square. An op amp usually rises more slowly; its slew rate is specified in volts per unit time. Used as a comparator at higher frequency, it outputs a trapezoidal voltage; increasing the frequency further, it outputs a triangle. Analog signals like the sawtooth wave were used to control the beam in cathode ray tubes. --Hans Haase (有问题吗) 12:13, 10 December 2017 (UTC)[reply]
  • Digital electronics is (pretty much by its de facto definition) based on signal voltage levels. So non-square signals are a bit of a problem for it.
"Digital" doesn't mean perfect though. When I started out as a wee sprog with BT, one of the first things I did was a very intensive month-long electronics course: two weeks analogue, two weeks digital. And the first thing we learned on the digital part was how analogue "digital" really was. We plotted transfer functions and calculated noise immunity for the various logic families (as far back as ECL). Then we looked at the time-based issues, such as rise times and signal ringing.
There are digital protocols that aren't based on voltage levels - current loops, Manchester coding and others, but these are generally specialised and used for either signal transmission or storage, rather than logic. Andy Dingley (talk) 23:10, 12 December 2017 (UTC)[reply]

current mass ULAS J1342+0928

The mass of the black hole is given as 800 million solar masses. Since the light that this object is seen by is from 13 billion years ago, how large could we expect its mass to be now? I suppose in this time it has had a chance to merge/collide with galaxies.144.35.114.190 (talk) 14:57, 8 December 2017 (UTC)[reply]

Supermassive black holes (the technical term) can have masses of tens of billions of solar masses (at least several percent of the galaxy they're in). Sagittarian Milky Way (talk) 15:20, 8 December 2017 (UTC)[reply]
Though reading that article carefully, it does also say that the theoretical maximum is around 50 billion solar masses, as the rate of growth slows above 10 billion. Of course, there is also the possibility that it no longer even exists!
How could a black hole disappear? (besides Hawking radiation, which would take way more than the current age of the universe for an 800 million solar mass black hole) Sagittarian Milky Way (talk) 23:29, 8 December 2017 (UTC)[reply]
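The "way more than the current age of the universe" claim is easy to check with the standard Hawking evaporation formula, t = 5120 π G² M³ / (ħ c⁴), for an isolated, non-accreting hole (a back-of-the-envelope sketch; constants are rounded):

```python
import math

# Hawking evaporation time t = 5120 * pi * G**2 * M**3 / (hbar * c**4)
# for an isolated black hole (ignoring accretion and the CMB, which in
# reality currently feeds a hole this size faster than it evaporates).

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34           # reduced Planck constant, J s
C = 2.998e8                # speed of light, m/s
M_SUN = 1.989e30           # solar mass, kg
YEAR = 3.156e7             # seconds per year
AGE_OF_UNIVERSE = 1.38e10  # years

def evaporation_years(mass_kg):
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / YEAR

# An 800-million-solar-mass hole outlives the current age of the universe
# by dozens of orders of magnitude.
t = evaporation_years(8e8 * M_SUN)
print(t > 1e90, t / AGE_OF_UNIVERSE > 1e80)
```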
Oddly, it does not matter how big a black hole is - the gravitation at the horizon is the same - the escape velocity equals the speed of light. So in itself a tiny black hole has the same potential to grow as the biggest black hole we will ever find. It depends on how much mass is within its reach and how much time you allow for the growth. 13 billion years seems a lot, but to put the Galactic year into perspective, our home galaxy has made just 58 turns in 13 billion years. Also, galaxies are usually separated by millions of lightyears of empty space. Not that much traffic locally, unless you are really close to something. If that giant black hole ate its own galaxy, it might need another 500 billion years for another galaxy to cross its path.
Additionally odd is that, according to its mass, it seems to have already eaten a few hundred million suns, i.e. everything around it, if you consider that only very, very, very few stars are as massive as or more massive than our 640-lightyear-distant neighbor Betelgeuse, at 11.6 times the mass of our sun, or VY Canis Majoris, estimated at 17±8 times (see List of most massive stars). That is why there seems to be no possible explanation of how a black hole could accumulate so much mass just a few hundred million years after the so-called big bang. --Kharon (talk) 04:52, 9 December 2017 (UTC)[reply]
The speed-of-light escape velocity defines the black hole. If it were less, light could escape and it wouldn't be a black hole; it'd just be matter, not some crazy thing that bends reality to the degree that it can only have spin, charge and mass. That's like saying it's strange how all the boats weigh as much as the water they displace. Well, duh. Sagittarian Milky Way (talk) 05:29, 9 December 2017 (UTC)[reply]
There is so much wrong in this comparison that i feel the urge to look for the biggest book about physics i can find, to slap you with that until i fall asleep from exhaustion. --Kharon (talk) 08:28, 9 December 2017 (UTC)[reply]

Bytownite - industrial uses?

Does Bytownite have any industrial uses? Thank you, DuncanHill (talk) 15:11, 8 December 2017 (UTC)[reply]

According to this, No. --Jayron32 15:33, 8 December 2017 (UTC)[reply]
In general plagioclase feldspars are very common and have some routine industrial uses like ground up in ceramics or used for gravel. [15] But it isn't very common; also, the alkali feldspars are apparently used more than plagioclase. [16] Apparently feldspar is used in ceramics at 20-25%. [17] Amusingly, someone is tracking this conversation elsewhere [18] and thinks the rarity works against it. The one thing I know it's being used for "industrially" is that some people are selling samples supposedly for healing chakras and such. Not sure how much of an industry that is. ;) Wnt (talk) 17:33, 8 December 2017 (UTC)[reply]
The "Duncan Hill" who asked the question on MinDat is the "DuncanHill" who started this thread. Neither of me is the IP who originally asked on the article talk page. DuncanHill (talk) 18:03, 8 December 2017 (UTC)[reply]

Could x moles of acid or base stoichiometrically react with more liters or molarity of reactant if it's added more quickly?

(at least up to a point) If there's never much more acid/base concentration in the reactant container than needed to cause a reaction (because it's added as slowly as it's used up), then might it stop after consuming less reactant than if the acid/base is added all at once and the initial concentration is much higher? Or does it all even out by the end so it doesn't matter much, as long as "all at once" is still reasonably civilized (no exploding, decomposing, boiling, igniting, large temperature rises etc.) and "slower" isn't so slow that something like evaporation or oxidation changes the nature of the reaction much? Sagittarian Milky Way (talk) 20:19, 8 December 2017 (UTC)[reply]

Given my PO's advice, I am not going to hat this, but are you seriously asking whether one should combine an acid and a base quickly? You'd probably not just be failed out of high school chemistry, but expelled and your teacher fired. This sort of BS question really doesn't belong here. Do you contribute to WP, or are you simply WP:NOTHERE? μηδείς (talk) 22:03, 8 December 2017 (UTC)[reply]
It doesn't have to be a strong acid and a strong base, it could be supermarket vinegar and baby teeth, or those drain cleaners that take hours to work on clogs. I think it doesn't matter with acid/base neutralization anyway since titration's very accurate (but you're the expert). Even so, I've poured baking soda straight into supermarket vinegar and didn't end up with <10 fingers, nor did the HS chem teacher get tarred and feathered. Sagittarian Milky Way (talk) 22:24, 8 December 2017 (UTC)[reply]
I can picture something different happening if some extra reaction could occur at an extreme pH value. For example, if you have a sodium carbonate (Na2CO3) solution and you slooooowly drip in less than one equivalent of hydrochloric acid while stirring rapidly, I would expect you to end up with a nice buffer of sodium bicarbonate (NaHCO3). But if you pour in the HCl all at once without stirring, then that NaHCO3 will go fully to carbonic acid (H2CO3), which then can release carbon dioxide (CO2) by eliminating water (H2O). If much of the carbon dioxide bubbles away into the air, then you would have to keep stirring for a very long time indeed to reverse that (by picking up traces of CO2 from the air, which happens, but would continue to happen past the original equilibrium point). However, in this case, and I think in most others like it, exposure to concentrated acid effectively reduces the expected effect of the acid (in this case production of sodium bicarbonate), because (in this case) one proton is used up getting rid of the product that was created by another. Wnt (talk) 01:50, 9 December 2017 (UTC)[reply]
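Wnt's carbonate example can be reduced to simple mole bookkeeping. This is an idealized sketch (complete reactions, no equilibria, all escaping CO2 assumed lost), just to show how the order of addition changes the outcome for the same amount of acid:

```python
# Idealized mole bookkeeping for the Na2CO3 + HCl example above.
# Assumes reactions go to completion and CO2 that forms bubbles away;
# real solutions involve equilibria, so this is only a sketch.

def slow_addition(mol_na2co3, mol_hcl):
    """Acid never locally in excess: CO3^2- + H+ -> HCO3^- (a buffer)."""
    reacted = min(mol_na2co3, mol_hcl)
    return {"CO3^2-": mol_na2co3 - reacted,
            "HCO3^-": reacted,
            "CO2 lost": 0.0}

def fast_addition(mol_na2co3, mol_hcl):
    """Acid dumped in one spot: CO3^2- + 2 H+ -> H2O + CO2 (escapes)."""
    reacted = min(mol_na2co3, mol_hcl / 2)
    return {"CO3^2-": mol_na2co3 - reacted,
            "HCO3^-": 0.0,
            "CO2 lost": reacted}

# Half an equivalent of acid: slow addition yields a carbonate/bicarbonate
# buffer; fast addition loses carbon as CO2 and leaves no bicarbonate.
print(slow_addition(1.0, 0.5))
print(fast_addition(1.0, 0.5))
```

Note how in the fast case each mole of carbonate removed consumes two protons, which is Wnt's point that a local excess of acid "wastes" protons destroying the product another proton created.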

Using air bubbles to stabilize a ship

Cruise ships use stabilizers that act like wings to counteract the rolling movement of the ship. Would it be possible to get a similar result using air bubbles? So, when a wave comes from starboard and the ship starts to tilt to port, the ship would pump out air on the starboard side to decrease its buoyancy on that side? Joepnl (talk) 21:48, 8 December 2017 (UTC)[reply]

Any company that did this would open up its owners to liability for negligence if it sank, given that sinking due to loss of buoyancy is a well-documented factor. It's like telling your anger management patient to drink. μηδείς (talk) 21:58, 8 December 2017 (UTC)[reply]
I can't think of a single invention that wouldn't raise eyebrows in the legal department before it got implemented. Joepnl (talk) 00:21, 9 December 2017 (UTC)[reply]
The rolling motion of a ship is not caused by wave action. It is an oscillatory motion that represents conservation of mechanical energy - similar to the motion of a pendulum. When the ship's mechanical energy is too great it manifests as a rolling motion of excessive amplitude. The solution to the problem is to reduce the mechanical energy by using a force (or torque) that does negative work on the ship; and this is the function of the hydrofoils (also called stabilizers). Dolphin (t) 00:30, 9 December 2017 (UTC)[reply]
How would a ship roll if there are no waves? Look out of the window on a ship. Waves: the ship moves. No waves: the ship doesn't roll or pitch. I had the absurd idea that removing messages from trolls was OK but apparently it's not. Joepnl (talk) 00:58, 9 December 2017 (UTC)[reply]
A ship rolls at its natural frequency which is usually different to the frequency at which swells and waves arrive at the ship. Swells and waves will excite the rolling motion of a ship, but other things can do so too. For example, the wind blowing through the superstructure can excite the rolling motion even though the wind is steady in speed and direction rather than oscillatory like swells and waves. It is true that large swells and waves will excite strong rolling motion, but it isn't true to say that if there are no swells or waves there will be no rolling motion. Dolphin (t) 01:13, 9 December 2017 (UTC)[reply]


We were discussing "The rolling motion of a ship is not caused by wave action". Yes there are other possible reasons, including winds, asteroids, and fat people running starboard but those are not the main reason a ship moves. Joepnl (talk) 01:44, 9 December 2017 (UTC)[reply]

Movement of cargo could cause a ship to roll.194.126.80.63 (talk) 01:04, 9 December 2017 (UTC)[reply]

Do you mean list or roll? And @Joepnl: who's the troll? If they're blocked, revert'em. μηδείς (talk) 01:10, 9 December 2017 (UTC)[reply]
The actual cause of rolling is not relevant to the question, I think. The question was: can rolling be reduced by pumping air under the rising side of the ship? I suppose it can, but 1) you must pump enough air to counterbalance the momentum of the ship, 2) the needed quantity of air must flow before the roll-up ends (this can be very, very much air very, very quickly), and 3) the air must be cleared from under that side of the ship before you begin to pump under the other side, lest the whole ship go under (as Medeis remarks). These are possibly too many "must"s for your idea to be really practicable. 194.174.76.21 (talk) 18:52, 11 December 2017 (UTC) Marco Pagliero Berlin[reply]
OK, it's probably pie in the sky. I do think the cause of rolling is relevant to how it would be implemented. That would require a device that predicts a wave coming in (these exist, but I can't remember where I saw one). "We" don't want that wave, but pushing it away is what the hull already does, and that would defeat the purpose anyway. But if you blow air into the wave, maybe 5 meters away from the ship, there is no upward force caused by pushing it down. The air should thin out the wave, making it less dense and forcing it to even out in all directions. Then again, it probably needs too much air too fast. Joepnl (talk) 22:58, 11 December 2017 (UTC)[reply]
Am I missing something? Blowing air out the side of a ship under a wave at some distance would simply be a form of sideways propulsion, since there would be an equal and opposite reaction on the ship due to the expulsion of the air. This sideways propulsion by the air would entirely defeat the intended effect of cancelling out the sideways motion caused by the wave, no? μηδείς (talk) 22:03, 12 December 2017 (UTC)[reply]
I think not, but then again there's a reason I'm asking questions :). A wave "pushes" a bit, by lifting objects that then want to slide off the wave like a surfboard. Waves look as if water is moving along, but in reality the water is just going around in circles, which probably does push and pull a bit sideways; on a big ship, though, the problem is the up-and-down movement. The bubbles are intended to dampen the up-and-down force when a wave meets the side of the ship. Joepnl (talk) 00:22, 13 December 2017 (UTC)[reply]

December 9

Is there a way to know that NASA didn't know about an asteroid?

I just spotted a sensationalist news article ([19]) that an asteroid capable of "destroying New York City" went past the Earth at a third the distance to the Moon, and NASA didn't detect it until it was headed away. They accept the explanation that NASA simply missed spotting the asteroid before that -- they are supposed to have found most of them but of course never all of them.

But of course, I can think of another potential interpretation. Maybe NASA knew it was coming but couldn't be sure it was really going to miss Earth, and stayed mum about it. Possibly some senior politicians were given a chance to shelter, but the general proletariat needs to work, not panic, right?

So... is there a way to post mortem what happened? Can you look at the asteroid's trajectory, look up how NASA did its scan, and say oh, it went here before they looked thataway and after they looked thataway? I would imagine that some post mortem would be expected to come out in a case like this. Wnt (talk) 01:38, 9 December 2017 (UTC)[reply]

If NASA failed to tell us something, it will eventually come out. But how would widespread panic have served anyone's purpose? ←Baseball Bugs What's up, Doc? carrots 01:49, 9 December 2017 (UTC)[reply]
hehe, you remind me of the guy with the bullhorn at the World Trade Center who told the cubicle drones to get back to work. "نفديك بالروح وبالدم", American style, eh? Wnt (talk) 02:06, 9 December 2017 (UTC)[reply]
"We give you soul and blood"? What has that got to do with anything? ←Baseball Bugs What's up, Doc? carrots05:47, 9 December 2017 (UTC)[reply]
There's a rather creepy tradition in the Middle East of people chanting that they sacrifice their blood and their souls to (some leader). In the WTC I guess it was the corporation. And here, well ... a lot of little people have to keep going about their lives if the big shots are going to catch their cab and their plane to the deep shelter, no? Wnt (talk) 18:11, 9 December 2017 (UTC)[reply]
All of NASA's data is public domain, and nearly all of it is regularly published on publicly available scientific repositories. In particular, the data of the various sky surveys is available. Space is big and asteroids are small, often dark, and fast - we cannot constantly scan all of the heavens all of the time. Any data will first be seen by some random NASA scientists, not by upper management. Just imagine how big a conspiracy you would need to keep such a thing secret. Then read On the Viability of Conspiratorial Beliefs. --Stephan Schulz (talk) 01:57, 9 December 2017 (UTC)[reply]
This is what I would hope. But is the data really being reviewed by random scientists like Lowell looking at glass plates, or do all these sky surveys go into a big computer program that spits out candidates ... and might omit some, leaving those who think they control it none the wiser? Wnt (talk) 02:08, 9 December 2017 (UTC)[reply]
At max brightness (c. 6:30 a.m. EST) it wasn't especially dim, it was not far from the anti-Sun point, the Moon was 65% full and less than 90° away but not especially close, it was over 13 or 14°N latitude, which isn't very inconvenient, and it was near the plane of the solar system, which is the most likely plane in which to look for asteroids. However it wasn't discovered till November 10th, yet its designation is still only 2017 VL2. This means 2017 VA to VH and VJ to VZ were discovered some time after October (by definition), then the second cycle (2017 Vx1), then half of the third - so very few minor planets were discovered before it this November, in a dark V half-month. The 5 or 6 days before the W half-month began saw 13 times more asteroids discovered than the other 9-10 days. This suggests that asteroid survey activity drops off sharply around full Moon, even when the Moon isn't close enough to full to make avoiding moonlight and twilight impossible (one of the best places to see it would've been Guam, though it wouldn't have been that much harder to find, at a different time on November 9, for many of the big observatories outside Asia-Pacific). If any amateur astronomers saw it, they might've assumed it had already been discovered and not called it in. Amateur astronomers have a much harder time discovering asteroids and comets than in, say, 2000 or 1995: there are lots of computer surveys now that, I think, automatically tag all moving objects according to whether they've been discovered. It would still require any amateur astronomer who saw it on the 9th not to be interested enough in a not-that-dim object moving pretty fast (so, near Earth) to look up what it was and discover it hadn't been discovered yet. They'd also have to not keep up with contemporary near-Earth asteroid flybys. 
If they did keep up, they'd probably wonder why they hadn't heard of a bright asteroid this fast, look it up, and become the discoverer instead (emailing its coordinates to the Central Bureau for Astronomical Telegrams (is it still called that?) should let NASA know about it). Sagittarian Milky Way (talk) 05:17, 9 December 2017 (UTC)[reply]
CENTRAL BUREAU FOR ASTRONOMICAL TELEGRAMS STOP --47.157.122.192 (talk) 18:28, 9 December 2017 (UTC)[reply]
I stupidly read the story when I saw it somewhere else before you linked to it. Well, I mean I read the first few lines. It quickly became clear it was utter nonsense. As with all Daily Mail stories (well, I wasn't sure it was DM when I clicked, but strongly suspected it was) it's sensationalistic, missing the point that this sort of thing isn't exactly a rare event. We're finding out all the time that something got slightly close to Earth and we only realised it a few weeks or months later. (And those are the ones we know about!) Many people think we need to get better at detecting these things sooner, although they often also acknowledge that in some ways there's probably little actual advantage at this time since, silly action movies aside, if we do find something headed here even in good time there's probably little we can do with our current level of tech. Nil Einne (talk) 12:12, 9 December 2017 (UTC) edited at 06:48, 10 December 2017 (UTC)[reply]
It's worth noting that we could be doing a lot more to spot asteroids that might threaten Earth, like putting a telescope between Earth and Venus, but we don't because there's no political will to do so. Few people vote based on candidates' positions on asteroid defense. Of course, "conspiracy theories" are non-falsifiable, so maybe there is one there but the data is only made available to the Secret Conspiracy, and maybe they have plans to escape to their secret NASA child sex dungeons on Mars, and so on and so on. --47.157.122.192 (talk) 18:28, 9 December 2017 (UTC)[reply]
It's also worth noting that NASA's purpose is about space travel, space probes, airplane technology and things like that, not particularly about searching for asteroids. Here's a page of theirs about the sort of things they're working on currently. --69.159.60.147 (talk) 07:40, 10 December 2017 (UTC)[reply]

Big bang and size of the universe

At the Big Bang, the entire universe (as I understand it) is thought to have been contained in a singularity. Does that imply that the universe as it now exists cannot be infinite? Or could the singularity have contained an infinite universe within it? rossb (talk) 09:35, 9 December 2017 (UTC)[reply]

No, it doesn't imply it can't be infinite. There's a really simple "toy example" that shows this.
Suppose the universe at the current comoving time (let's normalize in such a way that the current comoving time is 1) is described by a standard Cartesian coordinate system — every point has (x, y, z) coordinates. And then suppose that, for particles moving along with the Hubble flow, at any time t before the present (that is, t<1), if the particle has coordinates (x, y, z) now, then it had coordinates (tx, ty, tz) at time t.
Then you can see that the universe is infinite, and was infinite at every time t>0. However, at time t=0 (that is, the exact instant of the Big Bang), all particles were at the same point.
It's a little hard to visualize the discontinuity, but luckily enough you don't have to. In practice cosmologists (almost) never talk about the exact moment of the Big Bang. They can talk about what happened 300k years after it, or a second after it, or 10−35 seconds after it, but the Big Bang itself, no, they just don't touch that, usually. Could be it never happened at all; could be that time is an open interval that omits the instant of the Big Bang.
Of course there are all sorts of things wrong with the toy example in terms of relativity and known cosmology; it's not meant to be a serious proposal as to what happened. But it does show that the gross description of the Big Bang does not rule out an infinite universe. --Trovatore (talk) 10:28, 9 December 2017 (UTC)[reply]
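Trovatore's toy model is concrete enough to compute with. A particle comoving with the Hubble flow at (x, y, z) now (normalizing "now" to t = 1) sat at (tx, ty, tz) at time t, so every separation scales by t: space is infinite at every t > 0, yet all particles coincide at t = 0. A minimal sketch:

```python
import math

# Toy "scale factor" model from the comment above: comoving positions
# scale linearly with t, so all pairwise distances scale by t too.

def position(coords_now, t):
    """Position at time t of a particle at coords_now at t = 1."""
    return tuple(t * c for c in coords_now)

def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

p_now, q_now = (1.0, 0.0, 0.0), (4.0, 4.0, 0.0)  # 5 units apart now

# At t = 0.5 every separation is exactly halved...
print(distance(position(p_now, 0.5), position(q_now, 0.5)))  # 2.5

# ...and at t = 0 all particles, however far apart now, coincide,
# even though space was infinite at every t > 0.
print(distance(position(p_now, 0.0), position(q_now, 0.0)))  # 0.0
```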
I think this is the way to visualize the boundless universe. Let's say we live in a universe of such a size that we cannot detect its boundary. Then all of a sudden a Big Bang happens here, right in the middle of our existence, in such a way that every point of space becomes a singularity. They all begin to expand, and one of them will have all the anthropic parameters of our current Universe and eventually become populated by life forms. AboutFace 22 (talk) 16:11, 9 December 2017 (UTC)[reply]
No, I don't think that scenario (whatever it even means) is remotely responsive to the question. Mine, on the other hand, is. --Trovatore (talk) 20:03, 9 December 2017 (UTC)[reply]
  • The notion of the universe emerging from a singularity was put forth by Hawking and Penrose in 1970, and they have since abandoned it as incompatible with quantum mechanics when the universe was the size of the Planck length. The question as posed is decades out of date. μηδείς (talk) 23:08, 9 December 2017 (UTC)[reply]
    Unless I've missed some spectacular development, no one knows how to reconcile general relativity and quantum mechanics. So it strikes me as speculative to say what the eventual reconciled theory would or would not say about the universe on the order of a Planck time after the Big Bang. But that just makes it even clearer that there's no direct or simple reason that the Big Bang should exclude an infinite universe. --Trovatore (talk) 21:04, 12 December 2017 (UTC)[reply]
The argument is that the notion of a Planck time after the singularity is meaningless, as quantum mechanics (the Heisenberg uncertainty principle) prevents a time or length smaller than the Planck time or Planck length from being physically defined. The matter is discussed at some length in The Fallacy of Fine Tuning, whose first half deals with the Big Bang at length and in detailed math, with lots of meaty references. I just returned the book, so don't have the author's name at hand, but it was written in the last decade.
Hawking also discusses there not being a singularity in a more layman-friendly way in The Universe in a Nutshell. So, again, premising anything on a singularity pro-or-con infinity is dubious at best. The persistence of the idea of a singularity in the popular press and the confusion between an unbounded and a numerically infinite universe are two huge impediments to the discussion. μηδείς (talk) 21:58, 12 December 2017 (UTC)[reply]
The first work you refer to seems to be by Victor J. Stenger, and appears to be more an anti-theist polemic than a work of (or even exposition of) physics, which of course doesn't mean there isn't good physics in it.
The difficulty of dealing with spacetime either spatially or temporally below the Planck scale is well-known (this was I think the key issue in the earlier question about whether the universe "is" a manifold or just "is modeled by" a manifold). It doesn't follow that just because no one knows how to do it, it can't be done. Physicists are a bit prejudiced against singularities from the get-go, so while I haven't read Stenger or Hawking on this particular point, I'm not too impressed that they wiped their brows at the first reasonable excuse and said, whew, no singularities, go home, nothing to see here.
All that said, I completely agree with you that there's no basis for concluding anything about the finitude or infinitude of the universe from a presumed singularity at the Big Bang, whether or not such a singularity exists. And that in itself answers the original question. --Trovatore (talk) 00:58, 13 December 2017 (UTC)[reply]
The idea of a singularity at the beginning of the universe seems a lot like the idea of a visual singularity at the horizon. I mean, the closer you look to the horizon the more stuff is packed into a confusing little space... then, eventually, some higher-order factor (the curvature of the earth) intervenes to conceal your view completely. Well, in the case of the early universe we're talking about unfathomable temperatures, particles with unfathomably relativistic speeds and/or high mass, according to unforeseen laws of unfathomably high energy physics. If an electron would need a thousand times the lifetime of the cosmos to complete one oscillation about a proton, and no atom can exist without being ripped apart a billion times over, is the time scale defined by our atomic clocks relevant? I tend to suspect that the length of time to cross the entire universe has always been a very long length of time in some more meaningful sense, especially when so many collisions would occur along the way... Wnt (talk) 22:52, 12 December 2017 (UTC)[reply]

Cold-tolerant trees

How do woody plants survive the winter in cold climates? Consider taiga forests, which typically experience temperatures far below freezing. Xylem#Evolution (one of the longest non-table sections I've ever seen in an article) mentions how some plants are able to tolerate the effects of freeze-thaw cycles on their physical structures, but I'm more wondering about water and nutrient transport: when things are frozen, how does anything move? Ice can't be transported, in particular; I would imagine that a frozen tree would die for lack of water, but obviously that doesn't happen with your average healthy tree. Nothing else in xylem, and nothing at all in phloem, as far as I could see. Nyttend (talk) 17:04, 9 December 2017 (UTC)[reply]

How do Trees Survive Winter Cold? by Michael Snyder, Commissioner of the Vermont Department of Forests. Alansplodge (talk) 18:05, 9 December 2017 (UTC)[reply]
Alan's link is good, it's largely about antifreeze, and note that there just isn't much transport going on in the Taiga during winter. Our best general article is at Cold_hardening (And is understandably kind of hard to find if you don't know what they call it. Maybe you can link it from a relevant section of the other articles?). Hardiness_(plants)#mechanism is fairly useless. Antifreeze proteins are a big part of it, see e.g. here [20] for recent scholarly work. SemanticMantis (talk) 18:08, 9 December 2017 (UTC)[reply]
Critical is the part about water being relocated to storage organs; I wasn't aware that this was an issue. I guess I shouldn't be surprised by the lack of nutrient transport, since girdling doesn't kill a tree immediately (and deciduous trees survive temperate winters without leaves), but I was completely unaware of this stuff. Thanks a lot! Nyttend (talk) 22:21, 9 December 2017 (UTC)[reply]
Your original supposition "that a frozen tree would die for lack of water" was not too far wide of the mark though Nyttend; "Winter desiccation or frost drought is assumed to be one of the main causes of the upper limit of tree growth in high mountains outside of the tropics..." Trees at their Upper Limit: Treelife Limitation at the Alpine Timberline (p. 5). Alansplodge (talk) 11:10, 10 December 2017 (UTC)[reply]
Exploding tree is relevant here. Wnt (talk) 22:45, 12 December 2017 (UTC)[reply]

Why isn't northern Vermont covered in ice?

New York City was covered in ice 20000 years ago. Since then, the Earth has warmed up 9°F.

At present, according to the cities' Wikipedia articles, New York City has a mean temperature of 55°F, and Burlington, VT has a mean temperature of 46°F, a 9°F difference.

This means that New York City 20000 years ago and Burlington, VT at present should have about the same climate with regard to temperature. Why isn't Burlington, VT (and southern Quebec) covered in ice like New York City was 20000 years ago? — Preceding unsigned comment added by HotdogPi (talkcontribs) 21:37, 9 December 2017 (UTC)[reply]

If temperatures warm, ice caps shrink and sea levels rise. Conversely, drop the worldwide temperature by 9°F, and increasing ice will lower sea levels, and oceanside cities like New York suddenly won't be on the ocean; the climate will be more continental because of the new inland location. Also, note that glaciers can exist where they can't form — since Arctic glaciers can spread southward, rising temperatures (and receding northern glaciers) mean that they have to travel a good deal farther to reach Burlington now than they did to reach New York before, even if the weather were similar. Glaciers won't always spread to adjacent places cold enough for them to tolerate (see #Ice-free northern Greenland), but they will in many situations. Nyttend (talk) 22:18, 9 December 2017 (UTC)[reply]
Also it seems hard for ice to survive for tens of millennia if the average annual air temperature was 46°F. Any precipitation in the summer would tend to be rain. These areas must've cooled more than the global average while the rainforests cooled less than the global average. Interestingly, global warming causes the poles to warm faster than the tropics. As a nitpick, only part of New York City was under ice at the last glacial maximum. Manhattan, the Bronx and the northwestern halves of Brooklyn, Queens and Staten Island were under ice, and the southeastern halves of those 3 boroughs weren't. There's no non-glacial ridge or something with this orientation, and the ice limit parallels the 20000 BC coast, so continentality may have something to do with the ice limit orientation. Sagittarian Milky Way (talk) 22:42, 9 December 2017 (UTC)[reply]
  • Imagine a simple world-model in which the average yearly temperature is 40 degrees F. That could be consistent with a mild climate with six months of 30 degree temperatures, where the snow never melts, and six months of 50 degree temperatures, where the snow melts slowly. Or it could be consistent with an extreme climate, where the 'winters' are 0 degrees F, and the 'summers' are 80 degrees F. Note that the average temperatures are the same: (30+50)/2 = (0 + 80)/2 = 40; but in the extreme climate all the snow melts quickly at the beginning of the warm period, while in the mild climate, even though the winters are warmer, the snow melts much more slowly over the summer, and some ice packs can last all year, building into glaciers.
This seeming paradox arises because although the average temperatures are the same on a year-round basis, snow doesn't care whether it is 0 degrees F or 30 degrees F. It won't melt in either case. The only thing that matters is the summertime temperature--the average temperature is irrelevant. Hence knowing only the average temperature difference between NYC and upstate Vermont tells you nothing. What matters is the summer temperatures in those places. μηδείς (talk) 22:59, 9 December 2017 (UTC)[reply]
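The two-climates point above can be checked with a toy computation (my own illustration, not from the thread): give both climates the same 40 °F annual mean, but count "melt potential" only from degrees above freezing, as in a crude degree-day melt model.

```python
# Toy illustration: two climates with the same annual mean (40 F)
# produce very different melt potential, because snow only melts
# when the temperature is above 32 F.

def melt_degree_days(monthly_temps_f, freezing_f=32.0):
    """Sum of (degrees above freezing) x (days), a crude proxy for melt energy."""
    return sum(max(t - freezing_f, 0.0) * 30 for t in monthly_temps_f)

mild = [30.0] * 6 + [50.0] * 6      # 30 F winters, 50 F summers
extreme = [0.0] * 6 + [80.0] * 6    # 0 F winters, 80 F summers

assert sum(mild) / 12 == sum(extreme) / 12 == 40.0  # identical annual means

print(melt_degree_days(mild))     # 3240.0
print(melt_degree_days(extreme))  # 8640.0 -- roughly 2.7x more melting
```

The averages agree exactly, yet the "extreme" climate has far more above-freezing degree-days, which is the quantity that actually melts snow.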
The current annual temperature range of NYC is at least 43.9°F (more if you use daily averages instead of monthly and July avg highs instead of July avg means). Do you have evidence for your implied claim that the average annual temperature range of NYC when it had glaciers (and extra continentality) might've been only 20°F? Without seeing the evidence it seems possible but unlikely. Sagittarian Milky Way (talk) 23:35, 9 December 2017 (UTC)[reply]
Whom are you addressing, SMW? You have indented under me, ask "Do you have evidence..." and mention 20°F that I see nowhere else in this thread. μηδείς (talk) 01:20, 10 December 2017 (UTC)[reply]
50°F in summer and 30°F in winter is a 20° annual temperature range. Maybe glaciers formed that way somewhere near the West Coast but this doesn't seem to be the open and shut answer to the NYC/Vermont paradox (which it turns out is actually (at least partially) because the OP was mistaken and the NYC avg annual temperature was only ~33°F or less). Sagittarian Milky Way (talk) 22:20, 11 December 2017 (UTC)[reply]
Oh. That's a simple random coincidence based on my arbitrary choice of values with the same mean, and of no relevance to my point. I could have compared a year with constant 31 degree winters and constant 33 degree summers to one with 0 degree winters and 64 degree summers. The yearly snow accumulation would be vastly different (assuming the 0 degree winters didn't stop snow from falling due to lack of moisture in the air). You never mentioned "range" before my first post, although you did mention "average" four times, if I counted correctly. Range and average are totally different beasts. Hence my point that the average by itself is irrelevant. μηδείς (talk) 03:35, 12 December 2017 (UTC)[reply]

Temperature change is not uniform. Here is one estimate of the spatial pattern of change since the last glacial maximum: [21] Those temperature anomalies are in degrees C, so roughly double them if you want to think in degrees Fahrenheit. The biggest changes were associated with the melting of the Laurentide Ice Sheet over North America, where temperatures locally may have warmed >20 C (>36 F). For New York and Vermont, it looks like the estimated change is around 12-20 C (22-36 F), so much larger than the global mean change. Dragons flight (talk) 10:44, 10 December 2017 (UTC)[reply]
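The "roughly double" rule of thumb above is just the exact 9/5 conversion factor applied to temperature *differences* (a quick sketch of the arithmetic; the +32 offset of the full Fahrenheit formula cancels out for anomalies):

```python
# Temperature anomalies (differences) convert with the 9/5 factor alone;
# the +32 offset cancels when subtracting two temperatures.

def c_anomaly_to_f(delta_c):
    """Convert a temperature difference from degrees C to degrees F."""
    return delta_c * 9.0 / 5.0

print(c_anomaly_to_f(12.0))  # 21.6  (~22 F, low end of the quoted range)
print(c_anomaly_to_f(20.0))  # 36.0  (matches the ">36 F" figure above)
```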

Warming continues to affect the poles more than anywhere else -- see https://data.giss.nasa.gov/gistemp/maps/ and make a map for say the latest five years vs. a reference period. (as I understand it, "the poles" are more or less land with high year-round albedo, water insulated by ice; melting that ice therefore increases surface temperature, which is averaged into a planet-wide temperature) Wnt (talk) 17:05, 10 December 2017 (UTC)[reply]

I think you are mistaken about the temperature change since the last glaciation. It probably was 9°C, not 9°F. See File:EPICA_temperature_plot.svg. Ruslik_Zero 19:58, 10 December 2017 (UTC)[reply]

@HotdogPi: In fact it may be lower than that: this paper indicates that the temperature anomaly may have been about −12°C or lower near present day New York. Ruslik_Zero 20:17, 10 December 2017 (UTC)[reply]
No, 9 F (5 C) is about right for the global mean change (plus or minus a few degrees of uncertainty). More at the poles (like EPICA), and less in the tropics. There was a good deal of spatial variability. See the map in my previous post. Dragons flight (talk) 22:33, 10 December 2017 (UTC)[reply]
There is some debate about the mean change. In addition all those anomalies are relative to the pre-industrial temperatures which are probably by ~1°C lower than the present day value. Ruslik_Zero 20:10, 11 December 2017 (UTC)[reply]

The way snow melts

I noticed that when the temperature rises above zero after a colder period, sometimes all that is left of a layer of snow is where I walked through it: just a track of footprints made of snow.

Is it because compressed, denser snow is more resistant to the heat? Languagesare (talk) 21:50, 9 December 2017 (UTC)[reply]

It's mostly because normal "fresh" snow contains a lot of air. Since air is a very good insulator it will slow down the complete melting. --Kharon (talk) 22:52, 9 December 2017 (UTC) Wrong. Much more complicated. --Kharon (talk) 06:05, 10 December 2017 (UTC)[reply]
Are you saying that the footprints "compress" the air as well as the snow? ←Baseball Bugs What's up, Doc? carrots23:09, 9 December 2017 (UTC)[reply]
Kharon seems to have misinterpreted the question, since the OP says that the unpacked snow melts, and only his footprints are left, not that the footprints melt and only the fresh snow is left. μηδείς (talk) 02:41, 10 December 2017 (UTC)[reply]
It may be that Kharon is from a place that rarely sees snow. You and I both have had plenty of exposure to snow, and the OP's question describes a familiar phenomenon. And the OP's answer-in-the-form-of-a-question is what I would assume to be the explanation. Snow that's more densely packed will tend to take longer to melt. ←Baseball Bugs What's up, Doc? carrots03:22, 10 December 2017 (UTC)[reply]
Yes, I was wrong and I misunderstood the question. It seems compacted snow takes longer to melt. I tried to read up about that, but most of the literature on snow and melting is focused on glaciers. However, one major accelerating factor for the melting process seems to be that meltwater from a melting surface flows down more easily into uncompressed snow, transporting heat from the surface downward faster the less compactly the snow crystals are baked together or transformed into polycrystals. --Kharon (talk) 06:05, 10 December 2017 (UTC)[reply]
On mountains, uncompacted powder snow is often blown away leaving raised footprints, an important clue that a slope is not likely to be avalanche prone; however the loose snow is likely to have accumulated dangerously elsewhere, so they are both a good and a bad sign for winter mountaineers and off-piste skiers. [22] Alansplodge (talk) 11:02, 10 December 2017 (UTC)[reply]
No one has pointed out that snow has a crystalline surface area several orders of magnitude greater than compacted ice. Back when ice was stored in ice houses, straw or sawdust was used to fill up all the gaps and stop the circulation of air. Air contains water vapor, and water vapor transports heat. Water has a high vapor pressure, and the moment the air temperature rises above 0 deg C, condensation at the snow surface draws heat-carrying moisture out of the air. So of course, un-compacted, air-permeable snow is going to melt much more quickly. Aspro (talk) 22:34, 12 December 2017 (UTC)[reply]

United States map

Aaaarrrgh! Please help. I'm looking for a normal, high quality map of the US, like what you'd have on the wall. You know, a bit of topo, roads, cities, colours, that sort of thing. It is absurdly hard to find for me. Many thanks! Anna Frodesiak (talk) 22:18, 9 December 2017 (UTC)[reply]

Do you want to buy a physical map, or just have a machine readable one with a large size and a free license? You can also search google: https://www.google.com/search?q=united+states+map&dcr=0&tbs=sur:fmc,isz:lt,islt:4mp&tbm=isch&source=lnt&sa=X&biw=1280&bih=650&dpr=1.5 Graeme Bartlett (talk) 23:40, 9 December 2017 (UTC)[reply]
There's a company called WallPops selling through Walmart and Target that has National Geographic maps of the US for sale. They're about $15. Just Google it really. They're not hard to find.--Jayron32 03:28, 10 December 2017 (UTC)[reply]

Actually, I just want to view it on the computer. I just want that sort of map. Anna Frodesiak (talk) 05:17, 10 December 2017 (UTC)[reply]

Is this downloadable one any good? Or this screen view? Or did you want one you could use in a Wikipedia article? Alansplodge (talk) 10:49, 10 December 2017 (UTC)[reply]
Those are great! Thank you!
Now, I know commons has good maps, but how on Earth are people supposed to find them? Many people must go there for a normal map, as I described above, but just not be able to find one. Can anything be done? Maybe some sort of lead map at the top of the main cat? Anna Frodesiak (talk) 11:17, 10 December 2017 (UTC)[reply]
Anna Frodesiak, try to export OpenStreetMap. There are already several views. --Hans Haase (有问题吗) 11:41, 10 December 2017 (UTC)[reply]
Thanks Hans Haase. Actually, that's not a bad idea but the high res links above are best. Cheers and many thanks. Anna Frodesiak (talk) 02:24, 11 December 2017 (UTC)[reply]

December 10

Towed array

What is the minimum depth under the keel which is required to safely deploy a towed array without it ripping away from dragging on the sea bottom? Is this depth the same for submarines and surface ships? 2601:646:8E01:7E0B:F9B4:9A86:7938:FC5D (talk) 02:14, 10 December 2017 (UTC)[reply]

No, they aren't the same. Civilian towed arrays are a bit more sophisticated than their military counterparts, and can be actively flown at different depths, handy if a ship wants to cross your stern during a survey. I believe military submarine streamers are slightly negatively buoyant; I don't know about military surface ships' arrays. In the days before solid streamers, a big part of deploying a surface streamer was ballasting it to neutral buoyancy, since if a section dived the sea pressure would force the paraffin up, emptying the tube and encouraging a deeper excursion. Very messy. This is the design I worked on, I think: http://www.ldeo.columbia.edu/research/office-of-marine-operations/seismic-equipment-and-operations
The depth is typically 10 m below the sea surface for a surface array. Greglocock (talk) 02:42, 10 December 2017 (UTC)[reply]
There are definitely many different types, and some are likely technically independent of the distance to the bottom. However, an array must be kept roughly in one straight line, because the real water column is divided into thermal layers (see Thermocline) which deflect and reflect acoustic signals, much as a mirage or Fata Morgana deflects and reflects light. A "hanging" towed array would make it a challenge to keep the array in one of these layers and would therefore massively reduce the array's capability to detect sound sources, their direction and their distance reliably. --Kharon (talk) 06:32, 10 December 2017 (UTC)[reply]
OK, the array in question is a military underwater array for submarine detection and tracking (such as the TB-23). 2601:646:8E01:7E0B:F9B4:9A86:7938:FC5D (talk) 07:58, 10 December 2017 (UTC)[reply]
I think if the sub is stationary the streamer hangs vertically. But I don't know. As you know the density of water varies and the compression of the array varies with depth so it is not possible to have a neutrally buoyant streamer for all depths. It would be nice to put active depth keeping modules into the streamer but that is not compatible with a winched system (on a ship the deck crew can fix wings on as the streamer is deployed). Greglocock (talk) 19:41, 10 December 2017 (UTC)[reply]
Nope, modern civilian arrays can use various position sensing techniques so they know the shape of the streamer and can then adjust the signal processing to account for bends in the streamer.Greglocock (talk) 19:41, 10 December 2017 (UTC)[reply]
[un-indent] OK, does anyone happen to know the minimum safe depth for deployment? I'm researching for a manor-house detective novel which takes place aboard a nuclear submarine, and I need to know how far offshore they have to be in order to deploy the array! (In Red Storm Rising, it says that 200 feet below the keel is too shallow for a towed array -- but that's a work of fiction, so I can't rely on that alone.) 2601:646:8E01:7E0B:F9B4:9A86:7938:FC5D (talk) 03:35, 11 December 2017 (UTC)[reply]

Submarine crew

How big is the engine watch on the Virginia-class submarines, and how many of them are officers and how many enlisted? (If this is classified, please say so and I'll cross this question out.) 2601:646:8E01:7E0B:F9B4:9A86:7938:FC5D (talk) 02:17, 10 December 2017 (UTC)[reply]

If it's classified, no one here will know the answer. ←Baseball Bugs What's up, Doc? carrots03:19, 10 December 2017 (UTC)[reply]
People do things they regret, Bugs. https://www.theguardian.com/us-news/2016/aug/20/us-navy-sailor-jailed-for-taking-photos-of-classified-areas-of-nuclear-submarine http://www.foxnews.com/us/2017/01/25/pardon-me-navy-sailor-in-jail-for-submarine-photos-pleads-for-mercy-from-trump.html We can hope no one would share classified information, for their own sake. μηδείς (talk) 04:05, 10 December 2017 (UTC)[reply]
I googled "crew size of virginia class submarine" and found this top-secret US Navy website:[23]Baseball Bugs What's up, Doc? carrots04:50, 10 December 2017 (UTC)[reply]
Doesn't say how many of them work in the engine room, though. 2601:646:8E01:7E0B:F9B4:9A86:7938:FC5D (talk) 07:55, 10 December 2017 (UTC)[reply]

Nitrate testing chemical

Two questions:

  1. What chemical is used to conduct the nitrate test for freshwater from API?
  2. What chemicals (or class of chemicals) could create a false positive?

Background: I tested my hand-me-down aquarium's water and the test turned cherry red within 30 seconds (which is bad). The nitrate should be like 20 ppm, not 200+, given the bio load and time since water change. My thought was that a terracotta pot leached something into the water (silicate most likely). Fish show no signs of stress, which makes me think it's not actually nitrate. EvergreenFir (talk) 07:22, 10 December 2017 (UTC)[reply]

Link to help explain the API reference.--Phil Holmes (talk) 12:01, 10 December 2017 (UTC)[reply]

Glock pistol's walls design

Why are Glock pistols kind of square on the top? What advantage does such a design have? I assume a part is no more resistant than at its thinnest spot, and it's easier to create uniform parts than to create parts with variable thickness. --Hofhof (talk) 13:37, 10 December 2017 (UTC)[reply]

If you do a Google search for Glock squared off slide or similar, you'll find lots of discussions on this point. Selecting one at random ([24]), there appear to be a number of plausible factors. (You will also find plenty of flame wars. Gun aficionados are as prone to bickering over minutiae as any other class of fanatic.)
  • Reduced manufacturing cost. Squared-off blocks are often easier and cheaper to make.
  • Deliberately increased weight. Since the Glock pistol has a fairly lightweight plastic frame, putting a bit more heft in the metal slide can reduce perceived recoil. It also helps with durability of the slide—the increased mass means it doesn't move as fast when cycling.
  • Improved robustness of locking lug(s). The square slide permits use of a single large square lug, rather than multiple radial lugs.
  • Better grip. Some people find it's easier to grasp a square slide compared to a rounded one.
There's lots of inconclusive discussion about which factors guided the design and which ones were consequences. TenOfAllTrades(talk) 15:50, 10 December 2017 (UTC)[reply]
It's simply a design choice. No physics or magic behind it. Why are all US pickups huge? Why does Toblerone sell chocolate in triangular bars? Customers get used to designs and features connected to brands and in general reject changes. So it is always a huge risk to change an established product.
Anyway, a SIG Sauer P226 does not look that much different. At least if you compare both for example with the iconic Mauser C96 which technically outmatches all of today's firearms. But with its outdated design no one would buy it anymore today. --Kharon (talk) 00:09, 11 December 2017 (UTC)[reply]

Because the rear sights are removable. It's a milled slot. Newer RMR sights require a flat surface as well. Search for "sight pusher" and you'll see the jigs used to remove the sight. Search for "RMR sight" and you will see the flat, large sight that attaches to the slide. Also, the barrel breech end is a square block of steel left unturned when the barrel is made. The Glock design has the barrel tilt up, with the square slide moving across the square breech. If the slide were round, the pistol would need to be bigger. Lastly, it's easier and cheaper to make a metal piece in a brake than on a CNC mill. --DHeyward (talk) 23:15, 11 December 2017 (UTC)[reply]

mental activity during unresponsiveness

Last winter I underwent emergency abdominal surgery, followed by respiratory failure and severe sepsis. During the first 10 days in intensive care I was unresponsive, recovering gradually until after about a month I was completely aware of my surroundings. During this time I experienced an active mental life: feeling that I was on a difficult but purposeful journey filled with struggle and frustration, I passed through quite a few dreamlike scenes. At one point I had to choose between life and death; at another, I found myself completely alone in the universe. Gradually I began to incorporate incidents happening around me in the hospital, but these were incorporated into my private world. I'm now completely recovered.

I wonder if the medical literature describes any similar experiences? If it would be useful for me to share my experience with the medical world, how would I do that? --Halcatalyst (talk) 16:13, 10 December 2017 (UTC)[reply]

Glad to hear you have recovered. I think what you describe is what goes on in the mind when placed in an induced coma, which would have been appropriate for your serious condition. Doctors are well aware of these dreamlike states where one is not sure which world or reality one is in. This is just one woman's experience: This Woman Explains What It’s Like to Be in a Medically Induced Coma. I think your experience is better told to others who are just recovering. You can Google around for support groups where you can help give support to those who are wondering about this most bewildering experience. Aspro (talk) 20:27, 10 December 2017 (UTC)[reply]
In the non-medical literature, The Bridge (Iain Banks) and Pincher Martin (William Golding) use this as a plot device, so the phenomenon is not unknown. We _do_ have an article Life review, but it's written from a decidedly non-scientific perspective, so I can't really recommend it here. Tevildo (talk) 21:34, 10 December 2017 (UTC)[reply]
  • Delirium is a well-known phenomenon. The last time I was hospitalized the person in the bed next to me came in screaming of being attacked by unseen entities, was diagnosed with multiple organ failure, and died. When they brought in the next patient for that bed at 3 am without waking me or explaining what was going on, I felt like I was in an alien-abduction episode of The X-Files with wires sticking out of me, bright lights, strange figures. It took me quite some time (I was under sedation and on an IV) to realize my actual situation. The same happened in the past when I was on a morphine drip and woke intubated and restrained from an abdominal surgery that was supposed to be laparoscopic but lasted 16 hours. And when I had minor foot surgery (2 hrs.) and woke intubated I had to bang the bedrail to get the doctors to realize I was awake and aware, so they would remove the tube.
My point is, medical professionals will have seen this all, and although it might seem of great interest to the patient, unless you are part of a study on post-operative recovery, there's not much point in advising anyone of your delirious state after the fact. It would be unethical for them to induce delirium, and they won't learn anything from subjective reports after the fact that hasn't been known since the ancients. Our article: "Delirium is one of the oldest forms of mental disorder known in medical history."[1] μηδείς (talk) 23:56, 10 December 2017 (UTC)[reply]
  1. ^ Berrios GE (November 1981). "Delirium and confusion in the 19th century: a conceptual history". Br J Psychiatry. 139 (5): 439–49. doi:10.1192/bjp.139.5.439. PMID 7037094.
Of course, observations from random internet posters shouldn't be relied upon. Your doctor is your best bet for discussing the subject. 2606:A000:4C0C:E200:831:EE2:9FFB:76D0 (talk) 00:07, 11 December 2017 (UTC)[reply]
Mental activity of unconscious patients is well-documented. It appears that the question is asking if the content of the dream is important. Documentation of people divining life meaning from dream content goes back thousands of years. In the medical field, dream content is not important. 209.149.113.5 (talk) 18:01, 11 December 2017 (UTC)[reply]

Thank you all for your responses. I don't believe the "journey" story means anything, it was just the way I perceived/interpreted my experience.

The article you referenced was very interesting; the woman and I definitely had the same type of experience. Muscular atrophy, cognitive problems, etc. At one point with the speech pathologist I couldn't think of a single word that begins with S.

I did receive fentanyl, but I don't believe I had any of the symptoms of delirium. At one point the doctors told my family I was on the brink of death, but I don't identify with near death experiences as they are usually described. Nor with the Life Review article. --Halcatalyst (talk) 21:54, 11 December 2017 (UTC)[reply]

Delirium is a symptom and a broad spectrum; so certain aspects may apply, yet not others. It is certainly a response both to trauma and sedation, and confabulation (the making up of narratives) is typical. I suggest you read the article, as much of it may apply, whether that word is one you would choose to use yourself. Since you are not asking for a diagnosis, the choice of label is unimportant. μηδείς (talk) 03:23, 12 December 2017 (UTC)[reply]
Yes, I did read the article (thanks again for suggesting it), and looked up further information on delirium. The doctors didn't say they had induced a coma. As you say, the label isn't important. To me, looking back, the experience was interesting, maybe the more so because I didn't have any of the unpleasant aftereffects described by the woman in the article.

Secondary car mirror reflections

In my front view car mirror there are two faint reflections in addition to the main one - one above it and one below (the pic shows only the one above). What is the explanation for this? Gil_mo (talk) 16:40, 10 December 2017 (UTC)[reply]

If your car allows you to flip the mirror to a "night vision" setting which reduces the glare of car lights behind you, the mirror used for that setting is probably what you're seeing. ←Baseball Bugs What's up, Doc? carrots16:52, 10 December 2017 (UTC)[reply]
(ec) A lot of those are rigged with a feature where you push them back somehow and get a weaker reflection in a different plane, to prevent the high beams of the guy behind you from being quite so annoying at night. This is the "prismatic" feature described at rear view mirror; apparently the angle is implemented as a glass prism. Wnt (talk) 16:57, 10 December 2017 (UTC)[reply]
This diagram [25] may help. SemanticMantis (talk) 17:12, 10 December 2017 (UTC)[reply]
In short: the "daytime" reflection uses the mirrored surface from the rear of the glass; the "nighttime" reflection uses the plain front surface of the glass. The glass is wedge-shaped, and flipping the lever-thing changes the angle of the mirror accordingly. 2606:A000:4C0C:E200:CCFA:802A:8BDD:BA79 (talk) 19:09, 10 December 2017 (UTC) — Oops, I missed the "in addition to the main one" part of the query. [dynamic IP]:2606:A000:4C0C:E200:831:EE2:9FFB:76D0 (talk) 22:58, 10 December 2017 (UTC)[reply]
When I was a wee lad my parents' car had a "Danite double vision mirror," per the owner's manual. Having read Arthur Conan Doyle's "A Study in Scarlet," I wondered what rear-view mirrors had to do with Mormon terrorists. Edison (talk) 21:29, 10 December 2017 (UTC)[reply]
Our article: Rear-view mirror#Anti-glare.
But this explains only one of the two faint reflections asked about. I've noticed it as well, and found that at night, if I have the mirror properly adjusted while clicked in either position, then flicking it to the other position will give me an equivalent dim view. Is this second dimmed reflection due to a partial internal reflection causing a portion of the light beam to make yet another pass through the mirror? (Hmm, Internal reflection redirects to Total internal reflection. There is such a thing as a partial internal reflection, isn't there?) -- ToE 21:31, 10 December 2017 (UTC)[reply]
Yes, there is. 2606:A000:4C0C:E200:831:EE2:9FFB:76D0 (talk) 22:52, 10 December 2017 (UTC)[reply]
I think that's on the right track. Think of a mirror aligned with the headlights with a prism in front of it. You turn it one way and now the surface of the prism is aligned perpendicular to the headlights, and you see the reflection off that. You turn it the other way and now the reflection of the surface of the prism (apparently on the far side of the mirror!) is aligned with the headlights. The only thing I need to make this a theory is for the partial internal reflection of light hitting glass straight-on to be the same as (or at least similar to) the partial external reflection of light hitting glass straight-on. My gut feeling is that it has to be because a light path should be reversible -- if 96% of the light makes it through one way, I'd think that means 96% makes it through the other. And if the glass doesn't absorb much light, it doesn't have any momentum to allow it to do anything but pass through or reflect, so... Wnt (talk) 03:38, 11 December 2017 (UTC)[reply]
@Wnt: Your feeling is kind-of correct, but misleadingly formulated; see Fresnel equations for the mathy explanation, and total internal reflection for an example where this goes horribly wrong. When the light strikes close to perpendicular to the air/glass surface, which is the case for the rear-view mirror, the coefficient of reflection depends only on the refractive indices and not on the striking angle (Fresnel_equations#Normal_incidence), i.e. the transmission coefficient is the same for glass-air as for air-glass. But far from the normal, it is symmetrical with respect to light-return, but not space-symmetrical. TigraanClick here to contact me 10:58, 13 December 2017 (UTC) (edited 11:02, 13 December 2017 (UTC))[reply]
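The normal-incidence symmetry discussed here is easy to check numerically. A minimal sketch of the Fresnel normal-incidence formula (the refractive indices are illustrative values, not taken from the thread):

```python
# Fresnel reflectance at normal incidence: the fraction of intensity
# reflected depends only on the two refractive indices, so it is the
# same going air -> glass as going glass -> air.
def normal_incidence_reflectance(n1, n2):
    """Intensity reflection coefficient for light hitting the n1/n2 interface head-on."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_glass = 1.0, 1.5  # illustrative; ordinary mirror glass is ~1.5
r_in = normal_incidence_reflectance(n_air, n_glass)   # entering the glass
r_out = normal_incidence_reflectance(n_glass, n_air)  # leaving the glass
print(r_in, r_out)  # both ~0.04: about 4% reflected at each surface, either way
```

That ~4% surface reflection is what produces the dim secondary images: one from the front face of the wedge, and one from light that bounces off the silvered back, partially reflects off the inside of the front face, and makes a second trip to the mirror.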
@Tigraan: Actually I don't think total internal reflection contradicts this idea, because the whole idea there is that there is no way for an external ray of light to get inside the refractive medium following the same path as the reflected ray. Wnt (talk) 11:35, 13 December 2017 (UTC)[reply]
Your idea that "a path of light A -> B (on surface) -> C has the same transmission coefficient as the path C -> B -> A" is correct. But the way I misread it at first (and I assume others could misread it as well) is "a path of light A -> B -> C has the same transmission coefficient as the path C' -> B -> (whatever), where C' is the symmetrical point to C with respect to the surface", which is wrong. I just wanted to clear that up (and provide links to the relevant article for transmission coeffs). TigraanClick here to contact me 17:13, 13 December 2017 (UTC)[reply]
Graphical answer to the question, showing the main reflection and the two first-order second reflections. The angle of the top path (with an inner-glass reflection) is not exactly accurate (it should be superposed paths), but it is clearer that way.

Article improvement opportunity: If anyone here is handy with Inkscape or the like, Rear-view mirror#Anti-glare would be well served by an image similar to the one linked above by SemanticMantis. -- ToE 16:10, 11 December 2017 (UTC)[reply]

Ha! I love the constricted / dilated pupil. Great job! I might be pushing my luck here since the result wouldn't really be applicable to the article, but I would also love to see a third image with the mirror tilted down from the glare position instead of up, showing the path via internal reflection, where the main light beam passes through the glass, reflects off the mirror, and then a small portion reflects off the interior of the glass, back to the mirror, and then out through the glass. This is the second dim image asked about by the OP. Angles as they are, I suspect that your first two images would be too crowded to include this third path, but with the mirror down so that this third path is reaching the eye, angles may be such that all three paths could be included. -- ToE 20:13, 11 December 2017 (UTC)[reply]
 Done Ask and you shall receive. I added the other two pictures to the article. TigraanClick here to contact me 20:56, 12 December 2017 (UTC)[reply]
Tigraan! You're my hero! And the illustrations in the article look great. (I didn't even know about {{switcher}}!) The sun / moon & stars are frosting on the cake. -- ToE 23:28, 12 December 2017 (UTC)[reply]
OP here. Well done, Tigraan, well done everybody! Smarter every day... Gil_mo (talk) 07:44, 13 December 2017 (UTC)[reply]
OP again, here's a challenge - could this be simulated in Blender? Gil_mo (talk) 07:47, 13 December 2017 (UTC)[reply]

December 11

I am unsure whether or not these two articles are about the same subject. Does anyone have more insight? --Leyo 09:50, 11 December 2017 (UTC)[reply]

The first article refers to "an addition to the 1979 Geneva Convention on Long-Range Transboundary Air Pollution (LRTAP)", so it's the same subject (same dates, same place); I suggest a WP:MERGE. — Preceding unsigned comment added by Askedonty (talkcontribs)

December 13

Are these oranges there in the background?


Thanks, ClinicalCosmologist (talk) 10:35, 13 December 2017 (UTC)[reply]

It's hard to be sure without knowing the scale, but the bush looks more like a kind of Cotoneaster to me. AndrewWTaylor (talk) 11:40, 13 December 2017 (UTC)[reply]
Oranges grow on trees and are typically a few inches in diameter. Assuming the step there is about 4-6 inches in height, the fruit is obviously too small for orange - maybe a quarter of an inch at most. I think AndrewWTaylor has it right. Matt Deres (talk) 13:45, 13 December 2017 (UTC)[reply]
Yes, when I looked much more closely these didn't seem like even mini-oranges ("dwarfy oranges" as used for jams etc.), but rather like a very special kind of Cotoneaster, as noted by Andrew Taylor. Thanks guys!

Thermodynamics of flame front propagation in a closed chamber (two-phase model)

I have the feeling that this should exist somewhere in a textbook, so I am asking whether someone has seen it before.

Assume a closed chamber at volume V containing a flammable homogeneous mixture of gases, which we ignite (e.g. by spark ignition). As the flame front propagates, the fraction of burnt gases increases, heat is generated by the combustion and temperature/pressure rises. Under reasonable combustion conditions, the pressure is at equilibrium between fresh and burnt gases, but the temperature is not (burnt gases are much hotter).

Experimentally speaking, accessing the instantaneous pressure inside the chamber is easy, but the transient temperature is trickier to measure. I would think that under reasonable hypotheses (described below) the pressure signal is enough to solve a simple two-phase-at-pressure-equilibrium model and recover the full history, but I have not found a ref that does so yet. The trick is that during combustion, the temperature increases, hence the burnt gases expand and compress the fresh gases (which heats them) until the pressure is at equilibrium between the two phases. Hypotheses:

  1. No mass or heat transfer through the chamber walls.
  2. Two-phase model: all burnt gases are at the same temperature, all fresh gases are at the same temperature, flame front infinitely thin, combustion is locally instantaneous and complete (i.e. the flame front propagates with a finite speed, but as it passes a chunk of gas it immediately burns it).
  3. Each phase is an ideal gas (but the γ and average molar mass are not the same for the two phases)
  4. (optional) Assume the flame front is adiabatic, i.e., when a chunk of gas burns, the heat generated is averaged over all burnt gases (including itself) but not over the fresh gases (those will still heat up by compression, though). (This is probably not strictly true, but within a two-phase model it is more physical than averaging the heat over all gases, since then the burnt gases would not be much hotter than the fresh ones, which is known to be false.)

You have of course access to all relevant constants (heat capacity, molar mass of the gases, lower calorific value for the fuel, etc.). I tried doing it by hand, but after an hour of paper-inking I had no success. Bonus point if the answer allows for a variable (but given) total volume V, but I think I can fiddle with a constant-V solution easily enough. TigraanClick here to contact me 10:48, 13 December 2017 (UTC)[reply]
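For what it's worth, the bookkeeping under hypotheses 1-3 can be sketched in a few lines once a closure for the burnt mass fraction is chosen; below I use a Rassweiler-Withrow-style linear-in-pressure-rise closure, which is a common approximation for constant volume, not necessarily the questioner's intended one. All numbers are illustrative placeholders, not measured data:

```python
import numpy as np

# Illustrative constants (placeholders, not real measurements)
V = 1e-3                        # chamber volume, m^3 (constant here)
p = np.linspace(1e5, 8e5, 50)   # "measured" pressure trace, Pa
p0, T0 = p[0], 300.0            # initial state of the fresh mixture
gamma_u = 1.35                  # heat-capacity ratio of the fresh gas
R_u, R_b = 290.0, 300.0         # specific gas constants of each phase, J/(kg.K)
m_tot = p0 * V / (R_u * T0)     # total mass from the ideal-gas law (hyp. 3)

# Hyp. 1-3: the fresh gas is compressed isentropically by the expanding burnt gas
T_u = T0 * (p / p0) ** ((gamma_u - 1.0) / gamma_u)

# Closure: at constant V, take the burnt mass fraction roughly linear in the
# pressure rise (Rassweiler-Withrow), with p[-1] the end-of-combustion pressure
x_b = (p - p0) / (p[-1] - p0)

# Each zone obeys pV = mRT at the common pressure; the burnt zone occupies
# whatever volume the fresh zone does not, which fixes the burnt temperature
V_u = (1.0 - x_b) * m_tot * R_u * T_u / p
V_b = V - V_u
T_b = np.full_like(p, np.nan)   # undefined before any gas has burnt
mask = x_b > 1e-6
T_b[mask] = p[mask] * V_b[mask] / (x_b[mask] * m_tot * R_b)
```

This recovers T_u, T_b, and x_b from the pressure trace alone; hypothesis 4 would instead enter through an energy balance that determines p(t) itself from the calorific value, which is where the paper-inking presumably gets harder.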

  • There is a massive amount of research on just this problem - it's the combustion chamber of an Otto cycle petrol engine. The transient pressure and temperature are difficult to measure - even more so to measure them across a spatially-distributed set of measurements - but it's such an important problem that serious effort has gone into doing just that. It resisted modelling for a long time, even when the computing power to do so became available, because the underlying processes weren't well enough understood - and of course, it's fluid dynamics and that's just hard.
Still the best primers I know on this are Harry Ricardo's series of books (they're not just editions, each edition is pretty much a re-write with the new understanding of that decade) The High-Speed Internal Combustion Engine. These go through the best understanding of the day, from pre-WWI to the 1960s, and cover the combustion chemistry, the instrumentation engineering, and of course the engine design as it developed through the 20th century. Sadly they're expensive for most editions (cost me a fortune to complete the set), although the last edition is a reasonably priced university textbook. The material in here is essential for anyone who really wants to understand 20th century engineering. Like some other books (Richard Rhodes' The Making of the Atomic Bomb is another) the strictly chronological treatment (across the editions) means that the development of knowledge is more clearly visible than in most textbooks presenting only the current best knowledge, and this can make it easier to comprehend, albeit lengthier to read.
As to the underlying dynamics of the situation, then that's too much to describe here. But there aren't shock waves (there can be, but they're avoided), multiple accidental ignition points ("hot spots") are avoided as they're uncontrollable for timing and most importantly, there's a significant energy transfer around by optical means. If this causes a pre-ignition ahead of the designed flame front, that's a bad thing and the cause of "knock" (WP has no article on knock, as it confuses several unrelated effects). Andy Dingley (talk) 12:45, 13 December 2017 (UTC)[reply]
I am aware that it is a difficult topic to handle seriously. I still wish to see the result of the simple, grad-student-level computation that I described above, in spite of its unrealistic assumptions (adiabatic walls, 0D model with two homogeneous zones, no radiative transfer...) that limit its applicability to real-world scenarios. This seems simple enough that it has already been done, but hard enough that I would screw up the thermodynamics myself. TigraanClick here to contact me 17:08, 13 December 2017 (UTC)[reply]

Looking for youtube video that showed a cartoon train in scenarios near the speed of light?

I remember seeing a series of 5-10 videos that helped explain relativistic speeds using a cartoon train, showing how two observers would witness seemingly paradoxical behavior, such as the train passing through a tunnel that was shorter than the train's length, even though to one observer the train appeared longer than the tunnel. It then took this paradox to further extremes, such as having two gates that would "close" for a millisecond while the train was inside the tunnel: due to relativistic effects, the gates would close at different times for one observer even though they closed simultaneously for the other. 67.233.34.199 (talk) 17:22, 13 December 2017 (UTC)[reply]
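This is the classic ladder (or train-and-tunnel) paradox. While waiting for someone to identify the video, the arithmetic behind it is short; a sketch with made-up lengths (not taken from the video):

```python
import math

# Illustrative proper lengths (made up, not from the video)
L_train, L_tunnel = 100.0, 80.0   # metres, in each object's own rest frame

# Length contraction: in the tunnel frame a train moving at beta*c has
# length L_train / gamma. It just fits when gamma = L_train / L_tunnel.
gamma_needed = L_train / L_tunnel                  # 1.25
beta = math.sqrt(1.0 - 1.0 / gamma_needed ** 2)    # required speed, as a fraction of c

# In the train frame the *tunnel* is contracted instead, so the train can
# never fit -- the apparent paradox. It is resolved by relativity of
# simultaneity: the two gates do not close at the same time in the train frame.
tunnel_in_train_frame = L_tunnel / gamma_needed    # 64 m
print(beta, tunnel_in_train_frame)
```

With these numbers the train fits the tunnel (in the tunnel frame) at 0.6c, while in the train frame the tunnel has shrunk to 64 m.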