Wikipedia:Reference desk/Archives/Science/2017 January 25
Science desk
< January 24 | << Dec | January | Feb >> | January 26 >
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
January 25
How much do minor planet spacecraft change their targets' orbits?
Has one changed a minor planet's orbit by a millimeter, or changed its year enough to make it a millimeter late or early by 2100? What about a meter? A kilometer? More?
Yes I am aware that radiation pressure and comet outgassing might change their orbits many orders of magnitude more than a millimeter. Sagittarian Milky Way (talk) 00:15, 25 January 2017 (UTC)
- A millimeter? Maybe, but how would we know? You'd need to deploy precision instruments like laser rangefinders to detect that tiny a difference; I don't think any existing craft have them. I'm pretty sure anything larger than that is safely out of the question. Gravity tractor gives a worked example of using a spacecraft to alter the orbit of an asteroid over 10 years. Compare the mass of a spacecraft like Dawn. --47.138.163.230 (talk) 01:10, 25 January 2017 (UTC)
- This seems like it could be calculated from the masses of the two objects. Or rather, you can calculate the effect if you assume no other forces are in play. As for the need to deploy precision instruments like laser rangefinders, all you need to deploy is a Retroreflector.
- The article Gravity tractor may be of interest. --Wrongfilter (talk) 08:16, 25 January 2017 (UTC)
- As an example, let's consider Rosetta (spacecraft) and comet 67P/Churyumov–Gerasimenko. Rosetta weighed a little under 200 kg (including lander). The comet weighs 1×10^13 kg, give or take, and has a maximum orbital velocity of 38 km/s. Rosetta approached 67P at a relative velocity of 7.9 m/s. Pretending for the moment that this was an inelastic collision, 67P has 4×10^17 kg·m/s of momentum, to which Rosetta could add or subtract at most 1600 kg·m/s, or 1 part in 2×10^14. Equivalently, the velocity change could be at most +/- 1.6×10^−10 m/s = 5 mm/yr. Now, spacecraft interactions aren't generally inelastic collisions (or at least you don't want them to be). In reality, most of the potential impulse is likely to have been lost due to Rosetta firing its thrusters out into space as it maneuvered to enter orbit around 67P. However, if you are looking for a millimeter in 100 years, the answer is probably yes. Looking at other spacecraft and minor planet interactions, there are probably ones with larger predicted effects than Rosetta. As long as you keep your expectations small enough, we probably have had some impact on such objects, though the effects are probably too small to reasonably measure directly (too many other confounding effects). Dragons flight (talk) 08:49, 25 January 2017 (UTC)
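The momentum estimate above can be checked with a few lines of Python; all the round numbers are the ones quoted in the thread, not precise mission data:

```python
# Rough momentum-transfer estimate using the Rosetta/67P figures quoted above.
m_comet = 1e13        # kg, approximate mass of 67P
v_comet = 38e3        # m/s, maximum orbital speed of 67P
m_craft = 200.0       # kg, spacecraft mass as used in the post
v_rel = 7.9           # m/s, approach speed relative to the comet

p_comet = m_comet * v_comet   # comet momentum, ~4e17 kg·m/s
dp_max = m_craft * v_rel      # largest possible impulse, ~1.6e3 kg·m/s
dv_max = dp_max / m_comet     # largest possible velocity change, m/s

seconds_per_year = 365.25 * 24 * 3600
drift_per_year = dv_max * seconds_per_year  # along-track drift, ~5 mm/yr

print(f"fractional momentum change: {dp_max / p_comet:.1e}")
print(f"max velocity change: {dv_max:.1e} m/s")
print(f"position drift: {drift_per_year * 1e3:.1f} mm/yr")
```

This treats the encounter as a perfectly inelastic capture, the same simplification the post makes; a real orbit insertion transfers far less momentum.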
Gerard J. M. van den Aardweg review of Sexual Preference
Hello, I'd like to know whether psychoanalyst Gerard J. M. van den Aardweg reviewed the book Sexual Preference in a scientific journal or peer-reviewed publication of any kind, and if so, where I can find that review. Thanks. FreeKnowledgeCreator (talk) 01:22, 25 January 2017 (UTC)
- @FreeKnowledgeCreator: You might try asking at WP:RX as well. TigraanClick here to contact me 14:27, 25 January 2017 (UTC)
- I can't see anything that looks like a specific book review, but apparently he included a critical discussion of the book and its conclusions in PMID 6742238. (I don't have access to the paper, so I can't say any more.) Looie496 (talk) 15:12, 25 January 2017 (UTC)
- OK, thank you. I suspect that is what I'm looking for. FreeKnowledgeCreator (talk) 20:24, 25 January 2017 (UTC)
Uncertainty represented through significant digits
The number of digits displayed in a decimal number is often used as an indication of the precision of the value. What is not generally defined is exactly how many digits to round to for a given uncertainty, if we wish to give an indication of uncertainty without being too precise about it. That is, assuming that one wishes to give a value with a known uncertainty using standard scientific representation to a useful precision (let's say in an article in WP), how many digits should one use?
So, in physics in a context where the standard uncertainty is tracked fairly closely, one could adopt a rule of the form:
- Given a number in scientific notation, and δ that is the value of a digit '1' in its least significant place, e.g. 1.23 × 10^6 gives δ = 10^4, the standard uncertainty u is required to be in the range 0.1⋅kδ ≤ u ≤ kδ.
My thoughts are:
- (a) An initial reaction might be that the least significant digit should be (mostly) correct, which is to say, k = 0.5 might be a good choice. This makes δ ≥ 2u.
- (b) Another approach might be to avoid throwing away useful information, so that the centre of the distribution is given more precisely despite the last digit being possibly inaccurate (primarily a matter of aesthetics), suggesting k = 5 or thereabouts might be a decent choice, so that δ ≤ 2u.
- (c) A happy medium might take the geometric mean of these, thus finding a balance between "mostly correct digits" and "losing little information", suggesting k ≈ 1.5.
What is a sensible value for k, and a suitable rationale? Is there a standard/conventional authoritative answer to this? Or is this trying too hard to fit a square peg into a round hole? —Quondum 19:16, 25 January 2017 (UTC)
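The rule proposed above can be turned into a small function. The following is only an illustrative sketch (the names `rounding_step` and `round_to_uncertainty` are mine, and the CODATA-style value of G is used purely as sample input):

```python
import math

def rounding_step(u, k=0.5):
    """Smallest power of ten delta satisfying u <= k*delta,
    i.e. the proposed constraint 0.1*k*delta <= u <= k*delta."""
    return 10 ** math.ceil(math.log10(u / k))

def round_to_uncertainty(x, u, k=0.5):
    """Round x so that its least significant displayed digit has
    place value rounding_step(u, k)."""
    delta = rounding_step(u, k)
    return round(x / delta) * delta

# Sample input: G = 6.67430(15) x 10^-11 (CODATA-style value).
G, uG = 6.67430e-11, 0.00015e-11
x_coarse = round_to_uncertainty(G, uG, k=0.5)  # ~6.674e-11, fewer digits kept
x_fine = round_to_uncertainty(G, uG, k=5)      # ~6.6743e-11, one more digit
```

With k = 0.5 the constraint forces the last displayed digit to be "mostly correct" (option (a) above); larger k keeps more digits at the cost of a less reliable final digit.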
- It completely depends on what is being measured and how. But for WP articles, "whatever the source says" is probably appropriate. DMacks (talk) 19:25, 25 January 2017 (UTC)
- In this instance, verifiability is not a problem. I am concerned with the construction of a template (default rounding in {{physconst}} giving CODATA-recommended values) which may find use for giving constant values in the lead and elsewhere in physics articles, where one does not want to clutter the presentation with explicit uncertainty indicators at every place the constant is given, but one does want to give a good sense of the uncertainty via the precision. —Quondum 20:13, 25 January 2017 (UTC)
- See the article Significant figures for rules when writing or interpreting numbers. Why then should an encyclopedia try "to give an indication of uncertainty without being too precise about it" by inventing a "standard uncertainty u"?
- 1.23 × 10^6 is an exact positive Integer and if stated as such, implies no uncertainty whatever.
- 1.23 +/- 0.01 × 10^6 and 1.23 +/- 0.005 × 10^6 are ranges of values, the latter having half the spread (or twice the precision) of the former. Nothing is implied about the distribution in the given range; only its limits are given. Thus 1.230 is not a more likely guess than 1.229 unless one has reason to believe that the spread has a Normal distribution - this may be true where the imprecision is due to thermal noise in physical measurements, or due to a multitude of uncorrelated random processes. Even without specific evidence, the normal (or Gaussian) distribution is often assumed for convenience when using measured data, but to apply it properly, its standard deviation σ needs to be stated instead of exact limits such as +/- 0.01 × 10^6, because the normal distribution "bell curve" actually has infinitely wide tails.
- 3.14159... is the special case of a well known Real number that is knowable but not expressible exactly. The three-dot Ellipsis symbol is a mathematical convention to show the indefinite continuation of this Irrational number and does not represent inaccuracy.
- 3.00 +/- 0.005 × 10^8 m/s is a now outdated inexact statement of the Speed of light whose exact value is 299 792 458 m/s by definition of the Metre. Blooteuth (talk) 21:56, 25 January 2017 (UTC)
- So you are claiming that 1.23 × 10^6 and 1.23717731337 × 10^6 have the exact same (zero) uncertainty? --Guy Macon (talk) 23:10, 25 January 2017 (UTC)
- Depends on what they are. If they are merely integers used in a math class, then yes, there is no uncertainty in either one. Uncertainty of an integer isn't even a well-formed concept: 1.23 × 10^6 is exactly equal to 1.23 × 10^6, and no other number, and it is also not .
- But, if these are observations of the world, reported according to our conventions on uncertainty and significant digits, then no, their uncertainties are not equal. SemanticMantis (talk) 16:21, 26 January 2017 (UTC)
- Guy Macon asked me to answer so I confirm that all of 1.23 × 10^6, 1.23717731337 × 10^6 and (where the horizontal line or vinculum indicates infinitely repeating decimal digits) are number expressions that indeed have zero uncertainty. That is not modified by the observation that only the first value 1.23 × 10^6 is a whole integer, nor by the quirk of Decimal numbering that leads to recurring digits in representing some fractions (here the fraction 9/11). It's not the fault of the poor number that we don't usually count in base-11 (so-called "undecimal" that was jokingly proposed during the French Revolution to settle a dispute between those proposing a shift to duodecimal and those who were content with decimal). In agreement with SemanticMantis but with added rigour, I contend that if the numbers are observed real-world data, then the measurer is guilty of false precision if (s)he fails to add this information, as well as giving the measurement units. Blooteuth (talk) 21:50, 26 January 2017 (UTC)
- "Experimental uncertainties should be rounded to one significant figure" and "Always round the experimental measurement or result to the same decimal place as the uncertainty." [1] - these are fairly clear and common rules for how to round and write uncertain numbers, many introductory textbooks agree. See also e.g. [2], many similar college lab course web pages will give similar, if not identical instructions. SemanticMantis (talk) 22:25, 25 January 2017 (UTC)
- SemanticMantis, not to put too fine a point on it, so far, including all the WP articles, external references and answers here, my question has not been answered. I know you to have a subtle understanding, so I urge you to re-read the question. It is clear that, when communicating clearly about the uncertainty in a measurement, notation that explicitly means x±y is best, where y typically is the standard uncertainty. This is not the question, nor is the question about how many significant digits to use for y (incidentally, CODATA figures use 2 figures, not 1). The question is also not about what the uncertainty implied by a number of digits is (anyone can determine δ as defined by me above, and multiply it by a factor). Anyone who understands the question and believes that it has an answer should be able to hazard a guess at the value k above, possibly modifying the factor 0.1 in the formula. To rephrase the question: precisely what range of values of u (=σ) is it acceptable to represent with a given value of δ, when the format is a simple decimal representation with uncertainty implied by the number of significant digits used? —Quondum 02:12, 26 January 2017 (UTC)
- Quondum Ok, I re-read. I think I have a slight sympathy for Blooteuth's "Why then.." point, but I also see your point about e.g. the lead section of physics articles. I also understand that you haven't received a direct answer to your specific question. I understand the choices you lay out for k=0.5, 1.5, 5, and their respective motivations. I think in a sense the question cannot be generally answered with clear references to scientific literature, as this is the sort of thing that a manual of style for e.g. high school textbooks might cover. Weirdly enough, WP:MOS may have some relevant comments if you dig through it. There may be something in the scholarly literature on pedagogy, but I'm not so familiar with that, and have so far come up blank.
- So then, since we've gone through lots of refs and come up short: my professional opinion (as a scientist, mathematician, educator and Wikipedian, WP:OR) is that choice b, k=5 (or greater), is the least desirable. I would not object to c), but would probably prefer a) if I were writing my own text or MOS, specifically for use when explicit uncertainties are not given and the context is not one where careful use of significant digits is expected. And yes, such occasions are indeed very common on WP, so I do think this is good to bring up. The rationale is that in this sort of less formal context, it is more important to make the last digit mostly correct than to avoid "throwing out" information. That kind of detailed, higher fidelity information is exactly what can and should be dropped in less formal contexts, as a valid way to simplify and de-clutter. I would support making this the rule for how the template displays data, and perhaps the addition of this notion at WP:MOS. Hope that helps, SemanticMantis (talk) 16:02, 26 January 2017 (UTC)
- SemanticMantis – Thank you; this is exactly what I was hoping for; difficult to imagine being more comprehensive. As you suggest, WP:Manual of Style/Dates and numbers § Uncertainty and rounding says: "Where explicit uncertainty is unavailable (or is unimportant for the article's purposes) round to an appropriate number of significant digits; the precision presented should usually be conservative." [my underline] It is difficult to imagine being more prescriptive in the MoS for the general context. Your answer, along with my guess that the typical person's expectation is (a), suggests that for the context I'm interested in (a default setting on a template), k ≈ 0.5 would be sensible. —Quondum 21:10, 26 January 2017 (UTC)
- This question was also asked on the Math Desk. One answer was given there, and the original poster responded, as shown here:
- I'm not 100% sure mathematicians are the ones to ask, and you're asking more for an opinion than a mathematical result. But generally if you give an answer to, say, 6 digits then it means you worked it to more digits and rounded. So if I write √2 ≈ 1.414214 then you can interpret it as 1.4142135 ≤ √2 < 1.4142145. If you really want to be specific about the size of the error, which is more common in science where you don't have the luxury of being able to work out the answer to arbitrary precision, then from what I've seen it's common practice to specify the uncertainty as 1 standard deviation. Presumably when you take a measurement the result is best modeled as a random variable with a Gaussian distribution where the mean is the value given and the standard deviation equals the uncertainty given. For example my copy of Abramowitz and Stegun gives the value of the gravitational constant as 6.6732 × 10^-11 N⋅m^2/kg^2 with an uncertainty of 31 in the last digits. You'd interpret this as meaning there is a 68% chance that the actual value is between 6.6701 and 6.6763. You might try our article on Uncertainty. --RDBury (talk) 18:40, 25 January 2017 (UTC)
- The normal interpretation (as given in Uncertainty) is that the actual value lies within ±δ/2 of the given value, since that assumes one simply does not wish to display as many digits as one has. That is not particularly helpful in the context of uncertainty, though. My interest is primarily in terms of how to formalize the number of digits to use in articles in WP. But I take your point, and will ask at the science desk. —Quondum 19:03, 25 January 2017 (UTC)
- Content moved here by 76.71.6.254 (talk) 01:10, 26 January 2017 (UTC)
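The "uncertainty of 31 in the last digits" notation RDBury quotes can be unpacked mechanically. A small sketch (the function name and signature are illustrative):

```python
def expand_concise(value_str, unc_last_digits, exponent=0):
    """Turn a value like '6.6732' with uncertainty 31 in its last digits
    into one-standard-deviation bounds (low, high)."""
    decimals = len(value_str.split(".")[1])   # digits after the decimal point
    u = unc_last_digits * 10 ** (-decimals)   # 31 -> 0.0031 here
    v = float(value_str)
    scale = 10.0 ** exponent
    return (v - u) * scale, (v + u) * scale

# Abramowitz & Stegun's G, as quoted above: 6.6732(31) x 10^-11
low, high = expand_concise("6.6732", 31, -11)  # ~6.6701e-11 .. ~6.6763e-11
```

Under the Gaussian assumption in the post, the true value lies in this interval with roughly 68% probability.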
Evolution of the gaps
Evolution is very credible at explaining things like small short-necked mammals developing into giraffes, why some insects look like leaves, etc. But by current knowledge there are at least 2 things that require miracles: the origin of the first cell and the origin of consciousness. The fossil record indicates a gradual decrease in complexity with age, so if the first cell arose from molecules one would expect a similar hierarchy of "proto-cells" from organic molecules to the simplest cell. But there are no fossils of such proto-cells. Also there's no evolutionary advantage to developing consciousness, and no one has a clue how consciousness is generated by the brain.
The junkyard tornado argument says the chance of the first cell coming from molecules is 1 in 10^40000. I don't know how he calculated this, but I can't imagine the chance being better than 1 in 10^100, and anything less than 1 in 10^150 simply can't happen (universal probability bound). Religion uses God to "fill in the gaps". As of now, evolution uses "nature won 1 trillion lottery tickets at the same time" to fill in the gaps. With many veridical perceptions from near-death experiences, I find evolution to have the same credibility as saying God created life.
Thomas Nagel said there must be some unknown process in nature so that life is inevitable (not God). Is there any article (for the layman) by evolutionary biologists that tackles these problems? Money is tight (talk) 23:30, 25 January 2017 (UTC)
- Evolutionary biologists generally don't write about the origin of the first cells. However that came about, it happened prior to the onset of evolution and involved processes that don't have anything to do with evolution as it currently works. See our Protocell article for what Wikipedia has to say about it. (Consciousness involves entirely different considerations, of course.) Looie496 (talk) 00:01, 26 January 2017 (UTC)
- The OP is conflating the concept of "we don't know it now" with "it is never knowable". Those are not the same ideas. --Jayron32 00:39, 26 January 2017 (UTC)
The origin of life is speculated about at abiogenesis, with the current most popular theory being RNA world. This is considered a separate topic from evolution. And where did 10^40000, or even 10^100, come from? Sure, if you're asking "what's the chance that a cell popped into existence with a functional genome and complete biochemical pathways", but no biologist is speculating that. Read those articles; they might enlighten you as to what biologists who think about abiogenesis hypothesize. As for consciousness, the problems of consciousness and free will have no bearing on the plausibility of evolution. Evolution will lead to changes in neural structure over time that make an organism more fit for its current environment. If that happens to yield consciousness, it's a happy accident. And if consciousness makes the organism more fit, then it will spread. Someguy1221 (talk) 00:41, 26 January 2017 (UTC)
- This is the problem I'm asking about: things like consciousness are hand-waved away as an accident. Are you going to hand-wave away anything you can't explain? Isn't this exactly what religion does, and why is evolutionary biology making the same mistake? I was looking for a naturalist explanation without all this hand waving of miracles. Money is tight (talk) 10:56, 26 January 2017 (UTC)
- No one said anything about miracles except you. Evolutionary biologists deal with evolution. They may be interested in other things, but that isn't part of evolutionary biology. They can explain how evolution happens, and they can explain how it may lead to changes in the brain etc. What they can't do is magically explain something which we don't understand yet, and for which there may be no direct evolutionary explanation. To be clear, as Someguy1221 has said, we can't be certain that there is a direct evolutionary reason for consciousness. Even if it's true that there's no evolutionary advantage for consciousness, if there is an evolutionary advantage for the features that lead to consciousness then it's not unreasonable that it would arise if conditions were right for these features to arise. While there is some dispute over stuff like Spandrel (biology), I think pretty much all evolutionary biologists would agree it's a mistake to assume every feature is an adaptation. (There is of course also disagreement among evolutionary biologists on how much of a problem Just-so stories are.) Once there is more understanding of consciousness, evolutionary biologists may or may not have to tackle it, but it's silly to expect them to tackle something when there is insufficient information and it's outside their field. None of this is similar to what happens with a lot of religious people, where it's often a case of "we don't know so that must mean...." or "we can't know because god...." or "well I have no explanation, but I don't need one and will never need one because god...." or "I don't understand, I'm sure that's because god...." Nil Einne (talk) 15:43, 26 January 2017 (UTC)
- See this video and the second part. Count Iblis (talk) 00:43, 26 January 2017 (UTC)
- Thanks for the videos, watching it now. Money is tight (talk) 10:59, 26 January 2017 (UTC)
- The OP seems to be referring to Junkyard tornado although it doesn't give any probabilities. However as the article explains as does Universal probability bound (which the OP linked to), such calculations are inevitably flawed. Nil Einne (talk) 01:08, 26 January 2017 (UTC)
- The only flaw in the universal probability bound is that there are more particles than in the observable universe (because the whole universe is bigger). Money is tight (talk) 10:56, 26 January 2017 (UTC)
- Did you read the articles? You can't calculate probabilities based on incorrect assumptions and understandings and then say this couldn't have happened because the probabilities are so low. Of course the idea that something couldn't have happened by chance because of probabilities needs to be applied with caution anyway. You don't even need an evolutionary biologist to tell you that. In fact, as our article sort of hints at, mathematicians are just as likely to reject those dumb concepts as evolutionary biologists are. E.g. these seem to be a mix of both [3] [4] [5] [6] [7] [8] [9]. Of course if you're not simply stating this couldn't have happened, but claiming this means a creator must have been involved, then you hit the problem Dawkins pointed out via the Ultimate Boeing 747 gambit: your proposal is actually even more fantastical. (Dawkins is an evolutionary biologist, but such claims are outside the realm of science anyway, so they aren't something evolutionary biologists explicitly have to deal with.) Nil Einne (talk) 12:35, 26 January 2017 (UTC)
- The OP is asserting creationist nonsense and expecting what? A debate? The origin of cells is easy, see Stuart Kauffman's The Origins of Order. Basically you need a big body of water with dissolved salts and lipids and some cyclic energy source to drive the concentration of chemicals into the proto-cells. When these cells become too large they will simply split, as anyone who's had a bubble bath would know.
- As for consciousness, it is a simple continuum from early neural networks through the worms up to the higher organisms. This is no mystery. (The nature of qualia is mysterious, but that's a different question.) We're being invited here to discuss the God of the Gaps, and I suggest this question be moved to the theology page. μηδείς (talk) 02:52, 26 January 2017 (UTC)
- One of the classic attributes of a creationist pseudoscientist is jumping back and forth from the origin of species to the origin of life to the origin of the universe as if those three things are connected somehow. --Guy Macon (talk) 10:00, 26 January 2017 (UTC)
- When did I say I was looking for a debate? And when was I spouting creationist nonsense? I said I was wondering if there's a materialist theory that doesn't involve the incredible luck of the first cell forming. Why has no one ever synthesized a cell in the lab yet? And again you've hand-waved away consciousness like Someguy1221 did. I already mentioned Thomas Nagel's belief that the theory of life should be such that life is an inevitable consequence of the universe, not of some extremely low-probability events. I don't even want to continue this discussion with you two given your attitudes. Money is tight (talk) 10:56, 26 January 2017 (UTC)
- To call my argument "hand waving" is to imply that your own is even relevant. For a biologist's perspective on the problem of consciousness see neural correlates of consciousness. Consciousness is broadly ignored by evolutionary biologists because it cannot be defined in such a way that we can tell what is or is not conscious, definitions which need to be very solid indeed for a discipline that often studies species that are long since extinct. That is, you can't argue over whether evolution can explain how consciousness arose, until you can explain what consciousness is. You may also be interested to read about dualism, the theory that the mind cannot be inferred from the body, which would render the entire question moot for evolution. Essentially zero neurobiologists believe in dualism these days, but instead hold out hope that a biological definition of consciousness is possible. Someguy1221 (talk) 11:09, 26 January 2017 (UTC)
- Are evolutionary biologist taking the eliminative materialism view on consciousness? I agree that we need to first explain what consciousness is, and that's a neuroscientist's job. I know about dualism, but like I said, I'm looking for a plausible conjectural materialist theory that doesn't require miracles (if anyone even has such a theory as of 2017). Dualism has major problems of its own, just like with materialism (violation of entropy is the most serious problem with dualism). Money is tight (talk) 11:17, 26 January 2017 (UTC)
- I've never met a biologist who actually believed eliminative materialism, though all neuroscientists I've spoken to about this believe that certain mental states are unlikely to be described on a biological level without major unforeseeable advances in the understanding of the brain. Until then, the theory you're looking for doesn't exist. Some people have tried, but it always involves massive hand waving, usually somewhere on the spectrum from "quantum" to "something happens". From where I'm standing in the field, there is a major division between scientists who want to measure the activity of the human brain to greater and greater accuracy to inform more accurate models that may allow such an understanding, and those who want work up from the simplest neural systems to understand the fundamentals of neural network behavior. The former seem to have an easier time getting high profile publications, although I'm partial to thinking the latter will be more successful. Someguy1221 (talk) 12:16, 26 January 2017 (UTC)
- Your first post looks a lot like Question time, where after two minutes of expounding your position on a subject for the benefit of television cameras, you ask a question because that's what the rules say must be done. The real question content of it can be summarized as
How did life appear?
, which is what should have been posted in the first place. TigraanClick here to contact me 17:48, 26 January 2017 (UTC)
...anything less than 1 in 10^150 simply can't happen...
I counter your universal probability bound (a classic creationist trope, BTW) with the anthropic principle.
- Your argument is to define a "probability that life will appear from shaking that bottle of carbon matter" (basically), observe that it is low, deduce that it is not reasonable to expect that event to have taken place by chance, and marvel at the existence of life. The problem is that for all we know, there could be 10^1000 parallel universes with similar initial conditions where, in fact, life did not appear, and because life did not appear there, we are not in position to observe those.
- There was a story in ancient Greece (shame I could not find the quote) of a cynic who visited a temple to Poseidon. One of his friends pointed to offerings from sailors that survived storms and admired the god's clemency for humans; to which the cynic answered that he could not see the offerings of those that drowned to compare. TigraanClick here to contact me 17:48, 26 January 2017 (UTC)
- Tigraan, I could not find an example from ancient Greece, but there is a well-known example from WWII.[10][11][12] If we ever find a citation supporting the Greek story, it would make a nice addition to our article on Survivorship bias. --Guy Macon (talk) 04:10, 27 January 2017 (UTC)
- @Guy Macon: - found it, though it was a bit hard to track down. I originally read the story in Montaigne's Essays, book 1, chapter 11 (search for "Diogenes, surnamed the Atheist"). The same story is here ("When someone expressed astonishment...") sourced to the Greeks (did not check the original). It is unclear whether the quote is from Diogenes or Diagoras (in Montaigne's French text it is Diagoras, in the translation I linked it is Diogenes, although there were multiple editions of the Essays). There is no mention of Poseidon in either - probably a false memory on my part. TigraanClick here to contact me 11:45, 27 January 2017 (UTC)
- First, if this reply sounds offensive it's not a personal attack, I'm ranting because I'm just fed up with the use of the anthropic principle, the multiverse, the many-worlds interpretation etc. I don't know how the !@#$ Everett got a PhD from his many-worlds interpretation, it's complete nonsensical bullshit; Everett actually said no communication is possible, so no one can ever disprove his theory. This is the same as bible thumpers telling everyone Jesus walked on water when no credible witness was there to see it. The multiverse is in a similar vein, but at least some multiverse theories allow communication by gravity. And the anthropic principle is the same as saying "I don't know wtf is going on so I'm going to ignore the problem", just like eliminative materialism.
- I was looking for exactly that: an alternative to the shaking-bottle explanation, because it's impossible. I'm watching the videos by Count Iblis and the lecturer is addressing this problem with something called "systems chemistry", exactly the kind of thing I was looking for. Clearly if materialism is correct, then there must be some unknown process that gives a high probability of assembling molecules into the first cell, and things didn't just suddenly come together with a 1 in 10^1000 probability. Money is tight (talk) 01:18, 27 January 2017 (UTC)
- Have you read Everett's PhD thesis? It's quite a lot of work he put into it. It was never supposed to be falsifiable because it's an interpretation of a theory, not a theory in and of itself. Further, Everett did something extremely important in his thesis. He proved mathematically that the theory of quantum physics works even without observers, whether or not you accept the rest of what he wrote. Someguy1221 (talk) 04:09, 27 January 2017 (UTC)
- User:Money is tight - I will happily second the book medeis has recommended above, and offer an alternative, "At home in the universe" [13]. It covers the same material and is in fact written by the same esteemed author, Stuart Kauffman. It precisely tackles the problem of the junkyard tornado, and explains how autocatalytic processes and self-organization can be seen to be a natural and expected material consequence of the physical processes acting on early Earth (or indeed in many other places). AHITU is by far the best popular science book on abiogenesis that I've seen, accessible to the masses, but written by a respected scientist and not engaging in any hand waving. If you think there is any hand waving in that book, then turn to the appropriate chapter of "Origins of Order" and you will find all the math, physical chemistry and biology in all its gory detail. Hope that helps, SemanticMantis (talk) 19:13, 26 January 2017 (UTC)
- Thanks for the recommendation. I also found the video from Count Iblis to be quite helpful. Money is tight (talk) 01:18, 27 January 2017 (UTC)
- What's amusing about "replacing" a creationist trope with the anthropic principle is that there could be nothing more explicitly creationist! We have a world where consciousness is possible, because only in such a world can we be conscious; and if that requires a miraculous mechanism, then it does not matter how unlikely that miracle is; it must occur. And it seems like a relatively sensible notion that the fundamental source of consciousness in the cosmos would itself be conscious, no? It is a veritable proof of the existence of God, really, and of God's role in planning the world we see.
- I have regaled this Desk too often with explanations of how precognition could create causality violations responsible for paranormal phenomena including qualia and free will; anyone who has not yet suffered through them will have to resort to the archives. Wnt (talk) 00:14, 27 January 2017 (UTC)
- That's a rather naive, Panglossian view of the anthropic principle. It's also backwards; the anthropic principle doesn't assign purpose to the universe, as though it were destined to create humanity to study it. Instead, the anthropic principle is merely a statement of the obvious fact that the laws of the Universe exist the way that they do, and if they didn't, the Universe would be a different place. Really kind of trite when you get down to it. --Jayron32 03:12, 27 January 2017 (UTC)
- A simple example: We have found many bodies (human and animal) buried in peat bogs and very few lying out in the open in rain forests. This would appear to be statistically improbable given how many animals die in rain forests and how few die in peat bogs but it is actually simple Survivorship bias. Animals that die in rain forests tend to not leave any remains for us to find. --Guy Macon (talk) 04:16, 27 January 2017 (UTC)
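- The peat-bog example above can be illustrated with a short Python simulation. The death counts and preservation rates here are made-up illustrative numbers, not real taphonomy data; the point is only that the sample of *found* remains can look nothing like the population of deaths:

```python
import random

random.seed(0)

# Hypothetical illustrative numbers, not real data.
DEATHS_FOREST = 1_000_000   # far more animals die in rain forests...
DEATHS_BOG = 1_000          # ...than in peat bogs
P_PRESERVE_FOREST = 1e-5    # remains almost never survive in a forest
P_PRESERVE_BOG = 0.5        # bogs preserve remains very well

found_forest = sum(random.random() < P_PRESERVE_FOREST
                   for _ in range(DEATHS_FOREST))
found_bog = sum(random.random() < P_PRESERVE_BOG
                for _ in range(DEATHS_BOG))

# Deaths overwhelmingly occur in the forest, yet the sample of remains
# we actually find is dominated by bog bodies -- survivorship bias.
print(found_forest, found_bog)
```

Despite a thousand-to-one ratio of forest deaths to bog deaths, the bog contributes far more discoverable remains, which is exactly the inference trap the survivorship-bias article describes.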
- There's more to the anthropic principle than that. It would be some sort of miracle if there was just the one universe and it was perfectly tuned for life and small variations were completely inimical. The obvious conclusions are either that fairly large variations would not be inimical or that there are lots of universes with different variations. Either of those puts constraints on physical theories. Dmcq (talk) 09:24, 27 January 2017 (UTC)
- Neither is really required at all. The past always has a probability of 100% of having happened. Events in the past did happen. The universe does exist in its current state. That we speculate on what the universe may have been like had it happened differently is fun intellectual masturbation, but those idle speculations have no effect on what has already happened. Our sense of miraculousness, and more importantly our need to explain it, is more about our own psychology than any property of the universe. The universe happily goes on whether its existence makes us uncomfortable or not. --Jayron32 13:42, 27 January 2017 (UTC)
- They are not 'required', but saying that a world of intelligent beings just happens to exist even though that has practically zero probability is simply not going to be part of any acceptable physical theory. Dmcq (talk) 12:03, 29 January 2017 (UTC)