Wikipedia:Reference desk/Archives/Science/2015 January 18
Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
January 18
Science method: If you want to compare a stock picking algorithm, a stock broker or a fund to a random number generator
How would you pick a range for the random numbers? (I suppose that experiment where monkeys throw darts is just a construct and not a thing that really happened).--31.4.154.9 (talk) 00:43, 18 January 2015 (UTC)
- I'm not so sure - most funds seem to work exactly that way. Anyway, there are many ways to set up such an experiment - for each stock in the Dow Jones, throw a die and spend your capital equally on all stocks that come up with a six. Every day, roll for all stocks you have and sell the ones that get a one, then repeat the buying procedure with the capital you just got. Or indeed arrange your 36 most interesting stocks in a six-by-six matrix, and roll once each for row and column, then put everything in the one stock that comes up (or repeat 10 times and put 10% of your capital into each stock you rolled). --Stephan Schulz (talk) 01:03, 18 January 2015 (UTC)
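For concreteness, here is a minimal Python sketch of the first die-rolling procedure described above, run against made-up price data. The ticker names, drift and volatility figures are invented for illustration, so this shows the mechanics of the strategy rather than any real performance:

```python
import random

STOCKS = [f"DJ{i:02d}" for i in range(30)]   # stand-in names for the 30 Dow stocks
DAYS = 250                                   # roughly one trading year

def daily_return():
    # Invented price model: small drift, 1% daily noise; real quotes belong here
    return random.gauss(0.0003, 0.01)

capital = 1.0     # start with one unit of money
held = {}         # stock -> current dollar value of the position

def buy_sixes():
    # Roll a die per stock; spend all free capital equally on the sixes
    global capital
    picks = [s for s in STOCKS if random.randint(1, 6) == 6]
    if picks and capital > 0:
        for s in picks:
            held[s] = held.get(s, 0.0) + capital / len(picks)
        capital = 0.0

buy_sixes()
for _ in range(DAYS):
    for s in held:
        held[s] *= 1 + daily_return()        # prices move
    for s in [s for s in held if random.randint(1, 6) == 1]:
        capital += held.pop(s)               # a one means sell
    buy_sixes()                              # reinvest freed capital

print(f"final portfolio value: {capital + sum(held.values()):.3f}")
```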
- Put a cow in a flat pen. Draw a 22 x 23 grid at the center. Draw an A outside a corner. Write ZZZZZZ on 6 squares at the opposite corner. Buy the S&P 500 stock the cow poops on first. Sagittarian Milky Way (talk) 01:36, 18 January 2015 (UTC)
- If you can't come up with anything better, you can use https://www.random.org. But actually, it's very easy nowadays to program a computer to generate numbers that are indistinguishable from random. Looie496 (talk) 02:46, 18 January 2015 (UTC)
- The point here is not the random numbers per se, but about the range of the random numbers. For example, given a stock, try to guess its price in 3 months. Should the random process pick between -10% and 10%? Or between -20% and 20% or something else? Whatever range we pick, it can make the professional investor win, or the computer program. --31.4.152.90 (talk) 05:03, 18 January 2015 (UTC)
- Ah...so you're not asking about picking the best stock from some set (which is generally what stock market people care about) - but rather to estimate the future value of a particular stock at some specific time in the future? I suppose what I'd do would be to pick a random stock from the market from that same amount of time in the past and see how much it gained or lost and assume that the stock you're interested in will do exactly that well. SteveBaker (talk) 05:25, 18 January 2015 (UTC)
- I'd make it more specific than that, by using that same exact stock, and picking a random 3-month time period in its past, and figuring the return there, using that as your estimate for the next 3 months. Thus, if you're following a utility stock in a stable market, you aren't likely to predict wild swings, if it's never done that in the past. StuRat (talk) 05:42, 18 January 2015 (UTC)
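A minimal sketch of that same-stock resampling idea, assuming the history is available as a plain list of daily closing prices; the price series below is synthetic, and the 63-day horizon is a stand-in for three months of trading days:

```python
import random

def random_history_forecast(prices, horizon=63):
    """Forecast the next `horizon`-day price by sampling the return over a
    randomly chosen window of the same length in the stock's own past.
    `prices` is a list of historical closing prices, oldest first."""
    if len(prices) < horizon + 1:
        raise ValueError("not enough history")
    start = random.randrange(len(prices) - horizon)
    past_return = prices[start + horizon] / prices[start] - 1
    return prices[-1] * (1 + past_return)

# Hypothetical usage with made-up, gently drifting utility-stock prices:
history = [100 + 0.02 * i + random.gauss(0, 0.5) for i in range(500)]
print("3-month forecast:", random_history_forecast(history))
```

A stable stock's history contains only small moves, so the forecasts it produces are correspondingly tame, which is exactly the point of the suggestion above.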
- The problem is that the more clever statistical tricks you add, the less this is a random prediction - and the closer you get to doing whatever it is that stock market experts actually do. The OP's goal is to discover whether pundits get closer to accuracy than chance would predict. The problem is that we can't really say "what chance will predict" without knowing more about the realms of allowed randomness. If you ask that the algorithm pick at random over a linear range from a 1,000,000% increase to the stock price hitting zero - then there is no chance in hell that it'll beat the expert. On the other hand, if you guess between a 10% increase and a 10% decrease, then the chance of beating the expert will obviously be much better. That's because we've applied a bit of knowledge about how the stock market works (the likely range of stock price change) - and now the random number generator isn't purely random. As we apply more and more actual knowledge about the market, we would presumably bring the algorithm more and more closely into alignment with a market expert...because, in the end, we'd be using the same algorithm that they use.
- So we have a slippery slope. We can say that a truly random number between positive infinity and zero has a zero percent chance of beating an expert. We can say that an algorithm that uses the same tricks as an expert and then randomly adds or subtracts 0.000001% to the prediction will perform (on average) exactly as well as an expert. Somewhere between those two ranges of built-in skill, perhaps the randomness improves on the expert...but that point is unlikely to be a truly random algorithm. (A Monte Carlo sketch of this range effect follows below.)
- SteveBaker (talk) 06:01, 18 January 2015 (UTC)
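To make that range effect concrete, here is a small Monte Carlo sketch in Python. Everything in it is invented for illustration - the distribution of the "true" three-month move and the expert's error are assumptions, not market data - but it shows how the width of the random guesser's range alone changes how often it beats an informed guess:

```python
import random

TRIALS = 100_000

def trial(guess_range):
    # Hypothetical 3-month return and a noisy-but-informed expert guess
    true_move = random.gauss(0.02, 0.08)
    expert = true_move + random.gauss(0, 0.06)
    guess = random.uniform(-guess_range, guess_range)
    # Did the uniform random guess land closer to the truth than the expert?
    return abs(guess - true_move) < abs(expert - true_move)

for r in (0.1, 0.5, 10.0):
    wins = sum(trial(r) for _ in range(TRIALS))
    print(f"range +/-{r:>4}: random beats expert {wins / TRIALS:.1%} of the time")
```

With these made-up numbers, the tight plus-or-minus 10% range wins a respectable fraction of the time, while the absurdly wide range almost never does - the point being that the result measures the choice of range as much as the expert.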
- This doesn't answer the question, but if the expert is so bad (worse than chance) then why don't you just copy the expert but in reverse? (Without telling him.) An expert would have to perform as well as chance for there to be no way to time the stock market. So short when he says to buy, and buy when he says sell. This would only work if the prediction industry has made enough predictions that it's very likely their worse-than-chance record is not itself due to chance (and that they really are worse than chance at all). I don't know; it's very easy for a study to exist where random stock picks perform better than experts if the sample size is fairly small. Those get reported on because they're counterintuitive. Whether this is true over all the known expert stock-pick data in history I don't know.
- An interesting thing would happen if enough people start doing this. What would happen to the stock prices? The experts' predictions might even suddenly become good if over half the money is betting against them. Sagittarian Milky Way (talk) 07:04, 18 January 2015 (UTC)
- I believe the way it typically works out is that, while an expert's predictions are slightly better than random, his stocks actually do worse, because he takes out a management fee that is more than his advice is worth. So, doing the opposite of his advice wouldn't work, unless he also then paid you his management fee. StuRat (talk) 07:36, 18 January 2015 (UTC)
- Certainly there is also a Heisenberg-like effect going on here, where calculating the value of the stock changes its value and thereby invalidates that original calculation. If a prominent stock analyst says "This stock is cheap...buy it now!" then that causes a lot of people to do exactly that, which drives up the price of the stock beyond the point where it's a bargain - and possibly beyond the amount that it's really worth. This makes the prediction seem like a very good one to the 'early adopters' who jumped in at the bargain price and sold when the price went beyond the company's worth - but a very bad prediction to people who waited a while before following the advice. I suppose this is why people pay stock analysts to work just for them - and why following stock tips on TV shows and such is likely to be a bad idea.
- The problem with stock markets is that they have very little to do with what the stocks are really worth and very much to do with what stock buyers and sellers think that they ought to be worth. The gap between perception and reality is where the money is to be made - and that gap is caused by the very people who benefit (or lose) from the gap.
- I very much agree with StuRat about management fees. This has been the downfall of many "day traders" because they not only have to make a profit - but find stocks that will earn more than the transaction fees. The classical approach to owning stocks - where you buy them, keep them for many years, and then sell them gives you the scope to achieve much larger gains, which make the transaction fees somewhat negligible. However, that's not a "get rich quick" approach which too many people are looking for these days.
- If you stop to think about it - why would a stock suddenly be worth a lot more than it was the day before? What changed about the bricks and mortar, the people and the intellectual property of the company to make it change worth by so much so quickly...and what made that happen more rapidly than other stock market experts had already predicted? Sudden changes aren't changes in real value - they're changes in long-term predicted value...and now you're in the realms of the Heisenberg effect where stocks are changing value because they're changing value, and not for any real reason.
- If you take the idea that these aren't real changes in company worth, then the fluctuations in price can only average out to a net zero gain. Now, for every $1 that someone wins by clever trading, someone else loses $1 due to not-so-clever trading. Hence if you average the performance of ALL stock analysts, they can only (on average) do as well as the underlying businesses are (on average) gaining real value. Since the average of all stocks is growing slowly - a random stock picker will (on average) do precisely as well as an average broker.
- Hence, there are two ways to gain on the market - pick a stock in a strong company, ride it out for years as the genuine value of the company grows - or bounce around on the random fluctuations in the market caused by everyone else who is doing the exact same thing as you are. In the first case, it's fairly easy to pick winners - but you have to wait a long time to get your winnings - and you might as well bet on the economy as a whole by buying into a financial instrument that is the average of a lot of different stock prices. But in the second case, you're relying on someone who purports to be able to guess how everyone else in his business is guessing - and we know that for every winner, there has to be a loser. That's a zero-sum game, and averaging all of those experts together will reveal only the underlying growth of the economy. So you can't meaningfully average all of those experts together; you have to ask whether an individual broker does better than chance over time.
- SteveBaker (talk) 18:04, 18 January 2015 (UTC)
- This has actually already been the subject of considerable study, most famously by Daniel Kahneman. He won a Nobel Prize in economics and lots has been written about his work. Vespine (talk) 04:54, 19 January 2015 (UTC)
What are the most frequently written about species which don't have articles on Wikipedia?
Here's my answer:
Animals (chordates only):
- Girella nigricans - Opaleye or Rudderfish
- Tor tor - Deep bodied masheer or Tor mahseer (many synonyms[1] including Barbus megalepis)
- Salpa fusiformis - common salp
- Embiotoca jacksoni - Black perch or Black surfperch or Butterlips
- Nandus nandus - Gangetic leaffish
- Thalia democratica - (appears to have no common name other than salp)
- Ammodytes americanus - American sand lance
- Phallusia mammillata = Phallusia mammilata (misspelling) - white sea-squirt or warty sea squirt
- Epinephelus guttatus - Koon or Red hind
Plants:
- Nicotiana glutinosa (Solanaceae, Solanales) -- tobacco. Nicotiana
- Scirpus lacustris (synonym) = Schoenoplectus lacustris (Cyperaceae, Poales) - Scirpus
- Larrea divaricata (Zygophyllaceae, Zygophyllales) - Larrea
- Begonia semperflorens (synonym) = Begonia cucullata - clubed begonia (Begoniaceae, Cucurbitales) - Begonia
- Nicotiana plumbaginifolia - Tex-Mex tobacco (Solanaceae, Solanales) - Nicotiana
- Pelargonium zonale - Horseshoe geranium (Geraniaceae, Geraniales) - Pelargonium
- Cola nitida - Großer Kolabaum (German, "great kola tree") (Malvaceae, Malvales) - Cola (plant)
- Sterculia urens (Malvaceae, Malvales) - Sterculia
And here are more chordates and a link to notes and methodology... —Pengo 03:34, 18 January 2015 (UTC)
- I think you're in the wrong place -- we can't really answer that, but the attempt could help people improve Wikipedia. Recommend you go to WP:WikiProject Zoology, WP:WikiProject Botany, WP:WikiProject Taxonomy etc. and see how you can help out. Wnt (talk) 03:55, 18 January 2015 (UTC)
- If what makes these notable is that they're the most commonly written-about species that don't have a wikipedia entry, we have a bit of a catch-22.... --89.133.6.76 (talk) 12:05, 18 January 2015 (UTC)
- I get your objection, but the lack of WP articles isn't really a part of the notability. These are just a list of spp that are notable, but not nearly so famous as e.g. C. elegans or E. coli. I agree with Wnt that this list should be shopped around at the appropriate project pages. Good list OP, and thanks for helping! SemanticMantis (talk) 17:20, 18 January 2015 (UTC)
Why does this happen? Certain other populations do not have this phenomenon. 174.3.125.23 (talk) 04:33, 18 January 2015 (UTC)
- It's called sunbleaching, but we don't seem to have an article on it under that name; we do have one for surfer hair, which mentions sunbleaching. (BTW, there's no need to leave a lined-out typo in the title, so I removed it for you.) StuRat (talk) 04:38, 18 January 2015 (UTC)
- This post at "Ask a Scientist" answers the OP's question directly. This answer to, again, almost the exact same question, is more in depth, and discusses the chemistry of sun bleached hair in detail. Sun damages melanin in both skin and hair; skin, being alive, makes more melanin in response to the damage; sun tanning is a form of "overcompensation": the skin makes more melanin than was initially damaged, resulting in darker skin. Hair, being dead, cannot regenerate the damaged melanin, so it stays damaged. Sun damages all forms of melanin in all hair, but in people with very little melanin to begin with (blondes), it shows much more than in people with darker hair, where the damage may not be as noticeable because they have so much more melanin in their hair to begin with. At least, that's basically what these sources seem to be saying. If you search for "chemistry of sun bleached hair" in your favorite search engine, you get a lot of hits, which seem to broadly agree with this process. --Jayron32 04:46, 18 January 2015 (UTC)
sane bounds on how much information the human brain can possibly store, and how much computational power it can possibly have
Hi,
I'd like some sane upper bounds on the amount of information that the human brain actually stores (uses), and how much computational power it can possibly have. Here, I'll give some terrible bounds first. If you Google "how many atoms in the human body" Google snippets you this,
> "In summary, for a typical human of 70 kg, there are almost 7*10^27 atoms"
So, clearly 1 atom in the human body does not store 1 terabyte and do 1 teraflop of computation. So the storage limit of the human brain is 7*10^39 bytes (27 plus 12 for tera) and the computational limit is... wait, okay, I don't even know what the general concept is for a certain level of computation (the way we can just call information storage in terms of bytes). You'll have to tell me that too.
But mostly can you give me some *saner* bounds than the above? I want some more reasonable bounds, likely based on neural count maybe? 212.96.61.236 (talk) 08:19, 18 January 2015 (UTC)
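On the unit question raised above: rates of computation are conventionally measured in operations per second, most often FLOPS (floating-point operations per second). Here is the question's own deliberately absurd upper bound spelled out in a few lines of Python, just to fix the arithmetic:

```python
# The question's worst-case bound: every atom in a 70 kg body treated as a
# 1-terabyte, 1-teraFLOPS device. Absurd, but it brackets the answer.
atoms = 7e27                   # atoms in a 70 kg human (figure quoted above)
storage_bound = atoms * 1e12   # bytes, at 1 TB per atom
compute_bound = atoms * 1e12   # operations per second, at 1 TFLOPS per atom
print(f"storage bound: {storage_bound:.0e} bytes")   # 7e+39
print(f"compute bound: {compute_bound:.0e} FLOPS")
```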
- This Scientific American article estimates the memory capacity at around 2.5 petabytes, though some different estimates are given in the comments. It's hard to estimate because (1) the brain isn't a digital computer; and (2) we don't know how memories are stored. AndrewWTaylor (talk) 13:32, 18 January 2015 (UTC)
- The calculation in that Scientific American thing is nonsense, as some of the comments point out. That number is quite a bit too high. Most neuroscientists believe that synapses are the brain's main memory elements. The human cerebral cortex is generally estimated to contain around 10 billion neurons, each of which holds around 10,000 synapses. This gives a total of around 10^14 synapses. If each synapse is a binary element capable of holding one bit of memory, we get a total capacity around 10 terabytes. I myself believe that the usable capacity of a single synapse is only a small fraction of a bit, because of the amount of noise in the system: I favor a total memory capacity in the range 100 gigabytes to 1 terabyte. To get a number over 1 petabyte you have to make assumptions about the precision of operations in the brain that don't accord with anything we know about brain structure. Looie496 (talk) 19:15, 18 January 2015 (UTC)
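The estimate in the post above is easy to reproduce; here is a short Python sketch. The neuron and synapse counts are the figures quoted in the post, and the two bits-per-synapse values bracket the disagreement (a full bit versus a noise-limited hundredth of a bit) - they are assumptions, not measurements:

```python
# Synapse-counting capacity estimate, per the post above.
neurons = 1e10                  # cortical neurons (estimate quoted above)
synapses_per_neuron = 1e4
synapses = neurons * synapses_per_neuron     # ~1e14 candidate memory elements

for label, bits in [("1 bit per synapse", 1.0),
                    ("0.01 bit per synapse (noise-limited)", 0.01)]:
    capacity_bytes = synapses * bits / 8
    print(f"{label}: ~{capacity_bytes / 1e12:.2f} TB")
```

The two cases come out near 12 TB and 0.12 TB, matching the "around 10 terabytes" and "100 gigabytes to 1 terabyte" figures above.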
- Looie496, did you look much into this? (Because if so I would like to ask follow-up questions - can you tell me how much you know :).) Your numbers overall agree with mine (but are a bit lower). However, one thing that you do not account for that I would have expected is - isn't one of the main sources of information in a neuron the question of WHICH other neurons it has synapses with? It is not like you can just enumerate the neurons and then for each one, store 10,000 binary values without addresses (which is 1 kB per neuron.) You have to account for the addressing, i.e. which other neurons is it in contact with? Don't you? If so it severely impacts the per-neuron size of the data. However, I would also be highly interested in what kind of a connection a single synapse can consist of - how much information can a single synaptic connection represent, really? (Not "on average" but a given one.) You seem to be going for a lower bounds whereas I am trying to think of bounds that are sane but perhaps account for a bit more data. --212.96.61.236 (talk) 22:34, 18 January 2015 (UTC)
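For a feel of the addressing overhead the poster is asking about: naively, specifying which of ~10^10 neurons a synapse targets costs log2(10^10) ≈ 33 bits per connection. The sketch below is illustrative arithmetic only - real wiring is highly structured, so its true information content is far below this random-wiring ceiling:

```python
import math

# Bits needed to name a synapse's target, if wiring were arbitrary.
neurons = 1e10
synapses = 1e14
address_bits = math.log2(neurons)      # ~33.2 bits per connection
total_bytes = synapses * address_bits / 8
print(f"{address_bits:.1f} bits per address, "
      f"~{total_bytes / 1e12:.0f} TB to describe all wiring")
```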
- There really isn't any substantial data to account for. We have very strong evidence that specific types of synapses function as memory elements, but we have only a very sketchy understanding of how they interact with each other at the system level. In principle a modifiable synapse could be used to store several bits of information, because its strength is determined by the amount of neurotransmitter released and the density of postsynaptic receptors, and both of those can be varied continuously. However, there is at least a certain amount of evidence that modifiable synapses in the cerebral cortex are actually binary: they are silent until they are used to store information, and are enhanced in a single quantum step. Anyway, the business about addressing comes into play when you are trying to understand how the memory is used, but it isn't very important for forming a crude estimate of capacity. For example, to compute the amount of RAM a computer has, you just count the number of memory elements on the chip -- if you start worrying about how they are addressed, you'll just get sucked into a black hole of complexity. (I've been a bit incoherent here, but I hope this is helpful.) Looie496 (talk) 05:16, 21 January 2015 (UTC)
What is the mechanism of morphine's effect on breathing?
According to what I know, morphine affects the respiratory system and depresses it. What is the mechanism of this depression? 149.78.16.22 (talk) 08:27, 18 January 2015 (UTC)
- See Morphine and μ-opioid receptor. According to the latter article, "the physiological and pathological roles of these two distinct mechanisms remain to be clarified", but, unfortunately, it doesn't state what the "two distinct mechanisms" are. This may be an opportunity to improve the article... Tevildo (talk) 11:33, 18 January 2015 (UTC)
Heat discolouration (?)
I've recently arrived in the 20th century, and got a kitchen with a dishwasher. I noticed that one of my ancient but favourite stainless steel pots now shows an interesting and not unattractive discolouration pattern when coming out of the dishwasher - I suspect this is due to the heat, but this is just a guess. Since I'm old, any change is bad, of course ;-). So what causes this change, what mechanism generates the colours (in particular, is this a chemical or a physical change, and if the first, is it just the alloy restructuring itself, or is it a reaction with the detergent?), and how concerned should I be? --Stephan Schulz (talk) 13:32, 18 January 2015 (UTC)
- The colors in Stephan's photo do look like that, but a dishwasher contains water in the liquid state, so its contents can't possibly reach the temperatures mentioned in that article. --65.94.50.4 (talk) 14:31, 18 January 2015 (UTC)
- It's a pot. It has been used for cooking, most likely. --Kharon (talk) 14:39, 18 January 2015 (UTC) P.S. And if a metal object is tempered without cleaning its surface very diligently beforehand, it will likely not show an even tempering color. Instead it will look like in the picture. --Kharon (talk) 14:48, 18 January 2015 (UTC)
- You're very likely seeing iridescence caused by some sort of thin film coating the surface. My guess is a soap film, but it could be some sort of protective coating that has been altered by the heat of the dishwasher. Looie496 (talk) 15:28, 18 January 2015 (UTC)
- The thin film coating the surface is oxidation; the color changes are in that oxidized film. It's sometimes called "heat tint". --jpgordon::==( o ) 16:57, 18 January 2015 (UTC)
- Yes, it looks very similar to the patterns from a thin film of oil floating on water, in sunlight. StuRat (talk) 16:59, 18 January 2015 (UTC)
- If it were tempering, scrubbing wouldn't help at all to remove it, while these stains can generally be removed with stuff like Bon Ami or Bar Keeper's Friend (and a fair amount of elbow grease.) --jpgordon::==( o ) 18:26, 18 January 2015 (UTC)
- Thanks all. It's certainly not soap or anything like that, and the pot is not coated. What seems to be most plausible is Jpgordon's suggestion - the dishwasher only reaches 55 °C or so, but maybe the detergent promotes oxidisation. --Stephan Schulz (talk) 21:33, 18 January 2015 (UTC)
- Tempering is certainly a possibility, especially if, for instance, the pot was accidentally left without any water to boil in it for even a minute or two. Natural gas burns at 900-1500 degrees Celsius, according to Flame, more than enough to heat the metal beyond the important temperatures; the range for the colors you see there is listed at the tempering page as 250-350 degrees Celsius. Your pot does not appear to be particularly thick-bottomed, meaning even a short exposure to direct flame, without food or water to transfer the heat into, could make it very hot.
Heat transfer
It's quite bad that I can't do this, because I've just graduated with a mech. engineering degree, but someone came to me with this problem, and no one I've asked can give me an answer.
I have a vessel which is hot inside, and cold outside. How long will it take for the temperatures to reach equilibrium?
Say we have a tube - radius 6 m; length 15 m; wall thickness 0.06 m. On one end is a hemisphere - same radius; same wall thickness. On the other end there is no heat transfer in or out. Therefore, the surface area over which heat can be lost is (15*2*pi*6)+(2*pi*36) = 792 m^2
Inside the vessel is air at 30 °C, and outside is water at 3 °C. Thermal conductivity is 34.3 W/(m·°C). Cheating by using this website (http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/heatcond.html), we get the heat conduction to be 12 MW.
After that, I don't really know where to go to get the time to thermal equilibrium. I think the heat transfer formula is required - Q = m*C*dT (which is offered here: http://formulas.tutorvista.com/physics/heat-transfer-formula.html)
I see it requires specific heat capacity, which for air at 30C is 1.005 kJ/kgK (I think), and for water at 3C is 2.407 kJ/kgK (I think). The volume of the vessel is 2102 m^3, so the mass of air is 2102000 kg. The mass of water is effectively infinite (it's in the sea)
And at this point I'm totally stuck. Can anyone help? Thanks!! 92.237.191.99 (talk) 15:34, 18 January 2015 (UTC)
- If you insist on strict theoretical equilibrium, it will take forever. That dT is the temperature differential, so as the two temperatures equalise, the heat flow goes towards zero. In other words, your hot tube will approach but never reach the temperature of your infinite ocean. I also think some of your numbers are off. The density of water is 1000 kg per cubic meter, but for air it's only 1.292 kg/m^3 (at 0 °C; 1.164 at 30 °C), so the mass of air is only a few tonnes. With those numbers, you need to decide which temperature difference you call "equal" and then integrate heat loss to that point. Sorry, I don't solve even simple differential equations, but the resulting temperature curve should be of the form T(t) = T_water + (T_0 - T_water)*e^(-t/tau), unless I'm stupider than I hope. --Stephan Schulz (talk) 16:45, 18 January 2015 (UTC)
- Agreed. The rate of temperature change is proportional to the difference in temperatures, which results in the temperatures approaching each other asymptotically, whether or not one is considered an infinite temperature sink. StuRat (talk) 16:50, 18 January 2015 (UTC)
- (edit conflict) You need to set up (and solve) the differential equation for heat flow through the material of the vessel. The solution for the temperature difference will be an exponential with a negative exponent of time, so in theory the temperatures will never be quite equal, but you can find the time to reach effective equilibrium with perhaps a tenth of a degree difference between inside and outside. Strictly, you also need to set up differential equations for heat flow within the air inside the container, and for the water outside, but you can probably assume that there is sufficient convection in the latter fluid to be able to ignore the limitations of conduction through the fluids (it depends on how complicated you wish to make the mathematical model). Dbfirs 16:52, 18 January 2015 (UTC)
Lol, physicists. I love these answers. "If you insist on strict theoretical equilibrium, it will take forever. If you're just interested for practical reasons, I'd say you'll be cold by morning." Only you didn't even try to estimate that for him - seriously, he said "it's 30C, and outside is water at 3C" - don't you guys want to at least *guess* how long until it's cold in there? 30 degrees is a very hot room, 3 degrees is near freezing, and he's told you how conductive (i.e. how well-insulated) the stuff is as well as the exact dimensions. As helpful as 'forever' is, I think it's a cop-out from actually estimating anything at all - as he was just asked to do, since he's a freshly minted mech engineer :) :) 212.96.61.236 (talk) 17:53, 18 January 2015 (UTC)
- No - you could not be more wrong! That is a severe misstatement of the situation. It's a serious consideration for practical engineering as well as for physicists. You simply cannot approximate this one.
- If you say that the temperatures have to be equal to within 1 degree, the answer will be VERY different than if you demand a half degree or a tenth of a degree. The rate of cooling depends on the temperature difference, so the smaller the difference you require for "equilibrium", the longer it'll take. Because the decay is exponential, the time to reach a threshold is t = tau * ln(dT_0 / dT_final): each halving of the allowed temperature difference adds another fixed chunk of waiting time, so the answer grows without bound as the threshold shrinks. Your completion criterion is not a small irrelevance...the answer can literally be anything from zero to the life of the universe depending on your choice of allowed difference. If anyone tries to tell you that the answer is "about 3 hours" without specifying the final temperature discrepancy - then they are either incompetent or lying. It's not a matter of overly-picky theoretical physicists versus practical engineers.
- This kind of failure to specify critical details is why you see so much "false" advertising on TV - saying "Our product is two times as good!" is meaningless without saying two times as good as what. So saying "This insulation barrier prevents your house from getting as cold as the outside world for 10 hours after the heating is turned off!!" is entirely meaningless, because your competition can say "Well, ours prevents that from happening for 100 hours!!"...since neither of them stated the final temperature difference, these statements tell you nothing whatever about which insulation solution is better.
- When you're dealing with a system whose response is exponential rather than linear (such as this one) - details matter a heck of a lot.
- Did you even read what I wrote? Why would ANYBODY care about equilibrium - in an ocean - to a tenth of a degree or half a degree? Pick a number, such as a couple of degrees, and give him an answer. ("Noo!!!!") The huge difference is whether the thing cools down in a few minutes, an hour or two, several hours, or days. Pick some numbers and figure it out; don't just leave the guy hanging with "forever, bro." Contextualize. You can give several numbers, of course. 22:27, 18 January 2015 (UTC) — Preceding unsigned comment added by 212.96.61.236 (talk)
- The OP specified equilibrium, not the replies. This is the sort of question that mathematicians, physicists and engineers could argue over for days. We could have given an accurate "half life" (a second or two) for the temperature difference between the air touching the inside surface and the water outside, but this would ignore the variations in the temperature of the inside air that are more significant than I'd anticipated. Did you read Dragons flight's excellent answer below? Dbfirs 23:09, 18 January 2015 (UTC)
In general, for a simple box model assuming uniform temperature inside and out, you are looking for:
m*c*dT/dt = -(k*A/d)*(T - T_sea)
Which implies:
T(t) = T_sea + (T_0 - T_sea)*e^(-t/tau), with time constant tau = m*c*d/(k*A)
In your case, that gives:
tau ≈ (2440 kg * 1005 J/(kg·K) * 0.06 m) / (34.3 W/(m·K) * 792 m^2) ≈ 5 s
So, the simple box model says it will take about a few seconds to get cold. If that bothers you, and it should, then it is time to reexamine assumptions. In this case, the issue is with assuming a uniform temperature within. The inside skin of your metal tube will rapidly approach the same temperature as the sea water, but the temperature of your container will primarily be determined by heat transfer through the air (conduction / convection).
For a rough estimate, we can recompute the time constant above, but using the thermal conductivity of air (about 0.026 W/(m·K), more than a thousand times smaller than the steel value), which stretches the time constant from seconds to roughly a couple of hours. This considers only conduction and only does so crudely, so should be taken as an upper bound. A somewhat better estimate is obtained by replacing k/d with an effective convective heat transfer coefficient h, so that tau = m*c/(h*A). Let's say 5 W/(m^2·K), which gives a time constant of roughly ten minutes. That illustrates that there is a very large difference between thermal loss due to conduction and that due to convection. Convection is of course controlled by geometry (including orientation relative to gravity) and temperature differences; however, in most circumstances in air, it is safe to assume convection wins. So in short, I would guess that most of the heat is lost within 30 minutes or so (i.e. several times the estimated convective time constant). If you want a more precise answer than that, you will need to model the temperature distribution in the air explicitly and probably turn to computer models. (Or you could build the thing and run tests.) However, if you want to keep the inside warm for a long time, then an extra layer of insulation on the interior surface would definitely be a good idea. Dragons flight (talk) 19:40, 18 January 2015 (UTC)
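Putting the thread's arithmetic in one place, here is a minimal Python sketch of the lumped ("box") model. The air density and heat capacity are standard textbook values assumed here, the h = 5 W/(m²·K) convective coefficient is the guess from the post above, and the 1 K cutoff is an arbitrary choice of "effectively equal", per the earlier discussion:

```python
import math

# Lumped model of the vessel cooling into the sea, using the thread's numbers.
area = 792.0        # m^2, heat-transfer surface from the question
volume = 2102.0     # m^3, vessel volume from the question
wall = 0.06         # m, wall thickness

rho_air, c_air = 1.16, 1005.0    # kg/m^3 and J/(kg K) for air near 30 C (assumed)
mass_air = rho_air * volume      # ~2400 kg, not the 2.1e6 kg in the question
dT0 = 30.0 - 3.0                 # initial temperature difference, K

k_steel, k_air, h_conv = 34.3, 0.026, 5.0   # W/(m K), W/(m K), W/(m^2 K)

# tau = m c / (U A); time to a residual difference dT_f is t = tau * ln(dT0/dT_f)
for label, U in [("conduction through steel wall", k_steel / wall),
                 ("still-air conduction (crude upper bound)", k_air / wall),
                 ("convection with h = 5", h_conv)]:
    tau = mass_air * c_air / (U * area)
    t_1deg = tau * math.log(dT0 / 1.0)    # time until only 1 K of difference left
    print(f"{label}: tau = {tau:,.0f} s, 1 K left after {t_1deg / 60:,.1f} min")
```

With these numbers the convective time constant lands near ten minutes and the 1 K point near half an hour, consistent with the "30 minutes or so" estimate above.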
Thank you all! To be honest, I think I've forgotten practically everything from my degree, because all of that is beyond my ability (although I know I've learnt it before). With that, I'm going to relearn what all that is.... Thanks again! 92.237.191.99 (talk) 23:27, 21 January 2015 (UTC)
Human evolution
Hello,
I would like help in understanding the followings in simple terms please.
1)
Genetic studies show that primates diverged from other mammals about 85 mya in the Late Cretaceous period, and the earliest fossils appear in the Paleocene, around 55 mya. The evolutionary history of the primates can be traced back 65 million years.
Q: Are they talking about one or two things by these dates? Diverging and evolution here, as it's stated - are they not the same thing?
2)
I would like to know how evolution occurs. Don't you need a different gene? How did they go from 'H. erectus' to 'H. heidelbergensis' to 'H. antecessor' and so on in the first place? I understand that environmental changes could be a cause of the skeletal changes resulting in different 'homos', but after the first migration of H. erectus some stayed in Africa and evolved into many other homos. How?
(Russell.mo (talk) 17:01, 18 January 2015 (UTC))
- 1) I can't be sure without knowing what you're quoting from, but I suspect the 85 mya divergence is calculated based on studies of genetics, while the 65 mya is talking about actual fossils that can be classified by morphology. Some of the genetic vs. fossil evidence is described at Human_evolution#Evidence.
- 2) Natural selection only really needs a pool of different alleles to act upon. Some alleles will change in frequency, and some new alleles will develop through mutation. Speciation can occur through sympatric speciation or allopatric speciation (and a few other ways). You also might want to read about gene fixation. Basically all the forms of speciation have to do with some type of (perhaps functional, not physical) reproductive isolation. With early hominids, the reproductive isolation could be in part cultural, e.g. sexual selection. SemanticMantis (talk) 17:16, 18 January 2015 (UTC)
- The point of divergence is a very fuzzy line - it could be the smallest change in bone or tooth shape. At the literal instant when the first animal was born that was the ancestor of all humans but the ancestor of no other primate, that animal would have been almost indistinguishable from either of its parents. To be able to examine two fossils and say that because of some tiny bump on this or that bone, this was that point of divergence is impossible. So we have to wait for the divergence to be large enough to be clearly distinct. So examining the fossil record can't ever give you an exact date. Also, fossils are relatively rare - it's quite possible that no animals of that species were ever fossilized - or if they were, that the fossils are embedded in the middle of rock that we'll never excavate.
- Worse still, when we find a fossil, we have to assign a date to it. That's tough because the fossil may contain none of the original material that made up the animal, so techniques like carbon-dating don't work. Instead, we have to do tricks like looking at what layer of rock the fossil was found in and use that to get an approximate date...but that too is a somewhat fuzzy measurement. Suppose the animal died by drowning in a river that had cut its way through the surrounding rocks - that would lead you to think that it was in a layer considerably older than you'd otherwise say.
- An alternative way to estimate that date comes from examining the DNA of humans and of species in the same family tree. We can simply count the number of differences in our DNA and use a rule-of-thumb that says "such-and-such number of base pairs change every hundred thousand years" - and use that to calculate the point of divergence...but that's also a very approximate approach (a toy version of this calculation is sketched below).
- Since both techniques are inaccurate - it's unlikely that they'll agree very closely.
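As mentioned above, the rule-of-thumb ("molecular clock") calculation looks like the toy Python below. Every number in it is invented for illustration - the real rate is uncertain for exactly the reasons given above - and the factor of two is there because mutations accumulate along both diverging lineages:

```python
# Toy molecular-clock estimate; all figures are hypothetical placeholders.
differences = 1500               # differing bases between the two sequences
bases = 16500                    # roughly the length of the mtDNA genome
rate_per_base_per_year = 1e-8    # assumed average substitution rate

fraction_changed = differences / bases
divergence_years = fraction_changed / (2 * rate_per_base_per_year)
print(f"estimated divergence: {divergence_years / 1e6:.1f} million years ago")
```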
- "Since both techniques are inaccurate - it's unlikely that they'll agree very closely." - unlike boolean errors. Which are so close to agreeing, half the time you don't even need to debug them at all! --212.96.61.236 (talk) 17:59, 18 January 2015 (UTC)
- I'll read through the articles SemanticMantis, thanks. I don't know where I got it from, my work is mixed up. Thanks for the summary.
- @SteveBaker: Are you aware of mtDNA? Apparently it provides satisfactory results (dating things up to 200,000 years). If so, is this correct? Because your explanation demolishes this belief too... -- (Russell.mo (talk) 21:17, 18 January 2015 (UTC))
- Sure, but our Mutation rate article says: "Human mitochondrial DNA has been estimated to have mutation rates of ~3×10^-5 or ~2.7×10^-5 per base per 20 year generation (depending on the method of estimation);" - so that's at least a 10% uncertainty over the mutation rate in humans. Worse still, mutation rates vary significantly between species - and humans have only been human for a tiny fraction of the time between the last common ancestor with all primates and today. So the mutation rates get less and less certain as you go back in time. So if you look at the mtDNA of a typical non-human primate and that of a human...count the differences...then divide by the mutation rate, you have at least a 10% error because the mutation rate in humans isn't a definite number - and quite possibly you have a much larger error in your estimation because all of the intermediary species since the last common ancestor and us may have had totally different mutation rates. So it's easy to understand that the estimation over 50 to 80 million years could quite easily differ from reality by several tens of millions of years...which is plenty enough to explain the discrepancy our OP refers to.
- So, I agree that mtDNA studies might provide reasonable estimates (with maybe a 10% error) back to 200,000 years (when we were still essentially human) - but when you're talking 55 to 85 million years - the errors compound.
- SteveBaker (talk)
- I understand. I'll read through the link you stated too. Thank you for taking the time to make me understand in simple terms. Kind regards. -- (Russell.mo (talk) 16:16, 19 January 2015 (UTC))
- Actually, it's even worse than that - I re-read that post. The mutation rate is specified PER GENERATION, not PER YEAR - so not only is the number uncertain, but it depends on the age at which the species reproduces. Currently, for humans, a 20 year "generation" is generally accepted - but for other primates, it can be as low as three or four years. So now, you have to take the varying reproductive ages of proto-human species into account in getting that estimate right...which makes it all vastly harder. How could you ever figure out the generation span of species that are long-extinct? SteveBaker (talk) 16:18, 20 January 2015 (UTC)
- @SteveBaker: I’ve read your summary, once I understand the article(s) properly, I’ll analyse it with your summary. I must say, your informed teachings will win regardless of what the article(s) says. Please help me if I get stuck with thoughts again… Thank you. – (Russell.mo (talk) 16:24, 21 January 2015 (UTC))
Atmospheric pressure at the height of the ISS
I've been wondering: is it possible to quantify the ambient air pressure at the level of the ISS in terms of pascals? – Juliancolton | Talk 18:06, 18 January 2015 (UTC)
- In our outer space article it says "The Earth's atmospheric pressure drops to about 0.032 Pa at 100 kilometres (62 miles) of altitude", so at 414 km it will be significantly less than that (about 3.6×10^-11 Pa, according to this). Mikenorton (talk) 18:17, 18 January 2015 (UTC)
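For a sense of why no simple formula covers this, here is a naive isothermal barometric extrapolation in Python, using a fixed ~8 km scale height. It already disagrees with the quoted 0.032 Pa at 100 km and is hopeless at ISS altitude, where the effective scale height grows enormously in the hot, rarefied thermosphere - which is why the 3.6×10^-11 Pa figure comes from atmospheric models rather than from this formula:

```python
import math

# Naive isothermal barometric formula: P = P0 * exp(-altitude / H).
# H ~ 8 km is a lower-atmosphere value; it is badly wrong up high.
P0 = 101325.0    # sea-level pressure, Pa
H = 8000.0       # nominal scale height, m (assumed constant here)

for alt_km in (10, 100, 414):
    p = P0 * math.exp(-alt_km * 1000 / H)
    print(f"{alt_km:>3} km: {p:.2e} Pa (isothermal model)")
```

The model is decent at 10 km, an order of magnitude off at 100 km, and many orders of magnitude too low at 414 km.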
- The ISS orbits roughly 410 km above the earth, in the middle of the thermosphere. According to our article on vacuum, it is hard to interpret the definition of pressure above the Kármán line, because "isotropic gas pressure rapidly becomes insignificant when compared to radiation pressure from the sun and the dynamic pressure of the solar wind". Not the best answer I'm afraid, but the best I could find =) WegianWarrior (talk) 18:28, 18 January 2015 (UTC)
- We could probably come up with a reliable number for the density of the atmosphere at that height - but pressure is tough because it ties in closely with temperature - and both concepts start to lose meaning beyond a certain point. SteveBaker (talk) 19:25, 18 January 2015 (UTC)
- At what point does temperature start to lose meaning?--Noopolo (talk) 04:17, 19 January 2015 (UTC)
- Temperature loses its conventional meaning once the air becomes ionized. This is the case at the height of the ISS, which is within the F region of the ionosphere. There are different temperatures for the different constituents, e.g., electron temperature and ion temperature. Short Brigade Harvester Boris (talk) 04:36, 19 January 2015 (UTC)
- Since pressure and temperature are closely tied to each other and pressure starts to become meaningless above the Kármán line, temperature likely starts becoming difficult to interpret at the same height. That's just an educated guess though =) WegianWarrior (talk) 04:22, 19 January 2015 (UTC)
I've heard 1/1-trillionth that of sea level for the ISS and 1/100-trillionth of an atmosphere on the Moon, but the ISS was lower then. I don't remember if this was density or pressure. Also, you can define the temperature of a single particle; there are thermal neutrons. And you could calculate the force on the whole 410 km night side and divide by area, though it's probably not too useful unless your object gets hit by at least 100 particles in the time it takes a particle to move a small percent of the object's size. Sagittarian Milky Way (talk) 06:08, 19 January 2015 (UTC)
measles
If someone has had their measles vaccine when they were a baby but did not get their booster shots, are they immune? And if not, is measles milder in partially vaccinated people? — Preceding unsigned comment added by 63.247.60.254 (talk) 21:13, 18 January 2015 (UTC)
- 90% or more will be immune without the booster shot, according to the NHS (other sites suggest an even higher percentage) and 98% or 99% will be immune for life after two doses. The second dose is not a "booster"; it is designed to catch those whose immune system did not respond fully to the first dose. There are recorded cases of mild infection after up to five doses of vaccine, but these are rare. People who were vaccinated with the original killed-virus measles vaccine between 1963 and 1967 often have incomplete immunity and tend to suffer a mild form of measles if exposed to the live virus. Dbfirs 22:08, 18 January 2015 (UTC)