Wikipedia:Reference desk/Archives/Science/2011 January 23

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.

January 23[edit]

Where is the RNA stored?[edit]

I know that in the cell about 99% of the DNA is in the nucleus and 1% is in the mitochondria...

But what about the RNA? —Preceding unsigned comment added by (talk) 00:41, 23 January 2011 (UTC)

Messenger RNA is generally not stored -- it is manufactured (in the nucleus via transcription), read (in the cytoplasm by ribosomes via translation) and degraded (by catabolic enzymes). Of course, prokaryotes don't have nuclei, so transcription occurs in the cytoplasm, and translation occurs while the RNA is being produced because ribosomes don't have to wait outside a nucleus. There are some RNA viruses, so it's maintained in these types of situations. DRosenbach (Talk | Contribs) 00:54, 23 January 2011 (UTC)
Read our RNA article. There are many different types with functions in different parts of a cell. See also List of RNAs. SpinningSpark 00:58, 23 January 2011 (UTC)
A large portion of the RNA in the cell at any one time is that making up the ribosomes. See rRNA. -- (talk) 18:54, 23 January 2011 (UTC)
You can store it for a short time in nuclease-free water at −80 °C, or for a longer time as a precipitate in 1 volume nuclease-free water, 0.1 volumes 3 M NaAc and 2.5 volumes 100% ethanol, at −20 °C. —Preceding unsigned comment added by (talk) 01:19, 24 January 2011 (UTC)
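The mixing ratios in the precipitation recipe above are simple multiples of the sample volume. As a sketch only (the 0.1× and 2.5× factors are taken from the comment above, not from any validated protocol), they can be expressed as:

```python
def precipitation_volumes(rna_volume_ml):
    """Return (NaAc volume, ethanol volume) in ml for the ethanol
    precipitation recipe quoted above: 0.1 volumes of 3 M sodium
    acetate and 2.5 volumes of 100% ethanol per volume of RNA
    in nuclease-free water."""
    return 0.1 * rna_volume_ml, 2.5 * rna_volume_ml

naac_ml, ethanol_ml = precipitation_volumes(1.0)
print(naac_ml, ethanol_ml)  # 0.1 ml NaAc and 2.5 ml ethanol per 1 ml of sample
```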

Solute property[edit]

What's the property of solutes that makes the number of atoms in solution significant? I'm thinking of a chemistry lecture that discussed using aluminum chloride over sodium chloride because there are 4 atoms in the former and only 2 in the latter, so ice will take much longer to form with double the number of solute particles. I think the word begins with the letter "c." DRosenbach (Talk | Contribs) 01:03, 23 January 2011 (UTC)

Nope, it's a v. See Van 't Hoff factor. Unless you meant "c" as in colligative property. The Van 't Hoff factor is the numerical value which determines how much the solute "matters". Colligative properties are the properties the Van 't Hoff factor affects, like freezing point depression and osmotic pressure. --Jayron32 01:09, 23 January 2011 (UTC)
Yes -- colligative. thanx! DRosenbach (Talk | Contribs) 04:23, 23 January 2011 (UTC)
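The effect described above can be sketched numerically with the ideal freezing-point-depression formula ΔT = i·Kf·m (Kf ≈ 1.86 °C·kg/mol for water). The ideal van 't Hoff factors below ignore ion pairing, which lowers the real values somewhat:

```python
KF_WATER = 1.86  # cryoscopic constant of water, deg C * kg / mol

def freezing_point_depression(van_t_hoff_i, molality):
    """Ideal colligative freezing-point depression dT = i * Kf * m."""
    return van_t_hoff_i * KF_WATER * molality

# Ideal dissociation: NaCl -> 2 particles, AlCl3 -> 4 particles.
for salt, i in [("NaCl", 2), ("AlCl3", 4)]:
    print(salt, freezing_point_depression(i, 1.0), "deg C")
# At equal molality, AlCl3 depresses the freezing point twice as much
# as NaCl, because it dissociates into twice as many particles.
```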


Female sexual arousal[edit]

What happens to a girl's sexual organs when she is sexually aroused? — Preceding unsigned comment added by Pranashu1432 (talkcontribs) 04:31, 23 January 2011 (UTC)

See Sexual arousal#Physiological changes and Sexual arousal#Female physiological changes. Red Act (talk) 05:01, 23 January 2011 (UTC)


Electricity generation and distribution[edit]

1. Why is electricity generated at 11 kV in the generating station, and why does it come down to 220 volts when distributed to individual consumers? 2. How does the current flow in a three-phase plug? 3. What is the function of each pin of a 5-pin plug in the switch board? — Preceding unsigned comment added by Khan2010sonali (talkcontribs) 04:51, 23 January 2011 (UTC)

Please provide a reference for what country has the "5-pin plug." Edison (talk) 05:35, 23 January 2011 (UTC)
3 phase + neutral + earth = 5 pins--Aspro (talk) 12:20, 23 January 2011 (UTC)
Electrical power is distributed at high voltages (usually quite a bit higher than 11 kV for national grids) because that minimizes transmission losses. It is transformed down to lower voltages for residential use because the high voltage would require impractically large shielding and safety devices. Historically, the choice of residential voltage was informed by the need to run practical incandescent lamps directly off the grid voltage. Lots more information can be found in the Electric power distribution article.
The five pins in a five-point plug carry three live phases, offset 120° from each other, one common return/neutral wire (see three-phase electric power for lots of explanation and animations), and a protective earth pin. The precise shape and layout of these plugs differ between countries, and sometimes also between residential and industrial applications. –Henning Makholm (talk) 07:44, 23 January 2011 (UTC)
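The point about transmission losses can be made quantitative: for a fixed power delivered, line current is I = P/V, so resistive loss I²R falls with the square of the distribution voltage. A sketch (the 1 MW load and 5 Ω line resistance are made-up illustrative figures, not data from any real grid):

```python
def line_loss_w(power_w, voltage_v, line_resistance_ohm):
    """Resistive loss in a line carrying a given power at a given
    voltage: current I = P / V, loss = I**2 * R."""
    current_a = power_w / voltage_v
    return current_a ** 2 * line_resistance_ohm

P, R = 1e6, 5.0  # 1 MW delivered, 5 ohm line (illustrative numbers)
for v in (11e3, 110e3):
    print(f"{v / 1e3:.0f} kV: {line_loss_w(P, v, R):.0f} W lost")
# A tenfold increase in voltage cuts the resistive loss a hundredfold.
```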
Addressing question 2, how the current flows in a three-phase plug: the current flows into the load through one phase and returns through the other two phases. Ideally, with a perfectly balanced load, there is no current at all in the neutral conductor. Which phases are delivering current and which returning current changes rapidly with time; any one phase current first rises to a peak, then falls to zero and then the current reverses. As Henning said above, this cycle is 120° apart for each phase ensuring that at least one is delivering current and one receiving current at any one time. The three currents added together algebraically sum to zero; which is why there is no neutral current. See three-phase and three-phase electric power. SpinningSpark 10:26, 23 January 2011 (UTC)
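The observation above, that the three phase currents sum to zero at every instant, is easy to verify numerically; a balanced load is assumed, and the amplitude and 50 Hz frequency are arbitrary choices:

```python
import math

def phase_currents(t, amplitude=1.0, freq_hz=50.0):
    """Instantaneous currents of a balanced three-phase supply:
    three sinusoids offset by 120 degrees (2*pi/3 radians)."""
    w = 2 * math.pi * freq_hz
    return [amplitude * math.sin(w * t - k * 2 * math.pi / 3)
            for k in range(3)]

for t in (0.0, 0.003, 0.007):
    currents = phase_currents(t)
    print(t, [round(c, 3) for c in currents], round(sum(currents), 12))
# The three currents always sum to (numerically) zero, which is why a
# balanced three-phase load draws no neutral current.
```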

Schizophrenia antibiotic: a 52% success rate requiring no antipsychotics?[edit]

I myself and a large number of people I know have schizophrenia.

Months ago, visiting Wikipedia, I found an article that said schizophrenia had a 52% treatability rate with the listed antibiotic.

Could you please provide this information again? I haven't been able to find it.

I hope to forward it to a medical institution. —Preceding unsigned comment added by (talk) 06:06, 23 January 2011 (UTC)

This question has been removed per our medical advice policy. If you have any questions about possible treatments, you should discuss this with your doctor or psychiatrist. Nimur (talk) 06:35, 23 January 2011 (UTC)
Nimur, I put the question back because it does not meet the guideline on what constitutes a non-answerable request for medical advice. This is a request to find an article, not a request for a diagnosis and advice. Ariel. (talk) 11:14, 23 January 2011 (UTC)
You may be looking for E. Fuller Torrey although the 52% efficacy claim is not made in that article. SpinningSpark 14:04, 23 January 2011 (UTC)
Please be aware that Wikipedia can be edited by anyone, and so information which is speculative, unverified, and/or downright bogus is constantly being added to and removed from articles. Material which appears and then disappears (never to return) most likely falls into one of those categories. -- (talk) 14:21, 23 January 2011 (UTC)
I've had a look in some obvious articles like E. Fuller Torrey and Toxoplasmosis. I can't find any mention of the claim, although it's hard to look for specific wording in the history, particularly without knowing when the info was there. As 76 has said, claims which lack support may be added to articles, and these will usually be removed when noticed. It's also possible you're remembering wrong. For example, the toxoplasmosis link has evidently been studied since 1953. These [1] [2] mention 52% in relation to schizophrenia. So perhaps some of these details were in an article and you're remembering different bits of information as one; I do that sometimes. Nil Einne (talk) 15:00, 23 January 2011 (UTC)
The paper in question is probably PMID 19269110. Note though that it appeared in Medical Hypotheses, which is not generally considered a reliable source for Wikipedia's purposes. Looie496 (talk) 19:14, 23 January 2011 (UTC)
There is a pretty credible hypothesis that schizophrenia is caused by a viral infection in infancy whose long-term effects emerge decades later.[3] Bacteria don't come into that picture, from what I gather. (talk) 21:49, 23 January 2011 (UTC)

Sussex cricket meteorite[edit]

A meteorite was reported to have landed in the middle of a cricket match in Sussex in July 2010, as in this newspaper article:

but I cannot find any report to say whether or not it was confirmed as a meteorite, which makes me suspect that it wasn't.

According to a review in New Scientist, 22 January 2011, of the book Incoming! by Ted Nield, it was 12 cm long, and split into 2 pieces, but the photo in the article shows the pieces as being 3 or 4 cm each.

Any opinion or information about whether this was a genuine meteorite, or what it was identified as?

FrankSier (talk) 11:44, 23 January 2011 (UTC)

I also posted this question on Yahoo! Answers and got a couple of very quick replies: it was not a meteorite, but probably something that fell off a passing aeroplane. See:
FrankSier (talk) 13:45, 23 January 2011 (UTC)
Some objects thought to be meteor fragments or green fireballs are in fact broken pieces of artificial satellites, which often contain stainless steel. July is usually the early starting period of the Perseids meteor shower. Here are some reports of recent meteors, mostly in the US: AMS. ~AH1(TCU) 00:39, 27 January 2011 (UTC)

Heat-generating gel[edit]

I bought some bags containing a gel that solidifies and gives out heat when activated by a mechanical shock. How do these work? -- (talk) 12:08, 23 January 2011 (UTC)

We have an article about that: Heating pad. --Aspro (talk) 12:22, 23 January 2011 (UTC)
(edit conflict) Various mechanisms for releasing heat are described in our articles on hand warmers and heating pads. From your description, it sounds as if your bags contain a phase change material such as a supersaturated solution of sodium acetate. Gandalf61 (talk) 12:26, 23 January 2011 (UTC)

Planck constant constant?[edit]

Is it known for certain whether the Planck constant has always had the same value throughout the history of the universe ? If the value of h were changing slowly over time, at a rate that was too slow to be directly seen in local measurements over the last 100 years or so, what observable effects would this have ? Presumably, given conservation of energy, the frequency of light from distant galaxies would be shifted, but could this effect be distinguished from the cosmological red shift ? Gandalf61 (talk) 12:42, 23 January 2011 (UTC)

A change in the value of Planck's constant would also imply a change in the value of Rydberg's constant, so the spectral lines wouldn't simply shift around; their spread would also change. That is not observed. Other chemical compounds would have their spectra altered in even more complex ways. Some of them might even be completely unstable for values of h-bar much different from the currently observed ones. That severely constrains any possible variation in h-bar to a very minimal variation, almost indistinguishable from no variation at all. Dauto (talk) 16:09, 23 January 2011 (UTC)
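The point above can be made concrete with the standard expression for the Rydberg constant (not quoted in the thread):

```latex
R_\infty = \frac{m_e e^4}{8\,\varepsilon_0^2 h^3 c},
\qquad
\nu = R_\infty c \left( \frac{1}{n_1^2} - \frac{1}{n_2^2} \right)
```

If h varied while m_e, e, ε0 and c were held fixed, all hydrogen line frequencies would scale together as h⁻³, which by itself would mimic a redshift; but fine-structure splittings go as α²·R∞ with α = e²/(2ε0·h·c), so their size relative to the gross structure scales as α² ∝ h⁻², and the splittings would change relative to the line positions. That is the changing "spread" that is not observed.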
What I said above is true, but that has not kept some very high-profile physicists from speculating on the possibility of physical constants varying over time, starting with Dirac. See Physical constant#How constant are the physical constants?.
Read this. Count Iblis (talk) 16:28, 23 January 2011 (UTC)
I think that Duff is wrong about this, and so is everyone who's expressed similar sentiments on the reference desk (which includes me). Imagine a physical theory with two continuous parameters. You can cover the parameter space with two coordinates, b and q. But you can also cover it with b and m, where m = b + q, or with m and q. These are all unitless parameters. But what does it mean for b to vary? It's one thing for b to vary with q held fixed, quite another for b to vary with m held fixed, and logically impossible for b to vary with m and q held fixed. So "variation in the fundamental constant b" depends on what other constants you've decided to take as fundamental, even though b is unitless. Adding units is equivalent to increasing the dimension of the parameter space and adding a corresponding number of symmetries. Does that affect what I wrote above? Not as far as I can see. Suppose we parametrize the same theory by x, y, and z, with b = x − y and q = y − z (and, therefore, m = x − z). The symmetry is that adding the same real number to all three of these coordinates leaves the physical theory unchanged. This is equivalent to saying that x, y, and z all have the same (nontrivial) units. Is variation in x physically distinguishable from variation in y? Yes: you can't use the symmetry to convert one into the other. On the other hand, if the parameters were b, j, and k, with q = j − k and m = b + j − k, then varying j and varying k would be physically equivalent. You have to look at the context. In the appropriate context, though, it's fine to talk about variation of a unitful physical parameter. -- BenRG (talk) 21:47, 23 January 2011 (UTC)
Indeed, whether a quantity is unitless or not is more a question of the conventions of measurement we adopt than it is a fundamental feature of the universe.
Another relevant point is that it is not really well-defined to say, "imagine that Planck's constant changed but all the other constants stayed the same", because there are sufficiently many candidates for "all the other constants" that fixing them all also implicitly fixes Planck's constant. One needs to specify explicitly what else one supposes to be constant while the thing one is varying varies. If we're varying Planck's constant, saying "... while the mass of the electron, measured in Planck masses, stays the same" is different from saying "... while the mass of the electron, measured in kilograms, stays the same". –Henning Makholm (talk) 00:53, 24 January 2011 (UTC)
I agree with all of the above except the idea that whether a quantity is unitless depends on our conventions of measurement. If the quantity has no unit, how could we change its value by changing measurement conventions? That's akin to changing the value of pi by a choice of measurement conventions. It makes no sense to me. Dauto (talk) 04:16, 24 January 2011 (UTC)
Right, pi could probably not be made unitful by choosing other units, so my assertion was too strong. (One could make a tortured claim that the unit of pi is radians, but there are strong arguments in calculus that radians ought to be unitless too -- that's why it makes sense to use radians for anything in the first place.) I was thinking of something like how different systems of electromagnetic units can make various quantities (such as permeability and permittivity) either unitless or not. –Henning Makholm (talk) 07:58, 24 January 2011 (UTC)
The way I prefer to approach this is by working in natural units and then rescaling certain variables leading to the appearance of conversion factors. Formally everything is then still dimensionless, but you are of course free to assign some dimensional factors to a particular conversion factor if you also multiply the variables it multiplies by the inverse of those dimensions. The issue is then if a change in some constants correspond to a change in the theory, as in BenRG's example. Count Iblis (talk) 13:32, 24 January 2011 (UTC)
Thanks for all the responses. I take the point about needing to be careful about defining which other quantities are assumed to remain constant before we can make "does the Planck constant vary with time" into a meaningful question. So ... if we assume that the values of the speed of light, the elementary charge and the permittivity of free space all remain constant, then the question, in effect, becomes "does the fine-structure constant vary with time ?". And according to our article, the jury is still out on that one. Gandalf61 (talk) 12:54, 24 January 2011 (UTC)

Is it true that scientists think people walked around for 150,000 years being as intelligent as us but without written culture?[edit]

Is it true that scientists think people spent 150,000 years walking around being as smart as us, but without any writing, schools, architecture, roads, etc.? If they were as smart as us, why does science say they took so long to found a school or start writing or make architecture? Isn't this proof that evolution is an interesting story, but obviously doesn't make much sense on a practical level...? (talk) 13:09, 23 January 2011 (UTC)

How long was it before you wrote your first book or built your first aeroplane? If you have ever done either of those things, it would not have been before many people spent a great deal of time teaching you to write or explaining science and engineering to you. Knowledge and skills have been built up very slowly over many centuries; you are lucky to live in an age of printing and the internet where this knowledge can be acquired very quickly. But to discover something new is not so easy, and it takes a lot longer the first time round. Once we have the knowledge it can be passed to others quickly, but finding it in the first place -- well, there are many things we still do not know, and may never know.
The growth of human knowledge has little connection with biological evolution. The rate of acquisition of knowledge does not prove anything about evolution one way or the other. SpinningSpark 14:16, 23 January 2011 (UTC)
(EC) I'm not that familiar with current theories of human evolution, but I don't see any reason to think humans reached an intelligence maximum 150k years ago. (Of course, even if we did, there are still plenty of reasons why all those things took a long time to develop, although that's more of an anthropological/sociological question than an evolutionary one.) P.S. From a quick search I found Evolution of human intelligence and Behavioral modernity, which suggest ~50,000 years as more likely. You may also be interested in these to give you some idea of current theories of how human behaviour changed over time. Nil Einne (talk) 14:17, 23 January 2011 (UTC)
As far as anyone can tell, humans had all the modern mental machinery we have now some time before 150kya. See anatomically modern humans. SpinningSpark 14:52, 23 January 2011 (UTC)
My comment about 50k may not be correct, but from what I can tell, the claim that human mental machinery was the same 150kya as 50kya is only one of the current theories. There's dispute over whether anatomically modern humans also displayed behavioral modernity, and if they didn't, there is the suggestion that evolutionary changes resulted in behavioural modernity. In particular, quoting directly from the article (i.e. don't dispute it with me, fix the article if you believe the theory is fringe): "One theory holds that behavioral modernity occurred as a sudden event some 50 kya (50,000 years ago), possibly as a result of a major genetic mutation or as a result of a biological reorganization of the brain that led to the emergence of modern human natural languages." Note that anatomically modern humans only seems to refer to the skeletal and other structures which are preserved in fossils, not things like brain structure (i.e. mental machinery which would potentially affect intelligence) which are not. In fact, without wanting to get into controversial stuff like Race and intelligence and Heritability of IQ (and of course we need to ask what intelligence is in the first place), I don't think it's clear that human intelligence 10kya was the same as it is now. Note that I'm not arguing this is the reason for our advancements, far from it; rather, I'm arguing that the premise that intelligence reached a maximum 150kya is at best unsupported and probably reflects a misunderstanding of evolution on the part of the OP (akin to the common 'why did monkeys stop evolving' idea). To put it a different way, the changes between 150kya and now may perhaps be far less than between, say, 300kya and 150kya, but that doesn't mean there were no changes. Nil Einne (talk) 15:10, 23 January 2011 (UTC)
In any case, it makes little difference to the principle of the answer whether one takes it as 50kya or 150kya (although I think the anatomical evidence supports at least speech being around at 150kya). The point is that humans had the physical ability to write and build cities long before they actually started to do so around 10kya. The reasons for this happening so late on are to be found in social development rather than evolution. SpinningSpark 17:47, 23 January 2011 (UTC)
We've come to naturalize scientific research, when really it is one of the most awkward and profound creations of the human species. It requires so many things before you can really even begin. You need to have enough people so that enough wealth is generated in order to allow some people to sit around pondering imponderables. You need to have societies with enough tolerance of oddness to allow people to voice heretical or at least bizarre notions. You need to have religious institutions tolerant enough (as religiosity and religious institutions seem far more "basic" to the human species than science — they seem to spontaneously arise, even today) to be challenged in their monopoly of knowledge. You need to have a sense of philosophy, a theory of how knowledge works, and why you might want it. And then, even if you have all of that, you need to have people willing to experiment, be wrong, and work at the tedious, tedious job of organizing the near infinite amount of observable phenomena into useful categories. And lastly, you need enough of said people, and enough means for them to communicate, for them to become a real community, as one toiler cannot accomplish more than a few flights of genius even at their best (consider how little Newton accomplished as an individual, compared to the scientific output of his time; consider again that much of Newton's accomplishments were based on the data and work of others, at that!). So it's no simple thing. It looks simple now because we have had exponential growth in the scientific community and scientific output in recent years; our last decade of scientific work is probably as much in raw numbers as the entire 17th century put together. But it's no easy thing, and even that is largely because after World War II, states started to think, "hey, funding science in a major way is a way to real success." The modern scientific infrastructure is really that recent — 65 years or so. --Mr.98 (talk) 14:22, 23 January 2011 (UTC)
Provided basic needs can be met, what incentive is there for developing culture and skills beyond what is required to meet them? Nomadic and semi-nomadic life doesn't favour the accumulation of many possessions other than the basics, so higher skills had no infrastructure on which to be formed. It was only with the advent of farming that people invested time and effort in a plot of land and so stayed in one place, which in turn provided the opportunity to develop larger settlements with more complex cultures and a broader development of skills. Farming also suggests, by its very practice, that advantages can be had in the future by planning on a longer time frame and by the keeping of records. Living in permanent settlements also creates new problems, and that in itself drives invention. That only started about 12,000 years ago. --Aspro (talk) 14:24, 23 January 2011 (UTC)
As Mr.98 mentions, population size is very significant. The rate of technological development makes more sense if you look at it in terms of people-years rather than chronological years. The human population of the world was so small until quite recently that most of the people-years have been in the last century or so (possibly more recently than that), as have most of our technological developments. --Tango (talk) 15:03, 23 January 2011 (UTC)
I'm inclined to agree that population size should be very significant. A quick Web search turned up [4], but I haven't tracked down the source and I can't say how reliable it is (probably it's a wild guess anyway). But the guess, anyway, is 100 million at Year 0, 500 million at Year 1000, and 1 billion at Year 1800. To this we should probably add another factor, which I don't present data on, which is the number of scientists; as I understand it, aboriginal societies theoretically would have had a great deal of free time to invent things, but probably much of it was constrained by ritual; post-agricultural societies grew larger populations but paid for it by spending most of their available man-hours tilling the land. It is only certain times - the Renaissance, and recently - that are known for having a lot of random professions, of which some were inventive. But how to convert such blather into numbers? Wnt (talk) 02:06, 24 January 2011 (UTC)
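The people-years argument can be sketched with the population guesses quoted above (100 million at Year 0, 500 million at Year 1000, 1 billion at Year 1800), linearly interpolated, plus a round 6.9 billion for 2011. Both the figures and the linear interpolation are rough assumptions; real growth was faster than linear toward the end, which only strengthens the conclusion:

```python
def people_years_per_segment(points):
    """Trapezoid-rule person-years between successive
    (year, population) guesses, assuming linear interpolation."""
    return [(y1 - y0) * (p0 + p1) / 2
            for (y0, p0), (y1, p1) in zip(points, points[1:])]

# (year, estimated population); 6.9 billion is a round 2011 figure.
POINTS = [(0, 100e6), (1000, 500e6), (1800, 1e9), (2011, 6.9e9)]
segments = people_years_per_segment(POINTS)
for (start, _), seg in zip(POINTS, segments):
    print(f"from year {start}: {seg:.2e} person-years")
print(f"share since 1800: {segments[-1] / sum(segments):.0%}")
# Under these rough guesses, the two centuries since 1800 account for
# nearly half of all person-years lived since Year 0.
```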

Before even looking at the rest of the question, I'd like to see a citation for the opening salvo "scientists think people ...". --LarryMac | Talk 15:33, 23 January 2011 (UTC)
As the discussion above goes into depth about, it's probably more like 50,000 years. But that's still a huge amount of time, compared to the amount of time we've had anything that looked recognizably like science (e.g., a thousand years at most, with a very liberal definition of science; 400 years with a more constrained but still liberal one; 100 years with a very modern definition). --Mr.98 (talk) 17:24, 23 January 2011 (UTC)

I'm not sure why we're putting so much effort into pandering to an ignorant Creationist question. (And I mean Creationist in the literal sense.) HiLo48 (talk) 17:21, 23 January 2011 (UTC)

It's not a bad question, frankly, and the answers have been very good, in my opinion, and we are supposed to assume good faith. If you don't want to put in effort, please feel free to abstain. Personally I do find it boggling to imagine how many years humans spent being essentially "primitive," only to turn around in a blink of an eye and suddenly start bounding around on the moon. I am not a Creationist under any definition. --Mr.98 (talk) 17:24, 23 January 2011 (UTC)
How are creationists supposed to learn better if no-one is willing to teach them? While many of them aren't actually interested in learning, this OP might well be one of those that is. We should assume so until given reason to conclude otherwise. --Tango (talk) 17:44, 23 January 2011 (UTC)
Yes, agreed (@Mr. 98): either answer the question straight or leave it alone. There is nothing to be gained by insulting the OP; if they are genuine it is upsetting, and if they are a troll you are feeding them. Regarding the years spent as primitives: note that there are still guys out there herding reindeer or whatever with no fixed abode and no modern amenities. Assuming our own current lifestyle is the pinnacle and everyone else should be aspiring to it is a little presumptuous. SpinningSpark 17:47, 23 January 2011 (UTC)
I'm not asserting it's the pinnacle, but I am asserting it is radically different in many key ways. And I would argue that the number of people who still live as essentially "primitive" (I put the term in quotes because I am uncomfortable with it, but also uncomfortable with euphemisms that are supposed to imply that dying at age 35 of malnutrition is an "equally good" outcome) in today's world is very, very small compared to those who live essentially "modern" lives, even in very poor or undeveloped nations. The poorest modern Haitian is still leaps and bounds more "modern" in their outlook and lifestyle than the people who were living on that island in the 13th century. I will note that while I have an ambivalent relationship with modernity, I think anything which tries to argue that the past was "better" for most people, or that pastoral people were more "happy", is probably nonsense. --Mr.98 (talk) 18:37, 23 January 2011 (UTC)
Keep in mind that just because they didn't have writing, schools, architecture, roads, etc., it doesn't mean they were sitting around in a puddle of their own drool. You might be able to beat them in a novel-writing contest, but they would seriously kick your ass in a contest of hunting/tracking, or a contest of "where are the edible roots/berries" or "where should I sleep tonight to minimize the chance of getting eaten". If you talked with a person from 150,000 years ago, they'd probably be despairing of the fact that most Americans don't have the first clue about how to disjoint a chicken (edible bird), let alone pluck one. It all depends on what's considered "important" at the time - and what's important changes with advances in technology. You don't get supermarket chicken without refrigeration, which requires electricity, which requires metallurgy, or trucks, which require petroleum refining, or mass agriculture, which requires animal husbandry, etc. etc. Picking up a pack of chicken breasts at the grocery store seems simple enough, but there's a *vast* amount of technology needed before you can even start to consider doing it. -- (talk) 18:51, 23 January 2011 (UTC)
The liberal democracies of our present age are especially conducive to scientific advancement, I think. In the modern liberal democracies there is a minimum amount of social hierarchy. Conversely there is a lot of fluidity of social class. Movement from the lower classes to the upper classes takes place with relative ease. Communication from the lower classes to the upper classes is uninhibited, relatively speaking. And education is available to all (to an extent). This all fosters advancement of all of the rational (and irrational) disciplines of study, science being one of them. The hierarchical structure of societies past I think was an inhibition to the advancement of science. Bus stop (talk) 19:19, 23 January 2011 (UTC)
I take exception to the above comment about Americans not knowing how to disjoint a chicken. From my experience Americans disjoint chickens with aplomb. Bus stop (talk) 19:28, 23 January 2011 (UTC)
By "disjoint a chicken" I meant taking a whole butchered chicken and cutting it up into pieces (breast, leg, thigh, etc.) before cooking. While Americans do eat a lot of chicken, it's mostly obtained in the pre-cut-up (if not pre-cooked) form. Speaking as an American, I get the impression that most of my countrymen aren't all that handy in the kitchen for anything not "ready-made" (we have pre-washed salad and pre-washed & peeled carrots, for goodness sakes). My point was simply that even a professional scientist researching the cutting edge of atomic physics might be baffled when presented with a whole supermarket chicken, let alone a live one. People 150,000 years ago weren't any less intelligent; it's just that they applied that intelligence to different things. -- (talk) 01:10, 24 January 2011 (UTC)
An interesting question might be: if we had all of our technology taken away from us, how long might it take to regain it? It might take 50,000 years. It might take 150,000 years. Bus stop (talk) 01:31, 24 January 2011 (UTC)
Mere decades, I'd say, assuming we still had all our ideas. (If not, we wouldn't really be "us".) (talk) 01:41, 24 January 2011 (UTC)
Taking the technology away is not enough: you'd have to take away all the memories of how to read, how to make a primitive bow and arrow or spear, how to roast meat, and so on. You'd have to bring up a group of feral children to have a comparable starting point. Even when such children were immersed in their contemporary society after being discovered, they still did badly. (talk) 11:46, 24 January 2011 (UTC)
And let us say that somehow a remnant of humanity remained in the year 2011, the present year, on the planet Earth, bereft of all the accomplishments of the past 100,000 years: how would they fare? I think they would fare little better than humanity fared for the past 100,000 years. That is because there is probably little more intelligence, or cerebral capacity, in us than there was in those of us who were around 100,000 years ago. Let us say that the remaining population was comparable in numbers to the population size of our species 100,000 years ago. And let us say that those of us remaining were of above average intelligence but not interested in or knowledgeable of science, technology, religion, culture, or anything else that we consider our species' crowning achievements. This is a thought experiment; we can't be precise. We do not even know what qualities those of us around 100,000 years ago possessed. Let us say that we lost roughly every characteristic that would distinguish us from our forebears. It doesn't matter how; this is just a thought experiment. We could have had our tongues cut out and our hands removed, if the thought experiment required us to have no language. An invasion by aliens could accomplish this, conceivably. The question is: how would progress proceed? Rapidly? Slowly? If our cerebral capacities are little different from those of us 100,000 years ago, then civilization would advance at no faster pace. Bus stop (talk) 14:28, 24 January 2011 (UTC)
There are surviving cultures today that have no writing! I think the Mbuti qualify. Their lifestyle doesn't really support the development of writing. (Sure, nowadays, they could trade for pen/paper if they really wanted it, but they couldn't produce useful writing materials on their own. For example, anyone can make a clay tablet, but what good is that to a nomadic people with no pack animals?)
Of course your next question would be, why don't they change their lifestyle to be more like ours ("The wheel, New York, Wars and so on"), but people typically don't change their lifestyle unless some disaster forces them to. APL (talk) 20:45, 23 January 2011 (UTC)
I'm not sure I buy the latter sentiment. I'd bet that if the truly Western option was open to them, a lot would flock to it. As it is, their "Western" option is that of the wars in the Congo or the horrors of sub-Saharan African mining, which are no options at all.
Incidentally, a tablet can be quite useful for nomadic people, if it keeps track of property or debts. The earliest examples of writing that we have, if I recall, are ledgers of debts. --Mr.98 (talk) 22:00, 23 January 2011 (UTC)
Oh, sorry, I meant a roughly natural progression of forging the next step for your civilization, analogous to the path people would have had to take 100,000BC, not individuals simply transferring from one existing civilization to another. APL (talk) 22:35, 23 January 2011 (UTC)

It has been suggested that a lot of development happens for negative reasons, such as war, or population pressures. I've seen it argued that the Australian Aboriginal people, with no nasty, threatening invaders on the doorstep, had no need, and hence no motivation, to develop sophisticated weaponry, and its associated spin-offs. In addition, the absence of a practical beast of burden on the continent meant that even developing the wheel was pointless. Can you imagine a mob of kangaroos towing a chariot or a wagon full of stuff to sell down the road? They had no writing, but didn't have a need for it. HiLo48 (talk) 01:42, 24 January 2011 (UTC)

I was troubled by similar questions myself (barring the doubts about evolution) when reading about Pre-Columbian North America, from which it is apparent that the indigenous peoples of North America spent some 12,000 years not inventing the bow and arrow. I mean, 12,000 years is a really long time, and it's a really useful device, and it's only made out of sticks and string. What was stopping it happening? (talk) 01:53, 24 January 2011 (UTC)
How is that apparent? The only mention of bows on the linked page says that some North American peoples did not use bows and arrows until about 1000 CE. It appears that projectile points are present in the entire archaeological record of humans in the Americas; it is very difficult to see from archaeological finds whether the points were for arrows or spears, because the shafts and bows themselves tend to be biodegradable. So how would anyone be able to conclude that bows and arrows were not used in prehistoric North America? –Henning Makholm (talk) 02:08, 24 January 2011 (UTC)
I never liked the saying "absence of evidence is not evidence of absence". I think it conflicts with Occam's razor. (talk) 02:21, 24 January 2011 (UTC)
The article mentions "some" peoples who didn't, but I haven't tried to hunt game in their regions with an atlatl. For all I know bows and arrows were invented a thousand times, and results were disappointing every time. Wnt (talk) 02:09, 24 January 2011 (UTC)
I wonder if something to do with optimism and encouragement might be the answer to the question about technology in general. Perhaps before progress can really take off, the idea of progress has to be invented. (talk) 02:16, 24 January 2011 (UTC)
...and seen as a desirable thing. HiLo48 (talk) 02:24, 24 January 2011 (UTC)
Yep. (talk) 02:26, 24 January 2011 (UTC)
It is likely that early civilizations did have some technology, for example the Baghdad Battery, which suggests that some Mesopotamians may have had electricity. However, most inventions were created after civilization arose, and since early hominids are thought not to have built cities, much written communication would likely have been unnecessary. ~AH1(TCU) 00:06, 27 January 2011 (UTC)

State of the art of photorealistic computer animated actors[edit]

In the movie Tron: Legacy, the actor Jeff Bridges appears as a younger version of himself using performance capture technology. I wonder when we can expect the technology to be cheap enough to be routinely used in movies, say with one out of ten movies having a character who's realized that way. If cheap enough, such a technology can allow important characters in a franchise like James Bond to have the same faces and physical appearance, even though the actors playing them may change over time. It would also expand opportunities for actors, by decoupling acting from having the right face and body. —Preceding unsigned comment added by (talk) 15:06, 23 January 2011 (UTC)

Personally I thought the Tron technology was still terribly crude, but maybe that was just me. The Clu character looked like something out of The Polar Express to my eyes, and set off my uncanny valley sensors every time he was on screen (which was all too often!), but perhaps I am too sensitive. In any case, there has been speculation along these lines for a long time. See, e.g., the film S1m0ne. Personally I suspect that there will probably be something "lost" in such renderings, even as they improve. And the star system seems like it would militate against this to some degree (the public and the studios, to some degree, like having consistent "stars", much more than they do talent on its own). But it's certainly possible. --Mr.98 (talk) 17:29, 23 January 2011 (UTC)
I haven't seen the movie, but from seeing the trailer, I thought the young Jeff Bridges face was not 100% natural. But that was me seeing it knowing what they did. I'm not sure if I would be fooled if it were a completely unfamiliar face. I did think about the star system angle, and I agree that it's an important factor. -- (talk) 19:13, 23 January 2011 (UTC)
I had not known it would be computer generated, and it looked tremendously fake from moment one. It was really quite silly that they spent all that money on special effects, most of which look great, except for the rendering of one character who is in fact central to the entire film and is on screen a huge amount of the time. I have seen better CGI — Gollum was far superior, for example. --Mr.98 (talk) 21:52, 23 January 2011 (UTC)
The technology is embryonic. It will improve vastly. Like every new technology, it will succeed or fail on the basis of whether it makes money for the money people. I imagine that actors (as opposed to "stars") would be as likely to embrace this kind of technology as to fear it. One of the biggest problems actors face is aging. Particularly women. Conceivably, this sort of technology could render irrelevant such superficialities as youthfulness, physical condition, attractiveness, and even things like race and height. Actors could market themselves purely on the basis of abstract qualities like subtlety, charisma, comic timing, panache, etc. Indeed, they would have to market themselves on such qualities, since mere accidental beauty would no longer cut it. The Japanese will get there first, of course. LANTZYTALK 19:31, 23 January 2011 (UTC)
Have you heard of Hatsune Miku? The article doesn't have any photos or footage; look "her" up on YouTube. Vespine (talk) 00:16, 24 January 2011 (UTC)
As far as I know Hatsune Miku is just the name of a singing-synthesizer voice, like Microsoft Anna (except that I guess there are no pictures of Anna). Kyoko Date is a better example. The whole computer-rendered idol singer thing doesn't seem to have caught on all that much in the 15 years since she was introduced. -- BenRG (talk) 11:22, 24 January 2011 (UTC)
The singing-synthesizer voice software is Vocaloid; Hatsune is a specific "voice" and actually has an associated "character" which has played concerts to live audiences. Vespine (talk) 23:36, 24 January 2011 (UTC)
But her image isn't computer rendered. Her voice is. Kyoko Date is a computer rendered image with a real person's voice, which is what the original question was about (though it's not state-of-the-art, at least not any more). -- BenRG (talk) 00:12, 25 January 2011 (UTC)
Ummm, yes, Hatsune is computer rendered, and projected onto the stage. Have you seen the YouTube clips? A Google image search is not very work safe; she looks like a manga character. Vespine (talk) 05:47, 25 January 2011 (UTC)
Are you talking about this? I thought that was a dancer in an animatronic costume, but I guess you're right. That's about as primitive as motion-capture 3D can get, though. -- BenRG (talk) 20:00, 26 January 2011 (UTC)
Yeah very true, photo realistic it ain't. But I thought it was quite relevant as it seems to be approaching the same, or similar conclusion, but from a different angle. In fact, what we have for "actors" in the future might be a bit of a mixture of the two. Vespine (talk) 22:01, 26 January 2011 (UTC)

Disease from eating sheep's lungs?[edit]

Does anyone know why it is illegal - since 1971 - to eat sheep's lungs in the US? Is there some kind of disease you can get from eating sheep's lungs and if so, what is it? I am just curious and can't find anything online. Only mention of the law itself and then, in the 80s, problems with scrapie / CJD from eating sheep's brains. Saudade7 19:42, 23 January 2011 (UTC)

Sheep's lung carries with it the possibility of introducing the phenomenon of spontaneous generation, which may lead to a serious outbreak of Haggises. These little blighters are impossible to catch unless the weather is cold, wet and windy. Even then ... oh, I just don't want to think about it! They are 'orrible! A small 4 ounce dose of boiled "neeps" taken orally is, so I have been told, a good antidote, if you have been bitten by one of these hideous creatures. --Aspro (talk) 20:05, 23 January 2011 (UTC)
ahahahahahahaha! no. :-) Saudade7 20:18, 23 January 2011 (UTC)
Cute, very cute. The only lung-related disease I can think of that relates to sheep is anthrax, which (all things considered) would be a pretty good reason to ban them. I don't know why they would single out sheep lung for that, though - lungs of any herbivorous animal should be an adequate vector. --Ludwigs2 20:19, 23 January 2011 (UTC)

It's not illegal to eat lungs. It's illegal to sell them as food for humans. (And not just sheep's lungs, but any kind of livestock lungs.) If you raised and slaughtered your own sheep, you could make authentic haggis. And if you called it cat food, you could sell it too. As for why the USDA considers lungs to be unsuitable for human consumption, it's because of the high incidence of lesions in the lungs they inspected back in the early seventies. Such lesions indicate pneumonia, emphysema, hydatidosis, anthracosis, pleurisy, melanosis, and tuberculosis. Currently there is a blanket ban on using livestock lungs, but it is worth noting that cattle lungs have a much higher incidence of lesions than sheep lungs. So it wouldn't be surprising if the USDA were to ease the restrictions on sheep lungs while retaining the ban on bovine lungs. LANTZYTALK 20:50, 23 January 2011 (UTC)

Wow Lantzy, thanks for such an authoritative answer! Saudade7 21:15, 23 January 2011 (UTC)

Bamboo in laptops[edit]

A well-known laptop manufacturer, which introduced a bamboo-covered laptop, claims that:

"Bamboo was picked (...) after research identified it as a quickly-replenishing resource whose utilization has almost no impact on the environment. Bamboo also has a tensile strength that rivals that of steel, "

Are both claims true? Quest09 (talk) 21:01, 23 January 2011 (UTC)

Probably, somewhat. From our Bamboo article, "Bamboo is one of the fastest-growing plants on Earth ... all bamboo have the potential to grow to full height and girth in a single growing season of 3–4 months." That's not the last word on sustainability, but it is a very good start. And this Scientific American article talks about the strength (or lack of) of Bamboos; our article phrases it thus: "the sturdiest [Bamboo] products fulfil their claims of being up to three times harder than oak hardwood but others may be softer than standard hardwood." --Tagishsimon (talk) 21:11, 23 January 2011 (UTC)
Fast-growing doesn't necessarily mean "almost no impact." Perhaps bamboo needs huge amounts of water. Besides that: the best bamboo for laminates, according to the article, is 6 years old. Thus they either don't use the strongest bamboo, or they don't use the fastest-growing bamboo. Right? Quest09 (talk) 22:27, 23 January 2011 (UTC)
Oh that particular problem is easily solved - you just get six lots of ground to grow your bamboo on and use every year's allotment in sequence. When you harvest your six year olds, you plant new bamboo, and wait for another year for your five year olds to get to be the six year olds. In fact, this is done in foresting everywhere, just on a much longer scale. TomorrowTime (talk) 22:48, 23 January 2011 (UTC)
Claims of "no impact on the environment" are tricky. Environmental impact comes from the aggregated effects of all parts of the production of a product. Usually when people claim that something has no effect on the environment it is because it is used on a small scale, where it truly doesn't have much impact. Making a couple thousand bamboo laptop covers has little impact on the environment; replacing most plastic pieces of everyday items with bamboo certainly would, in the form of land use, transportation, and probably some unintended consequences that would be hard to imagine. Even things that seem low impact on the environment, like wind turbines, start to have major effects when used on a large scale; see Environmental effects of wind power. --Daniel 01:35, 24 January 2011 (UTC)
Speaking of which, why don't we grow any bamboo (as a crop) in the U.S.? Wnt (talk) 01:48, 24 January 2011 (UTC)
The National Center for Appropriate Technology discusses this question here. They summarize, "Countries that export this product have decided advantages over American farmers with respect to climate, labor, and processing costs." --Allen (talk) 02:11, 24 January 2011 (UTC)
"As strong as steel" is a fairly meaningless claim. Steel comes in an incredible variety of strengths: when I worked in a mechanical testing lab, I saw everything from 6 ksi steel (so weak I spent half an hour running additional tests to verify I was getting good data) to tool steel that was somewhere over 250 ksi (the test machine broke, rather than the sample). --Carnildo (talk) 02:49, 26 January 2011 (UTC)
Maybe "no environmental impact" refers to little or no net carbon footprint. However, even your bones are stronger than oak (under constant pressure, not impact forces), but only if Encyclopedia Brown is to be trusted. Also, tensile strength only refers to resistance to stretching, and does not refer to the ability of the laptop to withstand impact. ~AH1(TCU) 23:55, 26 January 2011 (UTC)

Permutation of position weight matrix for biological problem solving[edit]


I would like to know how to permute automatically a position weight matrix from JASPAR to TRANSFAC mode. In addition, I would be glad to know how to obtain automatically a position weight matrix from a Clustal of several sequences. Thanks in advance. -- (talk) 21:53, 23 January 2011 (UTC)

Isn't transfac just a simple transpose of a jaspar matrix, with a little formatting? Section 6.ii.3 at this page has examples. It looks like the JASPAR database can export in transfac format. There are many flavors of PSSM generators - what is your goal? -- Scray (talk) 02:09, 25 January 2011 (UTC)

I am trying to make some positive controls to find out under what conditions algorithms such as Clover and similar work well. So I decided to test a well-known promoter with a series of PSSMs. I formatted some of them in TRANSFAC mode but, as you can imagine, it is a tedious operation (I tried to export TRANSFAC matrices from the JASPAR database, but I failed). In addition, I experienced difficulties in finding all the PSSMs needed. So I would like to transpose them automatically and also to generate them from a Clustal, when required. Considerations on how to avoid false positives and false negatives are also particularly welcome. -- (talk) 10:59, 25 January 2011 (UTC)

I am familiar with the Clover (software) package from Atlassian, but my sense is that you are referring to something else. The transpose would be pretty easy to do in a number of environments, including R and Perl, but even something as simple as Microsoft Excel could do most of the work (the formatting around the matrix transpose is the most tedious part). You may find related answers at biostar. BTW, I think you are confusing the software program Clustal with "multiple sequence alignment" - the two are not synonymous in the least. -- Scray (talk) 01:45, 26 January 2011 (UTC)
The Clover I mean is this (the Clover linked by me above doesn't fit the problem so well, LOL). Performing the transposition with Excel could be a valid solution, thanks (I am unfamiliar with R and Perl). I used the word Clustal informally to mean a multiple sequence alignment obtained with Clustal. Thanks also for the useful link. Any further suggestions for controlling false positives and false negatives are still welcome. Greetings. -- (talk) 15:47, 26 January 2011 (UTC)
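For anyone landing on this thread later: both operations discussed above (transposing a JASPAR-style count matrix into a TRANSFAC-style layout, and building a count matrix from aligned sequences) are a few lines of Python. This is only a rough sketch; the count matrix, motif name, and sequences below are invented for illustration, and the TRANSFAC formatting is approximate rather than a validated implementation of the format spec.

```python
# Hypothetical example data: a JASPAR-style count matrix.
# JASPAR lists one row per base (A, C, G, T), one column per motif position;
# TRANSFAC lists one row per position, with columns for A, C, G, T.
jaspar = {
    "A": [4, 19, 0, 0, 0, 0],
    "C": [16, 0, 20, 0, 0, 0],
    "G": [0, 1, 0, 20, 0, 20],
    "T": [0, 0, 0, 0, 20, 0],
}

def jaspar_to_transfac(matrix, name="motif"):
    """Transpose a JASPAR base->counts dict into TRANSFAC-style text."""
    bases = ["A", "C", "G", "T"]
    width = len(matrix["A"])
    lines = [f"ID {name}", "P0\tA\tC\tG\tT"]
    for pos in range(width):
        counts = [matrix[b][pos] for b in bases]
        # Consensus letter: the most frequent base at this position.
        consensus = bases[counts.index(max(counts))]
        lines.append("%02d\t%s\t%s" % (pos + 1,
                                       "\t".join(map(str, counts)),
                                       consensus))
    lines.append("XX")
    return "\n".join(lines)

def counts_from_alignment(seqs):
    """Build a JASPAR-style count matrix from equal-length aligned sequences,
    e.g. the columns of a Clustal multiple sequence alignment."""
    width = len(seqs[0])
    matrix = {b: [0] * width for b in "ACGT"}
    for seq in seqs:
        for pos, base in enumerate(seq.upper()):
            if base in matrix:  # skip gaps ('-') and ambiguity codes
                matrix[base][pos] += 1
    return matrix

print(jaspar_to_transfac(jaspar, "TEST"))
print(counts_from_alignment(["CATG", "CATG", "CTTG"]))
```

The same transpose-and-count logic carries over directly to an Excel sheet (Paste Special with Transpose) or an R script (`t()` on the matrix); the Python version is just easier to run over many matrices in a batch.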

Sterile chicken tells no tales[edit]

If I cook, say, a chicken at 200 C (375 F?) in an oven, then switch the oven off and don't open it, so air is not travelling into the oven, will the chicken be sterile and therefore keep for a very long time? —Preceding unsigned comment added by (talk) 23:53, 23 January 2011 (UTC)

No oven is airtight; in fact it is far from it, so the chicken will become contaminated at roughly the same rate as if it were left on the counter. If we're talking about a fully cooked bird, a few hours likely wouldn't hurt it, but it definitely wouldn't keep until, say, next Thursday. --Jayron32 23:58, 23 January 2011 (UTC)
Add to that the fact that an oven isn't an autoclave. The chicken wouldn't keep even in a hermetically sealed oven, as there are still residual microbes even after cooking. Cooking kills a large portion of the bacteria, but not all. -- (talk) 00:55, 24 January 2011 (UTC)
So I'd be better off cooking it in an autoclave? Would it come out crispy? —Preceding unsigned comment added by (talk) 01:13, 24 January 2011 (UTC)
LOL. (Yes, and my apologies in case that was an innocent question.) The issue with your question is that normal cooking will kill most bugs on the outside, but the inside is never quite as hot. Bugs in the middle will survive in numbers sufficient to grow rapidly at room temperature. HiLo48 (talk) 01:21, 24 January 2011 (UTC)
There is such a thing as canned chicken so it is possible to fully sterilize a chicken (it's possible at home too, it doesn't require special equipment). Ariel. (talk) 01:55, 24 January 2011 (UTC)
The chicken will probably keep until next month if you disjoint it before cooking (to be absolutely sure that it cooks right through), then freeze it after a short cooling period. There are risks in home freezing, but dangers can be avoided with care. Unfortunately, freezing will spoil the crispy effect unless you re-cook it. An autoclave (pressure cooker) will not achieve a crispy exterior and is just a way to boil at a slightly higher temperature. I've never tried sealing chicken in an airtight jar (after thorough cooking), but I suspect that there would be too many risks for this to be recommended. Dbfirs 08:04, 24 January 2011 (UTC)
It's not difficult to can meats, but there is more to it than just cooking it and sealing it in a jar. --Sean 15:08, 24 January 2011 (UTC)
Ah yes, thanks for the link. I assume that the secret (to ensure safety) is the higher cooking temperature, longer time, and airtight seal. Dbfirs 19:46, 24 January 2011 (UTC)