
Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia


Welcome to the science section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


November 14

USGS measurements

How is the United States Geological Survey able to measure earthquake magnitudes around the world? That is, do they have their own stations across the world, or do they measure indirectly at home, deducing the magnitude from available data? Thanks.--212.180.235.46 (talk) 09:00, 14 November 2017 (UTC)[reply]

Instruments in the US, others around the world, and international agreements to share data - see National Earthquake Information Center Wymspen (talk) 09:51, 14 November 2017 (UTC)[reply]

Greenhouse effect back-radiation

I looked at this article and the talk page, but didn't get the answer I want. A quite basic calculation (*) shows that, if the greenhouse effect were absolutely perfect – the atmosphere absorbing each and every parcel of energy from the surface (it doesn't matter whether it is absorbed through conduction, convection, phase transition, radiation or whatever) – then back-radiation (let's call it B) peaks at a maximum of A + C, where:
  • A: absorbed by the atmosphere (77.1 according to the picture in the article)
  • C: absorbed by the surface (163.3, same source)
  • A + C: 240.4
BUT B is supposed to be 340.3 (same source), 100 higher than the calculated maximum.

Well, I don't expect NASA to be that wrong, and I think any error would have long been corrected, so I have to suppose that somehow back-radiation is currently HIGHER than in a perfect greenhouse-effect world. My question is: how?


(*) We are looking for a steady-state solution: at equilibrium, and stable (things get back there if some noise disturbs the system). I leave you the easy calculation to get there; it just gives you the only solution – nothing else works.

  • The surface receives C directly and A + C from back-radiation, for a total of A + 2C, which is then all sent up, so the surface is at equilibrium.
  • The atmosphere gets A directly, plus those A + 2C from the surface, for a total of 2A + 2C; half of it (A + C) goes down (the same as the back-radiation used just above – sanity check OK), half of it (A + C) goes up (which is just as much as is absorbed – sanity check OK).

185.24.186.192 (talk) 11:42, 14 November 2017 (UTC)[reply]

The greenhouse effect
Does the simplified schematic from greenhouse effect help? The greenhouse effect is based on a circular flow of energy trapped in the system (i.e. heat). If you look at the schematic, the total energy entering each level is equal to the total energy leaving each level, which corresponds to an equilibrium. (There is actually a slight imbalance these days due to global warming.) However, it is not the case that the back-radiation must equal the total radiation from the sun. The amount of back-radiation depends on the temperature of the atmosphere. Similarly, the amount of energy transfer from the surface depends on the temperature of the surface. The surface and atmosphere will warm up until they reach a temperature where the energy flows out equal those coming in. The warm temperatures at the surface are maintained, in part, by a circular flow of energy which we know as the greenhouse effect. The energy flows from surface to atmosphere and back again happen to be larger than those from the sun, but that isn't a problem as long as we are talking about a closed loop. Dragons flight (talk) 11:58, 14 November 2017 (UTC)[reply]
Thanks, but no, it doesn't help at all: the figures are only slightly different (67 + 168 = 235 vs. 324 back-radiation, instead of 77 + 163 = 240 vs. 340), but they share the same issue.
There is indeed equilibrium at each level, and you would keep that equilibrium at each level by adding just any value, positive or negative, to both the back-radiation and the upward radiation. Subtract 324 from the back-radiation (putting it at zero) and also 324 from the upward radiation (down from 452 to 128), and it still works. Add another 324 to the back-radiation (putting it at 648) and also 324 to the upward radiation (up from 452 to 776), and it also works. Well, no, it doesn't. The system is then, in both cases, out of equilibrium (even though each level is at equilibrium). A zero back-radiation would also mean zero upward radiation from the atmosphere, so the atmosphere would warm up and emit more and more back-radiation until reaching the equilibrium value. Similarly, a 648 back-radiation is way too much, meaning huge losses to space, cooling the atmosphere and lowering the back-radiation until the equilibrium is reached.
The point is, basic (too basic?) calculation puts the said equilibrium at a maximum of 240 (or 235, depending on the schematic) in the perfect-GHE case, while each schematic says that in a NON-perfect GHE case the back-radiation is much higher, when it should be lower (nothing can beat the perfect-GHE scenario).
185.24.186.192 (talk) 13:39, 14 November 2017 (UTC)[reply]
It's just a very simplified model representation, and you added elements which are not in that simple model. One result of that is of course that the numbers in the model no longer add up, because you changed the "formula" that model uses (to result in equilibrium). Find another model that contains your elements, or "manufacture" a model yourself (which you already kinda tried, wrongly, with your question). --Kharon (talk) 14:01, 14 November 2017 (UTC)[reply]
I added elements which ARE not in that simple model, yes – but they are taken from the Wikipedia article or from the schematic provided on the talk page.
I may be wrong – indeed, I asked "how" – so your answer "you are wrong" is just not an answer...
185.24.186.192 (talk) 21:40, 14 November 2017 (UTC)[reply]
Perhaps it is unclear, but the radiation from the surface and the atmosphere is determined by the temperature of each component, not the flux. So, you can't just put in random values without also changing those temperatures (flux emitted is roughly proportional to T^4). Why do you believe 240 is the maximum? It's not. Let's consider a different analogy. Consider an oven. It consists of a heating element, some food you want to cook, and an insulated box. If you want to maintain a constant temperature, then the heat being put into the heating element must equal the heat leaking out of the insulated box. If the insulation is pretty good, then not much energy is leaking, so the flux necessary to maintain a constant temperature is low. However, the flux of energy being radiated between the food and the box and back will be much higher. That's because the inside of the box can get much hotter than the outside. If the insulation were nearly perfect, you could imagine the oven getting ridiculously hot and the internal energy fluxes between the food and the box getting arbitrarily large. This is true even if the heating element is only providing a relative trickle of new energy, since the heat can build inside until an equilibrium is achieved. It's the same with the greenhouse effect in planetary atmospheres. The sun provides new energy, which at equilibrium counters the losses, but the internal transfers of energy can become much larger than the source flux, depending on the characteristics of the atmosphere. For a thin atmosphere (like Mars) nearly all surface radiation escapes directly to space, the back-radiation is very low, and the temperature enhancement is negligible. For a thick atmosphere (like Venus), essentially all surface radiation is captured by the atmosphere, the back-radiation is enormous, and the temperature enhancement is huge. Earth happens to lie in between these extremes. Dragons flight (talk) 16:27, 14 November 2017 (UTC)[reply]
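(A minimal numeric sketch of that oven argument, in Python; the leak fraction f and the 100 W heater are illustrative assumptions, not numbers from any schematic. At equilibrium the leak must equal the supply, so the flux circulating inside settles at P/f, which can dwarf P when the insulation is good:)

    # The box leaks a fraction f of the radiation hitting its inner wall
    # and returns the rest inside; the heater supplies P watts.
    # At equilibrium, leak = supply, so the internal circulating flux is P / f.
    def internal_flux(P, f):
        """Equilibrium flux bouncing around inside a leaky box."""
        return P / f

    for f in (1.0, 0.5, 0.1, 0.01):
        print(f"leak fraction {f:4.2f}: internal flux = {internal_flux(100, f):8.1f} W")
    # With f = 0.01, a 100 W heater sustains a 10,000 W internal flux,
    # even though only 100 W enters and 100 W leaves the system.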
More food for thought here, thanks.
The radiation from the surface and the atmosphere is determined by the temperature of each component, not the flux – but the flux determines the temperature: a higher flux in or out respectively warms or cools the element until flux in and out balance again.
Your oven analogy is perfect. Even a perfect insulation box radiates energy out because of its own temperature, and this temperature will increase until the radiation out perfectly matches the radiation the insulating box receives from inside. And you can even calculate it, and that is just what I did:
The heating element brings C, heating the insulating box until its temperature rises to the appropriate level to radiate out C, no more, no less; A is zero (no direct heating of the insulating box, neither from the outside nor from the heating element inside). The insulating box also radiates C back into the oven (back-radiation B = C), because otherwise it would either cool or warm (if it were more or less). So the food actually gets B + C = 2C of heating (C from the heating element + B = C of back-radiation), which it also sends back to the insulating box (so the box receives 2C, sends C out and C back in: balance respected), and everything balances perfectly, and stays so because this is a stable equilibrium. So it doesn't get ridiculously hot inside the oven, the maximum heating being A + 2C, as calculated above, with A = 0 in your oven case.
And that's why I believe 240 is the maximum back-radiation: because the calculation shows it to be. It is not a "random value". It is the absolute maximum in the most perfect insulation case (unless something is wrong here, but what?)
Now, I understand your point that, the surface temperature being more or less known, the surface upward radiation cannot be very different from 452, and so the back-radiation must be whatever is needed to balance things out – 324 in your schematic. Higher than 235.
Well, the only sensible conclusion is that the atmosphere is better than a simple insulation layer: a heat pump. Heat pumps exist, we build them, so why not nature? But I don't see how this works, nor where it would pump heat from, and it is not explained in Wikipedia, if it were so. Back to the start: how is this possible?
185.24.186.192 (talk) 21:58, 14 November 2017 (UTC)[reply]
The insulating box doesn't radiate at the same rate inwards and outwards. 93.136.80.194 (talk) 08:20, 15 November 2017 (UTC)[reply]
I think you are right, but this doesn't explain why, and it actually is just another way to put my initial question: why would the insulating box (a perfectly absorbing atmosphere, chock-full of GHGs) radiate at different rates inwards and outwards?
185.24.186.192 (talk) 11:58, 15 November 2017 (UTC)[reply]
Imagine a box made of two thin shells. Each shell is perfectly absorbing and radiates at the same rate inwards and outwards. When the inner shell receives 1 unit of energy, 0.5 is backradiated and 0.5 is sent to the outer shell. Of the latter 0.5, 0.25 is radiated out and 0.25 is backradiated onto the inner shell. Of that 0.25, 0.125 is radiated inside (total for inside is 0.625 now), and 0.125 is backradiated onto the outer shell, and so on. In the end, 2/3 of the energy is backradiated and 1/3 is let through outside. If you add more shells, you can make the fraction radiated out as small as you want.
If this box has reached equilibrium, the amount of heat radiated to the outside is equal to the amount being received by the system. But to get to that point, the box contents might have received far more energy than it could radiate for a long time, and this would have caused an arbitrarily large buildup of energy. The system may receive 1 W and radiate 1 W, but that doesn't preclude that there's 200 W bouncing off the box's inner walls (and that doesn't necessarily imply that the box has been heated to its capacity as an insulator and will start to disintegrate and radiate out much more than its usual fraction). 93.136.80.194 (talk) 19:13, 15 November 2017 (UTC)[reply]
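(The shell bookkeeping above can be checked with a short Python sketch – an illustrative simulation, assuming N perfectly absorbing shells that each re-emit half of what they receive inward and half outward. It reproduces the 1/2–1/2 split for one shell and the 2/3–1/3 split for two, and shows the escaping fraction falling as 1/(N+1):)

    def split_fractions(n_shells, rounds=200):
        """Fractions of 1 unit (injected from the centre) that end up
        back inside versus escaped, for n_shells absorbing shells."""
        incoming = [0.0] * n_shells   # energy absorbed per shell this round
        incoming[0] = 1.0             # the unit injected from the centre
        inside = escaped = 0.0
        for _ in range(rounds):
            outgoing = [0.0] * n_shells
            for i, e in enumerate(incoming):
                half = e / 2.0
                if i == 0:
                    inside += half            # inward half returns inside
                else:
                    outgoing[i - 1] += half   # absorbed by the next shell in
                if i == n_shells - 1:
                    escaped += half           # outward half escapes to space
                else:
                    outgoing[i + 1] += half   # absorbed by the next shell out
            incoming = outgoing
        return inside, escaped

    for n in (1, 2, 3, 10):
        inside, out = split_fractions(n)
        print(f"{n:2d} shell(s): back-radiated {inside:.4f}, escaped {out:.4f}")
    # 1 shell: 1/2 and 1/2; 2 shells: 2/3 and 1/3; N shells: N/(N+1) and 1/(N+1)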
(indent out)
I see, but, as you point out, this requires 2 (or more) PERFECT boxes, not a single perfect one.
If the two boxes are not perfect, but rather 2 imperfect ones, each absorbing half of the incoming energy from inward, so that the multilayer system is still perfect, what happens? Is there any multiplicative effect?
For the "ground": initial heating: C; back-radiation from the inner layer to the bottom: C; total emission: 2C, of which C to the inner layer and C to the outer layer.
For the outer layer: directly from the ground: C; sent downward: C; radiated outward: C; received from the inner layer: C.
For the inner layer: directly from the ground: C; radiated downward: C; sent to the outer layer: C; received from the outer layer: C.
No multiplicative effect. A perfect box is a perfect box, whether it is single-layered or multilayered to achieve perfection. You can change the number of layers to infinity, change the ratio received by each layer; no matter what, you cannot beat perfection.
Well, you can, but you need some sort of heat pump, pumping energy from the outer layer(s) to the inner layer(s).
However, you made me think of a real engine able to power such a heat pump, and that is gravity, powering the lapse rate. The lapse rate allows the top of the atmosphere to be at a lower temperature than the bottom, so it allows higher emission downward than upward. It is starting to make better sense.
It is already stated in the relevant article that "greenhouse effect" is a misnomer; I now know it is a double misnomer: the lapse rate is involved, despite not being mentioned (methinks it should be, but I guess fixing the article is not that easy).
Thanks, consider the question answered.
185.24.186.192 (talk) 11:21, 16 November 2017 (UTC)[reply]
The "perfect" multilayered box you describe does not exist because radiation cannot "skip" layers. At each layer is absorbed and dissipated in all directions including back, so naturally less energy reaches the outer layers. Besides, what you're talking about doesn't describe the Earth's atmosphere because it simply wouldn't be an insulator; Earth's atmosphere's lapse rate proves that it does insulate. 93.136.10.152 (talk) 20:35, 16 November 2017 (UTC)[reply]

November 15

Positronium diameter

In the book "Parallel Worlds" Michio Kaku writes that in the Dark era of the universe intelligent life might survive by being based on positronium atoms which would be 10^12 parsecs in diameter. How come these atoms would be so huge when Wikipedia says that nowadays they're the size of an ordinary hydrogen atom? 93.136.80.194 (talk) 08:13, 15 November 2017 (UTC)[reply]

When positronium is in an excited state it becomes bigger. It does decay, but the higher the state, the longer its life time. It does not have to be so big to last a long time. This would be termed a Rydberg atom. Some may combine together to form Rydberg matter. A solid positronium chunk of matter based on this would be less dense than air. Graeme Bartlett (talk) 12:31, 15 November 2017 (UTC)[reply]
Let me try to understand your question: In a <book of fiction> the author writes about <some concept they made up by plucking an existing scientific-sounding word out of the air> and now you want us to explain it? You'd have to ask the author. It's their imagination. Explaining the fictional scientific concepts in real science terms is always a futile exercise. --Jayron32 13:32, 15 November 2017 (UTC) Sorry for the misunderstanding. Carry on. --Jayron32 16:06, 15 November 2017 (UTC)[reply]
FYI, Parallel Worlds is intended as a work of popular science non-fiction. That said, I don't know the answer to the IP's question or whether he is accurately describing what is presented in the book. Dragons flight (talk) 14:46, 15 November 2017 (UTC)[reply]
The book is on archive.org (apparently legally); search for 'positronium'. Positronium#Natural occurrence also mentions this, with a link to a paper. Basically, they are talking about the distant future when the density of matter in the Universe is extremely low and after nucleons (protons and neutrons) have decayed away (if protons do decay). In such an environment huge positronium "atoms" can be stable over a long time scale (small positronium atoms would annihilate quickly) and seem to be the only thing that is still around if this scenario is correct. --Wrongfilter (talk) 15:56, 15 November 2017 (UTC)[reply]
So arbitrarily large atoms can be created? Why 10^12 pc then? 93.136.80.194 (talk) 19:52, 15 November 2017 (UTC)[reply]
"Atom" is a funny word here, and it depends on what you mean by an "atom". Positronium has some properties like an atom, in that it is metastable enough at current conditions to be studied, it forms chemical bonds with other atoms, etc. Indeed positronium hydride has been created long enough to be studied; the half-life of positronium being longer than some transuranium isotopes. But it isn't really an "atom", if you mean "A group of nucleons surrounded by an electron cloud". What it is is an electron and positron with enough quantum pressure to keep them in the same general area long enough to have consistent properties. The question being asked (and answered) by the 10^12 parsecs answer is something akin to "at what distance will a bound electron-positron pair be such that the quantum pressure keeping them apart would be sufficient to prevent them from collapsing together and annihilating?" and apparently that answer is "a trillion parsecs" I don't know the specifics of the math here, but that's how I interpret the result. Now, since this thing would only really be able to exist in a state that large if there were literally nothing else left to interact with it in ways that may disrupt its stability, that would be a very empty universe indeed. But I think that's the point, the author is looking for some sort of matter which would still exist. As long as you have matter, you can store information, and if you can store information, you're not yet at the end of time. --Jayron32 20:16, 15 November 2017 (UTC)[reply]
I see, thanks. 93.136.80.194 (talk) 20:33, 15 November 2017 (UTC)[reply]
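(For scale, a back-of-the-envelope Python sketch of the quantum number such an atom would need – assuming simple hydrogen-like Rydberg scaling, where positronium's reduced mass of half the electron mass doubles its Bohr radius, so r_n ≈ 2·n²·a0; the target size is the 10^12 parsecs quoted from the book:)

    import math

    A0 = 5.29177e-11          # Bohr radius, m
    PARSEC = 3.0857e16        # m
    r_target = 1e12 * PARSEC  # the 10^12 parsecs quoted from the book

    # hydrogen-like scaling with reduced mass m_e/2: r_n ~ 2 * n**2 * a0
    n = math.sqrt(r_target / (2 * A0))
    print(f"principal quantum number n ~ {n:.2e}")  # ~ 1.7e19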
I think "neutrino nuggets" make for a much more interesting speculation in this direction - hmmm, I thought we had an article... anyway, the general idea is that all the light fast-moving particles that don't interact with anything in a given era of the cosmos eventually tend to slow down as the overall temperature decreases, while their interactions at increasing distances become relevant as much longer time scales become of interest. It's thought that there can be interesting "neutrino chemistry", which due to high speeds and poor interaction is presently not readily accessible to us for study. Wnt (talk) 16:45, 19 November 2017 (UTC)[reply]

Baked Beans

Question posed by a blocked user. ←Baseball Bugs What's up, Doc? carrots19:29, 15 November 2017 (UTC)[reply]
The following discussion has been closed. Please do not modify it.

It is well known that baked beans can cause flatulence. According to the article this is "due to the fermentation of polysaccharides (specifically oligosaccharides) by gut flora, specifically Methanobrevibacter smithii. The oligosaccharides pass through the small intestine largely unchanged; when they reach the large intestine, bacteria feast on them, producing copious amounts of flatus."

1) Of the carbohydrate content of baked beans, what percentage is actually polysaccharides? For example, this can from Heinz says 11.4g of carbohydrate per 100g. How much of that is polysaccharides?

2) When the polysaccharides are feasted on by bacteria, how much of it gets absorbed by the human body or wasted?

Thanks 91.47.17.210 (talk) 10:09, 15 November 2017 (UTC)[reply]

See "Polysaccharide from Dry Navy Beans, Phaseolus vulgaris: Its Isolation and Stimulation of Clostridium perfringens", [1], a wonderful research paper that discusses both the polysaccharide content of a few bean varieties, and also gives measurements for how much gas is produced. What a world! SemanticMantis (talk) 17:34, 15 November 2017 (UTC)[reply]

Baked beans and polysaccharides

Wikilawyers, take a sidebar. Please???
The following discussion has been closed. Please do not modify it.

I read something recently that made me wonder about baked beans and flatulence. It is well known that baked beans can cause flatulence. According to the article this is "due to the fermentation of polysaccharides (specifically oligosaccharides) by gut flora, specifically Methanobrevibacter smithii. The oligosaccharides pass through the small intestine largely unchanged; when they reach the large intestine, bacteria feast on them, producing copious amounts of flatus."

The questions are:

1) Of the carbohydrate content of baked beans, what percentage is actually polysaccharides? For example, this can from Heinz says 11.4g of carbohydrate per 100g. How much of that is polysaccharides?

2) When the polysaccharides are feasted on by bacteria, how much of it gets absorbed by the human body or wasted?

Thanks, SemanticMantis (talk) 19:34, 15 November 2017 (UTC)[reply]

I have found a suitable reference on the topic, but I'm curious to see what anyone else can dig up on question 2). "Polysaccharide from Dry Navy Beans, Phaseolus vulgaris: Its Isolation and Stimulation of Clostridium perfringens", [2], a wonderful research paper that discusses both the polysaccharide content of a few bean varieties, and also gives measurements for how much gas is produced. What a world! SemanticMantis (talk) 19:34, 15 November 2017 (UTC)[reply]
  • WP:POINTY: "When one becomes frustrated with the way a policy or guideline is being applied, it may be tempting to try to discredit the rule or interpretation thereof by, in one's view, applying it consistently. Sometimes, this is done simply to prove a point in a local dispute. In other cases, one might try to enforce a rule in a generally unpopular way, with the aim of getting it changed.
"Such behavior, wherever it occurs, is highly disruptive and can lead to a block or ban. If you feel that a policy is problematic, the policy's talk page is the proper place to raise your concerns. If you simply disagree with someone's actions in an article, discuss it on the article talk page or related pages. If mere discussion fails to resolve a problem, look into dispute resolution."
The user for whom you have made yourself a proxy can always post under his real identity 60 days from the time he was blocked. Hopefully that addresses your problem, SemanticMantis. μηδείς (talk) 04:03, 16 November 2017 (UTC)[reply]
I had figured it was the banned user Light Current. ←Baseball Bugs What's up, Doc? carrots04:13, 16 November 2017 (UTC)[reply]

Off topic shouting. Take it to the talk page if you must SemanticMantis (talk) 13:41, 16 November 2017 (UTC)[reply]

It's already on the talk page, and it is on-topic. ←Baseball Bugs What's up, Doc? carrots15:31, 16 November 2017 (UTC)[reply]

This is off topic for the Refdesk, so I'm opting for a hat here. Wnt (talk) 16:47, 19 November 2017 (UTC)[reply]

November 16

"After"-Talk

Is it just a myth, or a fact, that certain types of bugs can make it possible to listen to what was talked about in a room up to even an hour after the talk has ended? I mean, the device wasn't there when the conversation was going on and was installed, say, many minutes after the talkers had left the premises.  Jon Ascton  (talk) 05:13, 16 November 2017 (UTC)[reply]

Just a myth. Perhaps someone can link to a reference that debunks this fanciful notion? Dbfirs 08:43, 16 November 2017 (UTC)[reply]
(edit conflict) It is hard to debunk the general concept of an after-talk listener – there is no rock-solid physical principle that says you cannot, as there would be for a claim that you can listen before the talk happens. But any specific implementation I can imagine would be easily debunked.
For instance, "picking up the attenuated sound waves bouncing off the walls with a sensitive microphone" is next to impossible: (1) since the sentence spoken at t is still bouncing around when the sentence at t+Δt is spoken, it will need a whole lot of deconvolution that may or may not be possible and will in any case surely worsen the signal-to-noise ratio; (2) except at resonant frequencies of the room, sound attenuates quite fast (i.e. the Q factor is low) (test: shout at the top of your lungs, and listen whether you hear anything a few seconds after you stopped: you don't, which means the decibels drop fairly quickly); (3) microphones are not much more sensitive than the human ear and are way less complex as far as signal processing goes (see e.g. [3], [4]), so if you cannot hear something, it is usually a good guess that a microphone next to you cannot either. (I remember someone saying that the acoustic noise generated by air at room temperature was not far below the threshold of human hearing and some people with Hyperacusis could hear it, but I could not track down a source to that effect - anyone else can, or is that just another myth?) TigraanClick here to contact me 09:07, 16 November 2017 (UTC)[reply]
Methinks it would require some kind of Echo chamber. But unless it could reverberate for a very long time, the OP's concept wouldn't work. Also, you'd likely have echoes of different parts of the conversation going on all at once, and it would require some tedious work to separate it out. ←Baseball Bugs What's up, Doc? carrots09:26, 16 November 2017 (UTC)[reply]
The events were recorded when they occurred (and could then have been relayed later). There are many ways to record people which they are not necessarily aware of... The belief however reminds me of "wall memory", a spiritualist belief that objects, walls and houses could have memory which they could echo later in particular circumstances to explain paranormal encounters etc. —PaleoNeonate10:55, 16 November 2017 (UTC)[reply]
The device doesn't have to be inside the room to hear what's going on. See laser microphone. 82.13.208.70 (talk) 11:16, 16 November 2017 (UTC)[reply]
Perhaps the users of such devices spread rumours about recording with a device installed long after the event, just to hide how sensitive their real-time devices really are. Dbfirs 12:27, 16 November 2017 (UTC)[reply]
Oh, it's a myth, but it would be useful if you could provide a source/link for the original claim. It would be much easier for us to provide sources to debunk a particular and specific assertion, rather than just throwing open the field to try to prove a general negative.
For a thought experiment, though, consider that the speed of sound is about 300 meters per second, and a good-sized meeting room might be 10 meters across. In one second, a sound originating from a point in the room will have bounced back and forth off the walls of the room 30 times. (It's even worse if you remember that rooms have ceilings about 3 meters high; that's 100 floor-ceiling bounces per second.) A minute later, a single short, clear sound will have bounced off at least a couple of thousand surfaces, getting spread out, attenuated, and jumbled into a bit of molecular jiggling indistinguishable from heat. A hard surface like concrete, drywall, or glass might reflect as much as 98% ([5]) of the sound energy that hits it back into the room—an echo. If we do the math for 2000 ideal 98% bounces, we get... 2×10^-18 times the original intensity. And that's your best case, because it doesn't account for the presence of soft sound-absorbing objects in the room, like chair cushions, drapes, or people, and it doesn't account for the miserable nuisance of multiple paths interfering with each other.
If I fire a pistol in an office with a closed door, and then open the door a minute later, the guys out in the hall don't hear a 'bang' when the door opens. Forget trying to get information about something like a conversation. TenOfAllTrades(talk) 13:48, 16 November 2017 (UTC)[reply]
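(The bounce arithmetic above, as a quick Python check; the 98% reflectivity and the 2,000 bounces per second are the assumed best-case figures from the estimate above:)

    import math

    reflectivity = 0.98          # hard-surface best case, per bounce
    bounces_per_second = 2000    # rough figure from the estimate above
    remaining = reflectivity ** bounces_per_second
    print(f"intensity after 1 s: {remaining:.2e} of the original")  # ~2.8e-18
    print(f"that is {10 * math.log10(remaining):.0f} dB down")      # ~ -175 dB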
There's some novel where "the words used by Jesus to raise Lazarus from the dead", I think, are recorded in a ceramic by the vibrations of a potter's wheel or some such. But sound can be very weak (the ear is said to be able to sense an eardrum motion on the order of a single atom's radius), and these fictional scenarios are usually only that. Although yes, spies can record conversations by bouncing lasers off office tower windows and watching the vibration. You can certainly picture some loopy scenario where such a wiggling of an office tower window reflecting a distant bright object gets recorded on a security camera or something. But the easiest fiction is for a cop to tell a criminal defendant of average intelligence that they have such a device, and that he can get a lesser charge by talking before they have to bring it out to the scene. Wnt (talk) 16:53, 19 November 2017 (UTC)[reply]
  • I just read of a sci-fi story from the '40s (can't remember if it's a book or movie) where, in the future, all prior actions can be reconstructed from trace imprints and vibrations, so the murderer concocts a plot entirely in his head and befriends his intended victim to allay suspicion. It's killing me that I can't think of the name, remember the source, or find it on Google. It may have been mentioned in the October Discover or Sci Am magazines, which I returned to the library last week. In any case, the idea is nonsense simply based on chaos theory, namely nonlinear feedback and path independence. Most of the relevant info is quickly overwhelmed by signal noise, destroyed by entropy, or simply not recorded in the first place. For example, leave two glasses of water in the fridge an hour apart, then try to find out the next day which one was placed in first. The temperature will give you no clue, and other hints will also very swiftly disappear. μηδείς (talk) 22:06, 19 November 2017 (UTC)[reply]

How far from the center of its atom has a bound electron in the observable universe ever reached?

Out of zillions of atoms, one has had an electron reach the most picometers from the center since the Big Bang. This distance should be estimatable, right? Maybe it's "more wrong" to think of electrons this way than as clouds, but you're only probabilistically estimating, not observing and changing actual electrons. Sagittarian Milky Way (talk) 08:57, 16 November 2017 (UTC)[reply]

Even before you get into quantum weirdness, your question is poorly defined. Say there are only 2 protons and 2 electrons in the universe. If the two electrons are both closer to proton A than to proton B, do you have two hydrogen atoms, or a positive hydrogen ion and a negative hydrogen ion? i.e. when an electron is far enough away from the atom, it ceases to be meaningful to define it as an atom (and that's before you get to issues regarding all electrons being interchangeable, and not having defined positions unless measured). MChesterMC (talk) 09:35, 16 November 2017 (UTC)[reply]
The greater issue then is that what you really have is just a set of data describing the location of charge centers in relation to each other and their relative movement. Concepts like "electron" and "proton" and "ion" and "atom" are human-created categorizations to make communicating about complex data like this meaningful to us. We define the difference between an atom and an ion based on our own (arbitrary but useful) distinctions. What makes something a thing is that we set the parameters for that thing. There is no universal definition for that thing outside of human discussion. --Jayron32 11:50, 16 November 2017 (UTC)[reply]
Also there is no such thing as the center of the Universe.--Shantavira|feed me 11:14, 16 November 2017 (UTC)[reply]
True, but I read it as being from the centre of the atom. My quantum mechanics isn't up to the task, but it should be possible to estimate a probable maximum distance just by multiplying the probability density function from the Schrödinger equation by the number of atoms being considered, perhaps just for hydrogen atoms. Whether such a distance could ever be measured in practice is questionable, but the mathematics based on a simple model should provide a very theoretical answer. Do we have a quantum mechanic able to offer an estimate? Dbfirs 12:22, 16 November 2017 (UTC)[reply]
You have to define your probability limits. The maximum distance is infinite if you don't define a limit, like 90% probability, or 99% probability, or 99.99% probability. If you set the probability to 100%, you get a literally infinitely large atom. --Jayron32 12:36, 16 November 2017 (UTC)[reply]
Yes, of course, that's what the probability density function gives, but if you find the distance at which the probability is ten to the power of minus eighty, then we have a theoretical figure for the maximum expected distance since there are about ten to the power of eighty hydrogen atoms in the observable universe. Statisticians might be able to refine this estimate, and I agree that it might bear little relevance to the real universe. Dbfirs 12:50, 16 November 2017 (UTC)[reply]
In the ground state, the distance you are asking about is ~100 times the Bohr radius of a hydrogen atom. However, in principle there exist an infinite number of potential excited states with progressively increasing orbital sizes. Very large orbitals involve energies very close to but slightly below the ionization energy of the atom. In that case the electron is only very weakly bound. Aside from the general problem that the universe is full of stuff that will interfere, there is no theoretical reason why one couldn't construct a very lightly bound state with an arbitrarily large size. Dragons flight (talk) 14:12, 16 November 2017 (UTC)[reply]
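(Dragons flight's ~100 Bohr radii figure can be checked numerically. For the hydrogen 1s state, the probability of a position measurement landing beyond R = x·a0 works out to e^(−2x)·(1 + 2x + 2x²); setting that to 10^−80 – roughly one hit among all the hydrogen atoms in the observable universe – and bisecting, a Python sketch:)

    import math

    def tail_probability(x):
        """P(r > x * a0) for the hydrogen ground state."""
        return math.exp(-2 * x) * (1 + 2 * x + 2 * x * x)

    # bisect for tail_probability(x) == 1e-80 (the tail is monotone decreasing)
    lo, hi = 1.0, 200.0
    while hi - lo > 1e-6:
        mid = (lo + hi) / 2
        if tail_probability(mid) > 1e-80:
            lo = mid
        else:
            hi = mid
    print(f"x = R/a0 ~ {lo:.1f}")  # ~ 97, i.e. roughly 100 Bohr radii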
The important thing to remember here is that energies are quantized but distances are not. This is what the uncertainty principle is about. You can't handwave some "yeahbuts" around; the position of an electron with a well-defined momentum is fundamentally unknowable, which means that the chance of finding that electron at any arbitrary point in the universe is not zero. In a single-hydrogen-atom universe, we can construct a hydrogen atom of any arbitrarily large radius by asymptotically approaching the ionization energy of the atom (this is akin to the escape velocity in a gravitationally bound system). As the energy level of an electron approaches asymptotically close to the ionization energy, the radius of that atom tends towards infinity. Well, sort of. The radius itself is not a well-defined thing, but any given radius definition (such as the Van der Waals radius) will tend to increase towards arbitrarily large values as one approaches ridiculously high energy levels. There are an infinite number of energy levels below the ionization energy, so you can get arbitrarily close to it without passing it. That's what DF is on about. In a real universe with other atoms, a highly excited electron is able to absorb enough energy from a stray photon to excite it past the ionization energy, so in practical terms, there are practical limits to the size of atoms, but those are imposed by factors external to the atom, not by internal forces within the atom. Purely as a system unto itself, there is no limit to the distance at which a bound atom can remain bound – only an energy limit. --Jayron32 15:15, 16 November 2017 (UTC)[reply]
There's no way you can find an answer to "there's a 50% chance a bound electron has not been x far from its atom's center" in some way not really applicable to the real universe? Something like this: if you were to measure the position of every electron once to good accuracy (clearly not possible for many reasons, e.g. sentient life postdated atoms), there should be a 50% probability that one of the bound electrons is x far out. The most likely electron speed is y (which has to have an answer, since superheavy elements get relativistic effects from their electrons moving that fast). It takes z time for an electron at the most likely or 50th-percentile distance to move a reasonable distance away from where it was at that speed (say 1 radian – yes, they don't really orbit). There have been w of these time periods since the fraction of hydrogen atoms that were non-ionized became similar to now (does that even cover most of the time since stars?), and that could be taken as w more chances for an electron to get far, so you could then calculate the distance with w times more atoms, each being measured once. So if (numbers for example, not accurate) there were 10^80 atoms and there have been 10^34 periods z long so far, you'd find the 50%-probability maximum for 10^114 atoms being measured once (since good positional accuracy would screw with trying to measure the real universe's electrons' positions quadrillions of times per second). If cosmological weirdness has made the amount of (normal) matter within the observable boundary vary a lot, I wouldn't mind if that was ignored to make the math easier. Sagittarian Milky Way (talk) 19:27, 16 November 2017 (UTC)[reply]
There's some fun stuff that happens with low-probability statistics and indistinguishable particles.
The probability that an electron is measured at a distance of a thousand light-years radially from the proton it orbits is very low but non-zero.
But - if you set up a detector, and you register one "count", can you prove that the electron you observed is the one you were trying to measure?
No, you cannot. Your detector might have measured noise - in other words, it might have been lit up by a different electron whose interaction with your detector was more or less likely than the interaction with your super-distant electron. Actually, the probabilities and statistics don't matter at all, because we have a sample-size defined exactly as one event. Isn't quantization fun?
In order to prove that it was the electron you were hoping to measure, you need to repeat the experiment and detect a coincidence. The probability that you will measure this is very low, but non-zero, squared - in other words, it won't be measurable within the lifetime of the universe.
Here's what Plato Encyclopedia has to say on this topic: Quantum Physics and the Identity of Indiscernibles.
Take it from an individual who has spent a lot of time counting single electrons - even with the most sophisticated measurement equipment, my electron looks exactly like your electron, and I can't prove which electron was the one that hit my detector.
Nimur (talk) 19:53, 16 November 2017 (UTC)[reply]
I think the weirdness is weirder than that. I mean, the question is how far any electron in the history of history has ranged away from its nucleus and returned. But electrons are... a cloud of probability. At any instant you could measure the electron and find it anywhere. So if you measured any one electron at every possible instant, an infinite number of instants, it should range pretty much an infinite distance away. Except... that measuring the electron changes it! If you find the electron a light year away from its nucleus, you won't find it a Bohr radius away a nanosecond later, because the probabilities are all changed. And the reverse is also true. So you can't talk about where you would have measured an electron, but only where you did measure it.
But even if you rephrase the question to where an electron has been found in the history of history, it's still a problem ... because if you measured its position, it is not possible to measure its momentum accurately, to determine if it was truly "in orbit" or if it had simply been ejected from the atom! And so you can't actually give a distance; you can only give a probability function that may extend out an arbitrarily large distance.
I think... Wnt (talk) 17:04, 19 November 2017 (UTC)[reply]
In other words, shut up and calculate. --47.138.163.207 (talk) 09:45, 20 November 2017 (UTC)[reply]

ethanol fermentation

A gram of sugar has 4 calories and a gram of alcohol has 7 calories. Would anyone be able to tell me what the approximate conversion rate of sugar to alcohol in ethanol fermentation is, in calories? E.g., if you put 400 calories of sugar into the reaction, how many calories of ethanol do you get out? I've tried reading the article but all the equations are way over my head, sorry. Thanks for your time. — Preceding unsigned comment added by Oionosi (talkcontribs) 10:21, 16 November 2017 (UTC)[reply]

C6H12O6 → 2 C2H5OH + 2 CO2
translates into
180 g sugar → 2 × 46 g alcohol + 2 × 44 g CO2,
(those weight values can be found at the respective articles on glucose, ethanol and CO2)
4 × 180 calories of sugar (= 720) → 7 × 2 × 46 calories of alcohol (= 644) + 76 calories lost
As you see, this was not "way over your head"; you underestimate yourself. — Preceding unsigned comment added by 185.24.186.192 (talk) 11:36, 16 November 2017 (UTC)[reply]
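(The same arithmetic as a Python sketch, using the OP's round figures of 4 cal/g for sugar and 7 cal/g for ethanol; the 400-calorie input is the OP's example:)

    # C6H12O6 -> 2 C2H5OH + 2 CO2   (molar masses: 180, 46 and 44 g/mol)
    SUGAR_CAL_PER_G, ETHANOL_CAL_PER_G = 4.0, 7.0

    grams_sugar = 400 / SUGAR_CAL_PER_G           # 400 calories of sugar
    grams_ethanol = grams_sugar * (2 * 46) / 180  # stoichiometric yield
    cal_ethanol = grams_ethanol * ETHANOL_CAL_PER_G
    print(f"{grams_sugar:.0f} g sugar -> {grams_ethanol:.1f} g ethanol")
    print(f"400 cal in -> {cal_ethanol:.0f} cal out "
          f"({100 * cal_ethanol / 400:.0f}% retained)")
    # 100 g sugar -> 51.1 g ethanol; 400 cal in -> 358 cal out (~89%)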
That looks like a valid calculation (though the one-significant-figure calories given by the OP lack the precision to come up with an accurate "calories lost" figure). I'll add a note of explanation though. With fermentation the idea is to take something in an intermediate redox state and split it up into more oxidized and reduced components. In a very loose sense it is like the opposite of a combustion reaction, though one rarely thinks of burning something (even something greasy) in carbon dioxide! So you start with a sugar, i.e. -CHOH-. Not counting any ends, the formal oxidation state of the carbon is -1 from the direct hydrogen and +1 from the oxygen bonds adding to 0; the oxygen is -2 as usual. Such a compound could be produced by the oxidation of -CH2- (which has carbon at -2) with one oxygen. As a result it has less energy than -CH2-, assuming an oxygen-containing atmosphere. (On Saturn or early Earth it would have more energy than -CH2- since that won't do much in regard to methane) Oxidation can take a terminal -CH2OH (alcohol, -1) and convert it to -CHO (aldehyde, +1), then -COOH (carboxylic acid, +3), then separate it as CO2 (carbon dioxide, +4, with a -1 change in oxidation state on the other end as a hydrogen replaces this carbon on the decarboxylated main chain). Reduction can convert -CH2OH to -CH3 (methyl, -3). By such manipulations, we see that the initial six +0 CHOH carbons become two +4 CO2s, two -3 methyls, and two -1 alcohols. As a result, the "burnability" of six carbons is concentrated into four carbons, while the other two are now completely burned to CO2. This doesn't quite match the 7/4 ratio because the energy of compounds is more complicated than that (after all, oxidation state is a rather crude approximation that assumes absolute differences based on quantitative differences in electronegativity, among many other things) but I think it is qualitatively useful. Wnt (talk) 01:14, 20 November 2017 (UTC)[reply]

Telephone lines

Can someone explain how telephone lines "work", i.e. how it is that they can carry multiple conversations simultaneously, rather than being busy for all the people on the exchange whenever any subscriber is on the phone? I looked at telephone and telephone line without seeing anything, and Google wasn't helpful either. I would imagine that the line would carry electrical pulses just one at a time (as on a party line), and multiple conversations would cancel each other out, but obviously our whole telephone system wouldn't work properly if that were the case. Nyttend (talk) 12:23, 16 November 2017 (UTC)[reply]

This has a pretty good explanation, getting down to how signals are encoded for travelling down the wire. --Jayron32 12:34, 16 November 2017 (UTC)[reply]
Hm, so nowadays it's electronic when going from exchange to exchange; not surprised, but I wasn't aware of that. And I didn't realise that there was a completely separate wire from every subscriber to the exchange, or I wouldn't have wondered. But before electronics, how was it possible for two subscribers from the same exchange to talk simultaneously with two subscribers from the other exchange, rather than one person taking it up and preventing other callers? Does Myrtle have multiple individual wires going to every nearby town's exchange, and a massive number of individual wires going to some nationwide center in order to allow Fibber to phone someone halfway across the country? Nyttend (talk) 12:49, 16 November 2017 (UTC)[reply]
Andy gives a good summary below. In the early days of telephone systems there really was a direct electrical connection that had to be made between each pairs of callers, and no one else could use the same wires at the same time, so each hub on the network had to have many separate wires available that could be variously connected to make one complete circuit for each call. However, we have long since abandoned that approach. Nowadays everything is generally digitized and travels over packet switched networks. Depending on where you live, and who provides the phone line, the digitization may happen inside your home or at some regional or central exchange. Dragons flight (talk) 14:25, 16 November 2017 (UTC)[reply]
  • There are several methods used historically.
  • Party line systems were connected simply in parallel. Only one could be used at a time.
  • Underground cables used a vast number of conductor pairs, one for each circuit. 100 pair cables were common, far more than could ever be used by overhead cables, which were mostly single pairs to each visible cable (the first telegraph signals used a single copper conductor for each visible wire, so a telephone circuit might need two wires and pairs of china insulators). Cables for the 'local loop' from the exchange to the telephone used a single pair for each phone. Cables between exchanges were of course circuit switched to only need enough pairs for the calls in progress (not the number of phones) and many calls would be local, within the same exchange.
  • Analogue multiplexing[6] was used from the 1930s (rarely) to the 1980s. Like a radio, this was a broadband system that packed multiple separate signals down the same cable by multiplexing them. Frequency division multiplexing was used, like an AM radio. Each telephone signal only needed a narrow bandwidth of 3kHz: 300Hz to 3.3 kHz. This meant that the largest trunk lines could carry several MHz signals over a coaxial copper tube conductor, several to a cable, and these could each carry thousands of voice phone calls - or a single TV signal, for the early days of national TV in the 1950s-1960s.
  • In the 1980s, PCM (pulse code modulation) came into use, where analogue phone signals were digitised, then distributed as circuit-switched digital signals. Usually this was done in the telephone exchange, but commercial switchboards[7] began to operate digitally internally and so these could be connected directly to the digital network, through ISDN connections (64kbps or 2Mbps). There was some movement to placing concentrators in cabinets in villages, where the local phones were digitised and then connected to a local town's exchange via such a connection (all phones had a digital channel to the exchange). This allowed simpler cables (such as a single optical fibre) to the village, but was less complex than an exchange in the village.
  • In the 1990s, the Internet became more important and packet switching began to replace circuit switching for digital connections between exchanges and commercial sites. The domestic telephone was still an analogue connection, rarely ISDN, and anyone with home internet access used a modem over this.
  • By 2000 the analogue telephone was no longer as important as the digital traffic. Also IP protocols from the computer networking industry replaced the mix of digital protocols (ATM, Frame Relay) from the telephone industry. Analogue phones became something carried over an IP network, rather than digital traffic being carried by analogue modems. BT in the UK began to implement the 21CN, as a total reworking of their legacy network. Andy Dingley (talk) 13:27, 16 November 2017 (UTC)[reply]
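(A quick Python sketch of the frequency-division arithmetic above; the 4 kHz channel spacing is the figure used in the standard analogue FDM hierarchy – the 300 Hz–3.3 kHz voice band plus guard space – and the trunk bandwidths are illustrative:)

    # Each voice call occupies one 4 kHz slot, stacked up the cable's
    # usable spectrum by single-sideband modulation.
    CHANNEL_SPACING_HZ = 4_000

    for bandwidth_khz in (48, 12_000):  # a 12-channel "group"; a big coax trunk
        channels = bandwidth_khz * 1_000 // CHANNEL_SPACING_HZ
        print(f"{bandwidth_khz:6d} kHz of spectrum -> ~{channels} simultaneous calls")
    # Real systems carried somewhat fewer calls (pilot tones and gaps between
    # groups eat some spectrum), but this is the right order of magnitude.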
Thank you; this really helps. I don't much understand how radio works, but the idea of broadcasting at different frequencies I understand, so using a different frequency for each telephone conversation makes sense. Could you add some of this information to telephone line and/or telephony, since the first is small and the second mostly talks about digital? Nyttend backup (talk) 15:06, 16 November 2017 (UTC)[reply]
What he said about 100 circuits is the source for the old "all circuits are busy" message. ←Baseball Bugs What's up, Doc? carrots15:30, 16 November 2017 (UTC)[reply]
Very rarely. It was exchange equipment that ran out first, not cables.
Telephone exchanges are obviously complex, but for a long time and several generations of technology pre-1980 (and the introduction of stored program exchanges, i.e. single central control computers) they consisted of line circuits, junctors and a switching matrix between them. Line circuits were provided for each local loop (i.e. each customer phone). Obviously the amount of equipment per-customer was kept to an absolute minimum, and as much as possible was shared between several subscribers. Typically[8] a rack of subscribers' uniselectors was provided, each one handling 25 lines. Several sets were provided, so each subscriber might be connected to 5, or even 10 on a busy exchange. When a subscriber picked up their phone, the next free uniselector would switch to their line (and only then the dialling tone was turned on). So no more than 1 in 5 people could make a call at the same time - any more than that and you didn't get dial tone (and maybe did get a busy tone or message instead).
Exchanges are connected together by cables, and the switching circuit for these is called a junctor (Junctor is a useless article). Again, these are expensive so the equipment is shared and multiple sets are provided, but not enough to handle a call over every cable at once. Traffic planning and the Erlang were important topics for telephone network design. For a pair of exchanges (called "Royston" and "Vasey") where all of their traffic is between the two exchanges and they don't talk to people from outside, then enough junctors might be provided to meet the full capacity of that one cable. Usually though, enough equipment was provided to meet the "planned" capacity for a cable and the "planned" capacity for the exchange, and the equipment racks (the expensive and more flexible aspect) would be the one to run out first. Only in exceptional cases would all the traffic land on a single cable, such that it was the cable which maxed out.
One aspect of more recent and packet switched systems, rather than circuit switched, is that they become more efficient at load sharing, thus "equipment busy" becomes rarer. Also we demand more, and the hardware gets cheaper, so it's easier to meet this demand. Andy Dingley (talk) 17:27, 16 November 2017 (UTC)[reply]
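(On the traffic-planning point: the classic tool is the Erlang B formula, which gives the probability that a call arriving at a group of m circuits offered E erlangs of traffic finds them all busy. It is usually evaluated with the numerically stable recursion B(E,0) = 1, B(E,m) = E·B(E,m−1)/(m + E·B(E,m−1)); a Python sketch, with illustrative numbers:)

    def erlang_b(erlangs, circuits):
        """Blocking probability for `circuits` lines offered `erlangs` of traffic."""
        b = 1.0  # B(E, 0) = 1: with no circuits, every call is blocked
        for m in range(1, circuits + 1):
            b = erlangs * b / (m + erlangs * b)
        return b

    # e.g. how many junctors does 20 erlangs of offered traffic need
    # to keep blocking under 1%?
    for m in (20, 25, 30, 32):
        print(f"{m} circuits: {erlang_b(20, m):.2%} blocking")
    # roughly 30 circuits bring 20 erlangs under 1% blocking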
Thanks Andy, great stuff! I agree that junctor is a fairly poor article, and I'm interested to learn more about them. If anyone has any references on those please post them here, maybe we can use them to improve our article :) SemanticMantis (talk) 02:00, 17 November 2017 (UTC)[reply]
See also the phantom circuit, later reused in a patented method for supplying power, known as Power over Ethernet (PoE). --Hans Haase (有问题吗) 11:11, 17 November 2017 (UTC)[reply]
Many years ago I went to an "Open Day" at the local telephone exchange. There was a historical talk and we saw lots of metal rods with ratchets which moved up and down to make connections, making a clicking noise. I accepted an offer from the presenter to call my house - he didn't get through, but when I mentioned this later my mother said that the phone had been ringing intermittently all evening. 82.13.208.70 (talk) 16:36, 17 November 2017 (UTC)[reply]
See Strowger exchange Andy Dingley (talk) 17:23, 17 November 2017 (UTC)[reply]

What is a 'double domed appearance'?

"Double domed" cranium

Relating to an animal's (in this case, an elephant's) head. 109.64.135.116 (talk) 19:49, 16 November 2017 (UTC)[reply]

It looks like an Asian elephant looks. HenryFlower 19:54, 16 November 2017 (UTC)[reply]
A picture is worth several words →
2606:A000:4C0C:E200:C9A:4B44:2E28:1611 (talk) 05:48, 17 November 2017 (UTC)[reply]

Sun's helium flash

Judging by the article about helium flash, our Sun will apparently exhibit this when it starts fusing helium. Do we know how bright this flash will be? Will it affect life on Earth? Is this the part where the Sun engulfs Mercury and Venus? Also, I suppose there will be a process of dimming once the hydrogen supply is exhausted while the Sun is collapsing. Is this near-instantaneous or will it take minutes/days/millenia? 93.136.10.152 (talk) 20:40, 16 November 2017 (UTC)[reply]

You can read Sun#After_core_hydrogen_exhaustion. Ruslik_Zero 20:50, 16 November 2017 (UTC)[reply]
Thanks, didn't think to look there. So the compression of the core is more or less gradual as Sun reaches the end of the red giant branch, until the moment of the helium flash? 93.136.10.152 (talk) 21:15, 16 November 2017 (UTC)[reply]
There's also a carbon flash with three heliums kung-powing into carbon and so on in stars with enough mass (more than the Sun). If the star's massive enough it can reach 1 million sunpower with only tens of sun mass and build up central iron ash till it loses structural integrity (since stars can't run on heavy elements). The star collapses till 200 billion Fahrenheit, bounces off, and explodes with the light of billions of Suns (and up to about a trillion sunpower of neutrino radiation). When the center reaches the density of a supertanker in a pinhead it becomes extremely resistant to further collapse (but not invincible) since there's only so many neutrons that can fit in a space unless it can force a black hole (or possibly get crushed into smaller particles). Sagittarian Milky Way (talk) 00:01, 17 November 2017 (UTC)[reply]
Will it affect life on earth? No, because by that time all life on earth will have died off. 2601:646:8E01:7E0B:5917:3E80:D859:DF69 (talk) 06:43, 17 November 2017 (UTC)[reply]
Relevant quote from the article: In the case of normal low mass stars, the vast energy release causes much of the core to come out of degeneracy, allowing it to thermally expand (a processes requiring so much energy, it is roughly equal to the total energy released by the helium flash to begin with), and any left-over energy is absorbed into the star's upper layers. Thus the helium flash is mostly undetectable to observation, and is described solely by astrophysical models. "Flash" in this context is meant in the sense of "in a flash", since the helium fuses extremely rapidly, not in the sense of a visible flash of light. The Sun will engulf Mercury and Venus when it transitions to a red giant. A helium flash happens in stars that have already been red giants for a long time. As for what happens once the core's hydrogen is exhausted, Stellar evolution#Mid-sized stars seems to answer this. --47.138.163.207 (talk) 09:41, 20 November 2017 (UTC)[reply]

November 17

An odd ball [possible insect nest identification]

Odd ball

What is it?

It's not a wasp hive. It seems pretty solid when you break it open. It is heavy and hard. The one pictured is around 25cm high. They are all over the place in Hainan. Ants seem to like crawling around that one.

What is it?

Anna Frodesiak (talk) 06:09, 17 November 2017 (UTC)[reply]

Could it be a bird nest of some sort? Swallows build nests of mud, though usually on cliff faces or under building eaves. --Jayron32 11:57, 17 November 2017 (UTC)[reply]
The acrobat ant builds nests like this (looks similar to me). Alansplodge (talk) 12:24, 17 November 2017 (UTC)[reply]
Looks like an ant colony to me too. I was a bit surprised to not see any ants in the photo, but OP says ants are all around, so that's also pretty good evidence. Tropical termites can also build roughly similar nests, but I think they usually also build covered galleries. Further destructive sampling would probably clear this up rather quickly, at least to determine ant or non-ant. SemanticMantis (talk) 15:06, 17 November 2017 (UTC)[reply]
Just a note - it's not wise to mess with these things. Sometimes insects swarm out and sting you to death - or if you're rock climbing you fall to your death. 82.13.208.70 (talk) 15:54, 17 November 2017 (UTC)[reply]
I think that's a bit melodramatic, but better safe than sorry I suppose. For the version found in the USA, "acrobat ants are usually of minimal nuisance to people" University of Florida Entomology & Nematology. Alansplodge (talk) 21:01, 17 November 2017 (UTC)[reply]

Thank you, all. I'm still perplexed. There are never openings large enough for a bird. It is nearly solid inside. No ants swarm out when disturbed. The surface is made of mostly whole leaves glued on. There are ants visible on the thing and stem below, but not that many. There are no openings where any bug comes and goes, or none easily seen. They come in all different sizes, always enveloping branches quite symmetrically. An odd ball indeed. Anna Frodesiak (talk) 22:34, 17 November 2017 (UTC)[reply]

Ants, eh? That does sound possible. It's made of whole leaves so I can't see wasps carrying those. Ants could because they work as a team.
And careful about calling ants "just a subgroup of the wasps". They already have wing envy, and if they hear that they get very upset and start running around in different directions. :) Anna Frodesiak (talk) 05:08, 18 November 2017 (UTC)[reply]
Beware when they stop doing the conga and take off their tutti-frutti Carmen Miranda hats. ;) 2606:A000:4C0C:E200:C9A:4B44:2E28:1611 (talk) 05:57, 18 November 2017 (UTC)[reply]
Ants are generally considered party animals, and that is why the conga line is huge with them. (By "huge" I mean quite small because they're ants.) Anna Frodesiak (talk) 07:52, 18 November 2017 (UTC)[reply]
I would guess that this is a man-made (or woman-made) "ball" (or weight?). Birds or small animals would use other materials and better locations for a nest in a wood, and colony insects would not include complete leaves in their construction like those in the picture. Other interesting details are that this seems to be a young tree, that it is very straight, and the added info that it's in China, which could be a hint towards the ancient Chinese art of shaping plants (bonsai etc.) for some (later) purpose. --Kharon (talk) 06:23, 18 November 2017 (UTC)[reply]
Not man-made, no way, impossible. They're everywhere and people would not make those things. They're too busy farming. Anna Frodesiak (talk) 07:52, 18 November 2017 (UTC)[reply]
Maybe they plant something in there and farm them later. You know the rich Chinese cuisine, right? --Kharon (talk) 14:08, 18 November 2017 (UTC)[reply]
Have you google-imaged the topic "nests"? It looks sort of like a hornets' nest. ←Baseball Bugs What's up, Doc? carrots 14:51, 18 November 2017 (UTC)[reply]
Bugs, I actually looked at the source data on Anna's photo; saw the location was Hainan, China; google imaged "dauber aunt nest china" and posted the link in my bulleted response above, so people can just click there and use the related image link if not satisfied. μηδείς (talk) 17:17, 18 November 2017 (UTC)[reply]
Hi μηδείς. I'm stuck with Bing here in China. And yes, the pic is from Hainan. So, Bing shows dauber wasps, but I cannot find dauber ants at search engines or at Wikipedia. I'd like to find a species at Wikipedia then add it to the disambiguation page Dauber.
Anyhow, every single picture of a dauber nest at search engines shows them made of stuff like hummus but not with lots of whole leaves. Seeing that the leaves in my image are the same as the living ones on the plant, they could be just bent over and hummussed rather than the leaves gathered from the floor and carried. If the latter, then it must be ants because wasps cannot carry huge leaves.
Next time I'm in the forest, I'll get someone to cut one open and I'll take a picture of the inside. I'll tap on it first to ensure it's abandoned. Anna Frodesiak (talk) 23:46, 18 November 2017 (UTC)[reply]
Well, wasps will build cells reminiscent of honeybee combs, although maybe not as well-structured. Tapping may not be enough assurance, I would put it in a sealed bag with a rag soaked in insect killer--bug spray can also be toxic to humans; follow the instructions. You could also just go to a local university with a reasonable biology department, as they will know the genus, if not the species. μηδείς (talk) 00:02, 19 November 2017 (UTC)[reply]
μηδείς, thank you kindly for the suggestions. However, 1, I will not kill them, and 2, the university thing would be a dead end for a hundred hilarious reasons. :) But thank you. I'll take my chances with a knock-then-run-then-return technique. :) Anna Frodesiak (talk) 00:12, 19 November 2017 (UTC)[reply]
Hornets' nests come in a wide variety of styles, some of which resemble your mystery object (e.g.), but a cursory image search didn't find any that incorporated whole leaves; they tend to make their own "paper" covering from chewed fiber and saliva (the leaves on this one seem incidental). Note: don't mess with Chinese hornets! 2606:A000:4C0C:E200:C9A:4B44:2E28:1611 (talk) 03:47, 19 November 2017 (UTC)[reply]
Yes, this doesn't seem to be papery. Sort of dauby with leaves.
And I have encountered those giant wasps. They are absolutely huge! Anna Frodesiak (talk) 04:54, 20 November 2017 (UTC)[reply]

Evolution of anticoagulants

Why did some blood-sucking animals evolve anticoagulants if they suck in a manner where the blood doesn't have enough time to coagulate? E.g. mosquitos or leeches pierce the skin and suck directly, so the blood goes steadily into them, similar to injection needle which prevents coagulation. My guess is that anticoagulants were present before they evolved the necessary organs, allowing them to feed on spilled blood. Brandmeistertalk 16:30, 17 November 2017 (UTC)[reply]

I'm not sure I'm following your question. Does it help to know that ordinary saliva contains enzymes like amylase and lipase that start the pre-digestion of food? In other words, ordinary saliva is already pretty decent at breaking things up. Now, something like a vampire bat has even more advanced enzymes working for it (at least, according to our citation), but it was clearly a case where evolution took something that was already good at a certain activity and then enhanced it (to some extent, a form of exaptation). Matt Deres (talk) 18:26, 17 November 2017 (UTC)[reply]
As I understand, digestive enzymes and anticoagulants here are different stuff. Digestive enzymes may be needed to break down blood, but anticoagulants look redundant if the sucked blood goes straight for digestion anyway. Per clotting time, blood starts to coagulate in about 8 minutes, far more time than required for a mosquito to feed and fly away. Brandmeistertalk 18:45, 17 November 2017 (UTC)[reply]
As usual, Matt Deres has it with exaptation. As long as a preëxisting circumstance (digestive enzymes in the saliva) provides a little benefit, and producing stronger enzymes is possible (chemically and given the right mutations) with benefits that outweigh the costs, evolution will move in that (in this case anticoagulant) direction.
Or, to further address Brandmeister, other chemicals which have anticoagulant properties may become expressed in the saliva due to a change in the regulation of gene expression. (Look at the ubiquity of melanin and cholesterol-type chemicals and their various roles.) Eventually these will be classified as different classes of chemicals.
μηδείς (talk) 23:49, 17 November 2017 (UTC)[reply]
I am concerned that there may be some unexamined and potentially incorrect assumptions here about the clotting process. Geometry matters—clotting will proceed more quickly in the very long, narrow proboscis of a mosquito compared to the relatively much wider tubes used to measure clotting time in the lab. For instance, this presentation shows how long bleeding persists in a rat experiment where a needle is inserted and removed from the tail vein of the animal (we're interested in the control – "vehicle" – bar at the extreme left). Even with a direct puncture of a vein, bleeding from the much narrower opening stops in less than 90 seconds—not 8 minutes. The size of the needle wasn't obviously specified in that slide deck, but even a relatively fine 30-gauge needle still has outer and inner diameters of roughly 300 and 150 microns, respectively. That's several times the diameter of the mosquito proboscis; see the figures in this paper.
There's no need to invoke exaptation or other phenomena when we already know that blood clots faster in narrow tubes. TenOfAllTrades(talk) 01:31, 18 November 2017 (UTC)[reply]
That last comment makes no sense to me at all. Exaptation is the use of a pre-existing element--saliva enzymes--for a new purpose; anti-coagulant. What in the world does that have to do with probosces? Especially since even non-proboscised vampire bats have anticoagulants. I suspect you may have misunderstood me, but I did not even mention the feeding organ in my post. μηδείς (talk) 02:04, 18 November 2017 (UTC)[reply]
You're right; I completely misread what you were saying about exaptation.
That said, there's actually another somewhat subtler misunderstanding at work here, about how mosquito saliva inhibits coagulation. The potent anticoagulant agents in the saliva of mosquitoes (and many other blood-sucking creatures, for that matter) generally aren't enzymes at all; they're polypeptides that bind to regulatory and/or active sites and act as inhibitors of pro-clotting enzymes. That is to say, they don't inhibit clotting by digesting clots, they inhibit clotting by preventing clot formation in the first place. There's a bit of an overview in this PNAS snippet, which looks at anophelin (we need an article...), a thrombin inhibitor in Anopheles mosquitoes.
Numerous animal species, from insects (mosquitoes) to mammals (vampire bats), feed primarily or exclusively on fresh blood from their prey. These parasites produce some of the most potent antagonists of the blood clotting system known, which are critical for their hematophagous lifestyle. Many of these compounds are small polypeptides that inhibit the proteolytic enzymes of the clotting cascade, notably thrombin....
The overwhelming majority of proteinaceous inhibitors of proteolytic enzymes works by physically blocking access of the substrate to the active site....
I would certainly be intrigued if someone could locate some good references regarding the evolutionary roots of these sorts of polypeptide inhibitors. Given that the anophelins have no intrinsic enzymatic activity and don't share any conspicuous sequence similarity to any enzymes (digestive or otherwise, based on a quick BLAST) I don't find the hypothesis of digestive enzyme exaptation particularly compelling.... TenOfAllTrades(talk) 03:21, 18 November 2017 (UTC)[reply]
I agree, I was making no actual claim that it was, say, specifically amylase that mosquito saliva developed from, but some pre-existing substance. Small polypeptides can also easily evolve from large polypeptides with a frame-shift mutation or two that deactivates all but the part of a gene that was already producing a digestive enzyme, leaving just a small polypeptide that blocks coagulation. The step from necrophagy to sarcophagy to hemophagy seems not to have had to overcome too many barriers in the evolutionary landscape. You are right, we need sources, and this (vampyrism) is not a specific field I have studied or have books on. μηδείς (talk) 03:37, 18 November 2017 (UTC)[reply]
Again, though, there's no particular evidence (at least none provided here) that small(ish) polypeptides like anophelin originated with any sort of digestive enzyme, given their lack of similarity with even fragments of existing, known proteases. Yes, there's lots of ways to mutate the gene for a larger protein which will result in a much smaller product, but I'm still not clear on the fixation in this thread on the idea that a digestive enzyme in particular would be the likely starting point. TenOfAllTrades(talk) 21:59, 18 November 2017 (UTC)[reply]
Read this sentence again (emphasis added): "I was making no actual claim that it was, say, specifically amylase that mosquito saliva developed from, but some pre-existing substance." μηδείς (talk) 22:46, 18 November 2017 (UTC)[reply]
I read the whole comment, actually. I even think I understood it this time. The second sentence says (emphasis added): "Small polypeptides can also easily evolve from large polypeptides with a frame-shift mutation or two that deactivates all but the part of a gene that was already producing a digestive enzyme, leaving just a small polypeptide that blocks coagulation." If you – or anyone – don't want me to talk about how you're talking about digestive enzymes, stop talking about digestive enzymes. TenOfAllTrades(talk) 02:02, 19 November 2017 (UTC)[reply]
But you are totally dropping the context that I had just said I was using it, "say", as an example of "some pre-existing substance". Would I be justified in assuming that you are arguing that an entirely new polypeptide, that just happens to be expressed in the mosquito's saliva, appeared out of nowhere de novo with all the regulatory mechanisms in place to generate it and release it at the proper time? Of course not. So please retain the full context of what I write (notice I use paragraphs) and stop cherrypicking words when my full meaning is clear from my entire post. μηδείς (talk) 02:34, 19 November 2017 (UTC)[reply]
The third paragraph of Coagulation begins: “Coagulation begins almost instantly after an injury to the blood vessel has damaged the endothelium lining the vessel.” Assuming a bite damages the endothelium, that appears sufficient to answer OP’s question, but here are a few related facts and sources that may provide perspective. From leech: “An externally attached leech will detach and fall off on its own when it is satiated on blood, which may be anywhere from 20 minutes to two hours or more.” From Mosquito#Saliva: “Universally, hematophagous arthropod saliva contains at least one anti-clotting, one anti-platelet, and one vasodilatory substance.” The book Mosquito, by Andrew Spielman and Michael D’Antonio, pp.14,15 mentions that a mosquito will bite up to 20 times before finding a blood vessel and each time it will inject “a chemical that inhibits your body’s ability to stop any bleeding that might begin.” (Hematophagy#Mechanism_and_evolution does not appear to directly address the question.)--Wikimedes (talk) 21:39, 18 November 2017 (UTC)[reply]
The most recent free paper I found on anophelin is this, which classifies it as an atypical serine protease inhibitor. The paper doesn't try to guess which (if any) it came from. Two important things to note are that anophelin is fast-evolving even within Anopheles, creating new relevant sites in some versions absent from others, and that it is a largely disordered protein, exposing those sites readily, which means that there is less stabilizing selection because one amino acid doesn't have to stay the same to contact another. The result is that it is going to be very hard to tell which way this train went by looking at the track. I would not presume to say it is impossible, though. BTW the clotting enzyme (thrombin) which it inhibits is described as an atypical chymotrypsin/trypsin like enzyme, so you can say that the clotting it inhibits is an exapted digestive enzyme if you want. ;) Wnt (talk) 16:37, 20 November 2017 (UTC)[reply]

A question about g-force

How much g-force will a pilot who flies an airplane at 510 knots endure? 37.142.17.66 (talk) 18:09, 17 November 2017 (UTC)[reply]

If they fly straight and level, then zero.
They only feel g forces if they manoeuvre, usually by pulling vertically (relative to the airframe) upwards (the wings can generate more lift than any yaw or roll forces). So loops or tight turns. Andy Dingley (talk) 18:37, 17 November 2017 (UTC)[reply]
I'm particularly talking about United Airlines Flight 175. 37.142.17.66 (talk) 18:49, 17 November 2017 (UTC)[reply]
I don't recall any particularly hard manoeuvring on that day. Also these were airliners, which just aren't built to pull many g. Andy Dingley (talk) 20:06, 17 November 2017 (UTC)[reply]
I am confused. Do you mean how much G-force was generated by the abrupt stop (crash)? --Lgriot (talk) 20:08, 17 November 2017 (UTC)[reply]
No, before the crash. According to the National Transportation Safety Board, the pilot nosedived more than 15,000 feet in two and a half minutes. So how much g-force was on the pilot, if he really flew at 510 knots? 37.142.17.66 (talk) 20:44, 17 November 2017 (UTC)[reply]
That is not enough information to say. G-forces are caused by acceleration, i.e. how fast the velocity is changing. A descent of 15,000 feet in 2.5 minutes corresponds to a vertical speed component of 100 ft/s. The question is how rapidly they transitioned from level flight to 100 ft/s descent. If they took x seconds that would be an acceleration of −100/x ft/s², or approximately −3/x gees. If x is small, the G-force is large. If x is large, meaning a gradual transition into descent, the G-force is small. --69.159.60.147 (talk) 21:22, 17 November 2017 (UTC)[reply]
It depends on how quickly the pilot noses over, and that data isn't specified here. But first, some ballparks:
Let's do some math (mostly trig). 510 kts is 860 feet per second. In 2.5 minutes at that speed, the plane covers 129,000 feet and, per the above, descends 15,000 feet. It's obvious even there that 15 is only a small proportional component of 129; the trig works out to a descent angle of about 7 degrees.
This Boeing publication discusses pitch rates on takeoffs and shows a nominal flight case of ~3 degrees per second of pitch rotation to a maximum 15 degree pitch. That takes five seconds and is performed on virtually every single flight. Of course, that's at a lower speed than 500 kts.
On the other hand, we can check out how these planes usually descend. Per the FAA, a 767-200 flies a standard climb and descent rate of 3500 feet per minute, roughly half of the 6,000 feet per minute discussed here. Assuming standard descent happens at more or less cruising speed, that's a descent angle of about 4 degrees.
The main thing to note here is that the calculated 7-degree descent angle isn't a remarkable number. There's lots of wiggle room, and for that, we go to the g-acceleration calculator for curved paths.
  • If the plane rotates at 3 degrees per second at 260 m/s (~500 kts), that's a curve of radius 5000 m (260*30 / (pi/2)) and yields -1.4 g (so -0.4 g observed). Well, that's substantial, though not debilitating. But that's making the whole pitch transition in two seconds; that's really fast.
  • 1 degree per second still accomplishes starting the descent in 7 seconds and drops the effect to -0.45 g (so 0.55 g observed). That's close to what a plane might reasonably experience on a standard flight in abnormal circumstances.
  • Go to 0.5 degrees per second (still just 15 seconds to transition into the dive) and the force from the dive is just -0.2 g (so 0.8 g observed). Totally reasonable.
So, absent additional evidence, there's no cause to expect that a controlled descent of 15000 feet in 150 seconds at cruising speed would impose g loads with any significant immediate consequence. — Lomn 21:43, 17 November 2017 (UTC)[reply]
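For readers who want to rerun the arithmetic above, here is a minimal Python sketch (the 260 m/s speed is from the thread; the pitch rates are the same assumed values as in the bullets, and since a steady pitch rate ω traces a curve of radius r = v/ω, the centripetal acceleration is simply v·ω):

import math

G = 9.81    # m/s^2, standard gravity
v = 260.0   # m/s, roughly 510 knots

# Pushing the nose over at a steady pitch rate omega follows a curve of
# radius r = v/omega, so the centripetal acceleration is a = v^2/r = v*omega.
for pitch_rate_deg in (3.0, 1.0, 0.5):
    omega = math.radians(pitch_rate_deg)   # rad/s
    g_load = v * omega / G                 # pushover acceleration, in gees
    print(f"{pitch_rate_deg} deg/s: {g_load:.2f} g pushover, "
          f"{1 - g_load:+.2f} g felt by the pilot")

Running it reproduces, to rounding, the -0.4 g, 0.55 g and 0.8 g figures quoted in the bullets above.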

[e/c]

Also, it depends on which axis (Gx, Gy, Gz) the G-force is applied before G-LOC occurs (unconsciousness due to reduced blood flow, and hence oxygen, to the brain). — (Speculation) It is likely on a commercial airliner that the plane would experience a catastrophic failure (like a wing falling off) before the pilot experiences unconsciousness. 2606:A000:4C0C:E200:C9A:4B44:2E28:1611 (talk) 21:09, 17 November 2017 (UTC)[reply]
Direction as well as axis. A nose-over won't lead to G-LOC in the way that a pull-up will. Andy Dingley (talk) 21:55, 17 November 2017 (UTC)[reply]
It depends on the pilot and what he is trying to do! Very good pilots manage to do 1G barrel rolls in sailplanes, and airline pilots aim to give their passengers a gentle, pleasant journey by default, so they probably try to keep the G-forces at 1 ± 0.2 at all times, except at take-off, landing and emergencies of course. --Kharon (talk) 06:40, 18 November 2017 (UTC)[reply]
The whole point of a barrel roll (rather than an aileron roll) is that it's a low-g manoeuvre. Although neither puts many g on the pilot in the central fuselage, the forces needed to aileron roll a large aircraft will easily overload the wing spars. This is why the Boeing 707 [9] and the Avro Vulcan were barrel rolled, not aileron rolled. Andy Dingley (talk) 20:43, 18 November 2017 (UTC)[reply]
My whole point in mentioning a 1G barrel roll was just to show that aviation knows some tricks to circumvent the "simple" physical logic applied here in prior answers, which used (2-dimensional) direction-change-per-second formulas to calculate a (g-)force. --Kharon (talk) 00:43, 19 November 2017 (UTC)[reply]
Is that a Very Good Pilot? I know nothing about this, but I picture some turbulence or mechanical issue putting that glass of water all over the top of the cockpit, dripping down into any electronics etc., all while he's trying to recover from a barrel roll under already pear-shaped conditions. Is that as bad a thing as I imagine, and if so, I wonder if that is really such a bright idea. I mean, he could have gotten more hits drawing a cartoon penis. ;) Wnt (talk) 16:26, 20 November 2017 (UTC)[reply]
  • Just to be pedantic, the pilot experiences a force of 1G downward when the plane is not accelerating. A steady descent of 15,000 feet in 150 seconds involves far less acceleration than free fall. In free fall, the pilot would experience 0G. -Arch dude (talk) 07:06, 20 November 2017 (UTC)[reply]
  • G-forces on the pilot are caused by forces on the aircraft, plus gravity forces on the pilot. Oversimplifying, the aircraft experiences four forces: thrust, drag, lift, and gravity. On a commercial aircraft the strongest of these is lift (supplied by the wings), which is why a plane banks in order to turn. On a commercial aircraft, the wings will break off before the pilot has (other) troubles related to G-forces. -Arch dude (talk) 07:21, 20 November 2017 (UTC)[reply]
So to answer the OP's question directly, unless he was doing something unnecessary, the pilot would have experienced a force slightly less than 1G during that descent? --Lgriot (talk) 16:39, 20 November 2017 (UTC)[reply]
Yes. But so "slightly" that the word "endure" is wholly inappropriate.
Even for a suicidal terrorist trying to impact a building, there is no need to fly heavy-handedly. If they do so, there's also a risk of a large aircraft responding badly to that. Andy Dingley (talk) 16:44, 20 November 2017 (UTC)[reply]

November 18

Back EMF in DC Motors

Most materials that I have read about the dc motors mostly point out the back emf generated in in the armature windings. Does this mean that no back emf is generated in the field coils? — Preceding unsigned comment added by Adenola87 (talkcontribs) 07:45, 18 November 2017 (UTC)[reply]

Back emf, also known as Counter-electromotive force, in a motor armature winding is voltage produced by relative motion between the armature and the magnetic field produced by the motor's field coils. (This voltage opposes the applied voltage, or allows the motor to act as a generator). Stationary field coils that carry a constant current produce a constant field so there is no back emf of this kind on them, only Ohm's law and ohmic heating apply to the DC current and voltage here. A transient change in field coil current may be detected when the motor speed or load changes. This is due to transformer coupling to the changing armature current and is not a back emf effect. Blooteuth (talk) 16:03, 18 November 2017 (UTC)[reply]
Incidentally, that article isn't particularly good. See Brushed DC electric motor for a more comprehensive treatment of the subject. Tevildo (talk) 19:25, 18 November 2017 (UTC)[reply]
When the DC motor is running, there is a back emf in the armature windings, since the current through them is interrupted by the commutator. There are usually small-value capacitors fitted across the commutator brush terminals to reduce or suppress this back emf, which causes excessive sparking, brush wear and radio interference if not suppressed. The current in the field windings is not interrupted while the motor is running, so there's no back emf in them, but they do experience a back emf when the motor is switched off, since this is a 100% change of current. At all times when the motor is running, there is, however, electromagnetic induction into the field windings caused by the magnetic field of the nearby rotating armature. "Back emf" is reserved for the self-induced voltage in a coil when the current through it is changed abruptly - in almost all practical applications it is used when referring to the complete removal of current through the coil. "Induced emf" or simply "induction" is used to refer to the voltage induced in a coil by an external magnetic field whose strength is changing, as developed in the field windings already discussed. This magnetic field may be from a simple magnet moving nearby, or another nearby coil which is in motion with direct current flowing in it, or a stationary coil with alternating current flowing in it. This latter AC application is the principle on which the transformer is based. Akld guy (talk) 01:46, 19 November 2017 (UTC)[reply]
The article Counter-electromotive force explains that the expression can apply equally validly either to voltage caused by relative motion of the armature in the surrounding magnetic field (my response) or to self-induced voltage that opposes a change in current through an inductance (Akid guy's response). @Tevildo thank you for linking to Brushed DC electric motor which is a good article. Blooteuth (talk) 14:43, 19 November 2017 (UTC)[reply]
Nevertheless, I would caution against using "back emf" for anything except a self-induced voltage in a single coil when the current through it is changed abruptly. It's a term that's almost exclusively used to refer to the undesirable very high voltage developed across a coil when the current is turned off, such as the several hundred volts that can be developed across a relay winding at the moment of switch-off when the relay was operating at a much lower voltage, such as 12 volts. The term would never be used to describe the voltage induced in the secondary of a transformer, for example. Its very name ("back") implies that the voltage is in the reverse polarity to that of the original operating voltage, and this is indeed the case in a single coil back emf situation. In the armature-to-field scenario, the induced voltage is reverse-polarity and aiding-polarity as the armature rotates toward and away from the field, so "back emf" doesn't seem to be a correct term. Akld guy (talk) 21:30, 19 November 2017 (UTC)[reply]
@Akld guy Let's be aware that the two usages of "back emf" are equally valid because they describe the same phenomenon of a magnetic field and a conductor in relative motion. The difference is which one is stationary. In my response it is the field that is stationary. In your response it is the conductor (coil) that can be stationary while its own-produced field collapses around it. Since the OP asks about motors (presumably not employed as generators) the armature Counter-electromotive force is indeed a nett "back emf" that opposes the driving voltage, except at start-up or when the motor is stalled. With a commutated armature each winding moves through a virtually constant field strength during the small rotation angle when it is in circuit. Blooteuth (talk) 13:06, 20 November 2017 (UTC)[reply]
Please don't refer to me as Akid guy. If you look at my page you'll see why I'm named AKLD_GUY. I'm not going to argue with you as I've said all I wish to say and we will not agree. Akld guy (talk) 20:24, 20 November 2017 (UTC)[reply]
I apologize for misreading your name and have fixed my mistake. You have an issue of disagreement with the article Counter-electromotive force, which is the result of many editors' work. You should try engaging constructively with them at Talk:Counter-electromotive force. If obduracy prevents understanding that a back emf arises from an armature's movement through the magnetic field, the obdurate one may have difficulty explaining why the current drawn by a DC motor decreases with increasing r.p.m. Blooteuth (talk) 10:46, 21 November 2017 (UTC)[reply]
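To put a number on that last point, here is a minimal sketch of the standard steady-state model of a brushed DC motor, V = IR + kω, where kω is the armature back emf; all component values below are invented for illustration, not taken from the discussion:

import math

V = 12.0   # supply voltage, volts (assumed)
R = 0.5    # armature resistance, ohms (assumed)
k = 0.01   # back-emf constant, V*s/rad (assumed)

for rpm in (0, 2000, 6000, 10000):
    omega = rpm * 2 * math.pi / 60    # shaft speed, rad/s
    back_emf = k * omega              # volts opposing the supply
    current = (V - back_emf) / R      # steady-state armature current, amps
    print(f"{rpm:>6} rpm: back emf {back_emf:5.2f} V, current {current:5.2f} A")

The stalled motor (0 rpm, no back emf) draws the most current, and the current falls as the back emf rises with speed, which is the behaviour referred to above.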

Easier to convert to natural gas: diesel or otto?

What motor is easier to convert to natural gas? --Hofhof (talk) 11:35, 18 November 2017 (UTC)[reply]

Otto engines are very easy to adapt for running on gas. Converted vehicles usually even have a simple switch for gasoline or gas, which you can safely flip while you drive. Diesels cannot be converted completely, because diesel engines work by compression ignition (the fuel self-ignites near maximum pressure), but there are systems where a gas tank is added to lower the use of diesel. In theory, that is. I never saw any diesel-gas mixture engines besides the military ones, which basically make the engine capable of running on anything from crude oil to gas.
I actually owned a car with an Otto engine and both a gasoline and a gas tank. It worked great and saved me some thousands in fuel costs during the time I owned it. The only flaw was that the installed 90 l gas tank took away half of the trunk space, but I never needed all the trunk space anyway. --Kharon (talk) 13:50, 18 November 2017 (UTC)[reply]
Your car didn't run on natural gas. Andy Dingley (talk) 20:33, 18 November 2017 (UTC)[reply]
Potayto, potahto. It's a mixture of carbon fuel gases in both cases. It doesn't make a difference for an Otto engine: see Natural_gas_vehicle#Differences between LNG and CNG fuels. --Kharon (talk) 00:55, 19 November 2017 (UTC)[reply]
But still, your car didn't run on natural gas. Nor would your car have been able to store a useful quantity of natural gas (as it can't be liquefied by simply pressurising it, it would require substantial refrigeration). Andy Dingley (talk) 01:01, 19 November 2017 (UTC)[reply]
Yes, and there are about 5,000 (genetically different!) potato varieties worldwide. You need a different tank for LPG than for CNG, so what? The question was about combustion engines, not about gas storage tanks. --Kharon (talk) 01:12, 19 November 2017 (UTC)[reply]
" You need a different tank for LPG than for CNG, so what? "
The "so what" is that there's no way to make a useful CNG tank that will fit in a car. Andy Dingley (talk) 11:26, 19 November 2017 (UTC)[reply]
Really? This seems eminently practical to me ;-) Alansplodge (talk) 19:39, 19 November 2017 (UTC)[reply]
One could just skim wp for "CNG" and find a whole list of vehicles. Even a tiny car can be CNG fitted.--TMCk (talk) 19:53, 19 November 2017 (UTC)[reply]
The Fiat Panda is a good example of how difficult it is to make this work. The TwinAir engine was designed for bifuel CNG from the outset, but the provision of car-sized gas cylinders is still such a problem (energy / volume is only something like a sixth of petrol's). The Panda has to triple its fuel volume capacity to manage this, which involved raising the suspension (the LPG dual-fuel version doesn't need this) to make space. There's also a serious price premium for the Natural Power - something like 20%. Andy Dingley (talk) 21:50, 19 November 2017 (UTC)[reply]
Are you seriously still claiming "...there's no way to make a useful CNG tank that will fit in a car..."???--TMCk (talk) 22:28, 19 November 2017 (UTC)[reply]
... the difficulty is in fitting the passengers and luggage around the tank. Dbfirs 22:49, 19 November 2017 (UTC)[reply]
Difficult but not impossible as claimed. Gosh, batteries are taking up more space and they managed to make it work just as they managed to make CNG work.--TMCk (talk) 22:55, 19 November 2017 (UTC)[reply]
And no, no trunk space is used in the case of the Panda.--TMCk (talk) 22:59, 19 November 2017 (UTC)[reply]
Not for a conversion, no. And this is a question about conversions.
The Panda tanks needed to squeeze in tankage that's twice the size of the existing petrol tank (and the petrol tank is still needed), which needed major rearrangement and a suspension lift beneath. They also cost a couple of thousand, just for the tanks, and they give the car a likely lifetime of only 10 years (current pressure vessel regulations require their replacement then, and neither inspection nor economic replacement seem practical for a 10-year-old car).
This new car design is right on the edge of what's practical - and conversions aren't. Andy Dingley (talk) 23:19, 19 November 2017 (UTC)[reply]
  • Obvious link: octane rating. Natural gas (i.e. methane) has a RON of 120, making it even more resistant to engine knocking than high-grade gasoline, and correspondingly worse for diesel combustion (where you need the injected mass to auto-ignite as fast as possible).
Now, of course there are inconveniences to running on gaseous fuels - fuel tank size can be a problem as hinted above, for instance. But those are not really a diesel vs. gasoline thing. TigraanClick here to contact me 14:26, 18 November 2017 (UTC)[reply]
Courtesy link: Otto engine -- Are there currently any vehicles using this? 2606:A000:4C0C:E200:C9A:4B44:2E28:1611 (talk) 19:20, 18 November 2017 (UTC)[reply]
It is the normal 4-stroke petrol engine! -- Q Chris (talk) 10:53, 20 November 2017 (UTC)[reply]
Lol, that's not clear from the article -- it only shows old-timey machines. 2606:A000:4C0C:E200:E958:86E3:541F:E7F1 (talk) 04:39, 21 November 2017 (UTC)[reply]
Just to clarify, the Otto engine was the first to use the Otto cycle. Tevildo (talk) 07:01, 21 November 2017 (UTC)[reply]
Thanks. I took the liberty of adding that to the article (here).[dynamic IP]:2606:A000:4C0C:E200:E958:86E3:541F:E7F1 (talk) 17:12, 21 November 2017 (UTC)[reply]
  • In practical terms, neither. Conversions are possible, but they're somewhere between "Well I wouldn't start from here if I were you" and "There's not much left of the Ship of Theseus by the time you've finished". Since natural gas engines require either gas pipelines to supply them or heavy pressurised storage tanks, these are large engines, and so they're built from industrial or truck diesel engines - but they're built from scratch as gas engines, and although the cylinder blocks might start out the same, there's little in common when they're finished. Most of the larger ones (for electricity generation, rather than mechanical output) are gas turbines on the Brayton cycle, not piston engines. The largest are CCGT, where the gas turbine exhaust has its heat recovered by a steam boiler and turbine (something like half of the UK's electricity is coming from these at present [10]).
What you may be thinking of instead are the far more common autogas conversions of spark ignition petrol Otto engines to run on LPG (propane), rather than Natural Gas (methane). This is easier to store (it liquefies at a manageable low pressure), so the dense liquid stores in a similar volume to a petrol tank, often replacing the spare wheel storage space. It's also very easy to convert the engine for its use. A typical 1990s-onward installation uses an extra set of fuel injectors, one per cylinder drilled into the inlet manifold before the inlet valves. The control of these is then taken from the existing (and highly developed) manufacturer's own petrol injection system, by using the petrol injector timing and multiplying that by an adjustable factor, set during conversion. The complex mapping of ignition timing and fuel volume from all of the engine parameters has already been done by the car maker - the gas injection just runs at a fixed multiple of this. The engine is started and warmed through on petrol, then switches automatically. Such conversions are popular in Europe (my Volvo 940 has such a conversion) where petrol is heavily taxed but Autogas LPG is far cheaper (half the volume price). Filling stations are available at the "one per small town" and every motorway or large road service station level. Andy Dingley (talk) 20:31, 18 November 2017 (UTC)[reply]
Natural gas vehicle seems to state that vehicles capable of running on either natural gas or gasoline are not unheard of. Here in Southern California it seems just about all the public transit buses were modified to run on CNG in the 2000s. Granted, I don't know what's involved in that. --47.138.163.207 (talk) 09:29, 20 November 2017 (UTC)[reply]
Several things are needed. Mostly a damned good reason for doing so. In Europe, and a few US cities, that reason is the reduction of diesel particulate pollution from idling buses. That justifies the costs for bus replacement. Buses are large, expensive anyway, and can justify the cost of the on-board tankage. A further problem is that of refilling them - this is slow (several hours) and requires dedicated filling bays (not just one shared pump) with protection for the equipment and cheap overnight electricity. That's far easier to do in a well organised bus garage than in a domestic flat with an ad hoc parking space outside. Andy Dingley (talk) 10:41, 20 November 2017 (UTC)[reply]
... fitted to the same type of vehicle as diesel engines, but the combustion is much more like that in a petrol (gasoline) engine. Dbfirs 22:24, 19 November 2017 (UTC)[reply]

November 19

Smallest black hole to eat Earth

Inspired by these articles, what is the smallest mass a black hole placed at the center of the Earth would need in order to destroy the Earth, or at least wreak some noticeable havoc, before it perishes from Hawking radiation? 93.136.4.186 (talk) 02:25, 19 November 2017 (UTC)[reply]

I don't know the numbers, but any black hole which would suck matter in faster than it evaporates would eventually destroy the whole Earth if placed at the center thereof. 2601:646:8E01:7E0B:404:F3D3:C557:159A (talk) 11:03, 19 November 2017 (UTC)[reply]
If you look at your links, you will see that the Earth will be heated into a plasma, before being swallowed. Is vapourising the Earth counted as destroying it? Graeme Bartlett (talk) 00:34, 20 November 2017 (UTC)[reply]
Stellar black hole#Properties says: "There are no known processes that can produce black holes with mass less than a few times the mass of the Sun. [...] As of April 2008, XTE J1650-500 was reported by NASA[6] and others to be the smallest-mass black hole currently known to science, with a mass 3.8 solar masses and a diameter of only 15 miles (24 kilometers). However, this claim was subsequently retracted. The more likely mass is 5–10 solar masses." --Kharon (talk) 01:13, 20 November 2017 (UTC)[reply]
Hmmm, what if you shot two black holes at each other at particle-accelerator speeds? One might speculate the black hole merger should be absolute, yet the angular momentum this imposes would make the resulting hole a naked singularity. Is there any theory that suggests they break back apart, perhaps into some kind of complex black hole shrapnel, so the pieces keep their event horizons? Wnt (talk) 01:21, 20 November 2017 (UTC)[reply]
If you believe in Hawking's theories of "micro black holes", read Micro black hole#Minimum mass of a black hole. In that case, also please try to answer the question of what process could produce and apply enough force to make one, when even whole stars much, much bigger than our Sun (at least ~20 times its mass, a large multiple of the resulting smallest black hole) have only a small theoretical chance of producing a black hole. Looks to me like an ant wondering whether it could push a planet onto another orbit by jumping up and down in one place. --Kharon (talk) 01:40, 20 November 2017 (UTC)[reply]
I was originally thinking of collisions among multiple black holes that may pile up near the galactic center. But regarding your reply... that article suggests that large accelerators might produce black holes. Now if what is needed is 22 micrograms in a Planck radius, the first question that comes to my mind regards the Compton radius of ridiculously relativistic particles with that combined mass, which presumably one collides from opposite directions. The particles are surely foreshortened to a very narrow distance, but do they remain as fuzzy laterally as when they are at rest? I would think they'd have to, in which case they usually pass through each other without forming a singularity. (Of course, you also need a ridiculously accurate means of alignment...) Wnt (talk) 04:04, 20 November 2017 (UTC)[reply]
Odd... Compton radius redirects to classical electron radius, not Compton wavelength. I'll have to look later and see if there's a plausible reason for that... Wnt (talk) 16:20, 20 November 2017 (UTC)[reply]
OP here, @Graeme Bartlett, vaporizing Earth counts, I'm aware that complete digestion is impossible. Basically, I'm assuming that there is a certain mass m, where black holes with smaller mass will tend to lose mass to Hawking radiation faster than they can accrete and thus evaporate, while black holes with a higher mass will tend to accrete material fast enough to grow. @Kharon this is just a "what if" scenario, I'm aware that there's p=0 probability of this happening. 93.139.55.105 (talk) 04:16, 20 November 2017 (UTC)[reply]
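For a rough sense of the competition the OP describes, here is a sketch comparing the standard Hawking luminosity with a crude spherical (Bondi-type) accretion estimate; the rock density and sound speed, and indeed the whole accretion model, are loudly simplified assumptions, so the crossover mass it suggests is an illustration, not an answer:

import math

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI units

def hawking_power(M):
    # standard Hawking luminosity of a black hole of mass M (kg), in watts
    return hbar * c**6 / (15360 * math.pi * G**2 * M**2)

def accretion_power(M, rho=5500.0, cs=5000.0):
    # crude Bondi-style rate, 4*pi*G^2*M^2*rho/cs^3, times c^2
    # (rho ~ mean Earth density, cs ~ sound speed in rock; both assumed)
    return 4 * math.pi * G**2 * M**2 * rho / cs**3 * c**2

for M in (1e6, 1e9, 1e10, 1e11, 1e12):   # kg
    print(f"M = {M:.0e} kg: Hawking {hawking_power(M):.1e} W, "
          f"accretion {accretion_power(M):.1e} W")

With these made-up inputs the two powers cross somewhere around 10^10 to 10^11 kg; below that the hole out-radiates what it can swallow, which is the flavour of the critical-mass question being asked.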

Yeast as food

This might be a stupid question, but why isn't yeast used as a dietary protein source to replace meat?

I've been reading about world protein shortages and about how farming animals for meat won't be sustainable in the long term to feed the growing population. Various alternative protein sources like farming insects and plant based proteins are discussed. There doesn't seem to be any attention paid to yeast though.

Yeast is easy to grow and is a complete protein. It requires no light so can be grown pretty much anywhere. Is there a reason that it has been overlooked as a dietary protein source? How much space would be required to grow enough yeast to feed one person?

Thanks Temic3300 (talk) 08:15, 19 November 2017 (UTC)[reply]

No, this is not a stupid question. And yes, yeast is used for food, see yeast extract and vegemite. There's also some yeast in leavened bread and unfiltered beer. Dr Dima (talk) 09:12, 19 November 2017 (UTC)[reply]
and we also have an article on Nutritional yeast. Dr Dima (talk) 09:18, 19 November 2017 (UTC)[reply]
As well as the actual yeast in beer (at least in Real ale, as Dr Dima alludes), the alcohol produced by the yeast also has nutritional value, per Alcoholic drink#Food energy, hence it sometimes being called "liquid bread". {The poster formerly known as 87.81.230.195} 94.0.37.45 (talk) 10:37, 19 November 2017 (UTC)[reply]
Have a look at the article on Quorn which is a meat substitute derived from a type of fungus. The advantage of the particular fusarium fungus is that it produces hyphae with a similar structure to muscle fibres, and can therefore be processed to give a more meat-like texture than would be possible with yeast. Wymspen (talk) 14:36, 19 November 2017 (UTC)[reply]
See also Single-cell protein. Various yeasts are mentioned, but they're just some of several suggestions. Of course some suggest other alternatives like Entomophagy or plant proteins for various reasons (including the issues with keeping a sterile culture, and yield). See also [11]. Of course you also have to convince people to eat the thing; that's one of the reasons why, in the short term at least, most of these are ending up as animal feed. Nil Einne (talk) 15:13, 19 November 2017 (UTC)[reply]

Feynman Lectures. Exercises. Exercise 14-21 JPG archive

In the previous discussion: if we take into account the law of conservation of momentum, then there is no paradox about the energy needed for acceleration from 0 to 1 m/s versus from 100 to 101 m/s.

Working in the Sun's reference frame, let $v_{start}$ be the probe's launch speed, $v_E \approx 30$ km/s the Earth's orbital speed, and $v_{esc,E} = \sqrt{2GM_E/R_E} \approx 11.2$ km/s the escape speed from the Earth's surface. In the Earth's frame, conservation of energy while the probe climbs out of the Earth's gravity gives

$\frac{1}{2}(v_{start} - v_E)^2 - \frac{GM_E}{R_E} = \frac{1}{2}u_\infty^2$

so the residual speed relative to the Earth is $u_\infty = \sqrt{(v_{start} - v_E)^2 - v_{esc,E}^2}$. Because the Earth's mass is enormous, momentum conservation lets us treat the Earth's velocity as unchanged, and transforming back to the Sun's frame:

$v_{final} = v_E + \sqrt{(v_{start} - v_E)^2 - v_{esc,E}^2}$
The last equation is a law relating the probe's start and final velocities ($v_{start}$ and $v_{final}$) in the Sun reference frame. E.g. for $v_{final} = 42$ km/s (which is the escape velocity from the Sun at 1 AU distance) it gives $v_{start} \approx 46$ km/s, which coincides with 16.3 km/sec in the Earth reference frame.

Why can't we solve a single equation like this: $\frac{1}{2}v_{start}^2 - \frac{GM_E}{R_E} - \frac{GM_S}{R_{ES}} = \frac{1}{2}v^2$ (per unit mass)?
Username160611000000 (talk) 20:11, 19 November 2017 (UTC)[reply]

Isn't this just the three-body problem? Rmhermen (talk) 00:48, 20 November 2017 (UTC)[reply]
No, because there is no need to know a trajectory. We know the start position and the final position. To solve the problem of escape velocity from the solar system we use a 2-step method (the 1st step is overcoming the Earth's gravitation, the 2nd step is overcoming the Sun's gravitation from the Earth's orbit), shown in the article. I wonder whether it is possible to solve the problem directly. Username160611000000 (talk) 05:08, 20 November 2017 (UTC)[reply]
I wouldn't usually bother trying to answer such a malformed question, but I think the OP is trying to calculate the escape velocity from the Earth's surface to interstellar space. I don't know why it is done in two steps, perhaps https://en.wikipedia.org/wiki/Sphere_of_influence_(astrodynamics) explains in enough detail. Greglocock (talk) 06:01, 20 November 2017 (UTC)[reply]
No, the article Sphere_of_influence is not about my question.Username160611000000 (talk) 06:32, 20 November 2017 (UTC)[reply]
As you like. I've just done a MOOC in Space Mission Design and Operations and I can assure you that SOI was fundamental to the three stages of planning the delta V needed for an interplanetary mission. Greglocock (talk) 09:19, 20 November 2017 (UTC)[reply]
As I see from the article, the sphere of influence (SOI) is an approximate imaginary surface; Feynman said nothing like that. And again, I do not care about space dynamics. The exercise is to calculate the initial speed that makes the probe guaranteed to move off to infinity with a residual speed v. I simplify the exercise to zero residual speed (v = 0).
All we know and all we should use is lectures 1-14. The 2-step method was explained by ToE here and here. The 2-step method is this: when the probe starts from the Earth with speed 11 km/sec, it then overcomes the Earth's gravity and is flying in the solar system with speed 30 km/sec in the Sun reference frame and 0 km/sec in the Earth reference frame. To escape it should have 42 km/sec. So the excess speed = 42 - 30 = 12 km/sec and the excess energy = 0.5m(12 km/sec)².
It was not clear why the excess energy isn't calculated like 0.5m(42 km/sec)² - 0.5m(30 km/sec)². But when I took the conservation of momentum into account and got the formula above, this confirmed that at an initial speed of 46 km/sec the final speed would be 42 km/sec.
Username160611000000 (talk) 12:26, 20 November 2017 (UTC)[reply]
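A quick numerical check of the two-step bookkeeping above (a sketch, using the usual rounded values 11.2, 29.8 and 42.1 km/s for Earth escape speed, Earth's orbital speed, and solar escape speed at 1 AU):

import math

v_esc_earth = 11.2   # km/s, escape speed from Earth's surface
v_orbit     = 29.8   # km/s, Earth's orbital speed around the Sun
v_esc_sun   = 42.1   # km/s, escape speed from the Sun at 1 AU

# Step 1: residual speed (relative to Earth) needed once Earth's gravity is beaten
excess = v_esc_sun - v_orbit                       # ~12.3 km/s
# Step 2: kinetic energies add as squares of speeds, so in the Earth frame
v_launch = math.sqrt(v_esc_earth**2 + excess**2)   # ~16.6 km/s
print(f"launch speed relative to Earth: {v_launch:.1f} km/s")

# The same numbers through the Sun-frame relation reconstructed above:
v_start = v_orbit + v_launch
v_final = v_orbit + math.sqrt((v_start - v_orbit)**2 - v_esc_earth**2)
print(f"v_start = {v_start:.1f} km/s  ->  v_final = {v_final:.1f} km/s")

The last line recovers v_final = 42.1 km/s, consistent with the 46 and 42 km/sec figures in the post above (which used slightly rounder inputs).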
Feynman didn't mention SOI because he was only considering two bodies. Duh. You want three bodies (probe, Earth, Sun), so SOI becomes an important concept. Out. Greglocock (talk) 18:44, 20 November 2017 (UTC)[reply]
Feynman mentioned all the planets in lecture 9, sec. 7 and showed a way to calculate positions at any moment with any desired accuracy. But exercise 14.21 is for lectures 13, 14 on energy. Feynman never proposed exercises that go beyond the material of the lectures. In this exercise it is not asked to find all coordinates of the probe. I will use SOI and the numerical methods when an exercise asks for them. Username160611000000 (talk) 19:02, 20 November 2017 (UTC)[reply]

November 20

How hard can ice be?

A plastic bag containing a 3 to 4 kg chunk of ice fell off a table and cracked a terracotta floor tile. The ice showed almost no damage. Was the tile substandard, or can ice really be harder than terracotta? The ice temperature was estimated at about -10 to -15 Celsius. It was frozen in a walk-in freezer at a meat packing plant set to -20C. Roger (Dodger67) (talk) 17:14, 20 November 2017 (UTC)[reply]

There's much more at work here than merely "hardness"; among other things, there is also the shape of the two pieces and the manner of impact. If you hold up a piece of aluminum foil, you can deform it with a cotton ball or feather, but you wouldn't normally say that either of them were "harder than" aluminum. Consider also that there probably were small cracks in the ice, but these essentially can "heal" if they don't immediately result in fracture. Matt Deres (talk) 17:38, 20 November 2017 (UTC)[reply]
The temptation is to answer this question by referencing the famous Mohs scale of mineral hardness, where ice is a 2 and terracotta is usually closer to a 5. But that's not the kind of "hardness" we're looking for here. It just means that terracotta will easily scratch a piece of ice, while ice will not easily scratch terracotta, which I think we knew.
We're really looking for a measure of material strength. Probably either Toughness or Fracture toughness.
I don't have an exact answer for you, but check out this mystifying chart.
If I'm reading it correctly, terracotta, being a porous non-industrial ceramic, should be tougher than ice, but not a lot tougher.
So why did your tile break but not the ice? Probably luck. Ice could certainly be heavy enough to smash tile. The mystery is really only why the tile broke first. That probably comes down to how they landed. Whether there was a pressure point or a weak spot, etc. (Also, are you sure the ice was undamaged? If a chip came out of the ice at high speed it could absorb a lot of the energy.)
I don't think anyone can give you an exact mathematical answer without a bunch of measurements and stuff. Sorry. ApLundell (talk) 17:39, 20 November 2017 (UTC)[reply]
Per ApLundell, there are different measures of hardness; it's a vague term, and asking how "hard" something is depends on what you mean by hardness. None of these, however, strictly applies to the scenario being described. One can break a steel container with nothing but the force of air pressure, and yet air is not "hard" by any definition. The relevant thing here is not how "hard" the ice is, per se, but how much force it strikes the tiles with, over what area, and over what period of time. Higher forces concentrated in smaller areas over shorter periods of time are more likely to exceed the forces necessary to break the tiles, regardless of what provides that force. --Jayron32 17:45, 20 November 2017 (UTC)[reply]
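As a back-of-envelope illustration of the force/area/time point, here is a sketch of the impact numbers for the chunk in the question; the table height, contact time and contact area are pure assumptions, which is exactly why no firm answer is possible:

import math

m = 3.5    # kg, mass of the ice chunk (the question says 3-4 kg)
h = 0.75   # m, assumed table height
g = 9.81   # m/s^2

v = math.sqrt(2 * g * h)   # impact speed, ~3.8 m/s

# Average force from the impulse-momentum theorem, F = m*v/dt, and the
# contact pressure over an assumed 1 cm^2 contact patch.
for dt in (0.010, 0.005, 0.001):   # assumed contact times, seconds
    F = m * v / dt                 # newtons
    p = F / 1e-4 / 1e6             # MPa over the assumed 1 cm^2
    print(f"dt = {dt*1000:4.1f} ms: F ~ {F:7.0f} N, pressure ~ {p:6.1f} MPa")

Shortening the assumed contact time by a factor of ten raises the average force by the same factor, which is why a hard chunk landing on a corner can crack a tile while a soft bag of the same mass might not.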
There is probably a bunch of measurements and stuff for ice at http://www.tms.org/pubs/journals/JOM/9902/Schulson-9902.html and refs (in particular #39), but I am a bit short on time right now to look at them. TigraanClick here to contact me 18:10, 20 November 2017 (UTC)[reply]
  • You can break a floor tile with a rubber mallet, if it's not perfectly bedded.
Tiles are brittle. They are hard and strong, but they will not bend. If you support the tile on two sides with a wide gap in the mortar beneath, then any load on the top can cause it to bend, and thus break. A soft impact (ice or mallet) might not give a sharp point, but if the impact is concentrated over an area smaller than the mortar gap, then even if the ice is crushed, the tile can still break. Andy Dingley (talk) 18:20, 20 November 2017 (UTC)[reply]
You may also be interested in Pykrete, a mixture of ice and sawdust which is bullet-proof and was seriously considered for the construction of a giant aircraft carrier called Project Habakkuk. Alansplodge (talk) 18:31, 20 November 2017 (UTC)[reply]

Mammoth Size

Why were woolly mammoths estimated to be smaller than other mammoths and elephants if they lived in cold climates? Wouldn't they survive more easily in the cold if they were larger? אדנין (talk) 19:40, 20 November 2017 (UTC)[reply]

The premise is questionable. Woolly mammoth states that they were about the same size as the African elephant, the largest elephant species alive today. When it comes to other mammoths, Mammoth says that "most species of mammoth were only about as large as a modern Asian elephant", so that means that woolly mammoths were larger than most mammoths. - Lindert (talk) 19:49, 20 November 2017 (UTC)[reply]
According to this, the Woolly Mammoth was about in the middle of the size range of Elephantidae. This notes "Mammoths and modern elephants overlap significantly in body mass." This also has a size chart that puts the Woolly Mammoth right at the middle in terms of average size. --Jayron32 20:08, 20 November 2017 (UTC)[reply]
But still, the Columbian mammoth and straight-tusked elephant lived in warmer climates, right? And they were still bigger than the woolly mammoth. אדנין (talk) 06:57, 21 November 2017 (UTC)[reply]
Quality and quantity of available food might have been a significant factor. Note that the last surviving woolly mammoths, on Wrangel Island, had become dwarfed: Island dwarfing is a phenomenon attributed partly to lower availability of food resources in a geographically restricted habitat. Assuming that woolly mammoths were woolly because they generally lived in colder climates than other mammoths, their available food resources are likely to have been poorer, too. {The poster formerly known as 87.81.230.195} 94.0.37.45 (talk) 08:43, 21 November 2017 (UTC)[reply]
Big size has advantages and disadvantages. Big animals are less threatened by predators, but they usually also reproduce much slower. One theory in the scientific debate about the Quaternary extinction event is that many species became extinct because of human hunting. If that is true, size did not matter much, in the sense of big elephant or small elephant, because they were the delicious meat burgers that could not hide anyway. --Kharon (talk) 10:01, 21 November 2017 (UTC)[reply]
Despite their mammoth hide. ←Baseball Bugs What's up, Doc? carrots 13:20, 21 November 2017 (UTC)[reply]
The presumption that larger animals live in colder climates seems an odd belief to have, given the billions of counterexamples where it isn't true. --Jayron32 16:33, 21 November 2017 (UTC)[reply]

November 21

Does physics have axioms?

Does physics have some kind of axioms? Should we treat at least the perception of basic units as a given? For example: movement, change, or distance? Or what should we call the basic conceptual units? --B8-tome (talk) 11:24, 21 November 2017 (UTC)[reply]

Yes, most science has axioms. Much of the more theoretical end of research is about either removing such axioms, or at least clearly defining them in their minimal form. An experimental scientist might see axioms as a failure of science thus far, as some form of deus ex machina. A mathematical theoretician, though, sees them as the basis for a formalised and axiomatic system, an approach that has been powerful in mathematics. For physics, see Hilbert's sixth problem and Hilbert space. Andy Dingley (talk) 11:38, 21 November 2017 (UTC)[reply]
Causality (physics) is often considered axiomatic in physics. See Axiom of Causality, which is a bit weak for a Wikipedia article; sometimes this is called the "Causality principle". --Jayron32 12:14, 21 November 2017 (UTC)[reply]
  • Most of physics is grounded in mathematical theories, which entail the acceptance of the axioms of the underlying mathematical theory.
About experimental sciences (rather than strictly physics), Asimov has a good quote: I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I'll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be. (The end of it is merely a wordy explanation of the concept of a Bayesian probability, but the beginning is an assumption that knowledge can be gained by observation, which is not trivial philosophically speaking - see problem of induction.) The foundation of the experimental method is the belief that any well-designed experiment ought to be repeatable at least in some statistical sense. For instance, if I throw a coin many times, each result might be random, but the average result over a large number should depend only on my throwing technique, whether the coin is rigged, etc., and not on extra variables (such as the time of the day or the location on Earth where I perform the experiment) which ought not to matter. If it does turn out that there is some variation, we will ascribe it to extraneous variables that we failed to take into consideration rather than to a variation of the laws of physics with space/time.
Accepting the above, a lot of experimental stuff can be tricky to classify. For instance, consider the Huygens-Fresnel principle. It is "simple" (in an Occam's razor sense) and matches experimental data, but we do not really have a clue about why it is so. Does it count as an axiom, or is it a logical conclusion of the experimental evidence plus the "axioms of experimental method"? TigraanClick here to contact me 13:15, 21 November 2017 (UTC)[reply]
Well, if you really want to get metaphysical, anything beyond simple solipsism requires us to accept the evidence of our own senses and intellect as axiomatic. There's no way to derive, from logic or outside evidence, that we can trust our own senses and intellect; we have to work under the assumption that we can trust them up to a point. --Jayron32 13:20, 21 November 2017 (UTC)[reply]
Well, yeah, pretty much any knowledge-seeking activity will rely on assumptions that the external world exists, that basic logic rules work, etc. But assuming that the experimental method "works" is different. The modern scientist (à la Bacon, Galileo, Newton etc.), when confronted with a phenomenon they do not understand, assumes that there is some reason why it happens, one with explanatory power (i.e. the reason does not just fit the current data; it also tells you something about other possible experiments or about later repetitions of that experiment).
Consider radioactive decay, which seems to follow certain statistics (a Poisson process). We are not sure why this is so, and it may well be that we will never know of any explanation further up the chain of causes and consequences; yet we adopt the belief that any radioactive decay phenomenon follows those laws, not just the ones we observed. Maybe the immediately-preceding root cause is that some deity is playing dice for each atom at each instant; but we still assume that the dice are thrown the same way when we are and when we are not looking (rather than the deity stacking the results when experiments are made in a lab). Or to take another example: nowadays, we have some reasonable clue as to how thunder strikes; a couple of millennia back, "Thor made thunder follow yet-to-be-known laws" was a scientific assertion, whereas "thunder strikes when and where Thor pleases" was not. TigraanClick here to contact me 13:59, 21 November 2017 (UTC)[reply]
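To make the "dice are thrown the same way" assumption concrete, here is a small illustrative simulation (entirely a sketch; the decay rate and interval are arbitrary choices): drawing exponential waiting times between decays yields per-interval counts whose mean and variance both approach rate times interval, the Poisson signature, no matter when the counting is done.

import random

def decay_counts(rate, interval, n_intervals, seed=0):
    """Model decay as a Poisson process: exponential waiting times
    between events, counting how many fall in each interval."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_intervals):
        t, n = 0.0, 0
        while True:
            t += rng.expovariate(rate)
            if t > interval:
                break
            n += 1
        counts.append(n)
    return counts

counts = decay_counts(rate=5.0, interval=1.0, n_intervals=10_000)
mean = sum(counts) / len(counts)
variance = sum((c - mean) ** 2 for c in counts) / len(counts)
print(mean, variance)  # both should come out near rate * interval = 5.0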

Using a transistor rated for 45 V in a 60 V circuit

According to my thinking, I should be able to use a BC337 transistor rated for a maximum of 45 V across the collector and emitter in a circuit which uses 60 V to power LED filaments, because, when the transistor is off (not conducting), the LEDs (there are about 18 in series, I suppose) should drop some of the voltage themselves.

I don't yet have my LED filaments, so I did an experiment with four regular LEDs in series with the transistor and applied 8 V. With the transistor off (not conducting), I measured the voltage across the transistor to be 2.2 V and the voltage across the LEDs to be 2.4 V, even though the voltage across the transistor AND the LEDs together was 8 V and 2.2 + 2.4 != 8. I was advised (on an electronics forum) that this discrepancy is because the DMM itself is passing current and affecting the measurement. I was also told that I should just use a transistor rated for 80 V, but I think this was a lazy response that doesn't consider the voltage being dropped by the LEDs. --185.216.48.85 (talk) 16:18, 21 November 2017 (UTC)[reply]

  • A DMM or "valve voltmeter" shouldn't pass significant current; the function of its high-impedance input is to reduce this to a level that won't load a circuit like this. However, when your transistor is switched off, the circuit's own impedance rises to a level where it becomes comparable to that of the meter, and your meter will (probably, depending on the meter) become a significant load again, as the forum advised. A quick worked example of that loading effect is sketched below.
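Here is that sketch; every resistance in it is an assumption chosen for illustration, not a measurement of the actual circuit:

# All values are illustrative guesses, not measurements.
V_SUPPLY = 8.0      # volts, as in the four-LED experiment above
R_METER  = 10e6     # typical DMM input impedance (assumed 10 Mohm)
R_LEDS   = 15e6     # effective resistance of the LED string at sub-uA currents (guess)
R_CE_OFF = 40e6     # off-state collector-emitter leakage resistance (guess)

def parallel(a, b):
    return a * b / (a + b)

# Meter across the transistor: it sits in parallel with R_CE_OFF.
v_across_transistor = V_SUPPLY * parallel(R_METER, R_CE_OFF) / (R_LEDS + parallel(R_METER, R_CE_OFF))
# Meter across the LEDs: it sits in parallel with R_LEDS instead.
v_across_leds = V_SUPPLY * parallel(R_METER, R_LEDS) / (R_CE_OFF + parallel(R_METER, R_LEDS))

print(v_across_transistor, v_across_leds, v_across_transistor + v_across_leds)
# ~2.78 V and ~1.04 V: two readings taken from two differently loaded
# circuits, so there is no reason for them to sum to the 8 V supply.

The exact numbers depend on the real leakage and on the meter, but the shape of the effect is the same as the 2.2 V + 2.4 V != 8 V measured above.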
As to the breakdown voltage: the VCEO limit for a BC337 depends on the current, but it also has a sharp cutoff at around 45 V. The transistor is assumed to fail above this, whether any current is flowing or not. The idea that the LEDs will drop voltage is only true if current is flowing through them. In a zero-current situation like this, the transistor voltage will still float high, toward the supply rail voltage (and the transistor fails). Of course, it might work; it might even work for a while, then fail early. It's impossible to know without serious testing, and you are working past the limits of the datasheet, so any failure would be an "I told you so". I'd probably look at using an MPSA42 rather than a BC337. Andy Dingley (talk) 16:47, 21 November 2017 (UTC)[reply]
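A back-of-envelope version of that warning (the per-LED drop at leakage currents is an assumed figure; real parts vary, and the nominal forward voltage does not apply when almost no current flows):

V_SUPPLY = 60.0   # volts, from the question
V_CEO    = 45.0   # BC337 collector-emitter rating, volts
N_LEDS   = 18     # series LED filaments, per the question

# Assumed per-LED voltage drop at mere leakage currents.
for v_led_at_leakage in (0.0, 0.5, 1.0):
    v_ce = V_SUPPLY - N_LEDS * v_led_at_leakage
    print(v_led_at_leakage, v_ce, v_ce > V_CEO)
# Only the most generous assumption (1 V per LED at ~zero current) keeps
# V_CE under 45 V, which is not a margin to rely on.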

Pomegranate pith

From an evolutionary point of view, do the inedibility of pomegranate flesh (pith) and its relatively thick rind serve to protect it from harmful birds, in the same way that poisonous berries do? Does it also mean that, because of this, the pomegranate relies more on pollination than on seed dispersal? 212.180.235.46 (talk) 16:30, 21 November 2017 (UTC)[reply]

This seems to be a good start at answering some of your questions. --Jayron32 16:32, 21 November 2017 (UTC)[reply]