Wikipedia:Reference desk/Science
Welcome to the Science section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
March 21
Timetravel
If the past of a point is in any direction away from it, doesn't that mean that timetravel into the future is limited to one second at a time? Since you can hardly get closer to the point than 0 meters, and the local rate of time is one second at a time. To me it just doesn't make sense to talk of negative radial displacement from the point. For those who don't understand what I'm talking about: one year ago from this moment is one lightyear away in any particular direction, plus any time since that moment. The past travels away from this spacetime coordinate at a rate equal to the speed of light. If you wish to reach the past, you must overtake it by exceeding the speed of light. If you wish to remain in that moment in the past, you must travel in any particular direction at exactly the speed of light. Then you will always remain in the present moment. It follows that you can travel to any coordinate within the past, but you cannot exceed one second into the future, as you cannot travel slower than being stationary. Plasmic Physics (talk) 00:46, 21 March 2013 (UTC)
- Either you're speaking of time in a way I have never heard, or it's a corrupted version of light cone. Someguy1221 (talk) 00:52, 21 March 2013 (UTC)
- Yeah - I can't decipher what you're trying to say - but whatever it is, the answer is "No!" - no backwards time-travel, no matter what speed you or your destination moves or over what distance at whatever speed. Just "No!". SteveBaker (talk) 00:54, 21 March 2013 (UTC)
- For one thing "one second" is an arbitrary man-made unit of time, and has no direct correlation with "spacetime", etc. ~:74.60.29.141 (talk) 01:12, 21 March 2013 (UTC)
- One second at a time means one second per second, or zero dilation. However, perhaps I have it back to front: the future is away from the point. Plasmic Physics (talk) 01:15, 21 March 2013 (UTC)
- One way to think about it is philosophically: there is only one past, there are an infinite number of possible futures and the present doesn't exist. [That last bit requires some 'splaining] ~:74.60.29.141 (talk) 01:36, 21 March 2013 (UTC)
- You can think of time as a physical dimension we are moving through. When we are still with respect to our reference frame in all three spatial axes, we are moving in the direction of time at the speed of light. You travel through space by changing the direction of your velocity through spacetime. A particle traveling at the speed of light through space has zero displacement in time, necessarily. In this model, which reproduces the equation for time dilation just fine, the speed of light is not simply a cosmic speed limit but also a minimum. You are always traveling at the speed of light, and moving through space merely changes the direction of travel. That is the closest explanation of time dilation I have ever seen to what you wrote to start this section (the explanation I just gave came from Brian Greene). In this model, backwards time travel is anything in a forbidden hemisphere of hypothetical velocities (behind you). And speedy forwards time travel requires changing your speed through spacetime, which is also forbidden. As in most of these models, the restrictions on time travel either do not arise from the model, or are assumed. This is precisely why time travel is still such a topic of discussion amongst physicists - there is no accepted theory of physics that would outright forbid time travel, but there are many good reasons to think it should be impossible. Someguy1221 (talk) 01:41, 21 March 2013 (UTC)
- I think you're talking about taking Δτ² = Δt² - Δx² (where τ is proper time and t is coordinate time) and rewriting it as Δt² = Δτ² + Δx², which shows that if you draw a graph of coordinate position versus proper time, the elapsed coordinate time is the ordinary Euclidean length of the worldline. However, in that picture, extending the line downwards takes you backwards in proper time while still going forward in coordinate time. That appears to make even less sense than ordinary time travel; at any rate it's not like jumping into a H.G. Wells style time machine. -- BenRG (talk) 02:12, 21 March 2013 (UTC)
- kind of like Event_horizon#Particle_horizon_of_the_observable_universe? Gzuckier (talk) 17:16, 21 March 2013 (UTC)
- I don't think so. The particle horizon is a clearly defined concept in cosmology, and isn't related to time travel. -- BenRG (talk) 18:41, 21 March 2013 (UTC)
- (edit conflict)
- If you're stationary at a speed u = 0 relative to point A, then you're travelling to A's present according to you.
- If you're travelling at a speed 0 < u < c relative to A, then you're traveling to A's future according to you.
- If you're travelling at a speed u = c relative to A, then you're stationary in A's instant according to you.
- If you're travelling at a speed u > c relative to A, then you're travelling to A's past according to you.
- If these are true, then one year since this timespace coordinate according to you, is one radial lightyear away, if you travel in a linear path at a particular speed 0 < u < c relative to that coordinate.
- And, then this timespace coordinate according to you, is any radial distance away, if you travel in a linear path at c relative to that coordinate.
- And then, one year before this timespace coordinate according to you, is any radial distance away, if you travel in a linear path at a particular speed u > c relative to that coordinate. Plasmic Physics (talk) 01:53, 21 March 2013 (UTC)
- The idea that speeds greater than c take you back in time is wrong. The misconception may derive from the fact that the time dilation factor is 1 at v=0 and 0 at v=c, so it looks like it might go negative for v>c. In fact, though, it goes imaginary (the square root of a negative number). Furthermore, since every number has two square roots, you can just as well say the time dilation factor is −1 when v=0. That doesn't make time travel possible.
- Other than that, nothing that you wrote above makes any sense to me. More distant things in the universe look older because we see them indirectly, via light that is right here on Earth, in our telescopes, and has been in transit for a long time. This is true even in a Newtonian world if light doesn't go infinitely fast. It has nothing to do with time travel. -- BenRG (talk) 02:12, 21 March 2013 (UTC)
- I'm sure the assumption that v>c => backwards time travel comes from the idea that for an object traveling at superluminal velocity, there is a reference frame in which causality seems to be going backwards (a gun sucking a bullet out of the air, for instance). IIRC my college physics. Someguy1221 (talk) 03:10, 21 March 2013 (UTC)
- I'm pretty sure both misconceptions exist. Among people who haven't taken college physics there seem to be a lot who think that your wristwatch will start running backwards when you exceed c, with no Lorentz transforms or two-way signaling protocols involved. That has got to be a simple extrapolation of "time stops at the speed of light" (together with the misconception that the slowdown only affects wristwatches, not your thought processes). Among people who have taken college physics there are a lot who think that you can send signals into the past with tachyons, though the truth is that the standard model of causality simply breaks down if there are tachyons, and can't be said to make any prediction at all. -- BenRG (talk) 18:41, 21 March 2013 (UTC)
- I took a cosmology paper, and the lecturer never discussed faster than light travel. Plasmic Physics (talk) 21:51, 21 March 2013 (UTC)
- Does that imply that there is an imaginary dimension of time? Otherwise, what are the implications of imaginary dilation factors? Plasmic Physics (talk) 02:26, 21 March 2013 (UTC)
- There is nothing to indicate that imaginary time dilation factors are anything more than a mathematical artifact. Someguy1221 (talk) 03:10, 21 March 2013 (UTC)
- Travelling faster than the speed of light does not mean you are going back in time. It just means you are going somewhere else, and the light from the place you came from will reach you some time after you arrive. If you consider that to be the definition of time travel, then fine. This would then mean that you are simultaneously travelling forward in time, as well as backward, in relation to the object you are flying towards and the object you are coming from, respectively. (Explanation: if you travel 10,000 times the speed of light towards an object you can see from Earth, and that object is 1,000 light years away, you will not get to that object's location 9,000 years before it arrived there. In fact, it won't be there any more, having moved on 1,000 years ago - this will seem to you like you have gone forward in time, and a thousand years later, you will get light from the present-day Earth.) This is my understanding of it, anyway. KägeTorä - (影虎) (TALK) 11:07, 21 March 2013 (UTC)
See here for how you can exploit faster than light travel to travel back in time. Count Iblis (talk) 12:48, 21 March 2013 (UTC)
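The following is a minimal numeric sketch of BenRG's point above that the time-dilation factor goes imaginary, not negative, beyond the speed of light; it simply evaluates γ = 1/√(1 − v²/c²) with complex arithmetic, and is an illustration of the formula rather than a statement about what faster-than-light travel would physically mean.

```python
# Time-dilation factor gamma(v) = 1 / sqrt(1 - v^2/c^2), evaluated with
# complex arithmetic so the v > c case shows up as an imaginary number
# rather than an error.
import cmath

C = 299_792_458.0  # speed of light, m/s

def gamma(v: float) -> complex:
    """Time-dilation factor for speed v in m/s, via complex square root."""
    return 1 / cmath.sqrt(1 - (v / C) ** 2)

# Fractions of c to sample (skipping exactly 1.0, where gamma diverges).
for frac in (0.0, 0.5, 0.9, 0.99, 2.0):
    print(f"v = {frac:4.2f} c  ->  gamma = {gamma(frac * C):.4f}")

# gamma is 1 at rest, grows without bound as v approaches c, and is purely
# imaginary for v > c (v = 2c gives about -0.577j) - which is the point made
# above: the formula does not turn negative and "run time backwards".
```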
Age of the universe
Many of our current articles claim that the universe is around 13.73 billion years old. Three years ago, there was a claim (reported in National Geographic) that this should be 13.75, in December last year this was updated to 13.77, and now the Planck space telescope's observations indicate that the figure should be 13.81 or 13.82 billion. How reliable was the 13.73 billion figure, and was it based on other evidence that casts doubt on the Planck space telescope data? Dbfirs 12:36, 21 March 2013 (UTC)
- Our article on Age of the universe lists the evidence for the estimated age and lists some of the assumptions made. --Guy Macon (talk) 14:10, 21 March 2013 (UTC)
- The current preferred estimate from the WMAP team is 13.772±0.059 Gyr at 68% confidence, which (doubling the error bars) means 13.65 – 13.89 Gyr at 95% confidence. So all of these small changes to the central point are well within the uncertainty that's existed all along. A while back (when the error bars were larger) I changed the Wikipedia articles to say "13.5 to 14 billion years" in an attempt to discourage bad reporting, but it looks like it got changed back. -- BenRG (talk) 16:35, 21 March 2013 (UTC)
- Thanks. I expect there will be new papers published soon, but, as Ben points out, the new estimate is less than one standard deviation above the 13.77 figure quoted (and even closer to the new combined figure), so I agree that we should continue to report that figure for now. I've changed a few of the old 13.73 figures in other articles because that figure is clearly many years out of date. There are lots more to be changed if anyone wants a boring task. Dbfirs 22:45, 21 March 2013 (UTC)
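A quick arithmetic check of the error bars quoted above, assuming the stated ±0.059 Gyr is a Gaussian one-sigma (68%) half-width, so that doubling it gives the approximate 95% range:

```python
# Sanity check of the age-of-the-universe error bars quoted in this thread,
# treating the stated uncertainty as a Gaussian one-sigma (68%) interval.
central = 13.772   # Gyr, WMAP figure quoted above
sigma = 0.059      # Gyr, 68%-confidence half-width

for k, label in ((1, "68% (1 sigma)"), (2, "~95% (2 sigma)")):
    print(f"{label}: {central - k * sigma:.3f} - {central + k * sigma:.3f} Gyr")

# The 2-sigma range comes out as roughly 13.654 - 13.890 Gyr, matching the
# "13.65 - 13.89 Gyr at 95% confidence" figure above, and the Planck value
# of ~13.8 Gyr sits comfortably inside it.
```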
C/2013_A1 collision effects
I know the chances of C/2013_A1 hitting Mars are well below 1% now, but I will ask my question anyway. Everything I have found online about the impact is blah-blah-blah crater size. Is there any good speculation on the long-term effects? Specifically, that seems like a whole lot of water. Would it change the atmospheric composition in the long run? Or the pressure? Estimates for its size vary wildly, but we're still talking cubic miles of ice. That seems like a lot. — Preceding unsigned comment added by Tdjewell (talk • contribs) 12:39, 21 March 2013 (UTC)
- I haven't seen any references about what you're asking. However, consider that according to Atmosphere of Mars, the Martian atmosphere has a mass of about 25 teratonnes. A cubic kilometer of ice would have a mass of 1 teratonne. If that all made it into the atmosphere, then yes, it would be a huge amount of water - it would raise the concentration of water to about 4% by mass. However, I don't know how much would be ejected, buried and frozen, or lost over time. A proper analysis probably hasn't been done because it is considered so unlikely, and there is a lot to take into consideration. I did check, and our Terraforming of Mars article doesn't mention anything about directing a comet into Mars to change conditions - it seemed like the kind of thing someone may have considered. 38.111.64.107 (talk) 15:49, 21 March 2013 (UTC)
- I detect the hand of the Brennan-monster in this affair.... --Trovatore (talk) 17:23, 21 March 2013 (UTC)
- One gigatonne. - ¡Ouch! (hurt me / more pain) 10:08, 25 March 2013 (UTC)
- Thanks for the correction - I know I looked it over twice since I was surprised that it would make that much of a contribution. A cubic kilometer is huge, but a planet's atmosphere is on a whole different scale. That means a cubic kilometer only contributes .004% which sounds a lot more reasonable to me. Still, it is hard to say what effect that would have on the planet in the long run. 38.111.64.107 (talk) 15:57, 26 March 2013 (UTC)
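To make the corrected figures above concrete, here is the back-of-the-envelope arithmetic: a cubic kilometre of water ice is roughly one gigatonne, which is about 0.004% of the roughly 25-teratonne Martian atmosphere. How much of the comet's water would actually end up in the atmosphere is, of course, unknown.

```python
# Back-of-the-envelope: mass of 1 km^3 of water ice vs. the Martian atmosphere.
ICE_DENSITY = 917.0          # kg/m^3, approximate density of water ice
MARS_ATMOSPHERE = 2.5e16     # kg, ~25 teratonnes (per Atmosphere of Mars)

ice_mass = ICE_DENSITY * 1e9  # one cubic kilometre is 1e9 m^3

print(f"1 km^3 of ice ~ {ice_mass:.2e} kg (~{ice_mass / 1e12:.2f} Gt)")
print(f"fraction of Mars's atmosphere: {100 * ice_mass / MARS_ATMOSPHERE:.4f} %")

# About 0.92 Gt of ice, i.e. roughly 0.004% of the atmosphere's mass,
# consistent with the corrected figures in the thread.
```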
new space program
I want to set up my own space program and launch rockets and such. Obviously it would take a lot of work, money, and finding the right people to actually get anywhere interesting, such as the moon, but would it be possible to send a first test rocket at least a little way into space, and what would I need to sort out to start doing so?
Kitutal (talk) 16:09, 21 March 2013 (UTC)
- Among other things, you'll have to obtain clearance from your national airspace authority (the Federal Aviation Administration for the US, the European Aviation Safety Agency for the EU, etc.). This page includes a brief overview of the FAA's role in small rocket launches. You may also be interested in the Civilian Space eXploration Team. — Lomn 16:20, 21 March 2013 (UTC)
- (ec) It would take a group of dedicated amateurs, tens of thousands of dollars and flight clearance from your national government. The GoFast rocket reached 100 miles in 2004.[1] 75.41.109.190 (talk) 16:21, 21 March 2013 (UTC)
- Tens of thousands? No, try billions. SpaceX has had 1 billion dollars to work with over 10 years, and they haven't gotten anywhere near the Moon. --140.180.249.152 (talk) 18:48, 21 March 2013 (UTC)
- No, thousands is correct. Follow the links Lomn and I provided. The OP only asked about getting to space - not all the way to the moon. Rmhermen (talk) 20:57, 21 March 2013 (UTC)
- Since you'd basically be trying to replicate SpaceX, you might find that article informative. Looie496 (talk) 16:54, 21 March 2013 (UTC)
- I suppose it will be cheaper than previous programs, since a lot of what has been done was research, which could be re-used, but you'll still need a huge amount of money. Besides that, you'll need the approval of some government, any government. Uganda may be a good choice, since they already have a (rather pathetic) space program and they are on the equator. Being a quite corrupt country should be no problem for you, since you already have billions to throw away, don't you? If not, I don't see any way of gathering the funds for a project without any economic use, like reaching the moon, although a private satellite launching company could be a viable commercial enterprise. See List of private spaceflight companies too. OsmanRF34 (talk) 20:08, 21 March 2013 (UTC)
- The most frequently-claimed economical use in reaching the moon is to mine Helium-3 - which is thought to be common there, while it's extremely rare here on Earth. Helium-3 is the dream-fuel for fusion reactors - and if even relatively small amounts could be mined on the moon and shipped back to earth, it would be a phenomenal source of revenue. There are other possible economic gains...solar energy plants at the poles where you get 24/7 sunlight and no atmospheric attenuation. Telescopes on the 'dark side' where no human-created light or radio signals ever reach. Mining ice from deep craters, which your polar solar power site splits into hydrogen and oxygen - which you can sell as rocket fuel at your Lagrange-point refuelling station. How about tourism? I'm sure there are other things. SteveBaker (talk) 20:18, 21 March 2013 (UTC)
- This doesn't look quite realistic right now, although it's nice to just imagine how things could be in the far future. But from a present-day perspective: fusion reactors won't necessarily ever work; there would be electricity for a population of 0; there is no sure source of water; you can send a probe instead of creating a moon station; and tickets would be much more expensive than low-cost airlines - indeed, as expensive as an airplane ... OsmanRF34 (talk)
- Rather than reinventing the wheel and trying to send a rocket into space, perhaps a better goal is something like Inspiration Mars, which aims to use commercial space technology to send humans to Mars by 2018, or Mars One, which aims to set up a Mars colony by 2023. Both of these projects build upon the proven technology of companies like SpaceX and Paragon Space Corporation to do what nobody has ever done before. --140.180.249.152 (talk) 03:46, 22 March 2013 (UTC)
Calanus
Is calanus a shellfish and is it a scavenger? — Preceding unsigned comment added by 208.38.232.40 (talk) 16:56, 21 March 2013 (UTC)
- Calanus are copepods, closely related to shrimp, so I suppose you could consider them shellfish, but they are very small, only a fraction of an inch long. I don't think creatures that small are ordinarily thought of as shellfish. They feed on plankton both living and non-living, so they are partly scavengers and partly predators. Looie496 (talk) 18:26, 21 March 2013 (UTC)
Laser light polarization in laser cutting
I have a rather nifty laser cutter - and I thought I understood all about it...until I read about a gizmo that turns the linearly polarized light from the laser into circular polarization - supposedly to improve its ability to cut stuff.
The way the laser cutter works is that the light from the CO2 laser is bounced off of three mirrors, then through a focussing lens and into the material you want to cut. (There is a good diagram HERE). For clarity, the mirrors are numbered 1, 2 and 3, with the number 1 mirror being the first in the path of the laser, then two, then three. So the beam is turned through three right-angles by these mirrors.
The #1 mirror is stationary. When the machine cuts in the "Y" direction, the #2 and #3 mirrors move together towards or away from the laser. When it cuts in the "X" direction, the #2 mirror is stationary and #3 moves to the left or right along the X axis. The focussing lens is fixed with respect to the #3 mirror.
The claim here is that with the linearly polarized light from the laser tube, you get a noticeable change in the amount of energy you get onto the target depending on whether the system is cutting in the X or Y direction.
I find that rather hard to believe - but there is a paper here that tries to explain it - and at least one manufacturer (eg here) has tricks involving two laser sources that claim to avoid the problem.
I'm having a hard time understanding that paper - can someone explain in layman's terms why the MOTION of the mirrors can affect the amount of laser energy delivered to the target if the beam is linearly rather than circularly polarized? It seems entirely counter-intuitive to me.
SteveBaker (talk) 20:41, 21 March 2013 (UTC)
- Not my field at all, but seeing as nobody else has answered, after a quick scan of your citations, I think you can visualise it this way: Visualise an XY table drawing an image with a medium weight pencil. If the pencil was sharpened in a normal sharpener that gives a circular point, the lines will be equal in width and density regardless of whether plotting with X fixed and Y moving, or vice versa. Now, visualise it with a pencil sharpened the way you were taught in high-school woodwork class, i.e., a chisel point. Let's say the long axis of the chisel point is aligned in the X direction. You can see that with Y fixed and the pencil moving in the X direction, the line drawn will be narrow and full density. When plotting with X fixed and Y moving, the line will be wider and less dense.
- You can now see that it is NOT the motion of the mirrors that is the key to understanding it. The key is the direction of beam movement over the workpiece with respect to the polarisation, the polarisation being unchanged by the mirror movement (I mean the beam does not rotate as the mirrors move - the polarisation is fixed for any given mirror configuration).
- It appears similarly that a linearly polarised laser cuts metal wider if the polarisation is aligned with the direction of beam movement. If the cut is wider, the energy absorbed per unit width must be less. By making the polarisation circular, the effect is midway between the aligned direction and the orthogonal direction, regardless of direction of travel.
- Wickwack 121.221.25.39 (talk) 00:46, 22 March 2013 (UTC)
- That just sounds totally bizarre to me. The laser beam is not literally fatter in the direction of its polarization, and thin in the direction orthogonal to it. Someguy1221 (talk) 01:25, 22 March 2013 (UTC)
- Nobody said it was, though in practice it could be another (minor) factor. But the two references the OP cited indicate that polarisation vis a vis direction of travel does matter (due to another reason). The essence of the OP's query is that he could not visualise how movement of mirrors affects cutting. In fact it doesn't - Steve has misunderstood what's happening. I provided an analogy to help him understand that movement of mirrors is a red herring. Perhaps you could read more carefully what the OP said, what I said, and what the references said. Wickwack 124.182.55.66 (talk) 02:33, 22 March 2013 (UTC)
- You said it was, in your pencil analogy. Someguy1221 (talk) 07:30, 22 March 2013 (UTC)
- Well, that is the danger of using analogy I suppose. Some people extrapolate beyond what is intended in a foolish way, just to show they haven't got the point. How would you have explained to the OP that the movement of mirrors is not the key as he thought it was? The point is not about pencils, the point is that the mirrors do not rotate the beam just as the pencil is not rotated - this results in different conditions at the workpiece as the cutting/marking direction changes - something the analogy makes obvious. Wickwack 121.221.211.67 (talk) 14:55, 22 March 2013 (UTC)
- I understand from your analogy how a non-circular cross-section beam might produce different cut widths when cutting vertically instead of horizontally...but as far as I know, the beam is more or less circular in cross-section - and focussed down to a small circular point. So, OK, maybe this is an analogy - but how does linear polarization versus circular have anything to do with the beam width? SteveBaker (talk) 15:49, 22 March 2013 (UTC)
- You are like Someguy1221 - you have focussed on cross section, whereas my analogy was to help the OP understand that the orientation of the beam wrt the workpiece does not change but does change wrt cutting direction. The pencil with a chisel cut is "polarised" and the laser beam is polarised, but not in the same way. I thought that was so obvious as to not require qualification - but it seems for some people that it is a confusion. Wickwack 58.170.154.119 (talk) 01:09, 23 March 2013 (UTC)
- If I'm making the same misconceptions as SteveBaker, I feel pretty good about myself. Someguy1221 (talk) 01:38, 23 March 2013 (UTC)
- It would seem from Steve's comments below that he later understood what I was getting at, and devised a test that could prove or disprove it. So your feeling good is not, for now, justified. Wickwack — Preceding unsigned comment added by 58.170.154.119 (talk) 01:56, 23 March 2013 (UTC)
- I'm sorry your analogy sucked. Someguy1221 (talk) 04:47, 23 March 2013 (UTC)
- I think this is related to the principle behind Brewster's angle, i.e., when light hits a surface at a glancing angle, the rate of absorption is higher when the polarization direction is aligned with the surface normal. It's not obvious what polarization would be best for laser cutting, but it makes sense that it might matter. According to the paper, for the simplest transverse beam mode (TEM00), circular polarization (C) is preferable to linear polarization either parallel (P) or perpendicular (S) to the cut, for some reason related to the fact that it's absorbed equally in the forward and sideways directions. -- BenRG (talk) 04:08, 22 March 2013 (UTC)
- That certainly sounds reasonable to me, though I'd have thought you'd want the energy of the beam to be absorbed preferentially in the direction of the cut rather than the two sides so getting the polarization right might lead to not only a quicker cut but a thinner one. Dmcq (talk) 13:49, 22 March 2013 (UTC)
- Yes, but that would require that the polarisation be continually adjusted to match the direction of cut as the direction of cut changes. The advantage of circular polarisation is that you get a consistent cut finish without the complexity that rapid polarisation change would require. Wickwack 121.221.211.67 (talk) 14:44, 22 March 2013 (UTC)
- Sure, the plane of polarization may change at each of the three mirrors, but these silicon-backed, gold-coated mirrors are claimed to be 98% reflective, so clearly not much energy is lost due to the linear polarization. Indeed, if they were much less reflective than they are then the energy that they absorb would have them glowing red hot within seconds. (And that's exactly what happens if you don't keep the mirrors scrupulously clean!). The circular beam geometry is still circular as it enters that final lens - and I know that it is because if I put a piece of tape in place of the focussing lens, I get a neat, almost perfectly circular, 1/4" hole chopped out of it.
- So if even the worst case for polarization only loses us 2% of energy per each of the three mirrors - then no improvement due to polarization will change that by much.
- Since the final beam is circular (at least to the precision that I can visually estimate it), there can't be any gross geometric reason for this effect.
- The laser hits the material at 90 degrees to the surface.
- Are we saying that in a brief *zap* with no motion of the mirrors - that even though the beam is circular, the polarization results in an elliptical hole being formed? The "pencil analogy" (well, the "Calligraphy nibbed pen analogy" might work better!) seems to require that to be a true statement.
- The hole that my laser cutter makes is a few hundredths of a millimeter across - so I'm going to need a microscope to know that for sure!
- I do have a "kerf test" pattern that I can probably use to measure the thickness of the hole that's produced - I habitually measure a Y-direction cut when I'm testing to be sure that the laser is properly focussed - I guess I could easily rotate the test pattern by 90 degrees to verify that.
- SteveBaker (talk) 15:49, 22 March 2013 (UTC)
- Steve - great to see you reading all this! Many times we respond to a question, but never hear from the OP again, so we don't know whether we helped, or it was just a lot of blather to the OP, or whatever. As I said right at the start, laser cutting is NOT a field I am expert in, but I could see right away that which mirrors move has nothing to do with why circular polarisation may be beneficial, so I explained that. I'm not qualified to say whether a spot zap will produce an elliptical hole, though it seems likely. The tape test could be misleading, as it is at different intensity per unit area, and (I presume) a different material. A test using a rotated kerf test pattern seems a good idea. Not least because if you, the user, can't tell the difference, all this theory matters not - you need not waste money on the circular polarisation gizmo. I never thought the mirrors had any significant loss, for the reason you gave (they'd get hot), but even if they did have significant loss, as the beam is virtually of constant width between the mirrors (or should be), moving the mirrors will not change the total loss. Wickwack 58.170.154.119 (talk) 01:52, 23 March 2013 (UTC)
- Wickwack, I see nothing in the paper to suggest that linear polarization is preferable to circular even when you only cut in one direction. It seems to say exactly the opposite. Steve, the mirrors have nothing to do with it. I was talking about absorption by the material that's being cut. As the paper says, the cut is usually much deeper than the diameter of the beam, so most of the beam energy is absorbed by the walls of a deep pit in the material, at a small glancing angle. -- BenRG (talk) 18:04, 22 March 2013 (UTC)
- I just had a read of that paper and it explains why what I was saying about just using p polarized light didn't work as well as might be hoped. The cut rapidly became V shaped in the direction of the cut and so the beam wasn't being absorbed properly. The circularly polarized light gave a U type cut instead and was about as good as the p-polarized light and better than the s-polarized and since it was easier it is a good general solution, but the radially polarized light was even better even though it is a more complicated business to produce it. At least that's my reading of it. Dmcq (talk) 23:31, 22 March 2013 (UTC)
- So P-polarised (aligned with cut direction) does work better than S-polarised (orthogonal to cut direction), contrary to what BenRG says? That was my reading of it as well. Wickwack 58.170.154.119 (talk) 02:06, 23 March 2013 (UTC)
- Most of the time, the cut is *much* deeper than the diameter of the beam. I mostly cut materials between 3 and 6mm thick - but my machine can delicately cut paper and thin cloth - and up to maybe 15mm wood. The unfocussed laser beam is about the diameter of a pencil - but it's focussed down to a few hundredths of a millimeter. It's a 100 watt laser - the amount of energy it's putting out is no more than a 100 watt lightbulb. The reason it slices through wood and plastic like it wasn't there is because you're taking all of that energy output and concentrating it into a microscopic dot.
- The business of the shape of the bottom of the cut is interesting...but the actual practical situation is more complicated. We have a gizmo called "Air Assist" which is a high power air jet aimed into the slot as the laser cuts. This helps the laser to cut thick materials by pushing the intensely hot air surrounding the laser beam down into the slot instead of allowing it to rise up out of the slot. That hot air pre-heats the material before the laser hits it.
- With the air assist turned on, I'd expect a very different leading edge to the cut than without it. SteveBaker (talk) 03:21, 24 March 2013 (UTC)
- Depending on the material you are cutting, Air Assist might be causing combustion, somewhat like oxy-acetylene cutting steel. If you attempt to cut steel with a standard oxy-acetylene welding torch, what you do is locally melt the steel - resulting in an ugly blobby cut without good penetration. Oxy-acetylene cutting torches have an extra oxygen feed - when you turn on the extra oxygen (after first locally heating the start of the cut) it rapidly burns the workpiece, resulting in much deeper penetration and a much better cut finish.
- Did you try rotating your kerf test pattern 90 degrees to see what the effect was?
- Wickwack 60.230.231.106 (talk) 03:53, 24 March 2013 (UTC)
- Isn't the air assist just to keep the area clear of smoke and actually to stop the hot air affecting anything else, so only the point where the laser is pointed is affected and you have a nice clean cut. Dmcq (talk) 18:53, 24 March 2013 (UTC)
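The Brewster's-angle argument above can be made semi-quantitative with the Fresnel equations: at the near-grazing angles the beam makes with the walls of a deep kerf, the absorbed fraction differs strongly between p-polarisation (in the plane of incidence, i.e. along the cut) and s-polarisation, and circular polarisation averages the two. The sketch below uses an assumed, purely illustrative complex refractive index rather than measured data for any real metal at 10.6 µm, so treat the numbers as qualitative only.

```python
# Toy Fresnel calculation: absorptance of s- and p-polarised light hitting an
# opaque surface at a glancing angle. The refractive index is an assumption
# chosen for illustration, not a measured value for any particular metal.
import numpy as np

N1 = 1.0          # air
N2 = 3.0 + 4.0j   # assumed complex index of the (opaque) workpiece

def absorptance(theta_deg: float):
    """Return (A_s, A_p) = 1 - reflectance for s and p polarisation."""
    theta = np.deg2rad(theta_deg)
    cos_i = np.cos(theta)
    sin_t = N1 * np.sin(theta) / N2              # Snell's law, complex angle
    cos_t = np.sqrt(1 - sin_t ** 2)
    r_s = (N1 * cos_i - N2 * cos_t) / (N1 * cos_i + N2 * cos_t)
    r_p = (N2 * cos_i - N1 * cos_t) / (N2 * cos_i + N1 * cos_t)
    return 1 - abs(r_s) ** 2, 1 - abs(r_p) ** 2

for angle in (0, 45, 70, 80, 85):                # angle of incidence, degrees
    a_s, a_p = absorptance(angle)
    print(f"incidence {angle:2d} deg:  A_s = {a_s:.3f}  A_p = {a_p:.3f}  "
          f"circular ~ {(a_s + a_p) / 2:.3f}")

# Near grazing incidence (the situation on the kerf walls), A_p is much larger
# than A_s, so a linearly polarised beam couples energy into the cut differently
# depending on whether the cut runs along or across the polarisation; circular
# polarisation gives the same average either way.
```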
Neaspora in cattle
Can a bull whose mother is positive for Neospora pass on the disease to his offspring? — Preceding unsigned comment added by 92.41.39.54 (talk) 20:55, 21 March 2013 (UTC)
- According to our article (Neospora), it can be passed from cow to calf, but not from bull to calf. Tevildo (talk) 21:21, 21 March 2013 (UTC)
What is this animal called?
Picture: http://imgur.com/Nc8Izej --109.173.37.164 (talk) 20:59, 21 March 2013 (UTC)
- A Tapir (Tapirus terrestris, the Brazilian tapir, in this particular example). Tevildo (talk) 21:23, 21 March 2013 (UTC)
- Thanks! --109.173.37.164 (talk) 21:52, 21 March 2013 (UTC)
Why do faces matter?
Why do we humans care so much about faces? An ugly woman could be perfectly able to generate offspring, so why bother about her wide nose, big ears, or whatever trait could make a face ugly? Do faces reflect the functioning of internal organs? OsmanRF34 (talk) 21:04, 21 March 2013 (UTC)
- Sex appeal, maybe? I, for instance, would be much more turned on by a woman with a hot face than with an ugly face, if the rest of their bodies were (hypothetically) equal or almost equal in appearance. Futurist110 (talk) 22:04, 21 March 2013 (UTC)
- I'd say it's far more about free choice than generation of offspring. Personally I don't care if someone is ugly or beautiful; it depends on their intelligence and such. Again, that's fairly subjective as well. This sort of thing happens with animals all the time. If you're a male peacock, it doesn't matter if you're the most fertile if the female doesn't like your plumage. RMoD (talk) 22:16, 21 March 2013 (UTC)
- Futurist110: your answer is kind of tautological
- RMoD: it leaves open why people freely care about faces. OsmanRF34 (talk) 22:41, 21 March 2013 (UTC)
- "The Evolutionary Psychology of Facial Beauty" talks about the various theories: "Theorists have proposed that face preferences may be adaptations for mate choice because attractive traits signal important aspects of mate quality, such as health. Others have argued that they may simply be by-products of the way brains process information." Personally, I believe it's driven by the inexorable evolutionary pressure to get a Wikipedia article, the pinnacle of success. See WP:HOTTIE. Clarityfiend (talk) 22:19, 21 March 2013 (UTC)
- nice link. OsmanRF34 (talk) 22:41, 21 March 2013 (UTC)
- Also note that "beauty is in the eye of the beholder". I, for one, place a high value on facial beauty, but I know that my idea of beauty is quite different to that of other people (or at least to the one portrayed by the media). All those "supermodels" don't hold any appeal to me. 86.136.42.134 (talk) 22:45, 21 March 2013 (UTC)
- is there any evidence that symmetry and proportion in face and body is reflective of a deeper symmetry and proportion of organs, bones, and even a proper balance of chemicals and chromosomes?68.36.148.100 (talk) 01:36, 22 March 2013 (UTC)
- The Greek and Roman definition of beauty was a straight nose in line with the forehead. I read that in Africa males prefer really weighty women. Japanese women ought to have small feet, some African tribes prefer very long necks, etc. It's all specific cultural tradition, obviously. In the end it seems you simply adopt whatever feature or shape all or most of your local competitors are after. --Kharon (talk) 05:59, 22 March 2013 (UTC)
Scientifically Determining One's Relatives
Is it scientifically possible (excluding paternity and maternity tests) to determine how closely related someone is to someone else? For instance, would it be scientifically possible (without looking at records and documentation) to determine that my first cousin is more closely related to me than my third cousin or fourth cousin? I apologize if this is a stupid question, but I am genuinely curious about this. Futurist110 (talk) 22:03, 21 March 2013 (UTC)
- The only stupid question is the one you don't ask. So, what information do you allow? Are we allowed eyewitness testimony? If there is only one doctor that delivers all the babies, he could establish maternity. Are you allowed to compare whether people look alike? --Guy Macon (talk) 22:20, 21 March 2013 (UTC)
- Nope, eyewitness testimony is not allowed. Neither is comparing if people look alike. Only scientific (especially genetic) testing is allowed for the purposes of this question. Futurist110 (talk) 00:42, 22 March 2013 (UTC)
- It's not that hard, actually. You share 50% of your genes with each of your siblings, 25% of your genes with each first cousin, 12.5% with each second cousin, and so on. If your nearest common relative is x generations back, you'll share 1/2^x of your genes with them. For removed cousins, replace the x with (x+y)/2, where x and y are the number of generations back for each of you to that nearest common relative. --Jayron32 22:25, 21 March 2013 (UTC)
- I believe Jayron's excellent answer becomes more correct with liberal insertions of "on average". That is, siblings on average have 50% of their alleles in common (let's also remember that, strictly speaking, genes are not alleles: the gene is the locus, the allele is what code is there). A given pair of siblings may share more or less than 50%, due to some randomness in recombination, meiosis, and other reproductive processes. SemanticMantis (talk) 23:01, 21 March 2013 (UTC)
- Pardon my ignorance on this, but is there any scientific way to test for the percentage of common genes between two people? Futurist110 (talk) 00:42, 22 March 2013 (UTC)
- Yes, you completely sequence their entire genomes, and then you can define a degree of similarity. Someguy1221 (talk) 01:06, 22 March 2013 (UTC)
- For a shortcut, since chromosomes normally stay more or less intact, you can just check the number of chromosomes in common. This is a lot less work than comparing every base pair (you do need to compare some base pairs, to determine if the chromosomes are common or not). One of the chromosomes may not even require genetic testing. Excluding abnormal sex chromosomes, if one of the two people being compared is male, and the other is female, then you know they don't share the chromosome which is X in the female and Y in the male. StuRat (talk) 02:10, 22 March 2013 (UTC)
- I'm quite sure it's been explained before that most people will probably have zero chromosomes in common with their parents or anyone else (except identical twins or clones, and ignoring random mutations) due to chromosomal crossover. While our article doesn't discuss it, I believe sources have been presented before showing that most chromosomes undergo at least one crossover during successful meiosis, and even if they weren't, it's easy to find sources saying so for C. elegans [2] and humans [3]. As our Pseudoautosomal region article mentions, we shouldn't even exclude the sex chromosomes, since crossover appears to be needed for the X-Y pair as well. Nil Einne (talk) 13:36, 22 March 2013 (UTC)
- Yes, but isn't the total volume of genetic change resulting from the one or two chromosome crossovers quite small, such that it could be ignored when trying to do a quick comparison of "relatedness" ? StuRat (talk) 15:57, 22 March 2013 (UTC)
- Companies like 23andMe can work up detailed genetic profiles and will identify relationships. Their "relative finder" tool will flag siblings, parents, and cousins. For more distant relationships they say things like 3rd to 6th cousin, or 4th to "distant" cousin, etc., which give estimated ranges for relatedness. Dragons flight (talk) 01:22, 22 March 2013 (UTC)
- One issue that will come up in any genetic comparison, though, is whether the genetic differences are "significant". That is, there are many differences in DNA which don't actually do anything important, while one base pair different, somewhere else, can make a huge difference. So, just looking at the total volume of genetic differences isn't very useful in telling how different two people are. (It's a bit like comparing the contents of two houses to determine the similarity in wealth of the owners, and saying "there's only 1 difference found, so they must have a similar wealth", even though that one difference is the presence of a 1000 carat diamond in one of the two houses.) StuRat (talk) 02:13, 22 March 2013 (UTC)
- True (sort of), but not relevant to the OP's question. A better analogy is this: Two houses should not be regarded as being built to different designs just because one has more furniture, or painted in different colours. Wickwack 124.182.55.66 (talk) 02:42, 22 March 2013 (UTC)
- The OP mentions "excluding paternity and maternity tests", but allows genetic testing, without perhaps realizing that most Parental testing (especially such that would be recognised in court) IS genetic. Vespine (talk) 02:51, 22 March 2013 (UTC)
- Consider: A man and his wife have sex several times a week on a regular basis. One day, the wife has an affair with her husband's identical twin brother. 9 months later she gives birth. I don't think there is any test of any kind that could tell you who the father of that baby is.
- It's all a matter of the degree of certainty you require and biological similarity of the potential fathers and mothers:
- If they are genetically kinda similar (same blood groups, for example), then figuring it out is harder, if there are identical twins involved, it's probably impossible.
- If the potential candidates are sufficiently genetically different then it's much easier. Suppose the mother and one potential father have dark skin and the other potential father is Caucasian - then the skin color of the child is almost certainly a dead giveaway as to who the father was (not 100% certain - but pretty damned close). I'm sure that having certain genetically-linked diseases pop up in parent and child would also make the result a near-certainty.
- In more average situations, you might be able to study the family tree of one possible parent versus another and look at the hair and eye color, the propensity for certain genetic diseases and thereby come up with a fairly certain idea of who was the parent...but you'd never be 100% sure.
- The degree of certainty (it's never going to be 100%) depends on the degree of genetic diversity in the candidate parents and the amount of information you have about how those genes are expressed in ways that you can measure within the constraints of the rules you're arbitrarily applying. If we can do a complete gene-sequence of everyone involved - then we'll be very certain. If we're only allowed to talk to them on the phone, we'll be very uncertain. If they are identical twins, we won't know - if they are wildly opposite genetically, then we can probably tell at a glance.
- We can't give you a definite yes or no answer on this one. SteveBaker (talk) 17:01, 22 March 2013 (UTC)
- There are genetic tests that can tell identical twins apart: basically, you do a full sequencing of the genome and look for mutations that took place after the egg divided into two embryos. There won't be very many of them, which is why you need to look at the entire DNA sequence rather than the limited areas that a typical DNA fingerprint uses. --Carnildo (talk) 22:32, 22 March 2013 (UTC)
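As a toy illustration of the "sequence the genomes and define a degree of similarity" idea discussed above: the snippet below just counts the fraction of matching sites between made-up sequences. Real relatedness estimation, of the kind the 23andMe-style services mentioned above perform, instead looks for long stretches inherited identical-by-descent, which is what actually lets you tell a third cousin from a fourth; this is only a naive sketch with invented data.

```python
# Naive "degree of similarity" between two sequences: the fraction of
# positions that agree. Sequences are made up; real relatedness testing
# uses shared identical-by-descent segments, not raw per-site agreement.
import random

def naive_similarity(seq_a: str, seq_b: str) -> float:
    """Fraction of positions at which two equal-length sequences agree."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be the same length")
    return sum(a == b for a, b in zip(seq_a, seq_b)) / len(seq_a)

def mutated_copy(seq: str, rate: float, bases: str = "ACGT") -> str:
    """Copy a sequence, re-drawing a random base at roughly `rate` of the sites."""
    return "".join(random.choice(bases) if random.random() < rate else b for b in seq)

random.seed(0)
genome_a = "".join(random.choice("ACGT") for _ in range(10_000))
close, distant = mutated_copy(genome_a, 0.01), mutated_copy(genome_a, 0.10)

print(f"'close relative'   similarity: {naive_similarity(genome_a, close):.3f}")
print(f"'distant relative' similarity: {naive_similarity(genome_a, distant):.3f}")
```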
Can a chimpanzee give birth to a human baby?
Suppose we implant a fertilized egg in the womb of a chimpanzee. Will this lead to a healthy baby after 9 months? Would the baby be born after 9 months or would it have to be removed using a C-section? Count Iblis (talk) 23:23, 21 March 2013 (UTC)
- Almost certainly not by natural birth. The human pelvis is adapted to be able to give birth to something with a huge head compared to other primates. --Guy Macon (talk) 00:01, 22 March 2013 (UTC)
- And if you set aside the anatomical issue (which kills a tremendous number of babies across all primate species) and then set aside any tissue rejection issues, which would seem to be the major issue, the hormonal triggers in Homo sapiens pregnancy differ in some subtle, and not so subtle, ways from those of other primates. Even Rh factor matters in human births. It's an incredibly sensitive process, and it fails about as much as it works. That all said, the better question is: what are the impediments to an interspecies birth? I think that question's a lot more answerable, and maybe it has been done? I'd like to hear the expanded explanation of this. Shadowjams (talk) 01:16, 22 March 2013 (UTC)
- I would also be interested to hear about the feasibility of implanting a Wooly Mammoth embryo in an elephant. (Insert Jurassic Park joke here) --Guy Macon (talk) 02:44, 22 March 2013 (UTC)
- Or maybe in a ferret, where after a while you'll be singing, "Pop! Goes the weasel." ←Baseball Bugs What's up, Doc? carrots→ 04:37, 22 March 2013 (UTC)
Ultimately, this question is unanswerable as no one has tried. Good luck floating this one past an ethics committee. Should have tried it while the Soviet Union was still around. Someguy1221 (talk) 10:27, 22 March 2013 (UTC)
- They did! See Humanzee. Although this was the creation of a hybrid, not the implantation of a human embryo. (I seem to recall a plan to implant human embryos into cows, as well - I'll see if we have an article on it). Tevildo (talk) 10:38, 22 March 2013 (UTC)
- They didn't! Read the article... --Jayron32 20:02, 22 March 2013 (UTC)
- I didn't say they _succeeded_, and I note the large number of "citation needed" tags in the article (which are on material which wasn't there the last time I read it). But the attempt was seriously considered, at least. Tevildo (talk) 20:41, 22 March 2013 (UTC)
A follow-up question: Can a human female (a woman) give birth to a chimpanzee baby if a fertilized chimpanzee egg is implanted in the woman's womb? The head of a chimpanzee baby is much smaller, so I don't think anatomy would be an issue. --PlanetEditor (talk) 11:14, 22 March 2013 (UTC)
Thanks for all the answers so far. If this sort of thing can be made to work, women would no longer need to get pregnant, so it would have huge economic benefits. Count Iblis (talk) 12:29, 22 March 2013 (UTC)
- someone has to post it [4] Gzuckier (talk) 14:36, 22 March 2013 (UTC)
- OK - that's just a dumb comment! You aren't thinking this through.
- Firstly: You're making the grave mistake of assuming that women don't want to get pregnant in order to have children...which is certainly not universally true. Many women enjoy their pregnancies...many more want to have the experience at least once, even though they know it can be tough on their bodies. Also, handing over a baby that the mother didn't carry for all those months causes all kinds of bonding issues - it would prevent lactation, so the baby would have to be bottle-fed (which is known to have all sorts of adverse consequences).
- Secondly: There is the demographic issue - there are 12.7 babies born for every 1,000 people in the US every year - so in a population of 300 million, you'll be needing around four million female chimps in good health and child-bearing age at all times. That means you'll need to breed those animals, feed and house them somehow throughout their lives. Suppose one female chimpanzee could survive a dozen pregnancies - she could serve as surrogate womb for only between 4 and 6 human females - depending on the birth rate, etc. Ultimately, the total male, female, adult and baby chimpanzee population would have to be perhaps a tenth to a quarter of the human population size to make this a sustainable effort. That is not a small economic cost! Over the life of a human female, she would (in effect) have to pay for perhaps a quarter of the lifetime cost of breeding, owning, housing and rearing a chimpanzee in reasonably hygienic and humane conditions - and her only benefit is saving the time she'd not be working due to pregnancy. That's got to be millions of dollars over the life of the chimp. Most women these days are able to work and produce economic gain until within weeks of the birth. My g/f's daughter worked until 3 days before she gave birth. All of the economic "loss" is in rearing the child for the first months until she can return to work. So almost none of the economic loss due to her pregnancy would be saved by surrogacy. So under your scheme, the earnings lost over perhaps a month or two total in a woman's life - maybe $10,000 - would have to pay for her share of a quarter of the cost of housing a chimpanzee for 30 or 40 years!
- Thirdly: The cost of the specialised medical intervention required to transplant eggs into the chimp, the cost of the (required) cesarean birth, the cost of anti-rejection drugs...that alone would dwarf the economic loss of a few weeks of the natural mother's work.
- I don't think there is an economic gain here at all! It's a gigantic loss. Biology and ethics aside - I don't think such a scheme would be met with the wild economic enthusiasm you imagine.
- SteveBaker (talk) 16:40, 22 March 2013 (UTC)
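- For anyone who wants to check the scale of those numbers, here is a rough back-of-the-envelope sketch of the arithmetic above. Every figure is an illustrative assumption taken from that post (birth rate, population, gestation time), not a sourced statistic, so treat the output as order-of-magnitude only.

```python
# A back-of-the-envelope version of the arithmetic in the post above.
# All figures are illustrative assumptions taken from that post.
births_per_1000 = 12.7          # assumed US crude birth rate
population = 300e6              # assumed US population
gestation_years = 0.75          # roughly nine months

births_per_year = births_per_1000 / 1000 * population
# Chimps that would be pregnant at any one moment, ignoring recovery time
# between pregnancies (which would push the number higher):
surrogates_needed = births_per_year * gestation_years

print(f"Births per year: {births_per_year:,.0f}")                # ~3.8 million
print(f"Surrogates pregnant at once: {surrogates_needed:,.0f}")  # ~2.9 million
```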
- Steve, I agree with your conclusion, that it's unlikely to come out in the green economically, but did you really mean to say that it would cost several million dollars to maintain a chimp for life? Is the chimp eating at Chez Panisse every night, or what? --Trovatore (talk) 20:53, 22 March 2013 (UTC)
- Actually, it wouldn't prevent lactation at all. Also, given that surrogates exist and are often engaged at significant cost to the genetic parents, there certainly could be a market for this type of thing, even if it is hideously expensive. 202.155.85.18 (talk) 03:42, 23 March 2013 (UTC)
- The uterine environment matters enough that people talk about it affecting sexual orientation, obesity and so forth depending on what happens to a human mother. The effects of a chimp uterine environment are hard to predict, but I doubt they would be negligible. Wnt (talk) 20:42, 22 March 2013 (UTC)
- Intergeneric hybrids are known. It seems rather silly to assume there is something especially different between chimps and humans metabolically that would make it difficult for chimps to stand in as surrogates for humans, other than the size of the human infant and its skull. μηδείς (talk) 21:25, 22 March 2013 (UTC)
- In biology you never know until you do the experiment ... that's what makes it so much fun! The reverse experiment would be easier, of course - all we need is a puckish technician at a pro-life in vitro fertilization clinic who has a friend at the zoo... Wnt (talk) 14:48, 23 March 2013 (UTC)
- But where would you find a woman willing to go along with this scheme? 24.23.196.85 (talk) 19:15, 23 March 2013 (UTC)
- That's what makes him puckish. :) Wnt (talk) 23:20, 23 March 2013 (UTC)
About the costs of using chimps, there may be other options, like artificial wombs or biological versions of that. You can e.g. imagine growing a womb using a woman's stem cells in the lab... Count Iblis (talk) 15:12, 23 March 2013 (UTC)
March 22
Cruel Nazi research
(No, this is not about making human-chimpanzee hybrids) Is it possible, in principle, to surgically alter a human brain so as to make the victim incapable of rebellion or defiance? I re-read Isaac Asimov's History of I-Botics a couple of nights ago, and it got me thinking: is it even possible to create something like the Iron Major? 24.23.196.85 (talk) 06:33, 22 March 2013 (UTC)
- A frontal lobotomy can do that.
- Also, there's the legend that a voudou zombie is created by feeding the victim some type of poison, including part of a puffer fish, which puts the victim in a coma, which can be mistaken for death in warm climates (in cold climates the body not cooling down would be a clue). When the victim awakes, the part of the brain responsible for making decisions and free will is destroyed, leaving the victim highly susceptible to following orders "mindlessly". StuRat (talk) 06:54, 22 March 2013 (UTC)
- If there's a "part" of the brain responsible for free will, nobody has a clue where it is. --140.180.249.152 (talk) 07:00, 22 March 2013 (UTC)
- How about the frontal lobe ? As our article states, frontal lobotomies can result in a lack of initiative. StuRat (talk) 07:05, 22 March 2013 (UTC)
- There's no question that it's possible. Just destroy the brain completely and he'll be incapable of anything, or destroy the hippocampus (like Henry Molaison) so that he can't form any long-term memories. The answer depends more on what abilities you still want him to have than on what abilities you don't want him to have. --140.180.249.152 (talk) 07:00, 22 March 2013 (UTC)
- Anyone would be pretty incapable of rebellion or defiance if you severed the spinal cord in the neck. By the way, what's the nazi connection here? 202.155.85.18 (talk) 07:53, 22 March 2013 (UTC)
- I don't know but it sounds more like Jeffrey Dahmer to me. He injected hydrochloric acid or boiling water in his victims' frontal lobes to try to make a Voudou zombie. Sagittarian Milky Way (talk) 21:03, 22 March 2013 (UTC)
- Dictatorships don't want to harm the brain functions of their subjects, because in that case the subjects will be incapable of serving the dictator. The only way to make the victims incapable of rebellion or defiance is to impair their critical thinking ability through the use of extensive propaganda and formal education. This is how the subjects are made to serve the dictator without altering their brain anatomy. --PlanetEditor (talk) 11:28, 22 March 2013 (UTC)
- Of course, nowadays pretty much everybody is neurologically materialist, i.e. the brain determines thought, so any kind of propaganda which prejudices the recipient to a certain kind of thought essentially by definition must be reflected as some sort of physical change in the brain, even if it's so subtle/complex that we can't detect it physically. Gzuckier (talk) 14:41, 22 March 2013 (UTC)
- A low protein diet is another option, which results in "brain fog". StuRat (talk) 15:50, 22 March 2013 (UTC)
- Thanks, everyone! To avoid any confusion, my question was about surgical procedures that destroy a person's ability to rebel against his/her master without impeding his/her ability to perform assigned tasks -- in effect, making the person a slave without any hope of becoming free. And the reason why I asked is, as I've already said, I've recently re-read an Isaac Asimov novel in which the Nazis build a robot that actually uses the brains of Holocaust victims as its CPU (the brains having first been processed to eliminate their ability to rebel), but I had doubts whether such a machine could actually use the brains of enemies, or whether it would instead have to use the brains of Nazi recruits who had been brainwashed into making the supreme sacrifice for the Reich. Judging by the responses I see here, it would indeed be possible to use the brains of enemies (assuming, of course, that one could build such a machine, which is another matter). Once again, thanks! 24.23.196.85 (talk) 04:26, 23 March 2013 (UTC)
- Not merely possible, but inevitable. The key invention in the process - already under development - is a prosthetic hippocampus. The victim's state can be recorded during a duped moment of genuine loyalty, or (with some adaptation) reworked from some other subject. In this way he can simply be "rebooted" periodically when he starts to stray, or higher-order processing used to eradicate undesired beliefs.
- The world doesn't have much of a place for somebody who needs 20 years of expensive schooling to learn an advanced profession when they can simply hire someone with proven skills and a proven right attitude. The apparatus for updates with a laser array is left as an exercise to the reader. "And that no man might buy or sell, save he that had the mark, or the name of the beast, or the number of his name." Wnt (talk) 14:55, 23 March 2013 (UTC)
- Indeed possible with today's technology, but we're talking about the 1940s here. 24.23.196.85 (talk) 19:14, 23 March 2013 (UTC)
- No way is it possible with today's science, or any science that we can foresee in a concrete way. The "prosthetic hippocampus" is a joke. It has about as much in common with a real hippocampus as a department store mannequin has in common with a real human. Looie496 (talk) 02:58, 24 March 2013 (UTC)
- Agreed. Wnt's scenario, while intriguing, is highly speculative. No device capable of replicating the basic functions of a significant brain region/module is anywhere near reality, let alone one that would allow us to preserve specific functions while custom tailoring specific reactions with regard to others. For that matter, the elements of personal memory Wnt referenced are far from completely consolidated in the hippocampus; though again, this is all highly speculative, we can say with confidence that if such a feat were to be accomplished with a human brain (as we know it today), you'd need a number of these implants. And arguably, if you had reached this level of advancement in integrating the processing of a human brain with that of artificial processors, it would be easier for you to start from scratch and build a functioning artificial brain already designed to take orders without question for your helpless thrall. Unless of course enslaving a pre-existing mind was the end unto itself. This latter scenario is actually the plot of a movie, Ghost in the Shell 2: Innocence; in this film (spoiler alert, I am about to give away the movie's central mystery!), the minds of unwillingly abducted women and girls are dubbed onto otherwise mechanically responsive sexbots, even though perfectly functional automatons with advanced behavioral responses already exist -- the men who buy these robots get off on the idea that there is a "soul" trapped within their dolls, and meanwhile the women (or a copy of their personality in any event, as the Ghost in the Shell movies are always eager to put forward the question of whether the copy is still the original person or not without giving any definitive answer) are for the most part trapped entirely within their new mechanical bodies, experiencing everything it does and that is done to it, but generally unable to influence their actions or responses to commands, which are typically exactly the same as if they were the mindless machines they are supposed to be. Pretty dark, really. Snow (talk) 06:40, 24 March 2013 (UTC)
- Kind of like The Stepford Wives? 24.23.196.85 (talk) 04:18, 25 March 2013 (UTC)
Electrical circuit that is 10 light minutes in length
If I have an electric wire that is 10 light-minutes long (ignore resistance), with a battery on one end and a place for a light bulb in the middle, and I let the wire sit for 20 minutes and then connect the light bulb, will it go on immediately? After 5 minutes (each side only has to travel 5 minutes)? After 10 minutes (a complete circuit)? After 20 (first the message that a bulb has been inserted is sent to the battery, then another 10 minutes for the actual energy to arrive)? Or 10 minutes (the message and energy each take 5 minutes)?
What if it's already burning, and I remove the battery? I'm sure it will stay lit for either 5 or 10 minutes - but what is powering it for those minutes? Is the really long wire "storing" energy in some way? How does the wire know how much energy to store - the signal for the wattage of the bulb takes 5 or 10 minutes to travel to the battery. What if I remove the bulb while the energy is in transit? What if I remove the bulb and the battery to really confuse the electricity? Where does that "stored" energy go? Ariel. (talk) 07:24, 22 March 2013 (UTC)
- See Transmission line and Electrical length for our articles on the subject. To answer your specific questions, the bulb will light after approximately _15_ minutes (as the typical speed of a signal in a wire is 2/3 c). It's important to note that there isn't a mystical substance called "information" that is physically transferred in any system - any information has to have a physical carrier, electromagnetic energy in this case. The wire is indeed storing the energy in its intrinsic capacitance and inductance. If you remove the bulb before the battery, the pulse of energy will travel up and down the wire as a standing wave until it all dissipates as electromagnetic radiation. (In a real wire, the energy will all be dissipated by resistive heating). Tevildo (talk) 10:18, 22 March 2013 (UTC)
- The physical carrier is not electromagnetic energy, but the kinetic energy of the electrons in the current. If it was electromagnetic energy, then information would be transferred at the speed of light. Plasmic Physics (talk) 10:26, 22 March 2013 (UTC)
- No it wouldn't. c is the speed of light in vacuo. In a physical medium (such as the inside of a transmission line), the speed of light (and all other forms of electrical energy) is less than c. Tevildo (talk) 10:48, 22 March 2013 (UTC)
- Reading the above, your statement is _literally_ correct - the information is transferred at the speed of light. But the speed of light is not c in a transmission line. One has to be pedantic sometimes. Tevildo (talk) 10:50, 22 March 2013 (UTC)
- If the transmission line is bare conductors in a vacuum then the transmission speed will be the speed of light, but if it is encased in plastic, then it may be slower. Graeme Bartlett (talk) 11:17, 22 March 2013 (UTC)
- No, that's a vacuum-dielectric twin-lead feeder, which (according to our article) will have a propagation speed of about 71%c. _Any_ transmission line will have significant amounts of reactance (mainly inductance for the twin-lead feeder), which will slow down the signal. Using a vacuum dielectric will reduce the reactance, but not eliminate it. Tevildo (talk) 11:28, 22 March 2013 (UTC)
- The effect would only take 7 minutes to reach the light. One could leave an end free and change the voltage at the other end and the light would light up for a few minutes, then go off and might come on again for a little while. Basically you're talking about sending a signal down a long transmission line and it would reflect off the far end. In the case where both ends were connected it all depends on the impedance at the ends but it would be possible for the light to go off again for a moment I think. Dmcq (talk) 14:13, 22 March 2013 (UTC)
- So, back to the OQ: you can model the circuit reasonably well for these purposes as a waterwheel powered by a hose or similar. When you shut the water off, it's a while before the waterwheel loses power. If you turn the water back on, again it's a while before the waterwheel starts moving again. The energy put in in the beginning before the wheel starts turning, minus losses and all that nonideal stuff, is the equal of the energy the wheel works off of when the water is turned off. As said above, that's the energy stored in the inductance of the circuit (analogous to the momentum of the water) and the capacitance (analogous to the sum of the stored-up volume and pressure of water at all the points along the hose, if you see what I mean; independent of the momentum in the hose, you get some push out of the water draining from the hose, including the energy involved in inflating the elasticity of the hose by the water pressure that now pushes the water out). Gzuckier (talk) 14:36, 22 March 2013 (UTC)
- Unless I'm misreading the question, none of the answers above is correct. When you turn on a water tap it turns on immediately, not after some delay that depends on the distance between you and the nearest pump. Likewise, if a battery is already connected to an open circuit, there is already pressure in one half of the line and tension in the other, so the light will turn on immediately when you connect it. If you disconnect or reconnect the battery or light, the "signal" to the other half of the circuit travels both ways, so the delay is half the circuit length divided by the speed of electricity (so 7 minutes, not 15). I'm assuming that the battery is connected to both "ends" of the circuit. If only one end is connected, the light won't turn on at all. -- BenRG (talk) 17:56, 22 March 2013 (UTC)
- I am an electronics/control engineer who specializes in hydraulics and pneumatics. Your claim "When you turn on a water tap it turns on immediately, not after some delay that depends on the distance between you and the nearest pump." is factually incorrect. There is a delay, which may be calculated from the distance and the speed of sound in the working fluid. --Guy Macon (talk) 18:14, 22 March 2013 (UTC)
- I'm sorry, there will be a delay before the steady-state current starts. But some current will flow immediately. The wire in this problem should have a capacitance of around ε0ℓ ~ 1 farad, which could power a bulb for a while depending on the details. -- BenRG (talk) 19:21, 22 March 2013 (UTC)
- Right. To expand on the above, some current will flow from the source immediately as it charges that capacitance, After, say, 10 seconds that initial part of the wire will be fully charged, the part of the wire that is 20 light-seconds away will still be discharged, and the transition between the two states will be moving down the wire at the propagation speed. To make this more predictable, surround the wire with a grounded conductive cylinder - a coaxial cable. --Guy Macon (talk) 23:28, 22 March 2013 (UTC)
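- A quick numerical sketch of the figures quoted in this thread. The 2/3 c velocity factor and the ε0·ℓ capacitance estimate are simply the assumptions made in the replies above, so the outputs are order-of-magnitude only.

```python
# Rough numbers for the 10-light-minute circuit discussed above.
c = 299_792_458.0            # speed of light in vacuum, m/s
eps0 = 8.854e-12             # permittivity of free space, F/m

length_m = 10 * 60 * c       # a 10-light-minute wire, in metres
velocity_factor = 2 / 3      # assumed signal speed as a fraction of c

# One-way signal time from the bulb (mid-line) to the battery:
half_trip_min = (length_m / 2) / (velocity_factor * c) / 60
print(f"Bulb-to-battery signal time: {half_trip_min:.1f} minutes")   # 7.5

# BenRG's order-of-magnitude capacitance estimate, C ~ eps0 * length:
capacitance_F = eps0 * length_m
print(f"Rough wire capacitance: {capacitance_F:.1f} F")              # ~1.6
```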
- Isn't there a confusion here between the speed of electromagnetic radiation, and the speed of electrons? They are not the same thing. μηδείς (talk) 21:20, 22 March 2013 (UTC)
- The speed of electrons has very little relevance to this problem. What is the confusion you see? Dmcq (talk) 23:08, 22 March 2013 (UTC)
- Put a bit of dye at the head of a 100-foot garden hose full of water and open the faucet. The water starts coming out the end almost immediately (there is actually a tiny delay, since the pressure wave travels at roughly 1500 meters per second in water). That is the equivalent of the speed of electromagnetic radiation. After some number of seconds the water at the end will turn color. That is the equivalent of the speed of electrons. --Guy Macon (talk) 23:28, 22 March 2013 (UTC)
- I vaguely remember college physics stating it works like a Newtons_cradle, and that the speed of electrons was very slow, something like 1/3 m/s. 72.189.225.90 (talk) 00:15, 23 March 2013 (UTC)
- Given a copper wire 2 millimeters in diameter and a DC current of 1 Ampere, the time it takes for an electron to travel one meter is about 12 hours.[5] --Guy Macon (talk) 00:38, 23 March 2013 (UTC)
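- The figure above can be checked with the standard drift-velocity relation v = I/(n·e·A). The free-electron density used for copper is a textbook value, so treat this as an order-of-magnitude sketch rather than an exact result.

```python
import math

I = 1.0                      # current, A
d = 2e-3                     # wire diameter, m
n = 8.5e28                   # assumed free-electron density of copper, per m^3
e = 1.602e-19                # elementary charge, C

A = math.pi * (d / 2) ** 2   # cross-sectional area, m^2
v_drift = I / (n * e * A)    # drift velocity, m/s

print(f"Drift velocity: {v_drift:.2e} m/s")                   # ~2.3e-5 m/s
print(f"Hours to drift one metre: {1 / v_drift / 3600:.0f}")  # ~12
```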
- Yes, water flows from a quarter-mile of garden hose immediately you open the tap at the outlet end (I've tried it frequently). Conversely, it takes up to a minute after opening the source tap for the water to reach the outlet of an empty pipe (though this would be quicker at higher pressure). The analogy of dye at the water source and "theoretically marked" electrons at the battery is not valid because there is a vast store of electrons in the wire. Dbfirs 07:27, 23 March 2013 (UTC)
- I thought the analogy with the electrons was good, but it might be better to think of a gas going down the pipe to get a idea of how the pressure travels down it. Dmcq (talk) 10:07, 23 March 2013 (UTC)
- Has everyone considered that there is no such thing as a perfectly rigid body? Plasmic Physics (talk) 23:54, 23 March 2013 (UTC)
- That's certainly true for garden hose, and explains where the pressure is stored. Dbfirs 08:10, 24 March 2013 (UTC)
- Of course I am fully aware that a garden hose is made of rubber. The problem with using that as an argument for changes at the inlet of the long hose appearing faster at the outlet of the long hose than I predict is that a surface wave is much slower than a P-wave (so is an S-wave, but the rubber isn't rigid enough to propagate an S-wave). --Guy Macon (talk) 08:48, 24 March 2013 (UTC)
- Yes, we agree that the minimum delay is that defined by the speed of sound in water. Dbfirs 11:31, 24 March 2013 (UTC)
Recognizing Spider
What kind of spider (Tarantula?) is this? Picture taken in Costa Rica, near San Ramon
שנילי (talk) 15:39, 22 March 2013 (UTC)
- Looks like a variety of the venomous Brazilian wandering spider. Mikenorton (talk) 22:10, 22 March 2013 (UTC)
- Thanks, but mine looks less hairy and more black. The yellow pattern on its back is very distinguished. See also http://www.tripadvisor.com.au/LocationPhotoDirectLink-g608739-i24653177-San_Ramon_Province_of_Alajuela.html#24653177
שנילי (talk) 08:36, 23 March 2013 (UTC)
Many elements of its morphology make me suspect some variety of wolf spider, though I've never seen one quite that large before; I've seen specimens which have grown to have a leg-span a good 10-12mm across, but this one looks bigger still, though it's hard to say with any certainty from the frame of reference. Can you give a rough approximation of its size? Wolf spiders are present in virtually every habitable region on earth, and if I'd expect a gargantuan version to develop or persist in any modern ecosystem, Brazil's rainforests would certainly be at the top of the list. Note however that you may want to change the file name for your picture if you intend to leave it up for the project's uses; the spider is not simply by virtue of its size a "tarantula", as this is a term which refers to a specific selection of related species. Unless there is some form of local common-use vernacular in use here that I am unaware of. Edited to add: the article I linked above suggests that the upper threshold for wolf spiders is around 30 mm, and the more I study the specimen, the more convinced I am that it's in this family (Lycosidae).
Also, just out of curiosity, did there ever appear to be a point where the frog was considered potential dinner? Snow (talk) 06:16, 24 March 2013 (UTC)
- Unfortunately I was not smart enough to put a coin or other known-size artifact near it. However, comparing to the size of the frog, which I estimate, as far as I remember, was about 20-30 mm, I believe that the spider's leg span is like 30-50 mm. The frog took the opportunity and leaped during the shooting, so if it was on the menu the spider left hungry. שנילי (talk) 05:40, 25 March 2013 (UTC)
Yeah, honestly, I think our article is in error in defining the upper limit in sizes for the family at 30mm; the more I reflect on it, the more certain I am that I've seen specimens pushing this limit in some relatively arid environments. Wait until you come across one carrying scores of its young on its back. I tell you, I usually find chance encounters with all forms of life fascinating, but if a 2.5 inch long spider comes barreling at you with a writhing mass on its back (in other words, a gnarly spider that can drop gnarly spiders) and you don't back-pedal with a "Yeeeehh..." then you are seemingly made of sterner stuff than I! Snow (talk) 10:10, 25 March 2013 (UTC)
How does the first individual of a new species find a mate?
This is what always made me curious. How does the first individual of a newly evolved species find a mate? --PlanetEditor (talk) 17:08, 22 March 2013 (UTC)
- There really is no "first" individual of a new species. Populations evolve gradually. The first individual with a particular mutation will mate with another member of its population, and that mutation may or may not get passed to their offspring. It's kind of like imagining who the "first" speaker of French spoke French to - there is no singular moment in time when people in what is now France stopped speaking Latin and started speaking French; the Latin they spoke gradually changed over generations until it was no longer mutually intelligible with the language people were speaking in Italy, which had also gradually changed into what we now call Italian. thx1138 (talk) 17:13, 22 March 2013 (UTC)
- (Post WP:EC: similar answer to above, with relevant wikilinks) Well, sometimes they don't, and this also applies to the last few individuals of a species. See Allee effect. That being said, you are thinking about this very simplistically. There is seldom (if ever) any "first individual" of a "new species" (this is a general consequence of the species problem). Speciation is a gradual process (with respect to the generation time of the organism). You may be interested in allopatric speciation and sympatric speciation, which cover two of the more common routes to speciation. Finally, consider a hypothetical example. The first "chicken" was hatched from an egg (since there were egg-laying birds before there were chickens). Even if this bird alone has all the characteristics of "chicken", it could still likely interbreed with e.g. many of its cousins. Their offspring would consist of some "chickens" and some "non-chickens", but eventually there could be locally available chickens to form a growing population. SemanticMantis (talk) 17:21, 22 March 2013 (UTC)
- Thanks for the explanation. --PlanetEditor (talk) 18:13, 22 March 2013 (UTC)
- I liked the analogy with the problem of the first person to speak French. Dmcq (talk) 20:21, 22 March 2013 (UTC)
- Did the first person to speak French make fun of the accent of the second person to speak French? Edison (talk) 22:35, 22 March 2013 (UTC)
- Comment?!? μηδείς (talk) 01:22, 23 March 2013 (UTC)
- Lol. Take that, athiests! 78.150.234.51 (talk) 00:32, 24 March 2013 (UTC)
- What's an athiest and what are they taking?? Dauto (talk) 22:59, 24 March 2013 (UTC)
- Another name for a French person I'd guess from the above discussion and I suppose there must be some special way they're thinking of that French men and women can take advantage of being able to talk to each other. Though what it's all got to do with a new species finding a mate I don't know. ;-) Dmcq (talk) 00:36, 25 March 2013 (UTC)
- That's not it - an atheist is someone who does not believe in the existence of any god or deity. See atheism. Whoop whoop pull up Bitching Betty | Averted crashes 00:48, 25 March 2013 (UTC)
- So athiests are French atheists? What's that got to do with finding a mate though? ;-)Dmcq (talk) 01:23, 25 March 2013 (UTC)
- No one has yet considered ploidy. Plasmic Physics (talk) 00:44, 25 March 2013 (UTC)
Urination!
Which creates more spray, pissing to hit the water or pissing to hit the inside of the bowl? — Preceding unsigned comment added by 93.96.113.87 (talk) 20:18, 22 March 2013 (UTC)
- It all depends, but a hard surface in general. But is that really the right question? Here may be something to answer the implied question [6] Dmcq (talk) 20:26, 22 March 2013 (UTC)
- To minimize splash-back you want to strike a solid with a liquid stream at shallow angle. So, the side of the bowl will be best, as long as you can keep it inside the bowl. Some urinals have a ridge in them, which makes it easier to strike at a shallow angle, thus reducing splashing: [7]. StuRat (talk) 03:33, 23 March 2013 (UTC)
- British Victorian-era urinals sometimes had a little picture of a bee for gentlemen to aim at, on the part of the porcelain that would cause the minimum splashback. The reason a bee was used, was a joke for the educated; the Latin for bee is apis ("a piss" gettit?). Alansplodge (talk) 12:01, 23 March 2013 (UTC)
- Some urinals still have those. I've seen them on occasion. I never figured that this was the reason why - in fact, I never really figured too much about it at all (just that it was something stuck on there as a booze promo or something). --Kurt Shaped Box (talk) 00:33, 24 March 2013 (UTC)
skin
Is human skin color variation an example of micro-evolution? Pass a Method talk 21:47, 22 March 2013 (UTC)
- No. "Evolution" implies that something is changing across time. Variation at a given moment in time is not evolution, micro or otherwise. It may however provide the raw material for micro-evolution, if the range of variation changes across time. Looie496 (talk) 22:51, 22 March 2013 (UTC)
- I'm not so sure about that. Pale skin evolved in Europe and seems to have evolved as an adaptation to the environment. Sounds like microevolution to me. It's not the evolution of a new species or even subspecies, but it's still evolution. thx1138 (talk) 23:52, 22 March 2013 (UTC)
- Perhaps not an evolution so much as a survival of the fittest. Without a DNA sample of generations across history, we cannot know either way. Since we are all related within only a couple hundred generations, a micro-evolution seems very unlikely given the extreme time lengths it takes to produce a single positive evolved trait. 72.189.225.90 (talk) 00:21, 23 March 2013 (UTC)
- It seems to me that the microevolution/macroevolution distinction shows up mainly in arguments about intelligent design, and in that context you may as well define "microevolution" as the subset of evolutionary processes that ID proponents concede are real. I did a web search for "intelligent design skin color" and the first two hits were a page at ideacenter.org saying that skin color variations arose through microevolution and a page at intelligentdesigntheory.info saying that skin color variations are evidence that God made us (and implying that black people should go back to Africa since God put them there for a reason). Apparently there is disagreement over this issue within the intelligent design community. They should teach that controversy. -- BenRG (talk) 01:18, 23 March 2013 (UTC)
- Interesting the only links one sees here are to user pages. μηδείς (talk) 01:20, 23 March 2013 (UTC)
- I didn't want to give the intelligent design sites even more free publicity by linking to them from Wikipedia. I don't think WP's microevolution and macroevolution articles are very good, or clearly answer this question, or should be trusted to give a correct answer to this question, but there they are for what it's worth. -- BenRG (talk) 01:51, 23 March 2013 (UTC)
- A more well-formed question is "is human skin color an adaptive trait?" It might be, but it might be a polymorphism that is neutral, and maintained by genetic drift. See also background selection. Even if the question is formulated more precisely, experts still disagree. Perusing these links will give you some idea of the subtleties involved. If you want to read some of the scientific literature, you might check out "The evolution of human skin coloration" [8], or "Does the Melanin Pigment of Human Skin Have Adaptive Value?: An Essay in Human Ecology and the Evolution of Race" [9]. (You may need to get access through a library or search for alternate sources for the last two.) SemanticMantis (talk) 04:06, 23 March 2013 (UTC)
- I would be suspicious about whether there could be other factors at work here besides just evolution. The correlation of skin color and sunlight is usually (properly) attributed to natural selection, but I would wonder if there could also be heritable, epigenetic factors that enter into it, as might also be true with body weight. I found one abstract [10] pointing to regulation by methylation of POMC, but that doesn't prove it is heritable, which is less often pursued. Because hormones like MSH do circulate, and do respond to the environment, and can affect gene regulatory elements, it is not impossible that their targets could acquire heritable alterations... Wnt (talk) 04:09, 23 March 2013 (UTC)
- Actually, it's not all that proper to attribute skin color to natural selection; modern research has found no particularly strong correlation between skin tone and exposure to UV light or any other obvious environmental factor. Rather it seems that skin color is instead a result of that other Darwinian force everyone seems to forget about, sexual selection. Jared Diamond treats this subject at some length in The Third Chimpanzee, but if you or the OP prefer it, I'm sure I can scare together some peer-review on the matter too. Snow (talk) 06:49, 24 March 2013 (UTC)
- Well, PMID 23415504 PMID 23274340 PMID 22923467 PMID 16685728 PMID 10896812 seem to favor the impression I had. I don't deny the possibility of sexual selection, but ... why would it be different in Europe vs. Africa? Of course sexual selection is a kind of natural selection and a kind of evolution. (The preferential mutation of methylated cytosine may also be a kind of evolution, a sort of "Lamarckian" evolution to be precise, but not a kind of natural selection) Wnt (talk) 19:14, 24 March 2013 (UTC)
- As others have pointed out, it is indeed evolution. Asking whether it's "micro" or "macro" is like asking whether a 2 km journey is long or short. Obviously, it depends on what you call long or short. --140.180.249.152 (talk) 18:20, 23 March 2013 (UTC)
March 23
Resistance of wire
If a wire is drawn out to 3 times its original length, by how many times is its resistance expected to increase? Please explain in detail; the answer says it's 9 but I got 3. 115.253.134.61 (talk) 06:00, 23 March 2013 (UTC)
- Resistance is directly proportional to the wire's length AND inversely proportional to its cross-section. You can figure out the rest on your own -- we're here to explain difficult concepts, not to do your homework for you. 24.23.196.85 (talk) 06:43, 23 March 2013 (UTC)
- Agreed, where "cross section" = cross-sectional area. Also note that the volume of the wire stays constant. StuRat (talk) 06:55, 23 March 2013 (UTC)
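- A compact way to see where the factor of 9 comes from, assuming (as noted above) that the drawing keeps the wire's volume constant:

$$R = \rho\frac{L}{A}, \qquad V = LA = \text{const.}$$

$$L \to 3L \;\Rightarrow\; A \to \frac{A}{3} \;\Rightarrow\; R' = \rho\frac{3L}{A/3} = 9\,\rho\frac{L}{A} = 9R.$$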
- Do you understand that by "drawn out", they mean "evenly stretched"? -- 41.1.32.203 (talk) 09:52, 23 March 2013 (UTC)
I don't understand this question. First, how can you stretch a wire by this much without it breaking? Second, the volume doesn't stay constant, see Poisson's ratio. Count Iblis (talk) 15:06, 23 March 2013 (UTC)
- See Wire drawing, and Poisson's ratio applies only to elastic deformation. Tevildo (talk) 16:33, 23 March 2013 (UTC)
- You can easily stretch a wire to three times its length - that's how wire is made! Start with an ingot of a ductile metal like copper or gold. Spool it carefully through staged dies. Use heat and metal lubricant, and pull. Wire drawing is the main article. Very high gauge wire is essentially made by staging and drawing. Here is a nice video, on wire drawing machinery. Evidently the best resources in the light metalworking industry are no longer in English; apparently even physicists in the English-speaking world are no longer informed about basic material processing techniques! Nimur (talk) 16:43, 23 March 2013 (UTC)
- I see! I did know about this process to make wires, but it was not something I thought of when reading this question, I was just picturing someone trying to stretch a wire :) . Count Iblis (talk) 16:48, 23 March 2013 (UTC)
exploitation of natural resources on the moon
Let's pretend that the cost of transporting material from the moon to earth is negligible. OK, now that we are assuming that, are there any natural resources on the moon that we could use to benefit humankind?--There goes the internet (talk) 07:43, 23 March 2013 (UTC)
- Cheese. Ya know, bananas in pajamas? ☯ Bonkers The Clown \(^_^)/ Nonsensical Babble ☯ 08:07, 23 March 2013 (UTC)
- Every kg of the moon has a gravitational potential energy of 56 MJ, which is about 50% more than the combustion heat of a kg of gasoline. If the transportation process does not use or discard this energy, this represents a significant value. The potential energy of the moon could power our current civilization for 9 000 000 000 years, except that the sun will explode before that. The most obvious way to use this energy is to power a system of momentum exchange tethers to transport mass from earth to the moon or other space destinations. Gr8xoz (talk) 21:59, 23 March 2013 (UTC)
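- An order-of-magnitude check of the ~56 MJ/kg figure above, using standard constants. The exact value depends on how much of the Moon's own gravity and orbital motion you count, so a few MJ either way is expected.

```python
GM_earth = 3.986e14      # gravitational parameter of Earth, m^3/s^2
GM_moon = 4.904e12       # gravitational parameter of the Moon, m^3/s^2
R_earth = 6.371e6        # Earth radius, m
R_moon = 1.737e6         # Moon radius, m
d_moon = 3.844e8         # mean Earth-Moon distance, m

# Energy released lowering 1 kg from lunar distance to Earth's surface,
# minus the cost of first lifting it off the Moon:
e_gain = GM_earth * (1 / R_earth - 1 / d_moon)   # ~61.5 MJ
e_cost = GM_moon / R_moon                        # ~2.8 MJ
print(f"Net energy per kg: {(e_gain - e_cost) / 1e6:.0f} MJ")   # ~59 MJ
```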
how can ghosts and other magical creatures walk through walls but not fall through the floor
we have no references, please seek an internet forum
The following discussion has been closed. Please do not modify it.
If you can go through solid objects by walking into them, wouldn't gravity pull you straight into the ground and to the center of the earth? Or, alternatively, if gravity had no effect on you, wouldn't you just float away? Is there a scientifically coherent explanation why a theoretical ghost would walk on the ground and not fall *into* the ground, but it could still go through walls? I don't believe in ghosts but it always bugged me that they could walk through walls yet their feet seemed to hit the floor beneath them without going through.--There goes the internet (talk) 08:00, 23 March 2013 (UTC)
Sorry everyone, I think I got everyone fixated on the ghost/supernatural angle. My fault. Forget ghosts, and let's get back to physics because I don't know much about science. Could a type of physical entity theoretically pass through a wall? Like, let's say it could voluntarily reshape its matter to somehow fit between the spaces in the molecules of the wall and reform itself on the other side... but would that destroy the wall? Because, like, what if the very process of trying to go between the "spaces" in the wall molecules broke the molecular bonds and the wall like dissolved or something. And if such a being could do this, let's say the ability somehow became involuntary and it automatically passed through whatever it touched. Then I think it would go into the floor and fall into the center of the earth. I'm so confused.--There goes the internet (talk) 14:37, 23 March 2013 (UTC)
About animals' tolerance of g-force
Is there any research on this subject? Is it expected to be inversely proportional to body size?--Inspector (talk) 09:42, 23 March 2013 (UTC)
- How about "Tolerance of small animals to acceleration"? The abstract concludes "Body weight was inversely related to the threshold G-value at which animals [mice, rats, rabbits, finches, pigeons, and roosters] are resistant to the prolonged acceleration." The author of "Comparative Study on Tolerance to Centrifugal Acceleration in Several Animals" tested dogs, rabbits, guinea pigs, rats, hamsters and frogs. Hamsters did better than the other mammals, but the champs were the frogs. I'm still looking for the elephant study. Clarityfiend (talk) 10:31, 23 March 2013 (UTC)
- That study sounds kinda inhumane. I knew a boy who put a mouse in a model rocket and I think it died.--There goes the internet (talk) 14:38, 23 March 2013 (UTC)
- See Great Mambo Chicken and the Transhuman Condition: Science Slightly over the Edge. Wnt (talk) 15:04, 23 March 2013 (UTC)
- Yes. The first study talks about "50% mortality", and the second "100% lethal time". Clarityfiend (talk) 22:56, 23 March 2013 (UTC)
quadriplegic pregnancies and childbirth
Need information regarding first known childbirth of quadriplegic woman. Information regarding woman's name, year and place of child delivery. Anything prior to 1952 is most helpful. Thank you,Etofbaok (talk) 14:51, 23 March 2013 (UTC)
Do it yourself soil test
"A Simple Do It Yourself Soil Test -- Perform your soil test by placing a sample into two separate cups or containers. Add vinegar to one. If it fizzes your soil is alkaline. If not, add some water to the second cup and stir. Add baking soda. If it fizzes you have acidic soil. If neither have a reaction your soil is somewhat pH balanced."
I copied this advice from the internet. My hunch is that only soils that were highly acidic or highly alkaline would react to this test by bubbling. That for soils in a moderate pH range the test is pretty much useless.
Am I correct or is this test more sensitive than I suspect?
Thanks, CBHA (talk) 18:41, 23 March 2013 (UTC)
- The first one is not technically correct. Alkali does not fizz in vinegar. Carbonates fizz in vinegar; while many alkalis which are in soil may be carbonates (limestone or chalk for example) and that may cause fizzing, there are many more alkali compounds which are not carbonates (various hydroxides and oxides for example) which will not fizz in an acid like vinegar. So, fizzing is technically not a pH test, it is a carbonate test (most carbonates will be somewhat alkaline). You could have a strongly alkaline soil which does not contain a lot of carbonates, which would then fizz in neither test. The baking soda (sodium hydrogen carbonate) test should work well to test for low pH (acid) soil. --Jayron32 19:00, 23 March 2013 (UTC)
- But, I agree, this kind of test wouldn't be very accurate. Just get some litmus paper, wet the soil with water, and dip it in. StuRat (talk) 21:41, 23 March 2013 (UTC)
- The water should be deionized (distilled or R.O.)--Digrpat (talk) 22:12, 23 March 2013 (UTC)
Which fishing method results in a larger catch?
- You would use them for different fish species that live in different areas of the sea. Rmhermen (talk) 00:27, 24 March 2013 (UTC)
Coke in space
The Coca-Cola article says A Coca-Cola fountain dispenser (officially a Fluids Generic Bioprocessing Apparatus-2 or FGBA-2) was developed for use on the Space Shuttle as a test bed to determine if carbonated beverages can be produced from separately stored carbon dioxide, water and flavored syrups and determine if the resulting fluids can be made available for consumption without bubble nucleation and resulting foam formation. What was the result of the test? RNealK (talk) 22:28, 23 March 2013 (UTC)
- Experiment was rated as a failure. http://www.spaceline.org/shuttlechron/shuttle-sts77.html CBHA (talk) 22:46, 23 March 2013 (UTC)
- Right. NASA's mission summary states: "FGBA-2, a Coca-Cola soft-drink dispenser, required troubleshooting during the flight, and the SEF experiment was declared failed when command problems in the payload could not be repaired". Looie496 (talk) 22:51, 23 March 2013 (UTC)
- That was the third attempt. Apparently the second was fairly successful. The first employed a can of pop (where the carbonation separated out). The second (FGBA-1) dispensed prepared pop. FGBA-2 was like an Earthly fountain pop machine - with separate CO2, syrup and water - but resulted in two much foaming. Rmhermen (talk) 00:25, 24 March 2013 (UTC)
- If that was the two much foaming, what was the one much foaming, and will there be a three much foaming also? Plasmic Physics (talk) 00:52, 24 March 2013 (UTC)
- Oh boy! Typo mocking! I was just thinking how much I miss FidoNet... :) --Guy Macon (talk) 01:13, 24 March 2013 (UTC)
- There's a difference between a typo and the coincidental combination of bad grammar and homophones. Plasmic Physics (talk) 02:48, 24 March 2013 (UTC)
- Zoiks! - That probably means that beer-in-space also won't work! ~E:74.60.29.141 (talk) 01:58, 24 March 2013 (UTC)
- No, see the followup discussion, it should be doable with the right machine. Nil Einne (talk) 05:28, 24 March 2013 (UTC)
- [Warning: detour ahead] — I hope you're right. Actually, brewing beer in space would be an interesting experiment (any volunteers?) — For one thing, beers fall into two categories: top-fermenting (ales, etc.) and bottom-fermenting (lagers, etc.). In "zero-G" [a misnomer] there is no up or down, thus no "top" or "bottom". Also I wonder how the natural carbonation process would be affected. ~Cheers, ~E 74.60.29.141 (talk) 07:27, 24 March 2013 (UTC)
- Off the top of my having-brewed-occasionally head (there likely being few references available on the subject): probably not much. Although top and bottom-brewing yeasts tend to float or sink somewhat (especially when spent), during fermentation they are both spread through the whole of the wort (liquid), so they'd both probably perform fairly well in zero-g. Similarly, they'd still excrete CO2 which would still dissolve into the ale/lager, with larger gas bubbles coalescing – but not rising or falling – if and when the CO2 exceeded the soluble capacity of the liquid (which would depend on its temperature and pressure). However, you'd have a major problem venting the excess CO2 gas (whose volume can be considerable) from a (necessarily) pressurised liquid in zero-g. Perhaps it could be used for propulsion - Poul Anderson wrote the (not entirely serious) novella A Spaceship Built for Brew in which an emergency beer-powered spacecraft was employed. {The poster formerly known as 87.81.230.195} 212.95.237.92 (talk) 14:26, 25 March 2013 (UTC)
- Interesting, that means that two new categories would have to be introduced: outer-fermenting, and inner-fermenting. Plasmic Physics (talk) 09:47, 24 March 2013 (UTC)
Easier to track horizontal lines than vertical lines?
I just noticed while making some measurements with a steel rule that I can focus on the lines better when they are horizontal than when they are vertical. When they are vertical, I can't keep track of one line long enough to make a mark at, say, 32 mm, whereas if I change position so that the lines appear horizontal then it becomes much easier. Similarly, I have trouble counting consecutive zeros beyond, say, four zeros - same for other combinations of numbers. Is this a common feature for humans? --78.150.234.51 (talk) 22:44, 23 March 2013 (UTC)
- Subitizing is the technical term for "intuitive" counting - the number of things that can be subitized depends on the individual. Different focussing of the vision between horizontal and vertical is astigmatism - you should see an optician if you're worried about your eyesight. Tevildo (talk) 22:49, 23 March 2013 (UTC)
- And note that animals can do this type of counting, too, but not the type where you use numbers to represent things. StuRat (talk) 22:52, 23 March 2013 (UTC)
- Hey, thanks but I'm not really talking about counting the lines - my issue is tracking a specific line and thereby knowing and distinguishing it from the lines around it. I do have an updated glasses prescription and ordering new glasses is high on my to-do list, but things like counting zeros or reading mobile phone numbers that are presented as a continuous string etc. is something I've noticed for a long time. If you're confronted with about 100 zeros, can you easily count them without placing an object on them? Or do you just get lost in them? 78.150.234.51 (talk) 23:45, 23 March 2013 (UTC)
- I personally couldn't do more than about five or six without marking them off. Other people are more skillful at such tasks, of course. Tevildo (talk) 00:16, 24 March 2013 (UTC)
- Predators have eyes designed to track vertically (for chasing prey): [12], while herbivores have eyes designed for tracking horizontally (to watch for predators): [13]. Human eyes are supposed to be equally good at both. StuRat (talk) 22:58, 23 March 2013 (UTC)
- That's interesting. But the lines and numbers etc aren't moving... I'm trying to track a stationary article in among identical articles, also stationary. It can't just be me. 78.150.234.51 (talk) 23:45, 23 March 2013 (UTC)
- I believe they're related. That is, the area of optimal vision is spread out vertically for the predators and horizontally for the herbivores, while ours is closer to circular. StuRat (talk) 01:15, 24 March 2013 (UTC)
- How many hours a day do you practice tracking a horizontal line without jumping to an adjacent line? (Hint: you are doing it right now) It could be that someone who usually reads vertical text develops the opposite skill. How is your performance when going right to left? The same?
- Then again, it might be something as simple as astigmatism. --Guy Macon (talk) 01:10, 24 March 2013 (UTC)
- I would tend to go with StuRat. In the early days of television, it was found that horizontal scanning gave a more acceptable picture than vertical scanning - even though vertical scanning made more sense from the electronic point of view. Both systems contained the same amount of information, but our eyes and visual cortex are more receptive to the information in horizontal form. Our Abducens nerve controls this side-to-side movement. In some alcoholics a loss of this fine control is most noticeable. So, smooth up-and-down tracking ability is not so evolutionarily important as side to side. Therefore, poor horizontal tracking - in the case of the alcoholic - is easier to see. Likewise, our eyes have evolved (well, mine at least) to be side by side - not one over the top of the other. Action before us takes place either side to side (Abducens nerve) or near to far (convergence and focus). --Aspro (talk) 15:50, 25 March 2013 (UTC)
March 24
How oblong could a planet be? Could it be shaped like a tic-tac?
I have noticed that all planets and moons are round. But they are not perfectly spherical, and I believe some moons are known for being more oblong than others. What is the MOST oblong a planet could theoretically be? What makes a planet oblong?--There goes the internet (talk) 10:52, 24 March 2013 (UTC)
- I'm not sure about theoretical limits, but you may be interested in the decidedly tic-tac-ish Haumea.
Fgf10 (talk) 11:00, 24 March 2013 (UTC)
- There's no place like Haumea. There's no place like Haumea. [tap ruby slippers] Clarityfiend (talk) 11:04, 24 March 2013 (UTC)
- I think Carl Sagan mentions on Cosmos that when a planet is above a certain mass then gravity pulls it into a spherical type shape. Smaller bodies like asteroids can be much less spherical. Ap-uk (talk) 11:38, 24 March 2013 (UTC)
- Indeed, see Hydrostatic equilibrium, which I should have linked to in the first place. (I blame sleepy hungover posting.....) Although interestingly our List of Solar System objects in hydrostatic equilibrium says Haumea is actually in hydrostatic equilibrium. Fgf10 (talk) 11:40, 24 March 2013 (UTC)
- Yes, hydrostatic equilibrium is the right link. However, there is a twist (or a rotation ;-). Rotating bodies in hydrostatic equilibrium will balance gravity and centrifugal force, and so form an oblate spheroid. It can look a bit like a tic-tac from one side, but it's actually more shaped like an M&M. A tic-tac is closer to a prolate spheroid. The most oblate of the 8 currently recognised solar planets is Saturn, where the difference between polar and equatorial diameter is around 10%. --Stephan Schulz (talk) 12:06, 24 March 2013 (UTC)
- Consider the ellipse formed by the intersection of the surface of a planet (which by definition must be in hydrostatic equilibrium) in the shape of an oblate spheroid, and a plane containing the polar axis. For the planet to be stable in the presence of weak dissipation, the eccentricity of that ellipse can be at most 0.81267. However, it's also possible for a planet to be in the shape of a tri-axial ellipsoid, like Haumea is. In that case, the planet can be stable in the presence of weak dissipation when the eccentricity is up to 0.93858.[14] That eccentricity corresponds to the planet's longest axis being 2.898 times longer than the planet's shortest axis. That's the theoretical limit, which assumes among other things that the planet has uniform density. Real planets aren't going to fit the assumptions exactly. Red Act (talk) 17:47, 24 March 2013 (UTC)
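- For readers who want to see how the quoted eccentricities translate into axis ratios, this is a small sketch using the standard ellipse relation b/a = √(1 − e²); the labels simply follow the oblate and triaxial cases described in the reply above.

```python
import math

def axis_ratio(e):
    """Longest axis divided by shortest axis for eccentricity e."""
    return 1 / math.sqrt(1 - e ** 2)

print(f"Oblate-spheroid limit, e = 0.81267:    {axis_ratio(0.81267):.3f}")  # ~1.72
print(f"Triaxial-ellipsoid limit, e = 0.93858: {axis_ratio(0.93858):.3f}")  # ~2.90
```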
- Could you explain what "weak dissipation" means in this context? Also, I wonder what the maximum possible difference in gravitational potential energy can be (or more or less synonymously, I think, the maximum possible difference in air pressure assuming the body were granted an average standard atmosphere of pressure) Wnt (talk) 19:55, 24 March 2013 (UTC)
- If I'm understanding the author of the article I cited correctly, in this context "weak dissipation" means that the planet's mechanical energy can be slowly decreased (turned into heat) due to viscosity, even though the planet is otherwise treated as consisting of an inviscid fluid. Red Act (talk) 20:37, 24 March 2013 (UTC)
Scientific
Is there a scientific reason as to why humans are offended by genitals whereas other mammals are not? For example non-vulgar usernames with genital-related connotations regularly get blocked. Also some of the most vulgar words in English are genital-related. I'd like to know the science behind this. Pass a Method talk 14:10, 24 March 2013 (UTC)
- First you need to prove your premise. I don't know of anyone who is "offended by genitals" as such. Types of clothing worn and use of language are matters of culture and climate, not science as such. Suggest you start with Anthropology and branch out from there. ←Baseball Bugs What's up, Doc? carrots→ 15:39, 24 March 2013 (UTC)
- Why are words such as c*nt, d*ckhead and f*ck considered so offensive? They are all genital-related. There are no such vulgar words related to the hands or ears. Pass a Method talk 16:51, 24 March 2013 (UTC)
- The other part of your premise that's false is the notion of animals being "offended", or not "offended". Taking offense to something is strictly a human social construct. ←Baseball Bugs What's up, Doc? carrots→ 21:24, 24 March 2013 (UTC)
- What of wikt:clumsy fingers and wikt:cloth ears? As is so eloquently demonstrated by Edmond Rostand, only the weak of wit must rely on such uncreative anatomical vulgarity, when the nose is fodder for insult!
"Ah no! young blade! That was a trifle short!
You might have said at least a hundred things by varying the tone... like this, suppose,...
--Such, my dear sir, is what you might have said, had you of wit or letters the least jot: But, O most lamentable man! - of wit you never had an atom, and of letters, you have three letters only! - they spell Ass!"
— Cyrano
- As this is a cultural phenomenon rather than a scientific phenomenon, you may get a more accurate response on the Language or Humanities desk. --TammyMoet (talk) 17:24, 24 March 2013 (UTC)
You may find this paper to be of interest. --Guy Macon (talk) 17:40, 24 March 2013 (UTC)
See also here, and watch the documentary in four parts:
Count Iblis (talk) 18:26, 24 March 2013 (UTC)
Visual acuity
Is visual acuity an accurate test when measured by opticians? Surely the results would vary depending on lighting conditions in the test room, the brightness of the Snellen chart, etc. I'm sure eyesight can also fluctuate depending on the time of day, etc. Clover345 (talk) 17:38, 24 March 2013 (UTC)
- When I've had those tests done they account for those factors. I am given the test in a windowless room with the lights out, so the only light is that which projects the letters onto the screen. This light is presumably standardized. StuRat (talk) 21:57, 24 March 2013 (UTC)
- It's accurate, but it's not precise - see Accuracy and precision. It's a good test of your overall ability to see things, but it can't be used on its own to determine what sort of glasses you need. Incidentally, the past few tests I've had have used a monitor to display the letters, rather than a traditional paper chart. Tevildo (talk) 22:07, 24 March 2013 (UTC)
Voyager
When will Voyager reach the Oort cloud? When it does, how will it avoid getting hit by the rocks there? 64.134.165.238 (talk) 20:49, 24 March 2013 (UTC)
- See Voyager 1 and Oort cloud. Voyager 1 is travelling at approximately 3.5 AU per year and is currently about 100 AU from the sun, and the Oort cloud starts at approximately 2000 AU from the sun. It's therefore going to get there in about 550 years. There isn't really much out there for it to crash into, so "by not being incredibly unlucky" is the answer to the second part of your question. Tevildo (talk) 21:09, 24 March 2013 (UTC)
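- For the record, the arithmetic behind that estimate, using the figures just quoted, is simply
\[
t \approx \frac{2000\,\mathrm{AU} - 100\,\mathrm{AU}}{3.5\,\mathrm{AU/yr}} \approx 543\ \text{years},
\]
i.e. roughly the 550 years stated.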
- To clarify, there are many objects in the Oort Cloud, but it encloses such a huge volume that the density is extremely low. Being an approximate sphere moves most of the Oort Cloud objects out of the plane of the ecliptic, where most objects in the solar system can be found (and presumably any spaceships flying between those objects). StuRat (talk) 21:59, 24 March 2013 (UTC)
- Also see: http://xkcd.com/1189/ --Guy Macon (talk) 22:02, 24 March 2013 (UTC)
Suckermouth catfish
How do suckermouth catfish take water in to pass over their gills if they use their mouth to form a suction against the aquarium wall? Is water able to be sucked in through the operculum? DRosenbach (Talk | Contribs) 21:19, 24 March 2013 (UTC)
- Quick search yielded [15]: " Whether the buccal pump system is able to maintain a negative pressure in the oral cavity has since long been a matter of debate. HORA (1930) believed that the lips could not function as a sucker while respiration continued, since the inflowing water would cause the system to fail. ALEXANDER (1965) demonstrated that respiration and suction can function simultaneously, and that both actions continue when the fish is pulled away from the substrate (a vertical aquarium glass). Our results indicate that inflowing water was limited to a thin stream passing under the sucker immediately posterior to each maxillary barbel, a phenomenon also observed by VANDEWALLE et al. (1986) in Hypostomus punctatus." Wnt (talk) 21:22, 24 March 2013 (UTC)
Why do we get toothache?
We can go to the dentist if we have toothache, but animals living in the wild can't. Natural selection must have led to the ability to experience toothache, so what is the advantage of having toothache if you can't do anything about it? Count Iblis (talk) 22:21, 24 March 2013 (UTC)
[Collapsed discussion: trolling by indef'd user. The following discussion has been closed. Please do not modify it.]
- Perhaps to avoid using that tooth until it falls out, to prevent the infection from spreading ? StuRat (talk) 23:01, 24 March 2013 (UTC)
- The bodies of plants and animals are subject to many different forms of deterioration. For example, we humans are vulnerable to arthritis, dementia and toothache. Rather than saying natural selection has disposed us to these forms of deterioration, I take the view that natural selection has not yet developed a means of avoiding them. Natural selection is very powerful at fine-tuning plants and animals to survive and reproduce. Apparently these different forms of deterioration do not have a significant impact on our ability to survive and reproduce. Consequently natural selection is slow at developing a means of avoiding them. Dolphin (t) 23:22, 24 March 2013 (UTC)
- The etiological factors relating to tooth pain can be many. Decay from (largely bacterial) acid accumulation causes tooth structure to demineralize, exposing nerve endings in the dentin to insults such as thermal stimuli, high/low pH, sugar, etc. Furthermore, gingival pain can be triggered by periodontal inflammation (whether only gingivitis or even periodontitis). The body deteriorates, and evolution cannot be expected to do away with that. Animals do suffer from both demineralization and gingival disease. If decay extends into the hollow area within a tooth and directly affects the neurovascular tissue, root canal therapy or tooth extraction is required to alleviate the problem. Removing plaque/calculus accumulation is required to alleviate gingival inflammation. That being said, certain animals have diastemata to allow for the free removal of food particles from between teeth. DRosenbach (Talk | Contribs) 23:51, 24 March 2013 (UTC)
- My wild guess would be that it's the other way around. Evolution brought us pain because it clearly has major advantages to be warned not to use your broken leg. Setting up a system to exempt toothache might have been too costly compared to the advantage (being more productive). Or maybe evolution was just about to invent the feature, but dentists entered the scene just before that. Or maybe in most cases a toothache prevents an animal from using up energy that's needed to heal the tooth. Joepnl (talk) 00:16, 25 March 2013 (UTC)
- The nerves that supply pain signals with decay/toothache have other purposes. Modern man has a high incidence of decay due to eating modern foods and a long life span, but for early man & animals, the most common reason for tooth pain is/was accidents and fights. Commonly, the tooth is loosened but not dislodged. If it is dislodged, most likely the adjacent teeth are only loosened. If the person or animal keeps stress off the tooth as a result of the pain, it will re-knit into the socket and become sound. Also, if a tooth is lost, it is best not to crunch food into the hole before it heals up. The nerves also supply nutrients in some way to the tooth. I've had root canal treatment, which involves removing the nerve. The result after a few weeks is a dark grey tooth. I've asked dentists why this happens, and they consistently say they don't really know, but apparently the nerve is required, as well as the blood vessels, to keep the tooth alive. Wickwack 58.164.229.78 (talk) 00:44, 25 March 2013 (UTC)
- Agreed, but there is no way that the nerve can actually supply nutrients, although it may stimulate the tissue. μηδείς (talk) 02:32, 25 March 2013 (UTC)
- Are you sure? While we tend to think of nerves as only signalling devices, and assume all nutrients come via the blood capillaries, evolution has provided many examples of pressing anatomical structures into service for additional functions: e.g., brown fat next to the heart supplying nutrients to the heart, conversion of precursors into estrogen by fat, supply of specialised nutrients and signalling chemicals (e.g., dopamine), required by the processing cells in the brain, by specialised deep brain cells. Googling "function of nerves in teeth" throws up a lot of professional sites that say things like "the main role of the (pulp) nerves occurs during tooth growth", and "the purpose of the nerve is to make the tooth" as in this site: www.ahendo.com/the-nerve-story. Frustratingly, I could not find a website that actually sets out what the tooth-making role actually is (consistent perhaps with what dentists have told me - they typically say the nerve is critical to a tooth being live, but just how they don't know). Wickwack 120.145.50.162 (talk) 03:49, 25 March 2013 (UTC)
- I think they are just misusing "nerve" to mean "the live part of the tooth". StuRat (talk) 03:54, 25 March 2013 (UTC)
- Yea, I think the dentist was being a bit sloppy with his language. Most likely he meant that the nerve is intertwined with the blood vessels, so it's impossible to remove the one without damaging the other. StuRat (talk) 03:41, 25 March 2013 (UTC)
- You could well be right. I once tried to clarify this very point with one dentist, but his response to that was along the lines of "well, right now we need to get on with things, so I can see other patients on time. That tooth (indicating the one that was root canalled some time before) is still sound." Wickwack 120.145.50.162 (talk) 03:56, 25 March 2013 (UTC)
- Hopefully DRosenbach, our resident dentist, will return to this thread and clarify this matter for us. StuRat (talk) 03:59, 25 March 2013 (UTC)
- it's interesting that animals react differently than humans to such pain. having had some experience with dogs who have broken their canine teeth, they don't seem to be all that bothered by what would make a human miserable. very often, it seems that animals can be incredibly stoic with regard to pain, but other times they appear just as sensitive as people. of course, i'd imagine it all depends on the evolutionary niche, how other individuals of the species respond to signs of pain, etc. Gzuckier (talk) 02:56, 25 March 2013 (UTC)
- That could be related to whether there is any point in reacting. Humans also show markedly varying responses to pain. Small children cry and scream at the most minor scrape, because they know it gains them a hug and tender words from Mum. I saw recently a 5 year old fall off her bike and take some skin off. She looked about, I assume to see if Mum or Dad was around, then got back on her bike and carried on. I have a faulty hip joint. When it first hurt, about 15 years ago, I thought it was terrible, and rushed off to the doctor. His response after viewing a CAT-scan was essentially "You'll just have to put up with it. Eventually we'll give you an artificial joint, but not now, as artificial joints have a limited life and have significant limitations." Well, it still hurts. But I just put up with it - pretty much ignore the pain. I've got used to it and it now doesn't seem anywhere near as bad as I first thought. When a dog breaks a tooth, he is unaware that humans can stop the pain. No point worrying about things you can't change. Wickwack 120.145.50.162 (talk) 04:15, 25 March 2013 (UTC)
Tree identification
Can anyone help with this? It was taken today in southern California. DRosenbach (Talk | Contribs) 23:46, 24 March 2013 (UTC)
- I don't know if anyone here will be able to answer it, but if they can't, you might try taking it to Wikipedia:WikiProject Plants or another WikiProject listed at Wikipedia:WikiProject Tree of Life#Scope and descendant projects. Ryan Vesey 00:00, 25 March 2013 (UTC)
- Hard to tell from this picture. Could be a dogwood. --Jayron32 00:18, 25 March 2013 (UTC)
- The flowers look a little complex to be a dogwood, but it's very difficult to make out anything. μηδείς (talk) 01:23, 25 March 2013 (UTC)
- Oval leaves, smooth bark, slight curve and criss-crossing of some secondary branches, grouping of flowers, I'm thinking prunus genus, in which there are unfortunately many species, but hopefully it's a start. Richard Avery (talk) 08:09, 25 March 2013 (UTC)
- I was thinking prunus (plum, almond, etc.) as well, largely because there are lots of them blooming at this time of year in various parts of California. Looie496 (talk) 20:06, 25 March 2013 (UTC)
- These links may be helpful.
- Ask an Arborist here - Tree World
- Ask an arborist
- ask-an-arborist.com
- Tree Care Answers from Ask An Arborist
- Untitled Document ("Do you have a tree care question? Ask the arborist!")
- Ask Arborist
- —Wavelength (talk) 20:21, 25 March 2013 (UTC)
March 25
Electron deficient nomenclature
What are the IUPAC nomenclature rules for naming group 13 hydrides that distinguish bonding motifs for structural isomers? For instance, there are two isomers of closo-diborane(4). Both have a boron–boron bond; however, the ground-state isomer has two hydrogen bridges. Plasmic Physics (talk) 00:55, 25 March 2013 (UTC)
Interstellar probe, part 2
Most discussions about interstellar travel start out with something like "at Voyager's current speed, it would take 73,000 years to reach the nearest star". So what's wrong with taking 73,000 years, or even a million? What can possibly break a spacecraft in 73,000 years, since there's no wind, rain, bacteria, or vandals, and very few micrometeoroids compared to inside the Solar System? --140.180.249.152 (talk) 05:21, 25 March 2013 (UTC)
- What's wrong with it is that we will either be extinct or will have picked it up and put it in a museum long before it reaches anything. In either case, it won't help us make contact with aliens. StuRat (talk) 05:51, 25 March 2013 (UTC)
- We most recently discussed the topic of the longevity and eventual fate of Voyager's material components in February 2011, and there are links to earlier discussions. Nimur (talk) 05:53, 25 March 2013 (UTC)
- StuRat: You're right, but I was really asking whether human technology could stay functional after that long in space.
- Nimur: so according to that discussion, this is actually a feasible way of reaching another star? Assuming the spacecraft stays dormant until it gets near enough to power its solar panels, that discussion seems to say nothing bad will happen to it. --140.180.249.152 (talk) 06:23, 25 March 2013 (UTC)
- It is extremely, extremely unlikely that anything involving electronics or mechanisms made by man would still function after that length of time. On a time scale of thousands of years, semiconductor alloys are unstable, and most metal alloys are unstable. Semiconductor alloys of course include solar panels. Man has considerable expertise making high-reliability electronics for military, space, and telecommunications applications. Even so, not all failure mechanisms are understood, and such components have been found to have rapidly increasing failure rates (both catastrophic failures and drift outside specifications) after only 40 to 60 years. Really, the only way to build something that will still be functional after 73,000 years is to build it with the very best available practice, test it until it fails, figure out why, and build another corrected for the discovered failure mode. Repeat as necessary until you've tested a batch continuously for 73,000 years.
- Electronic and mechanical systems suffer from random failure in addition to identifiable failure mechanisms, such as Coffin-Manson fatigue failures. The only way to address this is to build in self-repairing or redundant systems. For a service life of 73,000 years, the simplest functionality is likely to need so much redundancy and self-correction that it will be physically enormous in volume and mass (a rough illustration of how demanding that target is appears below).
- And then it could, and probably would, still fail due to crashing into something unexpected, or being damaged by interstellar dust.
- Wickwack 120.145.136.89 (talk) 07:12, 25 March 2013 (UTC)
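- As a rough illustration of how demanding that target is (a back-of-envelope sketch assuming a constant failure rate, i.e. an exponential lifetime model; the 90% survival figure is just an illustrative choice):

```python
# Back-of-envelope: MTBF required for a single (non-redundant) string of hardware
# to have a 90% chance of still working after a 73,000-year cruise, assuming a
# constant failure rate so that R(t) = exp(-t / MTBF).
import math

mission_years = 73_000
p_survive = 0.90
mtbf_needed = mission_years / -math.log(p_survive)
print(f"required MTBF ≈ {mtbf_needed:,.0f} years")   # ≈ 693,000 years
```

In other words, without redundancy the hardware would need a demonstrated mean time between failures of the better part of a million years.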
- Once it gets past the point where the solar wind is significant, it will have a problem. It will still be outgassing, but the gasses won't be blown away - they will form a very thin atmosphere around the spacecraft, held there by gravity. When materials outgas, atomic oxygen is a commonly found gas, and it is very corrosive. It will be very thin, but it will also have a very long time to work on the spacecraft. --Guy Macon (talk) 09:35, 25 March 2013 (UTC)
- All sorts of things can happen to materials when they are a mixture, touch each other, or are small - all of which apply to semiconductors - over such long periods, but it would mostly be at about 4 K, so that should slow things down considerably. An interesting problem would be having a computer working at that temperature for that length of time and waking up properly when it got within a few thousand million miles of the destination. Dmcq (talk) 14:29, 25 March 2013 (UTC)
- That's an interesting question, what temperature would it really reach? After all the interstellar gas can have quite a high temperature. Dmcq (talk) 14:43, 25 March 2013 (UTC)
- Designing for a minimum temperature of 4 K would certainly rule out conventional solid-state electronics. The minimum permitted storage temperature for military spec parts is generally -55 C (218 K) [Ref: Fairchild 54/74 databooks, National Semi databooks, etc.]. Most commercially used alloys have phase changes (more than one solid phase) at low kelvin temperatures, and this aspect and its consequences are poorly understood, as there is virtually no commercial incentive to research it. Most likely a designer would go for some sort of long-life nuclear isotope power source to supply both electrical power and heat, but that is another field that does not have sufficient practical knowledge. Wickwack 120.145.9.126 (talk) 14:58, 25 March 2013 (UTC)
- All deep space probes need radioisotope heater units to keep the electronics warm. Their long half-life means they will keep the spacecraft warm for many more decades - long after the radioisotope thermoelectric generator fails to deliver sufficient power to keep the spacecraft alive. Also, gravity is a weak force; therefore, the atomic recoil from any oxygen out-gassing would send these gases far away from the spacecraft – no solar wind required. The whole assemblage would need to be no more than a gnat's whisker from absolute zero for these atoms not to achieve escape velocity from a few hundredweight of aluminium and plutonium. If any of you want to contribute to some artificial contrivance that may still be working in 10,000 years' time... then I refer you to: [16] But I'm putting my money on a simple egg timer. However, if the Eloi have given up eating eggs by then, even that won't be any use to them. (Stonehenge still works, though, but it would be difficult to launch into space.) --Aspro (talk) 23:53, 25 March 2013 (UTC)
- Warm for many decades, eh? The OP wanted 7,300 decades or better. Regarding size, you forgot random failure and the need for redundancy and/or self-repair - see above. Egg timers need consistent gravity. If some nutters want to bury a clock programmed never to repeat its chimes for 10,000 years, well, good luck to them - it sounds like fun. But all human endeavour involves human error and oversight. It won't be until the 10,000 years are up that we find out what their human errors were. My guess is <<< 10,000 years before it fails. This is why the onboard computers of US-sourced military airplanes continued to be manufactured for decades with magnetic core memory (technology of the 1950s and 1960s) and not the various far more compact semiconductor memories used commercially since the 1970s. They have lots of experience with magnetic core memories and can trust them. Nobody has decades of reliability engineering experience with semiconductor memories. Wickwack 58.170.130.186 (talk) 01:18, 26 March 2013 (UTC)
- Simply substitute plutonium-239 for the usual plutonium-238. That has a half-life of 24,100 years and would have the warmth of freshly drawn cow's milk. Only 8 kilograms would still leave one with about a kilogram over this time period. Magnetic core memories were used in military applications simply because they had high resistance to electromagnetic pulse. So, providing the probe doesn't stray into a nuclear war between the Klingons et al., that should not be an insurmountable problem. An egg timer could be housed in a centrifuge, but I was not suggesting this for a deep space time-piece - just pointing out that such a silicon–sodium–aluminium–boron oxide device would last a very, very long time. Transistor radios came out in the late fifties; your own ears can bear witness that some still work today. So we already have decades of experience of semiconductors. Once the dopants are locked into the substrate there is little reason (so far) to suggest that one day they might suddenly decide to pick up their bags and emigrate somewhere else. If science were left to the pseudo-skeptics, Homo sapiens would still be at the stage of trying to rub two boy scouts together to create fire. My money, however, is still on the egg timer. --Aspro (talk) 17:22, 26 March 2013 (UTC)
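- For what it's worth, the decay arithmetic on those figures comes out close to a kilogram: 73,000 years is about three half-lives of plutonium-239, so
\[
8\,\mathrm{kg}\times\left(\tfrac{1}{2}\right)^{73000/24100} \approx 8\,\mathrm{kg}\times\left(\tfrac{1}{2}\right)^{3.03} \approx 1\,\mathrm{kg}.
\]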
- Why do the electronics need to stay warm? Absolutely nothing needs to be operational during the interstellar cruise phase. When the probe arrives at the target star, its solar panels can wake up the electronics simply by providing power. Also, what's the timescale for outgassing--will it still be outgassing decades after launch? --140.180.254.209 (talk) 00:11, 26 March 2013 (UTC)
- I told you above. Electronic parts are rated for a minimum permitted storage temperature (-55 C for military spec parts, various higher temperatures for commercial parts). If the parts are cooled below the allowed minimum temperature, they will be damaged (due to differential contraction of internal structures and many other mechanisms). When they are powered up, the minimum allowed temperature is higher. While it should be possible to design special parts to go lower than -55 C, designing for as low as the 4 K of deep space is just not on - a whole new technology would have to be invented. The problem is especially acute for solar panels, due to their large area, forgetting for the moment that at least one panel will need to be exposed for the whole trip, to be blasted with 73,000 years' worth of interstellar dust and micrometeorites. Wickwack 58.170.130.186 (talk) 01:04, 26 March 2013 (UTC)
- I'd be principally concerned about being able to get there accurately enough for the star's light to power it up. Voyager deviated from its path simply because of thermal radiation and even a tiny deviation would be enough for failure. Dmcq (talk) 16:02, 26 March 2013 (UTC)
Is there a matter without any energy or any energy without its origin from matter?
I cannot understand the concepts of dark matter and dark energy and their percentages of 22% and 74% respectively. I think there is no separate matter or energy; I think they are inter-related. So matter and energy in the universe should be in the ratio 1:1. Even matter cooled to absolute zero temperature should have its nuclear energy. Why is there a different ratio? --G.Kiruthikan (talk) 06:24, 25 March 2013 (UTC)
- You CAN separate matter and energy. A rock is matter. Light and heat are energy. It's true that you can transform matter to energy and vice versa--for example, you can annihilate matter with antimatter and create photons, or smash protons together and create a shower of particles from their kinetic energy (which is what happens at the Large Hadron Collider). But that doesn't mean matter and energy are the same thing.
- Now, the next question might be: so what's dark energy? The most precise not-too-controversial answer is that it's something which 1) has negative pressure (meaning it makes the universe expand faster rather than trying to contract it), and 2) behaves like the energy of empty space. In other words, its energy density stays constant as the universe expands, so if the universe gets 2 times bigger in volume, there's 2 times as much dark energy. Are there more informative explanations of dark energy? Yes. Do any of them work? No. --140.180.249.152 (talk) 07:05, 25 March 2013 (UTC)
- Now, how in the name of God is this supposed to tie in with the law of conservation of mass-energy, which it directly contradicts??? 24.23.196.85 (talk) 07:57, 25 March 2013 (UTC)
- Don't be confused because both words start with "m". Mass and energy are the same thing. Matter is not equivalent to mass. Mass is a property of matter, just as properties like volume, density, color, etc. are. --Jayron32 12:58, 25 March 2013 (UTC)
- "Dark matter" and "dark energy" are just names, and not very good ones. There is no general definition of the term "matter" in physics. To cosmologists it means "nonrelativistic particles", which have the property that they dilute by a factor of x3 when the universe expands by a linear factor of x (or a volume factor of x3). "Radiation" is relativistic particles, which dilute by a factor of x4 instead. The "dark energy" dilutes by a factor of x0, which is not true of "energy" in general as cosmologists use that term. "Dark matter" is also inconsistently named because it can be relativistic, unlike cosmologists' normal matter. Relativistic dark matter is "hot dark matter" and nonrelativistic dark matter is "cold dark matter". So you shouldn't try too hard to make sense of the individual words in these compound terms. -- BenRG (talk) 16:22, 25 March 2013 (UTC)
- (ec) Maybe I should have been clearer in my original answer. The fundamental distinction between matter and dark energy, as used in cosmology, is that the former's density falls off as the inverse cube of the universe's radius, whereas the latter's density stays constant. So if the universe's radius gets 2 times as big, its volume is 8 times as big, meaning the density of matter is 1/8 the original. If you want, you can forget the distinction between matter and energy and consider all matter to be its equivalent rest mass energy. In that case, rest mass energy density dilutes by a factor of r^3 whereas dark energy density stays constant, which is why they have different observational consequences. --140.180.254.209 (talk) 16:28, 25 March 2013 (UTC)
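- In symbols, restating the scalings described above (with a the linear scale factor of the universe):
\[
\rho_{\text{matter}} \propto a^{-3},\qquad
\rho_{\text{radiation}} \propto a^{-4},\qquad
\rho_{\Lambda} \propto a^{0} = \text{const},
\]
so the energy in a comoving volume V ∝ a³ stays constant for matter, falls as a⁻¹ for radiation, and grows as a³ for dark energy.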
- Still not possible in the context of conservation of mass-energy -- this essentially means that as the universe expands, dark matter/dark energy is constantly being created from nothing at all! 24.23.196.85 (talk) 00:19, 26 March 2013 (UTC)
- Energy conservation in general relativity is a difficult topic, one which I think it's safe to say that no one really understands. The problem is that energy is clearly carried by gravitational waves in practice (see Orbital decay from gravitational radiation), but you can't assign energy to a gravitational wave without breaking the equivalence principle. The only way to get an energy-conservation law that respects the equivalence principle is to define it on the boundary of spacetime, which is not where the energy seems to actually be. This seems to be related to the difficulty of formulating a sensible theory of quantum gravity, since quantum mechanics absolutely requires energy conservation. And you may know that recent research in quantum gravity has suggested that it might be naturally defined on the boundary of spacetime (gravitational holography and AdS/CFT). So, yes, the dark energy is created from "nothing at all", and the exact meaning of that isn't understood. It may actually show that the universe is finite in size and has a finite maximum entropy, since when the cosmological constant is positive (and only then), you end up with a cosmological event horizon at far future times, which only allows you to interact with a finite volume of the universe, and at even later times, everything inside decays away and you just have the horizon and Hawking radiation from it, which has a finite entropy that can never be exceeded. Classically, there's a huge exponentially inflating universe outside that you can't see, but it's not clear that that's true or even can consistently be true in quantum gravity.
- In short, I don't know. -- BenRG (talk) 02:26, 26 March 2013 (UTC)
- (ec, again) Good observation. Energy is not conserved in General Relativity, and dark energy, being the energy of empty space, really IS being created from nothing at all. In fact, matter is the only component of the universe that obeys energy conservation. The energy density of radiation goes down as 1/x^4, where x is the linear size of the universe. But since the universe's volume is proportional to x^3, total energy goes down as 1/x: the energy in radiation decreases as the Universe expands!
- This can be explained by looking at Noether's theorem, which says that energy conservation is a result of the time-invariance of the laws of physics. In other words, if a law predicts the same result no matter what time it is, the law conserves energy. Immediately we see that the universe is not time invariant. There was an unambiguous past where the universe was smaller, hotter, and denser, and an unambiguous future where the universe will be bigger, colder, and sparser. For more information, see here, or here for a more technical treatment. --140.180.254.209 (talk) 02:39, 26 March 2013 (UTC)
centripetal force in a simple pendulum
In order to find the tension in a simple pendulum, we equate the net force in the centripetal direction (T − mg·cos(angle)) to mv²/r. By doing so, we presume that the result of the combined forces (tension and gravity) is circular motion. Is there a way to find the tension without this assumption? Thanks. 109.160.152.227 (talk) 07:39, 25 March 2013 (UTC)
- Since the pendulum is constrained to circular motion, what is wrong with assuming it? It will not be uniform circular motion, as it approximates sinusoidal velocity, but you only need the tension at maximum, which is when the pendulum is centred. So you are really only assuming uniform circular motion at the centre position. Wickwack 121.215.146.244 (talk) 10:32, 25 March 2013 (UTC)
- We know it is constrained to circular motion by experiment. Is there a theoretical way to prove it? 109.160.152.227 (talk) 11:13, 25 March 2013 (UTC)
- We know it is constrained to circular motion by the definitions of a circle and of a pendulum. The rod constrains the mass to points a specific distance from the pivot. A circle is the set of all points equidistant from a specific point. 38.111.64.107 (talk) 11:53, 25 March 2013 (UTC)
- This is a perfect case for the use of the Lagrangian mechanics formulation, allowing one to solve for the motion by analyzing conservation of energy, instead of the Newtonian-style representation of forces that are proportional to acceleration. The motion is constrained (by tension) but we don't know the magnitude of the tension force. We do know that energy is conserved. So, set up the Lagrangian; instead of setting the equation to zero, set it equal to the unknown constraining force, and solve for that as a residual. This is a standard homework problem in an elementary mechanics class. Nimur (talk) 15:57, 25 March 2013 (UTC)
- This method described by Nimur is the same as the principle of least action where you take into account the constraint using a Lagrange multiplier. The Lagrange multiplier is then the tension in the rod. Count Iblis (talk) 17:04, 25 March 2013 (UTC)
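- Sketched out under one common sign convention (θ measured from the downward vertical, r the distance from the pivot, and the constraint r = l enforced with a multiplier λ), the augmented Lagrangian is
\[
\mathcal{L} = \tfrac{1}{2}m\left(\dot r^{2} + r^{2}\dot\theta^{2}\right) + mgr\cos\theta + \lambda\,(r - l).
\]
The Euler–Lagrange equation for r gives \( m\ddot r - mr\dot\theta^{2} - mg\cos\theta - \lambda = 0 \); imposing r = l (so \( \ddot r = 0 \)) leaves \( \lambda = -\left(ml\dot\theta^{2} + mg\cos\theta\right) \). The radial constraint force is just λ: a pull of magnitude \( ml\dot\theta^{2} + mg\cos\theta = \tfrac{mv^{2}}{l} + mg\cos\theta \) toward the pivot, i.e. the same tension as from the circular-motion argument, but obtained without assuming it up front.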
- You can model the pendulum arm (or any rigid body undergoing elastic deformation) as a spring with a very large spring constant. Then the pendulum bob is in principle free to move in two dimensions, but in practice won't move far from the circular region where the arm has its preferred length. The tension in the arm is given by Hooke's law, and is a function of the dynamical variables. You can derive the original (explicitly constrained) solution by taking the spring constant to infinity. -- BenRG (talk) 00:43, 26 March 2013 (UTC)
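- A minimal numerical sketch of that stiff-spring picture (parameter values are assumed for illustration; the spring constant just needs to be large): integrate the unconstrained 2-D motion with the arm modelled as a Hooke spring, then compare the Hooke tension with the textbook expression T = mg·cosθ + mv²/L.

```python
# Stiff-spring model of a pendulum arm: the mass moves freely in 2-D, the "rigid"
# arm is a very stiff spring, and the tension is read off Hooke's law.  It should
# agree with T = m*g*cos(theta) + m*v^2/L to within a small ripple.
import numpy as np
from scipy.integrate import solve_ivp

m, g, L, k = 1.0, 9.81, 1.0, 1.0e5       # assumed mass, gravity, arm length, spring constant

def rhs(t, s):
    x, y, vx, vy = s                      # pivot at the origin, y measured upward
    r = np.hypot(x, y)
    tension = k * (r - L)                 # Hooke's law: positive when the arm is stretched
    ax = -tension * x / (m * r)
    ay = -tension * y / (m * r) - g
    return [vx, vy, ax, ay]

theta0 = 0.5                              # initial angle from the downward vertical (rad)
r0 = L + m * g * np.cos(theta0) / k       # start at the static stretch to avoid radial ringing
s0 = [r0 * np.sin(theta0), -r0 * np.cos(theta0), 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 2.0), s0, max_step=5e-4, rtol=1e-9, atol=1e-12)

x, y, vx, vy = sol.y
r = np.hypot(x, y)
theta = np.arctan2(x, -y)
tension_spring = k * (r - L)                                  # tension from the spring model
tension_formula = m * g * np.cos(theta) + m * (vx**2 + vy**2) / L
print(np.max(np.abs(tension_spring - tension_formula)))       # small compared with m*g
```

Taking k larger (with a correspondingly smaller integrator step) drives the two curves together, which is the limit described above.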
- Thank you all. 109.160.152.227 (talk) 16:52, 26 March 2013 (UTC)
Copper Acetate
Is there a (chemical) reaction that can reverse copper acetate back to copper?Curb Chain (talk) 08:22, 25 March 2013 (UTC)
- Sure, find an element that is higher on the Reactivity series than copper is, or (for a more advanced answer) one with a lower standard reduction potential than copper. What you are basically looking for is a metal that can act as a sacrificial anode for copper. That shouldn't be too hard, since copper is a fairly unreactive metal, so most other metals would work well for this purpose. --Jayron32 12:56, 25 March 2013 (UTC)
- Reduction with zinc. Plasmic Physics (talk) 03:55, 26 March 2013 (UTC)
- Can you give me a chemical formula?Curb Chain (talk) 09:14, 26 March 2013 (UTC)
- And what reaction would reduce zinc acetate(?) back to zinc?Curb Chain (talk) 09:39, 26 March 2013 (UTC)
- One problem you may be having is thinking that the acetate would be involved in the reaction. In most cases (especially in aqueous solution) it probably wouldn't. The reaction you're dealing with is Cu²⁺ + X(s) → Cu(s) + X²⁺ (or the appropriately charged ion of X), where X is an element higher on the aforementioned reactivity series. If you're attempting to reduce zinc instead of copper, it's the same reaction, but with a more active metal. - So how do you reduce a metal high on the list? Typically electrolysis (in a non-aqueous solution) is used. For example, the Castner–Kellner process to make metallic sodium or potassium, or the Hall–Héroult process to make aluminium. Note you can also use electricity to directly reduce copper and zinc (see electroplating). -- 71.35.109.213 (talk) 16:02, 26 March 2013 (UTC)
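- Written out concretely for the zinc suggestion above (taking "copper acetate" to be the common copper(II) salt), the overall reaction is
\[
\mathrm{Cu(CH_3COO)_2\,(aq) + Zn\,(s) \rightarrow Cu\,(s) + Zn(CH_3COO)_2\,(aq)},
\]
with the net ionic form Cu²⁺(aq) + Zn(s) → Cu(s) + Zn²⁺(aq); the acetate ions are spectators, as noted above.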
"Tanking" and electric car
Has any manufacturer of electric cars ever thought that, for "tanking" (refuelling) an electric car, the easiest way is to change the battery? That would require building a network that would need to keep a stock and track who's got what, but what's the alternative? Waiting several hours seems unacceptable for most drivers. OsmanRF34 (talk) 11:19, 25 March 2013 (UTC)
- Our electric car and charging station articles mention this as an alternative to recharging. Better Place was one of the first providers of battery swap services. Gandalf61 (talk) 11:35, 25 March 2013 (UTC)
- And you avoid much of the bookkeeping by making the battery not part of the car, but a lease (preferably from the maker, who will also recycle and recondition them). --Stephan Schulz (talk) 11:40, 25 March 2013 (UTC)
- And if some turkey comes to me offering shares in a battery swap company targeting electric cars, I'll show him the door right smartly, for these reasons:
- The service life of a battery is roughly halved for each 10 C increase in temperature (the rough scaling is sketched below this list). If the company has swap stations in the southern areas of Australia (22 C typical), commercial pressure will make it charge ($ charge, not electric charge) on the basis of a battery life of many years. Then some drivers will go on a tour up to the top end, where the temperature is around 40 C. The batteries will come back stuffed in 1 year.
- The batteries have to be charged (electric this time, not $) - presumably from the grid. Now, modern automotive diesels operate at about 40% efficiency. Power stations can do a bit better, up to 45% efficiency. Losses in the grid and local electricity distribution bring it back to about 40%. Add in the battery charge efficiency, always less than 100%, and the business is on the wrong side of marginal.
- At the very least, the electronic controls need to be swapped out with the battery, otherwise the battery swap company is at risk from cars with faulty controls damaging the battery.
- Then there's the safety aspects of cars with a ton of battery on board - that will make every collision like hitting a big block of cement at 2x speed, with the added danger of electrical fire etc.
- Wickwack 124.182.153.240 (talk) 13:09, 25 March 2013 (UTC)
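- The halving rule in point 1, written out as a rough scaling (with T in °C and an assumed reference life of five years at 22 °C):
\[
\text{life}(T) \approx \text{life}(22\,^{\circ}\mathrm{C}) \times 2^{-(T-22)/10},
\qquad\text{so at } 40\,^{\circ}\mathrm{C}:\ 5\ \text{yr} \times 2^{-1.8} \approx 1.4\ \text{yr}.
\]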
- I don't agree with any of those concerns.
- The issues of battery life versus temperature are the same issues that all sorts of companies have with leasing all sorts of equipment - and they somehow manage. When you rent a car, they can't tell whether you're going to rev the thing to redline continually, brake like crazy, scrub the tires and so forth...but somehow they manage to stay in business. Same deal with the batteries. It can be managed.
- The batteries have to be charged - yes, but when you talk of the efficiency of diesel engines, you're assuming that the world continues to use fossil fuel to generate electricity. It's not about efficiency - it's about global warming. If your electricity can come from wind, solar or nuclear - then the CO2 footprint of an electric car is close to zero...vastly less than for a diesel. So simple fuel efficiency isn't the issue. Plus this comment is off-topic - we're talking about recharge mechanisms - not whether electric cars are a good or a bad idea.
- Safety aspects are also a ridiculous concern. The Prius has a bunch of batteries in the trunk, and so do any number of electric vehicles. Sure, batteries weigh a lot - but electric motors are small and light. Compare that to a tank of highly inflammable liquid and the huge engine with all of its cooling equipment!
- The real problem with battery-swap stations is the initial cost. A gas station on a US freeway has half a dozen pumps and usually about half of them seem to be occupied, and it takes a minute or two to gas up and get going - so as a VERY rough estimate, I'd bet they have around five customers per minute...on the average. Of course gas-powered cars have a range of around 400 to 500 miles - and electric cars are about 100 - so you need to swap batteries (say) 4 times more often than that. If each of them needs a complete battery swap and a battery takes (say) three hours to recharge - then they need to keep at least 3x60x5x4=3,600 battery packs in stock and on charge at all times to keep up with "average" demand. Probably they should double that to allow for peak demand (Nobody wants to show up at a gas station that has no recharged batteries in stock!), so let's say you need 7,000 battery packs at every garage, continuously on charge. According to Nissan Leaf, the cost of a battery pack for that car is $18,000...so every gas station has to have about 126 million dollars' worth of batteries in stock and on charge at all times! There are about 170,000 gas stations in the US - so upwards of $20 trillion worth of batteries have to be purchased, stored and maintained in order to have a viable battery-swap system in place (the tally is sketched below).
- SteveBaker (talk) 16:00, 25 March 2013 (UTC)
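- The tally behind those figures, using only the assumptions stated in the post above:

```python
# Quick tally of the battery-swap estimate above; every input is an assumption
# quoted from the post, not measured data.
customers_per_min = 5            # rough peak rate per station
swap_factor = 4                  # electric cars "refuel" ~4x as often as petrol cars
recharge_hours = 3               # time a swapped pack spends back on charge
pack_cost = 18_000               # US$ per pack (Nissan Leaf figure cited above)
stations = 170_000               # approximate number of US gas stations

packs_on_charge = customers_per_min * 60 * recharge_hours * swap_factor   # 3,600
packs_with_margin = 2 * packs_on_charge                                   # 7,200 (~ the 7,000 above)
cost_per_station = packs_with_margin * pack_cost                          # ~$130 million
national_cost = cost_per_station * stations                               # ~$22 trillion

print(packs_on_charge, packs_with_margin, cost_per_station, national_cost)
```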
- I think a realistic range is more like 200 miles - see Tesla Model S. Also, at least when I was in the US, often 90 % of pumps were empty. So there is a factor of about 10 in your calculation. Fuel stations may be more occupied in high-population areas, but there people drive, on average, shorter distances, so they can refuel at home overnight. Also, of course, this infrastructure does not need to be there in one step. Especially with modern electronics and online services, its much easier to dynamically direct cars to stations that have batteries, and batteries to stations that don't. This is only slightly less convenient than having three fuel stations on every crossing. What I see more as a practical problem is a lack of standardisation of batteries, and the additional constraints on the car construction to make the battery easily replaceable. This may shake out over time, but it might take a while. --Stephan Schulz (talk) 16:18, 25 March 2013 (UTC)
- I stand by my numbers:
- I don't think you're correct about the average occupancy of pumps in US gas stations...but in any case, the number of batteries they'd need to stock would have to match the peak customer demand - not the off-peak demand. If the peak customer demand were any less than the number of pumps out front...why would they have built that number of gas pumps? Whatever calculation they use to decide how many pumps to get - that calculation applies identically to the number of batteries they'd estimate they'd need. So if we're being careful...we should say that the number of batteries a station would have to keep in stock would have to be equal to the number of gas pumps they have multiplied by the peak number of customers per hour per pump multiplied by the number of hours it takes to recharge a battery.
- Yes, there are lots of gas stations close to each other at many intersections - but that's how much refuelling capacity we'd be replacing. Sure, you could halve the number of gas stations - but each one would get twice the amount of business - so the number of batteries that would have to be stored and recharged overall is exactly the same.
- The 100 mile electric vehicle range that I used is a very good estimate. The claimed 200 mile range of the Tesla is *very* atypical. With the 85kWh, 200 mile battery pack, the Tesla is a high end super-car and costs $95,000(!) - the slightly more affordable 40kWh version has only a 120 mile range and still costs $59,000. The Nissan Leaf is a more realistic kind of car that the majority of people are more likely to own - and according to the EPA, it has a 75 mile range...so even worse than I estimated.
- The idea of finding an alternative (nearby) gas station in the event that the one you want is out of batteries right now could be very bad for consumers. If you have to replace your batteries every 75 miles (the Nissan Leaf number) then having to drive 10 miles out of your way to find another battery swap station starts to look VERY unattractive! On my recent drive from Texas, through New Mexico and into Arizona, we were getting kinda low on gas and my GPS told me that the nearest filling station was 18 miles away and the second nearest was close to 30 miles away. With a gas-powered car, that's OK - 18 miles is well within my "reserve" and the odds of that gas station being out of gas is almost zero. But for an electric vehicle, there had better be a charged battery at that first station because if you have to drive 30 miles to get to the second closest one, then by the time you get back onto the freeway, you'll only have 45 miles of juice left and it'll be time to turn around and drive back again! With shorter vehicle ranges, you need MORE filling stations - not less!
- You might be correct about not needing battery-replacement-stations in built-up areas - but I have to guess that the battery-replacement companies are going to be strongly resistant to you using their batteries and recharging them yourself. If you're going to spend trillions of dollars on these replacement battery stations, you *REALLY* want people to pay to use them! So I'd expect we'd find some kind of computer-interlock thingy built into the battery that would stop you from recharging your Exxon battery yourself. This is potentially fixable by legislation...but it's tricky.
- I agree that battery standardization is a tough problem. Honestly, I think hydrogen powered vehicles are more likely to be in our future. SteveBaker (talk) 20:38, 25 March 2013 (UTC)
- I think we may be talking about different things. I talk about a reasonable system that can be constructed with the best known technology - I assume that we can get Tesla quality down to the masses. If you want to drive through large empty spaces, you don't want a Nissan Leaf. But you don't want a Smart for that today either. As for finding batteries: you assume your current workflow (drive to fuel stations until you hit one that has fuel). I'm thinking about a navigation system that knows where to find available batteries at any given time, and that will automatically guide you to the most convenient location on demand, and might even call ahead to ensure they hold one for you. Also, there is no reason why you cannot recharge the car from a reasonable electrical outlet in a pinch. Sure, it takes more time, but it won't let you get totally stranded. "Fuel" stations will need high-powered electricity for recharging anyway, so you can get a 100 amp line to the recharger easily. Similarly for the business model: you can simply lease the batteries for a monthly (or mileage-based) fee, with electricity provided for free. Then whenever you recharge at home, the battery company saves a charge (but the actual electricity is small change compared to other costs, anyway). Something similar works e.g. for cell phone plans, so why wouldn't it work for batteries? --Stephan Schulz (talk) 23:58, 25 March 2013 (UTC)
- Regarding Steve Baker's first set of comments:-
- Relevance of temperature dependence: The service life of batteries roughly halves for each 10 C increase in temperature. The service life of an internal combustion engine is also temperature dependent, but not to any noticeable degree. Most wear occurs during warmup, which is shorter in hot climates. Nobody goes around saying how disappointed they are that their car engine lasted only a year or two because they live in a hot climate. So your argument here is a nonsense.
- Claimed zero CO2 footprint for electric cars: It never ceases to amaze that people have such a head-in-the-sand approach. The reality is that almost all electric power is generated from either fossil fuels or nuclear, and nuclear, for lots of reasons, is a solution only for countries with very large populations and/or a need to hide production of weapons-grade material. Things will remain this way for the foreseeable future. Why do you think photo-voltaic panels are so expensive when not subsidised by stupid governments? It has a lot to do with the fact that they have to be made with roughly the same order of electric energy consumed in the factory as they will produce in their service life. Having the CO2 emitted by a power station in China, where the panels are made, instead of from each car, IS NOT a solution to the World's pollution and greenhouse issues.
- Safety aspects: You must be joking. How many fuel tank fires/explosions actually occur? Personally, the only ones I know about occurred in Ford Pinto cars about 30 - 40 years ago, and it is understood these cars had an easily corrected design flaw, and there were only a handful of Pinto fires anyway. People who think there is not a safety aspect to an added huge mass are also "head in sand" types. And batteries also cause electro-chemical fires when physically damaged. You need to take into account that while battery cars are very much in the minority, and are still new (which means not driven by less careful drivers), they won't show up in accident statistics. If battery cars were most or all of the cars on the roads, it would be a very different story. This is similar to the experience of our local bus company, who have been slowly converting their fleet to LPG fuel on government subsidy (LPG being considered a little more environmentally friendly than diesel, and something we have a lot of in this country). They now have one-third of their fleet on LPG, about 500 busses. At first, everything seemed good, but with 500 busses on the road every day they now have about 1 bus fire per 3 months, usually totally destroying the bus. This is just not acceptable. They've been lucky that passengers were able-bodied, few, and were able to leave each bus in time. Fires in their fleet of ~1400 diesel busses were about 1 each 10 to 15 years, with none totally destroyed.
- Wickwack 58.170.130.186 (talk) 23:45, 25 March 2013 (UTC)
- If the batteries are provided by the car maker as a lease, or by the fuel station network as a lease, different battery lifetimes are irrelevant. Whoever operates the system simply must average cost over all driving conditions. Not trivial, but something that businesses do all the time. And the time when solar panels used about as much energy as they provided is long in the past. Current-generation photovoltaics are very much energy-positive, as are other alternative energy sources like wind, hydro, solar-thermal, and so on. Denmark produces 20%-30% of its electricity with wind turbines. Norway produces 99% from hydropower (but lends some of its capacity to Denmark for storage). Germany has about 20% of electricity covered by renewables. --Stephan Schulz (talk) 00:08, 26 March 2013 (UTC)
- As I have shown, batteries are different from any other technology in that their life halves for each 10 C rise in temperature. The same battery that lasts 5 years in the southern extremity of Western Australia will last only one year at the top end, usage patterns being the same. Averaging over all conditions works in most fields because they don't have this sort of variation. Most countries don't have rivers suitable for hydro power to any extent. Germany has a lot of nuclear power - they can afford it with their big population, but it has become a political liability.
- Wind has its own problems. It doesn't blow with total reliability, so power companies using it have to have spinning reserve covering 100% of the wind generation capacity. Our local power company has some large wind farms, as the Government made them do it. The spinning reserve is diesel, as their base-load coal-fired power stations cannot take sudden large changes in load. Big diesel engines cannot tolerate running long periods at idle, so they have to have a base load on the diesels at all times, at less efficiency in CO2 footprint than their base-load coal-fired stations. That considerably destroys the CO2 advantage of wind power.
- You see a pattern in all this? The pattern is that all these alternative energy sources (wind, photo-voltaic, even nuclear) have to be government-forced, and almost always government-subsidised. Why is that? Because they are not commercially viable. Why are they not commercially viable? Because they don't much reduce fossil fuel power generation, or environmental impact; they just shift it somewhere out of sight. It suits China at the moment to sell us solar panels made with their cheap labour and their cheap coal-fired electricity, running off coal mined without much regard to OH&S. It will be interesting to see what happens 10 years from now - as their standard of living rises up to Western standards, and their labour ceases thereby to be cheap, and their Government gets on top of their bad health and environmental issues, you might find they won't sell us cheap panels any more. Wickwack 58.170.130.186 (talk) 00:51, 26 March 2013 (UTC)
- Your first point is not about technical questions, but about business ones. Businesses regularly deal with a factor of 5 in cost. There are many McDonalds in downtown Manhattan. The rent there is a lot more than 5 times higher than the cost in, say, Seward, Nebraska. What's more, McDonalds is maintaining a number of restaurants in Paris, with high rents and nobody who wants to eat there. It's part of their strategy to offer (nearly) the same experience for a similar price all over the world. If you talk about the number of McDonalds restaurants, or mainstream cars, differences simply average out. Your second point has some valid arguments, but it very much overstates the case. Even today, there are ways to handle the variability of wind. It's not free, as many people assumed, but then there is no such thing as a free lunch. You need better electricity networks, you may need high-voltage direct current to connect larger areas so that local effects can, again, average out. You can use pumped-storage hydroelectricity in some areas, you can use molten salt storage with concentrated solar power as Desertec plans to do, and so on. There are some technical problems, there is a price, but there is nothing we cannot do on a technological level even today. Yes, alternative energy currently works best with government subsidies. But which form of energy does not? Nuclear has been largely developed out of the public purse. Who do you think builds the roads tankers drive on distributing fuel over the country? And so on. And we haven't even started on externalised costs. --Stephan Schulz (talk) 06:30, 26 March 2013 (UTC)
- Yes, building rental in cities will be 5 times that of small towns, but what fraction of the amortised cost per burger is rent? My guess is that labour is a very much greater cost, even though they seem to employ mostly low-wage teenagers. One McDonald's shop looks the same as another, and the floor area seems unrelated to trading volume, but their staff levels do seem in proportion to the trading volume - up to 20 or so in busy stores at busy times, only 3 or so in quiet stores (what they call a front-of-house person aged about 15, a back-of-house person also about 15, and a duty manager, all of 17! Good training though - I've learnt from experience that if you hire an ex-McDonald's person, you get a hard-working, customer-pleasing worker). A battery lease company will have as its major cost the cost of the batteries. As I said before, hydro power is only for those countries fortunate enough to have suitable rivers and gorges to dam. Most don't. The same applies to hydro storage, which is well established (many decades), but only in those locations suitable for it. You can't be serious about regarding highway costs as a subsidy to fossil fuel generation, as all sorts of other traffic uses the same roads in far greater volume. Things like molten salt storage are just good ideas, not proven established technology. One of our local universities (Murdoch) put a lot of time and money into researching phase change storage (à la Glauber's salt and similar), but they were stumped by the problem of supercooling. Supercooling can be virtually eliminated by adding an evenly mixed dispersion of some non-reactive substance, but it tends to settle out a little each cycle, so after a finite number of storage/release cycles the dispersant is all at the bottom, and the salt will no longer change phase. Wickwack 121.221.236.225 (talk) 07:53, 26 March 2013 (UTC)
- If the batteries are provided by the car maker as a lease, or by the fuel station network as a lease, different battery lifetimes are irrelevant. Whoever operates the system simply must average cost over all driving conditions. Not trivial, but something that businesses do all the time. And the days when solar panels used about as much energy as they provided are long past. Current-generation photovoltaics are very much energy-positive. As are other alternative energy sources, like wind, hydro, solar-thermal, and so on. Denmark produces 20%-30% of its electricity with wind turbines. Norway produces 99% from hydropower (but lends some of its capacity to Denmark for storage). Germany has about 20% of its electricity covered by renewables. --Stephan Schulz (talk) 00:08, 26 March 2013 (UTC)
- A few comments:
- 1) Price of batteries should come way down when they are made in the quantities envisioned here.
- 2) Electric cars probably aren't suitable for Australia, what with the high temperatures and long distances. At most, they could be used as "city commuter vehicles", where each is limited to one city, with no infrastructure for recharging them between cities.
- 3) Recharging will likely occur during off-peak hours, when electricity prices are lower. StuRat (talk) 02:51, 26 March 2013 (UTC)
- Re (1): I doubt it, as batteries are already made in large quantities in automated plants, and the price pretty much reflects the materials consumed.
- Re (2): You got that right. And a high proportion of people buy cars suited for that weekend away, or visiting Aunt Joan in another city. Not entirely rational, as 99% of the time they are just commuting, but that's what they do.
- Re (3): True. Most Australian power companies do not offer off-peak concession rates (the ones that do have turned it around - they charge a premium for on-peak use), but that is merely a policy decision. If a customer uses more than 50 MWh per year, deals get negotiated. I negotiated 6 cents/kWh for my employer for 10 PM to 5 AM consumption, several rates ending up at 10 cents/kWh for peak-hour consumption, whereas the same power company charges a flat 12 cents/kWh to homeowners. Wickwack 58.164.230.22 (talk) 03:36, 26 March 2013 (UTC)
Titration dilemma
Hi. I am in the completion stages of writing a chemistry lab report, and so this may be the first time that I'm asking for help on an assignment problem on the Reference Desk. I will show an attempt to complete the question myself in order to demonstrate which part I am stuck on. All logs are in base 10.
- Part 1
I am asked to calculate the theoretical pH after mixing 25.0 mL of 0.120 M ammonia with 12.5 mL of 0.120 M HCl. The pKb of ammonia is given as 4.75. Here are the steps I follow to get the pH of this mixture.
Given that the pKb = 4.75, Kb = 10^−4.75 = 1.78 × 10^−5. I then use the equation that in solution Kb = [OH−]² / [NH3], and so [OH−] = 1.46 × 10^−3 M. The original pH of ammonia solution without any addition of HCl is equal to 14 − pOH, where pOH = −log(1.46 × 10^−3), so pH = 11.2. So far, so good.
Here's where the problem arises. I take the [OH−] concentration in total solution as the original molarity multiplied by a 25/37.5 ratio, giving [OH−] = 9.73 × 10^−4. I then assume 100% dissociation for HCl, so that [H3O+] = 0.120 M × 12.5/37.5 = 0.04 M. I then subtract one from the other, so effectively the new [H3O+] is equal to 0.039 M. Taking pH = −log(0.039) = 1.41, that is the value I get for pH.
Here's the real dilemma. According to my problem sheet, the combination of these two solutions occurs at the half-neutralization point, which creates a partial conversion of NH3 into NH4+. Do I have to take into account the conjugate acid, NH4+? I have read that at the half-neutralization point, pH = pKa, and pOH = pKb. That means pOH = 4.75, and so pH = 9.25. This makes sense from a "half-neutralization" standpoint, but it contradicts the previous answer. Uh oh! Where is my calculation error?
- Part 2
As if the previous problem didn't make me sound stupid enough, here's the next issue. I analyze a titration curve, finding the equivalence point of a reaction between aqueous acetic acid (weak) and sodium hydroxide (strong) to be pH~8.3. I use the volume of titrant NaOH added, rather than the pH value, to calculate the concentration of unknown NaOH. Here's method one.
Ka of acetic acid is 10^−4.76 = 1.7378 × 10^−5 = [H3O+]² / [HA]. Since [HA] = 0.1056 M originally and is diluted to a 1/7 solution (25 mL original acid plus 150 mL deionized water), effective [HA] = 0.01509 M. So, I get [H3O+] = 5.12 × 10^−4, which I assume to be equivalent to [OH−] in the solution at the equivalence point. (Hmm...have I forgotten to account for pH>7?) At this equivalence point, I find that 22.6 mL of NaOH solution at initial unknown concentration is added. I use the concentration-volume formula, with the product of the [OH−] concentration and the total volume 0.1976 L divided by the NaOH titrant volume, to find that the stock solution of NaOH has a concentration of 4.48 × 10^−3 M, which is quite low.
The second method I use takes the same approach, except this time I assume no dilution took place, and I use 0.1056 M as the original acetic acid concentration, so [OH−] becomes 1.35 × 10^−3 M, and the total volume of mixture is 0.0476 L. This time I get [NaOH] = 2.84 × 10^−3 M, within an order of magnitude of the first calculation.
The final method I use takes the formula Ka = x² / ([CH3COOH] − x), where "x" is the original [NaOH] concentration. For this, and using the original [CH3COOH] = 0.1056 M, I get x = 1.346 × 10^−3 M. I take the average of the three values to be [NaOH], but I still doubt that I did this part correctly.
Thus, I have demonstrated above that I have attempted to solve the questions without answers that make sense. Please enlighten me as to my erroneous methods used in calculation. Thanks. ~AH1 (discuss!) 13:55, 25 March 2013 (UTC)
- By your own admission this is a homework question. We cannot pass your exam for you.--Aspro (talk) 16:04, 25 March 2013 (UTC)
- From the Notes, "We don't do your homework for you, though we’ll help you past the stuck point." I am not asking you to solve the problem for me, but to point to any relevant articles or concepts that may help. I have specifically gone through my thinking process, highlighting areas of difficulty, as it's always been done for homework questions, although this is my first time trying to get help. Have the rules now changed? I am now trying to get past the stuck point, by solving "Part 1" in a different manner, but would still appreciate any kind of guidance. ~AH1 (discuss!) 16:28, 25 March 2013 (UTC)
- For #1, it is even easier than that. At the 1/2 neutralization point, the pH of the mixture = the pKa of the acid form of your conjugate pair (this is a consequence of the Henderson-Hasselbalch equation). So, if you know the pKa of the ammonium ion (and you do if you know the pKb of ammonia, since pKa + pKb = 14), then you know the predicted pH at the half-neutralization point. --Jayron32 16:30, 25 March 2013 (UTC)
- But I already demonstrated (in paragraph 5) that I was capable of applying this knowledge to finding the pH at the specific half-equivalence point. Thank you for confirming that for me. Though how do you find the pH for other ratios, such as a small amount of strong acid, at the full equivalence point, or with a large amount of strong acid...do I use the equation in paragraph 4, a variant of the equation I gave in paragraph 9, or neither? Thanks again. ~AH1 (discuss!) 16:39, 25 March 2013 (UTC)
- Edit: Wait...let me try solving it using pKa. ~AH1 (discuss!) 16:42, 25 March 2013 (UTC)
- OK, so I just got the same pH value, 9.25. I'm now trying to verify my general method by somehow setting up the equation so I have a surplus of basic ions, which is proving rather difficult. How do I go about doing this? I'm going to try it first with the simplest example. ~AH1 (discuss!) 16:52, 25 March 2013 (UTC)
- AH. This, by the way, is THE classic acid-base titration problem. Just about every general chemistry class uses it. You've got several ways to solve these problems, depending on where you are at in your titration:
- If you are calculating the pH of the solution before adding ANY strong acid, then you are just calculating the pH of a pure weak base, and this is a simple Kb problem. If you are calculating the pH at any point between the first drop of strong acid and the equivalence point (including the 1/2 equivalence point, as noted above, which is a special case), you use the Henderson-Hasselbalch equation. You need to do a bit of stoichiometry to figure out how much at any one point is in the acid form and how much is in the base form, but that is very easy, as the moles of conjugate acid = the moles of strong acid added, and the moles of base = the moles of base initially − the moles of strong acid added. Just plug those numbers into the HH equation and solve for pH. If you are calculating the pH at the exact equivalence point, that's the pH of a weak acid; keep in mind that the moles of NH4+ are going to be the same as the moles of the NH3 at the start of the reaction, BUT the volume is now the TOTAL volume (you've diluted it some). Otherwise, you calculate this the same way you calculate the initial pH, but for the acid form and not the base form. If you go beyond the equivalence point, you now have an excess of the strong acid. You're just calculating the pH of a strong acid, which is just the moles of excess strong acid / total volume, and then take the −log of that. --Jayron32 21:53, 25 March 2013 (UTC)
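For anyone who wants to check the arithmetic in the regimes described above, here is a minimal Python sketch for this particular NH3/HCl titration. It uses the simplified textbook approximations only (no activity corrections), and the volumes and concentrations are those given in the question.

    import math

    pKb = 4.75                 # ammonia
    pKa = 14.0 - pKb           # ammonium ion, 9.25
    Kb, Ka = 10 ** -pKb, 10 ** -pKa

    def titration_ph(v_base_ml, c_base, v_acid_ml, c_acid):
        """pH of a weak base titrated with a strong acid (simplified)."""
        mol_base = v_base_ml * c_base / 1000.0
        mol_acid = v_acid_ml * c_acid / 1000.0
        v_tot_l = (v_base_ml + v_acid_ml) / 1000.0
        if mol_acid == 0.0:                               # pure weak base
            return 14.0 + math.log10(math.sqrt(Kb * c_base))
        if math.isclose(mol_acid, mol_base):              # equivalence point: weak acid NH4+
            return -math.log10(math.sqrt(Ka * mol_base / v_tot_l))
        if mol_acid < mol_base:                           # buffer region: Henderson-Hasselbalch
            return pKa + math.log10((mol_base - mol_acid) / mol_acid)
        return -math.log10((mol_acid - mol_base) / v_tot_l)  # past equivalence: excess strong acid

    print(titration_ph(25.0, 0.120, 0.0, 0.120))    # ~11.2, base only
    print(titration_ph(25.0, 0.120, 3.0, 0.120))    # ~10.1
    print(titration_ph(25.0, 0.120, 12.5, 0.120))   # 9.25 at half-neutralization
    print(titration_ph(25.0, 0.120, 25.0, 0.120))   # ~5.2 at the equivalence point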
- I think I've finally figured it out. For the general method, if 3.00 mL of acid is added to the ammonia solution, then the pH is 10.1. Am I right? Thanks, everyone. ~AH1 (discuss!) 17:05, 25 March 2013 (UTC)
- Side note: In order to solve my problem, I needed to represent NH3 as NH4+OH-. ~AH1 (discuss!) 17:08, 25 March 2013 (UTC)
- Not important, the stoichiometry is the same either way. --Jayron32 02:50, 26 March 2013 (UTC)
Dredging
Here is a research question I am trying to answer for my project. Do ocean floor borrow pits (these are created by dredging) have hypoxic or anoxic conditions? Do they encourage the surrounding area to be hypoxic or anoxic?--anon — Preceding unsigned comment added by 99.146.124.35 (talk) 23:27, 25 March 2013 (UTC)
- I don't see why they should, unless the dredging somehow increases biochemical oxygen demand. Of course I'm not an expert on marine biology, so anyone with more definitive info is welcome to contribute. 24.23.196.85 (talk) 00:05, 26 March 2013 (UTC)
- Not an easy question to answer. Dredging for the purpose of deepening a channel where there is flowing water means that the water would only become slightly hypoxic as it flowed by (until the organic matter had decomposed). But if it is a blind trench or pit below the depth where the wave action above would cause mixing (i.e., well below sixty-odd feet), then it could become truly anoxic and a death trap for any animal venturing into it. Aspro (talk) 00:29, 26 March 2013 (UTC)
March 26
recycled paper
I went to an office supply store tonight to get paper for my printer. In their store brand, they had regular paper, 30% recycled, and 100% recycled. Otherwise the descriptions were pretty much the same. The 30% recycled cost somewhat more than the regular paper and the 100% recycled cost nearly twice as much! Does using recycled paper really cost them that much more? If so, is recycling paper economically feasible? We recycle paper, plastic, glass, CFLs, batteries, and old electronics - is any of it worthwhile economically? Bubba73 You talkin' to me? 00:41, 26 March 2013 (UTC)
- You've ignored the other half of the pricing equation, which is demand: are people willing to pay more for recycled paper? If they are willing to pay more, they will be charged more, regardless of what it costs to make. That is, a company will not charge less for a cheaper product if people are willing to pay more for it. Companies aren't in the business of making less money. --Jayron32 00:45, 26 March 2013 (UTC)
- Perhaps you are right, but if recycled paper cost them less, they could charge the same for it and make more money. Of course, they probably won't, if they can get away with charging more. Bubba73 You talkin' to me? 00:58, 26 March 2013 (UTC)
- The idea that they could charge the same for it and make more money isn't borne out by the evidence. If they could make more money by charging less they would already be doing that. The Price point for any product is a heavily researched concept, and one that retailers are constantly studying, experimenting on and gathering data about. It isn't random in any way; this is pretty much the major thing that ALL retailers do CONSTANTLY (at least, if they want to stay in business). The evidence that they are maximizing their profit at a particular price point is that the price point has remained consistent across time and space: all retailers use such price points (recycled paper is more expensive in nearly all stores), and it has pretty much always been that way. If you want to understand how pricing works, look at the pricing of bottled soda. A 2-liter bottle of Coca-Cola in a grocery store runs between $0.99 and $1.50; a 20-ounce bottle of Coca-Cola in the same store, but being sold in the cooler by the checkout line, is between $1.59 and $1.79. Why? Because people will pay that. Obviously, the smaller bottle of soda is NOT more expensive to produce. The price of an object is very tenuously tied to the cost of producing it. The production and distribution costs set the floor for the price, but there is no ceiling at all: the ceiling is the price that will maximize profits. --Jayron32 02:27, 26 March 2013 (UTC)
- If using recycled paper cost them less than regular paper, then they could charge the same price and make more off the recycled paper than the regular paper. Bubba73 You talkin' to me? 02:34, 26 March 2013 (UTC)
- Why? If they can charge more and make more money with fewer customers (that is, if the increase in revenue from customers who will still buy the paper at the higher price offsets the lost revenue from the customers who won't), then why would they charge less? That would make them less money. Look, consider this example, just making up numbers. Let's say that regular paper costs $1.00 per ream to make, and they sell it for $5.00 per ream. That's $4.00 per ream in profit. Let's, for the sake of argument, say that recycled paper costs $0.90 per ream and they sell it for $5.50 per ream. That's $4.60 per ream in profit. Now, let's say that 1000 people buy the regular paper, and 800 people buy the recycled paper. That's $4000 in regular paper and $3680 for the recycled paper. Now, let's say they lower the price of the recycled paper to the same price point. Now we've lowered the profit to $4.10 per ream. Let's say that now 100 people switch from regular to recycled, so now there are 900 buying each. That's $3600 for regular and $3690 for recycled. Under the older pricing scheme, the company made $7,680 in paper sales. Under the new pricing scheme, the company made $7,290 in paper sales. So, it isn't advantageous to lower the price of recycled paper, because you make less money. Real world examples are going to be more complex than this, but this at least demonstrates that lowering the price of a product to sell more of it does not always make you more money, even if that product is cheaper to make. --Jayron32 02:47, 26 March 2013 (UTC)
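A trivial Python re-run of the made-up numbers above, purely to make the comparison easy to follow (all figures are the hypothetical ones from that example, not real data):

    def profit(price, cost, units):
        """Profit contribution = (price - cost) * units sold."""
        return (price - cost) * units

    # Hypothetical figures from the example above
    old_scheme = profit(5.00, 1.00, 1000) + profit(5.50, 0.90, 800)  # regular + recycled
    new_scheme = profit(5.00, 1.00, 900) + profit(5.00, 0.90, 900)   # both at $5.00

    print(old_scheme)  # 7680.0
    print(new_scheme)  # 7290.0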
- Maybe they can charge more for recycled paper, but if it cost them less, they don't have to. They can charge the same price as regular paper and make more per unit. And if regular paper and recycled paper are the same price on the shelf, I think more people would choose the recycled paper, which could drive down the cost more, but that is another consideration. The bottom line is that if recycled paper cost them less, and they sell it for the same price, they will make more per unit on recycled paper. Bubba73 You talkin' to me? 04:17, 26 March 2013 (UTC)
- ... But keeping the Coke cool is more expensive. And the customer may want one that is already cold because they want to drink it soon. Bubba73 You talkin' to me? 02:38, 26 March 2013 (UTC)
- The second reason, and not the first, is your answer. --Jayron32 02:47, 26 March 2013 (UTC)
- So, not taking the profit motive into consideration, can a company make paper from recycled stock more cheaply than from cutting down trees? And what about the other things that are recycled? Bubba73 You talkin' to me? 02:07, 26 March 2013 (UTC)
- If we want to apply microeconomics to this, an assumption should be made that the higher price does in fact reflect higher costs. There isn't a lot of differentiation between various forms of the same level of recycled paper, so in the long run the price is equal to the Average Total Cost. The price of recycled paper is more expensive in the long run so the average total cost must be higher. Ryan Vesey 02:26, 26 March 2013 (UTC)
- Not necessarily. See my example above about bottled soda. The price will, of course, be affected somewhat by costs, insofar as no company will sell an item at a loss. HOWEVER, if a company can charge more for a product and make more money, they will. 1) If the potential loss of some customers at the higher price point is offset by the increased revenue from the customers who are willing to pay more, companies will charge the higher price. This has nothing to do with costs. 2) Some products actually sell better (i.e. move more units) at higher price points. Premium pricing involves using artificially high prices to increase the "stature" of a product and thus sell more units to people who buy it only because it is expensive. That is, lowering the price of such a product would actually move fewer units, because people will have the perception that lower price = lower quality. This ALSO has nothing to do with actual costs. --Jayron32 02:30, 26 March 2013 (UTC)
- The bottle example above isn't analogous. When you're dealing with pop, you're dealing with monopolistic competition, where firms are price setters rather than price takers. When you're dealing with paper, you're in a perfectly competitive market and firms are price takers. If they refuse to accept the price decided by the market, their product will not be purchased. In perfect competition, the long run price is at Average Total Cost, so the only deciding factor in the long run price is changing costs. That being said, this is a bunch of economic bunk and requires a way of thinking that doesn't necessarily mesh with reality. Ryan Vesey 02:38, 26 March 2013 (UTC)
- Yes, most people want a particular brand of soft drink. I don't know of any brand loyalty when it comes to a ream of paper for the printer, but there probably is some. Bubba73 You talkin' to me? 02:40, 26 March 2013 (UTC)
- Anybody who doesn't believe that prices will rise to what consumers are prepared to pay, and doesn't believe in premium pricing, just needs to look at a few good examples:-
- I worked for a while in a building next door to the Nevada Shirt Company (now driven out of the market by cheaper Asian product), and got to know some of the girls working there. They were into a form of badge engineering; that is, they made much the same shirts using the same material on the same machines, but with several different brands and logos sewn on. Obviously the cost per shirt to the factory was the same in all cases. Some logos were cheap brands, e.g. Glo-Weave, and some were more premium brands like Pierre Cardin. The price you paid for a shirt in retail shops reflected the brand image.
- Some years ago, I was shown around a razor blade factory (to learn about their quality control). They made two lines - a cheap line and a premium line. The only difference was the branding on the blades, and the wording and colours on the packaging.
- In the 1970s, Mercedes sold small but steady numbers of cars here in Australia, for about 3 times the cost of a Holden (the local GM variant, roughly equivalent to Chevy). But it was a MUCH better car, lasting about twice as long, with 4-wheel independent suspension, full-function heating and aircon, a very quiet ride, and many other features the Holden didn't have. Worth it if you could afford it. By the 1980s, though, GM/Holden had caught up, and today Holdens last just as long and have all the features of a Merc. So what did Mercedes Australia do? They substantially increased their price, thus reinforcing the image of exclusivity and of something you can have because you are better than the ordinary peasant. It worked - their sales increased. It's a bit sad that GM's Australian division has never made as good a product as it does now, yet sales are falling - due to brand perceptions.
- Up until the 1970s, pretty much all clothing sold in Australia was made in Australia, protected by an import tariff. In 1974 there was a change in Federal Government, and the newly elected Govt (Labor Party) ended the tariff protection, on the theory that it was causing high prices by preventing overseas competition. Somehow they thought that local manufacturers would improve their act under competitive pressure. Didn't happen. What happened was that retailers started importing directly from cheap Asian suppliers (who sewed on the same brands and logos), rapidly forcing all the local manufacturers out of business. Did the retailers lower prices because they paid less money to factories? Of course not! People were accustomed to paying so much for a shirt, pants, or whatever, and the retailers kept on charging that much.
- Wickwack 58.164.230.22 (talk) 03:11, 26 March 2013 (UTC)
- All very good examples. Thanks for providing those. --Jayron32 03:18, 26 March 2013 (UTC)
- This is elementary economics (not "a bunch of economic bunk"), and I'm surprised that nobody has linked to supply and demand. In particular, look at the first graph on the article, which illustrates exactly the scenario we're talking about. At any given price, consumers will want to buy a certain quantity of paper, given by the demand curve D1. At any given price, suppliers will be able to produce a certain quantity, given by the supply curve S. The intersection of D1 and S gives the price and quantity of paper exchanged in the economy.
- Now, consider what happens if suppliers decide to sell recycled paper instead. If it costs the same as normal paper to produce, S wouldn't change. But consumers are more willing to buy recycled paper because it helps the environment, so at any given price, quantity demanded goes up. The demand curve shifts right, from D1 to D2, and intersects S at a higher price. Therefore the price of recycled paper is higher, even if the cost of production is equal. --140.180.254.209 (talk) 03:46, 26 March 2013 (UTC)
- You're dealing with the short run. If firms are making profits, more firms are going to enter the market increasing supply. The price will be reduced. Again, long term price in a perfectly competitive market is dependent on ATC. Ryan Vesey 03:49, 26 March 2013 (UTC)
- He's also dealing with a simple two-class market, comprising only sellers and buyers. Real markets aren't that simple - there's usually at least one layer of middle men, and often more. Take my example of Australian clothing prices above. Asian factories entered the market, which might be expected to drive prices down. They did, but not at the retail level - the shops simply took a greater profit, knowing what the consumer was accustomed to paying. Economic/pricing theory is a bit like psychology - there's a bit of sound theory, but you are dealing with human buyers, and humans often don't do rational things. Like those who buy a $120,000 Mercedes when a $35,000 GM/Holden is just as good, technically - but the GM doesn't say to the neighbours "look at me - I draw such a good salary I can waste it on image." Mercedes increased their sales by substantially increasing their prices. If GM did that, sales would most likely nose-dive. Wickwack 58.164.230.22 (talk) 04:09, 26 March 2013 (UTC)
All of this discussion of pricing and economics is beside the point of my question. The question is about whether or not producing paper from recycled paper is more or less expensive than producing it from regular paper. And the question applies to other recycling as well. Bubba73 You talkin' to me? 04:21, 26 March 2013 (UTC)
- This isn't the most scholarly source I've ever found, but it states that the extra steps required in the production of recycled paper increases the costs. Ryan Vesey 04:29, 26 March 2013 (UTC)
- (edit conflict) [17]. --Jayron32 04:30, 26 March 2013 (UTC)
- Bahaha, I love it. Ryan Vesey 04:39, 26 March 2013 (UTC)
- Oh, haha. Looks like Google worked well for both of us. LOL. --Jayron32 04:40, 26 March 2013 (UTC)
(e.c. I haven't read the above yet.) Here's another case. We saved up aluminum cans for a couple of years and then I took them to be recycled and got a little money. We did this twice. Both times the gas for me to take the cans to the recycling center cost about half of what I got for the cans. And that isn't counting other car expenses, the environmental impact, and my time and trouble. Is it worth it - economically and environmentally?
Similarly, when we lived in the Atlanta area, the county charged people to pick up their recycling. Is it worth it if they don't get enough out of the recycled materials to pay for the cost?
Now we put things in a recycle bin which is picked up every two weeks. How can it pay for the county to send one of those big trucks down my street to pick up what is probably a few cents' worth of recycling at each house? And what about the environmental impact of the truck, etc.? Bubba73 You talkin' to me? 04:42, 26 March 2013 (UTC)
- Thanks - that link is to the point of my question regarding paper. Bubba73 You talkin' to me? 04:46, 26 March 2013 (UTC)
Another consideration with products made from recycled material is that some organizations have policies that require the use of recycled products. I used to work for a government agency which was under the directive of a law that required the use of recycled products wherever one was available. The administrative staff had to order recycled paper for the office regardless of its higher cost. — Preceding unsigned comment added by 148.177.1.210 (talk) 14:38, 26 March 2013 (UTC)
Chemical reaction and water vapor related questions
Some questions related to chemical reactions and water vapor. Feel free to answer any of them.
1. Is there any method (or trick) by which one can predict what will form after a reaction? Consider the following two reactions-
Cu + H2SO4 →
C2H6 + O2 →
How can one know what product will form after these reactions?
2. Which is the most reactive metal and which is the most reactive non-metal?
3. Clouds hover in the sky (carrying huge amounts of water) even though water has a higher density than air. What is the reason behind this?
4. Why does water vapor rise into the sky even though water has a higher density than air? It should remain on the earth. Yellow Hole (talk) 08:32, 26 March 2013 (UTC)
- I can offer an answer on question 3. A cloud is a colloid (or mixture) of microscopic droplets of liquid water dispersed among the millions of molecules of nitrogen, oxygen and other gases comprising the air. Even though these microscopic droplets of liquid water are much denser than the surrounding air, the force exerted on each one by Brownian motion (violent impacts from colliding gas molecules) is much greater than the weight of the droplet, so the weight becomes insignificant. Any droplets that do wander out of the bottom of the cloud (into air of less than 100% relative humidity) promptly evaporate, so this process is invisible. If and when the interior of the cloud cools sufficiently, droplets increase in size and coalesce with others. When this process passes a critical point, large water droplets accumulate and begin falling because their weight becomes much more significant than the forces due to Brownian motion. When these large water droplets emerge from the bottom of the cloud we say it is raining. Dolphin (t) 08:55, 26 March 2013 (UTC)
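As a back-of-the-envelope illustration of why small droplets effectively hang in the air, here is a Python sketch using Stokes' law (which the reply above does not mention) with assumed droplet sizes; even a fairly large 10 µm-radius droplet falls at only about a centimetre per second in still air.

    # Stokes' law terminal velocity for a small sphere: v = 2 r^2 g (rho_p - rho_f) / (9 mu)
    g = 9.81            # m/s^2
    rho_water = 1000.0  # kg/m^3
    rho_air = 1.2       # kg/m^3
    mu_air = 1.8e-5     # Pa.s, approximate dynamic viscosity of air

    def terminal_velocity(radius_m):
        """Settling speed in still air; valid only for very small drops (low Reynolds number)."""
        return 2.0 * radius_m ** 2 * g * (rho_water - rho_air) / (9.0 * mu_air)

    for r in (1e-6, 5e-6, 10e-6, 50e-6):   # 1 um cloud droplet up to a 50 um drizzle-sized drop
        print(f"r = {r * 1e6:.0f} um: v = {terminal_velocity(r) * 1000:.1f} mm/s")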
.
- In regard to questions such as what happens when you allow Cu + H2SO4 to react (reacting a metal and an acid), chemists learn that this is a certain class of reaction, and rules apply that lead you to Cu + 2H2SO4 → SO2 + 2H2O + Cu2+ + SO4^2− at ordinary temperatures. Ratbone 120.145.20.58 (talk) 10:21, 26 March 2013 (UTC)
- In regard to what happens when you react C2H6 with O2, i.e. combusting a hydrocarbon fuel (ethane in this case), the problem is a lot more complex. The rate of reaction and what products you get are very highly temperature dependent, especially at low temperatures for the rate, and at high temperatures for the products. For complete combustion, you'd need not C2H6 + O2 as you've written, but C2H6 + 3.5O2 → 2CO2 + 3H2O, but this will only happen at low temperatures, and it does not occur in a single step, but in a multi-step chain reaction. In general, for complete combustion of a fuel CnHm into steam and carbon dioxide, you obviously need n + 0.25m moles of O2 per mole of fuel. At very high temperatures, what you will get is a mixture of mostly CO, O2, O, H2, and H.
- The combustion of hydrocarbons is not fully understood at present, though good progress is being made. An approximate answer can be obtained by identifying and writing out the individual steps in the chain reaction, the modified Arrhenius equation coefficients (if known) for each step, and all the possible products, and then solving the set of reactions over time. As this involves a large number of non-linear simultaneous equations that don't usually converge very fast, considerable computer time is required. And I do mean considerable. Fortunately, published data for Arrhenius coefficients exist for most reaction steps, more gets published as time goes on, and we are at the stage where non-critical reaction steps can be guessed and overall accuracy is still tolerable. The rate of any single-step gas-phase chemical reaction may be predicted by the modified Arrhenius equation:-
- R = a · T^b · e^(−c/(Ro·T)) · [A]^n · [B]^m
- where R is the rate; a, b, and c are constants (generally determined by measurement); T is temperature; and [A] and [B] are the concentrations (gas partial pressures) of the two reactants A and B. Ro is the universal gas constant, 8.3143 kJ/(kmol·K). The constant c is often termed the activation energy. n and m for this application are integers corresponding to the stoichiometric coefficients of the species. Considerable ingenuity is required to measure the constants and, just to make it interesting, they generally vary with temperature.
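To make the form of the equation concrete, here is a minimal Python sketch that evaluates a single rate of this kind; the coefficients a, b and c below are placeholders for illustration only, not published values for any real reaction step.

    import math

    Ro = 8.3143  # kJ/(kmol*K), universal gas constant

    def modified_arrhenius_rate(a, b, c, T, conc_A, conc_B, n=1, m=1):
        """R = a * T^b * exp(-c / (Ro*T)) * [A]^n * [B]^m, with c in kJ/kmol."""
        return a * T ** b * math.exp(-c / (Ro * T)) * conc_A ** n * conc_B ** m

    # Placeholder coefficients, purely to show how the terms combine
    print(modified_arrhenius_rate(a=1.0e8, b=0.5, c=7.0e4, T=1500.0, conc_A=0.2, conc_B=0.1))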
- To give you a feel for the problem, I list below the most important chain steps for a much simpler combustion reaction, xH2 + yO2. For maximum accuracy, 136 reaction steps need to be considered.
- The products of reacting hydrogen H2 and oxygen O2 are-
- H, H2, O, O2, O3, OH, HO2, H2O, and H2O2.
- In theory there is an infinite series of possible products, but species not in the set of 9 above occur only at less than parts-per-million levels and can be ignored.
- The most important reactions (out of a total of 136) between these nine products are:-
- H + O2 → OH + O
- H + OH → H2 + O
- H + H2O → H2 + OH
- H2 + O → H + OH
- H2 + O2 → HO2 + H
- H2 + OH → H2O + H
- Using only these 6 reactions in a calculation will result in considerable concentration errors. Calculations will be within 10% over a wide temperature range if the most critical 23 equations are used. Included in the 23 equations are reactions of the form X + Y + M → Z + M, where Z is a combination of X and Y and M is each of the 9 species, with different Arrhenius coefficients for each species. All these reactions proceed simultaneously at different and changing rates until equilibrium is reached.
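Purely as an illustration of what "solving the set of reactions over time" means in practice, here is a toy Python sketch that integrates a two-reaction subset with made-up rate constants (nothing like the real 23- or 136-step mechanism, and assuming SciPy is available):

    from scipy.integrate import solve_ivp

    # Toy subset:  R1: H + O2 -> OH + O    R2: H2 + OH -> H2O + H
    k1, k2 = 5.0, 20.0   # made-up rate constants, arbitrary units

    def rhs(t, y):
        H, O2, OH, O, H2, H2O = y
        r1 = k1 * H * O2
        r2 = k2 * H2 * OH
        return [-r1 + r2,   # H
                -r1,        # O2
                r1 - r2,    # OH
                r1,         # O
                -r2,        # H2
                r2]         # H2O

    y0 = [1e-3, 0.5, 0.0, 0.0, 1.0, 0.0]   # a small radical seed plus H2 and O2
    sol = solve_ivp(rhs, (0.0, 5.0), y0, method="LSODA")
    print(dict(zip(["H", "O2", "OH", "O", "H2", "H2O"], sol.y[:, -1])))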
- In one sense, the case of reacting H2 and O2 is simple, as H2, O2 and the other 7 species are all transparent. So there is no black-body absorption of light, and no emission of light. As soon as you add carbon atoms (and carbon is an ideal black body), you get both the emission of light and light accelerating the reactions. This can be handled by tweaking the Arrhenius constants, but the problem is knowing how much tweaking is needed.
- In short, as for finding the products of reacting a hydrocarbon fuel with oxygen, we can say what they are for complete combustion (steam and carbon dioxide), but if anyone says to you he can work out a practical case with a page or two of figuring, he's having you on.
- Ratbone 120.145.20.58 (talk) 09:25, 26 March 2013 (UTC)
.
- In answer to (4), water vapor mixes with and rises up through the air because it is not liquid water but a gas, that is, individual molecules darting about. Liquid density has no relevance for a gas. Any substance in liquid form (e.g. water pooled on the ground) has, at any given temperature, a vapor pressure. Molecules leave the liquid in order to establish the vapor pressure. In a sealed rigid container containing nothing but H2O, at temperatures high enough to prevent freezing (above ~0 °C), there will be a portion of the water in liquid form and the remainder in gaseous form. Just enough will be a gas to establish the vapor pressure for that temperature. There cannot be a full vacuum. Vapor pressure rises sharply with temperature, so at high temperatures there will be more gaseous H2O and less liquid in the container. In the atmosphere the H2O molecules mix thoroughly with the air according to the laws of diffusion, driven by Brownian motion, until low temperatures at altitude cause the water to precipitate out to form clouds. Ratbone 120.145.32.100 (talk) 11:07, 26 March 2013 (UTC)
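To show how sharply vapor pressure rises with temperature, here is a small Python sketch using the Magnus approximation for the saturation vapor pressure of water over a liquid surface (an empirical formula, not something taken from the reply above):

    import math

    def saturation_vapor_pressure_hpa(temp_c):
        """Magnus approximation, reasonable for roughly 0-60 C."""
        return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

    for t in (0, 10, 20, 30, 40):
        print(f"{t} C: about {saturation_vapor_pressure_hpa(t):.1f} hPa")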
Why does light follow straight line?
We have a number of proofs regarding the speed and path followed by light. My question is why does light always follow straight line and its speed in vacuum is constant? 106.215.97.55 (talk) 08:50, 26 March 2013 (UTC)
- Science makes observations of the universe and the natural world. Science rarely addresses the question why? For example, science observes that light mostly travels in straight lines, and that the speed of light is constant in a vacuum, regardless of latitude, altitude etc. Theologians might be interested in speculating why it is so, but scientists are not. However, scientists are interested in making observations to find conditions under which light does not travel in straight lines, and in trying to find conditions of vacuum in which there is a discernible difference in the speed of light. Dolphin (t) 09:07, 26 March 2013 (UTC)
- Actually, science continually asks "Why is it so?", as a famous scoffer of chocolate (http://en.wikipedia.org/wiki/Julius_Sumner_Miller) used to say multiple times at each appearance.
- It can be proven mathematically that the speed of light in a vacuum can be calculated from the permittivity and permeability of free space, which are constants, and by other mathematical means.
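For example, a quick numerical check in Python using the standard SI values of the vacuum permittivity and permeability gives the familiar figure:

    import math

    eps0 = 8.8541878128e-12   # F/m, vacuum permittivity
    mu0 = 4 * math.pi * 1e-7  # H/m, vacuum permeability (classical defined value)

    c = 1.0 / math.sqrt(mu0 * eps0)
    print(c)  # about 2.9979e8 m/s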
- In saying light travels in straight lines, what we really mean is that the wavefront is not tilted. Tilting can only occur where there is a transition from one permittivity or permeability value to another - this can only happen if what the light is passing through is not a vacuum. Light does travel in curved lines when passing through material that has a gradient of permittivity or permeability. Ratbone 120.145.32.100 (talk) 11:48, 26 March 2013 (UTC)
- I agree that the speed of light in a vacuum can be determined by taking account of the permittivity and permeability of free space, and nothing else, but that doesn't answer the question "why is it so?" Scientists have little interest in why vacuum displays this particular property. Theologians might say that vacuum is a concept created by God for the benefit of mankind, and that God dictated all the properties that vacuum will display. However, scientists are not much interested in such an explanation. Dolphin (t) 12:28, 26 March 2013 (UTC)
- That's because it offers nothing, it offers no understanding. It's just saying it is because it is. What's the good of that? You can say scientists are not interested in why as much as you like - that doesn't make it right. Ratbone 120.145.32.100 (talk) 13:11, 26 March 2013 (UTC)
- The problem is that people have two different things they mean when they use a word like "why". One of them is "by what mechanism did this come to be", as in "Why are the mountains on Earth located where they are?". Science can totally answer those questions. The other use of "why" is "For what purpose is it this way", for example "Why am I here?" or "Why are the properties of the universe the way they are, rather than some other way?". Those are fundamentally unanswerable by science because they don't present falsifiable concepts. Science can't give you purpose or meaning. It can tell you how things work, just not for what purpose (even if there is no purpose). Such questions must be answered by other methods.--Jayron32 16:01, 26 March 2013 (UTC)
- There are no "other methods" that can discover what purpose a mountain has if it has absolutely no purpose. If something does have a purpose--for example, if an aspect of the universe was designed by an intelligent entity--that has observational consequences, and would most definitely be a scientific question. There's simply no sense in which science cannot answer "why" questions. The most we can say is that we don't have enough knowledge to answer some "why" questions, but saying they can never be answered is arrogant and an insult to future generations. --140.180.254.209 (talk) 17:35, 26 March 2013 (UTC)
- For the first part of your question - "why does light follow straight lines" - see Fermat's principle (and note that in general relativity you must replace "straight line" with "geodesic"). Gandalf61 (talk) 13:22, 26 March 2013 (UTC)
- For the first part, an alternative explanation is Newton's first law--everything moves in a straight line at constant speed in the absence of external forces. If a photon doesn't encounter anything, it moves in a straight line, just like everything else.
- For the second part, see electromagnetic wave equation. From Maxwell's equations, which encompass all of classical electromagnetism, you can derive an equation that represents a wave propagating through space. The speed of the wave can be directly read off from the equation. --140.180.254.209 (talk) 17:35, 26 March 2013 (UTC)
Relative speed thought experiment
Hi all, Imagine 2 spaceships of about the same size as the space shuttles (to avoid quantum effects, mostly). The ships both set off with two very accurate clocks, which both start at the same time t=0. The only constraint on their movement is that they both have to observe the same relative speed of each other (i.e. if ship A records a velocity of v then so must B, at all times). The observations are also constrained by modern interpretations of physics (such as that they can't record the other ship's velocity instantaneously).
Under classical mechanics their clocks will both record the same times at all points. Is there a way to use relativistic effects to make this not the case? For instance, is there a way to use a large mass to distort spacetime to make the clocks record different times, whilst the ships both observe the same velocities? Or a large moving third mass? Or close-to-light-speed travel?
Thanks! 80.254.147.164 (talk) 11:14, 26 March 2013 (UTC)
- One thing to note is that mass, as well as velocity, distorts space-time. Technologous (talk) 12:10, 26 March 2013 (UTC)
- Just for phrasing clarity, I'd suggest that you go with the two ships needing to remain at rest relative to each other. That's what "observe the same velocity" amounts to, and helps illustrate why you then don't have to further care about speed. But as above, mass will impact the clocks, per gravitational time dilation. The clock in the deeper gravity well will run more slowly. If your clocks and measurements are sufficiently precise, this doesn't even have to be a "large" mass. — Lomn 13:50, 26 March 2013 (UTC)
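As a rough numerical illustration of gravitational time dilation (weak-field Schwarzschild approximation, with Earth as the assumed mass; the altitude figure is just an example, not something from the replies above):

    G = 6.674e-11        # m^3 kg^-1 s^-2
    c = 2.998e8          # m/s
    M_earth = 5.972e24   # kg
    R_earth = 6.371e6    # m

    def clock_rate(r):
        """Clock rate relative to a far-away observer, for a clock hovering at radius r."""
        return (1.0 - 2.0 * G * M_earth / (r * c * c)) ** 0.5

    low = clock_rate(R_earth)              # clock hovering at the surface
    high = clock_rate(R_earth + 2.02e7)    # clock roughly at GPS orbital altitude
    print(high - low)                      # ~5e-10: the deeper clock runs slower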
- Ah, that makes sense. I had forgotten that relative rest is the same thing; I think I had just overcomplicated things. 80.254.147.164 (talk) 13:59, 26 March 2013 (UTC)
Product of two scalars
The product of two vectors may be a scalar (dot product) or a vector (cross product). But when we multiply two scalars, what is the result - a scalar or a vector? Technologous (talk) 12:05, 26 March 2013 (UTC)
- A scalar multiplied by a scalar is also a scalar. Dolphin (t) 12:15, 26 March 2013 (UTC)
- I was also thinking the same, but here is a contradiction. We know Pressure(P) = Force(F)/Area(A), hence it can be re-written as F = P*A. Here, both 'P' and 'A' are scalars, but their product 'F' is a vector. So, what is your opinion on this? Technologous (talk) 13:09, 26 March 2013 (UTC)
- I would dispute that force, in F=P*A, is a vector. Given a pressure P and an area A, what direction is the force acting in? Based on that information, all you derive is the magnitude of the force, i.e. a scalar. — Lomn 13:53, 26 March 2013 (UTC)
- Force can be a vector if you use an area vector for the area. No contradiction here. Gandalf61 (talk) 13:59, 26 March 2013 (UTC)
- This means that in a few cases force could also be treated as directionless. Thanks for correcting me. Technologous (talk) 15:06, 26 March 2013 (UTC)
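A small numerical sketch of the area-vector point above, treating the area as a vector along the surface normal (the numbers are arbitrary, and NumPy is assumed to be available):

    import numpy as np

    p = 2000.0                             # Pa, scalar pressure
    area_vec = np.array([0.0, 0.0, 0.5])   # m^2, a 0.5 m^2 face pointing in +z

    force = p * area_vec                   # scalar * vector gives a vector force
    print(force)                           # [   0.    0. 1000.]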
Men's body hair
Why do some men have so much body hair? What is the most efficient and cheapest way to remove it? Is such a method suitable, or does it have side effects? — Preceding unsigned comment added by 14.102.26.19 (talk) 16:25, 26 March 2013 (UTC)
- Bird never make nest in bare tree. ;-) Anyway our article on this is Hair removal Dmcq (talk) 16:41, 26 March 2013 (UTC)
- Here's the article on Body hair. Some men have more (or more visible) than others, but I don't think there's any medical definition of "too much". thx1138 (talk) 17:29, 26 March 2013 (UTC)
- There is such a thing as too much hair, Hypertrichosis, but I doubt that's what the OP is talking about. Anyway, having a lot of body hair should help you get in touch with your more distant ancestors :P 109.99.71.97 (talk) 18:01, 26 March 2013 (UTC)