Wikipedia:Reference desk/Archives/Science/2012 February 3

From Wikipedia, the free encyclopedia
Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


February 3

Identifying a star on the Hertzsprung–Russell diagram

Let's say there is a star I have to identify on the Hertzsprung–Russell diagram. How can I identify it based on its apparent magnitude (in other words, its brightness)? Basically I need someone to explain to me how to use the diagram and how to interpret it. OK, I have an example problem; I already know the correct answer but I don't understand it. Where would a Mira variable be located on the Hertzsprung–Russell diagram? The answer is V. I was given the Mira variable's brightness. Pendragon5 (talk) 01:05, 3 February 2012 (UTC)[reply]

You are going to have to convert it to an absolute magnitude by adjusting based on the distance. A Mira variable should move around a bit depending on its brightness cycle, so one point would not be enough. Graeme Bartlett (talk) 08:56, 3 February 2012 (UTC)[reply]
Yeah, that's what I don't really get. There are many things around the HR diagram that determine where the star should be, but they only give me one piece of information. Let's say they do give me the absolute magnitude; how do I do it then? The answer was V, so I would guess all they want is just a basic answer. This letter V must represent something on the HR diagram which I don't know. Pendragon5 (talk) 12:43, 3 February 2012 (UTC)[reply]
If you look at the big picture at the top of that article, you'll see that V refers to the main sequence (if that's the only answer they gave, then they must be asking you for just the general category of star, rather than anything more detailed). You'll also see that the different categories only overlap between about -3 and +5 in absolute magnitude (although dimmer than +10 could be a white dwarf, if you are counting those). If the absolute magnitude you are given is outside that range, then you can be pretty sure about which group it will fall into. If it's in that range, then you need more information (firstly, colour, but if it's blue-white you would need to know its size to distinguish between a giant, sub-giant or main sequence star). --Tango (talk) 13:12, 3 February 2012 (UTC)[reply]
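To make the conversion described above concrete, here is a minimal Python sketch, assuming the standard distance modulus and the rough magnitude ranges Tango mentions (the example star, its distance, and the cut-offs are illustrative assumptions, not a rigorous classifier):

```python
import math

def absolute_magnitude(apparent_mag, distance_pc):
    """Distance modulus: M = m - 5*log10(d / 10 pc)."""
    return apparent_mag - 5 * math.log10(distance_pc / 10.0)

def rough_hr_category(abs_mag):
    """Very rough grouping by absolute magnitude alone, following the
    ranges given above; colour is needed inside the overlap region."""
    if abs_mag < -3:
        return "giant, bright giant or supergiant (classes I-III)"
    elif abs_mag > 10:
        return "white dwarf (or a very dim main-sequence star)"
    elif abs_mag > 5:
        return "main sequence (class V)"
    else:
        return "ambiguous: need colour (and possibly size) to decide"

# Example: apparent magnitude 3.0 at a distance of 25 parsecs
M = absolute_magnitude(3.0, 25.0)
print(round(M, 2), "->", rough_hr_category(M))
# prints: 1.01 -> ambiguous: need colour (and possibly size) to decide
```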
What does the colour (B-V) at the bottom of the diagram represent? Pendragon5 (talk) 20:34, 3 February 2012 (UTC)[reply]
That's the B-V colour. It is one of the measures of color that can be extracted under the UBV photometric system. To measure the B-V color, one photographs a star through a B (blue) filter, and again through a V (visible, actually a slightly yellowish green) filter. The B-V color is the difference in apparent magnitude when seen through the B and V filters. If the star's light output is more blue (smaller B magnitude) the B-V value will be smaller (or even negative) and the star will fall further to the left on the HR diagram. If the star is more red (less blue, and therefore greater B magnitude), B-V will be more positive and the star will sit further to the right on the HR diagram. TenOfAllTrades(talk) 22:01, 3 February 2012 (UTC)[reply]
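As a tiny illustration of the arithmetic (the magnitudes below are made-up numbers, not real measurements), B-V really is just a subtraction of two apparent magnitudes:

```python
def b_minus_v(mag_B, mag_V):
    """B-V colour index: magnitude through the B (blue) filter minus
    magnitude through the V (visual) filter. Smaller or negative values
    mean a bluer star (further left on the HR diagram); larger values
    mean a redder star (further right)."""
    return mag_B - mag_V

print(round(b_minus_v(0.05, 0.03), 2))  # 0.02: a white, roughly Vega-like colour
print(round(b_minus_v(2.50, 1.00), 2))  # 1.5: a distinctly red star
```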

Do decay heating and nuclear fission generate the same amount of energy in the end?

Suppose I have a certain amount of HEU and want to harvest the greatest amount of energy from it in the form of heat. There are two possible methods:

1. Decay heating over an infinitesimal amount of time.

2. Construct and detonate a fission bomb. (assuming all the uranium is consumed in the fission process and that the fission products contain a negligible amount of untapped decay energy)

Would the two methods generate the same amount of heat in the end? I'm guessing yes but I don't know enough about the subject matter to justify it.

And no, I'm not a terrorist. This is just a thought experiment to see whether it's feasible to use nuclear weapons to geologically reactivate Mars's core.99.245.35.136 (talk) 09:49, 3 February 2012 (UTC)[reply]

I think you'd have to explode something like a Hiroshima type bomb every ten minutes to equal the amount of heat generated in the Earth's core by radioactive decay. I think we'd have huge problems trying to do that even using fusion bombs, never mind the problem of getting it near the centre. Plus I don't see the point. Dmcq (talk) 12:47, 3 February 2012 (UTC)[reply]
No, they wouldn't release the same energy, because you won't end up with the same results. The decay products will, after a while, mostly end up as lead. The fission products will be all sorts of things. Your problem is that the radioactivity of the fission products isn't negligible to your calculation. If you include the decay heat from the fission products, you should end up with approximately the same total energy, because they will all decay into lead eventually as well. (It might not be exactly the same because there are different isotopes of lead they might decay into, and you may have to wait billions of years before they actually get to lead if they decay to something else stable first.) The important thing is to compare what you have at the beginning with what you have at the end. If they are the same, then the energy released must be the same. However, I don't think your plan to heat up Mars's core would work - you would probably destroy most of the planet in the process, even if you could find enough fissile material to do it. --Tango (talk) 13:18, 3 February 2012 (UTC)[reply]
Side note: "infinitesimal" means "very small", not "very large". --Sean 14:41, 3 February 2012 (UTC)[reply]
I wouldn't think it will be the same. From a thermodynamics point of view, a fission process would be irreversible adiabatic (nearly), while the decay process would probably be reversible isothermal (because it takes place quasi-statically). The reversible isothermal process would probably generate more energy. I have no background in Nuclear Physics, but only thermodynamics, so excuse any inconsistencies please. Lynch7 14:49, 3 February 2012 (UTC)[reply]
This should be pretty easy to quantify. The amount of energy released by an average U-235 fission is around 200 MeV. What's the energy released over the entire course of a U-235 decay chain? According to this chart, it's at most 46.4 MeV by the time it gets to lead. Comparing these energies directly of course assumes 100% efficiency in the bomb. If my understanding is right, they would be equal if the bomb is only 23.2% efficient, and the decay would win out if the bomb was less efficient than that. The Little Boy bomb was around 1% efficient, the Fat Man bomb was about 17% efficient, but later bombs were much more efficient than either of those. (If I've made some sort of idiotic assumption or error, someone please correct me!) Again, this neglects fission products, which are pretty radioactive, and would tip the balance quite decisively in the direction of the bomb. --Mr.98 (talk) 19:14, 3 February 2012 (UTC)[reply]
Out of curiosity, I ran through one set of common fission products, strontium-94 and xenon-140. They give off a total of about 24 MeV from their respective decay chains. So that's over half of the total energy from the total U-235 decay chain by itself, without even including the fission yield. --Mr.98 (talk) 21:09, 4 February 2012 (UTC)[reply]
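Putting the numbers above into a few lines of arithmetic (same figures as quoted in this thread; the break-even value is just a ratio, so treat it as a back-of-the-envelope sketch):

```python
# Figures quoted above, all per U-235 nucleus, in MeV
fission_energy       = 200.0  # typical energy released per U-235 fission
decay_chain_energy   = 46.4   # total energy of the natural U-235 decay chain down to lead
product_decay_energy = 24.0   # approx. decay energy of one Sr-94 + Xe-140 fission-product pair

# Bomb efficiency needed just to match letting the uranium decay naturally
print(f"break-even efficiency: {decay_chain_energy / fission_energy:.1%}")  # 23.2%

# Counting the decay heat of the fission products as part of the bomb's yield
# tips the balance further toward the bomb:
print(f"with product decay: {decay_chain_energy / (fission_energy + product_decay_energy):.1%}")  # 20.7%
```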

is any encryption other than OTP physically secure

I mean in a physical-universe sense (including quantum mechanics or anything else that could be discovered). I don't count "it would merely take a long time to compute with the best-known current algorithms on the best-known current non-quantum computers" as secure. --78.92.81.13 (talk) 13:19, 3 February 2012 (UTC)[reply]

It depends. What do you mean by secure? Many algorithms are secure enough for any practical purpose. But, you seem to be asking in a hypothetical sense if an encryption could be completely secure under any theoretical circumstance. WKB52 (talk) 13:36, 3 February 2012 (UTC)[reply]
A one-time pad is unique in that, properly executed, cryptanalysis is not only practically but theoretically impossible (I think this is what you mean by "physically secure"). Cryptographer Bruce Schneier is fond of noting, though, that modern cryptographic methods are plenty secure, both now and for the foreseeable future, and that other factors (that tricky "properly executed" clause above) are far more likely to cause cryptographic failure -- even (or particularly) in the case of OTP. — Lomn 16:04, 3 February 2012 (UTC)[reply]
The security of a OTP shouldn't be exaggerated - you have to get those numbers from somewhere. For example, you might use a random number generator which is easily reverse engineered. Or you can use a service like HotBits,[1] which I assume has its entire output archived to some handy-dandy NSA decoding disk somewhere in transit. More to the point, that sticker on your computer that says "Intel inside" isn't really talking about the brand of chip they used... Wnt (talk) 17:27, 3 February 2012 (UTC)[reply]
Well, you could generate them from a physically random source (say, use diode noise to fill an entropy pool, and take out only as much entropy as you put in), and unless NSA has invented magic, it's hard to see how they could get at that. The difficulty, of course, is key exchange — somehow you have to get the same random values to both parties.
But if you know in advance with whom you want to exchange secure messages, this can certainly be done. Just fill up some DVDs with the random bits, and transfer the DVDs in such a way that they never leave the physical custody of trusted parties. It's a lot less convenient than public-key methods, but it can be done. --Trovatore (talk) 17:37, 3 February 2012 (UTC)[reply]
Are there actually any cases of people being able to crack modern, strong encryption schemes because of insufficient pseudorandomness? (I'm not counting cases like VENONA where they accidentally duplicated the same random number sheets more than once which was a lucky typo, not a case of the algorithm being wrong. I'm also not counting toy cases where people demonstrate how pseudorandomness can be misleading — I mean actual cases of intelligently designed systems with intelligently used pseudorandom or quasirandom number generators.) --Mr.98 (talk) 03:24, 4 February 2012 (UTC)[reply]
I don't think that's the question at issue. I think Wnt was assuming a one-time pad based on a PRNG, which as far as I know is not a commonly used encryption scheme, though in principle it should work given a sufficiently strong PRNG. The classical one-time pad is based on true randomness rather than a PRNG. --Trovatore (talk) 04:19, 4 February 2012 (UTC)[reply]
OK, but I just wonder if there's any evidence to support the idea that you can crack a code on the basis of its randomness not being "truly" random, assuming you are using a modern PRNG that isn't being implemented incorrectly. --Mr.98 (talk) 23:26, 5 February 2012 (UTC)[reply]
I think it's worth adding that, in addition to the randomness (or lack thereof) issue raised above, the one-time pad also needs to be "as large as or greater than the plaintext, never reused in whole or part, and kept secret" (to quote our article) in order to be as impervious to cryptanalysis as Lomn initially suggested. Jwrosenzweig (talk) 06:55, 4 February 2012 (UTC)[reply]
Yes, Mr. 98 - see here: http://it.slashdot.org/story/08/05/13/1533212/debian-bug-leaves-private-sslssh-keys-guessable - all Debian private SSL/SSH keys ended up being one of only a few hundred thousand possibilities (or thereabouts), due to bad seeding. Probably an inside job, though :) 79.122.74.63 (talk) 15:25, 4 February 2012 (UTC)[reply]
That strikes me as being analogous to the VENONA situation, not someone actually cracking a not-broken PRNG on account of its being pseudorandom. The problem there wasn't the fact that it was pseudorandom, it was that it wasn't seeding correctly, ja? --Mr.98 (talk) 23:26, 5 February 2012 (UTC)[reply]
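To illustrate why that kind of seeding failure is so devastating (a toy sketch, not the actual OpenSSL code): if the only entropy going into key generation is something as small as a process ID, an attacker can simply regenerate every possible key.

```python
import random

def toy_keygen(seed, nbytes=16):
    """Toy key generator, fully determined by its seed.
    (A stand-in for 'derive a key from an RNG' - not real OpenSSL.)"""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(nbytes))

# Victim generates a key, but the only entropy is a 15-bit process ID:
victim_key = toy_keygen(seed=12345)

# Attacker enumerates all 32,768 possible seeds and recovers the key almost instantly:
recovered_seed = next(s for s in range(2**15) if toy_keygen(s) == victim_key)
print(recovered_seed)  # 12345
```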
The hard part of cryptography is key management. Encrypting something with an OTP is like locking it in an uncrackable safe. But now what are you going to do with the key to the safe? Lock it in another safe? What are you going to do with the key to that safe? Keep in mind that the key is as large as the thing it's protecting. So the safe gains you very little—you're trading the security of one item for the security of another item of the same size. You can leave the key unlabeled and hope nobody figures out which safe it goes with—that's one kind of security through obscurity.
People talk above about using a pseudorandom number generator to make an OTP. That's impossible because it's part of the definition of an OTP that it uses truly random bits. Using pseudorandom bits in the manner of an OTP has a different name: a stream cipher. -- BenRG (talk) 19:48, 4 February 2012 (UTC)[reply]
Strictly speaking that might be true, but I think the meaning is clear to anyone who understands what a one-time pad is in the first place. --Trovatore (talk) 23:42, 4 February 2012 (UTC)[reply]
When you say "understands what a one-time pad is" you must mean "understands what it means to exclusive-or two bit streams together". Some explanations of OTP seem to spend almost all of their time on that minor detail. Since the output of a stream cipher is also exclusive-ored with the plaintext, someone with that "understanding" of OTP might have a head start on "understanding" stream ciphers too. What OTP is actually about, though, is having as many possible keys as possible plaintexts. -- BenRG (talk) 00:50, 5 February 2012 (UTC)[reply]
Oh, responding to your first paragraph: The key is the same size, but it doesn't need to be generated at the same time. You can exchange the key in advance (by exchanging physical artifacts such as DVDs), and then when the time comes that you know what secret message you want to send, you're good to go, and you can transmit the message faster at that time than you could get a DVD to the recipient. Obviously this isn't convenient enough for, say, online banking, but for certain applications it could be workable. --Trovatore (talk) 23:48, 4 February 2012 (UTC)[reply]
That's true, and OTPs have been used in that way by major governments (and they've screwed it up and their messages have been cracked). I think this is one of those if-you-have-to-ask things. Only in very specific circumstances is using an OTP a good idea, and people in those circumstances are not going to ask the likes of me for advice on the subject. So if someone does ask me, I can safely tell them they shouldn't do it. -- BenRG (talk) 00:50, 5 February 2012 (UTC)[reply]
There's an easy solution to this. Use the OTP as the lowest layer (pretend it's like the physical wire the bits go through), but make sure it's higher than any actual physical leak of information!! Then, at a higher layer, pretend all you have is the public Internet, and proceed to use the most secure protocol you have available. In this way, if you foul up your OTP exchange, you are left with the most secure protocol you have available. However, if you don't foul it up, you have the most security imaginable. The downside, as always, is that the parties need to know ahead of time that they will want to communicate, and they must carry with them as much OTP material as they will ever transfer in bits over the wire. Finally, you should destroy the OTP somehow after using it, as perhaps it could be recovered. On the whole, why not have an OTP layer that you can prove secure (though error-prone for practical rather than theoretical reasons), underneath higher layers that you can't prove secure? 188.157.9.122 (talk) 13:15, 5 February 2012 (UTC)[reply]
There's obviously no limit to how many times you can encrypt something, or what methods you can use. But there is the issue that, in a large organization such as a company or army, people might not always do the "right thing" when any one of those layers gets a bug in it and their comm channel turns into useless gibberish. There's a certain chance that one of them is just going to retransmit what they had through a clear channel or using an easily broken code hoping no one notices - and then, you've revealed not only that document, but possibly something about your fancy cipher system that just failed. Wnt (talk) 18:26, 6 February 2012 (UTC)[reply]

about mechanical force

Which is easier: to pull a stationary thing or to push it? Which takes less force, pulling or pushing, if ideal conditions are assumed, such as the friction being the same for both pulling and pushing? — Preceding unsigned comment added by 27.124.12.98 (talk) 13:37, 3 February 2012 (UTC)[reply]

With the standard assumptions of an idealized physics problem, pulling or pushing an object requires the same force. In the real world, however, consider an example such as an adult human moving a 1 m cube of wood. Pushing often introduces a downward component of force. This can increase friction, making the object harder to move. Likewise, pulling might be done using a rope, which could introduce an upward force, reducing the friction and making the object easier to move. SemanticMantis (talk) 14:07, 3 February 2012 (UTC)[reply]
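A small worked example of that point (made-up numbers, using nothing more than the standard block-on-a-level-floor friction model): angling the applied force downward increases the normal force and hence the friction, while angling it upward does the opposite.

```python
import math

def sliding_force(mass_kg, mu, angle_deg, mode):
    """Force needed to just slide a block on a level floor when the applied
    force is tilted angle_deg from the horizontal.
    mode='pull': force angled upward (reduces the normal force);
    mode='push': force angled downward (increases the normal force)."""
    g = 9.81
    theta = math.radians(angle_deg)
    sign = 1.0 if mode == "pull" else -1.0
    return mu * mass_kg * g / (math.cos(theta) + sign * mu * math.sin(theta))

# Illustrative numbers: 50 kg crate, friction coefficient 0.5, force applied at 30 degrees
print(round(sliding_force(50, 0.5, 30, "pull"), 1))  # ~220 N
print(round(sliding_force(50, 0.5, 30, "push"), 1))  # ~398 N
print(round(sliding_force(50, 0.5, 0,  "pull"), 1))  # ~245 N (purely horizontal force)
```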
There's more, isn't there? For pushing, isn't the best location for a vantage point somewhere such that you can lean your own weight against the wood, say at such a distance away that your body forms a 45 degree angle with it? Whereas, for pulling, you can only lean away from the wood as far as your arms go (or put another way, the same 45 degree angle would put your feet INSIDE the block of wood, with you leaning outwards and pulling). If you have a piece of rope, you can lean as far away as you like... I would say for this reason in an ideal world it's far easier to push than to pull anything, except when there is a rope or the like attached of sufficient length for you to lean however you like, in which case it's roughly the same... 188.156.144.183 (talk) 15:47, 3 February 2012 (UTC)[reply]
And note that pushing is unstable, and the object will want to "fish tail", as in a rear-wheel drive car, while pulling is inherently stable. Pushing some objects without wheels or skis also doesn't work well, since they will want to dig into the ground at the front. StuRat (talk) 19:02, 3 February 2012 (UTC)[reply]
that's interesting (and same as the friction answer given earlier, isn't digging in just increasing friction :)). Do you think the push/pull distinction above is valid StuRat? 79.122.90.56 (talk) 21:01, 3 February 2012 (UTC)[reply]
"Digging in" can go beyond just increasing friction. The usual def of friction is that the only work it does is to create heat. But if the object plows into the ground, it may also do work moving dirt. Or, consider a case where you push or pull a brick along a sidewalk. At the seam between two slabs, the brick may dig in, and then any pushing force is trying to move the entire slab of sidewalk. Note that this all assumes that you push above the center of gravity. Pulling above the center of gravity will also tend to make the front edge dig in, but, since you likely need to attach a rope or chain to pull an object, you can just as easily attach it below the center of gravity. StuRat (talk) 21:09, 3 February 2012 (UTC)[reply]
In the case of pulling, much depends on the presence of a handle or other device to pull onto. Consider also the case of pulling vs. pushing up an inclined ramp. ~AH1 (discuss!) 01:40, 5 February 2012 (UTC)[reply]

about transformer

How can a transformer be operated on a DC supply? — Preceding unsigned comment added by Lalit7joshi (talkcontribs) 13:42, 3 February 2012 (UTC)[reply]

See Inverter (electrical)#Basic designs. One way is by switching the DC back and forth so that it follows two different directions through the transformer primary. A distorted AC output is produced from the secondary of the transformer. The output can be at a different voltage from the input, depending on the turns ratio and frequency of the switching. In automobiles years ago, an electromechanical vibrator circuit did the switching, to produce high voltages via a transformer and then a rectifier to operate vacuum tube radios. In modern circuits used to step up DC to high DC voltages or to convert DC to AC, transistor switches or other solid state devices such as thyristors are used in place of the vibrator. Edison (talk) 14:53, 3 February 2012 (UTC)[reply]
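A toy numerical sketch of the idea described above (idealized: a perfect square wave, a lossless transformer, and made-up turns numbers):

```python
def chopped_dc(v_dc, t, switch_hz=60):
    """Toy 'vibrator' model: DC flipped back and forth through the primary,
    giving a square wave of amplitude v_dc (no losses, no waveform shaping)."""
    half_period = 1.0 / (2 * switch_hz)
    return v_dc if int(t / half_period) % 2 == 0 else -v_dc

def secondary_voltage(v_primary, n_primary, n_secondary):
    """Ideal transformer: output scales with the turns ratio."""
    return v_primary * n_secondary / n_primary

# 12 V automotive supply, 100:2000 step-up winding, sampled over a few milliseconds
for ms in range(0, 40, 5):
    vp = chopped_dc(12.0, ms / 1000.0)
    print(f"t={ms:2d} ms  primary={vp:+5.0f} V  secondary={secondary_voltage(vp, 100, 2000):+6.0f} V")
```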

Did Homo sapiens get their pale skin and red hair from Neanderthals?

Someone told me that sapiens got pale skin and red hair from interbreeding with neanderthals which already possessed these traits. I seem to recall in my Human Evolution class that neanderthals did indeed have these traits, but they were coded for by different genes so it was just an example of parallel evolution. Who is correct? ScienceApe (talk) 14:54, 3 February 2012 (UTC)[reply]

Neanderthal_admixture_theory#Genetics seems to answer your question. SmartSE (talk) 15:50, 3 February 2012 (UTC)[reply]
For context consider that just about any mammal has red-haired members with some mutation or other in MC1R. Making redheads, genetically, is no big deal. Wnt (talk) 17:21, 3 February 2012 (UTC)[reply]

does near-UV light cause photobleaching of protein?

If they fluoresce under UV, shouldn't they photobleach too? How does the body repair this? 137.54.28.45 (talk) 17:36, 3 February 2012 (UTC)[reply]

Damaged proteins can simply be replaced. Damaged DNA is more serious, but most of us do have a mechanism to repair that. Individuals with xeroderma pigmentosum lack this ability, so must avoid sunlight, as the children in The Others (2001 film). StuRat (talk) 18:37, 3 February 2012 (UTC)[reply]
Autofluorescence isn't very strong for most things, and neither is photobleaching; in fact, as mentioned in that article, advanced glycation end-products produce much of the fluorescence - in other words, damage done by other means already. So it's part of the normal wear and tear, but not that much in the scheme of things, and of course evolution will have tended to lead to the marking of any particularly susceptible proteins for rapid degradation and replacement. See ubiquitination, proteasome, autophagy, endosome, lysosome ... no doubt I'm forgetting lots of biggies. To survive and thrive, cells have to be almost as good at recycling and replacing proteins as they are at making them to start with. Wnt (talk) 19:14, 3 February 2012 (UTC)[reply]

computer screens

Sorry, but for a rather bad pun/joke on another site, can anyone tell me what the letters shown on computer screens are actually made of? What is within the part of the screen they take up that makes them different from the surroundings, or whatever is going on in there?

148.197.81.179 (talk) 17:40, 3 February 2012 (UTC)[reply]

See Computer monitor. Feel free to ask here if anything is unclear. --Daniel 17:43, 3 February 2012 (UTC)[reply]
It depends on the type of monitor being used. On an old style CRT, it's just particles (electrons) being beamed on to a screen that begins to glow due to fluorescence I believe. ScienceApe (talk) 17:52, 3 February 2012 (UTC)[reply]
The letters, as well as everything else on the screen, are made up of pixels. Those are actually each a dot or square, but when tiny dots are placed close together, you see them as letters or pictures, as in a dot matrix printer or in Pointillism paintings. If you look closely you may see the dots. A magnifying glass or a drop of water on the screen may help. TVs use similar technology, but often have bigger pixels, which are easier to see up close. StuRat (talk) 18:18, 3 February 2012 (UTC)[reply]
You may be interested in these magnified images of text, shown on ipad and kindle screens [2]. Of interest is the fact that neither one uses uniform placement of square pixels. SemanticMantis (talk) 18:35, 3 February 2012 (UTC)[reply]
The pixels on the iPad and Kindle in that article are arranged in a square grid. They aren't perfect squares, but that's as close to perfect squares as you'll ever see. Most CRTs used a triangular/hexagonal grid (File:CRT pixel array.jpg). -- BenRG (talk) 19:30, 4 February 2012 (UTC)[reply]
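To make the "letters are just grids of dots" point concrete, here is a tiny Python sketch that prints a letter as a 5x7 dot-matrix bitmap (the bitmap is hand-drawn for illustration, not taken from any real font):

```python
# Each string is one row of a 5-wide, 7-tall bitmap: '#' = lit pixel, '.' = dark pixel.
LETTER_A = [
    ".###.",
    "#...#",
    "#...#",
    "#####",
    "#...#",
    "#...#",
    "#...#",
]

for row in LETTER_A:
    # On a real display each lit cell would be a pixel (an RGB triplet on an LCD,
    # phosphor dots on a CRT); from a normal viewing distance they merge into an 'A'.
    print(" ".join("#" if cell == "#" else " " for cell in row))
```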

Can potassium nitrate (KNO3) be used to supply oxygen for breathing in close quarters?

By "use" I mean with common utensils. Is it enough to heat it to release oxygen? 109.64.24.206 (talk) 18:21, 3 February 2012 (UTC)[reply]

Note that with overheating, the remaining potassium nitrite decomposes to produce nitrogen oxides, i.e. concentrated smog. The same might be true if it is impure, mixed with other cations. I wouldn't count on the oxygen being "suitable for breathing" except in some specific, well designed application. Wnt (talk) 19:21, 3 February 2012 (UTC)[reply]
The propensity that potassium nitrate has for decomposing explosively would be a very real problem. If it happens in close proximity to the user the need for breathable oxygen may be ended. Short version: The stuff explodes rather easily! Roger (talk) 19:31, 3 February 2012 (UTC)[reply]
See oxygen candle - KNO3 may not do the job, but there are other things that will. --Tango (talk) 22:54, 3 February 2012 (UTC)[reply]
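For a rough sense of scale (a back-of-the-envelope sketch that assumes only the first decomposition step, 2 KNO3 → 2 KNO2 + O2, and ignores all of the practical hazards raised above):

```python
# Stoichiometry of the first decomposition step: 2 KNO3 -> 2 KNO2 + O2
M_KNO3 = 101.1        # g/mol
M_O2 = 32.0           # g/mol
MOLAR_VOLUME = 24.5   # L/mol at roughly 25 degC and 1 atm (ideal-gas approximation)

mass_kno3 = 1000.0                 # grams of potassium nitrate heated
mol_o2 = (mass_kno3 / M_KNO3) / 2  # one O2 for every two KNO3
print(f"O2 released: about {mol_o2 * M_O2:.0f} g, or {mol_o2 * MOLAR_VOLUME:.0f} L")

# Assuming a resting consumption of ~0.25 L O2 per minute (an illustrative figure only):
print(f"roughly {mol_o2 * MOLAR_VOLUME / 0.25 / 60:.0f} hours of breathing for one person")
```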

Wnt, Dodger67 and Tango - thank you VERY much for your answers. You have helped immensely. 109.64.24.206 (talk) 15:51, 5 February 2012 (UTC) (OP)[reply]

Resolved

Using the turntable in a microwave oven

The article Microwave oven contains this sentence: In turntable-equipped ovens, more even heating will take place by placing food off-centre on the turntable tray instead of exactly in the centre.

Can anyone either give me a reference for why this is so, or offer an explanation comprehensible to someone who is not even an amateur scientist? Thanks, Bielle (talk) 20:51, 3 February 2012 (UTC)[reply]

The heating of a stationary object will be uneven, with some spots getting more heating and others getting less. By rotating it, you average out all the spots at that radius, which is bound to give a heating amount closer to the overall average. The farther from the center, the more spots are averaged out, while at the very center, there's only one spot, so, whatever heating level the center would get if stationary, it will still get when rotating. If your microwave oven happens to heat the average amount in the center, you will be OK, but, if not, rotating won't improve things at the center. StuRat (talk) 21:03, 3 February 2012 (UTC)[reply]
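That averaging argument can be illustrated with a toy simulation (the hot/cold pattern below is made up, not a model of any real oven): rotation averages the field around a circle, which smooths things out everywhere except at radius zero.

```python
import math

def heating(x, y):
    """Toy standing-wave pattern of hot and cold spots; overall mean is about 1.0."""
    return 1.0 + 0.8 * math.cos(3 * x) * math.cos(2 * y)

def average_over_rotation(radius, samples=360):
    """Mean heating a point at this radius sees over one full turn of the turntable."""
    total = 0.0
    for i in range(samples):
        a = 2 * math.pi * i / samples
        total += heating(radius * math.cos(a), radius * math.sin(a))
    return total / samples

for r in (0.0, 2.0, 6.0):  # centre, a bit off-centre, near the edge (arbitrary units)
    print(f"radius {r}: average heating {average_over_rotation(r):.3f}")
# The centre keeps whatever heating(0, 0) happens to be (1.8 here), while the
# off-centre points average out to values much closer to the overall mean of ~1.0.
```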
Thanks, StuRat! Bielle (talk) 22:08, 4 February 2012 (UTC)[reply]
You're welcome. I will mark this resolved. StuRat (talk) 22:10, 4 February 2012 (UTC)[reply]
Resolved

From the article "Children and adolescents are particularly sensitive to the early and late extrapyramidal side effects of haloperidol. It is not recommended to treat pediatric patients." I would like to know where I can read more about this. If there isn't a citation available, perhaps someone can direct me to a specific person i could contact? Thanks198.189.194.129 (talk) 21:47, 3 February 2012 (UTC)[reply]

Not sure, but note that our article contradicts itself by listing children under the "Uses" section. StuRat (talk) 23:20, 3 February 2012 (UTC)[reply]

This paper is a recent review of the relevant literature, at least as regards schizophrenia, and is freely available online. Looie496 (talk) 03:17, 4 February 2012 (UTC)[reply]

To begin with, a good way to get information here is by PubMed [3] or Google Scholar [4] which deliver 700 and 24000 references respectively (though Google often absurdly overestimates and you don't find out until the last page). PubMed is much nicer to work with because it's sorted by date and indicates which resources are available for free.

Adding "adverse" to my search I found [5] which says that in treatment of tic disorders, "all 17 subjects in the haloperidol group experienced unexpected side effects and 6 (35.3%) were not able to continue medication owing to unbearable adverse events." But it was an open-label study, and the other drug has less of a reputation, so I'm not sure it's really any better. Also it says the extrapyrimidal symptoms and the rate of discontinuation were worse in this group than in an alternative group receiving aripiprazole.

This is clearly a tricky decision between bad options and I'm not going to claim my casual browsing is nearly enough to get to the bottom of it. Wnt (talk) 18:41, 6 February 2012 (UTC)[reply]

Thanks, all of you.-Richard Peterson198.189.194.129 (talk) 16:48, 9 February 2012 (UTC)[reply]

Is it safer at 5 pm after dark or 8:30 pm in the daytime?

At least theoretically? Not that I live in a dangerous area (crime rates are close to the national average (US)), just curious. Sagittarian Milky Way (talk) 22:52, 3 February 2012 (UTC)[reply]

It's going to depend on a lot of things. Light certainly helps prevent crime, but so do potential witnesses. If there are a lot of people around at 5pm and not many people around at 8:30pm, crime could be higher at 8:30pm despite it being lighter. --Tango (talk) 22:56, 3 February 2012 (UTC)[reply]
That might be true of an area where people work (approximately 9-5) but don't live, for example. The assumption being that the question is asking solely about safety from crime. As for safety from traffic accidents, full light is better, but sunset and dusk can be even worse than total darkness, since people's vision may be obscured by the Sun and some may not have their lights on yet. StuRat (talk) 23:15, 3 February 2012 (UTC)[reply]
It's difficult to figure this out, because even knowing the crime rate isn't enough. For example, this analysis finds crime in San Francisco to be highest from 4 to 8 p.m. But what that omits is that to a particular person walking late at night, the lower overall number of crimes is little comfort when he has so few people to share them with. I think that despite the increase in crime rate per time of day, the best guide is still the instinctive feeling that when you're one of just a few passersby, the whole criminal element has the chance to put you in their sights. Wnt (talk) 23:24, 3 February 2012 (UTC)[reply]
Certain studies have found that street lighting only reduces the fear of crime without having any noticeable impact on actual crime rates[6]. Sunshine can cause other problems, like traffic-disrupting glare in the early mornings and late evenings. ~AH1 (discuss!) 01:34, 5 February 2012 (UTC)[reply]
(didn't have access to internet for a while) Also not mentioned is how late things close in your town or neighborhood. First the stores, then the movie theaters and finally bars. I'd imagine there might be some places so small-town that everything can close before sunset. Imagine walking in the ghetto of Anchorage or Fairbanks (if they have ghettos in Alaska) at 11:42/12:47(a) and it's still daylight. Or getting only a few hours of weak sun all winter and having to be in your cubicle for all of them. It'd be a surreal feeling. (though Russia would probably be a more likely place to find ghettos) Sagittarian Milky Way (talk) 23:58, 6 February 2012 (UTC)[reply]