Wikipedia:Reference desk/Archives/Science/2016 August 20

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 20

Are engineers scientists?

Ignoring personal opinions about the definition and limits of each, and just considering concrete decisions - like grants, fellowships, or prizes - do engineers count as scientists? Hofhof (talk) 01:53, 20 August 2016 (UTC)[reply]

What do you mean by concrete decisions?--Savonneux (talk) 03:58, 20 August 2016 (UTC)[reply]
That is, when someone has to draw a line as to who gets rejected or accepted. I mean, when it comes down to applying for and receiving a grant for scientists, or a prize for the best minority female scientist, would engineers be considered part of the scientists? Hofhof (talk) 04:13, 20 August 2016 (UTC)[reply]
"Science" grants are infinitely broad in what each one could possibly be for, they aren't the exclusive domain of scientists though (anything practical tends to be an engineering problem). The difference between engineering and pure theory is just that, engineers are mostly concerned with applied science (how do we use science to solve this problem) and "scientists" are mostly concerned with the purely theoretical.--Savonneux (talk) 04:28, 20 August 2016 (UTC)[reply]
While it's true that engineers are concerned with how to use science to solve problems, it's not true that scientists are mostly concerned with the theoretical. They also do empirical observations, such as observing fossils or supernovae, and they do experiments. Then they or other scientists come up with theories to explain the observations and results. Loraof (talk) 16:00, 20 August 2016 (UTC)[reply]
Agreed. The person asking seemed to want a sort of differentiation so I was trying to show the only one. Anyone who applies rigorous logic to a thesis is technically a scientist.--Savonneux (talk) 00:24, 21 August 2016 (UTC)[reply]
The problem is that the word engineer is very difficult to define. The man who comes to repair my washing machine calls himself an engineer, but so do the men who designed and built the space shuttle and the new Forth road bridge. In the USA the driver of a train also calls himself an engineer. Some may deserve to be called scientists - others probably do not. Wymspen (talk) 14:05, 20 August 2016 (UTC)[reply]
Origin of "engineer":[1]Baseball Bugs What's up, Doc? carrots→ 21:31, 20 August 2016 (UTC)[reply]

I'm in A BIG CITY and I can't see the stars at night due to light pollution. If there was a...

I'm in A BIG CITY and I can't see the stars at night due to light pollution. If there was a countrywide power cut, how long would it take for the light pollution to go away and the stars to become visible? Like, would it be instant, or would it take several years? Since light is really fast, I can't understand how it would stay in the sky for very long once the lights went out. Please explain. — Preceding unsigned comment added by 200.94.21.194 (talk) 14:28, 20 August 2016 (UTC)[reply]

A tiny fraction of a second after the power cut, most of the light pollution would vanish. A small amount would remain because of emergency generators etc, but the effect would appear instantaneous. Air pollution is a separate factor to consider in some cities. Dbfirs 14:42, 20 August 2016 (UTC)[reply]
Do note that the afterglow of many kinds of lights (like high-pressure sodium streetlights) would take minutes to dim to invisibility. Car headlights might be the biggest source of light pollution in a blackout, unless there are significantly more generators now than during the 2003 blackout. Besides air pollution, blackouts often happen in heat waves, when there might be a lot of haze. Sagittarian Milky Way (talk) 01:39, 24 August 2016 (UTC)[reply]
See Light pollution for our article; Adaptation (eye) might also be useful. Although the light pollution would be gone as soon as the power went off, it would take some time (about 20-30 mins, according to our article) before the observer's eyes could take full advantage of the darkness. Tevildo (talk) 14:56, 20 August 2016 (UTC)[reply]
I'm not so sure you'd necessarily need 20-30 minutes, or how strong the effect would be. Our Accelerating dark adaptation in humans seems in some ways better than the adaptation one, but even so, both primarily talk about moving from high illumination to low. If you were indoors in a lit room, you might need 20-30 minutes. But how significant would the effect be if you were already in a dark area of the city (i.e. where there are no lights clearly visible)? Sure, the sky would still be bright, but I'm not sure how big the difference would be after adaptation for 20-30 minutes. If you were sleeping in a dark room with your eyes closed, provided you're careful when getting out (i.e. red light etc.), the difference would probably be even less. So IMO the issue may be less about light pollution and more about what you were doing before. (Or in other words, it's quite similar to living in an area with minimal light pollution, except that you can't normally predict when a nationwide power cut will happen.) Nil Einne (talk) 15:31, 20 August 2016 (UTC)[reply]
The thing is that the difference between optimal dark adaptation and partial dark adaptation is only going to be noticeable when you try to spot hard-to-see objects at the very limits of visibility, like spotting M81 with the naked eye. Count Iblis (talk) 17:10, 20 August 2016 (UTC)[reply]

Supernova ice ages and artificial substitutes

I've been reading articles lately [2][3] that suggest that supernova debris, as evidenced by iron-60, might have set off the Pleistocene ice age. There is some mismatch - the iron-60 started arriving 2.7 million years ago and continued for a million years - but why isn't the correspondence exact, or at least entirely inclusive of the glaciated period? How viable is this idea? And if it is viable... how much material are we talking about here? I shouldn't think a supernova 300 or more light years away would rain down very much matter on Earth to increase its cloud cover. So is it possible for humans to cook up a space probe that does the same thing, as a sort of geoengineering to counter excessive global warming? (Not that I'm suggesting this is necessarily a good idea... sometimes I wonder if the 'Gaian' purpose of humans was to look around and figure out how to clear out all that nasty ice...) Wnt (talk) 18:57, 20 August 2016 (UTC)[reply]

Ok, this is a really back-of-the-envelope calculation, and I could be off by several orders of magnitude. They are hypothesizing a supernova 300 light years away, ish. Let's pretend it's a big star with 8 solar masses. Even if we pretend the entire mass of this star explodes outward, the Earth will only catch a small fraction of the material. How small? Well, by the time the blast front reaches Earth, it has spread out over a spherical shell 300 light years in radius, and the Earth's cross-section is only about 1e-24 of that shell, so that's the fraction of the star we capture: roughly 2e7 kg of material, a few tens of thousands of tonnes. Now, maybe Earth's gravity will pull in some material that is not headed directly toward us, but these particles are moving very fast, so probably not a whole lot more. I think even if I'm way off on some things (if the star is closer, or bigger) it's still going to work out to a truly negligible amount of matter next to the ~5e18 kg atmosphere. I honestly just don't see how this could possibly affect Earth's weather. Now a much, much closer supernova (less than 50 light years), however, could make a noticeable dent in the ozone layer, and the effect on plankton could have some long-range ecological consequences. Someguy1221 (talk) 10:00, 21 August 2016 (UTC)[reply]
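For reference, the estimate above can be redone as a short Octave/MATLAB-style calculation. The 8-solar-mass ejecta and the 300-light-year distance are the assumptions stated in the comment; everything else is standard constants:

% Back-of-the-envelope: fraction of an isotropic supernova shell that Earth intercepts.
ly      = 9.461e15;                            % metres per light year
r       = 300 * ly;                            % assumed distance to the supernova, m
R_earth = 6.371e6;                             % Earth radius, m
M_sun   = 1.989e30;                            % solar mass, kg
ejecta  = 8 * M_sun;                           % assume the whole 8-solar-mass star is blown outward
frac    = (pi * R_earth^2) / (4 * pi * r^2);   % Earth's cross-section over the shell's surface area
caught  = ejecta * frac;                       % mass Earth sweeps up, kg
% frac comes out near 1.3e-24, so caught is roughly 2e7 kg (a few tens of
% thousands of tonnes), vanishingly small next to the ~5e18 kg atmosphere.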
This is why there shouldn't be that much kryptonite on Earth. Terrestrial planets are at least ~10^5 times less massive than 8 Sun masses, so there should be at most a few hundred kilograms of kryptonite in the entire world if Krypton exploded anywhere near the speed of Alderaan. Sagittarian Milky Way (talk) 20:16, 21 August 2016 (UTC)[reply]

what is this filter?

hello, in the discrete-time PLL example here there are these lines:

% Implement a pole-zero filter by proportional and derivative input to frequency
filtered_ersig = ersig + (ersig - lersig) * deriv;
% Keep error signal for proportional output
lersig = ersig;

In simple terms, what kind of filter is this (other than pole-zero and, I assume, FIR, as it doesn't use past output samples)? Apparently what the filter does is that whenever there is a change, it creates an "overshoot" in the direction of the change and then settles on the new value. Is it then a high-pass? Thanks in advance! Asmrulz (talk) 20:06, 20 August 2016 (UTC)[reply]

A cursory search suggests that a 'pole-zero filter' is an approximation of a Butterworth filter, which our article describes as a theoretical ideal for a "maximally flat magnitude filter". 2606:A000:4C0C:E200:296A:CC64:7945:8C5F (talk) 03:47, 21 August 2016 (UTC)[reply]
That's not quite right... a Butterworth filter is a specific type of filter. "Pole-zero" is a design methodology: you create a filter by placing poles and zeros, the roots of the denominator and numerator of its transfer function. One can use pole-zero analysis to build a filter of any specific type. A "pole-zero filter" is any filter whose properties were designed by placing poles and zeros explicitly, instead of computing them using some other method. Every filter has poles and zeros, but many times, because we have a different analytical technique, we may choose not to concern ourselves with their values. In this case, the author described this code snippet as a "pole-zero filter"; I would call it a "PD-controller"; but in any case, those comments are just a little bit of English-language verbiage to help motivate this very small mathematical sub-expression of the larger PLL system. Nimur (talk) 14:11, 21 August 2016 (UTC)[reply]
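For what it's worth, the two quoted lines, taken on their own, are the linear difference equation y[n] = e[n] + deriv*(e[n] - e[n-1]), i.e. a proportional term plus a scaled first difference. A short Octave-style sketch of that stage's frequency response (the value of deriv here is only an example, not the one used in the article's code):

% H(z) = 1 + deriv*(1 - z^-1) = (1 + deriv) - deriv*z^-1 : one zero, pole only at z = 0.
deriv = 0.5;                          % example gain; the article's code chooses its own value
b     = [1 + deriv, -deriv];          % FIR coefficients of the two quoted lines
w     = linspace(0, pi, 512);         % normalised frequency from DC to Nyquist
H     = b(1) + b(2) .* exp(-1i .* w); % evaluate the response on the unit circle
mag   = abs(H);                       % mag(1) is 1 at DC; mag(end) is 1 + 2*deriv at Nyquist
% So, viewed in isolation, it is not a classical high-pass (DC passes at unity gain);
% it is a flat response plus a high-frequency boost, which is what produces the
% "overshoot" on fast changes that the question describes.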
The code excerpt is meaningless without context - but if you look at the full code in the article, it makes a little more sense. This is a digital filter, and it saves state, so although the previous samples aren't explicitly stored in a buffer, the filter is still using their values.
The filter is neither a high-pass nor a low-pass filter. Instead, it outputs a control signal whose magnitude and sign are proportional to the frequency error. A reference signal, of known frequency, is compared to an input signal, of unknown frequency. This particular implementation uses the trick of edge detection - a digital, bitwise comparator triggers on signal edges - and it keeps a running sample-counter to estimate how synchronized the two signals are; in other words, how coherent their phases are. In the analog world, we would use a totally different method.
Mathematically, any filter that converts a frequency to a different frequency (in this case, to a "dc" control signal) is definitionally non-linear: loosely speaking, frequency is not preserved. You cannot think in terms of "frequency" pass-bands. This is not a low-pass, high-pass, or band-pass filter: it is a filter whose output depends on the input frequency (and phase) in a nonlinear fashion.
This is a complicated bit of code - and it stores digital state in many variables, including the signal flags and the "it" (iteration counter) variable. If we want to throw mathematical terminology at this software implementation, we could say that because these state flags allow us to compute results that are related to previous samples, the results are functions of the derivative of the input signal. Although the code doesn't directly compute and store the derivative (the difference between the current and previous samples), the algorithm does make implicit use of it.
Again, if you tried to formally write out that relation, it would be a non-linear differential equation - and what this software filter does is try to solve it numerically! Like all non-linear differential equations, the solution is only valid in special cases: if you fed in a junk input signal, with really high noise levels or just completely the wrong frequency content, you could cause the PLL to fail to converge (and this is a real thing that actually happens in real-world electronics applications)!
Nimur (talk) 14:11, 21 August 2016 (UTC)[reply]
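As a tiny illustration of the "frequency is not preserved" point (using a multiplier-type phase detector for simplicity, rather than the edge-detection scheme in the article's code; all signal names and numbers here are made up):

% Multiplying two tones creates new frequencies, so pass-band thinking does not apply.
fs  = 8000;                 % sample rate, Hz
n   = 0:(fs - 1);           % one second of samples
ref = cos(2*pi*100*n/fs);   % 100 Hz reference
sig = cos(2*pi*101*n/fs);   % input that is slightly off-frequency at 101 Hz
err = ref .* sig;           % multiplier-type phase detector
% Since cos(a)*cos(b) = 0.5*cos(a-b) + 0.5*cos(a+b), err contains a slow 1 Hz "beat"
% (the near-DC error term a loop would filter and use to steer its oscillator)
% plus a 201 Hz component that the loop filter exists to reject.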
Thank you so much! I think I "get" it (to the extent it's possible to "get it" without the theory and the associated math). I'm just (still) trying to make an all-digital PLL as a hobby project (one that I could flash onto an ATtiny). I wrote a simulation in which I incorporated bits and pieces from the code in the PLL article. It seems to work well (I'm not trying to replicate the 4046 chip). Even frequency multiplication works if I put a divider in the "feedback path." Thanks again! Asmrulz (talk) 20:02, 21 August 2016 (UTC)[reply]
Are all PLLs actually PID? That not all PID controllers have frequency as the controlled quantity is obvious, but from cursory googling, no one seems to discuss PLLs in terms of PID, and no PLL block diagram I've seen contains explicit P, I and D sections, like here. Asmrulz (talk) 20:28, 21 August 2016 (UTC)[reply]
No, it is possible to build a phase-locked-loop circuit that does not use a PID controller at all, but uses a different type of digital controller. It is also possible to build a PLL that is completely implemented in analog circuitry. For example, in the analog world, we can build a PLL using a varactor (if we want to pretend like it's 1970!), or using a voltage-controlled oscillator, or using a Miller topology feedback control amplifier. In digital designs, PIDs are a fundamental building-block: it's such a convenient, commonplace design methodology that you will surely see it in lots of places. Moral of that story: learn the theory and practice of PID controllers really deeply, and learn to recognize the implementation and the behavior of PID controllers, because they show up as tiny building-blocks inside lots of more complicated systems.
Once you get really good at the math and the theory, you will be able to turn any control equation into an "almost-equivalent" PID controller... once again, "PID" is really just a design-methodology that lets us write a specific type of equation - a second-order digital control transfer function - in a standard, canonical form. Once the equation is in standard form, we can use shortcuts to solve for its behaviors and estimate its properties, and we can "plug and chug" into standard software or hardware implementations. That is meant to be easier than finding full solutions to the stability and control equations for every single sub-problem you encounter in a complicated design.
Nimur (talk) 20:52, 21 August 2016 (UTC)[reply]
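Since PID controllers keep coming up, here is a bare-bones sketch of a discrete PID update driving a toy first-order plant, just to show the three terms in isolation. The gains, setpoint, and plant model are placeholder choices of the sketch, not anything taken from the PLL article:

% Minimal discrete PID loop against a toy first-order plant.
kp = 1.0;  ki = 0.1;  kd = 0.05;     % proportional, integral, derivative gains (example values)
setpoint = 1.0;                      % desired plant output
integ = 0;  last_err = 0;            % controller state carried between samples
y = 0;                               % plant output
for k = 1:200
  err   = setpoint - y;              % P: current error
  integ = integ + err;               % I: accumulated error (removes steady-state offset)
  d_err = err - last_err;            % D: first difference, the same trick as the PLL snippet
  u     = kp*err + ki*integ + kd*d_err;
  last_err = err;
  y = y + 0.1*(u - y);               % toy plant: a first-order lag responding to u
end
% y ends up very close to the setpoint; with ki = 0 it would settle at
% kp/(1 + kp) = 0.5 instead, which is why the integral term earns its keep.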