Wikipedia:Reference desk/Archives/Science/2015 January 14



January 14

How sure is Moore's law?

I read a lot about how Moore's law may stop at some point in the next few decades. Is it likely to still be a thing in 2100 or might our computer power stop doubling as early as 2025?--79.97.222.210 (talk) 20:53, 14 January 2015 (UTC)

Moore's Law isn't really a law in any of the usual scientific senses, just a rule of thumb. At some point - in my opinion, probably sooner rather than later - computer power will stop doubling because it will run into quantum limitations. If it is still continuing through 2100, it might be a case of having passed the singularity without anyone noticing. Robert McClenon (talk) 21:15, 14 January 2015 (UTC)
See Moore's Law for a discussion. Robert McClenon (talk) 21:16, 14 January 2015 (UTC)
Mathematically, it just isn't possible for something to double every 2 years, indefinitely. At that rate you will eventually use up all the available resources. To put some numbers on it, that would mean an increase of over a thousand times in 20 years, a million times in 40 years, a billion times in 60 years, a trillion times in 80 years, and a quadrillion times in 100 years. StuRat (talk) 21:22, 14 January 2015 (UTC)
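A quick back-of-the-envelope check of StuRat's figures (the only input is the assumed 2-year doubling period); a minimal Python sketch:

    # Growth factor if computing power doubles every 2 years, at the horizons quoted above.
    for years in (20, 40, 60, 80, 100):
        factor = 2 ** (years / 2)
        print(f"{years:3d} years -> roughly {factor:,.0f}x")
    # 20 years -> 1,024x (over a thousand) ... 100 years -> 1,125,899,906,842,624x (a quadrillion)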
OP here again. The article about Moore's law says the Bekenstein bound, the ultimate limit from information theory, won't be reached until 600 years of continuous Moore's law, at which point it is 100% certain to stop. However, there are many reasons why it might stop before then, none of which are 100% certain. My question is: will it stop before then?--79.97.222.210 (talk) 21:36, 14 January 2015 (UTC)
For the entire time Moore's law has existed it's been about the size of silicon transistors, and I don't get the impression anyone believes that can continue past ~5 nm because of quantum effects; see this article for example. That would put the end around 2020. As that article mentions, transistor switching speeds and clock rates already hit a wall a decade ago, so Moore's law as a measure of "power" is arguably already dead, though that depends on how you interpret "power", and that interpretation was never really Moore's in the first place. Conceivably some other technology could take over, but nothing seems poised to. The Bekenstein bound is a useless upper bound; it's like saying that you definitely won't live longer than the lifetime of the universe. -- BenRG (talk) 21:48, 14 January 2015 (UTC)
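To see where the ~2020 estimate comes from: a rough sketch, assuming Intel's 14 nm node as the 2015 starting point and the customary ~0.7x linear shrink every two years (the start date and node are my assumptions, not BenRG's):

    # Roughly when a 0.7x-per-node linear shrink reaches ~5 nm, starting from 14 nm in 2015.
    feature_nm, year = 14.0, 2015
    while feature_nm > 5.0:
        feature_nm *= 0.7   # one process node shrink
        year += 2           # ~2 years per node
    print(year, round(feature_nm, 1))   # 2021, 4.8 -- consistent with "the end around 2020"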
One of my professors, who was an expert in digital circuit design, gave an excellent presentation on Moore's Law that has since been republished by the IEEE: Future Directions in Mixed-Signal IC Design (2010). Moore's Law is a "gigantic (economic) feedback control system." Companies are controlling the "gain" so that they can achieve a fixed rate of growth (speed and performance improvement) with respect to time. To control the rate of performance change, they can tune the amount of input resource - talent and money. The diagram of the feedback control system in that presentation is great! Nimur (talk) 00:57, 15 January 2015 (UTC)
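The "economic feedback control system" idea can be caricatured in a few lines: treat delivered performance growth as the plant output and R&D spending as the control input, with the industry adjusting spending until measured growth matches the Moore's-Law target. The gains below are invented purely for illustration and are not from the cited presentation:

    # Toy feedback loop: adjust 'investment' until achieved annual growth hits the target.
    TARGET_GROWTH = 0.41      # ~41%/year, i.e. doubling roughly every 2 years
    PLANT_GAIN = 0.05         # assumed growth delivered per unit of investment
    LOOP_GAIN = 0.5           # controller gain (also assumed)
    investment = 1.0
    for year in range(10):
        achieved = PLANT_GAIN * investment              # crude model of what the fabs deliver
        investment += LOOP_GAIN * (TARGET_GROWTH - achieved) / PLANT_GAIN
        print(f"year {year}: growth {achieved:.1%}, next investment {investment:.2f}")
    # Achieved growth converges on the 41% target -- the 'law' holds because it is steered.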
One of the reasons that Moore's law has so accurately tracked progress is that manufacturers plan their product to follow the law. So the likes of Intel and AMD set up their longer term product development with the goal of tracking the law. Even if they could progress faster, they don't. For sure it'll end sooner or later - and I think there are definite signs of that happening. But 5nm features aren't the end of the line - Moore's law says that the number of transistors on a chip will double every couple of years - but it doesn't comment on how big the chip is allowed to be. So even if we hit a limit at 5nm, there is no reason (in principle) why process improvements can't allow larger dies to be made economically. Another aspect of this is power consumption. If power consumption can be reduced sufficiently, then removing the heat from circuits is less of a problem and we can start to use the third dimension to pack more components into a reasonable amount of space. If you consider that a chip could comfortably be a centimeter thick - then with 5nm components, if you could manage the heat, you could theoretically make chips that were millions of layers deep...millions of times more dense than we have now. That would certainly add a LOT of years to the ultimate day when we can go no further.
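Steve's "millions of layers" figure is just geometry; a sketch taking his 5 nm as the layer pitch and a 1 cm stack, ignoring heat and interconnect as he notes:

    # How many 5 nm layers would fit in a 1 cm thick die, if heat were no object?
    layer_nm = 5
    thickness_nm = 1e-2 / 1e-9            # 1 cm expressed in nanometres
    print(f"{thickness_nm / layer_nm:,.0f} layers")   # 2,000,000 -- 'millions of layers deep'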
There are also ways to get more horsepower without more complexity. Computers seem to go through cycles from the super-simple to the super-complex. RISC architectures were a big improvement over CISC - and nowadays, the RISC computers are getting pretty darned complex again. We can also explore tricks like moving the computational power into the memory devices...right now, getting stuff into and out of memory causes horrendous complexity with multi-level caching, instruction look-ahead, branch prediction and so forth. But if you had tiny super-simple computers embedded into the RAM array, you could eliminate all of that. We don't do it because the technology for making very dense RAM arrays is different from the tech for making fast computational engines - but it's not impossible.
This is all highly speculative stuff - but there is plenty of room for something wild and crazy-seeming to beat out present day architectures. SteveBaker (talk) 05:03, 15 January 2015 (UTC)
I also think we can do a lot more with parallel processing, so a PC might have 256 CPUs, each controlling a separate process. Each CPU can be relatively inexpensive, with the heat problem reduced by spreading them out a bit and allowing air gaps for fans to cool them efficiently. StuRat (talk) 05:34, 15 January 2015 (UTC)
The difficulty with that is RAM speed. On a modern CPU, it takes around 400 clock cycles to access a byte of RAM that's not in cache and just a couple of cycles to do whatever complex arithmetic/logic is needed to process it. With 256 CPUs competing for the same RAM bus, any cache misses would be exceedingly costly because you'd be waiting for 400 clock cycles times the number of CPUs that need to read memory! So your cluster would work well in algorithms where everything fits in cache - but perform disastrously in applications that require a ton of memory.
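A crude model of that contention (the 400-cycle miss penalty is Steve's figure; the 2% miss rate and the worst-case assumption that every miss queues behind the other cores' misses are illustrative assumptions):

    # Average cycles per memory access when N cores share one memory bus (pessimistic model).
    MISS_PENALTY = 400        # cycles for a cache miss (figure from the post above)
    HIT_COST = 2              # cycles when the data is already in cache
    MISS_RATE = 0.02          # assumed 2% of accesses miss
    for cores in (1, 4, 64, 256):
        avg = (1 - MISS_RATE) * HIT_COST + MISS_RATE * MISS_PENALTY * cores
        print(f"{cores:4d} cores: ~{avg:,.0f} cycles per access")
    # 1 core: ~10 cycles; 256 cores: ~2,050 cycles -- the bus, not the ALUs, sets the pace.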
That leads you to put more and more per-CPU memory in place - and have less and less communication between them. This results in a situation where you more or less have a network of 256 separate computers...which is pretty much a cloud-compute server...which is how all the massive super-computers of the world already operate. Those machines are shrinking in size and cost (or growing in capability at fixed size and cost), right along with Moore's Law...but it's not exactly a new paradigm that'll change what sits on your desk.
Worse still, not many people have need to run 256 active processes. The computer I'm running on right now claims to be running 194 processes - but all but three of them are consuming 0% of the CPU time. Taking one process and splitting it over multiple processors generally falls afoul of Amdahl's law. There are exceptions...one is graphics. Our present generation GPUs already have in excess of 256 processors - the one I'm using right now has 512. They manage to avoid the problems of Amdahl's law because there is more than enough parallelism in the algorithm to consume a million processors if it were needed - and each tiny processor only needs a tiny amount of RAM to contain the entire data set - so memory contention is minimised. But this kind of improvement is more or less only useful for graphics.
That said, we have programming environments such as OpenCL that allow one to run more-or-less conventional software on those hundreds of processors. So, if your algorithm is sufficiently parallelizable, your vision of there being 256 separate CPUs probably already exists inside the computer that you're typing on right now. It doesn't get used all that much outside of very specialised applications because it's just not possible to split "normal" software applications up into that many threads - and that's why Amdahl's law means that we're not likely to see massively multi-core CPUs solving the End-of-Moore's-Law crisis that's looming before us.
SteveBaker (talk) 18:19, 15 January 2015 (UTC)
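For reference, Amdahl's law mentioned above is speedup = 1 / ((1 - p) + p/N), where p is the parallelizable fraction of the work and N the number of processors. A small sketch (the 95% parallel fraction is an assumed example, not a figure from the discussion):

    # Amdahl's law: even a 95%-parallel program tops out at a 20x speedup.
    def amdahl_speedup(p, n):
        """Speedup for parallel fraction p on n processors."""
        return 1.0 / ((1.0 - p) + p / n)

    for n in (4, 16, 256, 1_000_000):
        print(f"{n:>9} processors: {amdahl_speedup(0.95, n):5.1f}x")
    # 256 processors give ~18.6x, and a million processors barely 20x.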
Most of those processes might be at 0%, most of the time, but every once in a while they jump to life, do something, and make my PC freeze for a second or two, which I find extremely annoying. I want them each on a separate processor so they will stop doing that.
Also, once we all have lots of extra processors ready to use, there are many programming tasks that could be written to take advantage of them. A forward radix sort, for example, can be quite easily converted to run on multiple processors. Same with searching for matches between two unsorted lists. StuRat (talk) 06:13, 17 January 2015 (UTC)
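As a concrete example of the second task, here is a minimal sketch (function names, chunk size and worker count are my own, not from the thread) that finds the common elements of two unsorted lists by farming chunks of one list out to a pool of worker processes:

    # Find matches between two unsorted lists using several worker processes.
    from concurrent.futures import ProcessPoolExecutor

    def matches_in_chunk(chunk, lookup):
        """Return the items from `chunk` that also appear in `lookup`."""
        return [x for x in chunk if x in lookup]

    def parallel_matches(list_a, list_b, workers=4):
        lookup = set(list_b)                                   # O(1) membership tests
        size = max(1, len(list_a) // workers)
        chunks = [list_a[i:i + size] for i in range(0, len(list_a), size)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            results = pool.map(matches_in_chunk, chunks, [lookup] * len(chunks))
        return [x for part in results for x in part]

    if __name__ == "__main__":
        print(parallel_matches([3, 1, 4, 1, 5, 9, 2, 6], [5, 3, 8, 6]))   # [3, 5, 6]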
I think you have a somewhat naive understanding of what's involved here:
  1. Having 256 processors, each with a task that runs once a day, would be a colossal waste of resources. 250 of those processors would just sit idle for 99.99999% of the time. The investment in silicon would be far better spent on adding cache to a 4-core CPU instead.
  2. One 'spare' processor would solve the problem (if things worked the way you imagine) because the probability of two mostly-idle processes needing to run at the same time is minimal.
  3. BUT: You're assuming that these occasional interruptions are due to CPU congestion. How do you know it's not RAM, I/O, memory management, kernel interlocks or disk that's causing your glitches?
  4. A long-idle process isn't likely to be occupying main memory - so the biggest glitch when it has to run is the operating system streaming its memory image off the hard drive and into main memory. Since many operating system tasks can't be threaded, that can cause other processes to block while the OS does its thing. No matter how many CPUs you have, you'd almost certainly get the exact same glitch.
  5. You probably have a 2 or 4 core CPU anyway - and it's unlikely that the thing you're actively working on is occupying more than one or two of them...so you almost certainly already have a spare CPU for that occasional task...but that fact isn't helping you - so it's not CPU time that's causing your glitches.
  6. The actual computational load of most of those teeny-tiny processes is typically utterly negligible - perhaps a thousandth of a second of CPU time for a device manager to wake up, decide that the device it's looking out for is OK, then go back to sleep. That kind of intermittent load simply isn't capable of slowing things down to the point where you'd notice it (there's a rough back-of-the-envelope estimate sketched after this post). Your problem is with task switching, disk-to-memory...that kind of thing.
  7. If this bothers you - the very best thing you can do is put in more main memory - and possibly get a solid-state "hard drive" to hold operating system resources and program binaries.
  8. We do already have a gigantic number of parallel processors sitting around for those very specialist jobs. Unless you have an especially clunky graphics chip in your machine, you can almost certainly implement a highly-parallelizable, data-local task in OpenCL and have between 200 and 1000 processors working on it. Having 256 main CPUs sitting around just in case you might feel the need for a super-specialist task like that doesn't make sense for a general-purpose computer. The joy of having the GPU do this stuff is that graphics processors are useful for LOTS of other things.
SteveBaker (talk) 19:48, 17 January 2015 (UTC)
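To put a number on point 6 (the figures below are illustrative assumptions, not measurements): a couple of hundred housekeeping processes that each burn a millisecond of CPU every ten seconds add up to a tiny fraction of one core:

    # Aggregate CPU load of many mostly-idle housekeeping processes (assumed figures).
    processes = 200
    cpu_per_wakeup_s = 0.001      # ~1 ms of work per wake-up
    wakeup_period_s = 10.0        # each process wakes every 10 seconds
    load = processes * cpu_per_wakeup_s / wakeup_period_s
    print(f"{load:.1%} of one core")   # 2.0% -- nowhere near enough to cause a multi-second freeze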
4) Yes, each processor would need its own memory so it wouldn't have to interrupt others to do its own thing. The only reason it should ever interrupt another process is if it needs to interact with that process in some way.
I'm basing all this on the assumption that the cost of the current processors will drop dramatically over time, and yet we won't be able to make them much better, because we've hit the limits in that respect. Thus, more processors will start to make more sense. StuRat (talk) 22:19, 17 January 2015 (UTC)
The first thing to note is that Moore's Law is about transistor density, which doesn't necessarily imply computing performance. Following Intel's products, each technology node reduces linear size by about 0.7 (and area by 0.7x0.7=0.49). Intel generally takes a product and shrinks it to the next technology node, which cuts its size roughly in half. Also note that some of the scaling that used to occur is no longer happening. Reticle sizes (the die size limit) are pretty much stagnant. Wafer size has been stuck at 12" for nearly 20 years (wafer size used to scale with Moore's law). The place to look for Moore's law is in memory chips, where density is the ultimate driver. Memory chips seem to have moved on to multiple die stacking as the driver for increasing density. --DHeyward (talk) 19:07, 15 January 2015 (UTC)
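DHeyward's scaling numbers in two lines (nothing here beyond his own 0.7 figure):

    # One technology node: linear dimensions shrink by ~0.7, so area shrinks to ~0.49.
    linear_shrink = 0.7
    area_shrink = linear_shrink ** 2
    print(f"{area_shrink:.2f}")        # 0.49 -- a straight die shrink roughly halves the die size
    print(f"{1 / area_shrink:.2f}x")   # ~2.04x the transistors in the same area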

Archeology of dwarf planets

This year we have two of our last best chances to find evidence of prior intelligent life in the Solar System: the Dawn (spacecraft) visit to Ceres (dwarf planet) and the New Horizons visit to Pluto. Supposing that some race of thinking beings existed roughly one billion years ago, under a cooler Sun and prior to the resurfacing of Venus, and that they used either Ceres as a base for the exploitation of asteroids or Pluto as a base for the exploitation of plutinos, how big a mark would they have had to make on either of these worlds for some trace of it to remain today that would be recognizable to our probes? Wnt (talk) 23:03, 14 January 2015 (UTC)

I doubt there's much erosion on either body, particularly Pluto. However, over a billion years I'd expect a significant amount of dust to accumulate from micrometeorites, so any evidence might be buried. That would leave us looking for either objects too large to be buried, or needing a probe capable of seeing beneath the surface. StuRat (talk) 00:14, 15 January 2015 (UTC)
Well, above someone gave a figure of 1 mm of moon dust for 1,000 years, which is a kilometer in a billion years. But... Ceres is only 0.0128 Moons of mass, according to the article. Less mass, less dust captured, and more easily bounced off into space... I think. Honestly I have no idea how the formula scales, but I'm suspicious it might be much less. Wnt (talk) 01:01, 15 January 2015 (UTC)
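The arithmetic behind "a kilometer in a billion years", taking the 1 mm per 1,000 years figure quoted from the earlier thread at face value (how it scales to Ceres' much weaker gravity is, as Wnt says, unclear):

    # Dust depth after a billion years at the quoted lunar rate of 1 mm per 1,000 years.
    rate_mm_per_year = 1.0 / 1000.0
    years = 1e9
    depth_m = rate_mm_per_year * years / 1000.0    # convert mm to m
    print(f"{depth_m:,.0f} m")                      # 1,000 m, i.e. about a kilometre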
Pluto has an atmosphere, so some erosion is certainly present on its surface. Ruslik_Zero 20:49, 15 January 2015 (UTC)