Talk:Supercomputer

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Isamil (talk | contribs) at 17:43, 28 May 2012 (→‎Claim of 132 exaflop computer is suspect: new section). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

WikiProject Computing (C-class, High-importance): This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.

WikiProject Electronics (C-class, Mid-importance): This article is part of WikiProject Electronics, an attempt to provide a standard approach to writing articles about electronics on Wikipedia. If you would like to participate, you can choose to edit the article attached to this page, or visit the project page, where you can join the project and see a list of open tasks. Leave messages at the project talk page.

WikiProject Technology (C-class): This article is within the scope of WikiProject Technology, a collaborative effort to improve the coverage of technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.

request

Has anyone produced a graph showing supercomputer FLOP speed over time, and then added "popular" computers onto it, like the Altair 8080, TRS-80, Sinclair ZX, PC (at various clock speeds), PlayStation, and PDAs? I think such a graph would be very interesting indeed. Since "popular" computers came into existence in the mid 1970s, I don't think that they've ever been more than 15 years behind the Supercomputers. I would absolutely LOVE to see such a graph. —Preceding unsigned comment added by New Thought (talkcontribs) 08:34, 2 May 2008 (UTC)[reply]

Console games are nowhere close to being measured in FLOPS, so don't bring in stupid questions; besides, game consoles are not considered HPC, supercomputers, or any type of enterprise system. I highly doubt that the 8 SuprEngine would be any different; it is more of a prototype quad-PPU, I would call it.

By the way, does anyone know how many TFLOPS 1 quadrillion floating-point operations per second is? Also, how many TFLOPS does OpenSPARC run at?

IBM Roadrunner: not sure if it is HPC, supercomputing, distributed computing, parallel computing, or multiprocessing; I only started studying high-end enterprise systems not too long ago. So... this can be added to the chart.

--Ramu50 (talk) 15:11, 10 June 2008 (UTC)[reply]

Well actually, modern games with collision detection and physics use floating point quite extensively, as do 3D games. In fact, there is a direct link between the performance of 3D graphics and floating-point performance. However, I don't believe that adding a table to compare these systems to supercomputers is a good idea, as they are completely different systems for completely different purposes, and because most of these old machines mentioned didn't even have an FPU. Rilak (talk) 06:56, 11 June 2008 (UTC)[reply]

Read my latest suggestion about the chart at Talk:Supercomputer, "Restriction to Top500." I posted it so people would notice these two might be related --Ramu50 (talk) 21:10, 14 June 2008 (UTC)[reply]

vandalism

Removed vandalism. Could someone double-check that I restored the right version, when possible? Jaqie Fox 03:44, 2 February 2007 (UTC)[reply]

appropriate reference?

I'm not sure if posting a link to an Aqua Teen Hunger Force episode at the top of the page is particularly relevant or does anything for the credibility of this page as a source of information on supercomputers. — Preceding unsigned comment added by 121.45.26.165 (talk) 02:24, 28 March 2007 (UTC)

It's not really a reference, just a disambiguation link for anyone who happens to come here looking for the other topic. --Mary quite contrary (hai?) 02:37, 28 March 2007 (UTC)[reply]

I replaced the ATHF disambiguation link - it is not a reference. While someone doing research on supercomputers would not be looking for that episode (it is not an obscure show by any definition), someone looking for that episode might end up here by mistake. Hence the disambiguation link. I apologize for forgetting the edit summary, but disambiguation is very important to helping people find what they need on Wikipedia, so please leave the link as is. --Mary quite contrary (hai?) 16:14, 28 March 2007 (UTC)[reply]

If we had a disambiguation link for every TV episode title in existence, we'd have a link on the top of almost every single major wiki entry.

I tend to agree with you here, even though I've reverted removals in the past. It seems that other editors wanted to keep it, but I'd be glad to see it go, I'd think anyone searching for the ATHF episode will probably not be too confused to find an article about a supercomputer if they searched for that term. -- JSBillings 11:36, 27 June 2007 (UTC)[reply]

The point is not whether someone searching for the episode would be surprised to end up at a supercomputer article; it's whether someone searching for that episode would stumble across this page while trying to find it. If it is not easy enough for people to find, it (Wikipedia) will not get used as a resource as much. It's all about making things more easily accessible to people. Jaqie Fox 05:17, 28 June 2007 (UTC)[reply]

I looked at other articles that share their name with ATHF episode names. Most of them either do not have a link directly to the ATHF article (The and Super Model for example) and some refer to a disambiguation page which links to the ATHF episode (Circus for example). I think that a link to a disambiguation page would look better than a link to a cartoon at the top of an article about supercomputers. -- JSBillings 13:58, 28 June 2007 (UTC)[reply]

I heartily agree. total removal is bad, a disambiguation link such as is on Circus is much preferred to the current link. Jaqie Fox 21:52, 28 June 2007 (UTC)[reply]

Ok, Supercomputer (disambiguation) exists and is referred to at the top of the article. I just added the Supercomputer and Super Computer links. I'm thinking it might be nice to link to High Performance Computing too. -- JSBillings 22:44, 28 June 2007 (UTC)[reply]

I still think it's inappropriate. We don't have disambiguation links for Super Bowl, Super Model, PDA, etc. for ATHF episodes, because it's such an obscure show. And the episode articles are merely stubs. It's like an ATHF invasion of wiki. - Animesouth 03:19, 30 June 2007 (UTC)[reply]

Pardon the change, but this is turning into a single long conversation thread anyway, so there's no use in keeping all that spacing, which wastes so much space in this case. Anyway, I personally hate ATHF and wish it would vaporize into thin air with all the other stuff I feel is crap, but it is definitely not obscure by the Wikipedia definition, and if you feel the stubs don't belong then you should edit them into full articles, or maybe campaign to merge them into a single ATHF article instead of one for each episode, but this is not the place to discuss that; the ATHF page is. The disambiguation page as it stands now is precisely what was needed. Whether or not to link ATHF from here should no longer be discussed, because it is not linked from here; it is linked from the disambiguation page itself. Jaqie Fox 06:27, 30 June 2007 (UTC)[reply]

I think the link to the Supercomputer_(disambiguation) page is a good compromise. Ttiotsw 08:02, 3 July 2007 (UTC)[reply]

An ATHF episode may be significant enough to have a wiki article (which I certainly doubt, but as you stated, that's a different argument altogether), but is it significant enough to be listed as a disambiguation of a much more encyclopedia-worthy article? No one is going to remember ATHF in 10 years. But supercomputers will be around, if not merely for historical purposes. Even "The Sopranos", which is a vastly more popular cable TV show, does not have disambiguation links for its episode titles when they coincide with major articles. Why should ATHF receive preferential treatment? -Animesouth 14:06, 3 July 2007 (UTC)[reply]

The disambiguation page is a compromise between those who think it is useless to link to an ATHF page on this article, and those who want to maintain a useful project that has information for any audience. I tend to agree with you that a TV episode is an ephemeral item, however it is actually pretty hard to find the ATHF episode entry if you're actually looking for it on wikipedia. The disambiguation page solves the problem. -- JSBillings 16:12, 3 July 2007 (UTC)[reply]

JSBillings, it's not an issue nor a war, not anymore. I contacted an admin a few moments ago (as animesouth had been vandalizing my talk page with false vandalism warnings, which is in itself vandalism) and got it all straightened out. check my and animesouth's pages if you want more info :) Jaqie Fox 16:39, 3 July 2007 (UTC)[reply]

Ack! Please stop removing the colons for indenting. If you feel that it takes up too much space, customize the stylesheet you use to view Wikipedia. Read Help:User_style for more information about that. In fact, stop removing other people's comments altogether! -- JSBillings 12:26, 5 July 2007 (UTC)[reply]


By WP:VAND definition, removal of user discussion comments is considered vandalism: "Discussion page vandalism: Blanking the posts of other users from talk pages other than your own". -Animesouth 01:39, 9 July 2007 (UTC)[reply]

Software Tools

I reverted the Software Tools section to something that was actually about software tools. I removed the "Virtual Supercomputer" reference, because the definition of supercomputer is better explained later. The software tools section starts out OK, but then turns into an advertisement for Apple's software. JSBillings 13:34, 9 May 2007 (UTC)[reply]


Shaw TSRTWO Project ?

Am I the only one who thinks this addition is rather suspicious? Some more detailed references seem to be necessary...

JH-man 08:51, 11 May 2007 (UTC)[reply]

I agree. Considering it links to pages created by the same user as the editor who added it, and the name of the user, I tend to think it is quite suspicious. I haven't heard of it before, and judging from the fact that all the pages that it refers to were created at the same time as the entry with no external references, it's probably bogus. JSBillings 11:42, 11 May 2007 (UTC)[reply]

Small error / inconsistency with tabulators linked

If you look at the very beginning of the Wikipedia article, and then at the linked http://en.wikipedia.org/wiki/Tabulating_machine page, the year in which the New York Times first used the 'supercomputer' term is different. As I do not know the proper year and am incredibly tired (insomnia), would someone please look up the right one and fix this... oh, and please remove this entire comment of mine once you have, if you would. Thanks! Jaqie Fox 12:32, 28 May 2007 (UTC)[reply]

Cleaned up "Software Tools" section

I removed a lot of text from the "Software Tools" section because part of it read like an advertisement for Apple's Shake (software) app. I think the section really needs to be expanded, discussing other tools, such as performance tuning tools (vtune, for example) and debugging tools for supercomputing/HPC code (such as totalview). JSBillings 14:14, 4 June 2007 (UTC)[reply]

New information about BlueGene

All the information about the BlueGene/P was added without any references. The current references are all old, and only refer to the BlueGene/L. -- JSBillings 14:23, 12 July 2007 (UTC)[reply]

(Note: I'm not the editor who added Blue Gene/P in the first place.) This page has had over-enthusiastic additions of "the fastest computer" several times over the years. However, the lead para on the timeline says that we now use the TOP500 as our benchmark. Therefore, I removed most of the info, since it is in the Blue Gene article anyway. I also adjusted the references for BlueGene/L, added a reference for Blue Gene/P, and removed the assertion that Blue Gene/P is currently the fastest, since it is not yet deployed. I should probably also move it to the "research" area until it actually shows up on the TOP500, but I decided to trust IBM for a few months. Feel free to be harsher as necessary. -Arch dude 16:55, 4 August 2007 (UTC)[reply]
So much for trust. I finally noticed this again and removed the paragraph entirely, since it proved to be incorrect. -Arch dude (talk) 10:33, 27 April 2008 (UTC)[reply]

Unified formatting for flop/flops/FLOPS

Between petaflop, petaflops, petaFLOPS, PFLOPS, TFLOPS, etc. I think there should be a standard usage throughout this article. It gets confusing when two different notations are used for the same measurement.

The article has a specific paragraph explaining the units of measurement, and the article uses TFLOPS consistently instead of teraflops. I therefore added the definition of PFLOPS to the definition paragraph and changed all occurrences of "petaflops" to PFLOPS. -Arch dude 16:47, 4 August 2007 (UTC)[reply]
A greater problem exists in the performance figures for each machine in the table. The performance figures do not correspond to the performance of any of those machines in the year of their introduction, be they Cray-1 or CDC 205, etc. The cited performance figures are optimistic at best, and only in the later years after their introduction. Additionally, at best only one site in the world existed for the 205 to anywhere near approach that performance, because that configuration of the machine only existed at one site (all other sites having far smaller pipe configurations). I would go so far as to suggest removing that entry, because I never saw that performance, and it is likely a cooked-up marketing number. The article as a whole needs a good going-over. 143.232.210.38 (talk) 23:37, 28 December 2007 (UTC) --enm[reply]

Repairing vandalism

There were a lot of things removed by several vandals. Including this was a bunch of perfectly good links, that I've restored. I also restored the Quasi-supercomputing section, which must have been lost in the vandalism fixes. -- JSBillings 21:30, 24 September 2007 (UTC)[reply]

FLOPS versus OPS

There is no explanation of "OPS" and derivatives, used for pre-1960 computers in the table, and no explanation of how it relates to FLOPS w.r.t. processing speed/time. —DIV (128.250.204.118 00:56, 24 October 2007 (UTC))[reply]

FLOPS = Floating-Point Operations Per Second
OPS = Operations Per Second

FLOPS is simply restricting your performance measurements to only operations dealing with floating-point numbers.

Veddan (talk) 15:41, 31 May 2008 (UTC)[reply]
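To make the prefix relationships behind these definitions concrete, here is a generic unit-conversion sketch (not tied to any particular machine in the article):

```python
# SI-prefix multipliers for FLOPS figures as used in the article.
UNITS = {"MFLOPS": 10**6, "GFLOPS": 10**9, "TFLOPS": 10**12, "PFLOPS": 10**15}

def convert(value, from_unit, to_unit):
    """Convert a performance figure between FLOPS prefixes."""
    return value * UNITS[from_unit] / UNITS[to_unit]

# 1 quadrillion (10**15) floating-point operations per second is 1 PFLOPS:
print(convert(1, "PFLOPS", "TFLOPS"))  # 1000.0 TFLOPS
```

This also answers the unit question raised earlier on this page: 1 quadrillion floating-point operations per second is 1 PFLOPS, i.e. 1000 TFLOPS.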

Definition

I think the definition needs work: Surely a supercomputer is a computer that is 'massively' faster than an 'average' (or even 'good') contemporary computer, irrespective of whether the prospective supercomputer is actually the fastest or 'near' the fastest.
In particular, consider that the TOP500 list of 27 June 2007 listed the world's fastest "supercomputer" speed as 280.6 TFLOPS, while the world's 500th fastest "supercomputer" speed was only 4.0 TFLOPS, basically 70 times slower. Note further that the slower machine was dated 2007, whereas the faster machine was dated 2005.
It is not accurate to state that the slower machine "led the world (or was close to doing so) in terms of processing capacity, particularly speed of calculation, at the time of its introduction" (as the article currently reads), and yet it is accepted that it is a "supercomputer"!
— DIV (128.250.204.118 01:12, 24 October 2007 (UTC))[reply]

Update on speed of Blue Gene/L

Blue Gene/L seems to have been upgraded over summer, and is now clocking in at 478 TFLOPS. References http://news.bbc.co.uk/1/hi/technology/7092339.stm http://www.hpcwire.com/hpc/1889245.html http://www.hemscott.com/news/latest-news/item.do?newsId=53878217307964

Should maybe change the article to reflect this. Malbolge 12:06, 13 November 2007 (UTC)[reply]

Actually, that's the BlueGene/P, and I don't believe there is a real-world installation yet. (correct me if I'm wrong). —Preceding unsigned comment added by Jsbillings (talkcontribs) 12:44, 13 November 2007 (UTC)[reply]
Nope, it is the BlueGene/L. According to the TOP500, it achieved 478.2 TFLOPS after a recent upgrade, as mentioned before. Rilak 13:41, 13 November 2007 (UTC)[reply]

Ranger

I'm surprised there's nothing on Sun Microsystems' new supercomputer called Ranger in Wikipedia yet. —ZeroOne (talk / @) 00:04, 7 January 2008 (UTC)[reply]

Inconsistency in List vis-a-vis Top 500

Hi,

According to the Top 500, the Thinking Machine supercomputer of 1993 had a "Rpeak Sum (GF)" of 691. I assume that they mean a top speed of 691 GFLOPS. But the list here says the Thinking Machines CM-5/1024 in 1993 was capable of only 65.5 GFLOPS. Why the inconsistency?

Thanks,

210.206.137.53 (talk) 06:06, 18 January 2008 (UTC)[reply]

The list here is correct. If I am not mistaken (I'm not familiar with Thinking Machine's systems) the CM-5's architecture supports a maximum of 16,384 processors. The system listed here is clearly stated to have had 1,024 processors, thus the difference in performance. Also, I think that the TOP500 always gives the maximum theoretical performance when discussing a system, and the actual benchmarked performance when discussing an installation. Rilak (talk) 12:44, 18 January 2008 (UTC)[reply]
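The peak-versus-benchmarked distinction above can be illustrated with the conventional peak-performance formula (processors x FLOPs per cycle x clock rate). The numbers below are hypothetical, chosen only to show how a 1,024-processor machine's theoretical figure is derived; they are not actual CM-5 specifications:

```python
def peak_gflops(processors, flops_per_cycle, clock_mhz):
    """Theoretical peak (Rpeak): processors x FLOPs/cycle x clock rate."""
    return processors * flops_per_cycle * clock_mhz * 1e6 / 1e9

# Hypothetical 1,024-processor machine, 4 FLOPs/cycle, 32 MHz clock:
print(peak_gflops(1024, 4, 32))  # 131.072 GFLOPS theoretical peak
```

Benchmarked (Rmax) figures come in below this theoretical ceiling, which is one reason two sources can quote quite different numbers for the "same" machine.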

Software tools - Open source community

This quote seems a bit... anti-open source. I doubt anyone in the open source community would deliberately create 'disruptive' software when it comes to this field, but I'm not sure; maybe someone else noticed that? "open source community which often creates disruptive technology in this arena." Akritu (talk) 07:49, 21 January 2008 (UTC)[reply]

I linked it. "Disruptive technology" is a marketing term from the dot.bomb era. It is (was) actually a highly favorable description: if you had disruptive technology, you would make a lot of money, and if you did not, you were a dinosaur. -Arch dude (talk) 01:35, 4 May 2008 (UTC)[reply]

Restriction to Top500

I removed a non-Top500 entry from the list. The entry was for TACC and was based on a press release. While I have no reason to doubt that TACC is as fast as described, we need to have some sort of objective standard, and today, as flawed as it may be, Top500 is that standard.

There have been many, many announcements of "fastest computer" since 1993 that are not on this list, and the list will become unwieldy if we add them all. Allowing self-proclaimed "#1"s in this list would be the equivalent of allowing any professional American football team to add itself to a list of #1 football teams based on the weekly standings rather than on the Super Bowl. -Arch dude (talk) 09:56, 27 April 2008 (UTC)[reply]

What are the sources for the list? I understand prior to 1993 there are various sources. Otherwise TOP500. I am looking for a list just like this for print publication and would like to source them accurately (including a "compiled by" source) The TACC DATA hopefully will make the next list. ---(austexcal, may 19,2008) —Preceding unsigned comment added by Austexcal (talkcontribs) 22:55, 19 May 2008 (UTC)[reply]

Hmm... maybe we should open a new article consisting of a list of supercomputers, HPC, distributed computing, and other various high-performance research or enterprise systems, because most people use Wikipedia as a learning center, so I think we should document them all in one place. People might be more interested in looking at some of the major breakthroughs in the list to better understand how multicore architecture works. At least I would be very interested in reading it. But for the list in the article, I suggest any high-performance enterprise system can be placed on there, but """TRY""" to narrow them down to between 50~150 FLOPS, unless they are for research purposes; otherwise it would look like too much of a "How to buy HPC" topic rather than understanding the history. = = --Ramu50 (talk) 05:45, 11 June 2008 (UTC)[reply]

Sourcing for peak speed numbers

Where are these numbers coming from? Are you sure they're even right? For example the Cray corporate website's history section says that the Cray-1 had a top speed of 160 MFLOPS, http://www.cray.com/about_cray/history.html This section claims it as 250 MFLOPS Phatalbert (talk) 00:33, 11 June 2008 (UTC)[reply]

IBM Roadrunner

Should the IBM Roadrunner be added to the supercomputer timeline? -- Alan Liefting (talk) - 03:39, 11 June 2008 (UTC)[reply]

Please discuss above in the section "Restriction to the TOP500". -Arch dude (talk) 11:53, 11 June 2008 (UTC)[reply]

Virtual Tape Library

Is VTL design for supercomputer, HPC or just regular workstations / servers? --Ramu50 (talk) 00:45, 17 June 2008 (UTC)[reply]

(Please add new sections at the bottom, not the top.)
VTLs are generally associated with mainframes and enterprise servers rather than supercomputers. -Arch dude (talk) 03:40, 17 June 2008 (UTC)[reply]

Sequoia

This new IBM computer is to be built: "IBM is to build a hugely powerful supercomputer capable of performing at 20 petaflops per second, twenty times faster than the current record holder, namely the 1 petaflop Roadrunner machine it delivered back in June to Lawrence Livermore National Laboratory"


--MurderWatcher1 (talk) 14:59, 5 February 2009 (UTC)[reply]


While it is good to give indications of future systems, I would make this more generic. Something like: "It is estimated that some supercomputers may achieve a peak performance of approximately 20 petaflops by the end of 2011. IBM has announced plans to deliver such a machine. This 20-fold jump from 1 petaflop in 2008 (the Roadrunner and Jaguar systems) to 20 petaflops in 2011 would actually exceed the typical Moore's-law rate of increase." --Coffeespoon (talk) 18:38, 21 June 2009 (UTC)[reply]
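The Moore's-law comparison in the comment above can be checked with simple growth arithmetic. This is just a sketch of the calculation, assuming the common rule-of-thumb doubling periods of 18 and 24 months (assumptions, not figures from this discussion):

```python
# Growth factor implied by a given doubling period, vs. the claimed jump.
def factor(years, doubling_months):
    """Performance growth factor after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

claimed = 20.0            # the 1 PFLOPS (2008) -> 20 PFLOPS (2011) jump
print(factor(3, 18))      # 4.0 with an 18-month doubling period
print(factor(3, 24))      # ~2.83 with a 24-month doubling period
# Either way, well under the claimed 20x over the same three years.
```

So a 20x jump in three years would indeed exceed either doubling-period assumption, as the comment states.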

The IBM System x iDataPlex supercomputer can perform 300 trillion calculations per second, operates on 3,240 Intel 5500-series 2.53 GHz processor cores arranged in 45 file-like stacks, is Canada's fastest supercomputer, is the number one supercomputer outside of the United States, and is the 12th fastest globally. Is it too slow to be listed? Supercomputer-related citations at SciNet Consortium. SriMesh | talk 00:13, 19 June 2009 (UTC)[reply]

This article is not a list of all supercomputers. The only list in this article is the list of computers that were the number one supercomputer in the world at some time. -Arch dude (talk) 11:13, 20 June 2009 (UTC)[reply]

I agree. I also think we can delete "In May 2008 a collaboration was announced between NASA, SGI and Intel to build a 1 petaflops computer, Pleiades, in 2009" since there are actually several systems slated for delivery in 2009 that will exceed 1PF. --Coffeespoon (talk) 18:44, 21 June 2009 (UTC)[reply]

I concur. This article suffers continually from "recentism," and therefore needs frequent work of this type. It's inherent in the nature of the article that we must basically violate the "recentism" guideline that we use for almost all other Wikipedia articles. Please be bold and edit the article. -Arch dude (talk) 19:52, 21 June 2009 (UTC)[reply]

"semi-infinite "

Can something even be semi-infinite? I know what you're going for with the hyperbole, but is it the best way to convey information in an encyclopedia? Sorry for not editing this myself, but my knowledge of the subject is limited. Stupidstudent (talk) 07:33, 1 August 2009 (UTC)[reply]

It's a maths term for when you have a finite number of variables and infinite constraints, or finite constraints and infinite variables. We have a wiki article on semi-infinite programming, which I suggest we should link to. Ttiotsw (talk) 08:49, 1 August 2009 (UTC)[reply]
Is it worth linking to in the article? Stupidstudent (talk) 21:19, 2 August 2009 (UTC)[reply]

Roadrunner beaten.

Roadrunner is no longer the fastest supercomputer.

http://money.cnn.com/news/newsfeeds/articles/marketwire/0559346.htm

http://www.top500.org/ —Preceding unsigned comment added by 99.130.196.154 (talk) 05:30, 16 November 2009 (UTC)[reply]

So why didn't you update the table? —Preceding unsigned comment added by 64.149.235.218 (talk) 05:27, 18 November 2009 (UTC)[reply]

Historical omission

The Atlas Computer was the world's fastest computer in 1962. Should it be added to the list?

A period of time existed when all computers were deemed "fast". After a while, that term lost its meaning. When it came time to justify a faster machine, specifically for fusion bomb design, which also used the single-word prefix "super", guys from the bomb labs went before Congress to justify their purchase with the prefix. The page is actually pretty messed up as it is. 143.232.210.38 (talk) 17:51, 13 July 2010 (UTC)[reply]

Operating system makeover

Quote: Operating system section

are at least as complex as those for smaller machines. Historically, their user interfaces tended to be less developed, as the OS developers had limited programming resources to spend on non-essential parts of the OS (i.e., parts not directly contributing to the optimal utilization of the machine's hardware). These computers, often priced at millions of dollars, are sold to a very small market and the R&D budget for the OS was often limited. The advent of Unix and Linux allows reuse of conventional desktop software and user interfaces.

Comment: irrelevant to supercomputing aspects of operating systems. The quote is about

  • servers in general;
  • computer programming in general;
  • marketing (probably stale) in general;
  • business budgeting for R&D.

Also, "are sold to a very small" needs a citation or rewording. This drivel is probably worse than the blank space its removal would leave. Besides, Supercomputer is too large (for those of us who use very old supercomputers to edit with); and actually, factually, "some sections may need expansion", per its WP:BCLASS classification (italics mine).


For the same bulleted reasons above I removed

It is interesting to note that this has been a continuing trend throughout the supercomputer industry, with former technology leaders such as Silicon Graphics taking a back seat to such companies as AMD and NVIDIA, who have been able to produce cheap, feature-rich, high-performance, and innovative products due to the vast number of consumers driving their R&D.

My thought upon reading it was an innocent and natural opposition: "It would be more interesting to note the operating system aspects of supercomputing", and I think that will apply to most readers.


Likewise

In the future, the highest-performance systems are likely to use a variant of Linux but with incompatible system-unique features (especially for the highest-end systems at secure facilities).[citation needed]

The encyclopedic facts of supercomputing operating systems might be that the "incompatible system-unique features" are in actuality instruction sets.

Rather it might have said:

Supercomputing operating systems involve designing operating-system-specializing microprocessors with specially designed instruction sets in their controller sections, and these designs are not likely to remain state-of-the-art, and that's why supercomputers are expensive—they are custom built.

That was an educated guess.

Yeah, and it's got a generally right conclusion (custom-built hardware) but a wrong way of getting there. The OS is generally a port (a transport or copy) of existing software with minimal revisions. Your fallacy is assuming the software is still designed from scratch; it no longer is. The last firm which attempted a complete supercomputer OS design was ETA Systems, and their attempt (EOS) was a major factor, on two levels, in killing the company. Arguably, Thinking Machines made a similar set of mistakes, but that's a harder case to explain because of the separation of the CM from its workstation servers. The CM was more of an attached processor. 143.232.210.38 (talk) 18:00, 13 July 2010 (UTC)[reply]

Here is some WP:NOR I removed

In the future, the highest-performance systems are likely to use a variant of Linux but with incompatible system-unique features (especially for the highest-end systems at secure facilities).[citation needed]

Thank you. We are all welcome. — CpiralCpiral 23:06, 26 December 2009 (UTC)[reply]

All Computers Made Before The 60s Were Supercomputers of Their Time

Weren't all computers made before the 60s supercomputers of their time? --Matthew Bauer (talk) 02:57, 21 June 2010 (UTC)[reply]

NO. The computer manufacturers each made a range of machines, and the smaller ones were never the fastest of their time. An example is the IBM 1401. -Arch dude (talk) 17:14, 21 June 2010 (UTC)[reply]

Modern desktop faster than 10 yo supercomputer???

The article included the phrase "a single modern desktop PC is now more powerful than a ten-year-old supercomputer" and then proceeded to claim that a $4000 workstation (presumably from 2010) outperforms a supercomputer from the 1990s. That's not 10 years; that's up to 20 years' difference. For example, the state-of-the-art supercomputer of the year 2000 did 7 TFLOPS (double precision), while a high-end desktop may do 0.1 TFLOPS in double precision and 1 TFLOPS in single precision with GPGPU. Also in terms of memory capacity and bandwidth, there is quite a gap between 10-year-old supercomputers and present-day desktops, which cannot be equipped with multi-TB RAM memory and 100s of TB of hard disk space.

I removed this statement. Han-Kwang (t) 20:22, 28 June 2010 (UTC)[reply]
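The removed claim can be sanity-checked with doubling arithmetic. The performance figures are the ones quoted above; the 18-month doubling period is a rule-of-thumb assumption, not something asserted in this discussion. A 70x gap corresponds to roughly nine further years of growth, so a present-day desktop would trail a 2000-era top machine by nearly two decades of progress, consistent with the objection above:

```python
import math

# Quoted figures from the comment above: a 2000-era supercomputer at
# 7 TFLOPS vs. a present-day desktop at 0.1 TFLOPS (double precision).
gap = 7.0 / 0.1              # 70x performance gap
doublings = math.log2(gap)   # about 6.1 doublings separate them
years = doublings * 18 / 12  # years of growth at an 18-month doubling period
print(round(years, 1))       # roughly 9.2 years
```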

'All electricity to heat'

The `Supercomputer challenges, technologies` section contains this line:

"A typical TOP500 supercomputer consumes between 1 and 10 megawatt of electricity and converts all of it into heat."

Bold emphasis mine. Surely all the power is not converted to heat or else there would be no actual computing being done.

It's all converted to heat in the process of being used for computing. Even if all of the other components (power supplies, fans, etc.) were 100% efficient, the electricity expended in switching each transistor (the basic element of the computational process) converts electricity to heat. This is why a powerful processor needs a big heatsink. -Arch dude (talk) 15:24, 31 January 2011 (UTC)[reply]

Deep Thought should be added to pop culture/fictional supercomputer list

Deep Thought as well as the Earth(in the fictional context of Hitchhikers Guide to the Galaxy) should be added to this section. —Preceding unsigned comment added by 132.198.196.62 (talk) 16:35, 17 March 2011 (UTC)[reply]

Supercomputer uses in research section

I'm adding a section on the uses of supercomputers in research. While this information is scattered throughout the article, there is a lot to be said on this that isn't said. Also, I haven't found an article anywhere else that addresses this, so I figure this is as good a place as any to organize this information. This appears to be an article that has had a lot of effort put into it, so as a newcomer to this page (and as one that has little education in the subject) I don't want to step on any toes; any helpful emendations to this section would be welcome. Kant66 (talk) 02:41, 18 May 2011 (UTC)[reply]

Welcome and thanks. I (one of many editors of this article) think that your section is appropriate and is in the correct place in the article. Please continue to contribute. Now for a few suggestions. First, I think the section would benefit from a more "historical" perspective: your thesis is correct, but supercomputers have contributed to "bleeding-edge" research since their inception, and each generation tends to attack relevant problems that are then solved and no longer need supercomputers, or that can be attacked by newer commodity computers, or (as in the case of weather prediction) still require supercomputers but have become operational problems rather than research problems. May I suggest that you try to find an example problem in (say) each decade since (say) 1960 that was attacked as a research problem using supercomputing? For each such problem and attack, mention the computer and the outcome. -Arch dude (talk) 01:25, 19 May 2011 (UTC)[reply]
I added a table with some decade-by-decade examples. I also added another section to cover current uses of supercomputing. I wasn't sure if it would be better to fold this section into the timeline or to keep it separate. Also, I'm aware that the term "supercomputer" didn't come around until the 1960s, but thought that the 1940s and 1950s operational capabilities provide a nice basis for comparison. However, I'm open to starting in the 1960s. Kant66 (talk) 16:40, 25 May 2011 (UTC)[reply]

Likely new record for the quasi-supercomputing category

That section currently reads "The fastest cluster, Folding@home, reported 8.8 petaflops of processing power as of May 2011.". Compare this with the Bitcoin mining network, which BitcoinWatch.com estimates to use 65.5 petaflops as of today (with a rather impressive historical growth rate too.) 99.58.56.97 (talk) 18:40, 8 June 2011 (UTC)[reply]
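As a rough comparison of the two figures quoted (my own illustrative arithmetic; note that Bitcoin mining is integer SHA-256 hashing, so any petaflops number for it is itself an estimated conversion):

```python
# Comparing the two self-reported throughput figures quoted above.
# Caveat: Bitcoin mining does integer hashing, not floating-point math,
# so BitcoinWatch.com's petaflops figure is itself a rough conversion.
folding_at_home_pflops = 8.8    # reported May 2011
bitcoin_network_pflops = 65.5   # BitcoinWatch.com estimate, June 2011

ratio = bitcoin_network_pflops / folding_at_home_pflops
print(f"Bitcoin network ~{ratio:.1f}x the Folding@home figure")
```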

SC conference/show?

Does it have a WP article? I can't find it. FuFoFuEd (talk) 14:59, 19 June 2011 (UTC)[reply]

K Computer

Although the K computer is correctly referenced as the fastest supercomputer in the lead and the Timeline of supercomputers (at the end of the article), it also should be referenced in:

  • Modern supercomputer architecture
  • The fastest supercomputers today, Current fastest supercomputer system

and possibly in:

  • Timeline of supercomputer uses (at the beginning)
  • Supercomputer challenges, technologies

replacing the Tianhe-1A. --RoyGoldsmith (talk) 12:26, 22 June 2011 (UTC)[reply]

I started to look at this page, and suddenly music started playing in the back of my mind: a shadow hanging over me, Oh, yesterday came suddenly. This article is just dated, it reads like yesterday... Oh, I believe in yesterday...

I do not have time to fix/rewrite it and I do not want to put a rewrite tag on it yet, so let us see if I can talk someone into adding refs so the unref-tags can come off, and then gradually bring it up to date, etc. So who is playing with this page these days? History2007 (talk) 20:51, 7 July 2011 (UTC)[reply]

I guess now that there is no response, I need to say it is out of date with some type of tag. Even the computer in the main image is now outrun by the iPad. So every aspect is outdated. But I changed the Cray 1985 image so at least the reader gets a feeling for what a modern machine looks like from the outside. History2007 (talk) 20:06, 8 July 2011 (UTC)[reply]
What is out-of-date about it? I strongly dispute that "every aspect is outdated". Most of the article appears to be about general principles and definitions, approaches and limitations to them, origins and history, etc. Very little appears to be intended to be outdated-current (however, "fastest" or "largest" should definitely be written as "as of [month/year]..." or somesuch) or forward-looking that has been disproven by reality (like a trip to Epcot Center). The thumb is not supposed to be a current top-performer, but more an iconic or representative image of the idea. I agree with the existing, that a purpose-built supercomputer is more representative than a generic device that can do better. DMacks (talk) 20:20, 8 July 2011 (UTC)[reply]
Oh, don't get upset now. I only tagged it after no one responded to offer to work on it. How about your fixing the unref issue, then we see. Overall, my feeling was/is that it is just yester-news, but I really do not want to spend time cleaning it up yet. So would you like to help? History2007 (talk) 20:28, 8 July 2011 (UTC)[reply]
This isn't quite my field any more, I was just coming in from a notice-board message. I agree with the concern there that this tagging seemed like a heavy-handed (overly broad-brush) approach. Which is why I specifically said what topics seemed okay that got swept in, and asked for clarification about the scope of the problem. Major sections of the article appear to be intended to give historical perspective, not just document the latest/recent advances (I agree with the datedness of the newsy sections). The new lede image is a great one. DMacks (talk) 20:50, 8 July 2011 (UTC)[reply]
I think the problem is that right from the start it seems stuck with the CDC mentality in the lede and that yesterday's approach tone continues throughout, just presenting history not architectural issues. That was why it really made me sing Yesterday as I read it. I will get to it sooner or later, but hopefully someone will do some clean up before I can focus on it. History2007 (talk) 21:03, 8 July 2011 (UTC)[reply]
Ah yeah, looking closer at some of the general intro stuff, I agree that it's more deeply stale than I thought. DMacks (talk) 21:09, 8 July 2011 (UTC)[reply]
Anyway, I have started cleaning up the peripheral material such as the processor types, etc. and started History of supercomputing to discuss the history items (which are pretty incomplete here anyway) there. Once those are in good shape I will discuss the architectural issues, modern trends, etc. here, building on that. So I will gradually fix it among other things that I am doing. My guess is that in a month or so it will probably be in better shape with suitable sub-articles. History2007 (talk) 22:34, 9 July 2011 (UTC)[reply]

By the way DMacks, a month later, let me note that I have not forgotten about this article, but there is plenty of peripheral material that needs to be written before I can fix things here, e.g. how the Power 775 (I have started that now) is moving back to water-cooled systems vs the Blue Gene low-power approach, etc. That type of thing needs to be done before I can do a section on heat management, and eventually the whole issue of heat in supercomputing will probably need a separate article - it really deserves a page. And of course there need to be sections on how the OS issues get handled, etc. and all of that may take well over two months to do, in order to get it right. History2007 (talk) 00:14, 14 August 2011 (UTC)[reply]

The next-generation BlueGene is expected to be water cooled, according to the prototype boards shown at SC10 and other public information, so water cooling and low power are not mutually exclusive, though high power pretty much mandates water cooling (the old NEC Earth Simulator building cutaway would be fun to show as a contrary example). Indulis.b (talk) 00:54, 27 September 2011 (UTC)[reply]
Let us see if they get a water-cooled B-G working. By the way, is there a source that compares air cooling to the old VW air cooling somehow? That would be fun to add. In any case, since you mentioned it, I built Aquasar along those lines. History2007 (talk) 08:00, 28 September 2011 (UTC)[reply]

Why does Russia have no supercomputer?

I thought Russia was a technologically developed country in almost all sectors... 219.151.158.84 (talk) 17:10, 9 December 2011 (UTC)[reply]

Interesting observation. But as of Nov 2011, this system in Moscow is number 18 in the world. In June 2011 they announced plans for larger systems, but that is in the future. I added a section here anyway.
As a whole how many personal computers does Russia export? How many memory chips do they sell? Korea and Taiwan sell more memory chips across the world than Russia. So there does not seem to be a base for that in their computer industry. The Japanese have a large computing industry infrastructure, etc.
But, a major new development was that the Chinese now have a supercomputer with "their own CPU". Slower than Tianhe, but a major issue. I will clean up this article, one day, one day, and add all that... History2007 (talk) 22:55, 9 December 2011 (UTC)[reply]
PS, FYI I built a quick page for T-Platforms, which is the main company there. You can look them up on the web for more info. History2007 (talk) 18:57, 12 December 2011 (UTC)[reply]

Removed citation from Wired.com that indicated power costs were $1 per Watt

I have just edited the section under "Current fastest supercomputer system" to a more up-to-date max power usage and removed what I believed not to be factual data. Is a Wired article even a valid source? I don't think any of their articles are scrutinized heavily in the industry. If someone can find a reputable source indicating that power is $1 per watt in Japan, I would love to see it. — Preceding unsigned comment added by JEIhrig (talkcontribs) 06:52, 19 January 2012 (UTC)[reply]

The total operating costs are about $10M per year. That was probably what they meant. So just under $1M per month to keep it running, which sounds to be in the right ballpark. Anyway, I fixed that for now. But almost nothing in this article has been double checked - there are many, many errors of commission and omission. I have been intending to work on it... intending to work on it... Soon.... soon... History2007 (talk) 09:44, 19 January 2012 (UTC)[reply]
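For what it's worth, a back-of-envelope sketch (my own arithmetic, not from any source) of why a "$1 per watt" figure is plausible only if read as an annual cost:

```python
# Sanity check of the "$1 per watt" claim: it only makes sense as an
# *annual* electricity cost. One watt drawn continuously for a year is
# 8.76 kWh, so $1/W/year implies a realistic industrial electricity rate.
HOURS_PER_YEAR = 24 * 365                      # 8760 hours
kwh_per_watt_year = HOURS_PER_YEAR / 1000.0    # 8.76 kWh per watt-year

rate = 1.0 / kwh_per_watt_year                 # implied price per kWh
print(round(rate, 3))                          # ~$0.114/kWh

# At roughly 12.7 MW of draw (an illustrative figure for the K computer),
# that rate gives an annual power bill in the same ballpark as the $10M
# per year total operating cost mentioned above:
power_kw = 12.7e3
annual_cost_musd = power_kw * HOURS_PER_YEAR * rate / 1e6
print(round(annual_cost_musd, 1))              # ~$12.7M per year
```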

Article clean up

I have eventually freed up from other things to start cleaning up here. The basic strategy is to have a series of well-sourced and error-free sub-articles that deal with each of the aspects such as architecture, distribution vs centralization, software issues, etc. Then this article will be a backbone that refers to those via Mains. It does need to be in "very good shape" given that it gets viewed about 1 million times a year, and we should be careful to spread only correct information here.

I have written a couple of articles now, e.g. history, architecture, etc. and will do a few more as I clean up here. I have not removed the outdated flag yet because there are still a few issues to resolve, but I should be able to fix them and remove the flag after fixing the lede as well. History2007 (talk) 21:50, 8 February 2012 (UTC)[reply]

Claim of 132 exaflop computer is suspect

See http://en.wikipedia.org/wiki/Talk:FLOPS#ISRO_.3E100_EFlops_by_2017_is_highly_unlikely Isamil (talk) 17:43, 28 May 2012 (UTC)[reply]