Talk:Exascale computing


folding@home

It is a historic moment for f@h to break 1 exaflop, but can somebody explain why this article states it is the first supercomputer to break the 1 exaflop barrier, and not the bitcoin network? I could be completely misunderstanding the issue – does it have something to do with the simplicity/precision of the calculations? Did the bitcoin network not pass 1 exaflop in 2013? 68.168.176.67 (talk) 00:58, 28 March 2020 (UTC) https://www.forbes.com/sites/reuvencohen/2013/11/28/global-bitcoin-computing-power-now-256-times-faster-than-top-500-supercomputers-combined/#f241ba56e5e4

OK, replying to myself – debating about the definition of "supercomputer" aside, I just don't think it's accurate to say that the barrier was first broken by F@H. I think it's ambiguous: are you saying "F@H broke 1 exaflop and is the first computer to do so" or "F@H broke 1 exaflop for the first time in F@H's history"? I think the use of the word "barrier" implies the former, unless I'm totally reading it wrong. Neither of the cited articles states that F@H is the first to break 1 exaflop, as far as I can tell. 68.168.176.67 (talk) 01:06, 28 March 2020 (UTC)
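On the precision question above: the Bitcoin network performs integer SHA-256 hashing rather than floating-point arithmetic, so any "exaflops" figure for it comes from multiplying the hash rate by an assumed flops-per-hash conversion factor. A minimal Python sketch of that conversion, using purely illustrative numbers (neither the hash rate nor the conversion factor below is a measured value):

  # Why Bitcoin "exaflops" claims are ambiguous: the network does integer SHA-256
  # hashes, so a FLOPS figure only appears once an arbitrary conversion is assumed.
  hash_rate_hps = 6.0e15       # hypothetical network hash rate, hashes per second
  flops_per_hash = 12_700      # illustrative conversion factor only, not a measured value

  equivalent_flops = hash_rate_hps * flops_per_hash
  print(f"'Equivalent' performance: {equivalent_flops / 1e18:.1f} exaFLOPS")
  # The result depends entirely on the assumed conversion factor, which is why such
  # figures are not comparable to a measured double-precision benchmark such as HPL.

The same hash rate gives a very different "exaFLOPS" number under a different assumed factor, which is why the comparison with F@H is ambiguous.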

Untitled

NNSA is an agency of the Department of Energy. I propose that the potential sponsors of exascale computing be identified as the Department of Energy's National Nuclear Security Administration and the Department of Energy's Office of Science.

How to deliver HPC Service from Exascale by Keith Vickers, OCF

Big data and Exascale are two themes that cropped up last year and will no doubt be heavily discussed this year. But why? For Exascale at least, the mere mortals on the street will not even get close to it in the next 5 years – if not ever. Why invest such large sums of cash and time when focusing on what you do best – i.e. the science, R&D, etc. – gives a better return on your investment?

That is not to say it is an irrelevant discussion. For some, Exascale might be required over short bursts or for specific, highly intensive projects. OCF is seeing some 'jostling' in the world of academia, with academics seriously looking at Exascale, but the reality is there will be very few Exascale implementations, if any, in UK academia. Let the Leviathans – IBM, HP, Google, Amazon, Microsoft, the global telecoms providers – build the framework and infrastructure. Let specialist HPC service providers sit on top of this infrastructure, providing the key skills to interface between the Leviathans and the man (or woman) in the lab.

In my opinion the pursuit of Exascale will deliver the same sorts of benefits for the man (or woman) in the lab that the US Space Programme did – not everyone has a Saturn V rocket, but we've all got Teflon-coated pans somewhere. Look for the spinoffs that you can adopt and put to effective use: faster interconnects, better parallelism, lower power consumption, etc. (TobiasMant (talk) 15:44, 31 January 2012 (UTC))

Dude, the Teflon-coated pans thing is so old and so wrong. Teflon has nothing to do with the space programme, and almost everyone knows it by now.

132.8 EFlops?

I think the article must be in error; this would be roughly 1000x the processing power of all 2012 Top500 computers combined, and would _massively_ distort the trend in http://en.wikipedia.org/wiki/File:Supercomputers.png

I removed this information until a more reliable source can be found (the cited reference also refers to "pentaflops"...). 193.170.132.220 (talk) 19:52, 16 May 2014 (UTC)
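As a rough sanity check on that claim (the combined TOP500 figure below is approximate and used for illustration only):

  # Rough sanity check of the removed 132.8 EFLOPS figure against the 2012 TOP500 list.
  claimed_eflops = 132.8               # figure removed from the article
  top500_2012_combined_eflops = 0.16   # ~160 PFLOPS, approximate combined Rmax of the Nov 2012 list

  ratio = claimed_eflops / top500_2012_combined_eflops
  print(f"Claimed figure is roughly {ratio:.0f}x the combined 2012 TOP500 Rmax")
  # -> roughly 830x, consistent with the "roughly 1000x" estimate above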

External links modified

Hello fellow Wikipedians,

I have just modified 3 external links on Exascale computing. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 02:25, 26 September 2017 (UTC)

Semi-protected edit request on 24 March 2019

"add the announcement made on 18 March regarding the future US exascale computer (it disappeared but I don't understand why, as it's a correct announcement) "

On 18 March 2019, the United States Department of Energy and Intel announced that the first exaFLOP supercomputer would be operational at Argonne National Laboratory by the end of 2021. The computer, named "Aurora", is to be delivered to Argonne by Intel and Cray.[1] Emmel (talk) 11:29, 24 March 2019 (UTC)

Done NiciVampireHeart 22:40, 27 March 2019 (UTC)

References

  1. ^ "U.S. Department of Energy and Intel to deliver first exascale supercomputer". Argonne National Laboratory. 2019-03-18. Retrieved 2019-03-18.

Fugaku and Frontier

As I read these two articles (https://www.pcmag.com/news/us-takes-supercomputer-top-spot-with-first-true-exascale-machine?utm_source=email&utm_campaign=whatsnewnow&utm_medium=title and https://top500.org/news/ornls-frontier-first-to-break-the-exaflop-ceiling/), the Fugaku is only potentially an exascale machine, whereas Frontier has been documented as such (this fact needs to be put into the article). 2600:6C67:1C00:5F7E:38F2:2574:E3E8:B972 (talk) 00:57, 31 May 2022 (UTC)

Chinese exascale computers

What about the multiple reports that China built at least 2 systems in 2021 that already reached exascale computing? Should we at least mention those reports, or consider them rumors? Is it a good idea to stick strictly to the TOP500 report just because it has become convenient? The articles were published in a few technology magazines:

2601:1C0:CB01:2660:64D7:D462:5C1A:8409 (talk) 17:57, 28 June 2022 (UTC)

It might be worth putting it in the China section, but carefully, as "rumoured". The reason for "carefully" is that everyone claims exascale, but it is all in the measurement. Firstly, as per the opening line of the WP page, the traditional understanding is that it means HPL in double precision, as per TOP500. Otherwise everyone will claim exascale by different measurements, and people have tried that. Secondly, I don't doubt that there are secretive computers that reached exascale performance before Frontier, and it may be worth clarifying that Frontier is the world's first *publicly* announced exascale computer. However, there are a lot of rumours and no public confirmation that the rumoured measurements make these computers exascale, as per the HPCwire article. Two experts did note that they think these computers exist, but you do need to carefully document what kind of evidence you are providing for this. - Master Of Ninja (talk) 17:55, 29 June 2022 (UTC)
The references to HPC Wire and The Next Platform do not present the two Chinese computers as "rumours", but as facts – and these guys know what they are talking about. Jack Dongarra accepts them as facts, as does David Kahaner. I have not seen this disputed anywhere. You will find Top500 leaders, like AMD, in their marketing implying that they were the first to cross into exascale, but this is by conveniently failing to mention that they are talking about Top500 – i.e. confusing the map for the territory. gnirre (talk) 23:20, 14 November 2022 (UTC)

Exascale should not demand Double Precision in this article

Hi guys!

The definition of exascale as demanding double precision is not correct. TL;DR: read the reference. That is not what it says.

And the reference is pretty good, so I suggest the article be adapted to match it, rather than trying to find some other reference.

I will now quote relevant parts from the reference, chapter 2. Do not stop reading after the first sentence:

"For most scientific and engineering applications, Exascale implies 10^18 IEEE 754 Double Precision (64-bit) operations (multiplications and/or additions) per second (exaflops). [...] However, there are critical Defense and Intelligence problems for which the operations would be over the integers or some other number field. Thus a true Exascale system has to execute a fairly rich instruction set at 10^18 operations per second lest it be simply a special purpose machine for one small family of problems."

-- ExaScale Computing Study: Technology Challenges in Achieving Exascale Systems (2008)

So there you have it. "Integer" performance and "other number field" performance are also included in this report on ExaScale Computing and Exascale Systems – which is what this article is about.

And this is not the report being sloppy in the definitions section. It goes on, for example, to recount applications of exascale that explicitly do not use "high precision". See figure 5.3, which leaves a blank on "High-precision arithmetic" for Energy, Biology, Climate and Industrial.

This article is not about Top500 and its Double Precision Linpack benchmark. I understand this probably is what has been causing the confusion. That has its own article.

The reference is good but also pretty old – 2008. Most interestingly, it does not mention AI/neural-network computation/deep learning as an application area, which as of 2021 is an extremely relevant application area of exascale. So someone might actually want to find a newer reference. gnirre (talk) 23:10, 14 November 2022 (UTC)