The Top500 list regularly polls the organisations that run supercomputers, ranking the largest, fastest, most efficient systems and so on. The 50th edition of the Top500 list was published yesterday. And there have been some changes.
The first change is the rise in the number of Chinese supercomputers, putting China ahead of the United States for the first time. And quite a few of these are high-spec beasts. Chinese systems are up from 160 (32%) in June to 202 (40.4%) on the latest list. The entry bar keeps rising too: the previous entry point was 432.2 TFlop/s, and it now sits 26.9% higher at 548.7 TFlop/s.
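For the curious, that jump in the entry bar is easy to sanity-check from the figures quoted above:

```python
# Sanity-check the entry-bar increase using the figures quoted above.
previous_entry_tflops = 432.2  # June 2017 entry point, TFlop/s
current_entry_tflops = 548.7   # November 2017 entry point, TFlop/s

increase_pct = (current_entry_tflops - previous_entry_tflops) / previous_entry_tflops * 100
print(f"Entry bar rose by {increase_pct:.2f}%")  # -> 26.96%, the ~26.9% quoted above
```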
There has been an increase in the number of computers using 10Gb/s Ethernet interconnects, taking share from both 1Gb/s Ethernet and Infiniband. Similarly, the use of coprocessors is still increasing. Nvidia cards provide the bulk of these, appearing in 17% of systems, with Intel's Xeon Phi supplying cards for 2.4% of platforms. But if you want a power-efficient system, the PEZY-SC2 accelerator is the way to go: systems built on it take four of the top five spots on the Green500 (which measures power efficiency in GFlop/s per Watt). Nvidia's own DGX Saturn V platform takes the remaining fourth spot, having held the top spot only a year ago.
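As an aside, the Green500 figure is simply sustained performance divided by power draw. A minimal sketch of the conversion, using made-up sample numbers rather than anything from the actual list:

```python
# Green500 efficiency: sustained Rmax divided by power draw.
# The sample numbers below are invented for illustration only.
def gflops_per_watt(rmax_tflops: float, power_kw: float) -> float:
    """Convert Rmax in TFlop/s and power in kW to GFlop/s per Watt."""
    return (rmax_tflops * 1_000) / (power_kw * 1_000)

print(f"{gflops_per_watt(rmax_tflops=1900.0, power_kw=150.0):.2f} GFlop/s/W")  # ~12.67
```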
There are lots of news items finally proclaiming Linux's dominance of the list, with all 500 systems now running some form of Linux. ZDNet and others are heralding this event, but it's been inevitable for some time. Between November 2001 and June 2004, Linux took control of this list, rapidly displacing the bespoke Unixes (Unixii?) and other custom operating systems that ran supercomputers.
Linux's support for clustered commodity hardware allowed companies to build 'cheap' supercomputers, as sketched below. The other benefit was an open source operating system that many suppliers adopted to replace their own, reducing the maintenance burden of their bespoke code. With Linux's code providing the basis of their environment, they only needed to add a little extra: perhaps some specific device drivers or memory management code. Linux's natural support for more esoteric architectures then drew other researchers in to use Linux to explore new interconnects and topologies. The core Linux environment has always had a lot of different models and research to draw upon, adopting the most scalable approaches on an ongoing basis. Linux deserves this achievement.
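To make the clustering point concrete, here is a minimal message-passing sketch in the style those commodity clusters run. It uses mpi4py as an illustrative choice, not anything the list mandates, though MPI itself is the canonical programming model for this kind of machine:

```python
# Minimal MPI sketch: each process computes a partial sum and rank 0
# reduces the results. Run with e.g.: mpiexec -n 4 python partial_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's id within the job
size = comm.Get_size()  # total number of cooperating processes

# Each process sums its own slice of the range 0..size*1000.
partial = sum(range(rank * 1_000, (rank + 1) * 1_000))
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} processes computed a total of {total}")
```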
Reversal of this dominance is unlikely without a major change in computing architecture. Quantum computing, anyone?