Japan to build fastest supercomputer in the world

The Japanese government is planning to build the fastest supercomputer in the world – to go with the fastest supercomputer in the world it already owns, the “K Computer”.

The K Computer – located at the Riken scientific institution in Japan – was this week declared the fastest supercomputer in the world by the High Performance Conjugate Gradients Benchmark project.

Using a new metric for ranking high-performance computing systems, HPCG found that the K Computer achieves 603 teraflops, beating China’s Tianhe-2, which reaches 580 teraflops, and Japan’s Oakforest-PACS, at 386 teraflops.
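The gap at the top is fairly narrow. As a quick illustration, sorting the figures quoted above reproduces the HPCG ranking (a toy sketch; the numbers are simply those in the article):

```python
# HPCG results quoted above, in teraflops.
hpcg_results = {
    "K Computer": 603,
    "Tianhe-2": 580,
    "Oakforest-PACS": 386,
}

# Sort by score, highest first, to recover the ranking.
ranking = sorted(hpcg_results.items(), key=lambda kv: kv[1], reverse=True)
for rank, (name, tflops) in enumerate(ranking, start=1):
    print(f"{rank}. {name}: {tflops} teraflops")
```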

Japan has 13 other supercomputers on the HPCG Benchmark list, behind the US, with about 18, and Germany, with about 20. Other countries with several supercomputers on the list include Russia and China.

Now Japan’s Prime Minister Shinzo Abe has called on the country’s technology sector to accelerate innovations in order to meet the data demands of new and growing markets, such as robotics, augmented reality, electric vehicles and renewable energy.

Another motivation for the Japanese government’s directive is that currently a lot of the country’s data processing is done by foreign companies such as Google and Microsoft.

The new supercomputer, named AI Bridging Cloud Infrastructure – or ABCI for short – will be made available for a fee to Japanese corporations, according to a report on the Reuters website.

The meaning of metrics 

Opinions vary on which supercomputers are the fastest – it depends on which benchmark you use. It’s a bit like arguing over whether a Windows machine is faster than a Mac – it’s probably best not to go there.
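Part of the variation comes from what each benchmark measures: the long-running Top500 list ranks machines by LINPACK, a dense-matrix solve, while HPCG times the conjugate-gradient method on sparse systems – a memory-bound workload closer to many real scientific codes. For a flavour of the kernel HPCG is named after, here is a minimal, illustrative conjugate-gradient solver; this is a toy dense version in plain Python, with a made-up 2×2 system and tolerance, whereas real HPCG runs on very large sparse grids:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive-definite matrix A (toy dense version)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual r = b - A x (x starts at zero)
    p = r[:]                 # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        # Matrix-vector product A p -- the memory-bound step HPCG stresses.
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Toy SPD system: [[4, 1], [1, 3]] x = [1, 2]; exact solution is [1/11, 7/11].
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
print([round(v, 4) for v in x])  # → [0.0909, 0.6364]
```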

However, the processing speed of supercomputers and data-centre servers is becoming a critical issue, as the amount of data they are required to process is widely expected to keep growing at enormous rates.

And no matter how many predictions are made about exactly how much data processing will be needed in the future, there are always stories about data centre overload.

Today is global electronics discounts day, or Black Friday – November 25 – and over the next week or two, there are likely to be a number of stories about shopping websites and data centres straining under the weight of all the online bargain hunters.

And a few months ago, the Pokémon Go game terrorised data centres everywhere as millions of people used their mobile devices to hunt for the little critters in the augmented-reality game.

According to Instor.com, the game’s players – who could be counted in the tens of millions globally – were overloading servers, causing the system to crash. There were “hundreds of reports of server problems”, says Instor.

If several augmented reality games like Pokémon Go become similarly popular at the same time, it would bring down the entire Internet. Probably.

AI is prepared for the big data deluge 

Data centres have been preparing for the growth in big data for some time, and supercomputers may be one possible answer.

The general perception of supercomputers is of government-owned machines crunching numbers for scientists studying anything from the weather to the general population.

But with the launch of machines such as Nvidia’s SaturnV, desktop supercomputers – or at least data centre supercomputers – may become more commonplace.

The DGX SaturnV, to give the machine its full name, has – count ’em – 63,488 gigabytes of RAM and 60,512 Intel Xeon E5-2698v4 CPU cores.

Nvidia adds that the SaturnV is ready for the heavy-duty demands of artificial intelligence processing, claiming the machine is “the world’s first AI supercomputer in a box… delivering performance equal to 250 conventional servers”.

Nvidia says the SaturnV is part of its efforts to build autonomous driving software. Moreover, its data centre revenues grew by almost 200 per cent last year, and clearly the company is hoping to continue the trend with SaturnV.

But it’s still some distance behind the market leaders in the data centre sector, and in supercomputers.

According to statistics on Top500.org – which monitors developments in supercomputing and has placed the Nvidia SaturnV as the 28th fastest supercomputer in the world – HPE has the largest “vendors system share”, followed by Lenovo and Cray.

Top500.org Supercomputer Vendors System Share

  1. HPE – 22.4 per cent
  2. Lenovo – 18.4
  3. Cray – 11.2
  4. Sugon – 9.4
  5. IBM – 6.6
  6. HPE/SGI – 5.6
  7. Bull – 4
  8. Inspur – 3.6
  9. Huawei – 3.2
  10. Dell – 2.4
  11. Others – 13.2

Source: Top500.org
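As a quick sanity check on the list above, the published shares account for the whole Top500 list (the figures are simply those quoted from Top500.org):

```python
# Vendor system shares as listed above, in per cent.
shares = {
    "HPE": 22.4, "Lenovo": 18.4, "Cray": 11.2, "Sugon": 9.4,
    "IBM": 6.6, "HPE/SGI": 5.6, "Bull": 4.0, "Inspur": 3.6,
    "Huawei": 3.2, "Dell": 2.4, "Others": 13.2,
}

# Rounding guards against floating-point drift when summing the figures.
total = round(sum(shares.values()), 1)
print(total)  # → 100.0
```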