Meta has built an AI supercomputer it says will be world’s fastest by end of 2022


Meta is the latest technology company to build an “AI supercomputer,” a high-speed computer designed mainly for training machine learning systems. The company says its new AI Research SuperCluster, or RSC, is already among the fastest machines of its kind and, when complete in mid-2022, will be the fastest in the world.

Meta CEO Mark Zuckerberg says the company has constructed “what we consider to be the world’s fastest artificial intelligence supercomputer.” “We’re calling it RSC, for AI Research SuperCluster, and it’ll be finished later this year,” he added.

The announcement underlines how central AI research has become to companies like Meta. Rivals such as Microsoft and Nvidia have already announced their own “AI supercomputers,” which are somewhat distinct from the traditional supercomputers we are used to seeing. RSC will be used to train systems across Meta’s businesses, from the content moderation algorithms that identify hate speech on Facebook and Instagram to augmented reality features destined for the company’s future AR hardware. Meta also says RSC will be used to build experiences for the metaverse, the company’s insistent branding for a network of virtual venues ranging from offices to online arenas.

“RSC will assist Meta’s artificial intelligence researchers in the development of new and better AI models that can learn from trillions of examples, work across hundreds of different languages, seamlessly analyze text, images, and video together, develop new augmented reality tools, and much more,” write Meta engineers Kevin Lee and Shubho Sengupta in a blog post outlining the announcement.


Among other things, Lee and Sengupta write, Meta hopes RSC will help it build entirely new AI systems that can power real-time voice translation for large groups of people who each speak a different language, letting them seamlessly collaborate on a research project or play an augmented reality game together.

Meta’s engineers began work on RSC roughly a year and a half ago, designing the machine’s various systems, including cooling, power, networking, and cabling, entirely from scratch. Phase one of RSC is already up and running and consists of 760 Nvidia DGX A100 systems containing 6,080 connected GPUs (a type of processor particularly well suited to machine learning problems). Meta says RSC is already providing up to a 20-fold improvement in performance on its standard machine vision research tasks compared with its previous infrastructure.
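A quick sanity check on those figures: each Nvidia DGX A100 system houses eight A100 GPUs, so 760 systems account for exactly the 6,080 GPUs Meta cites. A trivial Python sketch of that arithmetic (the eight-GPUs-per-node figure is Nvidia’s DGX A100 spec, not stated in this article):

```python
# Sanity-check Meta's Phase One hardware numbers.
GPUS_PER_DGX_A100 = 8       # an Nvidia DGX A100 system houses 8 A100 GPUs
dgx_systems = 760           # Phase One node count reported by Meta

total_gpus = dgx_systems * GPUS_PER_DGX_A100
print(f"Phase One GPU count: {total_gpus}")   # 6080, matching Meta's figure
```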

Phase two of RSC is due to be completed by the end of 2022. At that point, the system will contain around 16,000 total GPUs and will be able to train AI systems “with more than a trillion parameters on data sets as large as an exabyte.” Raw GPU count is only one measure of a system’s overall capability, but for comparison, Microsoft’s AI supercomputer, built with research lab OpenAI, contains 10,000 GPUs.
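To get a feel for what “more than a trillion parameters” implies, consider the storage cost of the model weights alone. The sketch below is a back-of-the-envelope illustration; the bytes-per-parameter sizes are standard floating-point storage costs, not figures from Meta:

```python
# Back-of-the-envelope: memory needed just to store a trillion-parameter
# model's weights at common floating-point precisions.
params = 1_000_000_000_000            # "more than a trillion parameters"

for fmt, bytes_per_param in (("fp32", 4), ("fp16", 2)):
    terabytes = params * bytes_per_param / 1e12
    print(f"{fmt}: ~{terabytes:.0f} TB of weights")
# ~4 TB at fp32 and ~2 TB at fp16, before optimizer state or activations,
# which is why training at this scale must be sharded across thousands of GPUs.
```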

These figures are impressive, but they raise the question of what exactly an artificial intelligence supercomputer is, and how it compares with what we usually think of as supercomputers: the massive machines used by universities and governments to crunch data in complex realms such as space, nuclear physics, and climate change.


High-performance computers, often known as HPCs, and supercomputers are two strikingly similar kinds of system. In size and appearance, both are closer to datacenters than to individual computers, and both rely on enormous numbers of linked processors to exchange data at blisteringly fast speeds. But as Hyperion Research HPC analyst Bob Sorensen explains to The Verge, there are significant distinctions between the two. “AI-based HPCs exist in a realm that is separate from that of their conventional HPC counterparts,” says Sorensen, and the most significant difference is the emphasis placed on accuracy.

The short explanation is that machine learning tasks require less accuracy than those assigned to traditional supercomputers, so “AI supercomputers” (a recent coinage) can perform more calculations per second on the same hardware. That means when Meta claims to have built the “world’s fastest AI supercomputer,” it is not necessarily a direct comparison with the supercomputers you regularly see in the news (rankings of which are compiled by the independent Top500.org and published twice a year).
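Nvidia’s published peak figures for the A100 chips in RSC make the point concrete: a single A100 is rated at roughly 9.7 teraFLOPs for standard 64-bit math but roughly 312 teraFLOPs for 16-bit tensor operations. The sketch below uses those approximate spec-sheet values, which come from Nvidia’s published specifications rather than from this article:

```python
# Approximate published per-GPU peak rates for an Nvidia A100 (spec-sheet
# values, not measurements): lower precision yields far more operations
# per second on the same silicon.
a100_peak_tflops = {
    "fp64 (traditional HPC)": 9.7,
    "fp32": 19.5,
    "fp16 tensor (AI training)": 312.0,
}

baseline = a100_peak_tflops["fp64 (traditional HPC)"]
for fmt, tflops in a100_peak_tflops.items():
    print(f"{fmt:>28}: {tflops:6.1f} TFLOPs ({tflops / baseline:4.1f}x fp64)")
```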

Both supercomputers and AI supercomputers perform calculations using floating-point arithmetic, a mathematical shorthand that is extremely useful for calculations involving very large and very small numbers (the “floating point” in question is the decimal point, which “floats” between significant figures). The degree of precision in floating-point calculations can be adjusted through different formats, and most conventional supercomputers rate their speed in 64-bit floating-point operations per second, or FLOPs. Because AI calculations require less accuracy, AI supercomputers are often measured in 32-bit or even 16-bit FLOPs instead. That is why comparing the two types of system is not always an apples-to-apples exercise, though this caveat does not diminish the extraordinary power and capability of AI supercomputers.
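The accuracy trade-off is easy to demonstrate directly. This NumPy sketch (the input value is arbitrary) shows how much detail each floating-point format actually retains:

```python
import numpy as np

x = 3.14159265358979          # more digits than a 16-bit float can hold

for dtype in (np.float64, np.float32, np.float16):
    stored = float(dtype(x))  # the exact value each format actually stores
    print(f"{np.dtype(dtype).name}: {stored:.12f}")
# float64: 3.141592653590
# float32: 3.141592741013
# float16: 3.140625000000   <- visibly coarser, yet adequate for much ML work
```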


Sorensen adds one further word of caution. As is often the case with the “speeds and feeds” approach to evaluating technology, much-heralded top speeds are not always representative of real-world performance. “High-performance computing (HPC) suppliers often give performance statistics that represent the absolute highest speed at which their system can operate. We refer to this as the theoretical peak performance,” Sorensen explains. “However, the true test of a successful system design is one that can perform well on the tasks for which it was created. Indeed, when executing real-world applications, it is not unusual for certain high-performance computing systems to reach less than 25% of their so-called peak performance.”
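Sorensen’s caveat is straightforward arithmetic. Applying his sub-25% observation to a hypothetical peak rating (the numbers below are placeholders, not measurements of RSC or any real system):

```python
# Theoretical peak vs. plausible delivered performance, per Sorensen.
advertised_peak_pflops = 100.0   # hypothetical vendor "peak" figure
real_world_efficiency = 0.25     # "less than 25%" is not unusual

delivered = advertised_peak_pflops * real_world_efficiency
print(f"Advertised peak: {advertised_peak_pflops:.0f} PFLOPs")
print(f"Plausible real-world throughput: under {delivered:.0f} PFLOPs")
```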

In other words, the genuine usefulness of a supercomputer lies in the work it does rather than in its theoretical peak performance. For Meta, that work means building moderation systems at a time when public confidence in the company is at an all-time low, and creating a new computing platform it can dominate in the face of competition from the likes of Google, Microsoft, and Apple. An AI supercomputer gives the corporation raw computing power, but Meta is still responsible for coming up with the winning strategy on its own.
