Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them
SAN DIEGO, May 19, 2022 — GigaIO, provider of an open rack-scale computing platform for advanced scale workflows, today announced that GigaIO FabreX for composable infrastructure is now natively supported in NVIDIA Bright Cluster Manager 9.2. The integration, led by NVIDIA in collaboration with GigaIO, ensures customers can build easy-to-manage, platform-independent compute clusters that scale in minutes to handle the most demanding AI and HPC workloads.
This new integration is an example of GigaIO’s strategy to deliver an open platform that allows customers to access the benefits of composable infrastructure via the enterprise-class tools they already use. “With our strategy of native integration into leading software tools such as NVIDIA Bright Cluster Manager, our goal is to be invisible to data center managers so that their users can seamlessly submit jobs and not even need to know about the magic of our software-defined hardware reconfiguring resources on the fly,” said Alan Benjamin, CEO of GigaIO.
“Enterprises building AI and HPC computing infrastructure are seeking solutions that provide performance, flexibility, and efficiency,” said Charlie Boyle, vice president of DGX systems at NVIDIA. “With native support for Bright Cluster Manager 9.2, GigaIO FabreX customers can now compose and manage their compute systems to suit the needs of unique workloads from a single management interface.”
GigaIO’s universal dynamic memory fabric, FabreX, enables an entire server rack to be treated as a single compute resource. Resources normally located inside a server, including accelerators, storage, and even memory, can now be pooled in accelerator or storage enclosures, where they are available to all of the servers in a rack. These resources continue to communicate over a native PCIe memory fabric just as they would if they were plugged into the server motherboard.
NVIDIA Bright Cluster Manager is an enterprise-class software solution that simplifies building and managing HPC clusters from edge to core to cloud, transparently to the customer, by combining provisioning, monitoring, and management capabilities in a single tool. Version 9.2 extends the goals of eliminating complexity and enabling flexibility by adding built-in support for composable infrastructure using GigaIO FabreX, where nodes can now be composed using Bright Cluster Manager Shell or BrightView.
Auto-scaling in Bright Cluster Manager creates a dynamic, multi-purpose infrastructure, and FabreX extends that agility to each hardware element in a rack by enabling the creation of composable GigaIO GigaPods and GigaClusters with cascaded and interlinked switches. PCIe devices can be monitored and health-checked, as well as pooled and assigned to nodes within a cluster using Bright Cluster Manager. Clusters may have several different fabrics defined, and Bright Cluster Manager streamlines the fabric configuration process.
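The composition model described above — a rack-level pool of PCIe devices that can be assigned to, and released from, individual nodes — can be illustrated with a small sketch. This is a purely hypothetical toy model for illustration; the class and method names are invented here and are not GigaIO's FabreX API or NVIDIA Bright Cluster Manager's interface (in practice composition is driven through Bright Cluster Manager Shell or BrightView, as the article notes).

```python
class DevicePool:
    """Toy model of a composable-infrastructure device pool.

    Hypothetical sketch only: names are invented for illustration
    and do not correspond to any real FabreX or Bright Cluster
    Manager interface.
    """

    def __init__(self, devices):
        self.free = set(devices)   # pooled devices not yet assigned
        self.assigned = {}         # device -> node it is composed into

    def compose(self, node, device):
        """Attach a pooled device to a node, as if hot-plugged over the fabric."""
        if device not in self.free:
            raise ValueError(f"{device} is not available in the pool")
        self.free.remove(device)
        self.assigned[device] = node

    def release(self, device):
        """Return a device to the pool so another node can claim it."""
        node = self.assigned.pop(device)
        self.free.add(device)
        return node

    def devices_of(self, node):
        """List the devices currently composed into a node."""
        return sorted(d for d, n in self.assigned.items() if n == node)


# Example: move a GPU between two nodes without touching either chassis.
pool = DevicePool(["gpu0", "gpu1", "nvme0"])
pool.compose("node01", "gpu0")
pool.compose("node01", "nvme0")
pool.release("gpu0")           # gpu0 goes back to the pool...
pool.compose("node02", "gpu0") # ...and is reassigned to another node
```

The point of the sketch is the workflow, not the mechanism: devices change ownership by reassignment in a fabric-level inventory, while the physical hardware never moves.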
Support for FabreX with NVIDIA Bright Cluster Manager allows users to handle more workloads while maximizing resource utilization, minimizing cost, and managing everything from a single state-of-the-art user interface. Cloud-like agility is now easier for on-prem infrastructure, allowing cloud bursting as needed within a single interface.
GigaIO’s dedication to native integration with software tools like Bright Cluster Manager provides a best-in-class experience for customers who can continue to use their favorite tools without needing to alter their software stack. Native integration frees customers from having to rely on plug-ins with limited capabilities, learn new software, manage yet another pane of glass, and pay additional per-node license fees.
This development expands GigaIO’s close collaboration with NVIDIA. GigaIO has been a member of the NVIDIA Partner Network (NPN) since 2020 and was accepted last month into NVIDIA Inception, a program designed to nurture cutting-edge startups. GigaIO shares many common customers with NVIDIA, including the Texas Advanced Computing Center and the San Diego Supercomputer Center.
Dr. Frank Würthwein, Director of the San Diego Supercomputer Center, is a beneficiary of this collaboration. “Our research requires that we aggregate disparate computational elements, such as GPUs, x86 processors, and storage systems into highly usable and reconfigurable systems,” he said. “GigaIO’s FabreX technology combined with NVIDIA Bright Cluster Manager makes it possible to dynamically bring these elements together in a very low-latency, high-performance interconnect while allowing for distinct, non-interfering workflows to co-exist on the same infrastructure.”
Learn more about the native integration of FabreX for composable infrastructure in NVIDIA Bright Cluster Manager 9.2, available now.
Headquartered in Carlsbad, California, GigaIO provides the world’s only open rack-scale computing platform, delivering the elasticity of the cloud at a fraction of the TCO (Total Cost of Ownership). With its universal dynamic memory fabric, FabreX, and its innovative open architecture using industry-standard PCI Express (and soon CXL) technology, GigaIO breaks the constraints of the server box, liberating resources to shorten time to results. Contact [email protected] or visit www.gigaio.com.
© 2022 HPCwire. All Rights Reserved. A Tabor Communications Publication
HPCwire is a registered trademark of Tabor Communications, Inc. Use of this site is governed by our Terms of Use and Privacy Policy.
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.