Technology

NetApp targets E-Series at AI and neoclouds with EF50 and EF80


NetApp has refreshed its E-Series line with two all-flash models – the EF50 and EF80 – aimed at artificial intelligence (AI) training, inferencing and high-performance computing (HPC) workloads.

The launch comes with a claimed performance boost of 2.5x for these E-Series arrays. E-Series has long been the fast option in NetApp's portfolio for applications that require dedicated bandwidth rather than the advanced storage functionality of the Ontap-based FAS and AFF lines.

The new arrays are built to tackle the "data-starving" problem seen in GPU-heavy environments, where storage I/O doesn't keep GPU utilisation at optimal levels. According to NetApp, the EF80 delivers more than 100GBps read throughput and 57GBps write throughput, and targets checkpoint writes during generative AI (GenAI) training, for example.

The 2.5x performance improvement over the previous generation is a significant jump from the existing EF models. In terms of density, NetApp packs 1.5 petabytes of storage into a 2U chassis, a move designed to curb the ever-growing power and cooling footprint in modern datacentres.

Targeting neoclouds

Sandeep Singh, senior vice-president and general manager for storage at NetApp, said: "We're delivering proven and affordable high performance for the most performance-intensive and demanding workloads.

"That includes AI use cases inclusive of AI training, AI inferencing, HPC workloads and transactional database workloads, not just for enterprises, but also for neoclouds, sovereign AI clouds and AI-powered manufacturing use cases."

Singh said the E-Series is intended to serve as the high-speed "scratch space" in a tiered architecture, often paired with parallel file systems such as Lustre or BeeGFS. This allows the E-Series to act as a high-performance engine at the front end, while larger, more persistent data stores sit behind it.

E-Series heritage

The E-Series has occupied a singular space within NetApp. Its DNA doesn't come from the company's well-known WAFL-based filers, but from the 2011 acquisition of Engenio (from LSI). This gave NetApp a much-needed block-storage play for workloads where the bells and whistles of Ontap were actually a hindrance.

E-Series has been a quiet workhorse, often found working behind offerings from IBM, Dell and Teradata. While NetApp's primary AFF/FAS lines handle unified storage with heavy data reduction, the E-Series has remained the go-to for dedicated, high-duty-cycle applications. Its SANtricity operating system favours raw performance and simplicity.

Powering the rack

As the focus in the datacentre shifts from mere capacity to power draw, E-Series density is a key defence. Singh noted that storage draw is a "small fraction" of what a rack of GPUs pulls. By feeding GPUs more efficiently and reducing idle time, the net effect is a more streamlined, if still power-hungry, AI infrastructure.

Singh said: "When you put it in the context of GPUs, these arrays enable the GPUs to not be sitting idle and starving. And to that part of it, if you can optimise it, you're reducing wasted power and boosting utilisation of those GPUs. That also goes into the equation for customers."

This launch puts NetApp into tighter competition with the likes of Pure Storage and its FlashArray//XL, as well as Dell's Project Lightning. While Pure focuses on a unified architecture, NetApp's strategy remains "horses for courses", using E-Series for raw throughput while keeping Ontap for general-purpose enterprise data management.