Pure Storage’s Evergreen One for AI brings Exa flash and GPU-based service-level agreements
Pure Storage has introduced Evergreen One for AI, a performance-backed consumption model for artificial intelligence (AI) that extends to use of its FlashBlade//Exa high-performance storage. Meanwhile, the company has announced the beta launch of its Datastream automated AI pipeline appliance.
Evergreen One for AI differs from existing flexible capacity offerings in the Pure Storage range by providing use of FlashBlade//Exa and service-level agreements (SLAs) based on graphics processing unit (GPU) count. The aim is to ensure the storage environment provides the throughput to keep GPU resources fully utilised.
FlashBlade//Exa, Pure Storage’s highest-performance platform, was previously excluded from the Evergreen One consumption model.
Exa targets AI and high-performance computing (HPC) workloads that demand extremely high throughput, likely among customers that sit between large enterprise users of AI and the hyperscalers.
At its launch, FlashBlade//Exa introduced an architecture to the Pure product line in which metadata and bulk storage are disaggregated, with different hardware and protocols in use.
Kaycee Lai, vice-president for AI at Pure Storage, said Evergreen One for AI shifted the financial and operational risk away from the customer. “Specifically, we have an offering which we call Evergreen One for AI,” he said. “The big difference for AI is that we set the performance level of the offering based on the number of GPUs that you have … it’s an SLA-backed performance guarantee.”
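Sizing storage to a GPU count, as the SLA model implies, is at heart simple multiplication. The sketch below is illustrative only: the per-GPU throughput figure is an assumed placeholder, not a number from Pure Storage’s SLA.

```python
# Back-of-envelope sizing: aggregate storage throughput needed so that a
# given number of GPUs stays fully utilised. PER_GPU_READ_GBPS is an
# assumed illustrative value, not a figure published by Pure Storage.

PER_GPU_READ_GBPS = 1.0  # assumed sustained read demand per GPU, in GB/s


def required_throughput_gbps(gpu_count: int,
                             per_gpu_gbps: float = PER_GPU_READ_GBPS) -> float:
    """Aggregate read throughput (GB/s) needed to keep gpu_count GPUs busy."""
    return gpu_count * per_gpu_gbps


# e.g. a 64-GPU training cluster under the assumed per-GPU rate:
print(required_throughput_gbps(64))  # 64.0 GB/s aggregate
```

In a GPU-count-based SLA, the vendor commits to delivering whatever that aggregate figure works out to, rather than leaving the customer to size the array themselves.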
Evergreen One and Flex are Pure Storage’s pay-as-you-go procurement models, while Evergreen Forever involves upfront purchase with built-in upgrades.
Automating the RAG pipeline
Pure Storage also announced the beta availability of Datastream. First previewed in late 2024, Datastream is a “single SKU” appliance that integrates Nvidia GPUs with Pure Storage hardware. It is designed to address the “data readiness” problem, said Lai, referring to the oft-cited statistic that data teams spend 80% of their time preparing unstructured data for use.
The appliance automates the retrieval-augmented generation (RAG) pipeline, which includes ingest, curation and vectorisation of data. By providing an integrated hardware and software stack, Pure Storage aims to offer an “easy button” for enterprises building chatbots or autonomous agents, he said.
The software program functionality behind Datastream was constructed in-house, although it could possibly hook up with third-party knowledge sources together with Dell, HP and NetApp environments, in addition to cloud-resident knowledge. This flexibility permits the equipment to behave as a central hub for AI readiness no matter the place the information lives.
“Today, people run RAG pipelines … they do the chunking, the embedding, the indexing to make sure that the data is going to be accurate and relevant so that chatbot agents can consume them in a specific format,” said Lai. “That takes up about 80% of most data teams’ time because there’s no standard tool.”
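The chunk-embed-index steps Lai describes can be sketched in a few lines. This is a toy illustration, not Datastream’s implementation, which is not public: the “embedding” here is a deterministic hash-derived vector standing in for a real embedding model.

```python
# Minimal sketch of RAG data readiness: chunk documents, embed each chunk,
# and build an index a retrieval layer could query. The embed() function is
# a hash-based stand-in for a real embedding model.

import hashlib
from typing import Dict, List


def chunk(text: str, size: int = 200) -> List[str]:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def embed(chunk_text: str, dims: int = 8) -> List[float]:
    """Toy deterministic 'embedding' derived from a SHA-256 digest."""
    digest = hashlib.sha256(chunk_text.encode()).digest()
    return [b / 255 for b in digest[:dims]]


def build_index(docs: Dict[str, str]) -> List[dict]:
    """Ingest -> chunk -> embed -> index records for later retrieval."""
    index = []
    for doc_id, text in docs.items():
        for n, c in enumerate(chunk(text)):
            index.append({"doc": doc_id, "chunk": n,
                          "text": c, "vector": embed(c)})
    return index


index = build_index({"faq": "Evergreen One for AI ties storage SLAs to GPU count."})
print(len(index))  # one chunk for this short document
```

An appliance like Datastream automates this loop continuously over live data sources; the point of the sketch is simply to show which steps eat the “80% of data teams’ time” Lai refers to.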
Underpinning performance
To support these launches, Pure Storage published new benchmarks intended to validate its hardware under AI stress. In MLPerf 2.0 testing, the company claimed the top spot for checkpointing – a critical function for saving the state of a model during long training runs – reporting results up to two times better than rivals such as Huawei and Vast.
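Checkpointing is storage-intensive because the whole model state must be flushed to disk at intervals without stalling training. A minimal sketch of the pattern, assuming a plain dict of state rather than real framework tensors:

```python
# Minimal checkpointing sketch: persist training state so a long run can
# resume after a failure. Real frameworks stream multi-gigabyte tensors,
# which is what stresses the storage layer in benchmarks like MLPerf.

import os
import pickle
import tempfile


def save_checkpoint(state: dict, path: str) -> None:
    """Write state to a temp file, then rename, so a crash mid-write
    never leaves a corrupt checkpoint behind."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)  # atomic on POSIX and Windows


def load_checkpoint(path: str) -> dict:
    """Reload the last saved training state."""
    with open(path, "rb") as f:
        return pickle.load(f)


# Simulate resuming a training run from step 100
ckpt = os.path.join(tempfile.gettempdir(), "model.ckpt")
save_checkpoint({"step": 100, "weights": [0.1, 0.2]}, ckpt)
print(load_checkpoint(ckpt)["step"])  # 100
```

The benchmark measures how fast storage can absorb these writes; faster checkpointing means GPUs spend less time idle between training steps.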
The company also cited Spec Storage AI image benchmarks, where it outperformed NetApp’s AFX platform by roughly 20%, said Lai.

