Hitachi Vantara claims Hitachi iQ is possibly the most complete AI stack
Among storage vendors' claims about how well-suited their products are to AI workloads, Hitachi Vantara has a unique backstory to support its arguments. Namely, that the Japanese array maker is part of a vast manufacturing conglomerate that makes everything from nuclear power stations and high-speed trains to air conditioners and household appliances, and handles its data using Hitachi Vantara products.
Also key to the narrative is that the company offers a converged infrastructure portfolio – Hitachi iQ – that combines Nvidia GPUs and Nvidia AI Enterprise software with Hitachi Vantara's VSP One storage arrays, Hammerspace file storage and data orchestration, Hitachi Vantara server products, plus Cisco networking equipment.
"Our group uses Nvidia's Omniverse digital twin ecosystem, which provides training data for AI that allows for development and extension of robotic capability in manufacturing," said Jason Hardy, CTO for AI at Hitachi Vantara.
Converged infrastructure for AI
Meanwhile, Hitachi Vantara's converged AI product family, Hitachi iQ, is a complete converged infrastructure that can scale from one to 16 Supermicro servers, each with eight Nvidia GPUs for AI processing using Nvidia's HGX configuration.
Then there are a number of Hitachi HA G3 servers that share the (object storage) contents of VSP One array nodes. Some of these servers run the Nvidia AI Enterprise software layer in Kubernetes containers. Others run Hammerspace storage software that enables parallelised access between GPUs and storage.
Finally, Cisco Nexus switches connect the whole thing. As for the role of the VSP One array – the flagship of the Hitachi Vantara array family – it is connected to the Hammerspace servers to provide object storage for the bulk of the data, which those servers then distribute in file mode.
iQ Time Machine: VSP One gives LLMs a memory
"Basing the whole thing on our VSP array offers some advantages," said Hardy. "Among them is our new Hitachi iQ Time Machine functionality, which allows submission to an LLM of earlier versions of documents and data that have since been updated."
Hardy's point here is that in other systems such documents may have been updated, meaning past versions are lost to LLMs that interrogate the dataset. The RAG-like function rests on the retention of historical data in object storage on the VSP One array, and iQ Studio – the chatbot Hitachi Vantara supplies with the Hitachi iQ infrastructure – exposes this via a timeline in the interface.
For example, if a member of the finance team wants to ask the AI about an event, they can hover over the date and potentially see details reported in a document ingested at the time. In this way, customers can query data from different time periods via an LLM.
Data storage is a critical component of AI projects because it must deal successfully with three constraints: the array must communicate very rapidly with the GPUs; for RAG, data needs to be in a format compatible with the Nvidia software modules used to build AI applications; and, finally, storage must help enterprises prepare and test the data they submit to AI.
With Hitachi iQ, which goes well beyond storage functionality alone, Hitachi aims to tackle all three challenges at once.