

We're excited to announce the launch of the industry's leading HBM4E Memory Controller IP. This new solution delivers breakthrough performance with advanced reliability features, enabling designers to address the demanding memory bandwidth requirements of next-generation AI accelerators and graphics processing units (GPUs).
“Given the insatiable bandwidth demands of AI, it's imperative for the memory ecosystem to continue aggressively advancing memory performance,” said Simon Blake-Wilson, SVP and general manager of Silicon IP at Rambus. “As a leading silicon IP provider for AI applications, we are bringing the industry's leading HBM4E Controller IP solution to market as a key enabler of breakthrough performance in next-generation AI processors and accelerators.”
Our HBM4E Controller enables a new generation of HBM memory deployments for cutting-edge AI accelerators, graphics and HPC applications. The HBM4E Controller supports operation at up to 16 Gigabits per second (Gbps) per pin, which across the 2048-bit HBM4 interface delivers an unprecedented throughput of 4.1 Terabytes per second (TB/s) to each memory device. For an AI accelerator with eight attached HBM4E devices, this translates to over 32 TB/s of memory bandwidth for next-generation AI workloads. The Rambus HBM4E Controller IP can be paired with third-party standard or TSV PHY solutions to instantiate a complete HBM4E memory subsystem in a 2.5D or 3D package as part of an AI SoC or custom base die solution.
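For those who like to check the math, the bandwidth figures above can be reproduced with a quick back-of-the-envelope calculation. This sketch assumes the 2048-bit per-device interface defined for HBM4; HBM4E device details beyond the quoted per-pin rate are not stated here, so treat the constants as illustrative.

```python
# Back-of-the-envelope check of the quoted bandwidth figures.
# Assumption: HBM4E keeps the 2048-bit per-device interface of HBM4.

DATA_RATE_GBPS = 16    # per-pin data rate, Gbps (quoted above)
BUS_WIDTH_BITS = 2048  # assumed HBM4-style interface width per device
DEVICES = 8            # attached HBM4E devices in the example accelerator

# Gbps per pin * pins / 8 bits-per-byte / 1000 GB-per-TB = TB/s per device
per_device_tbs = DATA_RATE_GBPS * BUS_WIDTH_BITS / 8 / 1000
total_tbs = per_device_tbs * DEVICES

print(f"{per_device_tbs:.3f} TB/s per device")  # 4.096 TB/s (~4.1 TB/s)
print(f"{total_tbs:.3f} TB/s total")            # 32.768 TB/s (>32 TB/s)
```

The 4.096 TB/s per device rounds to the 4.1 TB/s quoted above, and eight devices give the "over 32 TB/s" aggregate figure.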
Our team will be at Embedded World next week to chat with you about our new HBM4E Controller, as well as our other Silicon IP offerings. Let us know if you'll be there and we can schedule some time to meet!
If you have any questions about our HBM4E Controller or any of our other Interface IP products, please feel free to reply directly to me here.
Thanks!