 
 
  
 
 
 We're excited to announce the launch of the industry's first HBM4 Memory Controller IP. Our new solution supports the advanced feature set of HBM4 devices and will enable designers to address the demanding memory bandwidth requirements of next-generation AI accelerators and graphics processing units (GPUs).
 
  "With Large Language Models (LLMs) now exceeding a trillion parameters  and continuing to grow, overcoming bottlenecks in memory bandwidth and  capacity is mission critical to meeting the real-time performance  requirements of AI training and inference," said Neeraj Paliwal, SVP and  general manager of Silicon IP, at Rambus. "As the leading silicon IP provider  for AI 2.0, we are bringing the industry’s first HBM4 Controller IP solution to  the market to help our customers unlock breakthrough performance in their  state-of-the-art processors and accelerators."
 
 Our HBM4 Controller enables a new generation of HBM memory deployments for cutting-edge AI accelerator, graphics and HPC applications. The HBM4 Controller supports the JEDEC-specified data rate of 6.4 Gigabits per second (Gbps). The Controller is further capable of operation up to 10 Gbps, providing a throughput of 2.56 Terabytes per second (TB/s) to each memory device. Our HBM4 Controller IP can be paired with third-party or customer PHY solutions to instantiate a complete HBM4 memory subsystem.
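
 For readers who want to see where the per-device throughput figure comes from, here is a minimal sketch of the arithmetic in Python. It assumes a 2048-bit per-stack HBM4 interface (based on the publicly discussed JEDEC HBM4 direction, not a figure stated in this announcement); the function name and parameters are illustrative only.

```python
# Sketch: peak per-device HBM throughput from data rate and interface width.
# The 2048-bit interface width is an assumption about HBM4, not a Rambus-stated figure.

def hbm_throughput_tbps(data_rate_gbps: float, interface_width_bits: int = 2048) -> float:
    """Peak per-device throughput in terabytes per second (TB/s)."""
    bits_per_second = data_rate_gbps * 1e9 * interface_width_bits
    return bits_per_second / 8 / 1e12  # bits -> bytes -> terabytes

print(hbm_throughput_tbps(6.4))   # ~1.64 TB/s at the JEDEC 6.4 Gbps data rate
print(hbm_throughput_tbps(10.0))  # ~2.56 TB/s at the 10 Gbps rate cited above
```

 Under these assumptions, the 10 Gbps operation supported by the Controller works out to the 2.56 TB/s per memory device quoted above.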
 
 Have questions about our solutions? Join us at the AI Hardware & Edge AI Summit this week! We'll be in booth #31 to chat about our interface IP offerings, including the newly announced HBM4 Controller. Still need to register? Use the code RAMBUSGOLD15 and save 15% on your ticket! Register with the link here.
 
 
 
  
 
 
 
 