
FLEX LOGIX ANNOUNCES EFLX EFPGA AND NNMAX AI INFERENCE IP MODEL SUPPORT FOR THE VELOCE STRATO EMULATION PLATFORM FROM MENTOR


Veloce Strato models used to verify InferX X1 AI Inference Accelerator designs now in fabrication

MOUNTAIN VIEW, Calif. – July 21, 2020 – Flex Logix® Technologies, Inc., a
leading supplier of embedded FPGA (eFPGA) and AI Inference IP, architecture and
software, today announced support for EFLX® eFPGA IP and nnMAX™ AI Inference
IP emulation models for use on Mentor's Veloce® Strato™ emulation platform. These
models are designed to enable customers to significantly lower development costs,
speed time to market, and lower overall risk by enabling real-time testing well ahead
of silicon availability.

“Customers want first-time success for their SoCs because anything less leads to
unnecessary development cost and delays in product availability,” said Geoff Tate,
CEO and co-founder of Flex Logix. “Compared to software simulation, emulation
models, such as those for Veloce Strato, allow SoC architectural exploration and final
verification to be done more rapidly and thoroughly, while providing software
developers a platform to debug their software well in advance of silicon.”

The Flex Logix models developed for use with the Veloce emulation platform have
been proven in the verification of Flex Logix's own SoC, InferX™ X1, which is now
in fabrication. InferX X1 has a 2x2 nnMAX array and a 1x1 EFLX eFPGA.

“Next-generation SoCs for edge computing, such as Flex Logix's InferX X1 edge
inference co-processor, require a high-degree of hardware and software
programmability and power efficiency tailored to targeted workloads,” said Ravi
Subramanian, senior vice president, IC Verification, Mentor, a Siemens business. 
“By using the Veloce emulation platform, our mutual customers can confidently
leverage Flex Logix's AI inferencing and eFPGA models designed for Veloce to
rapidly verify and tape out their next-generation SoCs. We are delighted that
Flex Logix has proven out these capabilities with its own InferX SoC
built to efficiently handle highly intensive neural-network inferencing workloads.”



About Flex Logix
Flex Logix provides solutions for making flexible chips and accelerating neural
network inferencing. Its eFPGA platform enables chips to be flexible to handle
changing protocols, standards, algorithms and customer needs and to implement
reconfigurable accelerators that speed key workloads 30-100x compared to
processors. Flex Logix's second product line, nnMAX, utilizes its eFPGA and
interconnect technology to provide modular, scalable neural inferencing from 1 to
>100 TOPS, with higher throughput/$ and throughput/watt than other
architectures. Flex Logix is headquartered in Mountain View, California.
