X1M Boards Deliver High Performance and Ultra Low Power in a Tiny M.2 Form Factor
MOUNTAIN VIEW, Calif., March 29, 2022 /PRNewswire/ — Flex Logix® Technologies, Inc., supplier of the most-efficient AI edge inference accelerator and the leading supplier of eFPGA IP, today announced production availability of its InferX™ X1M boards. At roughly the size of a stick of gum, the new InferX X1M boards pack high-performance inference capabilities into a low-power M.2 form factor for space- and power-constrained applications such as robotic vision, industrial, security, and retail analytics.
“With the general availability of our X1M board, customers designing edge servers and industrial vision systems can now incorporate superior AI inference capabilities with high accuracy, high throughput and low power on complex models,” said Dana McCarty, Vice President of Sales and Marketing for Flex Logix’s Inference Products. “By incorporating an X1M board, customers can not only design exciting new AI capabilities into their systems, but also reach production ramp faster than they would by designing their own custom card.”
About the InferX X1M Board
Featuring Flex Logix’s InferX X1 edge inference accelerator, the InferX X1M board offers the most efficient AI inference acceleration for advanced edge AI workloads such as YOLOv5. The boards are optimized for large models and megapixel images at batch=1. This provides customers with the high-performance, low-power object detection and other high-resolution image processing capabilities needed for edge servers and industrial vision systems.
The InferX X1M M.2 board fits within the low power requirements of the M.2 specification. To help its customers get to market quickly, Flex Logix also provides a suite of software tools to accompany the boards. This includes tools to port trained ONNX models to run on the X1M and a simple runtime framework that supports inference processing on both Linux and Windows.
Also included in the software tools is an InferX X1 driver with external APIs that let applications easily configure and deploy models, as well as internal APIs that handle the low-level functions used to control and monitor the X1M board.
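For readers who want a concrete picture of that flow, here is a minimal Python sketch of how porting and deploying a model through such a stack might look. Everything in it is a placeholder assumption: the names port_onnx_model, X1MModel, and infer are invented for illustration and are not Flex Logix's published porting tools or driver API.

```python
# Hypothetical sketch only: it mirrors the flow described above (port a trained
# ONNX model, then run batch=1 inference through a runtime/driver API), but all
# names are illustrative placeholders, not Flex Logix's actual software.
import numpy as np


class X1MModel:
    """Stand-in for a model handle a vendor porting tool might return."""

    def __init__(self, compiled_path: str):
        self.compiled_path = compiled_path

    def infer(self, image: np.ndarray) -> np.ndarray:
        # A real runtime would hand this batch=1 tensor to the X1M board through
        # the driver; here we just return a dummy output tensor for illustration.
        assert image.shape[0] == 1, "the boards are optimized for batch=1"
        return np.zeros((1, 100, 6), dtype=np.float32)  # e.g. 100 detections x (box, score, class)


def port_onnx_model(onnx_path: str) -> X1MModel:
    """Placeholder for 'port a trained ONNX model to run on the X1M'."""
    # A real porting tool would compile the ONNX graph for the accelerator.
    return X1MModel(onnx_path.replace(".onnx", ".x1m"))


if __name__ == "__main__":
    model = port_onnx_model("yolov5s.onnx")                      # trained ONNX model (file name is illustrative)
    frame = np.random.rand(1, 3, 1280, 1280).astype(np.float32)  # one high-resolution image, batch=1
    detections = model.infer(frame)
    print("output shape:", detections.shape)
```

In an actual deployment, the external APIs described above would take the place of the placeholder calls shown here.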
About Flex Logix
Flex Logix is a reconfigurable computing company providing AI inference and eFPGA solutions based on software, systems and silicon. Its InferX X1 is the industry’s most-efficient AI edge inference accelerator that will bring AI to the masses in high-volume applications by providing much higher inference throughput per dollar and per watt. Flex Logix’s eFPGA platform enables chips to flexibly handle changing protocols, standards, algorithms, and customer needs and to implement reconfigurable accelerators that speed key workloads 30-100x compared to processors. Flex Logix is headquartered in Mountain View, California and also has offices in Austin, Texas. For more information, visit https://flex-logix.com.