DEEPX Demonstration of Its Product Portfolio for Empowering Edge AI Solutions

Tim Park, director of strategic marketing for DEEPX, demonstrates the company’s latest edge AI and vision technologies and products at the 2023 Embedded Vision Summit. Specifically, Park demonstrates DEEPX’s comprehensive product portfolio. With a range of edge AI chips targeting various applications, from low power to high performance, DEEPX offers compelling solutions with a strong […]

DEEPX Demonstration of Empowering Edge AI Technology with Flexibility, Accuracy and Power Efficiency

Jay Kim, EVP of Technology for DEEPX, demonstrates the company’s latest edge AI and vision technologies and products at the 2023 Embedded Vision Summit. Specifically, Kim demonstrates the key features embedded in DEEPX’s edge AI chip technology. Kim outlines its improved flexibility, accuracy, and power/performance efficiency, attributes that enable businesses to access vital data for

DEEPX Demonstration of Simplifying Software Development with DEEPX’s Two-step SDK

Jay Kim, EVP of Technology for DEEPX, demonstrates the company’s latest edge AI and vision technologies and products at the 2023 Embedded Vision Summit. Specifically, Kim demonstrates the simplicity of using DEEPX’s software development kit (SDK). Kim shows how to choose a target application and select an AI software framework in just two easy steps.

“How Transformers Are Changing the Nature of Deep Learning Models,” a Presentation from Synopsys

Tom Michiels, System Architect for ARC Processors at Synopsys, presents the “How Transformers Are Changing the Nature of Deep Learning Models” tutorial at the May 2023 Embedded Vision Summit. The neural network models used in embedded real-time applications are evolving quickly. Transformer networks are a deep learning approach that has become dominant for natural language
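To make the idea concrete, the scaled dot-product self-attention at the heart of transformer networks can be sketched in a few lines of NumPy. This is an illustrative sketch, not material from the talk; the matrix shapes, names, and random inputs are arbitrary:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention: every token attends to every
    # other token, weighted by query/key similarity.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # (tokens, tokens) similarity
    weights = softmax(scores, axis=-1)   # rows sum to 1
    return weights @ V                   # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))              # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)      # same shape as X: (4, 8)
```

The all-pairs `scores` matrix is also why transformer cost grows quadratically with sequence length, a key concern for embedded deployment.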

“Making GANs Much Better, or If at First You Don’t Succeed, Try, Try a GAN,” a Presentation from Perceive

Steve Teig, CEO of Perceive, presents the “Making GANs Much Better, or If at First You Don’t Succeed, Try, Try a GAN” tutorial at the May 2023 Embedded Vision Summit. Generative adversarial networks, or GANs, are widely used to create amazing “fake” images and realistic, synthetic training data. And yet, despite their name, mainstream GANs
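For reference, the standard adversarial objective that a GAN's discriminator and (non-saturating) generator optimize can be written out directly. This is a generic sketch of the textbook losses, not code from the presentation; the sample probabilities are invented for illustration:

```python
import numpy as np

def bce_gan_losses(d_real, d_fake):
    # Standard GAN losses given discriminator outputs as probabilities:
    #   d_real = D(x) on real samples, d_fake = D(G(z)) on generated ones.
    # The discriminator maximizes log D(x) + log(1 - D(G(z)));
    # the generator uses the common non-saturating form, maximizing log D(G(z)).
    eps = 1e-12  # guard against log(0)
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss

d_real = np.array([0.9, 0.8])   # discriminator is fairly sure these are real
d_fake = np.array([0.2, 0.1])   # ...and fairly sure these are fake
d_loss, g_loss = bce_gan_losses(d_real, d_fake)
```

When the discriminator is winning (as in the sample values above), the generator loss is large, which is the pressure that drives the generator to improve.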

“Selecting Image Sensors for Embedded Vision Applications: Three Case Studies,” a Presentation from Avnet

Monica Houston, Technical Solutions Manager at Avnet, presents the “Selecting Image Sensors for Embedded Vision Applications: Three Case Studies” tutorial at the May 2023 Embedded Vision Summit. Selecting the appropriate type of image sensor is essential for reliable and accurate performance of vision applications. In this talk, Houston explores some of the critical factors to
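One such factor is resolution. A common Nyquist-style rule of thumb is to cover the smallest feature of interest with at least two pixels; this back-of-the-envelope sketch is illustrative and not taken from the talk (the function name and example numbers are invented):

```python
def min_sensor_pixels(fov_width_m, smallest_feature_m, pixels_per_feature=2):
    # Nyquist-style rule of thumb: the smallest feature must span at
    # least ~2 pixels to remain resolvable across the field of view.
    return pixels_per_feature * fov_width_m / smallest_feature_m

# Example: a 2 m wide scene in which 1 mm defects must be resolved
# requires roughly 4000 horizontal pixels.
pixels = min_sensor_pixels(2.0, 0.001)
```

Real selections also weigh pixel size, dynamic range, shutter type, and interface bandwidth, which is what makes case studies like these valuable.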

“Can AI Solve the Low Light and HDR Challenge?,” a Presentation from Visionary.ai

Oren Debbi, CEO and Co-founder of Visionary.ai, presents the “Can AI Solve the Low Light and HDR Challenge?” tutorial at the May 2023 Embedded Vision Summit. The phrase “garbage in, garbage out” is applicable to machine and human vision. If we can improve the quality of image data at the source by removing noise, this
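As a toy illustration of removing noise at the source, averaging several frames of a static scene reduces zero-mean sensor noise by roughly a factor of the square root of the frame count. This generic sketch is unrelated to Visionary.ai's actual method; the names and values are illustrative:

```python
import numpy as np

def temporal_average_denoise(frames):
    # Averaging N frames of a static scene attenuates independent,
    # zero-mean noise by ~sqrt(N) while preserving the signal.
    return np.mean(np.stack(frames, axis=0), axis=0)

rng = np.random.default_rng(1)
clean = np.full((16, 16), 0.5)                     # synthetic "true" scene
frames = [clean + rng.normal(scale=0.1, size=clean.shape) for _ in range(16)]

denoised = temporal_average_denoise(frames)
noise_before = np.std(frames[0] - clean)           # ~0.1
noise_after = np.std(denoised - clean)             # ~0.025 (0.1 / sqrt(16))
```

Simple temporal averaging blurs moving content, which is why practical low-light pipelines rely on more sophisticated (often learned) denoisers.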

“Modernizing the Development of AI-based IoT Devices with Wedge,” a Presentation from Midokura, a Sony Group Company

Dan Mihai Dumitriu, Chief Technology Officer of Midokura, a Sony Group Company, presents the “Modernizing the Development of AI-based IoT Devices with Wedge” tutorial at the May 2023 Embedded Vision Summit. IoT device development has traditionally relied on a monolithic approach, with all firmware developed by a single vendor using a rigid waterfall model, typically

“Using a Neural Processor for Always-sensing Cameras,” a Presentation from Expedera

Sharad Chole, Chief Scientist and Co-founder of Expedera, presents the “Using a Neural Processor for Always-sensing Cameras” tutorial at the May 2023 Embedded Vision Summit. Always-sensing cameras are becoming a common AI-enabled feature of consumer devices, much like the always-listening Siri or Google assistants. They can enable a more natural and seamless user experience, such

“A New, Open-standards-based, Open-source Programming Model for All Accelerators,” a Presentation from Codeplay Software

Charles Macfarlane, Chief Business Officer at Codeplay Software, presents the “A New, Open-standards-based, Open-source Programming Model for All Accelerators” tutorial at the May 2023 Embedded Vision Summit. As demand for AI grows, developers are attempting to squeeze more and more performance from accelerators. Ideally, developers would choose the accelerators best suited to their applications. Unfortunately, today

Contact

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596
Phone: +1 (925) 954-1411