Synopsys

Synopsys Demonstration of Deep Learning Inference and Sparse Optical Flow

Gordon Cooper, product marketing manager at Synopsys, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Cooper demonstrates combining deep learning with traditional computer vision by using the DesignWare EV6x Embedded Vision Processor's vector DSP and CNN engine. The tightly integrated CNN engine executes deep learning inference (using TinyYOLO, but any graph […]

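To make the combination concrete, the sketch below pairs a detector with sparse (Lucas-Kanade) optical flow in Python/OpenCV: the detector seeds feature points inside its bounding boxes, and optical flow then tracks those points from frame to frame so the detector only needs to run periodically. This illustrates the general technique only and is not the EV6x demo code; the detect_objects() stub (standing in for a network such as TinyYOLO) and the input.mp4 path are assumed placeholders.

    import cv2
    import numpy as np

    def detect_objects(frame_bgr):
        """Placeholder for a CNN detector (e.g. TinyYOLO); returns (x, y, w, h) boxes."""
        return []

    lk_params = dict(winSize=(21, 21), maxLevel=3,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

    cap = cv2.VideoCapture("input.mp4")                   # assumed input clip
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    # Seed sparse feature points inside each detected bounding box.
    points = []
    for (x, y, w, h) in detect_objects(prev):
        corners = cv2.goodFeaturesToTrack(prev_gray[y:y + h, x:x + w],
                                          maxCorners=50, qualityLevel=0.01, minDistance=5)
        if corners is not None:
            points.append(corners + np.array([[x, y]], dtype=np.float32))
    p0 = np.concatenate(points) if points else None

    while p0 is not None and len(p0) > 0:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Lucas-Kanade optical flow tracks the seeded points frame to frame,
        # so the heavier CNN only needs to re-run occasionally to re-seed them.
        p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None, **lk_params)
        p0 = p1[status.flatten() == 1].reshape(-1, 1, 2)
        prev_gray = gray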

Synopsys Demonstration of Android Neural Network Acceleration with EV6x

Gordon Cooper, product marketing manager, and Mischa Jonker, software engineer, both of Synopsys, deliver a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Cooper and Jonker demonstrate how the DesignWare EV6x Embedded Vision Processor with deep learning can offload application processor tasks to increase performance and reduce power consumption, using an Android Neural […]

Computer Vision for Augmented Reality in Embedded Designs

Augmented reality (AR) and related technologies and products are becoming increasingly popular and prevalent, led by their adoption in smartphones, tablets and other mobile computing and communications devices. While developers of more deeply embedded platforms are also motivated to incorporate AR capabilities in their products, the comparative scarcity of processing, memory, storage, and networking resources […]

“Designing Smarter, Safer Cars with Embedded Vision Using EV Processor Cores,” a Presentation from Synopsys

Fergus Casey, R&D Director for ARC Processors at Synopsys, presents the “Designing Smarter, Safer Cars with Embedded Vision Using Synopsys EV Processor Cores” tutorial at the May 2018 Embedded Vision Summit. Consumers, the automotive industry and government regulators are requiring greater levels of automotive functional safety with each new generation of cars. Embedded vision, using […]

“New Deep Learning Techniques for Embedded Systems,” a Presentation from Synopsys

Tom Michiels, System Architect for Embedded Vision at Synopsys, presents the “New Deep Learning Techniques for Embedded Systems” tutorial at the May 2018 Embedded Vision Summit. In the past few years, the application domain of deep learning has rapidly expanded. Constant innovation has improved the accuracy and speed of learning and inference. Many techniques are […]

Implementing Vision with Deep Learning in Resource-constrained Designs

DNNs (deep neural networks) have transformed the field of computer vision, delivering superior results on functions such as recognizing objects, localizing objects within a frame, and determining which pixels belong to which object. Even problems like optical flow and stereo correspondence, which had been solved quite well with conventional techniques, are now finding even better […]

Implementing High-performance Deep Learning Without Breaking Your Power Budget

This article was originally published at Synopsys' website. It is reprinted here with the permission of Synopsys. Examples of applications abound where high-performance, low-power embedded vision processors are used: a mobile phone using face recognition to identify a user, an augmented or mixed reality headset identifying your hands and the layout of your living room […]

The Evolution of Deep Learning for ADAS Applications

This technical article was originally published at Synopsys' website. It is reprinted here with the permission of Synopsys. Embedded vision solutions will be a key enabler for making automobiles fully autonomous. Giving an automobile a set of eyes – in the form of multiple cameras and image sensors – is a first step, but it […]

Software Frameworks and Toolsets for Deep Learning-based Vision Processing

This article provides both background and implementation-detailed information on software frameworks and toolsets for deep learning-based vision processing, an increasingly popular and robust alternative to classical computer vision algorithms. It covers the leading available software framework options, the root reasons for their abundance, and guidelines for selecting an optimal approach among the candidates for a […]

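As a small illustration of what framework-level model code looks like, here is a minimal model definition written in PyTorch, chosen purely as one example among the kinds of frameworks the article compares; the layer sizes, class count, and input resolution are arbitrary assumptions.

    import torch
    import torch.nn as nn

    class TinyClassifier(nn.Module):
        """A deliberately small CNN, shown only to illustrate framework-level model code."""
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, x):
            return self.classifier(torch.flatten(self.features(x), 1))

    model = TinyClassifier()
    scores = model(torch.randn(1, 3, 224, 224))           # one dummy RGB frame
    print(scores.shape)                                    # torch.Size([1, 10])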

“Designing Scalable Embedded Vision SoCs from Day 1,” a Presentation from Synopsys

Pierre Paulin, Director of R&D for Embedded Vision at Synopsys, presents the “Designing Scalable Embedded Vision SoCs from Day 1” tutorial at the May 2017 Embedded Vision Summit. Some of the most critical embedded vision design decisions are made early on and affect the design’s ultimate scalability. Will the processor architecture support the needed vision […]
