Presented by
- Director of AI and Machine Learning Technologies, NXP Semiconductors
- Systems Engineer, NXP Semiconductors
Machine learning is a rapidly growing field, especially as deep learning moves to the edge and more engineers build applications that include some form of vision- or voice-based machine learning. The range of deep learning frameworks and tools available for building and deploying neural network models continues to expand. TensorFlow Lite, an inference engine, is one example that has gained tremendous popularity in recent years. A relative newcomer to this field is Glow, an open source neural network compiler.
This session explores the trade-offs and features of TensorFlow Lite and the Glow NN compiler, with a focus on how to deploy these technologies on MCUs within resource constraints such as memory and power.
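As a point of reference for the inference-engine approach, the sketch below shows roughly how a model is run with TensorFlow Lite for Microcontrollers on an MCU. The model data symbol, arena size, and operator list are placeholders, and the exact API differs between TFLite Micro releases; this is an illustrative outline, not the session's reference code.

```cpp
// Minimal sketch: running a small model with TensorFlow Lite for Microcontrollers.
// g_model_data and kArenaSize are placeholders chosen for illustration.
#include <cstdint>
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];   // .tflite flatbuffer compiled into flash (placeholder)

constexpr int kArenaSize = 32 * 1024;        // working memory for tensors; sized per model (assumption)
static uint8_t tensor_arena[kArenaSize];

int run_inference(const float* input, int input_len) {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the operators the model actually uses to keep code size small.
  tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddFullyConnected();
  resolver.AddRelu();
  resolver.AddSoftmax();

  tflite::MicroInterpreter interpreter(model, resolver, tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  // Copy input data into the model's input tensor.
  TfLiteTensor* in = interpreter.input(0);
  for (int i = 0; i < input_len; ++i) in->data.f[i] = input[i];

  if (interpreter.Invoke() != kTfLiteOk) return -1;

  // Results are available in the output tensor, e.g. interpreter.output(0)->data.f.
  return 0;
}
```

Glow takes a different route: rather than interpreting the model at run time, it compiles the network ahead of time into an object file (a "bundle") that is linked into the firmware, which is one of the trade-offs the session compares against the interpreter-based approach above.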