Deep learning is set to radically transform the machine vision landscape, facilitating new applications and disrupting long-established markets. FLIR's product managers, who visit companies across a wide range of industries, report that every company they have recently visited is developing deep learning systems.
It has never been easier to start such a project, but where do you begin? This article presents a straightforward framework for building a deep learning inference system for less than $600.
What is Deep Learning Inference?
Inference uses a neural network trained through deep learning to make predictions on new data. Inference is far better suited than conventional rules-based image analysis to answering complex and subjective questions.
By optimizing networks to run on low-power hardware, inference can be performed 'at the edge', near the data source. This removes the system's dependence on a central server for image analysis, resulting in lower latency, greater reliability, and enhanced security.
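At its core, inference is just the forward pass of a trained network: multiply the input by learned weights, apply activations, and read off a prediction. The following minimal sketch, using only the Python standard library, shows that arithmetic on a tiny fully connected network; the weights here are hand-picked for illustration, not trained values from any FLIR model.

```python
import math

# Minimal sketch of neural-network inference: a forward pass through a
# tiny fully connected network. The weights below are illustrative only;
# a real edge deployment would load weights trained offline, but the
# on-device arithmetic is the same.

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases, activation):
    """One fully connected layer: activation(W . x + b)."""
    return [
        activation(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

def infer(features):
    # Hidden layer: 2 inputs -> 3 units
    hidden = dense(features,
                   weights=[[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]],
                   biases=[0.0, 0.1, -0.1],
                   activation=relu)
    # Output layer: 3 units -> 1 probability-like score
    (score,) = dense(hidden,
                     weights=[[0.6, -0.4, 0.9]],
                     biases=[0.05],
                     activation=sigmoid)
    return score

score = infer([0.7, 0.2])
print(f"inference score: {score:.3f}")
```

Because this computation involves no training, it is cheap enough to run on low-power hardware near the sensor, which is precisely what makes edge inference practical.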
1. Selecting the Hardware
The aim of this guide is to construct a dependable, high-quality system for deployment in the field. While this guide is limited in scope, combining traditional computer vision techniques with deep learning inference can deliver both high accuracy and computational efficiency by leveraging the strengths of each approach.
The Aaeon UP Squared-Celeron-4GB-32GB single-board computer is equipped with the memory and CPU power necessary for this approach. Its x64 Intel CPU runs the same software as conventional desktop PCs, streamlining development compared with ARM-based single-board computers (SBCs).
The code that performs deep learning inference makes heavy use of branching logic; dedicated hardware can vastly accelerate its execution.
The Intel® Movidius™ Myriad™ 2 Vision Processing Unit (VPU) is an extremely powerful and efficient inference accelerator, and it has been incorporated into FLIR's latest inference camera, the Firefly DL.
[Figure: System components. USB3 Vision deep learning camera (Firefly DL), single-board computer, 3 m USB 3 cable; software: Ubuntu 16.04/18.04, TensorFlow, Intel NCSDK, FLIR Spinnaker SDK. Source: FLIR Systems]
2. Software Requirements
There are a number of free tools available for building, training, and deploying deep learning inference models. This project makes use of a wide range of free and open-source software.
Installation instructions for each software package are available on its respective website. This guide assumes you are familiar with the fundamentals of the Linux console.
Figure 1. Deep learning inference workflow and the associated tools for each step: train a network, convert it to Movidius format, and run inference on the Firefly DL camera. Image Credit: FLIR Systems
3. Detailed Guide
‘Getting Started with Firefly Deep Learning on Linux’ offers an introduction to retraining a neural network, converting the resulting file into a Firefly-compatible format, and displaying the results using SpinView. Users are given a step-by-step process for training and converting inference networks themselves using the terminal.
‘Neural Networks Supported by the Firefly Deep Learning’ lists the neural networks that have been tested and confirmed to work on the Firefly-DL.
‘Tips on creating Training Data for Deep Learning Neural Networks’ covers how to build an effective deep learning neural network by producing high-quality training data for a specific application.
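One routine step when preparing training data is holding out a validation set, so the network's accuracy can be measured on images it has never seen. The sketch below shows one way to do this with the standard library; the function name, split fraction, and filenames are illustrative, not taken from the guide above.

```python
import random

def split_dataset(filenames, val_fraction=0.2, seed=42):
    """Shuffle a list of image filenames and split it into
    (training, validation) lists, holding out val_fraction for validation."""
    rng = random.Random(seed)      # fixed seed for a reproducible split
    shuffled = list(filenames)
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]

# Illustrative filenames only
images = [f"img_{i:03d}.jpg" for i in range(100)]
train, val = split_dataset(images)
print(len(train), len(val))  # 80 20
```

Shuffling before splitting matters: images captured in sequence tend to be correlated, and an unshuffled split can leave whole object classes or lighting conditions out of the training set.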
‘Troubleshooting neural network graph conversion issues’ offers helpful tips for resolving issues that can arise when converting inference network files to a Firefly-compatible format.
This information has been sourced, reviewed and adapted from materials provided by FLIR Systems.
For more information on this source, please visit FLIR Systems.