Google Coral Edge TPU

Google Coral Edge TPU is sold worldwide through its distributor, Gravitylink!

Google Coral USB Accelerator Installation Guide

2020-07-10 11:09:24 | Google AIY
The Google Coral USB Accelerator is a USB device that provides an Edge TPU as a co-processor for your computer. When connected to a Linux, Mac, or Windows host, it accelerates the inference speed of machine learning models.
All you need to do is download the Edge TPU runtime and the TensorFlow Lite library on the computer the USB Accelerator is connected to, then use the sample application to perform image classification.
System Requirements:
A computer with one of the following operating systems:
· Linux: Debian 6.0 or higher, or any derivative thereof (such as Ubuntu 10.0+), on an x86-64 or ARM64 architecture (Raspberry Pi is supported, but we have only tested the Raspberry Pi 3 Model B+ and Raspberry Pi 4)
· macOS 10.15, with either MacPorts or Homebrew installed
· Windows 10
· A usable USB port (for best performance, use a USB 3.0 port)
· Python 3.5, 3.6, or 3.7
Operating Procedures
1. Install Edge TPU runtime
The Edge TPU runtime is required to communicate with the Edge TPU. Install it on your Linux, Mac, or Windows host by following the instructions below.
1) Linux system
① Add the official Debian package repository to your system.
② Install the Edge TPU runtime.
Then connect the USB Accelerator to the computer using the included USB 3.0 cable. If it was already plugged in, remove it and plug it back in so that the newly installed udev rules take effect.
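For reference, steps ① and ② correspond to the commands published in Google's Coral getting-started guide (the repository URL may change over time, so check coral.ai if the commands fail):

```shell
# ① Add Google's Coral package repository and its signing key
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" \
  | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update

# ② Install the standard (default-frequency) Edge TPU runtime
sudo apt-get install libedgetpu1-std
```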
※ Install at maximum operating frequency (optional)
The command above installs the standard Linux Edge TPU runtime, which runs the device at the default clock frequency. Alternatively, you can install a runtime version that runs at the maximum clock frequency (twice the default). This increases inference speed, but also increases power consumption and makes the USB Accelerator very hot.
If you are not sure whether your application needs the extra performance, use the default operating frequency. Otherwise, install the maximum-frequency runtime as follows:
sudo apt-get install libedgetpu1-max
You cannot have both versions of the runtime installed at the same time, but you can switch between them simply by installing the other runtime package, as shown above.
Note: When operating at the maximum frequency, the metal on the USB Accelerator may become hot enough to cause burns. To avoid injury, keep the device out of reach while it is operating at the maximum frequency, or use the default frequency instead.
2) Mac system
① Download and unzip the Edge TPU runtime
② Install Edge TPU runtime
The installation script will ask whether you want to enable the maximum operating frequency. Running at the maximum operating frequency increases inference speed, but also increases power consumption and makes the USB Accelerator very hot. If you are not sure whether your application needs the extra performance, type "N" to use the default operating frequency.
You can read more about performance settings in the official USB Accelerator data sheet.
Now, use the included USB 3.0 cable to connect the USB Accelerator to the computer. Then continue to install the TensorFlow Lite library.
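The two Mac steps above can be sketched as follows. The download URL below is a placeholder, since the runtime archive's filename changes between releases — get the current link from coral.ai; the unpacked archive's install.sh is what the script prompt refers to:

```shell
# ① Download and unzip the Edge TPU runtime
# (placeholder URL -- substitute the current release link from coral.ai)
curl -LO https://example.com/edgetpu_runtime.zip
unzip edgetpu_runtime.zip

# ② Run the installer; it will prompt about the maximum operating frequency
cd edgetpu_runtime
sudo bash install.sh
```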
3) Windows system
① Download the latest official archive. Unzip the ZIP file, then double-click the install.bat file.
A console window will open and run the installation script, which will ask whether you want to enable the maximum operating frequency. Running at the maximum operating frequency increases inference speed, but also increases power consumption and makes the USB Accelerator very hot. If you are not sure whether your application needs the extra performance, type "N" to use the default operating frequency.
You can read more about performance settings in the Coral USB Accelerator data sheet provided by Google.
Now, use the included USB 3.0 cable to connect the USB Accelerator to the computer.
2. Install the TensorFlow Lite library
There are multiple ways to install the TensorFlow Lite API, but to get started with Python, the easiest option is to install the tflite_runtime library. This library provides the minimal code required to run inference from Python (mainly the Interpreter API), saving considerable disk space.
To install it, follow the TensorFlow Lite Python quick start, then return to this page after running the pip3 install command.
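In recent releases tflite_runtime is published on PyPI, so the install command from the quick start typically reduces to the following (older quick starts instead pointed pip3 at a platform-specific wheel URL):

```shell
python3 -m pip install tflite-runtime
```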
3. Use the TensorFlow Lite API to run a model
You can now run inference on the Edge TPU. The following steps perform image classification using sample code and a sample model.
1) Download the example code from GitHub
2) Download the bird classifier model, label file, and a bird photo
3) Run the image classifier on the bird photo
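Concretely, steps 1)–3) correspond to commands like the following, based on the google-coral/tflite example repository (the exact paths and file names are those used by the upstream example and may change between releases):

```shell
# 1) Get the example code
mkdir coral && cd coral
git clone https://github.com/google-coral/tflite.git
cd tflite/python/examples/classification

# 2) Download the bird-classifier model, labels, and a test photo
bash install_requirements.sh

# 3) Run the classifier on the bird photo
python3 classify_image.py \
  --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
  --labels models/inat_bird_labels.txt \
  --input images/parrot.jpg
```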
Inference speed may vary depending on the host system and whether a USB 3.0 connection is used.
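Under the hood, the sample classifier uses the tflite_runtime Interpreter API mentioned earlier. The following is a minimal sketch of that flow, assuming tflite_runtime is installed and an Edge TPU-compiled model file is on disk; the model path and the preprocess helper are illustrative, not part of the official sample:

```python
import numpy as np


def preprocess(img, size):
    """Nearest-neighbour resize to the model's input size and cast to uint8
    (quantized Edge TPU models take uint8 input tensors)."""
    h, w = size
    rows = np.arange(h) * img.shape[0] // h
    cols = np.arange(w) * img.shape[1] // w
    return img[rows][:, cols].astype(np.uint8)[np.newaxis, ...]


def classify(model_path, img):
    """Run one inference on the Edge TPU and return the top class index.

    Requires the tflite_runtime library and a connected USB Accelerator.
    """
    import tflite_runtime.interpreter as tflite
    # libedgetpu.so.1 is the Linux runtime installed above; on macOS the
    # delegate is libedgetpu.1.dylib, and on Windows it is edgetpu.dll.
    interpreter = tflite.Interpreter(
        model_path=model_path,
        experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    _, h, w, _ = inp["shape"]
    interpreter.set_tensor(inp["index"], preprocess(img, (h, w)))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return int(np.argmax(scores))
```

Mapping the top class index to a name is then just a lookup into the downloaded label file.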
To run other kinds of neural networks, check out the official example projects, including examples of real-time object detection, pose estimation, key-phrase detection, on-device transfer learning, and more.
AI hardware and software supporting the Google Edge TPU
Model Play and Tiorb AIX, both developed by Gravitylink, fully support the Edge TPU. AIX is an AI hardware product that integrates two core functions: computer vision and intelligent voice interaction. Its built-in AI accelerator chip (Coral Edge TPU / Intel Movidius) supports deep learning inference at the edge and provides reliable performance.
Model Play is an AI model resource platform for developers worldwide. It offers a variety of built-in AI models and, combined with Tiorb AIX and built on Google's open-source neural network architectures and algorithms, provides transfer learning without writing any code: simply select pictures and define the model and category names to complete AI model training.
