Google Coral Edge TPU

Google Coral Edge TPU is distributed globally through its agency Gravitylink!

Smart Speaker Prototype AIY Voice Kit Unboxing

2020-07-28 11:02:51 | Google AIY
Google AIY is a Google project that develops DIY artificial intelligence kits. There are currently two: a Voice Kit and a Vision Kit. For now I have only bought the AIY Voice Kit, so let's unbox it and see what's inside.
The outer packaging is very simple. The front is the finished product picture. The back is the internal parts, including a main circuit board Voice HAT, microphone, speaker, an arcade style button, two pieces of paper, and some connecting wires.
There is a very thick manual with detailed assembly instructions and some project ideas; it's practically a magazine. In fact, this kit was originally given away with an issue of The MagPi magazine.
The Voice HAT is the core of this kit: the speaker, microphone, button, and so on all connect to it, and it in turn connects to the Raspberry Pi. I bought a Raspberry Pi just for this.
Well, start assembling according to the instructions. First plug the Voice HAT into the Raspberry Pi.
Then connect the speaker. The green terminal block on the Voice HAT accepts a speaker: insert the positive and negative wires into the screw holes and tighten the screws. A second speaker could actually be connected on the right side, but that spot has no soldered terminal, which is a pity.
Now connect the microphone. This one is easier: just plug it in and it's done.
The most complicated origami paper starts below, the first is the inner paperboard.
Then fold the outer cardboard and tuck the inner side.
OK, the last step: install the arcade-style button. It's a really nice button; there's even an LED inside.
Then fix the microphone in place, aligned with the hole in the cardboard. The manual says to use double-sided tape; lacking that, I fixed it directly with ordinary tape.
Ok, seal it, and the assembly is complete.
The next step is to insert the SD card, power it on, and... there it is, a smart speaker. Compared with the Google Home Mini, it is a bit big.

Why Engage in 5G? Talking About Edge AI and Model Play

2020-07-21 11:52:24 | Google AIY
That 5G will change the world should not shock anyone, and it is not just hype. Why? Because 5G's technical capabilities can change existing technologies in unimaginable ways.
Research suggests that by 2035, 5G could generate $12.3 trillion in global economic output and support 22 million jobs worldwide, a huge potential. This technology will not only support devices but can also change lives. Beyond mobile devices, the fields of artificial intelligence, the Internet of Things, and robotics will all be affected by 5G. In this article, we explore 5G's potential in these areas.
Self-driving car
As the Internet of Things binds our physical world to digital platforms, 5G is critical to its sustainability. From spotting obstacles, interacting with smart signs, and following maps, to communicating with other manufacturers' vehicles, these cars carry enormous responsibility.
All of this can only happen when large amounts of data are transmitted and processed in real time. That requires a network of matching speed and capacity, and 5G appears able to provide one. 5G offers high capacity, low latency, and safety, all of which are essential to putting self-driving cars on the road.
Smart City
The cities of the future will be different from the cities we live in today. They will include connected devices, interactive autonomous vehicles, on-demand smart buses, driverless taxis, and more. Smart cities will also include smart buildings, which will let companies increase efficiency by regulating energy consumption.
Data from these cities will help us understand how resources are used in a given area and how to optimize that use. The possibilities are endless, but we will need the next-generation network, 5G, to make them a reality.
IoT technology
The Internet of Things has begun to change the world, but the integration of 5G will change it completely, connecting billions of additional devices to the Internet. Although the home Internet of Things has great potential, the real opportunity lies in the Industrial Internet of Things.
From manufacturing, agriculture to retail, healthcare, etc., the Internet of Things will be omnipresent. 5G will fully expand its coverage. For example, 5G in healthcare will enable robotic surgery, personalized medicine, wearable healthcare, etc.
Robotics
We all know the potential that robotics brings to the industry, but many people may not know what can be done with 5G collaboration. In order to operate efficiently, robots need to exchange large amounts of data with systems and employees. To this end, the capacity and capabilities of 5G networks are required.
For example, in agriculture, robots can easily monitor the condition of crop fields and send near real-time video and information back to farmers. After receiving instructions, a robot can perform the required operations, such as trimming, spraying, or harvesting crops. Robots can also take field measurements and transmit them to remote scientists.
Why is it so important? The world’s population is growing, and our needs are also growing. In order to maintain food supplies, new technologies need to be brought to the field.
AI entertainment
One obvious use of 5G networks is to support the growing demand for mobile video. The network's data capacity, speed, and low latency will enable innovative forms of entertainment, including virtual reality and augmented reality. We may see a lot of innovation in AR and VR, and not only in entertainment: companies will also see benefits.
AI, Internet of Things and 5G – Why?
We also see a lot of confusion hovering over AI and the IoT. One thing we all understand is that everything boils down to data, and to processing large amounts of data in real time.
However, no existing network can support all of this. 5G promises:
· Low power consumption, so IoT sensors can run for a long time
· Support for far more devices than 4G
· Incredibly high-speed data connections
· Low-latency delivery, so more data can be processed in real time
From predictive maintenance and cost reduction to problem solving/making necessary changes, 5G will revolutionize the industry.
Network optimization and distribution
For example, 5G will enable network slicing, whereby a portion of the network's bandwidth can be dedicated to and prioritized for specific needs. This means the network can be sliced and distributed among participants according to task priority, with each slice used for a given task.
5G's low latency
Remember, 5G is not just about raw speed. Low latency lets 5G networks provide very near real-time video transmission for sports or security purposes. In industries such as construction and healthcare, where regular, real-time coordination is key, this feature may prove extremely beneficial.
In construction, low latency enables effective video conferencing between team members to get work done.
In medical care, medical service providers can monitor patients with the same efficiency even when they are outside the hospital.
Edge artificial intelligence with low latency, high efficiency and low consumption
Edge TPU complements Google Cloud TPU and Google Cloud services, providing an end-to-end, cloud-to-edge, hardware-plus-software infrastructure that makes it easier for customers to deploy AI-based solutions.
Edge TPU can be used in more and more industrial application scenarios, such as predictive maintenance, anomaly detection, machine vision, robotics, voice recognition, etc. It can be used in manufacturing, on-premises, healthcare, retail, smart spaces, transportation, etc.
LG's internal IT service department has tested Edge TPUs and plans to use them on testing equipment in its product lines. Shingyoon Hyun, chief technology officer of the LG CNS organization, said the current LG inspection devices process more than 200 display panel images per second, and all problems are inspected manually. The accuracy of the existing system is about 50%, and Google AI can increase it to 99.9%.
Model Play is an AI model resource platform for global developers, with built-in diversified AI models, compatible with Tiorb AIX, and supports Google Edge TPU edge artificial intelligence computing chips, accelerating professional development.
In addition, Model Play provides a complete, easy-to-use transfer learning model training tool and a wealth of model examples, which pair with Tiorb AIX to enable rapid development of all kinds of AI applications. Built on Google's open-source neural network architectures and algorithms, its autonomous transfer learning function requires no code: users complete AI model training by selecting pictures and defining model and category names, making artificial intelligence easy to learn and easy to develop.

Edge Computing and Local AI, Google Coral Hardware is too Modest Compared to Intel

2020-07-15 11:24:08 | Google AIY
AI enables machines to perform tasks that used to belong only to the human domain. For example, AI-driven cameras can be set up on a factory production line to spot product defects for quality control. Need to analyze medical data? Machine learning can identify potential tumors in a scan and flag them for the doctor.
However, such applications are only useful if they are fast and secure. Few factories can afford AI cameras that take minutes to process each image, and no patient wants to risk exposing their medical data by sending it to the cloud for analysis.
These are the problems that Google is trying to solve through a plan called Coral.
"Traditionally, data from AI devices is sent to large computing instances, which are located in centralized data centers where machine learning models can be run quickly," Coral product manager Vikram Tank explained to The Verge via email. "Coral is Google's platform of hardware and software components that helps you build devices using local AI, that is, providing hardware acceleration for neural networks on edge devices."
You may never have heard of Coral before (it "graduated" from beta last October), but it is part of the rapidly growing field of edge AI. Market analysts predict that more than 750 million edge AI chips and computers will be sold in 2020, growing to 1.5 billion by 2024. Although most will go into consumer devices such as phones, a large share is destined for enterprise customers in industries such as automotive and healthcare.
To meet customer needs, Coral offers two main types of products: accelerators and development boards for prototyping new ideas, and modules designed to power the AI brains of production equipment such as smart cameras and sensors. At the core of both is Google's Edge TPU, an ASIC chip optimized to run lightweight machine learning algorithms, similar to the water-cooled TPUs used in Google's cloud servers.
Tank said that although individual engineers can use the hardware to create interesting projects (for example, Coral provides guides for building an AI marshmallow sorter and a smart bird feeder), the long-term focus is on corporate customers in healthcare and other industries.
As an example of the kind of solution Coral targets, Tank offered the scenario of a self-driving car that uses machine vision to identify objects on the street.
He said: "Cars traveling at 65 mph will cover nearly 10 feet in 100 milliseconds," so any processing delay caused by a slow mobile connection "increases risk in critical use cases." Coral can run the analysis on the device itself, determining whether the stop sign or traffic light ahead is lit without waiting on a slow connection. That is much safer.
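The arithmetic behind that figure checks out, as this quick sanity calculation shows:

```python
# Distance a car covers during a 100 ms processing delay at 65 mph.
MPH = 65
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

feet_per_second = MPH * FEET_PER_MILE / SECONDS_PER_HOUR  # about 95.3 ft/s
distance_100ms = feet_per_second * 0.100                  # about 9.5 ft

print(f"{distance_100ms:.1f} feet per 100 ms")
```

At roughly 9.5 feet per 100 ms, the "nearly 10 feet" quoted above is accurate.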
Tank said there are similar benefits for privacy. He said: "Consider a medical device manufacturer that wants to use image recognition to perform real-time analysis of ultrasound images." Sending those images to the cloud creates a potential weak point for hackers to target, but analyzing them on the device lets patients and doctors be "confident that the data processed on the device will not exceed their control."
Google's Edge TPU, a micro-processing chip optimized for AI, is the core of most Coral products.
Tank said that although Coral's market target is the business world, the project actually grew out of Google's "AIY" line of DIY machine learning kits. Launched in 2017 and built on Raspberry Pi computers, the AIY kits let anyone build their own smart speaker or smart camera, and were a great success in the STEM and maker markets.
Tank said that the AIY team quickly noticed that although some customers just wanted to follow the instructions and make toys, other customers wanted to use hardware to make their own device prototypes. Coral was created to cater to these customers.
The problem for Google is that dozens of companies have businesses similar to Coral. They range from startups such as Seattle's Xnor, which makes AI cameras efficient enough to run on solar power, to heavyweights like Intel, which launched one of the first enterprise USB accelerators in 2017 and in December last year paid $2 billion to acquire chipmaker Habana Labs to improve its edge AI offerings (among other capabilities).
In view of the large number of competitors, the Coral team said that by closely integrating its hardware with Google’s AI service ecosystem, it can be different.
This product line, spanning chips, cloud training, and development tools, has long been a mainstay of Google's AI portfolio. For Coral specifically, there is an AI model library compiled especially for its hardware, plus AI services on Google Cloud that integrate directly with various Coral modules (such as its environmental sensors).
In fact, Coral is tightly integrated with Google's AI ecosystem: Edge TPU-powered hardware can only be used with Google's ML framework, TensorFlow. Competitors in the edge AI market told The Verge that this may be a limiting factor.
"Their edge products work only with their own platform, while our products support all major AI frameworks and models on the market," a spokesperson for edge AI company Kneron told The Verge. (Kneron said its assessment is not "negative," and that Google's entry into the market is welcome because it "verifies and drives innovation in this area.")
However, it is unclear how much business Coral is actually doing. Google certainly does not push Coral as hard as its Cloud AI services, and the company has yet to disclose any sales figures or targets. A source familiar with the matter did tell The Verge that most of Coral's orders are for single devices, including AI accelerators and development boards, with only a few enterprise customers ordering on the scale of 10,000 devices.
For Google, Coral's appeal may not be revenue so much as learning more about how AI is applied in important areas. In today's world of practical machine learning, all roads inevitably lead to AI at the edge.
AI hardware and software supporting Google Edge TPU edge artificial intelligence computing chip
The Model Play and Tiorb AIX launched by Gravitylink can also perfectly support Edge TPU. Tiorb AIX is an artificial intelligence hardware that integrates two core functions of computer vision and intelligent voice interaction. It is equipped with professional AI edge computing chips and various sensors. Model Play is an AI model resource platform for global developers. It has built-in diversified AI models, compatible with Tiorb AIX, and supports Google Edge TPU, an edge artificial intelligence computing chip, to accelerate professional-level development.
In addition, Model Play provides complete, easy-to-use transfer learning model training tools and rich model examples, which can be combined with Tiorb AIX for rapid development of all kinds of AI applications. Built on Google's open-source neural network architectures and algorithms, its autonomous transfer learning function requires no code: users complete AI model training by selecting pictures and defining model and category names, making artificial intelligence easy to learn and develop.

Google Coral USB Accelerator Installation Guide

2020-07-10 11:09:24 | Google AIY
The Google Coral USB Accelerator is a USB device that provides an Edge TPU as a co-processor for your computer. When connected to a Linux, Mac, or Windows host, it accelerates the inference speed of machine learning models.
All you need to do is download the Edge TPU runtime and TensorFlow Lite library on the computer connected to the USB Accelerator. Then, use the sample application to perform image classification.
System Requirements:
A computer with one of the following operating systems:
· Linux: Debian 6.0 or higher, or any of its derivatives (such as Ubuntu 10.0+), on an x86-64 or ARM64 system architecture (Raspberry Pi is supported, but only the Raspberry Pi 3 Model B+ and Raspberry Pi 4 have been tested)
· macOS 10.15, with MacPorts or Homebrew installed
· Windows 10
· An available USB port (for best performance, use a USB 3.0 port)
· Python 3.5, 3.6, or 3.7
Operating Procedures
1. Install Edge TPU runtime
The Edge TPU runtime is required to communicate with the Edge TPU. You can install it on a Linux, Mac, or Windows host by following the instructions below.
1) Linux system
①Add the official Debian package to your system;
② Install Edge TPU runtime:
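At the time of writing, Coral's documentation gives the following commands for these two steps on Debian-based systems (a sketch; the repository URL and package name may change, so check the official getting-started guide):

```shell
# ① Add Coral's Debian package repository and its signing key
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | \
  sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update

# ② Install the standard (default-frequency) Edge TPU runtime
sudo apt-get install libedgetpu1-std
```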
Connect the USB Accelerator to the computer using the included USB 3.0 cable. If it was already plugged in, remove it and plug it back in so the newly installed udev rule takes effect.
※ Install at maximum working frequency (optional)
The command above installs the standard Edge TPU runtime for Linux, which runs the device at the default clock frequency. Alternatively, you can install a runtime version that runs at the maximum frequency (twice the default). This speeds up inference but also increases power consumption, and the USB Accelerator becomes very hot.
If you are not sure whether your application needs to improve performance, you should use the default operating frequency. Otherwise, you can install the maximum frequency runtime as follows:
sudo apt-get install libedgetpu1-max
You cannot install two versions of the runtime at the same time, but you can switch by simply installing the alternate runtime, as shown above.
Note: When operating the device at maximum frequency, the metal on the USB Accelerator may become very hot. This may cause burns. To avoid injury, keep the device out of reach when operating the device at the maximum frequency, or use the default frequency.
2) Mac system
① Download and unzip the Edge TPU runtime
② Install Edge TPU runtime
The installation script will ask if you want to enable the maximum operating frequency. Running at the maximum operating frequency will increase the speed of inference, but it will also increase power consumption and make the USB Accelerator very hot. If you are not sure that your application needs to improve performance, you should type "N" to use the default operating frequency.
You can read more about performance settings in the official USB Accelerator data sheet.
Now, use the included USB 3.0 cable to connect the USB Accelerator to the computer. Then continue to install the TensorFlow Lite library.
3) Windows system:
① Click to download the latest official compressed package. Unzip the ZIP file, and then double-click the install.bat file.
A console window will open to run the installation script, and it will ask if you want to enable the maximum operating frequency. Running at the maximum operating frequency will increase the speed of inference, but it will also increase power consumption and make the USB Accelerator very hot. If you are not sure that your application needs to improve performance, you should type "N" to use the default operating frequency.
You can read more about performance settings in the Coral USB Accelerator data sheet provided by Google.
Now, use the included USB 3.0 cable to connect the USB Accelerator to the computer.
2. Install the TensorFlow Lite library
There are multiple ways to install the TensorFlow Lite API, but to start using Python, the easiest option is to install the tflite_runtime library. The library provides the most basic code (mainly Interpreter API) required to run inference using Python, which saves a lot of disk space.
To install it, follow the TensorFlow Lite Python quick start and then return to this page after running the pip3 install command.
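On most currently supported platforms this boils down to a single pip command (a sketch; at the time the USB Accelerator shipped, installation instead used a platform-specific wheel from dl.google.com, so follow the quick start if this fails):

```shell
python3 -m pip install tflite-runtime
```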
3. Use the TensorFlow Lite API to run the model
You can now run inference on the Edge TPU. Use the sample code and model to perform an image classification.
1) Download the sample code from GitHub
2) Download bird classifier models, label files and bird photos
3) Run the image classifier using photos of birds
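The classifier's raw output is just a vector of scores, one per label. As an illustrative sketch (not the official sample code; the labels below are hypothetical), turning those scores into readable results is a small top-k selection:

```python
def top_k_labels(scores, labels, k=3):
    """Return the k highest-scoring (label, score) pairs, best first."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [(labels[i], scores[i]) for i in ranked[:k]]

# Hypothetical scores from a small bird classifier:
labels = ["background", "Ara macao", "Platycercus elegans", "Coracias caudatus"]
scores = [0.01, 0.06, 0.84, 0.09]
print(top_k_labels(scores, labels, k=2))
```

The official sample scripts do this ranking for you and print the top results with their scores.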
Inference speed may vary depending on the host system and whether a USB 3.0 connection is used.
To run other types of neural networks, check out the official sample projects, including examples of performing real-time object detection, pose prediction, key phrase detection, on-device transfer learning, and more.
AI hardware and software supporting Google Edge TPU
The Model Play and Tiorb AIX developed by Gravitylink also fully support the Edge TPU. Tiorb AIX is an artificial intelligence hardware that integrates two core functions, computer vision and intelligent voice interaction. Its built-in AI acceleration chip (Coral Edge TPU or Intel Movidius) supports edge deep learning inference and provides reliable performance.
Model Play is an AI model resource platform for global developers with a diverse library of built-in AI models. Combined with Tiorb AIX, and building on Google's open-source neural network architectures and algorithms, it offers an autonomous transfer learning function: without writing code, users complete AI model training simply by selecting pictures and defining model and category names.

The Chinese Version of the AI Model Platform, Model Play, Is Officially Open to Developers!

2020-07-07 11:39:31 | Google AIY
Model Play is the first AI model platform based on the Google Edge TPU chip. The Chinese version of Model Play has now launched, and developers can officially visit the site.
The most direct application of machine learning is applying models to real business problems. AI models are an important guarantee for bringing artificial intelligence technology into production practice and promoting industrial development, and they are an important part of the artificial intelligence ecosystem.
Model Play is an AI model resource exchange and trading platform for global users. It provides rich and diverse functional models for machine learning and deep learning, supports multiple types of mainstream smart terminal hardware such as Tiorb AIX, and helps users quickly create and deploy models, significantly improving model development and application efficiency and lowering the threshold for AI development.
The AI models on the Model Play platform are compatible with the mainstream edge AI chips on the market, including Google Coral Edge TPU, Intel Movidius, and Nvidia Jetson Nano. Models for the Google Coral Edge TPU in particular can run directly on Tiorb AIX once downloaded.
Edge TPU can be used in a growing number of industrial scenarios, such as predictive maintenance, anomaly detection, machine vision, robotics, and speech recognition, and applied in fields such as manufacturing, on-premises deployment, healthcare, retail, smart spaces, and transportation. Its small size and low energy consumption, combined with excellent performance, make it possible to deploy high-accuracy AI at the edge.
Users can not only publish their trained AI models on Model Play, but also download models they are interested in, in order to retrain and expand their AI ideas, and realize the idea-prototype-product process.
With its rich and diverse model library, Model Play suits a wide range of AI application scenarios. Whether you are a smart product design team, a manufacturer, an educator, or an individual developer, you can find valuable content here.
Model Play has also launched a model-collection campaign for global developers. Interested developers can give it a try and share their AI ideas with developers worldwide.
Artificial intelligence technology will lead a new round of industrial transformation. Google, for example, explicitly shifted its development strategy from "mobile first" to "AI first" at its 2017 developer conference, and Microsoft's fiscal 2017 annual report made artificial intelligence part of the company's vision for the first time. The field is at the forefront of innovation and entrepreneurship: a McKinsey report noted that global AI R&D investment exceeded US$30 billion in 2016 and is growing rapidly, while a report from the venture capital research firm CB Insights counted thousands of newly founded AI startups worldwide, with total AI investment exceeding US$20 billion, a twofold year-on-year increase.
"Tiorb AIX" Mini "Super Brain"
Tiorb AIX, a DIY computer with AI visual recognition, focuses on AI programming and development needs, with tools usable by both ordinary developers and technical experts. Available with built-in edge computing hardware in Coral Edge TPU and Intel Movidius versions, it lets users build on the stock model library for their own secondary development: face detection, detection of different object types, expression recognition... build your own exclusive AI device.