
Why Is Google Coral the BEST Choice for You?

"Advanced neural network processing for low-power devices"

Coral is a hardware and software platform for building intelligent devices with fast neural network inferencing.

Yep, the full line of Google Coral products is available at #Gravitylink



At the heart of our devices is the Edge TPU coprocessor. This is a small ASIC built by Google that's specially designed to execute state-of-the-art neural networks at high speed, with a low power cost.

Performance

The Edge TPU is capable of performing 4 trillion operations per second (4 TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). At full throughput, that works out to roughly 2 watts of power.

The following chart compares the inference time for several popular vision models in TensorFlow Lite format, when executed either on a modern embedded CPU or on the Coral Dev Board (lower is better).

As a part of Google Research, our team is working with other machine learning teams at Google to help build the next generation of neural networks for low-power devices. Constant progress is being made on TensorFlow tools that optimize models for embedded devices, and on new neural network architectures specially designed to provide fast inferencing speeds in a small package.

For example, the new EfficientNet-EdgeTPU model provides new levels of performance that balance low latency with high accuracy on the Edge TPU. It comes in three sizes (small, medium, and large), offering increasing levels of accuracy with trade-offs in inference latency.

Flexibility and scalability

We offer the Edge TPU in multiple form factors to suit various prototyping and production environments—from embedded systems deployed in the field, to network systems operating on-premises.

For example, our USB Accelerator simply plugs into a desktop, laptop, or embedded system such as a Raspberry Pi so you can quickly prototype your application. From there, you can scale to production systems by adding our Mini PCIe or M.2 Accelerator to your hardware system.

If you're looking for a fully-integrated system, you can get started with our Dev Board—a single-board computer based on NXP's i.MX 8M system-on-chip. Then you can scale to production by connecting our System-on-Module (included on the Dev Board) to your own baseboard.

Model compatibility

The Edge TPU supports a variety of model architectures built with TensorFlow, including models built with Keras.

Our workflow to create models for Coral is based on the TensorFlow framework. No additional APIs are required to build or run your model. You only need a small runtime package, which delegates the execution of your model to the Edge TPU.

To build a compatible model, you need to convert a trained model into the TensorFlow Lite format and quantize all parameter data (you can use either quantization-aware training or full integer post-training quantization). Then pass the model to our Edge TPU Compiler and it's ready to execute using the TensorFlow Lite API.
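To make that workflow concrete, here's a minimal sketch of full integer post-training quantization using the TensorFlow Lite converter. The tiny Keras model and the random representative dataset are stand-ins; in practice you'd use your trained model and a few hundred preprocessed training samples:

```python
import tensorflow as tf

# Stand-in for your trained model; any quantizable Keras model works here.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

def representative_dataset():
    # Yield samples that reflect real inputs; the converter uses them to
    # calibrate quantization ranges. Random data is a placeholder only.
    for _ in range(100):
        yield [tf.random.uniform((1, 224, 224, 3))]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# The Edge TPU executes only integer ops, so require full-integer quantization.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())

# Final step on the command line (produces model_quant_edgetpu.tflite):
#   edgetpu_compiler model_quant.tflite
```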



Pre-compiled models

We have verified many popular model architectures for image classification, object detection, semantic segmentation, pose estimation, and keyphrase detection, with more to come.

If you want to try one of these models in your application, you can download a pre-trained, pre-compiled version.
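Running a compiled model then takes only a few lines with the TensorFlow Lite Python API. This sketch assumes a Linux host with the Edge TPU runtime (libedgetpu) and the tflite_runtime package installed, plus a compiled classification model saved under the hypothetical name model_edgetpu.tflite:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# The delegate hands execution of the compiled ops to the Edge TPU.
# "libedgetpu.so.1" is the Linux library name; it differs on Mac/Windows.
interpreter = Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Edge TPU models are fully quantized, so inputs are typically uint8.
height, width = inp["shape"][1:3]
frame = np.random.randint(0, 256, (1, height, width, 3), dtype=np.uint8)

interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]
print("Top class:", int(np.argmax(scores)))
```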

Mendel Linux

To ease development with our fully-integrated systems (the Dev Board and System-on-Module), we created a derivative of Debian Linux called Mendel.

We've optimized Mendel for embedded systems by making it very lightweight. So although you can connect a keyboard and monitor to get a shell interface, you won't find any desktop apps. You will find a familiar Linux interface and a Debian packaging system, providing access to the extensive Debian software archives and a huge range of customizations.

Mendel also comes bundled with the tools you need to build your headless ML applications, including standard Python and C++ libraries, the Edge TPU API, and the Edge TPU runtime. Additionally, we include a tool called MDT (Mendel Development Tool) that makes it easy to connect securely (using SSH/mDNS), transfer files, and run other commands from a remote computer.



Simultaneous inferencing

For applications that run multiple models, you can execute your models concurrently on a single Edge TPU by co-compiling the models so they share the Edge TPU scratchpad memory. Or, if you have multiple Edge TPUs in your system, you can increase performance by assigning each model to a specific Edge TPU and running them in parallel.
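For the multi-Edge-TPU case, the delegate's device option lets you pin each model to a specific accelerator. Here is a rough sketch, assuming two compiled models (file names hypothetical) and two attached Edge TPUs; co-compiling for a single Edge TPU would instead be done with edgetpu_compiler model_a.tflite model_b.tflite:

```python
import threading
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

def make_interpreter(model_path, device):
    # The "device" option pins this interpreter to one Edge TPU,
    # e.g. ":0" and ":1" for the first and second enumerated devices.
    delegate = load_delegate("libedgetpu.so.1", {"device": device})
    interpreter = Interpreter(model_path=model_path,
                              experimental_delegates=[delegate])
    interpreter.allocate_tensors()
    return interpreter

def run(interpreter):
    inp = interpreter.get_input_details()[0]
    frame = np.random.randint(0, 256, tuple(inp["shape"]), dtype=np.uint8)
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()

# Hypothetical model files; each runs on its own Edge TPU, in parallel.
interpreters = [make_interpreter("detector_edgetpu.tflite", ":0"),
                make_interpreter("classifier_edgetpu.tflite", ":1")]

threads = [threading.Thread(target=run, args=(it,)) for it in interpreters]
for t in threads:
    t.start()
for t in threads:
    t.join()
```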

On-device training

Although the Edge TPU is primarily intended for inferencing, you can also use it to accelerate transfer learning with a pre-trained model. To simplify this process, we've created a Python API that executes the backbone of your model on the Edge TPU during training, and then calculates and saves new weight parameters for the final layer.
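To illustrate the idea (this is a rough sketch of the technique, not our actual Python API): the quantized backbone runs as an embedding extractor on the Edge TPU, and a new softmax layer is fit on the CPU from those embeddings. The model file name and the images/labels arrays are hypothetical:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Backbone compiled with its classification head removed, so its output
# is an embedding vector. The file name here is hypothetical.
interpreter = Interpreter(
    model_path="backbone_embedder_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def embed(image):
    # One forward pass of the backbone, accelerated by the Edge TPU.
    interpreter.set_tensor(inp["index"], image[np.newaxis])
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0].astype(np.float32)

def train_last_layer(images, labels, num_classes, epochs=50, lr=0.05):
    # Plain softmax regression over the embeddings, trained on the CPU;
    # the result is a new set of weights for the final layer.
    X = np.stack([embed(img) for img in images])
    Y = np.eye(num_classes)[labels]
    W = np.zeros((X.shape[1], num_classes))
    b = np.zeros(num_classes)
    for _ in range(epochs):
        logits = X @ W + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - Y) / len(X)
        W -= lr * (X.T @ grad)
        b -= lr * grad.sum(axis=0)
    return W, b
```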
