
Models for Google Coral Edge TPU

In the lists below, each "Edge TPU model" link provides a .tflite file that is pre-compiled to run on the Edge TPU. You can run these models on your Coral device using our example code. (Remember to download the model's corresponding labels file.)
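Every classification and detection model below pairs with a labels file that maps class indices in the model's output back to human-readable names. A minimal sketch of parsing one, assuming the common Coral format of an `index name` pair per line (some labels files omit the index, in which case the line position is used):

```python
def load_labels(text):
    """Parse a labels file's contents into {index: name}.

    Handles both 'index  name' lines and plain one-name-per-line files.
    """
    labels = {}
    for line_num, line in enumerate(text.strip().splitlines()):
        parts = line.strip().split(maxsplit=1)
        if len(parts) == 2 and parts[0].isdigit():
            labels[int(parts[0])] = parts[1]
        else:
            labels[line_num] = line.strip()
    return labels

# Example with the 'index  name' format:
sample = "0  background\n1  person\n2  bicycle"
print(load_labels(sample))  # {0: 'background', 1: 'person', 2: 'bicycle'}
```

With the labels loaded, the class index reported by the interpreter can be turned into a name with a plain dictionary lookup.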

Google Coral Edge TPU products are available at the Gravitylink online store:
https://store.gravitylink.com/global

  • Coral Dev Board: $149.99
  • USB Accelerator: $74.99
  • Mini PCIe Accelerator: $34.99
  • M.2 Accelerator A+E key: $34.99
  • M.2 Accelerator B+M key: $34.99
  • System-on-Module (SoM): $114.99


For many of the models, we've also provided a link for "All model files," which is an archive file that includes the following:
  • Trained model checkpoints
  • Frozen graph for the trained model
  • Eval graph text protos (for easy viewing)
  • Info file containing input and output information
  • Quantized TensorFlow Lite model that runs on CPU (included with classification models only)

Download the "All model files" archive to get the checkpoint file you'll need if you want to use the model as the basis for transfer learning, as shown in the tutorials on retraining a classification model and retraining an object detection model.
If you'd like to download all models at once, you can clone our Git repo https://github.com/google-coral/edgetpu and then find the models in test_data/.
Note: These are not production-quality models; they are intended for demonstration purposes only.
To build your own model for the Edge TPU, you must use the Edge TPU Compiler.
All models trained on ImageNet used the ILSVRC2012 dataset.

Image classification


EfficientNet-EdgeTpu (S)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224


EfficientNet-EdgeTpu (M)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 240x240

EfficientNet-EdgeTpu (L)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 300x300

MobileNet V1 (ImageNet)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224

MobileNet V2 (ImageNet)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224

MobileNet V2 (iNat insects)
Recognizes 1,000+ types of insects
Dataset: iNaturalist
Input size: 224x224

MobileNet V2 (iNat plants)
Recognizes 2,000+ types of plants
Dataset: iNaturalist
Input size: 224x224

MobileNet V2 (iNat birds)
Recognizes 900+ types of birds
Dataset: iNaturalist
Input size: 224x224

Inception V1 (ImageNet)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224

Inception V2 (ImageNet)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224

Inception V3 (ImageNet)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 299x299

Inception V4 (ImageNet)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 299x299
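Each classification model above expects input at its listed size (and, being quantized, as uint8 tensors). A minimal numpy-only sketch of resizing a frame to a model's input shape; the nearest-neighbor sampling here is an illustrative shortcut, and real pipelines usually resize with PIL before invoking the interpreter:

```python
import numpy as np

def prepare_input(image, target_h, target_w):
    """Resize an HxWx3 uint8 image to the model's input size with
    nearest-neighbor sampling and add the batch dimension.

    A numpy-only sketch; PIL's Image.resize is the usual choice.
    """
    h, w, _ = image.shape
    rows = np.arange(target_h) * h // target_h  # nearest source row per output row
    cols = np.arange(target_w) * w // target_w  # nearest source column per output column
    resized = image[rows][:, cols]
    return np.expand_dims(resized, axis=0)  # shape: (1, target_h, target_w, 3)

img = np.zeros((480, 640, 3), dtype=np.uint8)
print(prepare_input(img, 224, 224).shape)  # (1, 224, 224, 3)
```

The same helper works for the 240x240, 299x299, and 300x300 models above by changing the target dimensions.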

Object detection


MobileNet SSD v1 (COCO)
Detects the location of 90 types of objects
Dataset: COCO
Input size: 300x300

MobileNet SSD v2 (COCO)
Detects the location of 90 types of objects
Dataset: COCO
Input size: 300x300

MobileNet SSD v2 (Faces)
Detects the location of human faces
Dataset: Open Images v4
Input size: 320x320
(Does not require a labels file)
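SSD detection models conventionally emit parallel arrays of bounding boxes, class IDs, and confidence scores. A hedged numpy sketch of filtering those outputs by a score threshold; the tensor layout assumed here (boxes as normalized [ymin, xmin, ymax, xmax] rows) is the common SSD convention, so check the model's info file for the exact ordering:

```python
import numpy as np

def filter_detections(boxes, class_ids, scores, threshold=0.5):
    """Keep detections scoring at or above the threshold.

    Assumes the usual SSD output layout: boxes are [ymin, xmin, ymax, xmax]
    in normalized [0, 1] coordinates, one row per detection.
    """
    keep = scores >= threshold
    return boxes[keep], class_ids[keep].astype(int), scores[keep]

boxes = np.array([[0.1, 0.1, 0.5, 0.5], [0.2, 0.2, 0.9, 0.9]])
class_ids = np.array([0.0, 17.0])  # indices into the model's labels file
scores = np.array([0.92, 0.31])
b, c, s = filter_detections(boxes, class_ids, scores)
print(c, s)  # [0] [0.92]
```

The surviving class IDs can then be mapped to names through the model's labels file (the faces model above needs no such file, since it detects a single class).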

Semantic segmentation


MobileNet v2 DeepLab v3 (0.5 depth multiplier)
Recognizes and segments 20 types of objects
Dataset: PASCAL VOC 2012
Input size: 513x513
Depth multiplier: 0.5

MobileNet v2 DeepLab v3 (1.0 depth multiplier)
Recognizes and segments 20 types of objects
Dataset: PASCAL VOC 2012
Input size: 513x513
Depth multiplier: 1.0
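A semantic segmentation model assigns a class to every pixel. If the model emits per-pixel class scores rather than a finished label map, the label map is the argmax over the class axis; a short numpy sketch under that assumption (some compiled DeepLab models already output the label map directly, making this step unnecessary):

```python
import numpy as np

def to_label_map(logits):
    """Collapse per-pixel class scores (H, W, num_classes) into a
    per-pixel label map (H, W) by taking the argmax over classes.
    """
    return np.argmax(logits, axis=-1)

scores = np.zeros((2, 2, 21))  # 20 PASCAL VOC classes + background
scores[0, 0, 15] = 1.0         # pretend class 15 wins at pixel (0, 0)
print(to_label_map(scores))
```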

On-device retraining (classification)


MobileNet v1 embedding extractor
This model is compiled with the last fully-connected layer removed so that it can be used as an embedding extractor for on-device transfer learning with the SoftmaxRegression API. It does not perform classification on its own and must be paired with that API.
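The idea behind this pairing is that the frozen extractor turns each image into an embedding vector, and only a small softmax layer is trained on top. A numpy illustration of that training loop under synthetic embeddings; this is not the SoftmaxRegression API itself, just the technique it implements:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_softmax_head(embeddings, labels, num_classes, lr=0.1, steps=200):
    """Train a single softmax layer on fixed embeddings by gradient
    descent on cross-entropy loss (numpy illustration only)."""
    n, d = embeddings.shape
    w = np.zeros((d, num_classes))
    onehot = np.eye(num_classes)[labels]
    for _ in range(steps):
        logits = embeddings @ w
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        w -= lr * embeddings.T @ (probs - onehot) / n  # gradient step
    return w

# Two synthetic, well-separated "classes" of 8-dimensional embeddings:
emb = np.vstack([rng.normal(-1, 0.1, (20, 8)), rng.normal(1, 0.1, (20, 8))])
y = np.array([0] * 20 + [1] * 20)
w = train_softmax_head(emb, y, num_classes=2)
pred = np.argmax(emb @ w, axis=1)
print((pred == y).mean())  # 1.0 on this easily separable toy data
```

In practice the embeddings would come from running images through the compiled extractor on the Edge TPU, while this last-layer training runs on the host CPU.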

MobileNet v1 with L2-norm
This is a modified version of MobileNet v1 that includes an L2-normalization layer and other changes to be compatible with the ImprintingEngine API. It's built for the Edge TPU, but the last fully-connected layer executes on the CPU to enable retraining.
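Weight imprinting sidesteps gradient descent entirely: a new class's weight vector is formed by L2-normalizing each example's embedding, averaging, and normalizing again. A numpy sketch of that idea; the ImprintingEngine API performs the equivalent step on-device:

```python
import numpy as np

def imprint_weights(embeddings):
    """Compute a new class's weight vector by weight imprinting:
    L2-normalize each embedding, average, and normalize the mean.
    """
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    mean = normed.mean(axis=0)
    return mean / np.linalg.norm(mean)

emb = np.array([[3.0, 4.0], [6.0, 8.0]])  # both point in the same direction
w = imprint_weights(emb)
print(w)  # [0.6 0.8]
```

Because classification then reduces to cosine similarity between the embedding and each class's imprinted vector, new classes can be added from just a handful of examples without retraining the backbone.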


