In the lists below, each "Edge TPU model" link provides a .tflite file that is pre-compiled to run on the Edge TPU. You can run these models on your Coral device using our example code. (Remember to download the model's corresponding labels file.)

Google Coral Edge TPU products are available at the Gravitylink online store:
https://store.gravitylink.com/global
- Coral Dev Board: $149.99
- USB Accelerator: $74.99
- Mini PCIe Accelerator: $34.99
- M.2 Accelerator A+E key: $34.99
- M.2 Accelerator B+M key: $34.99
- System-on-Module (SoM): $114.99
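Labels files map each class index to a human-readable name. They are plain text with one class per line; in the Coral model archives each line is typically an integer ID followed by the name, though some files omit the ID and rely on line order. A minimal parser sketch under that assumption (the pycoral library provides its own `read_label_file` utility; this is just an illustration of the format):

```python
import re

def read_label_file(path):
    """Parse a labels file into {class_id: name}.

    Accepts lines like "0  background" (ID + name) and falls back to
    line order when a line has no leading integer ID.
    """
    labels = {}
    with open(path) as f:
        for line_num, line in enumerate(f):
            line = line.strip()
            if not line:
                continue
            match = re.match(r"^(\d+)\s+(.+)$", line)
            if match:
                labels[int(match.group(1))] = match.group(2)
            else:
                labels[line_num] = line
    return labels
```

The returned dictionary lets you turn a model's top-scoring class index directly into a printable name.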
For many of the models, we've also provided a link for "All model files," which is an archive file that includes the following:
- Trained model checkpoints
- Frozen graph for the trained model
- Eval graph text protos (for easy viewing)
- Info file containing input and output information
- Quantized TensorFlow Lite model that runs on CPU (included with classification models only)
Download this "All model files" archive to get the checkpoint file you'll need if you want to use the model as your basis for transfer-learning, as shown in the tutorials to retrain a classification model and retrain an object detection model.
If you'd like to download all models at once, you can clone our Git repo at https://github.com/google-coral/edgetpu and then find the models in test_data/.
Notice: These are not production-quality models; they are for demonstration purposes only.
To build your own model for the Edge TPU, you must use the Edge TPU Compiler.
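Invoking the compiler is a single command on an already integer-quantized .tflite file; it writes a `<name>_edgetpu.tflite` file alongside a compilation log. A guarded sketch of that invocation (the model filename is a placeholder, and the call is skipped where the compiler isn't installed):

```python
import shutil
import subprocess

# Placeholder filename: the input model must already be fully
# integer-quantized before it can be compiled for the Edge TPU.
MODEL = "mobilenet_v2_1.0_224_quant.tflite"

if shutil.which("edgetpu_compiler"):
    # -s prints a per-operation summary; -o sets the output directory.
    subprocess.run(["edgetpu_compiler", "-s", "-o", ".", MODEL], check=False)
else:
    print("edgetpu_compiler not installed; see the Coral docs for setup")
```

The compilation log reports which operations were mapped to the Edge TPU and which will fall back to the CPU.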
All models trained on ImageNet used the ILSVRC2012 dataset.
Image classification
EfficientNet-EdgeTpu (S)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224

EfficientNet-EdgeTpu (M)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 240x240

EfficientNet-EdgeTpu (L)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 300x300

MobileNet V1 (ImageNet)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224

MobileNet V2 (ImageNet)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224

MobileNet V2 (iNat insects)
Recognizes 1,000+ types of insects
Dataset: iNaturalist
Input size: 224x224

MobileNet V2 (iNat plants)
Recognizes 2,000+ types of plants
Dataset: iNaturalist
Input size: 224x224

MobileNet V2 (iNat birds)
Recognizes 900+ types of birds
Dataset: iNaturalist
Input size: 224x224

Inception V1 (ImageNet)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224

Inception V2 (ImageNet)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224

Inception V3 (ImageNet)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 299x299

Inception V4 (ImageNet)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 299x299
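These pre-compiled classifiers are fully integer-quantized, so the input tensor expects uint8 values: float pixel data must be mapped through the input tensor's quantization parameters (scale and zero point) before invoking the interpreter. A minimal sketch of that step, using illustrative scale and zero-point values:

```python
import numpy as np

def quantize_input(image, scale, zero_point):
    """Map float pixel values to the uint8 range a quantized model expects."""
    q = np.round(image / scale + zero_point)
    return np.clip(q, 0, 255).astype(np.uint8)

# Illustrative values: pixels normalized to [0, 1], scale 1/255, zero point 0.
pixels = np.array([0.0, 0.5, 1.0], dtype=np.float32)
quantized = quantize_input(pixels, scale=1.0 / 255, zero_point=0)
```

In practice, read the scale and zero point from the interpreter's input tensor details rather than hard-coding them.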
Object detection
MobileNet SSD v1 (COCO)
Detects the location of 90 types of objects
Dataset: COCO
Input size: 300x300

MobileNet SSD v2 (COCO)
Detects the location of 90 types of objects
Dataset: COCO
Input size: 300x300

MobileNet SSD v2 (Faces)
Detects the location of human faces
Dataset: Open Images v4
Input size: 320x320
(Does not require a labels file)
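SSD detection models produce parallel output arrays: bounding boxes, class IDs, and confidence scores. Post-processing typically keeps only detections above a score threshold; a sketch with made-up output arrays standing in for real model outputs:

```python
import numpy as np

def filter_detections(boxes, class_ids, scores, threshold=0.5):
    """Keep detections whose confidence meets the threshold.

    boxes: (N, 4) arrays of [ymin, xmin, ymax, xmax], normalized to [0, 1]
    class_ids, scores: length-N arrays, index-aligned with boxes
    """
    keep = scores >= threshold
    return boxes[keep], class_ids[keep].astype(int), scores[keep]

# Made-up model outputs for illustration.
boxes = np.array([[0.1, 0.1, 0.4, 0.4], [0.5, 0.5, 0.9, 0.9]])
class_ids = np.array([0.0, 16.0])
scores = np.array([0.91, 0.32])
kept_boxes, kept_ids, kept_scores = filter_detections(boxes, class_ids, scores)
```

The integer class IDs index into the model's labels file; box coordinates are normalized and must be scaled by the original image size for drawing.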
Semantic segmentation
MobileNet v2 DeepLab v3 (0.5 depth multiplier)
Recognizes and segments 20 types of objects
Dataset: PASCAL VOC 2012
Input size: 513x513
Depth multiplier: 0.5

MobileNet v2 DeepLab v3 (1.0 depth multiplier)
Recognizes and segments 20 types of objects
Dataset: PASCAL VOC 2012
Input size: 513x513
Depth multiplier: 1.0
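A semantic segmentation model outputs a per-pixel class map: every pixel gets one of 21 values (20 PASCAL VOC classes plus background). A sketch of summarizing such a map, using a tiny made-up mask in place of real model output:

```python
import numpy as np

def class_coverage(mask, num_classes=21):
    """Return the fraction of pixels assigned to each class index."""
    counts = np.bincount(mask.ravel(), minlength=num_classes)
    return counts / mask.size

# Tiny made-up 2x2 class map: 0 = background, 15 = person in PASCAL VOC.
mask = np.array([[0, 0], [15, 15]])
coverage = class_coverage(mask)
```

The same per-pixel map is what you would color-code to overlay segment outlines on the input image.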
On-device retraining (classification)
MobileNet v1 embedding extractor
This model is compiled with the last fully-connected layer removed so that it can serve as an embedding extractor for on-device transfer learning. It does not perform classification on its own and must be paired with the SoftmaxRegression API.
For details, read Retrain a classification model on-device with backpropagation.
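The idea behind this setup can be sketched without the hardware: the frozen model maps each image to an embedding vector, and only a small softmax (multinomial logistic regression) head is trained with gradient descent. A toy NumPy version of such a head, trained on made-up embeddings (this is an illustration of the concept, not the actual SoftmaxRegression implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax_head(embeddings, labels, num_classes, lr=0.1, steps=200):
    """Train a linear softmax classifier on fixed embeddings (bias omitted
    for brevity) using plain gradient descent on cross-entropy loss."""
    n, d = embeddings.shape
    w = np.zeros((d, num_classes))
    onehot = np.eye(num_classes)[labels]
    for _ in range(steps):
        probs = softmax(embeddings @ w)
        grad = embeddings.T @ (probs - onehot) / n
        w -= lr * grad
    return w

# Toy "embeddings": two well-separated clusters standing in for the
# extractor's output on images of two classes.
emb = np.vstack([rng.normal(-1, 0.1, (20, 4)), rng.normal(1, 0.1, (20, 4))])
labels = np.array([0] * 20 + [1] * 20)
w = train_softmax_head(emb, labels, num_classes=2)
preds = softmax(emb @ w).argmax(axis=1)
```

Because only this small head is trained, retraining is cheap enough to run on-device while the embedding extractor stays frozen on the Edge TPU.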
MobileNet v1 with L2-norm
This is a modified version of MobileNet v1 that includes an L2-normalization layer and other changes to make it compatible with the ImprintingEngine API. It's built for the Edge TPU, but the last fully-connected layer executes on the CPU to enable retraining.
For details, read Retrain a classification model on-device with weight imprinting.
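Weight imprinting exploits that L2-normalization layer: a new class's weight vector is set directly to the normalized mean of the normalized embeddings of its few training images, and classification reduces to cosine similarity against each class vector, with no gradient descent at all. A toy NumPy sketch of the idea (an illustration, not the actual ImprintingEngine implementation):

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-10):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def imprint_weights(embeddings):
    """Imprint one class vector: normalized mean of normalized embeddings."""
    return l2_normalize(l2_normalize(embeddings).mean(axis=0))

def classify(embedding, class_weights):
    """Pick the class whose imprinted vector has highest cosine similarity."""
    scores = class_weights @ l2_normalize(embedding)
    return int(np.argmax(scores)), scores

# Made-up embeddings for two new classes, a few examples each.
cat = np.array([[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]])
dog = np.array([[0.0, 1.0, 0.1], [0.1, 0.9, 0.0]])
weights = np.stack([imprint_weights(cat), imprint_weights(dog)])
pred, _ = classify(np.array([1.0, 0.0, 0.1]), weights)
```

Because adding a class is just averaging a handful of embeddings, new classes can be learned on-device from very few examples and without backpropagation.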