
Google Coral Latest Updates: Edge TPU runtime now available for Mac and Windows

Google released an update for the Edge TPU runtime and compiler with various bug fixes.
The big news is that the Edge TPU runtime and the Python libraries are now available for Mac and Windows!
This means you can now use the Coral USB Accelerator when it's connected to any computer running Debian Linux, macOS, or Windows 10.
If you want to use a Mac or Windows machine with the USB Accelerator, follow our updated get-started guide for the USB Accelerator.
If you want to update your existing Linux computer or Dev Board with the latest tools, simply update your Debian packages as follows:
  1. sudo apt-get update
  2. sudo apt-get install edgetpu python3-edgetpu
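Once the packages are updated, a quick check that the new runtime is active can be helpful. Below is a minimal sketch (not part of the original announcement) that assumes the edgetpu library's edgetpu_utils helper is available; it simply prints the runtime version the library reports:
  # Sketch (assumption): query the runtime version through the edgetpu Python library.
  from edgetpu.basic import edgetpu_utils

  # After the update, the reported string is expected to mention runtime version 13.
  print(edgetpu_utils.GetRuntimeVersion())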
And if you're using the TensorFlow Lite API, you must also update the tflite_runtime module, as per the TensorFlow Lite Python quickstart. For example, here's how to update TensorFlow Lite on the Coral Dev Board:
  1. pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0-cp37-cp37m-linux_aarch64.whl
Updates in this release
  • Edge TPU runtime is now v13
  • Edge TPU compiler is now 2.0.291256449
  • Edge TPU Python library (edgetpu module) is now 2.13.0
  • TensorFlow Lite runtime (tflite_runtime module) is now based on TF 2.1
If you update the Edge TPU runtime, then you must update either the edgetpu or tflite_runtime module, depending on which API you use to run inferences.
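For illustration, here's a minimal sketch (not from the announcement) of running an inference through the TensorFlow Lite API with the Edge TPU delegate; the model filename is a placeholder, and the per-OS delegate library names are assumptions based on Coral's documentation:
  # Sketch: inference with the tflite_runtime Interpreter and the Edge TPU delegate.
  import platform
  import numpy as np
  from tflite_runtime.interpreter import Interpreter, load_delegate

  # Delegate shared-library name per host OS (assumed names).
  delegate_lib = {
      'Linux': 'libedgetpu.so.1',
      'Darwin': 'libedgetpu.1.dylib',
      'Windows': 'edgetpu.dll',
  }[platform.system()]

  # 'model_edgetpu.tflite' is a placeholder for a model compiled with the Edge TPU compiler.
  interpreter = Interpreter(model_path='model_edgetpu.tflite',
                            experimental_delegates=[load_delegate(delegate_lib)])
  interpreter.allocate_tensors()

  input_details = interpreter.get_input_details()
  output_details = interpreter.get_output_details()

  # Feed a dummy tensor with the model's expected shape/dtype and read the raw output.
  dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
  interpreter.set_tensor(input_details[0]['index'], dummy)
  interpreter.invoke()
  print(interpreter.get_tensor(output_details[0]['index']))
If you use the edgetpu API instead (for example, its classification or detection engines), the same update requirement applies to the edgetpu module.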
By the way, all Coral products are available online from Google Coral distributor Gravitylink at the Gravitylink Store.
