Intelligent Mobile Projects with TensorFlow

Quick installation and example 

Perform the following steps to install and run an object detection inference:

  1. In the TensorFlow source root you created in Chapter 1, Getting Started with Mobile TensorFlow, clone the TensorFlow models repo, which contains the TensorFlow Object Detection API as one of its research models:
git clone https://github.com/tensorflow/models
  2. Install the matplotlib, pillow, lxml, and jupyter libraries. On Ubuntu or Mac, you can run:
sudo pip install pillow
sudo pip install lxml
sudo pip install jupyter
sudo pip install matplotlib
  3. Go to the models/research directory, then run the following command:
protoc object_detection/protos/*.proto --python_out=.

This will compile all the Protobufs in the object_detection/protos directory to make the TensorFlow Object Detection API happy. Protobuf, or Protocol Buffers, is an automated way to serialize and retrieve structured data, and it's lightweight and more efficient than XML. All you need to do is write a .proto file that describes the structure of your data, then use protoc, the proto compiler, to generate code that automatically parses and encodes the protobuf data. Notice that the --python_out parameter specifies the language of the generated code. In a later section of this chapter, when we discuss how to use a model in iOS, we'll use the protoc compiler with --cpp_out so the generated code is in C++. For complete documentation on Protocol Buffers, see https://developers.google.com/protocol-buffers.
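As a quick illustration of what such a .proto file looks like (this file and its message name are hypothetical, for illustration only, not part of the Object Detection API):

```protobuf
// detection.proto -- a hypothetical schema, for illustration only
syntax = "proto3";

package demo;

// A single detected object: a bounding box plus a label and a confidence score.
message Detection {
  float ymin = 1;
  float xmin = 2;
  float ymax = 3;
  float xmax = 4;
  string label = 5;
  float score = 6;
}
```

Running protoc detection.proto --python_out=. on this file would generate a detection_pb2.py module whose Detection class parses and encodes the data automatically, which is exactly what the command above does for every .proto file under object_detection/protos.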

  4. Still inside models/research, run the following commands to verify everything works:
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
python object_detection/builders/model_builder_test.py
  5. Run the jupyter notebook command and open http://localhost:8888 in a browser. Click the object_detection folder first, then select the object_detection_tutorial.ipynb notebook and run the demo cell by cell.
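The notebook's final cell draws boxes only for detections above a minimum confidence score. The core of that filtering step can be sketched in plain Python; the variable names and sample data here are illustrative, not the notebook's actual API:

```python
# Hypothetical detection output: parallel lists of boxes (ymin, xmin, ymax, xmax),
# class labels, and confidence scores, similar in shape to what the
# Object Detection API returns after running the graph on an image.
boxes = [(0.1, 0.2, 0.5, 0.6), (0.3, 0.3, 0.9, 0.8), (0.0, 0.0, 0.2, 0.1)]
labels = ["dog", "person", "kite"]
scores = [0.92, 0.45, 0.88]

MIN_SCORE = 0.5  # only keep detections at or above this confidence


def filter_detections(boxes, labels, scores, min_score):
    """Keep only the detections whose score meets the threshold."""
    return [(box, label, score)
            for box, label, score in zip(boxes, labels, scores)
            if score >= min_score]


kept = filter_detections(boxes, labels, scores, MIN_SCORE)
print([label for _, label, _ in kept])  # prints ['dog', 'kite']
```

Thresholding like this is why the demo images show only a handful of boxes even though the model emits scores for many candidate regions.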