Adding TensorFlow to your Objective-C iOS app
First, follow these steps to add TensorFlow with the image classification feature to your Objective-C iOS app (we'll start with a new app, but you can just skip the first step if you need to add TensorFlow to an existing app):
- In Xcode, click File | New | Project..., select Single View App, then Next, enter HelloTensorFlow as the Product Name, select Objective-C as the Language, then click Next and choose a location for the project before hitting Create. Close the project window in Xcode, since we'll reopen the project through its workspace file after adding the pod.
- Open a Terminal window, cd to the directory where your project is located, then create a new file named Podfile with the following content:
target 'HelloTensorFlow'
    pod 'TensorFlow-experimental'
- Run the command pod install to download and install the TensorFlow pod.
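Note that the two-line Podfile above uses an older CocoaPods syntax; more recent versions of CocoaPods expect the target to be written as a do ... end block. A minimal sketch in that form (the platform line is an assumption; adjust it to your own deployment target):

# Podfile sketch for newer CocoaPods versions (block form)
platform :ios, '9.0'   # assumed deployment target

target 'HelloTensorFlow' do
  pod 'TensorFlow-experimental'
end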
- Open the HelloTensorFlow.xcworkspace file in Xcode, then drag and drop the two files (ios_image_load.mm and ios_image_load.h) that handle image loading from the TensorFlow iOS sample directory tensorflow/examples/ios/simple to the HelloTensorFlow project folder.
- Drag and drop the two models, quantized_stripped_dogs_retrained.pb and dog_retrained_mobilenet10_224.pb, the label file dog_retrained_labels.txt, and a couple of test image files to the project folder—after that, you should see something like this:
Figure 2.6 Adding utility files, model files, label file and image files
- Rename ViewController.m to ViewController.mm, as we'll mix C++ and Objective-C code in this file to call the TensorFlow C++ API and process the image input and inference results. Then, before @interface ViewController, add the following #include statements and function prototype:
#include <fstream>
#include <queue>
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/public/session.h"
#include "ios_image_load.h"

NSString* RunInferenceOnImage(int wanted_width, int wanted_height, std::string input_layer, NSString *model);
- At the end of ViewController.mm, add the following code copied from tensorflow/examples/ios/simple/RunModelViewController.mm, with a slight change in the RunInferenceOnImage function so it accepts retrained models with different input sizes and input layer names:
namespace {
class IfstreamInputStream : public ::google::protobuf::io::CopyingInputStream {
...

static void GetTopN(
...

bool PortableReadFileToProto(const std::string& file_name,
...

NSString* FilePathForResourceName(NSString* name, NSString* extension) {
...

NSString* RunInferenceOnImage(int wanted_width, int wanted_height, std::string input_layer, NSString *model) {
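Of the helpers above, FilePathForResourceName is the simplest; it just resolves the path of a bundled resource such as a .pb model or .txt label file. A rough sketch of what it does (the sample's actual implementation also handles the missing-file case):

// Rough sketch: look up a resource in the app's main bundle by name and
// extension, returning its full path (nil if the file isn't bundled).
NSString* FilePathForResourceName(NSString* name, NSString* extension) {
    NSString* file_path = [[NSBundle mainBundle] pathForResource:name ofType:extension];
    return file_path;
}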
- Still in ViewController.mm, at the end of the viewDidLoad method, first add the code that creates a label to let users know what they can do with the app:
UILabel *lbl = [[UILabel alloc] init];
[lbl setTranslatesAutoresizingMaskIntoConstraints:NO];
lbl.text = @"Tap Anywhere";
[self.view addSubview:lbl];
Then add the constraints to center the label on the screen:
NSLayoutConstraint *horizontal = [NSLayoutConstraint constraintWithItem:lbl attribute:NSLayoutAttributeCenterX relatedBy:NSLayoutRelationEqual toItem:self.view attribute:NSLayoutAttributeCenterX multiplier:1 constant:0];
NSLayoutConstraint *vertical = [NSLayoutConstraint constraintWithItem:lbl attribute:NSLayoutAttributeCenterY relatedBy:NSLayoutRelationEqual toItem:self.view attribute:NSLayoutAttributeCenterY multiplier:1 constant:0];
[self.view addConstraint:horizontal];
[self.view addConstraint:vertical];
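If you prefer a more compact form, the same centering can be expressed with the layout anchor API available since iOS 9; this is just an equivalent sketch, not the code used in the rest of the chapter:

// Equivalent centering constraints using NSLayoutAnchor (iOS 9+)
[NSLayoutConstraint activateConstraints:@[
    [lbl.centerXAnchor constraintEqualToAnchor:self.view.centerXAnchor],
    [lbl.centerYAnchor constraintEqualToAnchor:self.view.centerYAnchor]
]];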
Finally, add a tap gesture recognizer to the view:
UITapGestureRecognizer *recognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapped:)];
[self.view addGestureRecognizer:recognizer];
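The selector above implies a matching handler method in ViewController.mm. A minimal shell for it (the name tapped: comes from the selector; the sender parameter follows the usual target-action convention, and the body is filled in with the code from the next two snippets):

// Handler invoked by the tap gesture recognizer registered above; the alert
// actions and alert controller shown in the following snippets go inside it.
- (void)tapped:(UITapGestureRecognizer *)sender {
}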
- In the tap handler, we first create two alert actions to allow the user to select a retrained model:
UIAlertAction* inceptionV3 = [UIAlertAction actionWithTitle:@"Inception v3 Retrained Model"
                                                      style:UIAlertActionStyleDefault
                                                    handler:^(UIAlertAction * action) {
    NSString *result = RunInferenceOnImage(299, 299, "Mul", @"quantized_stripped_dogs_retrained");
    [self showResult:result];
}];
UIAlertAction* mobileNet = [UIAlertAction actionWithTitle:@"MobileNet 1.0 Retrained Model"
                                                    style:UIAlertActionStyleDefault
                                                  handler:^(UIAlertAction * action) {
    NSString *result = RunInferenceOnImage(224, 224, "input", @"dog_retrained_mobilenet10_224");
    [self showResult:result];
}];
Then create a None action, add all three alert actions to an alert controller, and present it:
UIAlertAction* none = [UIAlertAction actionWithTitle:@"None"
                                               style:UIAlertActionStyleDefault
                                             handler:^(UIAlertAction * action) {}];
UIAlertController* alert = [UIAlertController alertControllerWithTitle:@"Pick a Model"
                                                               message:nil
                                                        preferredStyle:UIAlertControllerStyleAlert];
[alert addAction:inceptionV3];
[alert addAction:mobileNet];
[alert addAction:none];
[self presentViewController:alert animated:YES completion:nil];
- The inference result is shown in another alert controller, presented in the showResult method:
-(void) showResult:(NSString *)result {
    UIAlertController* alert = [UIAlertController alertControllerWithTitle:@"Inference Result"
                                                                   message:result
                                                            preferredStyle:UIAlertControllerStyleAlert];
    UIAlertAction* action = [UIAlertAction actionWithTitle:@"OK"
                                                     style:UIAlertActionStyleDefault
                                                   handler:nil];
    [alert addAction:action];
    [self presentViewController:alert animated:YES completion:nil];
}
The core code that calls TensorFlow is in the RunInferenceOnImage method, modified slightly from the TensorFlow iOS simple app. It first creates a TensorFlow session and a graph:
tensorflow::Session* session_pointer = nullptr;
tensorflow::Status session_status = tensorflow::NewSession(options, &session_pointer);
...
std::unique_ptr<tensorflow::Session> session(session_pointer);
tensorflow::GraphDef tensorflow_graph;
NSString* network_path = FilePathForResourceName(model, @"pb");
PortableReadFileToProto([network_path UTF8String], &tensorflow_graph);
tensorflow::Status s = session->Create(tensorflow_graph);
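The ellipsis above elides error handling on the returned tensorflow::Status values; a minimal sketch of the kind of check that belongs there (the message string is illustrative):

// Illustrative error check on the status returned by tensorflow::NewSession
if (!session_status.ok()) {
    std::string status_string = session_status.ToString();
    return [NSString stringWithFormat:@"Session create failed - %s", status_string.c_str()];
}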
It then loads the label file and the image file, and converts the image data to the appropriate Tensor data:
NSString* labels_path = FilePathForResourceName(@"dog_retrained_labels", @"txt");
...
NSString* image_path = FilePathForResourceName(@"lab1", @"jpg");
std::vector<tensorflow::uint8> image_data = LoadImageFromFile([image_path UTF8String], &image_width, &image_height, &image_channels);
tensorflow::Tensor image_tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({1, wanted_height, wanted_width, wanted_channels}));
auto image_tensor_mapped = image_tensor.tensor<float, 4>();
tensorflow::uint8* in = image_data.data();
float* out = image_tensor_mapped.data();
for (int y = 0; y < wanted_height; ++y) {
    const int in_y = (y * image_height) / wanted_height;
    ...
}
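The elided part of that loop scales the loaded image to the wanted size and normalizes each pixel value before writing it into the tensor. Roughly, it looks like the following sketch, where input_mean and input_std are assumed placeholders whose values must match the normalization the model was trained with:

// Sketch of the per-pixel conversion: nearest-neighbor scaling plus
// mean/std normalization from uint8 image data into the float tensor.
const float input_mean = 128.0f;  // assumed value; depends on the model
const float input_std = 128.0f;   // assumed value; depends on the model
for (int y = 0; y < wanted_height; ++y) {
    const int in_y = (y * image_height) / wanted_height;
    tensorflow::uint8* in_row = in + (in_y * image_width * image_channels);
    float* out_row = out + (y * wanted_width * wanted_channels);
    for (int x = 0; x < wanted_width; ++x) {
        const int in_x = (x * image_width) / wanted_width;
        tensorflow::uint8* in_pixel = in_row + (in_x * image_channels);
        float* out_pixel = out_row + (x * wanted_channels);
        for (int c = 0; c < wanted_channels; ++c) {
            out_pixel[c] = (in_pixel[c] - input_mean) / input_std;
        }
    }
}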
Finally, it calls the TensorFlow session's Run method with the image tensor data and the input layer name, gets the returned output, and processes it to find the top five results with confidence values greater than the threshold:
std::vector<tensorflow::Tensor> outputs;
tensorflow::Status run_status = session->Run({{input_layer, image_tensor}}, {output_layer}, {}, &outputs);
...
tensorflow::Tensor* output = &outputs[0];
const int kNumResults = 5;
const float kThreshold = 0.01f;
std::vector<std::pair<float, int> > top_results;
GetTopN(output->flat<float>(), kNumResults, kThreshold, &top_results);
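GetTopN is one of the helpers copied earlier from the sample; its actual signature takes an Eigen tensor map, but conceptually it keeps the highest-scoring (confidence, label index) pairs above the threshold using a min-heap and returns them best-first. A simplified, self-contained sketch of that logic:

#include <algorithm>
#include <queue>
#include <utility>
#include <vector>

// Simplified sketch of a GetTopN-style helper: keep the top num_results
// (confidence, index) pairs above threshold in a min-heap, then copy them
// out in descending order of confidence.
static void GetTopNSketch(const std::vector<float>& scores, int num_results,
                          float threshold,
                          std::vector<std::pair<float, int> >* top_results) {
    std::priority_queue<std::pair<float, int>,
                        std::vector<std::pair<float, int> >,
                        std::greater<std::pair<float, int> > > min_heap;
    for (int i = 0; i < (int)scores.size(); ++i) {
        if (scores[i] < threshold) continue;                     // skip low-confidence scores
        min_heap.push(std::make_pair(scores[i], i));
        if ((int)min_heap.size() > num_results) min_heap.pop();  // evict the smallest kept score
    }
    while (!min_heap.empty()) {                                  // smallest comes out first
        top_results->push_back(min_heap.top());
        min_heap.pop();
    }
    std::reverse(top_results->begin(), top_results->end());      // best result first
}

The indices in top_results map back to lines in dog_retrained_labels.txt, which is how the result string shown in the alert gets its labels and confidence values.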
In the rest of the book, we'll implement different versions of the RunInferenceOnxxx method to run different models with different inputs. So if you don't fully understand some of the previous code, don't worry; with a few more apps built, you'll feel comfortable writing your own inference logic for a new custom model.
Also, the complete iOS app, HelloTensorFlow, is in the book's source code repo.
Now, run the app in the Simulator or on an actual iOS device. First, you'll see the following message box asking you to select a retrained model:
Figure 2.7 Selecting different retrained models for inference
Then you will see the inference results after selecting a model:
Figure 2.8 Inference results based on different retrained models
Notice that the MobileNet retrained model runs much faster, about one second on an iPhone 6, than the Inception v3 retrained model, which takes about seven seconds on the same iPhone.