Adding TensorFlow to your Swift iOS app
Swift has become one of the most elegant modern programming languages since its birth in June 2014, so it will be both fun and useful for many developers to integrate TensorFlow into a Swift-based iOS app. The steps are similar to those for the Objective-C-based app, but with a few Swift-specific tricks. If you have already followed the Objective-C steps, you may find some of the steps here repetitive, but the complete steps are provided anyway for those who skip the Objective-C section and come straight to Swift:
- In Xcode, click File | New | Project..., select Single View App, then click Next, enter HelloTensorFlow_Swift as the Product Name, select Swift as the Language, then click Next and choose a location for the project before hitting Create. Close the project window in Xcode (since the project uses a pod, we'll open its workspace file later instead).
- Open a Terminal window, cd to where your project is located, then create a new file named Podfile with the following content:
target 'HelloTensorFlow_Swift'
    pod 'TensorFlow-experimental'
- Run the command pod install to download and install the TensorFlow pod.
- Open the HelloTensorFlow_Swift.xcworkspace file in Xcode, then drag and drop the two files (ios_image_load.mm and ios_image_load.h) that handle image loading from the TensorFlow iOS sample directory tensorflow/examples/ios/simple to the HelloTensorFlow_Swift project folder. When you add the two files to the project, you'll see a message box, as shown in the following screenshot, asking you if you would like to configure an Objective-C bridging header, which is needed for Swift code to call C++ or Objective-C code. So click the Create Bridging Header button:
Figure 2.9 Creating Bridging Header when adding C++ file
- Also drag and drop the two models, quantized_stripped_dogs_retrained.pb and dog_retrained_mobilenet10_224.pb, the label file dog_retrained_labels.txt, and a couple of test image files to the project folder; after that, you should see something like this:
Figure 2.10 Adding utility files, model files, label file and image files
- Create a new file named RunInference.h with the following code (one trick is that we have to wrap the RunInferenceOnImage method, defined in the next step, in an Objective-C class so that our Swift code can call it indirectly; otherwise, a build error will occur, because Swift cannot call C++ code directly):
#import <Foundation/Foundation.h>

@interface RunInference_Wrapper : NSObject
- (NSString *)run_inference_wrapper:(NSString *)name;
@end
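Once the bridging header includes RunInference.h (which we'll do in the last step), RunInference_Wrapper behaves like an ordinary class on the Swift side. As a quick preview, a call from Swift will look something like the following sketch; classifyWithInceptionV3 is just a hypothetical helper name for illustration, and the real calls appear in the tap handler later:

import Foundation

// RunInference_Wrapper comes from RunInference.h via the Objective-C bridging
// header, so Swift sees it as a normal NSObject subclass.
// classifyWithInceptionV3 is a hypothetical helper used only for illustration.
func classifyWithInceptionV3() -> String {
    // "Inceptionv3" selects the quantized_stripped_dogs_retrained model inside
    // run_inference_wrapper, as implemented in the next steps.
    return RunInference_Wrapper().run_inference_wrapper("Inceptionv3")
}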
- Create another file named RunInference.mm, which starts with the following includes and function prototype:
#include <fstream>
#include <queue>
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/public/session.h"
#include "ios_image_load.h"

NSString* RunInferenceOnImage(int wanted_width, int wanted_height, std::string input_layer, NSString *model);
- Add the following code to RunInference.mm to implement the RunInference_Wrapper defined in its .h file:
@implementation RunInference_Wrapper
- (NSString *)run_inference_wrapper:(NSString *)name {
    if ([name isEqualToString:@"Inceptionv3"])
        return RunInferenceOnImage(299, 299, "Mul", @"quantized_stripped_dogs_retrained");
    else
        return RunInferenceOnImage(224, 224, "input", @"dog_retrained_mobilenet10_224");
}
@end
- At the end of RunInference.mm, add the same methods as in ViewController.mm in the Objective-C section, which are slightly different from the methods in tensorflow/examples/ios/simple/RunModelViewController.mm:
namespace {
class IfstreamInputStream : public ::google::protobuf::io::CopyingInputStream {
...
static void GetTopN(
...
bool PortableReadFileToProto(const std::string& file_name,
...
NSString* FilePathForResourceName(NSString* name, NSString* extension) {
...
NSString* RunInferenceOnImage(int wanted_width, int wanted_height, std::string input_layer, NSString *model) {
- Now open ViewController.swift. At the end of the viewDidLoad method, first add the code that creates a label to let users know what they can do with the app:
let lbl = UILabel()
lbl.translatesAutoresizingMaskIntoConstraints = false
lbl.text = "Tap Anywhere"
self.view.addSubview(lbl)
Then add the constraints to center the label on the screen:
let horizontal = NSLayoutConstraint(item: lbl, attribute: .centerX, relatedBy: .equal, toItem: self.view, attribute: .centerX, multiplier: 1, constant: 0)
let vertical = NSLayoutConstraint(item: lbl, attribute: .centerY, relatedBy: .equal, toItem: self.view, attribute: .centerY, multiplier: 1, constant: 0)
self.view.addConstraint(horizontal)
self.view.addConstraint(vertical)
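If you prefer the more concise layout-anchor API available since iOS 9, the same centering can be written as in the sketch below; this is just an equivalent alternative to the NSLayoutConstraint initializers above, and the rest of the chapter doesn't depend on it:

// Equivalent centering using NSLayoutAnchor (iOS 9+). lbl must already have
// translatesAutoresizingMaskIntoConstraints set to false and be added as a subview.
NSLayoutConstraint.activate([
    lbl.centerXAnchor.constraint(equalTo: self.view.centerXAnchor),
    lbl.centerYAnchor.constraint(equalTo: self.view.centerYAnchor)
])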
Finally, add a tap gesture recognizer there:
let recognizer = UITapGestureRecognizer(target: self, action: #selector(ViewController.tapped(_:)))
self.view.addGestureRecognizer(recognizer)
- In the tap handler, we first add an alert action to allow the user to select the Inception v3 retrained model (a complete skeleton of the tapped(_:) handler is sketched after these snippets):
let alert = UIAlertController(title: "Pick a Model", message: nil, preferredStyle: .actionSheet)
alert.addAction(UIAlertAction(title: "Inception v3 Retrained Model", style: .default) { action in
    let result = RunInference_Wrapper().run_inference_wrapper("Inceptionv3")
    let alert2 = UIAlertController(title: "Inference Result", message: result, preferredStyle: .actionSheet)
    alert2.addAction(UIAlertAction(title: "OK", style: .default) { action2 in
    })
    self.present(alert2, animated: true, completion: nil)
})
Then create another action for the MobileNet retrained model, as well as a None action, before presenting the alert:
alert.addAction(UIAlertAction(title: "MobileNet 1.0 Retrained Model", style: .default) { action in
    let result = RunInference_Wrapper().run_inference_wrapper("MobileNet")
    let alert2 = UIAlertController(title: "Inference Result", message: result, preferredStyle: .actionSheet)
    alert2.addAction(UIAlertAction(title: "OK", style: .default) { action2 in
    })
    self.present(alert2, animated: true, completion: nil)
})
alert.addAction(UIAlertAction(title: "None", style: .default) { action in
})
self.present(alert, animated: true, completion: nil)
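For reference, the snippets in this step live inside the tapped(_:) method on ViewController that the gesture recognizer targets. A minimal skeleton of that method, with the two model actions collapsed into a comment, might look like this; note that with Swift 4 and later the method typically needs the @objc attribute so that #selector can refer to it:

@objc func tapped(_ sender: UITapGestureRecognizer) {
    let alert = UIAlertController(title: "Pick a Model", message: nil, preferredStyle: .actionSheet)
    // Add the Inception v3 and MobileNet actions shown above here; each one calls
    // RunInference_Wrapper().run_inference_wrapper(...) and presents the result
    // in a second alert controller.
    alert.addAction(UIAlertAction(title: "None", style: .default) { action in
    })
    self.present(alert, animated: true, completion: nil)
}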
- Open the HelloTensorFlow_Swift-Bridging-Header.h file, and add one line of code to it: #include "RunInference.h".
Now run the app in the simulator, and you'll see an alert controller asking you to select a model:
Figure 2.11 Selecting a retrained model for inference
And the inference results for different retrained models:
Figure 2.12 Inference results for different retrained models
There you go. Now that you know what it takes to add powerful TensorFlow models to your iOS apps, whether they're written in Objective-C or Swift, there's nothing to stop you from adding AI to your mobile apps, unless Android is your thing. But don't worry, we'll certainly take care of Android too.