Intelligent Mobile Projects with TensorFlow

Setting up TensorFlow on GPU-powered Ubuntu

One of the benefits of using a good deep learning framework such as TensorFlow is its seamless support for using a Graphics Processing Unit (GPU) in model training. Training a non-trivial TensorFlow-based model is a lot faster on a GPU than on a CPU, and currently NVIDIA offers the best and most cost-effective GPUs supported by TensorFlow; Ubuntu is also the best-supported OS for running NVIDIA GPUs with TensorFlow. You can easily buy a GPU for a few hundred bucks and install it on an inexpensive desktop running Ubuntu. You can also install an NVIDIA GPU on Windows, but TensorFlow's support for Windows is not as good as its support for Ubuntu.

To train the models deployed in the apps in this book, we use an NVIDIA GTX 1070, which you can purchase on Amazon or eBay for about $400. There's a good blog post by Tim Dettmers on which GPUs to use for deep learning (http://timdettmers.com/2017/04/09/which-gpu-for-deep-learning/). After you get such a GPU and install it on your Ubuntu system, and before you install GPU-enabled TensorFlow, you need to install NVIDIA CUDA 8.0 (or 9.0) and cuDNN (CUDA Deep Neural Network library) 6.0 (or 7.0), both of which are supported by TensorFlow 1.4.
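
Before you install anything else, it's worth confirming that Ubuntu actually sees the card. The following is a minimal, optional check: lspci works even before any NVIDIA driver is installed, while nvidia-smi only becomes available after the driver (which the CUDA packages in the steps below pull in) is installed:

# the GTX 1070 (or whatever card you installed) should appear as a VGA controller
lspci | grep -i nvidia
# once the NVIDIA driver is in place, this shows the GPU model, driver version, and memory usage
nvidia-smi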

An alternative to setting up your own GPU-powered Ubuntu machine with TensorFlow is to use TensorFlow in a GPU-enabled cloud service such as Google Cloud Platform's Cloud ML Engine (https://cloud.google.com/ml-engine/docs/using-gpus). Each option has its pros and cons. Cloud services generally bill by usage time, so if your goal is to train or retrain models to be deployed on mobile devices, meaning the models are not super complicated, and if you plan to do machine learning training for a long time, it'll be more cost-effective and satisfying to have your own GPU.

Follow these steps to install CUDA 8.0 and cuDNN 6.0 on Ubuntu 16.04 (you should be able to download and install CUDA 9.0 and cuDNN 7.0 in a similar way):

  1. Find the NVIDIA CUDA 8.0 GA2 release at https://developer.nvidia.com/cuda-80-ga2-download-archive, and make the selections shown in the following screenshot:

Fig 1.1 Getting ready to download CUDA 8.0 on Ubuntu 16.04

  2. Download the base installer as shown in the following screenshot:

Fig 1.2 Choosing the CUDA 8.0 installer file for Ubuntu 16.04

  3. Open a new Terminal and run the following commands (you'll also need to add the two export commands to your .bashrc file so that the environment variables take effect the next time you launch a new Terminal):
sudo dpkg -i /home/jeff/Downloads/cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb  
sudo apt-get update 
sudo apt-get install cuda-8-0 
export CUDA_HOME=/usr/local/cuda 
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
  4. Download NVIDIA cuDNN 6.0 for CUDA 8.0 from https://developer.nvidia.com/rdp/cudnn-download; you'll be asked to sign up (for free) for an NVIDIA Developer account before you can download it, as shown in the next screenshot (choose the highlighted cuDNN v6.0 Library for Linux):

Fig 1.3 Selecting cuDNN 6.0 for CUDA 8.0 on Linux

  5. Unzip the downloaded file, assuming it's under the default ~/Downloads directory; you'll see a folder named cuda, with two subfolders named include and lib64
  6. Copy the cuDNN lib64 and include files to the corresponding lib64 and include folders under CUDA_HOME (a quick verification sketch follows these commands):
sudo cp ~/Downloads/cuda/lib64/* /usr/local/cuda/lib64 
sudo cp ~/Downloads/cuda/include/cudnn.h /usr/local/cuda/include 
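
At this point, a quick sanity check in the Terminal can confirm that CUDA and cuDNN are installed where TensorFlow's build expects them. This is just a minimal verification sketch; the exact version numbers printed will depend on whether you installed CUDA 8.0 with cuDNN 6.0 or CUDA 9.0 with cuDNN 7.0:

# should report the CUDA release you installed, for example release 8.0
/usr/local/cuda/bin/nvcc --version
# should show CUDNN_MAJOR 6 (plus the minor and patch levels)
grep -A 2 "define CUDNN_MAJOR" /usr/local/cuda/include/cudnn.h
# confirms the two environment variables are set in the current shell
echo $CUDA_HOME $LD_LIBRARY_PATH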

Now we're ready to install GPU-enabled TensorFlow 1.4 on Ubuntu (the first two steps given here are the same as those described in the section Setting up TensorFlow on MacOS):

  1. Download the TensorFlow 1.4.0 source code (zip or tar.gz) from the TensorFlow releases page on GitHub: https://github.com/tensorflow/tensorflow/releases
  2. Uncompress the downloaded file and drag the folder to your home directory
  3. Download the Bazel installer bazel-0.5.4-installer-linux-x86_64.sh from https://github.com/bazelbuild/bazel/releases
  4. Open a new Terminal window, then run the following commands to install the tools and packages needed to build TensorFlow:
sudo pip install six numpy wheel  
cd ~/Downloads 
chmod +x bazel-0.5.4-installer-linux-x86_64.sh 
./bazel-0.5.4-installer-linux-x86_64.sh --user
  5. Build TensorFlow from source with GPU support enabled and generate a pip package file with the .whl file extension (when ./configure prompts you, enable CUDA support and enter your CUDA version, cuDNN version, and the GPU's compute capability, which is 6.1 for the GTX 1070):
cd ~/tensorflow-1.4.0 
./configure 
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package 
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg 
  6. Install the TensorFlow 1.4.0 GPU package:
sudo pip install --upgrade /tmp/tensorflow_pkg/tensorflow-1.4.0-cp27-cp27mu-linux_x86_64.whl 
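
Before launching a full IPython session, you can run a quick check in the Terminal to confirm the wheel was installed and that TensorFlow can see the GPU. This is an optional sanity-check sketch, assuming tf.test.is_gpu_available() is present in your TensorFlow 1.4 build:

# should list Version: 1.4.0
pip show tensorflow
# should print 1.4.0 followed by True
python -c "import tensorflow as tf; print(tf.__version__); print(tf.test.is_gpu_available())"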

Now, if all goes well, you can launch IPython and enter the following commands to see the GPU information TensorFlow is using:

In [1]: import tensorflow as tf 
 
In [2]: tf.__version__ 
Out[2]: '1.4.0' 
 
In [3]: sess=tf.Session() 
2017-12-28 23:45:37.599904: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 
2017-12-28 23:45:37.600173: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:  
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.7845 
pciBusID: 0000:01:00.0 
totalMemory: 7.92GiB freeMemory: 7.60GiB 
2017-12-28 23:45:37.600186: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1) 
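
As a final check, you can run a small computation in the same IPython session; because the session found /device:GPU:0, TensorFlow will place an op such as the matrix multiplication below on the GPU by default. This is just an illustrative sketch, and your output formatting may differ slightly:

In [4]: a = tf.constant([[1.0, 2.0], [3.0, 4.0]])

In [5]: b = tf.constant([[1.0, 1.0], [0.0, 1.0]])

In [6]: sess.run(tf.matmul(a, b))
Out[6]: 
array([[ 1.,  3.],
       [ 3.,  7.]], dtype=float32)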

Congratulations! You're now ready to train the deep learning models used in the apps in this book. Before we start having fun with our new toy, using it to train our cool models and then deploying and running them on mobile devices, let's first see what it takes to be ready for developing mobile apps.