GNU-Linux Rapid Embedded Programming

The C compiler

The C compiler is a program that translates C source code into a binary format that the CPU can understand and execute. This is the most basic way (and the most powerful one) to develop programs on a GNU/Linux system.

Despite this fact, most developers prefer high-level languages other than C due to the fact that the C language has no garbage collection, no object-oriented programming, and other perceived shortcomings, giving up part of the execution speed that a C program offers. However, if we have to recompile the kernel (the Linux kernel is written in C plus a small amount of assembly), develop a device driver, or write high-performance applications, then the C language is a must-have.

As we already saw in the preceding chapters, we can have a compiler and a cross-compiler, and until now, we've used the cross-compiler several times to recompile the kernel and the bootloaders. However, we can decide to use a native compiler too. In fact, native compilation may be easier but, in most cases, it is very time consuming. That's why it's really important to know the pros and cons.

Programs for embedded systems are traditionally written and compiled on a host PC using a cross-compiler for the target architecture. In other words, we use a compiler that can generate code for a foreign machine architecture, that is, a CPU instruction set different from the one of the compiler's host.

Native and foreign machine architecture

The developer kits shown in this book are ARM machines, while (most probably) our host machine is an x86 (that is, a normal PC). So, if we try to compile a C program on our host machine, the generated code cannot be used on an ARM machine and vice versa.
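The reason is that the compiler hard-wires the target instruction set into the generated binary. As a quick aside (a minimal sketch of ours, not part of the book's example code), GCC also exposes its target through predefined preprocessor macros, so the very same source can report which architecture it was built for:

```c
/* A sketch: GCC predefines architecture macros (__x86_64__, __arm__,
 * and so on), so the same source reports its compilation target. */
const char *arch_name(void)
{
#if defined(__x86_64__)
    return "x86-64";
#elif defined(__aarch64__)
    return "64-bit ARM";
#elif defined(__arm__)
    return "32-bit ARM";
#else
    return "another architecture";
#endif
}
```

Compiled with the native cc on a typical PC, this function returns "x86-64"; compiled with arm-linux-gnueabihf-gcc, the unchanged source returns "32-bit ARM".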

Let's verify it! Here's the classic Hello World program:

#include <stdio.h> 
 
int main() 
{ 
    printf("Hello World\n"); 
 
    return 0; 
} 

Now, we will compile it on our host machine using the following command:

$ make CFLAGS="-Wall -O2" helloworld
cc -Wall -O2 helloworld.c -o helloworld
Tip

You should notice here that we've used the make command instead of the usual cc command. This is a perfectly equivalent way to execute the compiler due to the fact that, even without a Makefile, the make command already knows how to compile a C program through its built-in implicit rules.

We can verify that this file is for the x86 (that is the PC) platform using the file command:

$ file helloworld
helloworld: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=0f0db5e65e1cd09957ad06a7c1b7771d949dfc84, not stripped
Tip

Note that the output may vary according to your host machine platform.

Now, we can just copy the program to one of the developer kits (for instance, the BeagleBone Black) and try to execute it:

root@bbb:~# ./helloworld
-bash: ./helloworld: cannot execute binary file

As we expected, the system refuses to execute the code generated for a different architecture!

On the other hand, if we use a cross-compiler for this specific CPU architecture, the program will run like a charm! Let's verify this by recompiling the code, this time making sure to specify that we wish to use the cross-compiler. So, delete the previously generated x86 executable (just in case) using the rm helloworld command and then recompile with the cross-compiler:

$ make CC=arm-linux-gnueabihf-gcc CFLAGS="-Wall -O2" helloworld
arm-linux-gnueabihf-gcc -Wall -O2 helloworld.c -o helloworld
Tip

Note that the cross-compiler's filename has a special meaning: the form is <architecture>-<platform>-<ABI>-<tool-name>. So, the filename arm-linux-gnueabihf-gcc means ARM architecture, Linux platform, GNU EABI hard-float (gnueabihf) ABI, and GNU C Compiler (gcc) tool.
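As a hedged aside (a sketch of ours based on GCC's predefined macros, not on the book's code), the "hard-float" part of the name is visible from inside the code too: arm-linux-gnueabihf-gcc predefines the __ARM_PCS_VFP macro, while a soft-float ARM toolchain does not:

```c
/* A sketch: the hard-float calling convention used by gnueabihf
 * toolchains is advertised through the __ARM_PCS_VFP macro. */
const char *float_abi(void)
{
#if defined(__ARM_PCS_VFP)
    return "ARM hard-float (gnueabihf)";
#elif defined(__arm__)
    return "ARM soft-float";
#else
    return "not ARM";
#endif
}
```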

Now, we will use the file command again to see whether the code is indeed generated for the ARM architecture:

$ file helloworld
helloworld: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=31251570b8a17803b0e0db01fb394a6394de8d2d, not stripped

If we transfer the file to the BeagleBone Black as before and try to execute it, we will get the following output:

root@bbb:~# ./helloworld
Hello World

This shows that the cross-compiler ensures that the generated code is compatible with the architecture we execute it on.

Tip

In reality, in order to have a perfectly functional binary image, we have to make sure that the library versions, the header files (including the ones related to the kernel), and the cross-compiler options match the target exactly or are, at least, compatible. For instance, we cannot execute code cross-compiled against glibc on a system that uses musl libc (or it may run in an unpredictable manner). In our case, the libraries and compilers are perfectly compatible, but in general, embedded developers must know exactly what they are doing. A common trick to avoid compatibility problems is static compilation but, in this case, we get huge binary files.
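To make the library-matching point concrete, here is a minimal sketch (ours, not from the book's code) using the glibc-specific gnu_get_libc_version() call; the fact that this API exists only in glibc is exactly the point, since the same source will not even build against a different libc such as musl:

```c
/* A sketch: query the C library version at runtime. The call is
 * glibc-specific, so this source only compiles against glibc. */
#include <gnu/libc-version.h>

const char *libc_version(void)
{
    return gnu_get_libc_version();
}
```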

Now, the question is, when should we use the compiler and when should we use the cross-compiler?

We should compile on an embedded system for the following reasons:

  • There are no compatibility issues, as all the target libraries will be available. Cross-compilation becomes difficult when the project needs libraries, because we must have them in the ARM format on the host PC; that is, we not only have to cross-compile the program but also its dependencies. And if the same dependency versions are not installed on the embedded system's rootfs, then good luck with troubleshooting!
  • It's easy and quick.

We should cross-compile for the following reasons:

  • We are working on a large codebase, and we don't want to waste too much time compiling the program on the target, which may take from several minutes to several hours (or it may even be impossible). This reason might be strong enough to overpower the other reasons in favor of compiling on the embedded system itself.
  • PCs nowadays have multiple cores, so the compiler can process more files simultaneously.
  • We are building a full Linux system from scratch.

In any case, here, I will show you an example of both native compilation and cross-compilation of a software package so that you can understand the differences between them.

Compiling a C program

As the first step, let's see how we can compile a C program. To keep it simple, we'll start by compiling a user-space program; in the upcoming sections, we will also compile some kernel-space code.

Knowing how to compile a C program can be useful because it may happen that a specific tool (most probably written in C) is missing from our distribution, or it's present but outdated. In both cases, we need to recompile it!

To show the differences between a native compilation and a cross-compilation, we will explain both methods. However, a word of caution for you here is that this guide is not exhaustive at all! In fact, the cross-compilation steps may vary according to the software packages we will cross-compile.

The package we will use is the PicoC interpreter. Every Real Programmer (TM) knows the C compiler, which is normally used to translate a C program into machine language, but (maybe) not all of them know that C interpreters exist too!

Tip

Actually, there are many C interpreters, but we'll focus our attention on PicoC due to the simplicity of cross-compiling it.

As we already know, an interpreter is a program that converts source code into executable code on the fly, without needing to parse the complete file and generate machine code beforehand.

This is quite useful when we need a flexible way to write brief programs to solve easy tasks. In fact, to fix a bug in the code and/or change the program's behavior, we simply have to change the source and then re-execute it, without any compilation at all. We just need an editor to change our code!

For instance, if we wish to read some bytes from a file, we can do this using a standard C program, but for this easy task, we can write a script for an interpreter too. The choice of the interpreter is up to the developer, and since we are C programmers, the choice is quite obvious. That's why we have decided to use PicoC.
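For instance, the "read some bytes from a file" task mentioned above boils down to the following plain C function (a hypothetical sketch of ours, not part of the PicoC sources); the same few lines could just as well be fed to an interpreter and re-run after every edit:

```c
/* A sketch: read the first n bytes of a file -- the kind of tiny task
 * where re-running an interpreted script beats recompiling. */
#include <stdio.h>

int read_first_bytes(const char *path, unsigned char *buf, int n)
{
    FILE *f = fopen(path, "rb");
    int got;

    if (!f)
        return -1;                          /* cannot open the file */
    got = (int)fread(buf, 1, (size_t)n, f); /* short reads are fine */
    fclose(f);
    return got;                             /* bytes actually read */
}
```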

Note

The PicoC tool is quite far from being able to interpret every C program! In fact, it implements only a fraction of the features of a standard C compiler. However, it can be used for several common and easy tasks. Consider PicoC an educational tool, and avoid using it in a production environment!

The native compilation

Well, as the first step, we need to download the PicoC source code from its repository at git://github.com/zsaleeba/picoc.git into our embedded system (the repository can be browsed at https://github.com/zsaleeba/picoc). This time, we decided to use the BeagleBone Black, and the command is as follows:

root@bbb:~# git clone git://github.com/zsaleeba/picoc.git
Note

A snapshot of the preceding repository can be found in the chapter_03/picoc/picoc-git.tgz file of the book's example code repository.

When finished, we can start compiling the PicoC source code using the following lines of code:

root@bbb:~# cd picoc/
root@bbb:~/picoc# make
Tip

If we get the following error during the compilation, we can safely ignore it:

 /bin/sh: 1: svnversion: not found

However, during the compilation, we also get the following error, which we cannot ignore:

platform/platform_unix.c:5:31: fatal error: readline/readline.h: No such file or directory
 #include <readline/readline.h>
                               ^
compilation terminated.
<builtin>: recipe for target 'platform/platform_unix.o' failed
make: *** [platform/platform_unix.o] Error 1

The bad news is that we've got an error! This is because the readline library is missing; hence, we need to install it to keep going. Recalling what we said in the Searching a software package section in Chapter 2, Managing the System Console, we can use the following command to discover which package holds the readline library:

root@bbb:~# apt-cache search readline

The command output is quite long, but if we carefully look at it, we can see the following lines:

libreadline5 - GNU readline and history libraries, run-time libraries
libreadline5-dbg - GNU readline and history libraries, debugging libraries
libreadline-dev - GNU readline and history libraries, development files
libreadline6 - GNU readline and history libraries, run-time libraries
libreadline6-dbg - GNU readline and history libraries, debugging libraries
libreadline6-dev - GNU readline and history libraries, development files

This is exactly what we need to know! The required package is named libreadline-dev.

Tip

In the Debian distribution, all library packages are prefixed by the lib string, while the -dev postfix marks the development version of a library package. Note also that we chose the libreadline-dev package, intentionally leaving the system free to install version 5 or 6 of the library. The development version of a library package holds all the files needed to compile software against the library itself and/or some documentation about the library functions. For instance, in the development version of the readline library package (that is, in the libreadline6-dev package), we can find the header and object files needed by the compiler. We can see these files using the following command:

 root@bbb:~# dpkg -L libreadline6-dev | \
 egrep '\.(so|h)'
 /usr/include/readline/rltypedefs.h
 /usr/include/readline/readline.h
 /usr/include/readline/history.h
 /usr/include/readline/keymaps.h
 /usr/include/readline/rlconf.h
 /usr/include/readline/rlstdc.h
 /usr/include/readline/chardefs.h
 /usr/lib/arm-linux-gnueabihf/libreadline.so
 /usr/lib/arm-linux-gnueabihf/libhistory.so

So, let's install it:

root@bbb:~# aptitude install libreadline-dev

When finished, we can relaunch the make command to finally compile our new C interpreter:

root@bbb:~/picoc# make
gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -c -o clibrary.o clibrary.c
...
gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -o picoc picoc.o table.o lex.o parse.o expression.o heap.o type.o variable.o clibrary.o platform.o include.o debug.o platform/platform_unix.o platform/library_unix.o cstdlib/stdio.o cstdlib/math.o cstdlib/string.o cstdlib/stdlib.o cstdlib/time.o cstdlib/errno.o cstdlib/ctype.o cstdlib/stdbool.o cstdlib/unistd.o -lm -lreadline

Well, now, the tool is successfully compiled as expected!

To test it, we can use the standard Hello World program again. Here is the code:

#include <stdio.h> 
 
int main() 
{ 
    printf("Hello World\n"); 
 
    return 0; 
} 

Now, we can directly execute it (that is, without compiling it) using our new C interpreter:

root@bbb:~/picoc# ./picoc helloworld.c
Hello World

An interesting feature of PicoC is that it can execute the C source file like a script. We don't need to specify a main() function as C requires and the instructions are executed one by one from the beginning of the file as a normal scripting language does.

Just to show it, we can use the following script that implements the Hello World program as a C-like script (note that the main() function is not defined):

printf("Hello World!\n"); 
return 0; 

If we put the preceding code into the helloworld.picoc file, we can execute it using the following lines of code:

root@bbb:~/picoc# ./picoc -s helloworld.picoc
Hello World!

Note that this time, we added the -s option to the command line in order to instruct the PicoC interpreter that we wish to use its scripting behavior.

The cross-compilation

Now, let's try to cross-compile the PicoC interpreter on the host system. However, before continuing, we have to point out that this is just an example of a possible cross-compilation, useful to show a quick and dirty way to recompile a program when native compilation is not possible. As already stated earlier, cross-compilation works perfectly for the bootloader and the kernel, while for user-space applications, we must ensure that all the libraries (and header files) used by the cross-compiler are perfectly compatible with the ones present on the target machine; otherwise, the program may not work at all! In our case, everything is perfectly compatible, so we can continue.

As we did earlier, we need to download the PicoC's source code using the same git command. Then, we have to enter the following command into the newly created picoc directory:

$ cd picoc/
$ make CC=arm-linux-gnueabihf-gcc
arm-linux-gnueabihf-gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -c -o picoc.o picoc.c
...
platform/platform_unix.c:5:31: fatal error: readline/readline.h: No such file or directory
compilation terminated.
<builtin>: recipe for target 'platform/platform_unix.o' failed
make: *** [platform/platform_unix.o] Error 1
Note

We specified the CC=arm-linux-gnueabihf-gcc command-line option to force the cross-compilation. However, as already stated, the cross-compilation commands may vary according to the compilation method used by the single software package.

The compilation fails because the readline header file is missing. However, this time, we cannot install it as before, since we need the ARM version (specifically, the armhf version) of this library, and our host system is a normal PC!

Tip

Actually, a way to install a foreign package in a Debian/Ubuntu distribution exists, but it's not a simple task, nor is it a topic of this book. Curious readers may take a look at the Debian/Ubuntu Multiarch page at https://help.ubuntu.com/community/MultiArch.

Now, we have to resolve this issue, and we have two possibilities:

  • We can try to find a way to install the missing package.
  • We can try to find a way to continue the compilation without it.

The former method is quite complex, since the readline library has other dependencies, and we might spend a lot of time trying to compile them all, so let's try the latter option.

Knowing that the readline library is just used to implement powerful interactive features (such as recalling a previous command line to re-edit it), and since we are not interested in the interactive usage of this interpreter, we can hope to drop it. Looking carefully at the code, we see that a USE_READLINE define exists. Changing the code as shown here resolves the issue, allowing us to compile the tool without readline support:

$ git diff
diff --git a/Makefile b/Makefile
index 6e01a17..c24d09d 100644
--- a/Makefile
+++ b/Makefile
@@ -1,6 +1,6 @@
CC=gcc
CFLAGS=-Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`"
-LIBS=-lm -lreadline
+LIBS=-lm
TARGET = picoc
SRCS = picoc.c table.c lex.c parse.c expression.c heap.c type.c \
diff --git a/platform.h b/platform.h
index 2d7c8eb..c0b3a9a 100644
--- a/platform.h
+++ b/platform.h
@@ -49,7 +49,6 @@
 # ifndef NO_FP
 # include <math.h>
 # define PICOC_MATH_LIBRARY
-# define USE_READLINE
 # undef BIG_ENDIAN
 # if defined(__powerpc__) || defined(__hppa__) || defined(__sparc__)
 # define BIG_ENDIAN
Note

The preceding patch can be found in the chapter_03/picoc/picoc-drop-readline.patch file of the book's example code repository.

The preceding output is in unified diff format. It means that, in the Makefile, the -lreadline option must be removed from the LIBS variable and that, in the platform.h file, the USE_READLINE definition must be removed (or commented out).

After all the changes are in place, we can try to recompile the package with the same command as we did earlier:

$ make CC=arm-linux-gnueabihf-gcc
arm-linux-gnueabihf-gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -c -o table.o table.c
...
arm-linux-gnueabihf-gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -o picoc picoc.o table.o lex.o parse.o expression.o heap.o type.o variable.o clibrary.o platform.o include.o debug.o platform/platform_unix.o platform/library_unix.o cstdlib/stdio.o cstdlib/math.o cstdlib/string.o cstdlib/stdlib.o cstdlib/time.o cstdlib/errno.o cstdlib/ctype.o cstdlib/stdbool.o cstdlib/unistd.o -lm

Great! We did it! Now, just to verify that everything is working correctly, we can simply copy the picoc file into our BeagleBone Black and test it as we did earlier.

Compiling a kernel module

As a special example of cross-compilation, we'll take a look at a very simple piece of code that implements a dummy module for the Linux kernel (it does nothing but print some messages on the console), and we'll try to cross-compile it.

Let's consider this following kernel C code of the dummy module:

#include <linux/module.h> 
#include <linux/init.h> 
 
/* This is the function executed during the module loading */ 
static int dummy_module_init(void) 
{ 
    printk("dummy_module loaded!\n"); 
    return 0; 
} 
 
/* This is the function executed during the module unloading */ 
static void dummy_module_exit(void) 
{ 
    printk("dummy_module unloaded!\n"); 
    return; 
} 
 
module_init(dummy_module_init); 
module_exit(dummy_module_exit); 
 
MODULE_AUTHOR("Rodolfo Giometti <giometti@hce-engineering.com>"); 
MODULE_LICENSE("GPL"); 
MODULE_VERSION("1.0.0"); 

Apart from some definitions relative to the kernel tree, the file holds two main functions, dummy_module_init() and dummy_module_exit(), and some special macros; in particular, module_init() and module_exit() mark these two functions as the entry and exit points of the current module (that is, the functions called at module loading and unloading).

Then, consider the following Makefile:

ifndef KERNEL_DIR 
$(error KERNEL_DIR must be set in the command line) 
endif 
PWD := $(shell pwd) 
CROSS_COMPILE = arm-linux-gnueabihf- 
 
# This specifies the kernel module to be compiled 
obj-m += module.o 
 
# The default action 
all: modules 
 
# The main tasks 
modules clean: 
    make -C $(KERNEL_DIR) ARCH=arm CROSS_COMPILE=$(CROSS_COMPILE) \ 
        SUBDIRS=$(PWD) $@ 
Note

The C code of the dummy module (dummy.c) and the Makefile can be found in the chapter_03/module directory of the book's example code repository.

OK, now, to cross-compile the dummy module on the host PC, we can use the following command:

$ make KERNEL_DIR=~/A5D3/armv7_devel/KERNEL/
make -C /home/giometti/A5D3/armv7_devel/KERNEL/ \
 SUBDIRS=/home/giometti/github/chapter_03/module modules
make[1]: Entering directory '/home/giometti/A5D3/armv7_devel/KERNEL'
CC [M] /home/giometti/github/chapter_03/module/dummy.o
Building modules, stage 2.
MODPOST 1 modules
CC /home/giometti/github/chapter_03/module/dummy.mod.o
LD [M] /home/giometti/github/chapter_03/module/dummy.ko
make[1]: Leaving directory '/home/giometti/A5D3/armv7_devel/KERNEL'
Tip

It's important to note that when a device driver is released as a separate package with a Makefile compatible with the Linux one, we can compile it natively too! However, even in this case, we need to install a kernel source tree on the target machine, and the sources must be configured in the same manner as the running kernel; otherwise, the resulting driver will not work at all! In fact, a kernel module will only load and run with the kernel it was compiled against.

The cross-compilation result is now stored in the dummy.ko file; in fact, we have:

$ file dummy.ko
dummy.ko: ELF 32-bit LSB relocatable, ARM, EABI5 version 1 (SYSV), BuildID[sha1]=ecfcbb04aae1a5dbc66318479ab9a33fcc2b5dc4, not stripped
Tip

The kernel module has been compiled for the SAMA5D3 Xplained but, of course, it can be cross-compiled for the other developer kits in a similar manner.

So, let's copy our new module to the SAMA5D3 Xplained using the scp command through the USB Ethernet connection:

$ scp dummy.ko root@192.168.8.2:
root@192.168.8.2's password:
dummy.ko 100% 3228 3.2KB/s 00:00 

Now, if we switch on the SAMA5D3 Xplained, we can use the modinfo command to get some information on the kernel module:

root@a5d3:~# modinfo dummy.ko
filename: /root/dummy.ko
version: 1.0.0
license: GPL
author: Rodolfo Giometti <giometti@hce-engineering.com>
srcversion: 1B0D8DE7CF5182FAF437083
depends: 
vermagic: 4.4.6-sama5-armv7-r5 mod_unload modversions ARMv7 thumb2 p2v8

Then, to load and unload it to and from the kernel, we can use the insmod and rmmod commands:

root@a5d3:~# insmod dummy.ko
[ 3151.090000] dummy_module loaded!
root@a5d3:~# rmmod dummy.ko
[ 3153.780000] dummy_module unloaded!

As expected, the dummy's messages have been displayed on the serial console.

Note

If we are using an SSH connection, we have to use the dmesg or tail -f /var/log/kern.log command to see the kernel's messages. The modinfo, insmod, and rmmod commands are explained in detail in the following section.