Understanding the Salt configuration
One of the basic ideas around the Salt configuration is that a configuration management system should require as little configuration as possible. A concerted effort has been made by the developers to assign defaults that will apply to as many deployments as possible, while still allowing users to fine-tune the settings to their own needs.
If you are just starting with Salt, you may not need to change anything. In fact, most of the time the Master configuration will be exactly what is needed for a small installation, while Minions will require almost no changes, if any.
Following the configuration tree
By default, most operating systems (primarily Linux-based) will store the Salt configuration in the /etc/salt/ directory. Unix distributions will often use the /usr/local/etc/salt/ directory instead, while Windows uses the C:\salt\ directory. These locations were chosen in order to follow the design most commonly used by the operating system in question, while still using a location that is easy to make use of. For the purposes of this book, we will refer to the /etc/salt/ directory, but you can go ahead and replace it with the correct directory for your operating system.
There are other paths that Salt makes use of as well. Various caches are typically stored in /var/cache/salt/, sockets are stored in /var/run/salt/, and State trees, Pillar trees, and Reactor files are stored in /srv/salt/, /srv/pillar/, and /srv/reactor/, respectively. However, as we will see later in the Exploring the SLS directories section, these are not exactly configuration files.
Inside the /etc/salt/ directory, there will generally be one of two files: master and minion (both will appear if you treat your Master as a Minion). When the documentation refers to the Master configuration, it generally means the /etc/salt/master file, and of course the Minion configuration refers to the /etc/salt/minion file. All of the configuration for these two daemons can technically go into their respective files.
However, many users find reasons to break out their configuration into smaller files. This is often for organizational reasons, but there is a practical reason too: because Salt can manage itself, it is often easier to have it manage smaller, templated files, rather than one large, monolithic file.
Because of this, the Master can also include any file with a .conf extension found in the /etc/salt/master.d/ directory (and the Minion likewise in the minion.d/ directory). This is in keeping with the numerous other services that maintain similar directory structures.
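For example, logging settings could live in their own drop-in file rather than in the main Master configuration. The file name below is illustrative; any option that is valid in /etc/salt/master may appear in such a file:

```yaml
# /etc/salt/master.d/logging.conf -- a hypothetical drop-in file.
# log_level and log_file are standard Master configuration options.
log_level: info
log_file: /var/log/salt/master
```

Because the Master simply merges these files into its configuration, smaller files like this are easy to template and manage with Salt itself.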
Other subsystems inside Salt also make use of the .d/ directory structure. Notably, Salt Cloud makes use of a number of these directories. The /etc/salt/cloud, /etc/salt/cloud.providers, and /etc/salt/cloud.profiles files can also be broken out into the /etc/salt/cloud.d/, /etc/salt/cloud.providers.d/, and /etc/salt/cloud.profiles.d/ directories, respectively. Additionally, it is recommended to store cloud maps in the /etc/salt/cloud.maps.d/ directory.
While other configuration formats are available elsewhere in Salt, the format of all of these core configuration files is YAML (except for cloud maps, which will be discussed in Chapter 5, Taking Salt Cloud to the Next Level). This is by necessity; Salt needs a stable starting point from which to configure everything else. Likewise, the /etc/salt/ directory is hard-coded as the default starting point to find these files, though it may be overridden using the --config-dir (or -C) option:
# salt-master --config-dir=/other/path/to/salt/
Inside the /etc/salt/ directory, there is also a pki/ directory, inside which is a master/ or minion/ directory (or both). This is where the public and private keys are stored.
The Minion will only have three files inside the /etc/salt/pki/minion/ directory: minion.pem (the Minion's private RSA key), minion.pub (the Minion's public RSA key), and minion_master.pub (the Master's public RSA key).
The Master will also keep its RSA keys, master.pem and master.pub, in the /etc/salt/pki/master/ directory. However, at least three more directories will also appear in here. The minions_pre/ directory contains the public RSA keys for Minions that have contacted the Master but have not yet been accepted. The minions/ directory contains the public RSA keys for Minions that have been accepted on the Master. And the minions_rejected/ directory will contain keys for any Minion that has contacted the Master, but been explicitly rejected.
There is nothing particularly special about these directories. The salt-key command on the Master is essentially a convenience tool for the user that moves public key files between directories, as requested. If needed, users can set up their own tools to manage the keys on their own, just by moving files around.
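The file-moving behavior described above can be sketched in a few lines of Python. This is an illustration, not Salt's actual implementation: it builds a mock pki/master/ tree in a temporary directory (the minions_pre/ and minions/ directory names match a typical Linux install) and "accepts" a key the same way salt-key does, by moving the file.

```python
# A minimal sketch of what accepting a Minion key amounts to:
# moving the public key file from minions_pre/ to minions/.
import os
import shutil
import tempfile

pki = tempfile.mkdtemp()
pre = os.path.join(pki, "minions_pre")
accepted = os.path.join(pki, "minions")
os.makedirs(pre)
os.makedirs(accepted)

# A pending Minion's public key lands in minions_pre/
with open(os.path.join(pre, "minion1"), "w") as f:
    f.write("-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----\n")

# "Accepting" the key is just a file move
shutil.move(os.path.join(pre, "minion1"), os.path.join(accepted, "minion1"))

print(os.listdir(accepted))  # ['minion1']
```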
As mentioned, Salt also makes use of other directory trees on the system. The most important of these are the directories that store SLS files, which are, by default, located in /srv/.
Of the SLS directories, /srv/salt/ is probably the most important. This directory stores the State SLS files and their corresponding top files. It also serves as the default root directory for Salt's built-in file server. There will typically be a top.sls file, and several accompanying .sls files and/or directories. The layout of this directory was covered in more detail in Chapter 1, Reviewing a Few Essentials.
A close second is the /srv/pillar/ directory. This directory maintains a copy of the static pillar definitions, if used. Like the /srv/salt/ directory, it will typically contain a top.sls file and several accompanying .sls files and directories. But while the top.sls file matches the format used in /srv/salt/, the accompanying .sls files are merely collections of key/value pairs. While they can use Salt's Renderer (discussed later, in The Renderer section), the resulting data does not need to conform to Salt's State compiler (also discussed later in this chapter, in the Plunging into the State compiler section).
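The difference is easy to see side by side. In this sketch, the top file uses the same targeting format as a State top file, while the pillar file it points to is nothing but key/value data; the file name and values are illustrative:

```yaml
# /srv/pillar/top.sls -- same format as the State top file
base:
  'web*':
    - webserver

# /srv/pillar/webserver.sls -- plain key/value pairs, hypothetical content
role: web
max_connections: 100
```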
Another directory which will hopefully find its way into your arsenal is the /srv/reactor/ directory. Unlike the others, there is no top.sls file in here. That is because the mapping is performed inside the Master configuration instead of the top system. However, the files in this directory do have a specific format, which will be discussed in detail in Chapter 4, Managing Tasks Asynchronously.
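That Master-side mapping looks something like the following sketch: an event tag is matched to one or more Reactor SLS files. The event tag shown is the standard Minion start event, but the SLS file name is an assumption for illustration:

```yaml
# In /etc/salt/master (or a master.d/*.conf drop-in file):
reactor:
  - 'salt/minion/*/start':
    - /srv/reactor/minion_start.sls
```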
Examining the Salt cache
Salt also maintains a cache directory, usually at /var/cache/salt/ (again, this may differ on your operating system). As before, both the Master and the Minion have their own directory for cache data. The Master cache directory contains more entries than the Minion cache, so we'll jump into that first.
Probably the first cache directory that you'll run across in everyday use is the jobs/ directory. In a default configuration, this contains all the data that the Master stores about the jobs that it executes.
This directory uses hashmap-style storage. That means that a piece of identifying information (in this case, a job ID, or JID), has been processed with a hash algorithm, and a directory or directory structure has been created using a part or all of the hash. In this case, a split hash model has been used, where a directory has been created using the first two characters of the hash, and another directory under it has been created with the rest of the hash.
The default hash type for Salt is MD5. This can be modified by changing the hash_type value in the Master configuration:
hash_type: md5
Keep in mind that hash_type is an important value that should be decided upon when first setting up a new Salt infrastructure, if MD5 is not the desired value. If it is changed (say, to SHA1) after an infrastructure has been using another value for a while, then any part of Salt that has been making use of it must be cleaned up manually. The rest of this book will assume that MD5 is used.
The JID is easy to interpret: it is a date and time stamp. For instance, a job ID of 20141203081456191706 refers to a job that was started on December 3, 2014, at 56 seconds and 191706 microseconds past 8:14 AM. The MD5 of that JID would be f716a0e8131ddd6df3ba583fed2c88b7. Therefore, the data that describes that job would be located at the following path:
/var/cache/salt/master/jobs/f7/16a0e8131ddd6df3ba583fed2c88b7
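The mapping from JID to cache path can be sketched in a few lines of Python. This is an illustration of the split hash model under the default MD5 hash_type, not Salt's actual implementation: the timestamp is decoded with strptime, and the path is built from the first two characters of the hash plus the remainder.

```python
# Decode a JID and derive its job cache path (split hash model, MD5).
import hashlib
from datetime import datetime

jid = "20141203081456191706"

# The JID is just a timestamp with microsecond precision
started = datetime.strptime(jid, "%Y%m%d%H%M%S%f")

# First two hex characters become a subdirectory; the rest follow below it
digest = hashlib.md5(jid.encode()).hexdigest()
path = "/var/cache/salt/master/jobs/{}/{}".format(digest[:2], digest[2:])

print(started)
print(path)
```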
In that directory, there will be a file called jid. This will, of course, contain the job ID. There will also be a series of files with a .p extension. These files are all serialized by msgpack.
First, there is a file called .minions.p (note the leading dot), which contains a list of the Minions that were targeted by this job. It will look something like this:
[
    "minion1",
    "minion2",
    "minion3"
]
The job itself will be described by a file called .load.p:
{
    "arg": [""],
    "fun": "test.ping",
    "jid": "20141203081456191706",
    "tgt": "*",
    "tgt_type": "glob",
    "user": "root"
}
There will also be one directory for each Minion that was targeted by the job, containing the return information for that job, for that Minion. Inside that directory will be a file called return.p that contains the return data, serialized by msgpack. Assuming that the job in question performed a simple test.ping, the return will look like the following:
{
    "fun": "test.ping",
    "fun_args": [],
    "id": "minion1",
    "jid": "20141203081456191706",
    "retcode": 0,
    "return": true,
    "success": true
}
Once Salt has started issuing jobs, another cache directory will show up, called minions/. This directory will contain one entry per Minion, with cached data about that Minion. Inside this directory are two files: data.p and mine.p.
The data.p file contains a copy of the Grains and Pillar data for that Minion. A (shortened) data.p file may look like the following:
{
    "grains": {
        "biosreleasedate": "01/09/2013",
        "biosversion": "G1ET91WW (2.51 )",
        "cpu_model": "Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz",
        "cpuarch": "x86_64",
        "os": "Ubuntu",
        "os_family": "Debian"
    },
    "pillar": {
        "role": "web"
    }
}
The mine.p file contains Salt Mine data. This is not covered in detail in this book but, in short, a Minion can be configured to cache the return data from specific commands in the cache directory on the Master, so that other Minions can look it up. For instance, if test.ping and network.ip_addrs have been configured, the contents of the mine.p file will look as follows:
{
    "network.ip_addrs": [
        "192.168.2.101"
    ],
    "test.ping": true
}
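The configuration that produces data like this lives on the Minion side, in the mine_functions option. The following sketch would cache the return data for the two functions shown above; the empty lists mean the functions are called with no arguments:

```yaml
# In /etc/salt/minion (or a minion.d/*.conf drop-in file):
mine_functions:
  test.ping: []
  network.ip_addrs: []
```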
In a default installation, Salt will keep its files in the /srv/salt/ directory. However, an external file server, by definition, maintains an external file store. For instance, the gitfs external file server keeps its files on a Git server, such as GitHub. However, it would be incredibly inefficient to ask the Salt Master to always serve files directly from the Git repository. So, in order to improve efficiency, a copy of the Git tree is stored on the Master.
The contents and layout of this tree will vary among the external file server modules. For instance, the gitfs module doesn't store a full directory tree as one might see in a normal Git checkout; it only maintains the information used to create that tree, using whatever branches are available. Other external file servers, however, may contain a full copy of the external source, which is updated periodically. The full path to this cache may look like this:
/var/cache/salt/master/gitfs/
where gitfs is the name of the file server module.
In order to keep track of file changes, a directory called hash/ will also exist inside the external file server's cache. Inside hash/, there will be one directory per environment (that is, base, dev, prod, and so on). Each of these will contain what looks like a mirror image of the file tree. However, each actual file name will be appended with .hash.md5 (or the appropriate hash name, if different), and the contents will be the value of the checksum for that file.
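The contents of such a checksum file can be sketched easily: it is just the MD5 digest of the corresponding source file. The snippet below is an illustration, not Salt's actual code, and the file names are assumptions; it builds a tiny tree in a temporary directory and writes the companion .hash.md5 file.

```python
# Produce a .hash.md5 companion file for a source file, as the
# external file server cache does.
import hashlib
import os
import tempfile

tree = tempfile.mkdtemp()
sls = os.path.join(tree, "top.sls")
with open(sls, "wb") as f:
    f.write(b"base:\n  '*':\n    - vim\n")

# The checksum is simply the MD5 of the file's contents
with open(sls, "rb") as f:
    checksum = hashlib.md5(f.read()).hexdigest()

# The cache stores it in a file named <filename>.hash.md5
with open(sls + ".hash.md5", "w") as f:
    f.write(checksum)

print(checksum)
```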
In addition to the file server cache, there will be another directory called file_lists/ that contains one directory per enabled file server. Inside that directory will be one file per environment, with a .p extension (such as base.p for the base environment). This file will contain a list of the files and directories belonging to that environment's directory tree. A shortened version might look like this:
{
    "dirs": [
        ".",
        "vim",
        "httpd"
    ],
    "empty_dirs": [],
    "files": [
        "top.sls",
        "vim/init.sls",
        "httpd/httpd.conf",
        "httpd/init.sls"
    ],
    "links": []
}
This file helps Salt with a quick lookup of the directory structure, without having to constantly descend into a directory tree.
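The benefit is straightforward: once the listing exists, checking whether a path belongs to an environment is a simple membership test, with no filesystem traversal. A minimal sketch (the helper function is hypothetical, not part of Salt's API):

```python
# Look up paths in a cached file list instead of walking directories.
file_list = {
    "dirs": [".", "vim", "httpd"],
    "files": [
        "top.sls",
        "vim/init.sls",
        "httpd/httpd.conf",
        "httpd/init.sls",
    ],
}

def in_env(path, listing):
    """Check whether a path exists in the cached environment listing."""
    return path in listing["files"] or path in listing["dirs"]

print(in_env("httpd/init.sls", file_list))  # True
```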
The Minion doesn't maintain nearly as many cache directories as the Master, but it does have a couple. The first of these is the proc/ directory, which maintains the data for active jobs on the Minion. It is easy to see this in action. From the Master, issue a sleep command to a Minion:
salt myminion test.sleep 300 --async
This will kick off a process on the Minion, which will wait for 300 seconds (5 minutes) before returning True to the Master. Because the command includes the --async flag, Salt will immediately return a JID to the user.
While this process is running, log into the Minion and take a look at the /var/cache/salt/minion/proc/ directory. There should be a file bearing the name of the JID. The unpacked contents of this file will look like the following:
{
    'arg': [300],
    'fun': 'test.sleep',
    'jid': '20150323233901672076',
    'pid': 4741,
    'ret': '',
    'tgt': 'myminion',
    'tgt_type': 'glob',
    'user': 'root'
}
This file will exist until the job is completed on the Minion. If you'd like, you can see the corresponding file on the Master. Use the hashutil.md5_digest function to find the MD5 value of the JID:
# salt myminion hashutil.md5_digest 20150323233901672076
The other directory that you are likely to see on the Minion is the extmods/ directory. If custom modules have been synced to the Minion from the Master (using the _modules, _states, and similar directories on the Master), they will appear here.
This is also easy to see in action. On the Master, create a _modules/ directory inside /srv/salt/. Inside this directory, create a file called mytest.py, with the following contents:
def ping():
    return True
Then, from the Master, use the saltutil module to sync your new module to a Minion:
salt myminion saltutil.sync_modules
After a moment, Salt will report that it has finished:
myminion:
    - modules.mytest
Log into the Minion and look inside /var/cache/salt/minion/extmods/modules/. There will be two files: mytest.py and mytest.pyc. If you look at the contents of mytest.py, you will see the custom module that you created on the Master. You will also be able to execute the mytest.ping function from the Master:
# salt myminion mytest.ping
myminion:
    True