Module 11: Salt
Salt is an automation management system providing configuration management and remote execution. It is centered around an ‘event bus’, which allows the construction of an event-driven architecture. Salt runs on a control server known as the Salt Master; managed systems are called minions.
Salt minions run the Salt Minion process (the agent), and ZeroMQ is used as the event bus. Salt is very flexible and actual setups vary: it can run masterless (managing the local device only), agentless (the managed system needs only SSH and Python), or with a proxy minion (a process that manages a device on its behalf via NETCONF, REST, or similar).
curl -o bootstrap-salt.sh -L https://bootstrap.saltstack.com
On a master;
sudo sh bootstrap-salt.sh -M
On a minion;
sudo sh bootstrap-salt.sh
A machine can run as both a master and a minion by running both commands. This is useful for lab testing.
Edit the /etc/salt/minion file on the minion server. Change the master: parameter in the file and restart the process with sudo service salt-minion restart.
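A minimal sketch of the relevant lines in /etc/salt/minion (the master address is illustrative);
master: 192.0.2.10
# id is optional and defaults to the hostname
id: desktop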
On the master, run sudo salt-key --list all. The key of the ‘desktop’ minion will be listed under ‘Unaccepted Keys:’. Run sudo salt-key --accept=desktop to accept the key.
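The key listing has this general shape (illustrative);
Accepted Keys:
Denied Keys:
Unaccepted Keys:
desktop
Rejected Keys: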
A module is a file containing executable code, such as Python functions. Salt comes with many modules. Specific functions are referenced as module_name.function_name. The general command syntax is;
salt [options] '<target>' <module>.<function> [arguments]
Shell-style globbing is used to determine which minions the command applies to. For example, to ping all minions over the event bus (not ICMP);
salt '*' test.ping
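Each responding minion returns True, for example;
desktop:
    True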
In this ‘ping’, the job request is sent as a ‘job’ event. The minions receiving the ‘job’ event generate a ‘ret’ event and send this to the master.
Salt events consist of a tag, which is a unique identifier for the event, and a dictionary, containing details of the job or the result.
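The bus can be watched with the state.event runner. An illustrative, abbreviated ‘ret’ event, showing the tag followed by the data dictionary;
sudo salt-run state.event pretty=True
salt/job/20240101120000123456/ret/desktop  {
    "cmd": "_return",
    "fun": "test.ping",
    "id": "desktop",
    "return": true
}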
Grains contain static information about the device; this information changes rarely, e.g. the hardware components and networking settings of a machine.
sudo salt '*' grains.ls
sudo salt '*' grains.get cpu_model
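Example output for the second command (the value is illustrative);
desktop:
    Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz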
The Salt Pillar system provides various data associated with minions. The default location is /srv/pillar. The Pillar Top File, top.sls, defines which minions have access to which Pillar data;
base:
  desktop:
    - desktop_vars
The file desktop_vars.sls contains variables for the ‘desktop’ minion;
course: JAUT
vendor: Juniper
The pillar data is generated when the minion starts and can be refreshed with the command sudo salt desktop saltutil.refresh_pillar.
sudo salt desktop pillar.ls
sudo salt desktop pillar.get course
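Given the pillar data above, the second command returns;
desktop:
    JAUT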
SaLt State (SLS) files are YAML files, first passed through Jinja2. Other renderers can be used, including pure Python for the most flexibility. The renderer is defined on the first line using ‘#!’ notation.
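For example, a minimal pure-Python state file (a sketch; ‘example_state’ is an arbitrary name) defines a run() function returning the same data structure the YAML renderer would otherwise produce;
#!py

def run():
    # Equivalent to declaring 'example_state: test.succeed_without_changes' in YAML
    return {'example_state': {'test.succeed_without_changes': []}}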
The Junos Proxy Minion allows execution of commands via NETCONF without having a full minion installed on the Junos device. The Junos Execution Module defines functions that can be executed on the device per request. The Junos Syslog Engine receives Junos events as syslog messages and sends them to the ZeroMQ bus. The Junos State Module defines functions that enforce a certain state.
Junos devices cannot run a full Salt minion. Instead, a proxy minion process runs elsewhere, either on the master itself or on another server, and manages the device on its behalf. PyEZ facts are stored as Salt grains.
One proxy minion process can only manage one Junos device. Each proxy minion uses about 100MB of RAM. Junos PyEZ and jxmlease must be installed on any server running a proxy minion.
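For example, on the server that will run the proxy minion processes (assuming pip installs into the Python environment Salt uses);
pip install junos-eznc jxmlease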
Salt engines are long-running processes managed by Salt. If an engine stops, Salt restarts it automatically. One use of Salt engines is to inject events into the message bus. Salt’s Reactor system gives Salt the capability to respond to events that occur on the message bus.
Define variables for the proxy minion to connect to the managed device;
/srv/pillar/top.sls
base:
  dev1:
    - proxy_data_dev1
/srv/pillar/proxy_data_dev1.sls
proxy:
  proxytype: junos
  host: ...
  username: ...
  password: ...
  port: 830
Tell the proxy minion where the master is in the /etc/salt/proxy config file on the server running the proxy minion processes.
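A minimal sketch of /etc/salt/proxy (the master address is illustrative);
master: 192.0.2.10
A proxy minion process can then be started for a specific device with sudo salt-proxy --proxyid=dev1 -d. As with a regular minion, its key must be accepted on the master with salt-key.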
salt.modules.junos is a module for interacting with Junos devices. For example, to display interface information;
sudo salt 'dev*' junos.cli "show interfaces ge-0/0/0"
sudo salt 'dev*' junos.rpc get-interface-information interface-name=ge-0/0/0 terse=True --out=json
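The junos.facts function displays the PyEZ facts gathered from the device, the same facts that are stored as grains;
sudo salt 'dev*' junos.facts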
A more robust method of configuring devices is to use the Salt State system and the Junos state module. Execution functions can be called from .sls files.
Salt deals with the state of managed systems in a declarative manner. Define the required state for managed devices, referencing state functions and providing parameters. Call the state.apply execution function to apply the declarative state to devices. The basic syntax for a state file is;
State name:
  module.function:
    - param_1: val_1
    - ...
salt.states.junos contains state functions for Junos. They are named similarly to the execution functions, but work differently: execution functions perform ad-hoc actions on one or more devices, whereas state functions enforce a declared state on a device.
The configuration file ‘ospf.conf’ is stored on the Salt file server, whose default root is /srv/salt. Salt runs a lightweight file server over the ZeroMQ bus. The configuration file is not processed with Jinja2.
/srv/salt/provision_ospf.sls
Provision OSPF:
  junos.install_config:
    - name: salt://configs/ospf.conf
    - diffs_file: /home/lab/ospf-{{ grains.id }}.diff
Run sudo salt 'dev*' state.apply provision_ospf to implement the declared state on the minions. The changes will be stored in .diff files on the proxy minions.
The State Top File defines which minions must have which states applied. It is distinct from the Pillar Top File. It is stored in /srv/salt/top.sls by default. A highstate run puts all minions into the states defined in the State Top File. Start a highstate run by calling state.apply with no arguments. Here is an example State Top File;
base:
  dev*:
    - provision_ospf
    - provision_services
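For example, to start a highstate run on all minions;
sudo salt '*' state.apply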
Pillar files can hold the per-minion data that these states consume, for example;
/srv/pillar/dns_data.sls
dns_servers:
  - ...
  - ...
/srv/pillar/interfaces_dev1.sls
interfaces:
  - name: ge-0/0/0
    unit: ...
    address: ...
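To check that a minion sees its data (assuming the Pillar Top File assigns interfaces_dev1 to dev1);
sudo salt dev1 pillar.get interfaces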
Salt can also validate operational state with the loop.until state function, which repeatedly calls an execution function until a condition is met or a timeout expires. This first approach uses the device configuration as the source of truth, checking that all the configured BGP sessions are up;
/srv/salt/validate_running_bgp_sessions.sls
{% set bgp_response = salt['junos.rpc']('get-bgp-summary-information', '', 'json') %}
{% for peer in bgp_response['rpc_reply']['bgp-information'][0]['bgp-peer'] %}
validate_bgp_session_state_with_{{ peer['peer-address'][0]['data'] }}:
  loop.until:
    - name: junos.rpc
    - condition: m_ret['rpc_reply']['bgp-information']['bgp-peer']['peer-state'] == 'Established'
    - period: 5
    - timeout: 20
    - m_args:
        - get-bgp-neighbor-information
    - m_kwargs:
        neighbor-address: {{ peer['peer-address'][0]['data'] }}
{% endfor %}
Apply the state with;
sudo salt 'dev*' state.apply validate_running_bgp_sessions
If someone has modified the configuration after the desired state was applied, removing some BGP neighbors, the above approach will not catch the problem: the removed sessions are absent from the device, rather than present and down. Instead, use the pillar data as the source of truth when checking sessions;
/srv/salt/validate_desired_bgp_sessions.sls
{% for peer in pillar["neighbors"] %}
validate_bgp_session_state_with_{{ peer['peer_ip'] }}:
  loop.until:
    - name: junos.rpc
    - condition: m_ret['rpc_reply']['bgp-information']['bgp-peer']['peer-state'] == 'Established'
    - period: 5
    - timeout: 20
    - m_args:
        - get-bgp-neighbor-information
    - m_kwargs:
        neighbor-address: {{ peer['peer_ip'] }}
{% endfor %}
/srv/pillar/bgp-1.sls
neighbors:
  - peer_ip: ...
  - peer_ip: ...