Ansible Automation on Ubuntu 20.04 in GNS3

Ansible is a useful tool for automating tasks on all kinds of systems, including servers and network devices. It can be a little tricky to set up, but it's fine once you get the hang of it. Today we'll install ansible on an Ubuntu 20.04 server and get it connecting to another Ubuntu server, a Rocky Linux server, and a Cisco IOSv router.

Topology

Ansible in GNS3

Our simple subnet of 172.16.0.0/24 holds an Ubuntu 20.04 server at the top at 172.16.0.1, acting as the ansible controller. The others are all managed nodes; they don't have ansible installed. Ansible uses python3 on Linux hosts to execute its commands, and since python3 isn't installed by default on Rocky Linux 8.5, I installed it. I also added a firewall rule on the Rocky node to allow SSH in, which is how ansible connects to nodes. The SSH server has been enabled on the Cisco IOSv router as well.
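For reference, installing python3 on Rocky Linux 8.5 is a one-liner (assuming the node can reach its package repositories):

dnf install -y python3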

Installation

This quick script, based on ansible's official installation instructions, will get it installed. Comments are inline:

#Create apt file for ansible
touch /etc/apt/sources.list.d/ansible.list

#Add ppa to above file
echo '
deb http://ppa.launchpad.net/ansible/ansible/ubuntu focal main
' >> /etc/apt/sources.list.d/ansible.list

#Add key for ppa
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367

#Update and install
apt-get update
apt-get install ansible

Ansible should now be installed!

Create inventory

The next thing to do is create an inventory file. Ansible keeps its config files in /etc/ansible, so we'll start by writing a config to /etc/ansible/hosts. This file supports multiple formats; we'll use yaml because I like yaml (don't judge me!). This simple script (it's just an echo command and a redirect) will set up the hosts file:

echo '

servers:
  hosts:
    ubuntu:
      ansible_host: 172.16.0.2
    rocky:
      ansible_host: 172.16.0.3
  vars:
    ansible_python_interpreter: /usr/bin/python3  #specify python3 for ubuntu and rocky


routers:
  hosts:
    cisco:
      ansible_host: 172.16.0.4
      ansible_network_os: cisco.ios.ios                     #OS is Cisco IOS!
      ansible_connection: ansible.netcommon.network_cli     #connect to IOS via CLI, no python
      ssh_args: -oKexAlgorithms=+diffie-hellman-group1-sha1 #key exchange algorithm to support older IOS

' > /etc/ansible/hosts

Verify inventory connections

The below command will parse your inventory (hosts) file and print it back out, which is a quick way to catch any syntax problems:

ansible-inventory --list -y

Then, once you have worked out any inventory file issues, "ping" your hosts. This is not an ICMP echo-request/echo-reply (commonly known as ping); ansible will actually try to log in to each host via SSH. Ansible assumes you have SSH public/private key pair authentication set up, but since this is a test environment in GNS3 we'll just pass the user and password to ansible on the command line:

ansible all -m ping --extra-vars "ansible_user=james ansible_password=james"
---

ubuntu | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
rocky | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
cisco | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

With this, we’ve verified that we have configured our hosts and can properly connect to them from the controller.
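From here you can already run ad-hoc commands against the Linux hosts. A quick sketch, using the "servers" group and the same throwaway credentials from the inventory above (we target the group rather than all, since the command module won't work against the Cisco router):

ansible servers -m command -a "uptime" --extra-vars "ansible_user=james ansible_password=james"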

Next we’ll try to make some changes with playbooks! Hope you liked this one.

Nagios Core – NCPA Agent on Ubuntu and Rocky Linux

Last time, we installed Nagios Core on Ubuntu Server 20.04. We saw how to add hosts and do a ping check to monitor network connectivity. Today we'll learn how to check CPU, memory and processes. Nagios provides a free agent to do this that we can install on most standard Linux distributions (Ubuntu, Red Hat, Debian, Amazon, etc.) as well as Windows and macOS.

This agent can monitor various services on its host machine, has a REST API and a nice web interface. It replaces the NRPE agent that came before it.

Topology

Topology in GNS3

Same as the previous post, our Ubuntu server at 172.16.0.1/24 will monitor the other two hosts, Rocky Linux and Cisco IOSv. We’ll be installing the NCPA agent on Rocky Linux 8.5 at 172.16.0.2.

Installation

On Rocky Linux 8.5, installation is pretty quick using the official Nagios repository. We’ll add the repository and install the package:

rpm -Uvh https://repo.nagios.com/nagios/8/nagios-repo-8-1.el8.noarch.rpm
yum install ncpa -y

Then we need to allow incoming connections through the firewall on tcp port 5693, which is the default port that NCPA uses:

firewall-cmd --zone=public --add-port=5693/tcp --permanent
firewall-cmd --reload

If everything is installed correctly, you should be able to reach the NCPA web interface using the Rocky Linux server's IP address. Make sure you use https and port 5693 –> https://<ip address or fqdn>:5693. The default token is "mytoken", which you'll want to change via the configuration file before putting this in production. But for now it should look like this:

NCPA Web Interface
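If you do want to change that token, it lives in NCPA's configuration file on the Rocky host. A hedged sketch (the path and setting name are from NCPA's documentation – double-check them on your version):

vi /usr/local/ncpa/etc/ncpa.cfg
---

[api]
community_string = MyNewSecretToken #replace the default "mytoken"

#Then restart the listener (service name may vary by NCPA version)
systemctl restart ncpa_listener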

Nagios Core Server Configuration

Heading back to the Ubuntu server where Nagios Core is installed, we’ll configure new services for “rocky” in /usr/local/nagios/etc/objects/hosts.cfg (we configured the host for “rocky” last time). The following script will set up new services to check via the NCPA agent on the Rocky server:

echo "

define service {
    host_name               rocky
    service_description     CPU Usage
    check_command           check_ncpa!-t 'mytoken' -P 5693 -M cpu/percent -w 20 -c 40 -q 'aggregate=avg'
    max_check_attempts      5
    check_interval          5
    retry_interval          1
    check_period            24x7
    notification_interval   60
    notification_period     24x7
    contacts                nagiosadmin
    register                1
}

define service {
    host_name               rocky
    service_description     Memory Usage
    check_command           check_ncpa!-t 'mytoken' -P 5693 -M memory/virtual -w 50 -c 80 -u G
    max_check_attempts      5
    check_interval          5
    retry_interval          1
    check_period            24x7
    notification_interval   60
    notification_period     24x7
    contacts                nagiosadmin
    register                1
}

define service {
    host_name               rocky
    service_description     Process Count
    check_command           check_ncpa!-t 'mytoken' -P 5693 -M processes -w 150 -c 200
    max_check_attempts      5
    check_interval          5
    retry_interval          1
    check_period            24x7
    notification_interval   60
    notification_period     24x7
    contacts                nagiosadmin
    register                1
}

" >> /usr/local/nagios/etc/objects/hosts.cfg

Then we need to download check_ncpa.py from the Nagios GitHub repository into /usr/local/nagios/libexec, which is the directory where Nagios check scripts live. check_ncpa.py is the plugin that queries the NCPA agent. This will install it:

wget --no-check-certificate https://raw.githubusercontent.com/NagiosEnterprises/ncpa/master/client/check_ncpa.py -P /usr/local/nagios/libexec/
chmod 755 /usr/local/nagios/libexec/check_ncpa.py #Make accessible and executable
sed -i 's/python/python3/g' /usr/local/nagios/libexec/check_ncpa.py #change 'python' to 'python3'
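Optionally, you can test the plugin by hand before wiring it into Nagios. This uses the same flags as the service definitions above (adjust the IP and token for your setup):

/usr/local/nagios/libexec/check_ncpa.py -H 172.16.0.2 -t 'mytoken' -P 5693 -M cpu/percent -q 'aggregate=avg'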

Then create a command in /usr/local/nagios/etc/objects/commands.cfg that defines the “check_ncpa” command:

echo "

define command {
    command_name    check_ncpa
    command_line    \$USER1\$/check_ncpa.py -H \$HOSTADDRESS\$ \$ARG1\$
}

" >> /usr/local/nagios/etc/objects/commands.cfg

Now reload nagios:

systemctl restart nagios

Remember, if you have any troubles, this command will probably help you out. It’s Nagios Core’s tool to check your config:

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

Now that we have the new services loaded, let’s see how they show up in the Nagios web interface.

Nagios Core Web Interface

We should be able to see that “CPU Usage”, “Memory Usage” and “Process Count” are now showing up as services by going to “Hosts” (left nav pane) –> rocky (click on name) –> “View Status Detail For This Host”.

Hope you liked!

Nagios Core 4.4.6 Monitoring on Ubuntu 20.04

Nagios is a time-tested network monitoring tool that network engineers and systems administrators alike have used since its creation in 1999 to monitor networks and alert engineers if something goes wrong. At some point around 2009, Nagios became Nagios Core so that the company could release some more products. Nagios Core remains free and open source, though.

It has many ways to monitor various network nodes, including ping checks, agents and SNMP. Today, we’ll see if we can just get the server and web interface installed, along with a couple basic ping checks.

Let’s get started!

Topology

Topology in GNS3

We’ll monitor across a simple subnet of 172.16.0.0/24. We’ll install Nagios Core on Ubuntu server at the top, and monitor the Rocky Linux and Cisco IOSv router at the bottom.

Installation

While it’s possible to install from the standard Ubuntu repositories, that version is very old and doesn’t seem to work very well out of the box. Building the most recent version from source works the best.

There are a number of steps, so I will include comments inline about what commands we’re entering. To install:

#Install necessary libraries
apt-get update
apt-get install -y autoconf gcc libc6 make wget unzip apache2 php libapache2-mod-php7.4 libgd-dev

#Download Nagios Core
cd /tmp
wget -O nagioscore.tar.gz https://github.com/NagiosEnterprises/nagioscore/archive/nagios-4.4.6.tar.gz

#Extract and enter directory
tar xzf nagioscore.tar.gz

#Compile
cd /tmp/nagioscore-nagios-4.4.6/
./configure --with-httpd-conf=/etc/apache2/sites-enabled
make all

#Set up users
make install-groups-users
usermod -a -G nagios www-data

#Install
make install
make install-daemoninit
make install-commandmode

#Install a sample script config
make install-config

#Install Apache config files
sudo make install-webconf
sudo a2enmod rewrite
sudo a2enmod cgi

#Create a user account for nagios
sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

#Restart apache
systemctl restart apache2

#Prepare for plugins installation. Need plugins to do anything at all with nagios
sudo apt-get install -y autoconf gcc libc6 libmcrypt-dev make libssl-dev wget bc gawk dc build-essential snmp libnet-snmp-perl gettext

#Download and extract plugins
cd /tmp
wget --no-check-certificate -O nagios-plugins.tar.gz https://github.com/nagios-plugins/nagios-plugins/archive/release-2.3.3.tar.gz
tar zxf nagios-plugins.tar.gz

#Compile plugins and install
cd /tmp/nagios-plugins-release-2.3.3/
./tools/setup
./configure
make
make install

That was a lot! Hopefully you got it installed ok. The nagios web interface can be accessed from http://<fqdn or ip>/nagios. It should look something like this:

Nagios Web Interface

Now that we have it installed, it’s time to set up some monitoring. You’d think this could be done via the web interface but it cannot. It needs to be done from nagios config files.

Monitoring Configuration

There are a number of configuration files; they should all be located in /usr/local/nagios/etc/objects. The nagios program itself is located at /usr/local/nagios/bin/nagios, but the installation process registered a service with systemd, so we'll mostly be using that. We need to create "hosts", which are the other servers, workstations or routers to monitor. We'll run this script, which I have commented in-line:

#Edit the main nagios config file at /usr/local/nagios/etc/nagios.cfg, add line to add a config file called "hosts.cfg"
echo "cfg_file=/usr/local/nagios/etc/objects/hosts.cfg" >> /usr/local/nagios/etc/nagios.cfg

#Write a config to ping hosts
echo "

define host{
    host_name                       rocky
    alias                           rocky
    address                         172.16.0.2
    check_command                   check-host-alive
    max_check_attempts              5
    check_period                    24x7
    notification_interval           30
    notification_period             24x7
}
define host{
    host_name                       cisco_iosv
    alias                           cisco_iosv
    address                         172.16.0.3
    check_command                   check-host-alive
    max_check_attempts              5
    check_period                    24x7
    notification_interval           30
    notification_period             24x7
}

" >> /usr/local/nagios/etc/objects/hosts.cfg

Before you restart nagios and apply the configuration, you can check for any errors using a built-in tool that nagios has:

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
---
<Edited for brevity>

Total Warnings: 2
Total Errors:   0

Things look okay - No serious problems were detected during the pre-flight check

If there are any errors, it should give you some advice on how to fix them. Otherwise, restart nagios:

systemctl restart nagios

After the checks run, you should see that your hosts are up by clicking on "Hosts" in the nav bar to the left:

Nagios is a sizeable piece of software and very extensible. There are many things you can do with it far beyond what we've done here. However, we've gotten the core functionality working – pinging stuff.

We’ll take a look at how to do more than that in the next post. Stay tuned!

Syslog Server on Ubuntu 20.04

Running a syslog server that collects logs from the various devices on your network is really simple with Ubuntu Server 20.04. Using the built-in software Rsyslog, you can quickly configure it as either a syslog client or a server. Since most network devices are able to send logs to an external server, you can quickly set up your Ubuntu server to act as a central log collection point.

What many folks don’t know is that syslog is actually a standard application-layer network protocol, not just software. It is defined in RFC 5424. It’s because of this standard protocol that network devices and servers alike are able to easily send and store logs. Without a standard protocol, it would be much more difficult to pull that off.

Let’s set up syslog on Ubuntu 20.04!

Topology

Topology in GNS3

The Ubuntu server at 10.0.0.1 will act as our syslog server while the other Ubuntu server and Cisco router will act as clients, sending their logs to the server.

Server Configuration

Since Rsyslog is already installed on Ubuntu (and others), there’s no installation. First we need to edit /etc/rsyslog.conf and uncomment these lines:

module(load="imudp")
input(type="imudp" port="514")

module(load="imtcp")
input(type="imtcp" port="514")

They will activate the server on TCP and UDP port 514 for incoming syslog messages. With just this configuration, the syslog server will work. But we'll make one more modification – we want each IP address to have its own file. Otherwise all messages get dumped into the main file at /var/log/syslog.

We’ll create a file at /etc/rsyslog.d/30-custom.conf and place a couple of simple rules in it:

if $fromhost-ip startswith '10.0.0.2' then /var/log/network/10.0.0.2.log
& stop
if $fromhost-ip startswith '10.0.0.3' then /var/log/network/10.0.0.3.log
& stop
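If you have more than a handful of clients, one rule per IP gets tedious. Rsyslog can also build the filename dynamically from the sender's address; here's a sketch using the legacy template syntax (not tested in this lab, so double-check it against the rsyslog docs):

$template NetFile,"/var/log/network/%fromhost-ip%.log"
if $fromhost-ip startswith '10.0.0.' then ?NetFile
& stop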

Create and change the ownership of the /var/log/network directory:

mkdir /var/log/network
chown syslog:adm /var/log/network

And restart Rsyslog:

systemctl restart rsyslog

And we’re done!

Client Configuration

For a Cisco IOSv device, the following command will turn on logging to a remote server:

logging host 10.0.0.1

For Ubuntu, just add the following line to /etc/rsyslog.conf (the @@ sends over TCP; a single @ would use UDP):

*.* @@10.0.0.1:514

And restart the service:

systemctl restart rsyslog

Verification

To verify that syslog messages are in fact going to the server, we need to initiate an event.

For Cisco IOSv, shutting/no shutting any interface will do the trick. In config mode on the interface, just issue these commands:

Router(config-if)# shut
Router(config-if)# no shut

While you might be tempted to go check /var/log/network/10.0.0.2.log right away for syslog messages, it might be worth it to do a packet capture first to see if logs are indeed leaving the Cisco router and heading for the syslog server.

A capture between the two shows the following lone packet when we issue those shut commands:

Syslog protocol in Wireshark

Then check /var/log/network/10.0.0.3.log on the syslog server to see if the message was properly written:

cat /var/log/network/10.0.0.3.log
---
Jan 27 13:48:17 10.0.0.3 45: *Jan 27 13:48:16.540: %LINK-3-UPDOWN: Interface GigabitEthernet0/1, changed state to down

Initiating an event on the Ubuntu client is as easy as stopping a service (I'm sure there are other ways too). I happen to have the Nginx web server running on this guy, so I'll stop it:

systemctl stop nginx

Then check the file on the syslog server:

cat /var/log/network/10.0.0.2.log
---
Jan 27 23:21:21 u20vm systemd[1]: Stopping A high performance web server and a reverse proxy server...
Jan 27 23:21:21 u20vm systemd[1]: nginx.service: Succeeded.
Jan 27 23:21:21 u20vm systemd[1]: Stopped A high performance web server and a reverse proxy server.

Hope you liked this one.

How To Ping Sweep With Python in GNS3

I debated for a long time whether to include coding in my networking blog. But since it seems the future of networking lies in code and automation, I believe it is time for some code.

Today we’ll look at how we can quickly ping sweep a subnet using python. If you are looking for some resources on learning python, you might check out this free ebook on python for network engineers or getting a course on Udemy for python. If you’d like me to create a python learning resource in the future, please let me know in the comments or contact form!

Topology

Topology in GNS3

We’ll use a simple IP subnet of 172.16.0.0/24 to sweep. Connected devices have been placed randomly at 172.16.0.51, 172.16.0.121 and 172.16.0.253. We’ll write the sweep python code on the Ubuntu 20.04 server at 172.16.0.1. Let’s get started!

Simple python ping sweep script

The basic logic of the code will be to loop through all hosts in the 172.16.0.0/24 subnet, from 172.16.0.1 to 172.16.0.254 (.0 and .255 are the network and broadcast addresses, so no need to ping them) and ping each IP address once. If it responds, we’ll print to the CLI that it worked.

We'll use two modules, ipaddress and subprocess. ipaddress is a handy tool for working with IP addresses and networks. Knowing where to start and stop in the loop is relatively simple with a /24 subnet, but what if it were 172.16.0.0/19? Just incrementing the fourth octet by 1 each time won't work – you'd hit 172.16.0.256, which isn't a valid IP address. That's where ipaddress helps out. subprocess lets us call the ping command from python.
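A quick interactive sketch of how ipaddress handles that trickier /19 case:

>>> import ipaddress
>>> net = ipaddress.ip_network('172.16.0.0/19')
>>> net.num_addresses
8192
>>> list(net.hosts())[0]
IPv4Address('172.16.0.1')
>>> list(net.hosts())[-1]
IPv4Address('172.16.31.254')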

Here’s our code:

import ipaddress
import subprocess

mynet = ipaddress.ip_network('172.16.0.0/24') #create an ipaddress object for 172.16.0.0/24

for host in mynet.hosts():                    #loop through each host of 172.16.0.0/24
    host = str(host)                          #change from ipaddress object to string to hand to ping command
    proc = subprocess.run(                    #use subprocess to call ping command, split across lines because it's long
        ['ping', host, '-c', '1'],            #calling ping here, putting in host to ping
        stderr=subprocess.DEVNULL,            #silence ping command errors
        stdout=subprocess.DEVNULL             #silence ping command output
        )
    if  proc.returncode == 0:                 #return code of 0 from ping command means it got a reply
        print(f'{host} is alive!')            #say this host is alive if we got a reply

Hopefully this is pretty straightforward. The magic is happening in a couple spots. The first is with this line:

mynet = ipaddress.ip_network('172.16.0.0/24')

This creates an “object” that holds the network we’re working with. This object has super powers, one of them is visible in this line:

for host in mynet.hosts():

It lets us move through the hosts of the subnet, from 172.16.0.1 to 172.16.0.254; on each pass, the next host IP is assigned to host. We can then hand host to the ping command.

The second spot with magic is here:

    proc = subprocess.run( 
        ['ping', host, '-c', '1'],
        stderr=subprocess.DEVNULL, 
        stdout=subprocess.DEVNULL 
        )

This is just spawning another process, ping, from python.

When we run the script, we should see all the IP addresses that are alive!

python3 ping_sweep.py #might be python or python3 based on your OS

172.16.0.1 is alive!
172.16.0.51 is alive!
172.16.0.121 is alive!
172.16.0.253 is alive!

There's only one problem – the script takes quite a while to run. The issue is that each time the ping command runs, python waits for it to finish before moving to the next host. This is known as "blocking" – the script comes to a halt while it waits for each ping process to finish. A common /24-sized subnet of 254 hosts takes a good while to complete.

What if we could ping them all at the same time, or close to it? Well, this leads us into the dark, dangerous world of multi-threading, parallel processing, multiprocessing, and asynchronous processing. Even the words are ominous-sounding. But don’t worry, it’s not so bad with the help of a handy module called asyncio.

Super-charged ping sweep with asyncio

The issue we're faced with is that we need to do multiple things at once. There are many ways to solve this problem, and some people spend their whole careers in this complex field. Recently though, python's asyncio module has been getting popular because it's relatively easy to work with and not so terribly complicated compared to the alternatives. It has been part of the standard library since python 3.4, with the async/await syntax arriving in 3.5.

Here’s the same ping sweep, this time written using the python asyncio module:

import ipaddress
import asyncio

async def ping(host):                              #add the "async" keyword to make a function asynchronous
    host = str(host)                               #turn ip address object to string
    proc = await asyncio.create_subprocess_shell(  #asyncio can smoothly call subprocess for you
            f'ping {host} -c 1',                   #ping command
            stderr=asyncio.subprocess.DEVNULL,     #silence ping errors
            stdout=asyncio.subprocess.DEVNULL      #silence ping output
            )
    stdout,stderr = await proc.communicate()       #get info from ping process
    if  proc.returncode == 0:                      #if process code was 0
        print(f'{host} is alive!')                 #say it's alive!

loop = asyncio.get_event_loop()                    #create an async loop
tasks = []                                         #list to hold ping tasks

mynet = ipaddress.ip_network('172.16.0.0/24')      #ip address module
for host in mynet.hosts():                         #loop through subnet hosts
    task = ping(host)                              #create async task from function we defined above
    tasks.append(task)                             #add task to list of tasks

tasks = asyncio.gather(*tasks)                     #some magic to assemble the tasks
loop.run_until_complete(tasks)                     #run all tasks (basically) at once

No denying it, this is more complicated, and it might look a bit foreign even if you're familiar with python. The key thing here is that we define a single ping task in an async function. Then, when we loop through the subnet hosts, we create a task from that function instead of running it on the spot. At the end we call the asyncio module to gather up the tasks and run them all asynchronously, which has the effect of appearing to run them all at once.

Also note that there’s no subprocess module here. Asyncio has built-in subprocess management (check the documentation here), so no need for the standard subprocess module.
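As a side note, on python 3.7+ the manual loop management at the bottom can be replaced with asyncio.run(). A sketch reusing the same ping() function defined above:

import asyncio
import ipaddress

async def main():
    mynet = ipaddress.ip_network('172.16.0.0/24')                  #same subnet as before
    await asyncio.gather(*(ping(host) for host in mynet.hosts()))  #gather and await all pings

asyncio.run(main())                                                #creates and closes the event loop for you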

While the output is the same, you’ll notice this takes about a second to run compared to minutes for the first script:

python3 ping_sweep.py

172.16.0.1 is alive!
172.16.0.51 is alive!
172.16.0.121 is alive!
172.16.0.253 is alive!

Please let me know in the comments if you want to see more content like this!

iperf2 vs iperf3: What’s the difference?

At first glance, you might be tempted to use iperf3 simply because it is one more than iperf2 (don't worry, I'm guilty of this crime as well). It's not an unfair assumption that iperf3 is the most recent version of the software, given the name. It's common for two versions of software to exist in parallel so the new one can take hold while the older version slowly dies away – Python2 and Python3 come to mind. That is not the case with iperf, however.

I recently wrote a post on how to use iperf3 to test bandwidth. Shortly after that, one of the authors of iperf2, Bob McMahon, reached out to me. He pointed out that iperf2 is very much actively developed, with some cool new features added recently. Under the surface they are very different projects, maintained by different teams with different goals.

Today we’ll take a look at some of the differences between the two.

Topology

Ubuntu 20.04 and Rocky Linux 8.5 VM’s in GNS3

We have a really basic topology here, Ubuntu 20.04 and Rocky Linux 8.5 connected on a single link with IP subnet 10.0.0.0/30. Both VM’s have iperf2 and iperf3 installed.

Bandwidth Test

For a bandwidth test, the two are almost identical. You can perform a bandwidth test using either with the same commands. For this test, the Ubuntu VM will be the client, and Rocky the server. Start the server on Rocky like this:

iperf -s

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------

And from Ubuntu perform a test like this:

iperf -c 10.0.0.2
------------------------------------------------------------
Client connecting to 10.0.0.2, TCP port 5001
TCP window size:  238 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.1 port 36528 connected with 10.0.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.90 GBytes  1.63 Gbits/sec

These commands work with either iperf2 or iperf3; however, it should be noted that you can't use an iperf2 client with an iperf3 server, or vice-versa. They also use different TCP ports by default. Even if you use an iperf3 client against an iperf2 server and manually set the TCP port to match, you will get an error. They are not compatible:

iperf3 -c 10.0.0.2 -p 5001
iperf3: error - received an unknown control message

Supported Operating Systems

iperf2 is the clear winner here, primarily because it has up-to-date Windows packages available for easy download right on the sourceforge page. I avoid Windows when I can, but it has a tendency to be unavoidable due to its sheer installation base. iperf3 apparently had some unofficial builds a while back, but nothing officially supported. You'll need to compile it yourself to run it on Windows, which is an inconvenience at best.

iperf2 downloads page

For Linux, many operating systems come with iperf2 preinstalled, Ubuntu 20.04 is one such example. iperf3 is just a command away though, with package managers. For macOS, the Homebrew package manager can quickly get you iperf2 or iperf3.

Feature: iperf3 authentication (not encryption)

Description of authentication features in iperf3

iperf3 supports authenticating clients to the server using public key/private key as well as a users file. I decided to try it out. To avoid a hassle I just used the exact commands they provided in the man file. You first generate a public key and private key on the server:

openssl genrsa -des3 -out private.pem 2048
openssl rsa -in private.pem -outform PEM -pubout -out public.pem
openssl rsa -in private.pem -out private_not_protected.pem -outform PEM

Then create a “credentials.csv” file with hashed passwords. The following commands will get a hashed password for you:

S_USER=james S_PASSWD=james
echo -n "{$S_USER}$S_PASSWD" | sha256sum | awk '{ print $1 }'
----
0b0c98028105e9e4d3f100280eac29bba90af614d1c75612729228e4d160c601 #This is the hash of "james"

Then create a “credentials.csv” file that looks like this:

username,sha256
james,0b0c98028105e9e4d3f100280eac29bba90af614d1c75612729228e4d160c601

Now start the server:

iperf3 -s --rsa-private-key-path ./private_not_protected.pem --authorized-users-path ./credentials.csv

Then from the client, copy the public key over:

scp james@10.0.0.1:public.pem .

Then run the client:

iperf3 -c 10.0.0.1 --rsa-public-key-path ./public.pem --username james

You’ll be asked for the password. If you get it right, the server will display a message that authentication succeeded:

-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Authentication successed for user 'james' ts 1639396545
Accepted connection from 10.0.0.2, port 32784
[  5] local 10.0.0.1 port 5201 connected to 10.0.0.2 port 32786
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   194 MBytes  1.63 Gbits/sec                  
[  5]   1.00-2.00   sec   204 MBytes  1.71 Gbits/sec

Feature: iperf2 isochronous mode

One of the coolest features of iperf2 is its "isochronous" option, which is designed to simulate video streaming network traffic. You can hear Bob McMahon explain it himself in his youtube video on this feature.

Using the parameters and commands he describes in his video, we'll run a test. The Ubuntu server will be the iperf2 server:

iperf -s -e -i 1

Then on Rocky Linux we’ll run the client test:

[james@localhost ~]$ iperf -c 10.0.0.1 -i 1 --isochronous=60:40m,10m
------------------------------------------------------------
Client connecting to 10.0.0.1, TCP port 5001 with pid 1640
UDP isochronous: 60 frames/sec mean=40.0 Mbit/s, stddev=10.0 Mbit/s, Period/IPG=16.67/0.005 ms
TCP window size:  340 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.2 port 49150 connected with 10.0.0.1 port 5001 (ct=1.44 ms)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
[  3] 0.00-1.00 sec   214 MBytes  1.79 Gbits/sec  1708/0          0       67K/562 us  398346.93
[  3] 1.00-2.00 sec   217 MBytes  1.82 Gbits/sec  1738/0        230      145K/608 us  374676.21
[  3] 2.00-3.00 sec   205 MBytes  1.72 Gbits/sec  1640/0        427      142K/583 us  368710.26
[  3] 3.00-4.00 sec   212 MBytes  1.78 Gbits/sec  1697/0        575      118K/920 us  241770.85
[  3] 4.00-5.00 sec   200 MBytes  1.68 Gbits/sec  1599/0        371      134K/853 us  245702.38
[  3] 5.00-6.00 sec   200 MBytes  1.68 Gbits/sec  1598/0        423      117K/529 us  395941.50

On the server we get our output:

james@u20vm:~$ iperf -s -e -i 1
------------------------------------------------------------
Server listening on TCP port 5001 with pid 3045
Read buffer size:  128 KByte
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  4] local 10.0.0.1 port 5001 connected with 10.0.0.2 port 49150
[ ID] Interval            Transfer    Bandwidth       Reads   Dist(bin=16.0K)
[  4] 0.0000-1.0000 sec   213 MBytes  1.79 Gbits/sec  4631    503:1500:1008:577:276:191:138:438
[  4] 1.0000-2.0000 sec   217 MBytes  1.82 Gbits/sec  4018    570:838:812:502:255:231:164:646
[  4] 2.0000-3.0000 sec   204 MBytes  1.71 Gbits/sec  5074    590:1537:1637:511:316:152:115:216
[  4] 3.0000-4.0000 sec   212 MBytes  1.78 Gbits/sec  3924    599:805:717:464:266:264:246:563
[  4] 4.0000-5.0000 sec   200 MBytes  1.68 Gbits/sec  3876    575:953:672:462:258:242:188:526
[  4] 5.0000-6.0000 sec   200 MBytes  1.68 Gbits/sec  4046    656:1040:687:476:258:242:238:449

Unfortunately, the version of iperf2 available in the Ubuntu 20.04 repositories (2.0.13) doesn't support the isochronous TCP mode mentioned in the video. You would need to compile from source or use Windows for that. A newer version will be included (it probably already has been by the time you're reading this) in Ubuntu 22.04 LTS.

Various smaller differences

There are many other spots where iperf2 and iperf3 differ.

  • iperf2 supports an "enhanced output mode" via -e that is totally revamped (we used it above in the isochronous section).
  • iperf3 supports JSON output using the -J option (quick example below).
  • iperf2 supports a bidirectional test, which runs tests from the client and server simultaneously, using -d (also shown below).
  • iperf2 uses a multi-threaded architecture, while iperf3 is single-threaded. To be honest, I haven't seen any way this actually affects the performance of the application. I'd be really curious if anyone has some input on this.
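For reference, here's roughly what those two options look like on the command line, assuming a matching server is already running on 10.0.0.2 as in the earlier tests:

#iperf3: write machine-readable results to a file
iperf3 -c 10.0.0.2 -J > results.json

#iperf2: run a bidirectional test from the client
iperf -c 10.0.0.2 -d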

I hope this was helpful, and I hope I did both of these cool programs a small amount of justice. I'm really curious to see if anyone has any other input or differences they know about. Please feel free to comment or reach out directly.

How To Install Free Range Routing (FRR) on Ubuntu 20.04 and Rocky Linux 8.5

The latest version of my favorite routing protocol software, Free Range Routing (FRR) 8.1, was recently released on November 9th.

Free Range Routing is a fork of the Quagga project that improves upon it and adds lots more features and new protocols. My favorite addition is EIGRP, which was a Cisco proprietary protocol until Cisco released a draft RFC in 2013. Free Range Routing makes it easy to spin up a Linux router and exchange routes via EIGRP. And since Cisco routers speak EIGRP, you can exchange routes with them too! Today we'll just exchange routes between Ubuntu 20.04 and Rocky Linux 8.5 via EIGRP.

Topology

Ubuntu and Rocky Linux in GNS3

We have a simple network here, with Ubuntu and Rocky Linux VMs acting as IP routers. Without added routes, Ubuntu does not know about 172.16.0.0/24, and Rocky does not know about 192.168.0.0/24. EIGRP can educate them. I should mention – each of the Alpine nodes has a default route pointing to the .1 in its subnet (Ubuntu and Rocky respectively), which is a typical setup in most networks.

Installation

In a previous post, I installed FRR on Ubuntu 18.04 via the snap store. You can still do that, but it looks like the snap version hasn't been updated to 8.1 yet. I'm sure it will be soon, but let's install it via the binary packages that FRR provides, just to do something different.

For Rocky Linux, you can find instructions here. They are RPM packages for CentOS, and in my testing I found them to work fine for Rocky Linux. Per their instructions, we’ll run these commands:

FRRVER="frr-stable"
curl -O https://rpm.frrouting.org/repo/$FRRVER-repo-1-0.el8.noarch.rpm
sudo yum install ./$FRRVER*
sudo yum install frr frr-pythontools

We’ll need to modify /etc/frr/daemons and turn on the protocols we want, in this case EIGRP:

vi /etc/frr/daemons
---

eigrpd=yes #find this line and set to yes

Then you’ll need to restart frr:

systemctl restart frr

The process is similar on Ubuntu. The debian-based instructions are on this page. Following those, we’ll run these commands:

curl -s https://deb.frrouting.org/frr/keys.asc | sudo apt-key add -
FRRVER="frr-stable"
echo deb https://deb.frrouting.org/frr $(lsb_release -s -c) $FRRVER | sudo tee -a /etc/apt/sources.list.d/frr.list
sudo apt update && sudo apt install frr frr-pythontools

We'll need to modify the daemons file as above and run the exact same systemctl command to restart frr.

Installation is complete!
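One thing worth checking before moving on: a Linux box won't forward packets between its interfaces unless IP forwarding is enabled, and FRR doesn't necessarily turn that on for you. A quick sketch:

sysctl -w net.ipv4.ip_forward=1                    #enable forwarding immediately
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf #persist it across reboots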

Configure FRR and EIGRP

To setup EIGRP routing, we’ll enter the FRR vtysh configuration tool that should be familiar if you’ve used either Quagga or Cisco IOS routers. On Ubuntu we’ll do this:

vtysh
---

Hello, this is FRRouting (version 8.1).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

u20vm# conf t
u20vm(config)# router eigrp 10
u20vm(config-router)# network 10.0.0.0/30
u20vm(config-router)# network 192.168.0.0/24
u20vm(config-router)# ^Z
u20vm# exit

On Rocky Linux, it’s almost exactly the same but the second network to add is 172.16.0.0/24:

vtysh
---

Hello, this is FRRouting (version 8.1).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

rl85vm# conf t
rl85vm(config)# router eigrp 10
rl85vm(config-router)# network 10.0.0.0/30
rl85vm(config-router)# network 172.16.0.0/24
rl85vm(config-router)# ^Z
rl85vm# exit

Since Rocky is running firewalld by default, you'll need to either stop it with systemctl stop firewalld or go through the process to allow EIGRP-related traffic through the firewall.
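If you'd rather not stop the firewall entirely, something like this should do it – EIGRP is its own IP protocol (number 88), and firewall-cmd can allow protocols by name (double-check against your firewalld version):

firewall-cmd --permanent --zone=public --add-protocol=eigrp
firewall-cmd --reload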

We should be able to see that each router now has the other's connected route installed in its table. On Ubuntu, show ip route in vtysh shows that 172.16.0.0/24 is installed (edited somewhat for brevity):

u20vm# show ip route
---

E   10.0.0.0/30 [90/28160] is directly connected, ens3, weight 1, 00:41:19
C>* 10.0.0.0/30 is directly connected, ens3, 00:41:57
E>* 172.16.0.0/24 [90/30720] via 10.0.0.2, ens3, weight 1, 00:40:55
E   192.168.0.0/24 [90/28160] is directly connected, ens4, weight 1, 00:40:43
C>* 192.168.0.0/24 is directly connected, ens4, 00:41:57

And likewise on Rocky Linux we can see 192.168.0.0/24 is installed:

rl85vm# show ip route
---

E   10.0.0.0/30 [90/28160] is directly connected, ens3, weight 1, 00:43:07
C>* 10.0.0.0/30 is directly connected, ens3, 00:43:45
E   172.16.0.0/24 [90/28160] is directly connected, ens4, weight 1, 00:42:50
C>* 172.16.0.0/24 is directly connected, ens4, 00:43:45
E>* 192.168.0.0/24 [90/30720] via 10.0.0.1, ens3, weight 1, 00:42:38
localhost.localdomain# 

A wireshark capture (if you're running GNS3) will show the EIGRP messages flowing. If you catch it right at the start, you can see update messages and not just hellos:

Wireshark capture of EIGRP traffic between Ubuntu and Rocky Linux

Verify

This should be easy – we'll just ping between the Alpine Linux nodes (make sure each has a default route pointing to .1):

/ # ping 172.16.0.1
PING 172.16.0.1 (172.16.0.1): 56 data bytes
64 bytes from 172.16.0.1: seq=0 ttl=63 time=2.662 ms

It works!

Hope you liked it.

How To Test Network Bandwidth With iperf3 in Linux

Testing network bandwidth on just about any flavor of Linux is simple with a tool called iperf. There are two main versions – iperf2 and iperf3. Project maintainers apparently rewrote iperf3 from scratch to make the tool simpler and to support some new features.

Update 12/12/2021: One of the authors of iperf2 reached out to me. iperf2 is currently very much actively developed; you can find the most recent code on its sourceforge.net page. iperf3 was indeed rewritten from scratch, as the wikipedia page says, but mostly to meet the U.S. Department of Energy's use cases. iperf3's github page clearly states that the DoE owns the project.

For testing bandwidth properly, you need to be running in server mode on one endpoint and client mode on the other. For this experiment, we will run the server on Rocky Linux 8.5 and the client on Ubuntu 20.04.

Topology

iperf3 test in GNS3

This is about as simple of a topology as I can think of. Two nodes on either end of a single link, Ubuntu at 10.0.0.1/30 running iperf3 client and Rocky at 10.0.0.2/30 running iperf3 server.

Iperf3 installation

On Ubuntu, iperf3 can be installed from distribution sources with apt-get:

apt-get install iperf3

Same on Rocky Linux but with yum:

yum install iperf3

Run iperf3 bandwidth test

First we need to start the server process on Rocky Linux with one command:

iperf3 -s

Then you should see the server listening for incoming tests:

iperf3 server listening on Rocky Linux 8.5

Then from the Ubuntu client, one command will run the test:

iperf3 -c 10.0.0.2

The output will give us our bandwidth test results, which can be seen on either the client or the server:

Connecting to host 10.0.0.2, port 5201
[  5] local 10.0.0.1 port 59628 connected to 10.0.0.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   176 MBytes  1.48 Gbits/sec  685    230 KBytes       
[  5]   1.00-2.00   sec   173 MBytes  1.45 Gbits/sec  738    113 KBytes       
[  5]   2.00-3.00   sec   170 MBytes  1.42 Gbits/sec  1004    191 KBytes       
[  5]   3.00-4.00   sec   175 MBytes  1.47 Gbits/sec  714    123 KBytes       
[  5]   4.00-5.00   sec   182 MBytes  1.52 Gbits/sec  458    163 KBytes       
[  5]   5.00-6.00   sec   204 MBytes  1.71 Gbits/sec  443    314 KBytes       
[  5]   6.00-7.00   sec   180 MBytes  1.51 Gbits/sec  910    130 KBytes       
[  5]   7.00-8.00   sec   191 MBytes  1.60 Gbits/sec  849    123 KBytes       
[  5]   8.00-9.00   sec   172 MBytes  1.44 Gbits/sec  564    170 KBytes       
[  5]   9.00-10.00  sec   184 MBytes  1.54 Gbits/sec  412    225 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.76 GBytes  1.52 Gbits/sec  6777             sender
[  5]   0.00-10.04  sec  1.76 GBytes  1.51 Gbits/sec                  receiver

iperf Done.
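Beyond the default TCP test, a couple of other commonly used modes can be run from the same client (both are standard iperf3 flags):

iperf3 -c 10.0.0.2 -R          #reverse mode: the server sends and the client receives
iperf3 -c 10.0.0.2 -u -b 100M  #UDP test at a target bitrate of 100 Mbits/sec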

A wireshark capture in GNS3 between the two hosts (or tcpdump on the links if you’re not in GNS3) will show the packets flying while the test is running:

Wireshark capture from GNS3 of iperf3 test

Hope you liked it!

Telnet to Ubuntu Server 20.04 in GNS3 Instead of VNC

If you’re using Ubuntu VM’s inside of GNS3, you’re probably sick of using a VNC client to access its command line.

The first big drawback to using VNC is that you can’t (or at least it’s not immediately obvious how to) paste text or commands you’ve found into the terminal. You have to retype everything, which is a real bummer.

The second big drawback is that a VNC session can’t be automated (or at least I don’t know of a good tool to do that). Since VNC is like RDP in that the session is visual, a human being or really advanced AI is required to interact with the session.

Having access to a VM in GNS3 via telnet to its terminal is a real benefit. You can set it up pretty quickly in Ubuntu 20.04. Full disclosure – this method only gets you access after the device has booted and arrived at the login prompt. There is a way to allow access earlier than that so the boot process can be viewed, I just haven’t gotten to it yet.

Set your VM to not be “linked base”

One mistake I often make in GNS3 is forgetting to disable "linked base" on a VM when I want to make permanent changes. A linked base is basically a clone of your VM: any changes you make, files you download or programs you install will be blown away when you delete the device from the GNS3 canvas. To disable this functionality temporarily while you make permanent changes, go to the device in the left pane and click "configure template". On the advanced tab, uncheck "Use as a linked base VM":

When you are done configuring the telnet capability, you can recheck this box. All linked base VM’s you drag out afterwards will have the telnet capability.

Create the ttyS0.service

First, we need to create a systemd service for serial access: a file called ttyS0.service in the /lib/systemd/system/ directory:

vi /lib/systemd/system/ttyS0.service

The file contents should look like this:

[Unit]
Description=Serial Console Service

[Service]
ExecStart=/sbin/getty -L 115200 ttyS0 vt102
Restart=always

[Install]
WantedBy=multi-user.target

getty is a program that manages tty sessions – physical or virtual terminals – and runs the login prompt when a connection is detected. 115200 is the baud rate, ttyS0 is the device file for the first serial port, and vt102 is the terminal type to emulate.

Load the service in systemd

Just a few commands will load the new service into systemd; it will run at boot to activate your serial device and allow telnet. Run these commands:

#Make file executable
chmod 755 /lib/systemd/system/ttyS0.service

#Reload systemd
systemctl daemon-reload

#Enable the service
systemctl enable ttyS0

#Start the service
systemctl start ttyS0
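You can optionally confirm that the service is up:

#Check the service
systemctl status ttyS0 #should show "active (running)" with a getty process attached to ttyS0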

Your service is good to go!

Change the console type to telnet

You need to shut down your VM first so you can change the console type. Once it's shut down, you can configure the device on the canvas, the template in the pane to the left, or both – the template will change all VMs dragged onto the canvas in the future. Either way, configure the node by right-clicking on it and clicking "configure" or "configure template". At the very bottom, you should see a dropdown for "console type". Change it to "telnet":

Log in via telnet!

Just double-click on your VM. You won’t see any output on the telnet window while the VM is booting up because the service hasn’t fired yet. But when it does, you should see the login prompt:

Bonus tip – turn off dhcp in netplan

I had to turn off dhcp in Ubuntu’s netplan network configuration tool to get it to stop hanging at boot. There should be a yaml file in /etc/netplan/ (the yaml file name might differ per system) where you can turn it off. My netplan config looks like this:

network:
  ethernets:
    ens3:
      dhcp4: false
      optional: yes
  version: 2

Hope that helps!

Dockerized phpipam in GNS3

If you're keeping track of all the IP addresses in your environment in a really big, messy Excel file, you may want to consider switching to an IP address management tool. One such tool is phpipam, a web-based tool that stores your IP addresses in a central database (a SQL database, to be specific). The reasons this approach is far superior to an Excel file are pretty clear – first of all, no more emailing a million different copies of that Excel file. But it has other advantages as well: for example, if your software development team wants to check the availability of (or reserve) an IP address, subnet or vlan from code, they can do it via the phpipam API without ever clicking on anything.
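As a rough sketch of what that API interaction looks like (the app id "myapp", hostname and credentials here are hypothetical – see the phpipam API documentation for the real workflow), you first request a token, then pass it on subsequent calls:

#Request an API token (an API app id must be created in phpipam's API settings first)
curl -X POST --user admin:password https://ipam.example.com/api/myapp/user/

#Use the returned token to list subnets
curl -H "token: <token from previous call>" https://ipam.example.com/api/myapp/subnets/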

A testing instance of phpipam can be brought into your GNS3 environment quickly using Docker! It requires a little hacking, but nothing too ambitious. If you haven’t got GNS3 or Docker installed or you don’t know how to add a Docker image to GNS3, check out my post on that topic.

Topology

We’re not doing anything fancy here, just the phpipam docker container connected to the “NAT” cloud node. By default, the NAT cloud node uses a virtual adapter with IP subnet 192.168.122.0/24. The NAT adapter is at .1, and I’ll set my phpipam container to use .2. This setup will allow us to access the phpipam web server in the container at 192.168.122.2 via a web browser from our desktop computer that runs GNS3.

Build a custom docker image

We’re going to quickly build a custom docker image from the official phpipam image on Docker Hub. If you’re using a GNS3 VM, you can do this via a cli session on the VM. If you’re using Linux, just do this from any terminal. Make a directory for your Dockerfile:

mkdir jamesphpipam
cd jamesphpipam
vi Dockerfile

Now we're going to write the docker commands for our custom image. MySQL server needs to be installed, and the directory /run/mysqld needs to be created so MySQL can put a Unix socket there:

FROM phpipam/phpipam-www

RUN apk add mysql mysql-client
RUN mkdir /run/mysqld
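Then build the image from within that directory (the tag name is arbitrary):

docker build -t jamesphpipam .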

Now we have an image (it's based on Alpine Linux) ready to fire up in GNS3. You'll need to add it from GNS3 preferences -> docker containers -> new. Go through all the screens and use the defaults, except set the "start command" to "/bin/sh", which gives you command line access when you double click the node on the GNS3 canvas.

Configure MySQL and Apache

First we need to open up the cli on the container and set its IP address to 192.168.122.2 (ip addr add 192.168.122.2/24 dev eth0). Then start both httpd and mysqld (the Apache web server and MySQL server), like this:

httpd
mysqld --user=root &

Make sure you use the ampersand at the end of your mysqld command, so it runs in the background.

To set the MySQL root user's password, I had to log in to the MySQL cli and run these commands in the phpipam docker container:

mysql -u root
ALTER USER 'root'@'localhost' IDENTIFIED BY 'SomeSecret';

Now we should be able to access the phpipam page at 192.168.122.2 from any web browser!

Configure new phpipam installation

If you click on “New phpipam installation”, it will take you to a page to select the SQL database installation type:

Let’s select “Automatic database installation”. Then we just put in the user “root” and password “SomeSecret” that we entered in our mysql cli earlier:

And our database is installed! Now we just need to set the admin password on the next screen:

Click on “Proceed to login”, login with user “admin” and the password you just set. You’ll be taken to the main phpipam page!

Hit me up if you run into any snags!