How To Ping Sweep With Python in GNS3

I debated for a long time whether to include coding in my networking blog. But since it seems the future of networking lies in code and automation, I believe it is time for some code.

Today we’ll look at how we can quickly ping sweep a subnet using python. If you are looking for resources on learning python, you might check out a free ebook on python for network engineers or a python course on Udemy. If you’d like me to create a python learning resource in the future, please let me know in the comments or contact form!


Topology in GNS3

We’ll use a simple IP subnet to sweep, with a few connected devices placed randomly inside it. We’ll write the sweep python code on the Ubuntu 20.04 server. Let’s get started!

Simple python ping sweep script

The basic logic of the code will be to loop through all hosts in the subnet, from .1 through .254 (.0 and .255 are the network and broadcast addresses, so no need to ping them), and ping each IP address once. If it responds, we’ll print to the CLI that it worked.

We’ll use two modules, ipaddress and subprocess. ipaddress is a handy tool for working with IP addresses and networks. Knowing where to start and stop in the loop is relatively simple with a /24 subnet, but what if it were a larger subnet like a /23? Just incrementing the fourth octet by 1 each time won’t work; you’ll eventually hit .256, which isn’t a valid IP address. That’s where ipaddress helps out. subprocess lets us call the ping command from python.

Here’s our code:

import ipaddress
import subprocess

mynet = ipaddress.ip_network('')  #create an ipaddress network object for our subnet

for host in mynet.hosts():                    #loop through each usable host in the subnet
    host = str(host)                          #change from ipaddress object to string to hand to the ping command
    proc =                     #use subprocess to call the ping command; split across lines for readability
        ['ping', host, '-c', '1'],            #ping this host once
        stderr=subprocess.DEVNULL,            #silence ping command errors
        stdout=subprocess.DEVNULL             #silence ping command output
    )
    if proc.returncode == 0:                  #return code of 0 from the ping command means it got a reply
        print(f'{host} is alive!')            #report that this host is alive

Hopefully this is pretty straightforward. The magic is happening in a couple spots. The first is with this line:

mynet = ipaddress.ip_network('')

This creates an “object” that holds the network we’re working with. This object has superpowers; one of them is visible in this line:

for host in mynet.hosts():

It lets us move through the hosts of the subnet one at a time; on each pass through the loop, the next host IP is assigned to host. We can then hand host to the ping command.

The second spot with magic is here:

    proc =
        ['ping', host, '-c', '1'],

This is just spawning another process, ping, from python.

When we run the script, we should see all the IP addresses that are alive!

python3   #might be python or python3 based on your OS
 is alive!
 is alive!
 is alive!
 is alive!

There’s only one problem – the script takes quite a while to run. The issue is that each time the ping command runs, python waits until it finishes before moving to the next. This is known as “blocking”, which basically means the script comes to a halt while it’s waiting for a ping process to finish. A common /24 size subnet of 254 hosts takes a good while to complete.
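Rough numbers make the problem obvious. Assuming a one-second timeout per unreachable host (the exact timeout varies by OS), the sequential sweep is dominated by waiting:

```python
# Back-of-the-envelope cost of blocking: pings run one at a time, so the
# sweep's worst case is roughly hosts * timeout.
hosts = 254        # usable hosts in a /24
timeout_s = 1      # assumed ping timeout per unreachable host
print(f'worst case: ~{hosts * timeout_s} seconds')   # worst case: ~254 seconds
```
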

What if we could ping them all at the same time, or close to it? Well, this leads us into the dark, dangerous world of multi-threading, parallel processing, multiprocessing, and asynchronous processing. Even the words are ominous-sounding. But don’t worry, it’s not so bad with the help of a handy module called asyncio.

Super-charged ping sweep with asyncio

The issue we’re faced with is that we need to do multiple things at once. There are many ways to solve this problem, and some people spend their whole careers in this complex field. Recently though, python’s asyncio module has been getting popular because it’s relatively easy to work with and not so terribly complicated compared to the alternatives. It has been part of the standard library since python 3.4, and its API stabilized in python 3.6.

Here’s the same ping sweep, this time written using the python asyncio module:

import ipaddress
import asyncio

async def ping(host):                              #the "async" keyword makes a function asynchronous
    host = str(host)                               #turn the ip address object into a string
    proc = await asyncio.create_subprocess_shell(  #asyncio can call subprocesses for you
        f'ping {host} -c 1',                       #ping command
        stderr=asyncio.subprocess.DEVNULL,         #silence ping errors
        stdout=asyncio.subprocess.DEVNULL          #silence ping output
    )
    stdout, stderr = await proc.communicate()      #wait for the ping process to finish
    if proc.returncode == 0:                       #return code of 0 means we got a reply
        print(f'{host} is alive!')                 #say it's alive!

loop = asyncio.get_event_loop()                    #create an async loop
tasks = []                                         #list to hold ping tasks

mynet = ipaddress.ip_network('')      #ip address module
for host in mynet.hosts():                         #loop through subnet hosts
    task = ping(host)                              #create async task from function we defined above
    tasks.append(task)                             #add task to list of tasks

tasks = asyncio.gather(*tasks)                     #some magic to assemble the tasks
loop.run_until_complete(tasks)                     #run all tasks (basically) at once

No denying it, this is more complicated, and it might look a bit foreign even if you’re familiar with python. The key thing here is that we define a single ping in an async function. Then, when we loop through the subnet hosts, we create a task from that function instead of running it on the spot. At the end we call the asyncio module to gather up the tasks and run them all asynchronously, which has the effect of appearing to run them all at once.

Also note that there’s no subprocess module here. Asyncio has built-in subprocess management (check the documentation here), so no need for the standard subprocess module.

While the output is the same, you’ll notice this takes about a second to run compared to minutes for the first script:

 is alive!
 is alive!
 is alive!
 is alive!

Please let me know in the comments if you want to see more content like this!

iperf2 vs iperf3: What’s the difference?

At first glance, you might be tempted to use iperf3 simply because it is one more than iperf2 (don’t worry, I’m guilty of this crime as well). It’s not an unfair assumption to think that iperf3 is the most recent version of the software, because of the name. It’s common to have two different versions of software in parallel existence, so the new one can take hold while the older version slowly dies away. Python2 and Python3 come to mind. This is not the case with iperf, however.

I recently wrote a post on how to use iperf3 to test bandwidth. Shortly after that, one of the authors of iperf2, Bob McMahon, reached out to me. He pointed out that iperf2 is very much actively developed, with some cool new features added recently. Under the surface they are very different projects, maintained by different teams with different goals.

Today we’ll take a look at some of the differences between the two.


Ubuntu 20.04 and Rocky Linux 8.5 VM’s in GNS3

We have a really basic topology here: Ubuntu 20.04 and Rocky Linux 8.5 connected by a single link on a small IP subnet. Both VM’s have iperf2 and iperf3 installed.

Bandwidth Test

For a bandwidth test, the two are almost identical. You can perform a bandwidth test using either with the same commands. For this test, the Ubuntu VM will be the client, and Rocky the server. Start the server on Rocky like this:

iperf -s

Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)

And from Ubuntu perform a test like this:

iperf -c
Client connecting to, TCP port 5001
TCP window size:  238 KByte (default)
[  3] local port 36528 connected with port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.90 GBytes  1.63 Gbits/sec

These commands will work using iperf2 or iperf3; however, note that you can’t use an iperf2 client with an iperf3 server, or vice-versa. They also use different TCP ports by default (5001 for iperf2, 5201 for iperf3). Even if you use an iperf3 client against an iperf2 server and manually set the TCP port to match, you will get an error. They are simply not compatible:

iperf3 -c -p 5001
iperf3: error - received an unknown control message

Supported Operating Systems

iperf2 is the clear winner here, primarily because it has up-to-date Windows packages available for easy download right on the sourceforge page. I avoid Windows when I can, but it has a tendency to be unavoidable due to its sheer installation base. iperf3 apparently had some unofficial builds a while back, but nothing officially supported. You’ll need to compile it yourself for Windows, which can be an inconvenience at best.

iperf2 downloads page

For Linux, many operating systems come with iperf2 preinstalled, Ubuntu 20.04 is one such example. iperf3 is just a command away though, with package managers. For macOS, the Homebrew package manager can quickly get you iperf2 or iperf3.

Feature: iperf3 authentication (not encryption)

Description of authentication features in iperf3

iperf3 supports authenticating clients to the server using public key/private key as well as a users file. I decided to try it out. To avoid a hassle I just used the exact commands they provided in the man file. You first generate a public key and private key on the server:

openssl genrsa -des3 -out private.pem 2048
openssl rsa -in private.pem -outform PEM -pubout -out public.pem
openssl rsa -in private.pem -out private_not_protected.pem -outform PEM

The server also needs a “credentials.csv” file with hashed passwords. The following commands will generate a hashed password for you:

S_USER=james S_PASSWD=james
echo -n "{$S_USER}$S_PASSWD" | sha256sum | awk '{ print $1 }'
0b0c98028105e9e4d3f100280eac29bba90af614d1c75612729228e4d160c601 #This is the hash of "james"
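The same hash can be produced in python. Note the literal curly braces around the username, which the shell snippet above relies on:

```python
import hashlib

# iperf3 hashes the literal string "{user}password" with SHA-256.
user, passwd = 'james', 'james'
to_hash = f'{{{user}}}{passwd}'      # produces the string {james}james
digest = hashlib.sha256(to_hash.encode()).hexdigest()
print(to_hash)
print(digest)
```
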

Then create the “credentials.csv” file itself, with the username and the hash from above:

# file format: username,sha256
james,0b0c98028105e9e4d3f100280eac29bba90af614d1c75612729228e4d160c601
Now start the server:

iperf3 -s --rsa-private-key-path ./private_not_protected.pem --authorized-users-path ./credentials.csv

Then from the client, copy the public key over:

scp james@ .

Then run the client:

iperf3 -c --rsa-public-key-path ./public.pem --username james

You’ll be asked for the password. If you get it right, the server will display a message that authentication succeeded:

Server listening on 5201
Authentication successed for user 'james' ts 1639396545
Accepted connection from, port 32784
[  5] local port 5201 connected to port 32786
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   194 MBytes  1.63 Gbits/sec                  
[  5]   1.00-2.00   sec   204 MBytes  1.71 Gbits/sec

Feature: iperf2 isochronous mode

One of the coolest features of iperf2 is its “isochronous” option. This option is designed to simulate video streaming network traffic. You can hear Bob McMahon explain it himself on his youtube video on this feature.

Using the parameters and commands he describes in his video, we’ll run a test. The Ubuntu server will be the iperf2 server:

iperf -s -e -i 1

Then on Rocky Linux we’ll run the client test:

[james@localhost ~]$ iperf -c -i 1 --isochronous=60:40m,10m
Client connecting to, TCP port 5001 with pid 1640
UDP isochronous: 60 frames/sec mean=40.0 Mbit/s, stddev=10.0 Mbit/s, Period/IPG=16.67/0.005 ms
TCP window size:  340 KByte (default)
[  3] local port 49150 connected with port 5001 (ct=1.44 ms)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
[  3] 0.00-1.00 sec   214 MBytes  1.79 Gbits/sec  1708/0          0       67K/562 us  398346.93
[  3] 1.00-2.00 sec   217 MBytes  1.82 Gbits/sec  1738/0        230      145K/608 us  374676.21
[  3] 2.00-3.00 sec   205 MBytes  1.72 Gbits/sec  1640/0        427      142K/583 us  368710.26
[  3] 3.00-4.00 sec   212 MBytes  1.78 Gbits/sec  1697/0        575      118K/920 us  241770.85
[  3] 4.00-5.00 sec   200 MBytes  1.68 Gbits/sec  1599/0        371      134K/853 us  245702.38
[  3] 5.00-6.00 sec   200 MBytes  1.68 Gbits/sec  1598/0        423      117K/529 us  395941.50

On the server we get our output:

james@u20vm:~$ iperf -s -e -i 1
Server listening on TCP port 5001 with pid 3045
Read buffer size:  128 KByte
TCP window size:  128 KByte (default)
[  4] local port 5001 connected with port 49150
[ ID] Interval            Transfer    Bandwidth       Reads   Dist(bin=16.0K)
[  4] 0.0000-1.0000 sec   213 MBytes  1.79 Gbits/sec  4631    503:1500:1008:577:276:191:138:438
[  4] 1.0000-2.0000 sec   217 MBytes  1.82 Gbits/sec  4018    570:838:812:502:255:231:164:646
[  4] 2.0000-3.0000 sec   204 MBytes  1.71 Gbits/sec  5074    590:1537:1637:511:316:152:115:216
[  4] 3.0000-4.0000 sec   212 MBytes  1.78 Gbits/sec  3924    599:805:717:464:266:264:246:563
[  4] 4.0000-5.0000 sec   200 MBytes  1.68 Gbits/sec  3876    575:953:672:462:258:242:188:526
[  4] 5.0000-6.0000 sec   200 MBytes  1.68 Gbits/sec  4046    656:1040:687:476:258:242:238:449

Unfortunately the version of iperf2 available in the Ubuntu 20.04 repositories (2.0.13) doesn’t support the TCP isochronous mode mentioned in the video. You would need to compile from source or use Windows for that. A newer version will be included (probably already has been, by the time you’re reading this) in Ubuntu 22.04 LTS.

Various smaller differences

There are many other spots where iperf2 and iperf3 differ.

  • iperf2 supports an “enhanced output mode” via -e with totally revamped output (used above in the isochronous section).
  • iperf3 supports JSON output using the -J option.
  • iperf2 supports a bidirectional test, which performs tests from the client and server simultaneously, using -d.
  • iperf2 uses a multi-threaded architecture, while iperf3 is single-threaded. To be honest, I haven’t seen any way this actually affects performance of the application. I’d be really curious if anyone has input on this.
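That JSON output is handy for scripting. Here’s a sketch of pulling the sender’s bitrate out of it; the document below is trimmed to just the fields used (a real iperf3 -J result contains far more):

```python
import json

# A fragment shaped like iperf3 -J output (heavily trimmed for the example).
raw = '{"end": {"sum_sent": {"bits_per_second": 1520000000.0}}}'

result = json.loads(raw)
bps = result['end']['sum_sent']['bits_per_second']
print(f'{bps / 1e9:.2f} Gbit/s')   # 1.52 Gbit/s
```
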

I hope this was helpful, and I hope I did both of these cool programs a small amount of justice. I’m really curious to see if anyone has other input or differences they know about. Please feel free to comment or reach out directly.

How To Install Free Range Routing (FRR) on Ubuntu 20.04 and Rocky Linux 8.5

The latest version of my favorite routing protocol software, Free Range Routing (FRR) 8.1, was recently released on November 9th.

Free Range Routing is a fork of the Quagga project that improves upon it and adds many more features and new protocols. My favorite added protocol is EIGRP, which was a Cisco proprietary protocol until Cisco released a draft RFC in 2013. Free Range Routing makes it easy to spin up a Linux router and exchange routes via EIGRP. Since Cisco routers speak EIGRP, you can exchange routes with them too! Today we’ll just exchange routes between Ubuntu 20.04 and Rocky Linux 8.5 via EIGRP.


Ubuntu and Rocky Linux in GNS3

We have a simple network here, with Ubuntu and Rocky Linux VM’s acting as IP routers. Without added routes, Ubuntu does not know about Rocky’s far subnet, and Rocky does not know about Ubuntu’s; EIGRP can educate them. I should mention that each of the Alpine nodes has a default route pointing to the .1 in its subnet (Ubuntu and Rocky), which is a typical setup in most networks.


In a previous post, I installed FRR on Ubuntu 18.04 via the snap store. You can still do that, but it looks like the snap version hasn’t been updated to 8.1 yet. I’m sure it will be soon, but let’s install it via the binary packages that FRR provides, just to do something different.

For Rocky Linux, you can find instructions here. They are RPM packages for CentOS, and in my testing I found them to work fine for Rocky Linux. Per their instructions, we’ll run these commands:

FRRVER="frr-stable"   # selects the FRR release repo; "frr-stable" per FRR's install docs
curl -O$FRRVER-repo-1-0.el8.noarch.rpm
sudo yum install ./$FRRVER*
sudo yum install frr frr-pythontools

We’ll need to modify /etc/frr/daemons and turn on the protocols we want, in this case EIGRP:

vi /etc/frr/daemons

eigrpd=yes #find this line and set to yes

Then you’ll need to restart frr:

systemctl restart frr

The process is similar on Ubuntu. The debian-based instructions are on this page. Following those, we’ll run these commands:

FRRVER="frr-stable"   # selects the FRR release repo; "frr-stable" per FRR's install docs
curl -s | sudo apt-key add -
echo deb $(lsb_release -s -c) $FRRVER | sudo tee -a /etc/apt/sources.list.d/frr.list
sudo apt update && sudo apt install frr frr-pythontools

We’ll need to modify the daemons file as above, and run the same systemctl command to restart frr.

Installation is complete!

Configure FRR and EIGRP

To setup EIGRP routing, we’ll enter the FRR vtysh configuration tool that should be familiar if you’ve used either Quagga or Cisco IOS routers. On Ubuntu we’ll do this:

vtysh
Hello, this is FRRouting (version 8.1).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

u20vm# conf t
u20vm(config)# router eigrp 10
u20vm(config-router)# network
u20vm(config-router)# network
u20vm(config-router)# ^Z
u20vm# exit

On Rocky Linux, it’s almost exactly the same, but the second network to add is Rocky’s other connected subnet:

vtysh
Hello, this is FRRouting (version 8.1).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

rl85vm# conf t
rl85vm(config)# router eigrp 10
rl85vm(config-router)# network
rl85vm(config-router)# network
rl85vm(config-router)# ^Z
rl85vm# exit

Since Rocky is running firewalld by default, you’ll need to either stop it with systemctl stop firewalld or go through the process of allowing EIGRP-related traffic through the firewall.

We should be able to see that each router now has the other’s connected route installed in its table. On Ubuntu, we can see the learned route from vtysh with show ip route (edited somewhat for brevity):

u20vm# show ip route

E [90/28160] is directly connected, ens3, weight 1, 00:41:19
C>* is directly connected, ens3, 00:41:57
E>* [90/30720] via, ens3, weight 1, 00:40:55
E [90/28160] is directly connected, ens4, weight 1, 00:40:43
C>* is directly connected, ens4, 00:41:57

And likewise on Rocky Linux, the learned route is installed:

rl85vm# show ip route

E [90/28160] is directly connected, ens3, weight 1, 00:43:07
C>* is directly connected, ens3, 00:43:45
E [90/28160] is directly connected, ens4, weight 1, 00:42:50
C>* is directly connected, ens4, 00:43:45
E>* [90/30720] via, ens3, weight 1, 00:42:38

A Wireshark capture (easy if you’re running GNS3) will show the EIGRP messages flowing. If you catch it right at the start, you can see update messages and not just hellos:

Wireshark capture of EIGRP traffic between Ubuntu and Rocky Linux


This should be easy: we’ll just ping between the Alpine Linux nodes (make sure each has a default route pointing to .1).

/ # ping
PING ( 56 data bytes
64 bytes from seq=0 ttl=63 time=2.662 ms

It works!

Hope you liked it.

How To Test Network Bandwidth With iperf3 in Linux

Testing network bandwidth in multiple flavors in Linux is simple with a tool called iperf. There are two main versions – iperf2 and iperf3. Project maintainers apparently rewrote iperf3 from scratch to make the tool simpler and to support some new features.

Update 12/12/2021: One of the authors of iperf2 reached out to me. iperf2 is currently very much actively developed. You can find the most recent code on its page. iperf3 was indeed rewritten from scratch, as the wikipedia page says, but mostly to meet the U.S. Department of Energy’s use cases. iperf3’s github page clearly states that the DoE owns the project.

For testing bandwidth properly, you need to be running in server mode on one endpoint and client mode on the other. For this experiment, we will run the server on Rocky Linux 8.5 and the client on Ubuntu 20.04.


iperf3 test in GNS3

This is about as simple of a topology as I can think of. Two nodes on either end of a single link, Ubuntu at running iperf3 client and Rocky at running iperf3 server.

Iperf3 installation

On Ubuntu, iperf3 can be installed from distribution sources with apt-get:

apt-get install iperf3

Same on Rocky Linux but with yum:

yum install iperf3

Run iperf3 bandwidth test

First we need to start the server process on Rocky Linux with one command:

iperf3 -s

Then you should see the server listening for incoming tests:

iperf3 server listening on Rocky Linux 8.5

Then from the Ubuntu client, one command will run the test:

iperf3 -c

The output gives us our bandwidth test results, which can be seen on either the client or the server:

Connecting to host, port 5201
[  5] local port 59628 connected to port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   176 MBytes  1.48 Gbits/sec  685    230 KBytes       
[  5]   1.00-2.00   sec   173 MBytes  1.45 Gbits/sec  738    113 KBytes       
[  5]   2.00-3.00   sec   170 MBytes  1.42 Gbits/sec  1004    191 KBytes       
[  5]   3.00-4.00   sec   175 MBytes  1.47 Gbits/sec  714    123 KBytes       
[  5]   4.00-5.00   sec   182 MBytes  1.52 Gbits/sec  458    163 KBytes       
[  5]   5.00-6.00   sec   204 MBytes  1.71 Gbits/sec  443    314 KBytes       
[  5]   6.00-7.00   sec   180 MBytes  1.51 Gbits/sec  910    130 KBytes       
[  5]   7.00-8.00   sec   191 MBytes  1.60 Gbits/sec  849    123 KBytes       
[  5]   8.00-9.00   sec   172 MBytes  1.44 Gbits/sec  564    170 KBytes       
[  5]   9.00-10.00  sec   184 MBytes  1.54 Gbits/sec  412    225 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.76 GBytes  1.52 Gbits/sec  6777             sender
[  5]   0.00-10.04  sec  1.76 GBytes  1.51 Gbits/sec                  receiver

iperf Done.

A wireshark capture in GNS3 between the two hosts (or tcpdump on the links if you’re not in GNS3) will show the packets flying while the test is running:

Wireshark capture from GNS3 of iperf3 test

Hope you liked it!

Telnet to Ubuntu Server 20.04 in GNS3 Instead of VNC

If you’re using Ubuntu VM’s inside of GNS3, you’re probably sick of using a VNC client to access its command line.

The first big drawback to using VNC is that you can’t (or at least it’s not immediately obvious how to) paste text or commands you’ve found into the terminal. You have to retype everything, which is a real bummer.

The second big drawback is that a VNC session can’t be automated (or at least I don’t know of a good tool to do that). Since VNC is like RDP in that the session is visual, a human being or really advanced AI is required to interact with the session.

Having access to a VM in GNS3 via telnet to its terminal is a real benefit. You can set it up pretty quickly in Ubuntu 20.04. Full disclosure – this method only gets you access after the device has booted and arrived at the login prompt. There is a way to allow access earlier than that so the boot process can be viewed, I just haven’t gotten to it yet.

Set your VM to not be “linked base”

One mistake I often make in GNS3 is forgetting to make my VM not a “linked base” when I want to make permanent changes. A linked base is basically a clone of your VM. Any changes you make, files you download or programs you install will be blown away when you delete the device from the GNS3 canvas. To disable this functionality temporarily to make permanent changes, go to the device in the left pane and click “configure template”. On the advanced tab, uncheck “Use as a linked base VM”:

When you are done configuring the telnet capability, you can recheck this box. All linked base VM’s you drag out afterwards will have the telnet capability.

Create the ttyS0.service

You first need to create a systemd service for serial access. We need to create a file called ttyS0.service in the /lib/systemd/system/ directory:

vi /lib/systemd/system/ttyS0.service

The file contents should look like this:

[Unit]
Description=Serial Console Service

[Service]
ExecStart=/sbin/getty -L 115200 ttyS0 vt102

[Install]

getty is a program that manages tty sessions (physical or virtual terminals), running the login prompt when a connection is detected. 115200 is the baud rate, ttyS0 is the device file for the first serial port, and vt102 is the terminal type to emulate.

Load the service in systemd

Just a few commands will load the new service in systemd, and the script will run on boot to activate your serial device and allow telnet. Run these commands:

#Make file executable
chmod 755 ttyS0.service

#Reload systemd
systemctl daemon-reload

#Enable the service
systemctl enable ttyS0

#Start the service
systemctl start ttyS0

Your service is good to go!

Change the console type to telnet

You need to first shut down your VM so you can change the console type. Once it’s shutdown, you can configure the device on the canvas, or the template in the pane to the left, or both. The template will make changes for all VM’s dragged onto the canvas in the figure. Either way, configure the node by right clicking on it, and clicking on “configure” or “configure template”. At the very bottom, you should see a dropdown for “console type”. Change it to “telnet”:

Log in via telnet!

Just double-click on your VM. You won’t see any output on the telnet window while the VM is booting up because the service hasn’t fired yet. But when it does, you should see the login prompt:

Bonus tip – turn off dhcp in netplan

I had to turn off dhcp in Ubuntu’s netplan network configuration tool to get it to stop hanging at boot. There should be a yaml file in /etc/netplan/ (the yaml file name might differ per system) where you can turn it off. My netplan config looks like this:

network:
  ethernets:
    ens3:              # your interface name may differ
      dhcp4: false
      optional: yes
  version: 2

Hope that helps!

Dockerized phpipam in GNS3

If you’re keeping track of all the IP addresses in your environment in a really big, messy Excel file, you may want to consider switching to an IP address management tool. One such tool is phpipam, a web-based tool that stores your IP addresses in a central database (a SQL database, to be specific). The reasons this approach is far superior to an Excel file are pretty clear – first of all, no more emailing a million different copies of that Excel file. But it has other advantages as well; for example, if your software development team wants to check the availability of, or reserve, a new IP address, subnet or vlan from code, they can do it via the phpipam API without ever clicking on anything.

A testing instance of phpipam can be brought into your GNS3 environment quickly using Docker! It requires a little hacking, but nothing too ambitious. If you haven’t got GNS3 or Docker installed or you don’t know how to add a Docker image to GNS3, check out my post on that topic.


We’re not doing anything fancy here, just the phpipam docker container connected to the “NAT” cloud node. By default, the NAT cloud node uses a virtual adapter with IP subnet The NAT adapter is at .1, and I’ll set my phpipam container to use .2. This setup will allow us to access the phpipam web server in the container at via a web browser from our desktop computer that runs GNS3.

Build a custom docker image

We’re going to quickly build a custom docker image from the official phpipam image on Docker Hub. If you’re using a GNS3 VM, you can do this via a cli session on the VM. If you’re using Linux, just do this from any terminal. Make a directory for your Dockerfile:

mkdir jamesphpipam
cd jamesphpipam
vi Dockerfile

Now we’re going to write the docker commands for our custom image. MySQL server needs to be installed, and the directory “/run/mysqld” needs to be created so MySQL can create a Unix socket there:

FROM phpipam/phpipam-www

RUN apk add mysql mysql-client
RUN mkdir /run/mysqld

Now we have an image (it’s based on Alpine Linux) ready to fire up in GNS3. You’ll need to add it from GNS3 preferences -> docker containers -> new. Go through all the screens and use defaults, except you’ll want to set the “start command” to “/bin/sh” to give you command line access when you double click on it from the GNS3 canvas.

Configure MySQL and Apache

First we need to open up the cli on the container and set its IP address (ip addr add dev eth0, with your chosen address). Then start up mysqld and httpd (MySQL Server and Apache Web Server), like this:

mysqld --user=root &

Make sure you use the ampersand at the end of your mysqld command, so it runs in the background.

To set the MySQL user and password, I had to login to the MySQL cli and run these commands in the phpipam docker container:

mysql -u root
ALTER USER 'root'@'localhost' IDENTIFIED BY 'SomeSecret';

Now we should be able to access the phpipam page at from any web browser!

Configure new phpipam installation

If you click on “New phpipam installation”, it will take you to a page to select the SQL database installation type:

Let’s select “Automatic database installation”. Then we just put in the user “root” and password “SomeSecret” that we entered in our mysql cli earlier:

And our database is installed! Now we just need to set the admin password on the next screen:

Click on “Proceed to login”, login with user “admin” and the password you just set. You’ll be taken to the main phpipam page!

Hit me up if you run into any snags!

Add Dockerized Bind DNS Server to GNS3

I posted a while ago on how to install and configure the Bind DNS server on Ubuntu 18.04, and got a request from a reader for help getting Dockerized Bind into GNS3. This post is the result of my tinkering with that lab.

The organization that oversees the Bind open source project also releases an official Docker image through the Docker hub that anyone can access. Docker container technology can be tricky at first for systems and network engineers to wrap their heads around. Docker containers are not an entire operating system – full operating systems are designed to run many processes at once, while Docker containers are designed to run one process and one process only. They contain only the software and libraries needed to run that process.

GNS3 has a very cool integration with Docker, however. It allows you to add full network adapters to your containers and copies in some handy tools to make the command line environment usable. But since many of the familiar OS tools are not included in most Docker containers like they would be with a standard OS, it can be challenging to get things working right.

If you are using Ubuntu Linux, feel free to check out my guide on installing GNS3 and Docker on Ubuntu 20.04. If you are using Windows or Mac with the GNS3 VM, Docker is already installed on the VM.


My topology is simple – a single vlan and IP subnet. My Bind DNS server will reside at one address, with two Alpine Linux containers alongside it on the same subnet. I walk through getting Alpine Linux containers installed in the post I linked above, if you need help.

Build your own image based off the official ISC Bind image

First open up a shell or terminal on the GNS3 VM or wherever the GNS3 server is located. If you don’t know how to open a shell, they walk you through it on the official GNS3 docs:

Create a directory where you can write your Dockerfile and build the image:

mkdir jamesbind
cd jamesbind

vi Dockerfile

Feel free to use whatever text editor you like, I’m a vi person. We’re going to write a Dockerfile that looks like this:

FROM internetsystemsconsortium/bind9:9.11
RUN apt-get update
RUN apt-get install vim -y

Basically all this does is pull the official Bind Docker image and run a couple of commands on it: updating the package lists and installing vim. We need to do this because the image does not ship with a text editor, and we have to edit the Bind configuration files.

Full disclosure: there is another, much better way than manually editing config files from inside the container. You can write the config files in the same folder as the Dockerfile, and add them to the Docker image when you build it. However, I think it’s best for learning and troubleshooting purposes to manually edit the text files, so that’s the route I’m going.

Build your image (the -t switch gives it a “tag”, which is basically a name):

docker build -t jamesbind .

Don’t forget the period at the end, that’s important. You should now have a fresh docker image with bind and vi installed in it.

Add your image to GNS3

From the GNS3 preferences window, you can now add your image to the list of devices available.

Click through and use the defaults except when you get to the “Start command” window. You’ll want to set that to /bin/bash:

Now you’re ready to use your image in GNS3!

Fire up Bind

Drag all the containers out, connect them, and double-click on them to get a terminal. You should be able to configure IP settings normally using iproute2 commands (ip addr add dev eth0, etc.). For the Bind container, let’s write our config files. As I mentioned many cycles ago in my Bind server post, there are three Bind config files:

/etc/bind/named.conf.options –> Configures BIND9 options
/etc/bind/named.conf.local –> Sets zone file name and gives its location
/etc/bind/zones/ –> The actual zone file with DNS records.

First let’s hop into our Bind container (just double click on it) and configure named.conf.options. Mine looks like this:

options {
        directory "/var/cache/bind";
        listen-on { any; };
};

Now on to named.conf.local. This is where you declare your zone. Mine is going to be, I just made it up.

zone "" {
    type master;
    file "/etc/bind/zones/";
};

Now for the zone file that we indicated above. It needs to be created, so let’s create both the zones folder and the zone file:

mkdir zones
cd zones
@               IN      SOA (
                                2               ; Serial
                                604800	        ; Refresh
                                86400           ; Retry
                                2419200         ; Expire
                                604800 )        ; Negative Cache TTL
@               IN      NS
ns              IN      A
alpine1         IN      A
alpine2         IN      A
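Before starting Bind, it can save a restart cycle to sanity-check the zone file for the records it must contain. The snippet below is only an illustrative sketch (the real tool for this is named-checkzone, which ships with BIND); the embedded zone text is the file from above:

```python
# Rough sanity check for a BIND zone file (illustrative only; the proper tool
# is named-checkzone). The zone text below is the file written above.
zone_text = """
@               IN      SOA (
                                2               ; Serial
                                604800          ; Refresh
                                86400           ; Retry
                                2419200         ; Expire
                                604800 )        ; Negative Cache TTL
@               IN      NS
ns              IN      A
alpine1         IN      A
alpine2         IN      A
"""

def has_record(text, rtype):
    # True if any non-comment portion of a line declares the given record type
    return any(f" {rtype} " in f" {line.split(';')[0]} ".replace("\t", " ")
               for line in text.splitlines())

assert has_record(zone_text, "SOA"), "zone file needs an SOA record"
assert has_record(zone_text, "NS"), "zone file needs at least one NS record"
print("basic zone checks passed")
```

A check like this only catches gross omissions; syntax errors are exactly what "named -g" is for.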

Finally, fire up Bind by running the “named -g” command. This runs it in the foreground with debug output, which is handy. Alternatively, you can just run “named” and it’ll go to the background. When you run it, look for a line saying your zone file was loaded. “all zones loaded” can be misleading: if there are errors in your zone, Bind will report them and then still say all zones were loaded. Make sure you read the output carefully:

named -g
<...removed for brevity...>
26-Oct-2021 23:49:14.231 zone loaded serial 2
26-Oct-2021 23:31:49.828 all zones loaded
26-Oct-2021 23:31:49.829 running

In your Alpine containers, add “nameserver” to resolv.conf to tell them to use the Bind server for DNS resolution:

echo "nameserver" > /etc/resolv.conf

Testing your setup

First let’s ping (the Bind container) from alpine-1:


PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=1.080 ms
64 bytes from seq=1 ttl=64 time=1.073 ms

It works! We can see in a wireshark packet capture the DNS request from and response from
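For the curious, here's roughly what that DNS query looks like on the wire. This is a hand-rolled sketch of a minimal A-record query (real clients use the system resolver; the query name "example.com" is just a placeholder):

```python
import struct

# Minimal DNS A-record query, roughly what a client sends the Bind server on
# UDP/53. Sketch only; the name and query ID are arbitrary placeholders.
def build_query(name, qid=0x1234):
    # Header: ID, flags (RD=1), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    # QTYPE=1 (A), QCLASS=1 (IN)
    return header + qname + struct.pack(">HH", 1, 1)

pkt = build_query("example.com")
print(len(pkt))  # 12-byte header + 13-byte QNAME + 4-byte question tail = 29
```

Those length-prefixed labels and the QTYPE/QCLASS fields are exactly what Wireshark decodes in the capture.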

Pinging to also works:


PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=0.999 ms
64 bytes from seq=1 ttl=64 time=1.087 ms


The Bind configuration files are really sensitive to anything that’s left out. Be sure to check whether you forgot a semicolon and that your zone file is properly formatted with all required entries in place. And again, I highly recommend using “named -g” while you are testing; it’ll give you some big hints as to what is wrong with your configuration.

If your Bind server is running with no config errors and something still isn’t working, it could be a network issue. Make sure to do a packet capture to see if packets are actually flowing and are what you expect! Sometimes after troubleshooting for a long time I do a packet capture only to find packets were never leaving the network interface due to something I forgot, like adding an IP address or a route somewhere.

Good luck! Feel free to reach out with questions about your lab, I’m always happy to help.

Install GNS3 and Docker on Ubuntu 20.04 for Cisco and Linux Network Labs

Every major OS has its place, so I’m not hoping to get into that discussion, but I find that Ubuntu Linux works really well for creating network labs in GNS3. If you’re not familiar with GNS3, you’re missing out. It allows you to pull real VM’s, and even Docker containers, into an emulated network environment for testing and experimentation. You can run Cisco routers and switches, gear from other network vendors, Windows Desktop and Server, Linux, and any other OS supported by Linux’s QEMU/KVM hypervisor, which is pretty much anything. GNS3 has many features, but today we’ll just look at getting it installed, along with Docker.

Why is Ubuntu better for running GNS3? You may have noticed that in the Windows and Mac versions of GNS3, the server has to run on a VM to work properly. That server VM runs a Linux OS, specifically Ubuntu. So using Ubuntu as your desktop OS cuts out all of that server-VM complexity, not to mention the additional RAM it consumes. Simply put, GNS3 runs the way it’s supposed to on Ubuntu. Not to knock the Windows and Mac versions; the GNS3 team worked hard on those. But in my humble and honest opinion, Ubuntu just works better for GNS3.

Most folks stick to using VM’s in GNS3, but the Docker integration is pretty awesome and has some very real benefits over VM’s. Any docker container you have installed on the same system as the GNS3 server can be pulled into GNS3, although whether it will work properly depends somewhat on what the container has installed in it.

GNS3 Installation

The official GNS3 Ubuntu releases can be found at their PPA at:

The PPA can be added and GNS3 installed with just a few quick commands, although it’s a relatively big download:

sudo add-apt-repository ppa:gns3/ppa
sudo apt-get update
sudo apt-get install gns3-gui

When you first run GNS3, you’ll notice that the default option is not a VM, it’s to run the server locally. No VM needed!

At this point, GNS3 is installed, although you may have to run this command to get Wireshark captures working:

chmod 755 /usr/bin/dumpcap

Docker Installation

I’ll just be following the official Docker instructions here, they work great:

These steps install Docker’s apt repository, which is probably the “best” option. There is a convenience one-liner script, but we all know that’s not a good habit to get into, so we’ll avoid it.

First install dependencies:

 sudo apt-get update
 sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

Add the Docker official GPG key:

curl -fsSL | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Add the stable repository:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

And install:

 sudo apt-get update
 sudo apt-get install docker-ce docker-ce-cli

To avoid getting permissions errors in GNS3, you’ll need to add your user to the docker group. You’ll need to log out/log in or restart for this to take effect:

sudo usermod -aG docker ${USER}

Add a Docker container to your GNS3

Now that Docker is installed, pulling a Docker image from the Docker Hub is easy. A popular one is Alpine Linux because it’s so small, but packs lots of popular tools and libraries:

docker pull alpine

Now you should be able to add this image to GNS3. Go into GNS3, go to preferences, and all the way at the bottom where it says “Docker containers”. Click on “new”, and you should be able to select the Alpine Linux image from the drop-down menu:

Click through and leave the defaults, but you might want two network adapters instead of one, in case you want it to be a router. Now just drag a couple containers out onto the canvas:

At this point, you should be able to double click on these and get a busybox shell, which will let you configure IP settings and the like. You may have noticed that the startup of these containers is near-instantaneous, and they consume very little RAM. One of the many perks of the lightweight nature of Docker containers. Enjoy!

Kea DHCP Server 1.6 on Ubuntu 18.04 with Cisco IOS


The Internet Systems Consortium (ISC) is a 501(c)(3) non-profit (according to Wikipedia) that has developed and maintains both the BIND DNS server and ISC-DHCP-Server. Both of these open-source projects are used all over the world and are deployed on many, many servers. While ISC-DHCP-Server is handy and easy to configure, ISC has developed a new DHCP server called Kea that they say will eventually replace ISC-DHCP-Server. ISC claims Kea is better in nearly every way, so I’m going to give it a shot.

While all of ISC’s software is open-source, like many open-source maintainers they generate revenue by selling professional support for the software that they develop. Check out their support page for more info.


Pretty basic here. Kea will be installed on my Ubuntu 18.04 server and an IPv4 network is configured. We’ll see if we can use DHCP to get an IP address on my Cisco IOSv 15.6 router. I’m doing a packet capture in the middle which is always helpful for troubleshooting and verification.


Installation was pretty easy, provided you are ok with downloading a script from the internet and running it on your system. This method has some serious security concerns, so if you’re planning to run it on a production system you’ll want to probably take a more cautious approach. However my environment is entirely in GNS3 on a cloned Ubuntu VM so I can destroy the whole thing when I’m done. Please be careful out there, don’t talk to strangers.

It took me some time to find the official documentation, but as usual it was at

Running ISC’s bash script went pretty smoothly for me; this process installs the package repository that ISC provides:

$ curl -1sLf \
>   '' \
>   | sudo bash
Executing the  setup script for the 'isc/kea-1-6' repository ...

  OK: Checking for required executable 'curl' ...
  OK: Checking for required executable 'apt-get' ...
  OK: Detecting your OS distribution and release using system methods ...
  OK: OS detected as: ubuntu 18.04 (bionic)
FAIL: Checking for apt dependency 'apt-transport-https' ...
  OK: Updating apt repository metadata cache ...
  OK: Attempting to install 'apt-transport-https' ...
  OK: Checking for apt dependency 'gnupg' ...
  OK: Importing 'isc/kea-1-6' repository GPG key into apt ...
  OK: Checking if upstream install config is OK ...
  OK: Installing 'isc/kea-1-6' repository via apt ...
  OK: Updating apt repository metadata cache ...
  OK: The repository has been installed successfully - You're ready to rock!


Once the repository is successfully installed, you can now install the packages:

$apt install isc-kea-dhcp4-server isc-kea-dhcp6-server isc-kea-admin


We’ll start with something really simple. I’ll set the range to be from to, the default gateway to (the Ubuntu box), and the lease database to “memfile”, which for Kea means leases are written to a CSV file. This is pretty much the exact same functionality as ISC-DHCP-Server.


The default configuration file on my Ubuntu 18.04 box was located at /etc/kea/kea-dhcp4.conf. I deleted all the pre-added stuff and replaced it with the following basic configuration:

{
# DHCPv4 configuration starts on the next line
"Dhcp4": {

# First we set up global values
    "valid-lifetime": 4000,
    "renew-timer": 1000,
    "rebind-timer": 2000,

# Next we set up the interfaces to be used by the server.
    "interfaces-config": {
        "interfaces": [ "eth0" ]
    },

# And we specify the type of lease database
    "lease-database": {
        "type": "memfile",
        "persist": true,
        "name": "/var/lib/kea/dhcp4.leases"
    },

# Finally, we list the subnets from which we will be leasing addresses.
    "subnet4": [
        {
            "subnet": "",
            "pools": [
                { "pool": " -" }
            ],
            "option-data": [
                {
                    "name": "routers",
                    "data": ""
                }
            ]
        }
    ]
# DHCPv4 configuration ends with the next line
}
}

And restart Kea with

$systemctl restart isc-kea-dhcp4-server

#and also check status:

$systemctl status isc-kea-dhcp4-server

If everything is good, you should see a green dot from systemd telling you Kea is up and running.

In this step it’s really easy to get hung up because of invalid JSON. Make sure you close all your {‘s and [‘s.
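A quick pre-flight parse can catch an unclosed brace before you restart Kea. This is a hedged sketch: Kea's config is JSON that additionally allows "#" comment lines, so the helper strips those before parsing (inline comments after a value would still need handling):

```python
import json

# Pre-flight syntax check for a Kea config. Kea allows "#" comment lines,
# plain JSON does not, so drop comment-only lines before parsing.
def check_kea_config(text):
    stripped = "\n".join(line for line in text.splitlines()
                         if not line.lstrip().startswith("#"))
    try:
        json.loads(stripped)
        return None  # parsed cleanly
    except json.JSONDecodeError as e:
        return "line %d: %s" % (e.lineno, e.msg)

good = '{ "Dhcp4": { "valid-lifetime": 4000 } }'
bad = '{ "Dhcp4": { "valid-lifetime": 4000 }'  # missing a closing brace
print(check_kea_config(good))  # None
print(check_kea_config(bad))   # points at the unclosed brace
```

It's a lot faster than restarting the service and grepping syslog every time.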

If all is well, Kea should be running and ready to hand out DHCP leases.


Configuration of a DHCP client on an IOS router is dead simple. On the interface just issue “ip address dhcp”. The interface section of your show run will look like this:

interface GigabitEthernet0/0
 ip address dhcp
 duplex auto
 speed auto



Since we’ve specified in our config that we want to store DHCP leases in /var/lib/kea/dhcp4.leases, my first step is to check to see if there’s a lease in there for the Cisco router.

$cat /var/lib/kea/dhcp4.leases

It’s a little hard to read, but it’s there.


On the Cisco device you’ll be looking for a log message like this:

*Mar  7 22:39:47.135: %DHCP-6-ADDRESS_ASSIGN: Interface GigabitEthernet0/0 assigned DHCP address, mask, hostname

You’ll also want to check IP interfaces for an IP address and issue “show ip interface brief”:

Interface                  IP-Address      OK? Method Status                Protocol
GigabitEthernet0/0      YES DHCP   up                    up      
GigabitEthernet0/1         unassigned      YES unset  administratively down down    
GigabitEthernet0/2         unassigned      YES unset  administratively down down    
GigabitEthernet0/3         unassigned      YES unset  administratively down down

You’ll also want to check the packet capture for the standard DHCP set of 4 messages – Discover, Offer, Request, Acknowledge. Wireshark will recognize “bootp” as a filter:
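Under the hood, those four message names come from DHCP option 53 in each BOOTP payload (per RFC 2132); a tiny mapping shows what Wireshark is decoding:

```python
# DHCP message types are carried in option 53 of the BOOTP payload (RFC 2132).
# These four values make up the normal lease-acquisition handshake.
DHCP_MESSAGE_TYPES = {1: "Discover", 2: "Offer", 3: "Request", 5: "Ack"}

def classify(option53):
    return DHCP_MESSAGE_TYPES.get(option53, "Other")

# A healthy lease acquisition shows these four, in order:
sequence = [classify(v) for v in (1, 2, 3, 5)]
print(sequence)  # ['Discover', 'Offer', 'Request', 'Ack']
```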


The first sign of trouble will be that you’ll see nothing but “Discover” messages coming from the Cisco router, and no response from Kea. At that point, you’ll want to make sure Kea is running:

Then check the syslog file. On Ubuntu 18.04, this is in /var/log/syslog. Make sure to grep for “kea” because the file is usually pretty big, I used “cat /var/log/syslog | grep kea”:

In this particular instance, I had written invalid JSON in the Kea configuration file, so Kea failed to start.

I also had an instance where I was trying to use a MySQL server to store leases, and my configuration wasn’t correct. Kea started but simply didn’t respond to DHCP requests. If you’re using SQL, make sure communication is happening between Kea and your SQL server.

Optional – MySQL Server for Leases storage

One of the options Kea supports is storing lease data in a database, such as a MySQL server (among others). ISC states in their documentation that the SQL database option is really aimed at the needs of large deployments, so it’s optional, of course.

To install MySQL:

$sudo apt update
$sudo apt install mysql-server

You can check to make sure it’s running:

$systemctl status mysql

Status should be green:

Then you need to jump into mysql and create a database for Kea to use:

$mysql -u root -p
Enter password:
mysql>CREATE DATABASE james_kea;
Query OK, 1 row affected (0.00 sec)
mysql> CREATE USER 'james'@'localhost' IDENTIFIED BY 'james';
Query OK, 0 rows affected (0.01 sec)
mysql> GRANT ALL ON james_kea.* TO 'james'@'localhost';
Query OK, 0 rows affected (0.00 sec)
mysql> exit

Then initialize the Kea database with the kea-admin tool:

kea-admin db-init mysql -u james -p james -n james_kea

You’re going to need to change the “lease-database” section of /etc/kea/kea-dhcp4.conf to tell it to store leases in MySQL:

    "lease-database": {
    #    "type": "memfile", #I just commented these out.
    #    "persist": true,#I just commented these out.
    #    "name": "/var/lib/kea/dhcp4.leases" #I just commented these out.
        "type": "mysql",
        "name": "james_kea",
        "host": "localhost",
        "user": "james",
        "password": "james"
    },

Restart Kea for it to take effect.

As I mentioned before, if Kea can’t communicate with MySQL, it will start but simply not respond to DHCP requests. If you’re in that situation, make sure to check that your SQL and Kea configurations are correct.

You can take a look at the database via the mysql CLI to see if leases are being stored there:

$mysql -u root -p
mysql> show databases;
| Database           |
| information_schema |
| james_kea          |
| mysql              |
| performance_schema |
| sys                |
5 rows in set (0.00 sec)
mysql> use james_kea
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
| Tables_in_james_kea           |
| dhcp4_audit                   |
| dhcp4_audit_revision          |
| dhcp4_global_parameter        |
| dhcp4_global_parameter_server |
| lease4                        |
| lease4_stat                   |
| lease6                        |
| lease6_stat                   |
| lease6_types                  |
| lease_hwaddr_source           |
| lease_state                   |
| logs                          |
| modification                  |
| parameter_data_type           |
| schema_version                |
mysql> SELECT * FROM lease4;
| address   | hwaddr   | client_id                   | valid_lifetime | expire              | subnet_id | fqdn_fwd | fqdn_rev | hostname | state | user_context |
| 167772260 | (binary) |  cisco-0cb0.c4ed.c800-Gi0/0 |           4000 | 2020-03-09 02:16:09 |         1 |        1 |        1 | router   |     0 | NULL         |
1 row in set (0.00 sec)


It shows up better on a terminal with a lot of width, but there is lease data stored for my Cisco router. Success.
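One gotcha when reading that output: the address column stores the IPv4 address as a 32-bit integer, not dotted-quad. Python's ipaddress module (the same one from the ping-sweep script) converts it back:

```python
import ipaddress

# The lease4 "address" column stores the IPv4 address as a 32-bit integer;
# ipaddress converts it back to the familiar dotted-quad form.
lease_address = 167772260  # the value from the SELECT above
print(ipaddress.ip_address(lease_address))  # 10.0.0.100
```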

Optional – MySQL Server/REST API for Configuration Storage

I was really hoping to get to test this, however I learned this functionality is provided through external libraries, or “hooks” that the ISC team offers through a premium package.

The “Configuration Backend”, as they call it, offers the ability to store configurations for networks, some options, and other stuff in MySQL and not a JSON config file. Changes to these configurations can be made through REST calls to the kea-ctrl-agent which processes and strips away the HTTP headers and forwards the JSON data to Kea. Essentially, you could make configuration changes from a script or program on the fly without having to restart or reconfigure the server. Pretty cool, I can see why they charge for it.

As of this writing in March of 2020, the ISC website says you need a premium support subscription for the premium hooks package.

DMVPN (Sans-Ipsec) on Linux/Cisco with Free Range Routing


Free Range Routing is a really cool fork of the Quagga project led by Cumulus Linux; both are implementations of a number of network routing protocols for Linux. While Quagga implemented the classic protocols such as OSPF, BGP, RIP, and PIM that have been in use for decades, FRR implements some new ones and adds features to the old protocols.

Free Range Routing is a bit tricky to build from source at the moment (February 2020) due to some newer packages such as libyang not being readily available in major distributions such as Ubuntu. However, Ubuntu’s Snap technology makes installing a very recent version of Free Range Routing, well, a snap. I have another post that shows you how to install it and run EIGRP on Linux, which is pretty cool because EIGRP is Cisco proprietary (yes, Cisco still owns it despite publishing an RFC).

Another protocol that Free Range Routing implements is NHRP, which is the key component in another Cisco-originated technology, Dynamic Multipoint Virtual Private Network, or DMVPN for short. NHRP serves as the protocol that determines GRE tunnel endpoints.

For this post, I’m going to skip the encryption part of DMVPN that IPsec provides. It’s done with Strongswan on Linux, and it’s tricky to get right and I’ll cover it in another post. But you can get a nice unencrypted overlay network running with NHRP and GRE before you ever start encrypting. I don’t think there’s an official term for DMVPN without IPsec, but I’ve heard networking folk refer to it as “naked” DMVPN. Obviously you don’t want to do this on the Internet (despite the “Internet” node in my topology), but there are some viable use cases for naked DMVPN on a private network and I’ve personally seen it on production networks. Free Range Routing’s NHRP implementation is still experimental, so please be careful.


I don’t like how cluttered this drawing is, but I’m pretty sure all the info on there is relevant to the lab. The idea here is to get PC’s 1, 2 and 3 to be able to ping each other. The 2 Ubuntu routers and 1 Cisco router can’t establish routing with “The Internets” so they’ll need a tunneling mechanism to learn about the networks of the other routers that have PC’s connected. First I’ll get NHRP connected and get GRE tunnels working. Then I’ll run BGP to advertise, and over the “overlay” GRE network.


This post assumes the basic IP connectivity between the Ubuntu and Cisco routers has been set up, as well as LAN connectivity for the PC’s. Specifically, networks,,,,, and have all been configured. With all that boring stuff done, we’ll jump into more interesting things.


This is the DMVPN hub. First order of business is to create a GRE tunnel interface:

$ip tunnel add james_gre mode gre key 42 ttl 64 dev eth0
$ip addr add dev james_gre
$ip link set james_gre up

You might be wondering why the GRE IP address is and not Well apparently it has something to do with the way the good folks at Cumulus Linux implemented it. I’m sure they had their reasons, and they outline a couple in their documentation and on their Github NHRP readme page. That’s just the way it’s done with FRR, so I don’t argue.

Next we need to jump into the FRR CLI and configure NHRP. Remember to type the full path as it didn’t get added to $PATH when FRR was installed (or you can add it to your $PATH, or cd to /snap/bin):

Hello, this is FRRouting (version 7.2.1).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

# conf t
(config)#int james_gre
(config-if)#ip nhrp network-id 1
(config-if)#ip nhrp nhs dynamic nbma
(config-if)#ip nhrp registration no-unique

I’ve skipped setting up “NHRP redirect” (direct spoke-to-spoke communication) as I was unable to get it working in my lab between a Cisco spoke and Ubuntu spoke. I can make it work between two Ubuntu spokes or two Cisco spokes but that’s not really much fun. As I said this implementation is still a work in progress.

Next I’ll set up BGP routing to advertise the networks where the PC’s are connected. I’m using BGP because it lends itself to DMVPN for a number of reasons and also FRR’s NHRP implementation doesn’t support multicast yet, so no EIGRP or OSPF. The Hub will serve as a BGP “route reflector”, which basically means you can avoid a full mesh of BGP neighborships in iBGP (BGP where the AS is all the same). Sounds fancy but it actually simplifies things here, I promise.

(config)#router bgp 1
(config-router)#neighbor remote-as 1
(config-router)#neighbor remote-as 1
(config-router)#address-family ipv4 unicast
(config-router-af)#neighbor route-reflector-client
(config-router-af)#neighbor route-reflector-client


The Ubuntu spoke is pretty similar, except without any route reflector stuff.

$ip tunnel add james_gre mode gre key 42 ttl 64 dev eth0
$ip addr add dev james_gre
$ip link set james_gre up
Hello, this is FRRouting (version 7.2.1).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

# conf t
(config)#int james_gre
(config-if)#ip nhrp network-id 1
(config-if)#ip nhrp nhs dynamic nbma
(config-if)#ip nhrp registration no-unique
(config)#ip route
(config)#router bgp 1
(config-router)#neighbor remote-as 1
(config-router)#address-family ipv4 unicast

You may have noticed an additional command: I added a static route via “”. This was for compatibility with the Cisco device. I could avoid this step and use NHRP redirect with another Ubuntu spoke, and they would communicate directly. However, with a Cisco spoke in play I had to add the route, and all traffic goes through the hub. Maybe it’ll be better next release. Or if there’s a configuration I’m missing, please comment or shoot me an email; I’d love to know, because I practically broke my keyboard trying different configurations.


This is the tunnel and BGP configuration for my Cisco IOSv router:

interface Tunnel0
 ip address
 no ip redirects
 ip nhrp network-id 1
 ip nhrp nhs dynamic nbma
 ip nhrp registration no-unique
 ip nhrp shortcut
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel key 42
router bgp 1
 bgp log-neighbor-changes
 network mask
 neighbor remote-as 1

You probably noticed that the IP address on the tunnel interface is not /32 like the Ubuntu routers, it’s a /24 which is ultimately the size of the DMVPN network. That’s the way Cisco does it, so I don’t argue.


You can run some useful show commands from the FRR CLI on the Ubuntu routers as well as the Cisco CLI. “show ip route” is really useful, you’ll notice “N” routes on the Ubuntu routers which is how FRR NHRP advertises other spoke IP addresses (Read the FRR docs for more detail):

# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

K>* [0/0] via, eth0, 1d01h01m
C>* is directly connected, james_gre, 1d00h57m
N>* [10/0] is directly connected, james_gre, 02:56:29
N>* [10/0] is directly connected, james_gre, 12:37:57
C>* is directly connected, eth0, 1d01h02m
B> [200/0] via (recursive), 01:17:14
  *                         via, james_gre onlink, 01:17:14
B> [200/0] via (recursive), 01:16:53
  *                         via, james_gre onlink, 01:16:53
C>* is directly connected, eth1, 12:10:26
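Since the codes legend tells you "N" marks NHRP-installed routes, a quick filter over captured output picks them out. A small sketch using the sample lines from above (prefixes elided as in the capture):

```python
# FRR prefixes NHRP-installed routes with "N" in "show ip route"; filter a
# captured copy of the output to count them (sample lines from above, with
# the prefixes elided exactly as in the capture).
output = """\
K>* [0/0] via, eth0, 1d01h01m
C>* is directly connected, james_gre, 1d00h57m
N>* [10/0] is directly connected, james_gre, 02:56:29
N>* [10/0] is directly connected, james_gre, 12:37:57
B> [200/0] via (recursive), 01:17:14
"""
nhrp_routes = [line for line in output.splitlines() if line.startswith("N")]
print(len(nhrp_routes))  # 2 -- one per remote spoke
```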

You can also run “show dmvpn” which should show the other endpoints:

# show dmvpn
Src                      Dst                      Flags  SAs  Identity
                                                  n      0
                                                  n      0

And a good old fashioned ping should confirm everything for you. Here I’m pinging from the PC behind the Ubuntu spoke to the PC behind the Cisco spoke:

PC2> ping
84 bytes from icmp_seq=1 ttl=62 time=3.523 ms
84 bytes from icmp_seq=2 ttl=62 time=3.828 ms


You can apparently send NHRP debugs to syslog, but I didn’t find them particularly helpful when I was troubleshooting. I found a packet capture to be the most useful. The key is to get NHRP going: in a packet capture you should see an NHRP registration request and a reply with Code=Success. If you see registration requests with no reply at all, you have a problem. On my Cisco I originally forgot to put in the tunnel key command, and it kept sending a request and getting an ICMP error response. Don’t forget your tunnel key.
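When reading that capture, it helps to know the NHRP packet types, which are assigned in RFC 2332. A healthy registration is a type 3 answered by a type 4:

```python
# NHRP packet types as assigned in RFC 2332. In a healthy capture you should
# see a Registration Request (3) answered by a Registration Reply (4).
NHRP_PACKET_TYPES = {
    1: "Resolution Request",
    2: "Resolution Reply",
    3: "Registration Request",
    4: "Registration Reply",
    5: "Purge Request",
    6: "Purge Reply",
    7: "Error Indication",
}
print(NHRP_PACKET_TYPES[3], "->", NHRP_PACKET_TYPES[4])
```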

DMVPN between Ubuntu and Cisco, pretty cool right?