Wireguard VPN on Ubuntu 20.04

Wireguard is an attempt to improve VPN tunnels in a number of ways – simpler code, less compute, easier configuration, the list goes on. If we’re comparing it to IPsec, I would say that yes, it’s a bit easier to configure. One of the main differences is that it does not rely on the two classic IPsec options for keys – PSK and X.509 certificates. Instead, it relies on public/private key pairs, similar to SSH.

Today, we’re going to configure a very simple policy-based site-to-site Wireguard VPN. By “policy-based” I mean that tunneled traffic is determined by a pre-written configuration in the Wireguard configuration file, not by static or dynamic routes. By site-to-site, I mean this is not a remote-access (road warrior) VPN, it’s designed to connect the subnets that sit behind the two VPN peers.

Topology

The two Ubuntu20.04 machines are serving as routers and VPN peers. No routing is in place for PC1 and PC2 to talk to each other. While the two Ubuntu machines can connect to each other, there is no network-level encryption. That’s where Wireguard comes in.

Generate key pairs

If you haven’t installed it yet, wireguard can be installed with apt-get:

apt-get install wireguard

We’ll use the wg command to generate keys on Ubuntu20.04-1. These two commands will generate a private and public key pair:

wg genkey > private1.key
wg pubkey < private1.key > public1.key

We’ll do the same on Ubuntu20.04-2:

wg genkey > private2.key
wg pubkey < private2.key > public2.key
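
Quick tip: these key files are sensitive, so it’s worth restricting their permissions before generating them. A common pattern – equivalent to the commands above, just with tighter file permissions – looks like this:

umask 077
wg genkey | tee private1.key | wg pubkey > public1.key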

Tunnel configurations

Then we’ll write a tunnel config file for Ubuntu20.04-1 in /etc/wireguard/wg0.conf:

[Interface]
PrivateKey = uOnrdfW/ANcU+fh+RUjlb3TQlFWIdwbOpDpAA+NkonY=
Address = 10.0.0.1/30
ListenPort = 8888

[Peer]
PublicKey = JjSDqdzjPOX+iIAaWCjxxg1yZ76jAd6jfSfv/1AlojI=
Endpoint = 12.0.0.2:8888
AllowedIPs = 10.0.0.0/30, 172.16.0.0/24

You’ll notice I took the keys out of the key files and pasted them into the config file. Under “Interface” we have Ubuntu20.04-1’s own private key, while the public key of Ubuntu20.04-2 is under “Peer”. The “Address” parameter is the “glue network” of the tunnel – the virtual subnet that exists inside the tunnel encryption. The “AllowedIPs” parameter is where tunneled traffic (interesting traffic) is specified. You need to put the destination subnet here. Since this is Ubuntu20.04-1 and 172.16.0.0/24 is the destination on the other side of the tunnel, we put 172.16.0.0/24 as part of the “AllowedIPs” parameter. Put the glue network in here as well.

We can write a similar one for Ubuntu20.04-2 in /etc/wireguard/wg0.conf. For Ubuntu20.04-2, 192.168.0.0/24 is on the other side of the tunnel so 192.168.0.0/24 will be in “AllowedIPs”. Under “Interface”, we have its own private key, while the public key of Ubuntu20.04-1 is pasted under “Peer”.

[Interface]
PrivateKey = KJgjkPQVhOX5CyYYWr7B6v1AbI7H2kEtBi4wdhAES2g=
Address = 10.0.0.2/30
ListenPort = 8888

[Peer]
PublicKey = +GOlIMgLAnLZujraI8m4F6JyWZOpxWGRAPSUqkwrZyg=
Endpoint = 11.0.0.2:8888
AllowedIPs = 10.0.0.0/30, 192.168.0.0/24

To bring the tunnels up, on each Ubuntu machine run this command:

wg-quick up wg0
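
If you want the tunnel to come back up automatically after a reboot, wireguard-tools ships a systemd unit keyed off the config file name (wg0.conf in our case):

systemctl enable wg-quick@wg0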

To verify the configuration is loaded, use “wg show”. I ran this one from Ubuntu20.04-2:

wg show
---
interface: wg0
  public key: JjSDqdzjPOX+iIAaWCjxxg1yZ76jAd6jfSfv/1AlojI=
  private key: (hidden)
  listening port: 8888

peer: +GOlIMgLAnLZujraI8m4F6JyWZOpxWGRAPSUqkwrZyg=
  endpoint: 11.0.0.2:8888
  allowed ips: 10.0.0.0/30, 192.168.0.0/24
  latest handshake: 28 minutes, 48 seconds ago
  transfer: 1.09 KiB received, 2.63 KiB sent

We should be able to ping the tunnel glue network IPs (here from Ubuntu20.04-2):

root@u20vm:/home/james# ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=5.22 ms

And PC1 and PC2 can ping each other too:

PC1> ping 172.16.0.2

84 bytes from 172.16.0.2 icmp_seq=1 ttl=62 time=4.942 ms

You’ll notice we did not have to add any routes for 192.168.0.0/24 and 172.16.0.0/24 to be able to reach each other. The Wireguard configuration added routing automatically, which is why I am calling this type of tunnel “policy-based”.
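
If you’re curious, you can list the routes wg-quick installed for the AllowedIPs subnets on either peer (I’m omitting the output here):

ip route show dev wg0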

And the most fun part, Wireshark. We can see the traffic going back and forth, and the protocol is labeled “Wireguard”. Pretty cool, right?

Wireshark capture of Wireguard traffic

Hope you liked this one.

Rocky Linux 8.5 Installation

Since Red Hat has decided to sunset CentOS, Rocky Linux has emerged as its community-driven replacement. You might be surprised at how quickly you can get a server up and running. We’ll also install the nginx webserver just to see how quickly we can get that going as well.

Let’s get right into it.

Download the installer

Rocky Linux Download Page

You can get the installer from https://rockylinux.org/download/ painlessly by downloading the minimal .iso image, it’s about 2GB. Once you have that, load it up on a USB (or a DVD, I guess. Does anybody still use those?!) or in my case, Qemu in GNS3 for a VM.

Follow the prompts

We’ll immediately be taken to the installer.

Initial installer page

Just select “Install Rocky Linux 8”. There’s just one page where you select your locale, software to install, partitions, etc. We can get through it minimally by just setting the user name and clicking “Begin Installation”, but feel free to play with the settings.

Installation takes about 5 minutes, depending on what is being installed and how much juice your system has.

Installing Rocky Linux
Installation complete

Reboot, and we’re ready to log in!

Install nginx web server

My login prompt came right up:

Login prompt

If you neglected to connect the network on the installation settings page like I did, you’ll need to do that now, using network manager’s nmcli.

nmcli connection modify ens3 ipv4.method auto
nmcli connection down ens3
nmcli connection up ens3

And with that we’re able to get an IP address via DHCP as well as DNS configuration, so Internet is good to go.

Let’s do something interesting – we’ll install nginx web server. If you’re familiar with Redhat commands, it uses the “yum” package manager to allow for quick installation of pre-built binaries. We can install nginx with one command:

sudo yum install nginx

Just that command will get it installed.

Nginx web server installation

We’ll need to start it up with systemd:

sudo systemctl start nginx
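
If you also want nginx to come back up after a reboot, enable it as well:

sudo systemctl enable nginx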

We’ll need to allow traffic to come through the firewall. For me this is a test server in a test environment, so I had no issue with turning off the firewall completely. If you are using this in production or it has a public IP address, obviously don’t do that – configure firewalld responsibly to only allow permitted web traffic. To turn the firewall off, just issue this command for systemd:

sudo systemctl stop firewalld
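
If you’d rather keep firewalld running, a minimal sketch of opening just web traffic instead (assuming the default public zone) looks like this:

sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload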

And now I can access the Rocky Linux Nginx default page!

Nginx web server default page customized for Rocky Linux

Enjoy your fresh installation of Rocky!

Basic Firewall With Iptables on Ubuntu 20.04

Ubuntu comes with iptables, a configuration utility that allows you to manage rules for Netfilter, the Linux Kernel firewall. Using iptables you can manipulate packets as they leave, enter, or are forwarded across network interfaces on a Linux operating system. Today we’ll look at how to block SSH traffic going through an Ubuntu 20.04 system acting as a router.

Topology

I have a basic configuration here with three IP subnets – 192.168.0.0/24, where the SSH client lives; 10.0.0.0/30, a transit network between routers; and 172.16.0.0/24, where the SSH server lives. We will be configuring Ubuntu20.04-Firewall with iptables to block SSH traffic.

Installation

Iptables requires no external package installation with apt-get or otherwise, it comes stock-and-standard with a fresh Ubuntu 20.04 Server or Desktop OS. You can verify the status of your iptables rules like so:

iptables -S

-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT

By default, iptables is configured to pretty much do nothing. No packets are filtered and no NAT (network address translation) is configured.

Iptables syntax

The syntax for iptables can be quite confusing, and since I myself am not configuring them on a daily basis, I always need to reference documentation (or someone else’s blog) for how to configure something specific. It’s a good idea to take a quick look at the basic syntax though. The command structure looks like this, taken straight from the iptables manual page:

iptables [-t table] [mode] [chain] [rulenum] [rule-specification] [options]
  • table is which type of table you want to use. Usually it’s filter for dropping disallowed packets, or nat for translating packets. Filter is default if none is specified.
  • mode is an action, -A (append), -I (insert), -D (delete), -R (replace), -L (list), -P (set policy, the default action for a chain) are all valid modes.
  • chain is the part of the routing process to which your rule applies, there are five chains – PREROUTING, INPUT, FORWARD, OUTPUT, POSTROUTING.
  • rulenum gives the rule a sequence, rules are applied one-by-one and when there is a match, the corresponding action is taken and all subsequent rules are ignored.
  • rule-specification is the actual rule itself. There are many parameters that can be specified here, like protocol, source, destination, etc.
  • options give you some customization – for example, you can add -v for verbose output.

Now that we have a basic idea of what iptables does, let’s drop some SSH packets.

Verifying what we’re blocking

On Ubuntu20.04-Firewall in my topology, I can see that Ubuntu20.04-SSH_Client at 192.168.0.2 has SSH access to Ubuntu20.04-SSH_Server at 172.16.0.2:

james@client$ ssh james@172.16.0.2
james@172.16.0.2's password:

james@server$

SSH is a network application protocol that usually uses TCP port 22 to establish an encrypted command-line session between two computers. If we do a quick Wireshark capture between the Ubuntu20.04-Firewall and CiscoIOSv15.6(1)T-1 router, we can see this SSH traffic traversing the link:

Wireshark capture of SSH traffic in GNS3

Starting at the top you can see the TCP handshake on port 22 starting with the SYN flag. A few packets later SSHv2 packets begin. All packets use a randomized source TCP port (source is different per session) and a destination TCP port of 22. So we can safely say that in this experiment, if we drop TCP destination port 22, we will effectively block SSH traffic between the client and server.

Configure the iptables rule

A rule to drop SSH packets on TCP 22 can be configured in one line on Ubuntu20.04-Firewall:

iptables -A FORWARD -p tcp --dport 22 -j DROP
  • -A specifies we are appending a rule.
  • FORWARD applies the rule to packets being forwarded from one interface to another
  • -p tcp is a rule-specification to apply the rule to TCP packets
  • --dport 22 applies the rule to destination TCP port 22
  • -j DROP tells iptables to drop the packet if the previous conditions match

Verify

Let’s see if the client can connect now:

james@client$ ssh james@172.16.0.2
ssh: connect to host 172.16.0.2 port 22: Connection timed out

It worked! Packets on TCP port 22 are being dropped at the Ubuntu firewall.
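
One more note before moving on: rules added with the iptables command live only in memory and will be gone after a reboot. If you want the rule to persist, one common approach (assuming you’re fine installing an extra package) is iptables-persistent, which saves the current ruleset and restores it at boot:

apt-get install iptables-persistent
netfilter-persistent save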

Please keep in mind – this is a simple block on TCP destination port 22. TCP and UDP use the concept of “well-known” ports, which means servers run their protocols on a port that everyone “knows”. This is so that someone connecting to a server for the first time, with no previous knowledge of its configuration, will be able to connect, since both sides assume that TCP port 22 is used for SSH. However, if someone configures SSH on the client and server in this example to use any other port, it will go right around the filter.

Reach out if you have issues or something to share!

Create Your Own Certificate Authority With OpenSSL and NGINX On Ubuntu 20.04

OpenSSL is probably the most widely used cryptographic tool in existence. Deployed on millions of machines, from cloud servers and containers to small embedded devices like network routers, it performs many functions to secure communications between devices. Security in computing and networking is a gigantic topic. So gigantic, in fact, that no single blog post can come even close to covering everything. To keep things simple, I’ll narrow things down to just getting our own environment created, including a “certificate authority” (CA), a web server and a web client. These days, SSL is synonymous with TLS. TLS is just the name given to the newer versions of the SSL protocol.

Topology

The network topology in this post isn’t very important, we’re going to be looking at how a certificate authority works with SSL encryption. But I’ve created a basic IP subnet 10.0.0.0/24 so we can watch the SSL traffic flow in Wireshark. All three machines are running Ubuntu 20.04. All SSL-related functions will be performed with OpenSSL.

A (very) basic explanation of a certificate authority

Certificate authorities, SSL, and Public Key Infrastructure (PKI) are a complex topic. To explain the whole thing would take lots of different explanations to break down each piece of the bigger picture and explain the part it plays. So to avoid having a gigantic explanation here, I’m going to sum up the function of a CA with a single sentence:

“A certificate authority provides a way for computers to verify that other computers they communicate with are in fact the domain name (i.e., example.com) they claim to be, and not a fraud.”

I hope that helps. I’ll take a crack at explaining PKI in another post.

Create the CA

A CA can be created with OpenSSL with a couple quick commands. When this process is complete, we will have two files, a private key and a digital certificate. First create a private key on our CA Ubuntu machine (at 10.0.0.3):

openssl genrsa -aes128 -out james_ca.key 2048

Now we’ll create a certificate. The private key’s corresponding public key will be on this certificate. We’ll be asked some questions after issuing the command, most of them are not important. The Common Name field should ideally be unique:

openssl req -x509 -new -nodes -key james_ca.key -sha256 -days 365 -out james_ca.pem
----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Idaho
Locality Name (eg, city) []:Boise
Organization Name (eg, company) [Internet Widgits Pty Ltd]:James
Organizational Unit Name (eg, section) []:James
Common Name (e.g. server FQDN or YOUR name) []:James_CA
Email Address []:someone@example.com

Our server is now a CA! The world’s CA’s are considered “trusted” because web browsers everywhere come with their certificates pre-installed. The CA we just created is trusted by no one and pre-installed in nothing, but we can get our Ubuntu client machine to trust the CA by manually loading the CA certificate into the client’s certificate store.

Load the CA certificate on the client

We’ll use secure copy (scp) which runs over SSH to copy the CA certificate to the Ubuntu client. The command looks like this:

scp james@10.0.0.3:james_ca.pem .

james_ca.pem                                  100% 1456   605.0KB/s   00:00

The first line above is the command, the second is the output. Now we need to add this to Ubuntu’s trusted certificate store. A couple commands will do the trick.

cp james_ca.pem /usr/local/share/ca-certificates/james_ca.crt
update-ca-certificates 

Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.

The first two lines are the commands, the rest is output. We’ve copied james_ca.pem (and changed its extension to .crt) to /usr/local/share/ca-certificates, which is where Ubuntu keeps certificates that it trusts. The update-ca-certificates command loads the new certificate for use in applications, like a web client or browser.

We have done manually what the world’s web browsers have built-in. We have loaded the certificate of a CA we trust. Now any other certificate presented to this client will be trusted if it has been signed by the CA.

Create a signing request on the server

Now we need to create what is called a “signing request”, which is actually just another file in a specific format. We take this signing request file to the CA, which uses it to create a signed certificate that can be returned to the server and loaded into the nginx web server for use. We’ll be asked the same set of questions this time, since a certificate is going to be created. The important field is the Common Name, which is usually the server’s domain name, but in this simple topology it’s the IP address (since I haven’t set up DNS) that the client will use to access the nginx web server. The signing request is created with OpenSSL like this:

openssl req -newkey rsa:2048 -keyout private.key -out james.csr

Generating a RSA private key
.................+++++
..............................................................................................+++++
writing new private key to 'private.key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Idaho
Locality Name (eg, city) []:Boise
Organization Name (eg, company) [Internet Widgits Pty Ltd]:James
Organizational Unit Name (eg, section) []:James
Common Name (e.g. server FQDN or YOUR name) []:10.0.0.2
Email Address []:someone@example.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

The first line is the command, the rest is output. The passphrase you use for the private key will have to be removed later, I’ll show you how at that time.

Now we can take the file james.csr to the CA and produce a signed certificate.

Load the signing request on the CA

Let’s again use secure copy to bring the signing request file to the CA. On the CA’s cli:

scp james@10.0.0.2:james.csr .

The signing request file is now on the CA. We can create a certificate (server.pem) signed with the CA’s private key like this (the -CAcreateserial flag creates the CA’s serial number file the first time it signs something):

openssl x509 -req -in james.csr -days 365 -CA james_ca.pem -CAkey james_ca.key -CAcreateserial -out server.pem

Just a quick recap of the different files in use here:

  • james.csr is the signing request from the server
  • james_ca.pem is the CA’s certificate
  • james_ca.key is the CA’s private key
  • server.pem is the CA-signed certificate that we can now return to the server for use in nginx
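
Before sending it back, we can sanity-check that the new certificate really was signed by our CA, using openssl verify on the CA machine. It should report “server.pem: OK”:

openssl verify -CAfile james_ca.pem server.pem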

Great! We’ve got a freshly signed certificate ready to go. Let’s copy it to the server and load it into nginx.

Load the signed certificate into nginx

Let’s use scp again to copy server.pem to the server. On the server’s cli:

scp james@10.0.0.3:server.pem .

Now we have the signed certificate on our server. By the way, nginx can be quickly installed like this:

apt-get install nginx

And check it’s working with systemctl status nginx:

As I said earlier, we’ll need to remove the passphrase encryption from the server’s private key before it can be used in nginx. It’s one command, like this:

openssl rsa -in private.key -out private_unencrypted.pem

Now that the private key has been unencrypted into a new file called private_unencrypted.pem, it can be used in the nginx config file. The default nginx config is in /etc/nginx/sites-available/default. I added these lines – they’ll be a bit different for you if your folders and/or files are named differently:

listen 443 ssl default_server;
ssl_certificate /home/james/server.pem;
ssl_certificate_key /home/james/private_unencrypted.pem;
server_name 10.0.0.2;

Restart nginx:

systemctl restart nginx

If you don’t get any errors, you are good to go!

Verify SSL is working from the client

There are a lot of commands to get the certificate presented from the server, view it, verify it, etc. But for now, we can just validate the SSL connection. You can do that really easily with curl from the client:

curl https://10.0.0.2

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

.... omitted for brevity

We can see we properly got an HTML document back from the server. We can also see it in Wireshark if you capture traffic going to and from the client and server. We can see the entire SSL (TLS) handshake go through without any errors:
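
If you want to look at the certificate the server is presenting rather than just the HTML, openssl’s s_client can dump it for you – a quick check from the client (output omitted):

openssl s_client -connect 10.0.0.2:443 < /dev/null | openssl x509 -noout -subject -issuer -dates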

Ironically, the CA has not had its own certificate installed in its certificate store (unless you did that on your own). So we can actually see using curl from the command line of the CA machine what it would look like to try to access the server without the certificate installed. From the CA:

curl https://10.0.0.2

curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

This is the command line equivalent of this screen in your browser:

Since the CA’s certificate hasn’t been installed in its certificate store, the connection to the server at 10.0.0.2 can’t be verified and so you get a nasty message.

This was a long one but I hope it helped you understand keys, certificates and certificate authorities! It sure helped me!

Dockerized phpipam in GNS3

If you’re keeping track of all your IP addresses from your environment in a really big and messy Excel file, you may want to consider switching to an IP address management tool. One such tool is phpipam, a web-based tool that allows you to store your IP addresses in a central database (a SQL database, to be specific). The reasons why that approach is far superior to an Excel file are pretty clear – first of all, no more emailing a million different copies of that Excel file. But it has other advantages as well; for example, if your software development team wants to check the availability of, or reserve, a new IP address, subnet or VLAN from code, they can do it via the phpipam API without ever clicking on anything.

A testing instance of phpipam can be brought into your GNS3 environment quickly using Docker! It requires a little hacking, but nothing too ambitious. If you haven’t got GNS3 or Docker installed or you don’t know how to add a Docker image to GNS3, check out my post on that topic.

Topology

We’re not doing anything fancy here, just the phpipam docker container connected to the “NAT” cloud node. By default, the NAT cloud node uses a virtual adapter with IP subnet 192.168.122.0/24. The NAT adapter is at .1, and I’ll set my phpipam container to use .2. This setup will allow us to access the phpipam web server in the container at 192.168.122.2 via a web browser from our desktop computer that runs GNS3.

Build a custom docker image

We’re going to quickly build a custom docker image from the official phpipam image on Docker Hub. If you’re using a GNS3 VM, you can do this via a cli session on the VM. If you’re using Linux, just do this from any terminal. Make a directory for your Dockerfile:

mkdir jamesphpipam
cd jamesphpipam
vi Dockerfile

Now we’re going to write the docker commands for our custom image. MySQL server needs to be installed, and the directory “/run/mysqld” needs to be created so MySQL can create a Unix socket there:

FROM phpipam/phpipam-www

RUN apk add mysql mysql-client
RUN mkdir /run/mysqld
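
Then build the image so GNS3 has something to add – a quick sketch, assuming you’re still in the jamesphpipam directory (don’t forget the trailing period):

docker build -t jamesphpipam .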

Now we have an image (it’s based on Alpine Linux) ready to fire up in GNS3. You’ll need to add it from GNS3 preferences -> docker containers -> new. Go through all the screens and use defaults, except you’ll want to set the “start command” to “/bin/sh” to give you command line access when you double click on it from the GNS3 canvas.

Configure MySQL and Apache

First we need to open up the cli on the container and set its IP address to 192.168.122.2 (ip addr add 192.168.122.2/24 dev eth0). Start up both mysqld and httpd (MySQL Server and Apache Web Server), like this:

httpd
mysqld --user=root &

Make sure you use the ampersand at the end of your mysqld command, so it runs in the background.

To set the MySQL user and password, I had to login to the MySQL cli and run these commands in the phpipam docker container:

mysql -u root
ALTER USER 'root'@'localhost' IDENTIFIED BY 'SomeSecret';

Now we should be able to access the phpipam page at 192.168.122.2 from any web browser!

Configure new phpipam installation

If you click on “New phpipam installation”, it will take you to a page to select the SQL database installation type:

Let’s select “Automatic database installation”. Then we just put in the user “root” and password “SomeSecret” that we entered in our mysql cli earlier:

And our database is installed! Now we just need to set the admin password on the next screen:

Click on “Proceed to login”, login with user “admin” and the password you just set. You’ll be taken to the main phpipam page!

Hit me up if you run into any snags!

Add Dockerized Bind DNS Server to GNS3

I posted a while ago on how to install and configure the Bind DNS server on Ubuntu 18.04, and got a request from a reader with help on getting Dockerized Bind into GNS3. This post is the result of my tinkering with that lab.

The organization that oversees the Bind open source project also releases an official Docker image through the Docker hub that anyone can access. Docker container technology can be tricky at first for systems and network engineers to wrap their heads around. Docker containers are not an entire operating system – full operating systems are designed to run many processes at once. Docker containers are designed to run one process and one process only. They only contain the software and libraries needed to run that process.

GNS3 has a very cool integration with Docker, however. It allows you to add full network adapters to your containers and copies in some handy tools to make the command line environment usable. But since many of the familiar OS tools are not included in most Docker containers like they would be with a standard OS, it can be challenging to get things working right.

If you are using Ubuntu Linux, feel free to check out my guide on installing GNS3 and Docker on Ubuntu 20.04. If you are using Windows or Mac with the GNS3 VM, Docker is already installed on the VM.

Topology

My topology is simple – a single vlan and IP subnet of 10.0.0.0/24. My Bind DNS server will reside at 10.0.0.3, with two Alpine Linux containers at 10.0.0.1 and 10.0.0.2. I walk through getting Alpine Linux containers installed on the post I linked above, if you need help.

Build your own image based off the official ISC Bind image

First open up a shell or terminal on the GNS3 VM or wherever the GNS3 server is located. If you don’t know how to open a shell, they walk you through it on the official GNS3 docs:

https://docs.gns3.com/docs/emulators/create-a-docker-container-for-gns3/

Create a directory where you can write your Dockerfile and build the image:

mkdir jamesbind
cd jamesbind

vi Dockerfile

Feel free to use whatever text editor you like, I’m a vi person. We’re going to write a Dockerfile that looks like this:

FROM internetsystemsconsortium/bind9:9.11
RUN apt-get update
RUN apt-get install vim -y

Basically all this does is pull the official Bind Docker image and run some commands on the image. Namely, we are updating the package lists and installing vim. We need to do this because this docker image does not have a text editor installed, and we have to edit the Bind configuration files.

Full disclosure: there is another, much better way than manually editing config files from inside the container. You can write the config files in the same folder as the Dockerfile, and add them to the Docker image when you build it. However, I think it’s best for learning and troubleshooting purposes to manually edit the text files, so that’s the route I’m going.
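
For reference, that approach would look something like this – a sketch that assumes you’ve written named.conf.options, named.conf.local and the zone file alongside the Dockerfile:

FROM internetsystemsconsortium/bind9:9.11
COPY named.conf.options /etc/bind/named.conf.options
COPY named.conf.local /etc/bind/named.conf.local
COPY db.jamesmcclay.com /etc/bind/zones/db.jamesmcclay.com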

Build your image (the -t switch gives it a “tag”, which is basically a name):

docker build -t jamesbind .

Don’t forget the period at the end, that’s important. You should now have a fresh docker image with bind and vi installed in it.

Add your image to GNS3

From the GNS3 preferences window, you can now add your image to the list of devices available.

Click through and use the defaults except when you get to the “Start command” window. You’ll want to set that to /bin/bash:

Now you’re ready to use your image in GNS3!

Fire up Bind

Drag all the containers out, connect and double click on them to get a terminal. You should be able to configure IP settings normally using iproute2 commands (ip addr add 10.0.0.1/24 dev eth0, etc). For the Bind container, let’s write our config files. As I mentioned many cycles ago in my Bind server post, there are three Bind config files:

/etc/bind/named.conf.options –> Configures BIND9 options
/etc/bind/named.conf.local –> Sets zone file name and gives its location
/etc/bind/zones/db.jamesmcclay.com –> The actual zone file with DNS records.

First let’s hop into our Bind container (just double click on it) and configure named.conf.options. Mine looks like this:

options {
        directory "/var/cache/bind";
        listen-on { any; };
};

Now on to named.conf.local. This is where you declare your zone. Mine is going to be jamesmcclay.com, I just made it up.

zone "jamesmcclay.com" {
    type master;
    file "/etc/bind/zones/db.jamesmcclay.com";
};

Now for the zone file that we indicated above. It needs to be created, so let’s create both the zones folder and the jamesmcclay.com zone file (from inside /etc/bind, since that’s where named.conf.local points):

cd /etc/bind
mkdir zones
cd zones
vi db.jamesmcclay.com

@               IN      SOA     ns.jamesmcclay.com.    root.jamesmcclay.com. (
                                2               ; Serial
                                604800	        ; Refresh
                                86400           ; Retry
                                2419200         ; Expire
                                604800 )        ; Negative Cache TTL
;
@               IN      NS      ns.jamesmcclay.com.
ns              IN      A       10.0.0.3
alpine1         IN      A       10.0.0.1
alpine2         IN      A       10.0.0.2

Finally, fire up Bind by running the “named -g” command. This will run it in the foreground with debug output, which will be handy. Alternatively, you can just run “named” and it’ll go in the background. When you run it, you’ll be looking for a line that says your zone file was loaded. “all zones loaded” can be misleading – if there are errors in your zone, it’ll report them and then still say all zones were loaded. Make sure you read the output carefully:

named -g
<...removed for brevity...>
26-Oct-2021 23:49:14.231 zone jamesmcclay.com/IN: loaded serial 2
26-Oct-2021 23:31:49.828 all zones loaded
26-Oct-2021 23:31:49.829 running

In your Alpine containers, add “nameserver 10.0.0.3” to resolv.conf to tell them to use the Bind server for DNS resolution:

echo "nameserver 10.0.0.3" > /etc/resolv.conf

Testing your setup

First let’s ping ns.jamesmcclay.com (the Bind container) from alpine-1:

ping ns.jamesmcclay.com

PING ns.jamesmcclay.com (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=1.080 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=1.073 ms

It works! We can see in a wireshark packet capture the DNS request from 10.0.0.1 and response from 10.0.0.3:

Pinging to alpine2.jamesmcclay.com also works:

ping alpine2.jamesmcclay.com

PING alpine2.jamesmcclay.com (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.999 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=1.087 ms
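
If you want to query the DNS server directly instead of relying on ping, busybox’s nslookup in the Alpine containers works too (assuming resolv.conf points at 10.0.0.3 as above):

nslookup alpine2.jamesmcclay.com 10.0.0.3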

Troubleshooting

The Bind configuration files are really sensitive to anything that’s left out. Be sure and check to see if you forgot a semicolon or that your zone file is properly formatted with all required entries in place. And again, I highly recommend using the “named -g” when you are testing, it’ll give you some big hints as to what is wrong with your configuration.

If your Bind server is running with no config errors and something still isn’t working, it could be a network issue. Make sure to do a packet capture to see if packets are actually flowing and they’re what you expect! Sometimes after troubleshooting for a long time I do a packet capture only to find packets were never leaving the network interface due to something I forgot, like adding an IP address or route somewhere.

Good luck! Feel free to reach out with questions about your lab, I’m always happy to help.

Install GNS3 and Docker on Ubuntu 20.04 for Cisco and Linux Network Labs

Every major OS has its place, so I’m not hoping to get into that discussion, but I find that Ubuntu Linux works really well for creating network labs in GNS3. If you’re not familiar with GNS3, you’re missing out. It allows you to pull real VMs, and even Docker containers, into an emulated network environment for testing and experimentation. You can run Cisco routers and switches, other vendors’ network devices, Windows Desktop and Server, Linux and any other OS that is supported by Linux’s QEMU/KVM hypervisor, which is pretty much anything. GNS3 has many features, but today we’ll just look at getting it installed, along with Docker.

Why is Ubuntu better to run GNS3? You may have noticed that on Windows or Mac versions of GNS3, the server has to run on a VM to work properly. That server VM runs a Linux OS, specifically Ubuntu. So using Ubuntu as your desktop OS means you’re cutting out all of that complexity with the server VM, not to mention the additional RAM consumed. Simply put, GNS3 runs the way it’s supposed to on Ubuntu. Not to knock the Windows and Mac versions, the GNS3 team worked hard on those. But in my humble and honest opinion, Ubuntu just works better for GNS3.

Most folks stick to using VM’s in GNS3, but the Docker integration is pretty awesome and has some very real benefits over VM’s. Any docker container you have installed on the same system as the GNS3 server can be pulled into GNS3, although whether it will work properly depends somewhat on what the container has installed in it.

GNS3 Installation

The official GNS3 Ubuntu releases can be found at their PPA at:

https://launchpad.net/~gns3/+archive/ubuntu/ppa

The PPA can be added and GNS3 installed with just a few quick commands, although it’s a relatively big download:

sudo add-apt-repository ppa:gns3/ppa
sudo apt-get update
sudo apt-get install gns3-gui

When you first run GNS3, you’ll notice that the default option is not a VM, it’s to run the server locally. No VM needed!

At this point, GNS3 is installed, although you may have to run this command to get wireshark captures working:

sudo chmod 755 /usr/bin/dumpcap

Docker Installation

I’ll just be following the official Docker instructions here, they work great:

https://docs.docker.com/engine/install/ubuntu/

These steps install Docker from Docker’s official repository, which is probably the “best” option. There is a convenience one-liner script, but we all know that’s not a good habit to get into, so we’ll avoid that.

First install dependencies:

 sudo apt-get update
 sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

Add the Docker official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Add the stable repository:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

And install:

 sudo apt-get update
 sudo apt-get install docker-ce docker-ce-cli containerd.io

To avoid getting permissions errors in GNS3, you’ll need to add your user to the docker group. You’ll need to log out/log in or restart for this to take effect:

sudo usermod -aG docker ${USER}
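
Once you’ve logged back in, a quick way to confirm your user can talk to the Docker daemon without sudo:

docker run hello-world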

Add a Docker container to your GNS3

Now that Docker is installed, pulling a Docker image from the Docker Hub is easy. A popular one is Alpine Linux because it’s so small, but packs lots of popular tools and libraries:

docker pull alpine

Now you should be able to add this image to GNS3. Go into GNS3, go to preferences, and all the way at the bottom where it says “Docker containers”. Click on “new”, and you should be able to select the Alpine Linux image from the drop-down menu:

Click through and leave the defaults, but you might want two network adapters instead of one, in case you want it to be a router. Now just drag a couple containers out onto the canvas:

At this point, you should be able to double click on these and get a busybox shell, which will let you configure IP settings and the like. You may have noticed that the startup of these containers is near-instantaneous, and they consume very little RAM. One of the many perks of the lightweight nature of Docker containers. Enjoy!

IPsec on Linux – Strongswan Configuration w/Cisco IOSv (IKEv2, Route-Based VTI, PSK)

IPsec is a cool tool for encrypting connections between network nodes, usually over the Internet (but not always). There are many different ways to configure an IPsec tunnel. Many tunnels use a policy-based approach, which means the traffic that is sent through the tunnel is pre-defined using a “policy” that is part of the configuration. That doesn’t always work – you may need to dynamically change what traffic goes through using a routing protocol like OSPF or EIGRP, without having to bring the tunnel down and reconfigure it. Thus there is another option, a “route-based” tunnel.

There are two different methods for creating a route-based IPsec tunnel, the first using GRE, which inserts a second GRE IP header into packets going into the tunnel. The second is VTI, which operates in a similar manner to GRE but under the hood it’s quite a different implementation.

VTI was originally a way to save IP space on point-to-point links in the early networking days before subnetting. It was adapted as a way to assign routes to an IPsec tunnel. Part of its legacy is a “numbered” or “unnumbered” mode. Originally, VTI could inherit an IP from another interface and save IP address space. This unnumbered mode is pretty strange, so today we’ll take a look at numbered mode to keep things familiar.

Topology

Pretty simple – we’re trying to get the Windows 10 box at the bottom left and the Ubuntu Server 18.04 box at the bottom right to ping each other. We’ll configure a tunnel between the Ubuntu box at the top left and the Cisco IOSv router at the top right. The underlying 1.1.1.0/30 network serves to look like a WAN network, while the network inside the VTI tunnel will be 10.0.0.0/30.

Installation

Ubuntu18.04-FRR-1


Installing Strongswan on Ubuntu 18.04 is pretty easy with apt-get:

apt-get install strongswan

Configuration

Ubuntu18.04-FRR-1

First you need to add a config to /etc/ipsec.conf, something that looks like this:

conn tunnel 
        leftupdown=/usr/local/sbin/ipsec-notify.sh #run this script on start
        left=1.1.1.2
        leftsubnet=0.0.0.0/0 #all traffic
        right=1.1.1.1
        rightsubnet=0.0.0.0/0 #all traffic
        ike=aes256-sha2_256-modp1024!
        esp=aes-sha2_256!
        authby=secret
        auto=start
        keyexchange=ikev2
        mark=32 # only accepts packets with this mark
        type=transport

Then configure the PSK in /etc/ipsec.secrets:

1.1.1.2 1.1.1.1 : PSK '12345'

Then create the tunnel script that is referenced in the config. For me the file is located at /usr/local/sbin/ipsec-notify.sh, matching the leftupdown line above:

ip link add vti0 type vti local 1.1.1.2 remote 1.1.1.1 key 32
ip link set vti0 up
ip addr add 10.0.0.2/30 dev vti0
ip link set dev vti0 mtu 1400

Note the “key 32” in the first line above. That identifies what traffic strongswan should encrypt and corresponds to the “mark” in the strongswan config. It’s important.
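
Also make sure the script is executable, otherwise charon won’t be able to run it (adjust the path if you saved the script somewhere else):

chmod +x /usr/local/sbin/ipsec-notify.sh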

Next you need to add a line for your VTI interface in /etc/sysctl.conf that looks like this, to disable kernel policy lookups since this is a routed interface (run “sysctl -p” afterwards to apply it):

net.ipv4.conf.vti0.disable_policy=1

Finally, you need to tell Charon (Strongswan’s IKE daemon) to not handle routing. We’ll handle routing on our own. In /etc/strongswan.d/charon.conf, find this line:

install_routes = no

Make sure it’s set to no. Tell strongswan to restart and the tunnel should attempt a connection:

ipsec restart

CiscoIOSv15.6(2)T-1

My running-config is abbreviated, but it looks like this:

crypto ikev2 proposal james-proposal 
 encryption aes-cbc-256
 integrity sha256
 group 2
!
crypto ikev2 policy james-policy 
 proposal james-proposal
!
crypto ikev2 keyring james-ring
 peer remote-router-james
  address 1.1.1.2
  pre-shared-key 12345
!   
crypto ikev2 profile james-profile
 match identity remote address 1.1.1.2 255.255.255.255 
 authentication local pre-share
 authentication remote pre-share
 keyring local james-ring
!
crypto ipsec transform-set james-trans esp-aes esp-sha256-hmac 
 mode transport
!
crypto ipsec profile james-protect-vti
 set transform-set james-trans 
 set ikev2-profile james-profile
!
interface Tunnel0
 ip address 10.0.0.1 255.255.255.252
 ip mtu 1400
 tunnel source 1.1.1.1
 tunnel mode ipsec ipv4
 tunnel destination 1.1.1.2
 tunnel protection ipsec profile james-protect-vti

Static routing

Ubuntu18.04-FRR-1

Pretty simple, just add a route for 172.16.0.0/24 pointing to 10.0.0.1

ip route add 172.16.0.0/24 via 10.0.0.1

CiscoIOSv15.6(2)T-1

Simple here too:

ip route 192.168.0.0 255.255.255.0 10.0.0.2

Optional – EIGRP Routing

Remove those static routes and we’ll try a protocol to achieve the aforementioned dynamic routing.

Ubuntu18.04-FRR-1

Check out my post on setting up EIGRP with Free Range Routing on Linux for the installation. Assuming you have that done, just log into the FRR shell on Ubuntu; it’s in /snap/bin, or just use the full path:

/snap/bin/frr.vtysh
Hello, this is FRRouting (version 7.2.1).
Copyright 1996-2005 Kunihiro Ishiguro, et al.
james# 
james# conf t
james(config)# router eigrp 10
james(config-router)# network 192.168.0.0/24
james(config-router)# network 10.0.0.0/30
james(config-router)# end
james# 

CiscoIOSv15.6(2)T-1

On a Cisco IOSv router, it’s pretty simple:

Router(config)#router eigrp 10
Router(config-router)#network 172.16.0.0 0.0.0.255
Router(config-router)#network 10.0.0.0 0.0.0.3
Router(config-router)#end

Verification

Ubuntu18.04-FRR-1

Verifying the tunnel on Ubuntu is done with “ipsec statusall”, although there are more specific commands. This will do since we only have one tunnel. I’ve abbreviated the output, but these lines say it all:

Security Associations (1 up, 0 connecting):
      tunnel[1]: ESTABLISHED 58 minutes ago, 1.1.1.2[1.1.1.2]...1.1.1.1[1.1.1.1]
      tunnel[1]: IKEv2 SPIs: 9a137e96ee332b6b_i* 1599c6291be65825_r, pre-shared key reauthentication in 110 minutes
      tunnel[1]: IKE proposal: AES_CBC_256/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_1024
      tunnel{11}:  INSTALLED, TRANSPORT, reqid 1, ESP SPIs: c72f4e27_i de8fcbc0_o
      tunnel{11}:  AES_CBC_128/HMAC_SHA2_256_128, 0 bytes_i, 0 bytes_o, rekeying in 30 minutes
      tunnel{11}:   1.1.1.2/32 === 1.1.1.1/32

For routing, just issue “ip route” and you’ll see your static or EIGRP routes:

root@james:/snap/bin# ip route
1.1.1.0/30 dev eth1 proto kernel scope link src 1.1.1.2  
10.0.0.0/30 dev vti0 proto kernel scope link src 10.0.0.2  
172.16.0.0/24 via 10.0.0.1 dev vti0 proto 192 metric 20  
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.1

CiscoIOSv15.6(2)T-1

Lots of ways to show what’s going on with an IPsec tunnel on an IOS device, and “show crypto ipsec sa peer” is one of them. I usually filter the output with “| i” to just include the status lines:

Router#show crypto ipsec sa peer 1.1.1.2 | i Status
        Status: ACTIVE(ACTIVE)
        Status: ACTIVE(ACTIVE)
        Status: ACTIVE(ACTIVE)
        Status: ACTIVE(ACTIVE)

Also “show ip route” will show your static or EIGRP routes (D for EIGRP):

      1.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
C        1.1.1.0/30 is directly connected, GigabitEthernet0/1
L        1.1.1.1/32 is directly connected, GigabitEthernet0/1
      10.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
C        10.0.0.0/30 is directly connected, Tunnel0
L        10.0.0.1/32 is directly connected, Tunnel0
      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
C        172.16.0.0/24 is directly connected, GigabitEthernet0/0
L        172.16.0.1/32 is directly connected, GigabitEthernet0/0
D     192.168.0.0/24 [90/26882560] via 10.0.0.2, 00:07:56, Tunnel0

Finally, don’t forget to ping from Windows:

Troubleshooting

There are lots of tools here, including the strongswan “ipsec statusall”, Cisco debug commands, and others. But the one that always lets me know what’s wrong the fastest is a packet capture. Look for IKE negotiation packets (ISAKMP filter in Wireshark) if your tunnel isn’t coming up, and make sure traffic goes through the tunnel (ESP filter in Wireshark) when it’s up:

BIND9 DNS Server on Ubuntu 18.04 Linux (w/ Cisco Routing)

Background

BIND9 is an open-source DNS (Domain Name System – the text-to-IP-address resolution system we all rely upon) server application that many of the world’s DNS servers use to change strings of characters such as “questioncomputer.com” into IP addresses (e.g. 1.2.3.4). DNS is an area of expertise in which some people spend their entire careers and as such, there is much to know about it. I’m no DNS expert, but since anyone can fire up their own DNS server with BIND9, I figured I’d give it a go. BIND9 of course runs on Linux; today we’ll be using Ubuntu 18.04.

BIND9 is one of several applications that a non-profit company called “Internet Systems Consortium” produces, you can find their page on BIND9 here. They rely on companies that use BIND9 to purchase professional support, so if you work for an organization that does, please consider reaching out to them.

Topology

This topology includes 3 different routers and 3 different servers. The routers, aptly named Router1, Router2, and Router3, each have 1 server connected. Two of the routers are Cisco IOSv routers, and Router3 is an Ubuntu box with Free Range Routing installed just for kicks. Routing is set up using OSPF, so everything can already ping everything else. If you’re interested in how to set up an Ubuntu 18.04 server as a router with OSPF, please check out my post on installing FRR. In that post I ran EIGRP, but configuring OSPF is pretty simple once you get FRR installed.

The DNS server at the top is running Ubuntu 18.04, as well as the generic Ubuntu 18.04 server at the bottom right. There is a Windows 10 desktop at the bottom left to provide some diversity.

Installation

Installation of BIND9 is pretty easy using Ubuntu’s package manager:

apt-get install bind9

Configuration

There are three text files needed to get a basic BIND9 configuration up and running, these are:
/etc/bind/named.conf.options –> Configures BIND9 options
/etc/bind/named.conf.local –> Sets zone file name and gives its location
/etc/bind/zones/db.jamesmcclay.com –> The actual zone file with DNS records.

First order of business is edit /etc/bind/named.conf.options to have a very, very basic configuration:

options {
        directory "/var/cache/bind";
        listen-on { any; };
};

Second, add a zone configuration to named.conf.local that indicates where the zone file will be kept:

zone "jamesmcclay.com" {
    type master;
    file "/etc/bind/zones/db.jamesmcclay.com";
};

Lastly, we’ll create “db.jamesmcclay.com” under the “zones” directory (create /etc/bind/zones if it doesn’t exist yet), because that’s what I put in named.conf.local. Obviously you’ll probably want to substitute something else for “jamesmcclay.com”:

@               IN      SOA     dns-server.jamesmcclay.com.    dns-server.localhost. (
                                2               ; Serial
                                604800          ; Refresh
                                86400           ; Retry
                                2419200         ; Expire
                                604800 )        ; Negative Cache TTL
;
@               IN      NS      dns-server
@               IN      A       172.16.0.2
Router1         IN      A       10.0.0.1
Router2         IN      A       10.0.0.2
Router3         IN      A       11.0.0.1
Win10           IN      A       192.168.0.2
U1804           IN      A       192.168.1.2
dns-server      IN      A       172.16.0.2

The values towards the top are mostly just setting default values for DNS, while the A records at the bottom are more important. Those are the IP addresses that will be returned for hostnames within “jamesmcclay.com”. Gotta make sure those are correct. They all correspond with the IP’s listed in the topology.
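
Before restarting, it’s worth running the syntax checkers that ship with BIND9 (named-checkzone comes from the bind9utils package if you don’t already have it) – they catch most typos in these files:

named-checkconf
named-checkzone jamesmcclay.com /etc/bind/zones/db.jamesmcclay.com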

Now restart BIND9:

systemctl restart bind9

It should show green if your config is good:

$systemctl status bind9

Configuring the servers and desktops is as simple as setting the DNS server IP address to 172.16.0.2. On Ubuntu linux, you can do it with Netplan. The default config file for Netplan is at /etc/netplan/50-cloud-init.yaml, and I configured UbuntuServer18.04-2 at the bottom right of the topology like this:

network:
    ethernets:
        eth0:
            addresses: [192.168.1.2/24]
            dhcp4: false
            optional: true
            gateway4: 192.168.1.1
            nameservers:
                search: [jamesmcclay.com]
                addresses: [172.16.0.2]
    version: 2

For Windows 10 (and most others, I think) you can sift through the control panel to get to network adapter settings, but I just hit “window key + R” and type “ncpa.cpl” and press enter which takes me to network adapter configuration. My configuration on the Windows 10 desktop at the bottom left of the topology looks like this:

In “Advanced” I also set the search domain to “jamesmcclay.com” which appends that domain to any non-fully qualified hosts entered. You’ll see how that’s helpful in the verification section:

Verification

Simple pings will verify that you’re able to reach various hosts via their DNS names instead of IP addresses. On the UbuntuServer18.04-2 at the bottom right, I pinged the whole group using their fully qualified domain names:

$ping router1.jamesmcclay.com -c 1
64 bytes from 10.0.0.1 (10.0.0.1): icmp_seq=1 ttl=254 time=1.45 ms
$ping router2.jamesmcclay.com -c 1
64 bytes from 10.0.0.2 (10.0.0.2): icmp_seq=1 ttl=254 time=1.03 ms
$ping router3.jamesmcclay.com -c 1
64 bytes from 11.0.0.1: icmp_seq=1 ttl=64 time=0.635 ms
$ping win10.jamesmcclay.com -c 1
64 bytes from 192.168.0.2 (192.168.0.2): icmp_seq=1 ttl=126 time=2.11 ms
$ping dns-server.jamesmcclay.com -c 1
64 bytes from 172.16.0.2 (172.16.0.2): icmp_seq=1 ttl=62 time=0.913 ms
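
If you’d rather see the DNS answer itself instead of a ping, dig (from the dnsutils package) can query the BIND9 server directly:

dig @172.16.0.2 router1.jamesmcclay.com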

And since I entered search domains in my DNS server configurations, I can ping hosts without fully qualified domain names. From my Windows 10 desktop:

And as usual, you can do a packet capture to verify that the DNS packet flow is working correctly. If I look at packets leaving my Windows 10 desktop, I’ll see a DNS query from the desktop, and a response from BIND9:

Troubleshooting

If you’re having trouble, make sure all of your config files are formatted correctly. They get angry if you don’t format them right. Be sure and do a packet capture if you get stuck, that will pinpoint the problem pretty quickly. Also you can check the logs in the BIND9 server for messages that might have a clue. The command I use is “cat /var/log/syslog | grep bind”.

Apr 19 03:42:16 uvm1804 named[15145]: running as: named -f -u bind
Apr 19 03:42:16 uvm1804 named[15145]: loading configuration from '/etc/bind/named.conf'
Apr 19 03:42:16 uvm1804 named[15145]: reading built-in trust anchors from file '/etc/bind/bind.keys'
Apr 19 03:42:16 uvm1804 named[15145]: set up managed keys zone for view _default, file 'managed-keys.bind'
Apr 19 03:42:16 uvm1804 named[15145]: configuring command channel from '/etc/bind/rndc.key'
Apr 19 03:42:16 uvm1804 named[15145]: configuring command channel from '/etc/bind/rndc.key'

Loopback Adapter on Ubuntu 18.04 (Like on Cisco)

Background

For many years, Linux users used the old familiar network configuration text file located (usually) at ‘/etc/network/interfaces’ to configure IP addresses, DNS settings, gateway settings, and the like. It was even possible to create a virtual (read: loopback) interface with no associated physical interface using this file. While desktop versions of Ubuntu have used Network Manager for a long time to make things easier from a GUI, the CLI-only nature of an Ubuntu Server doesn’t need all the overhead of Network Manager.

Recent versions of Ubuntu Server (starting with 18.04) have moved to a new network configuration method, using a software package called “Netplan” developed by Ubuntu’s maintainer, Canonical. I’ve read the Ubuntu developers’ reasoning for making this switch, and to make a long story short, their decision was…. justified. The old method had many shortcomings. And while Netplan has a cool new and modern feel including configuration from a YAML file, the one thing that’s missing from it (best I can tell) is a way to create a loopback adapter. I scoured the Intertubes looking for a way to create one via Netplan but finally came to the conclusion that it doesn’t really have one. Others in the Ubuntu community seemed to agree but I could find no official documentation… if you know of some please leave a comment or send me an email, I’d love to know. I’ll update this article if loopback functionality is added to Netplan.

In any case, I’ve been using a method to create loopback adapters on Ubuntu Server and Desktop well before Netplan came along, simply because it’s the way I learned and it seems to work no matter what distribution or version I’m using. Loopback adapters are useful for all sorts of stuff in networking. You can create one, give it an IP address and advertise it in a routing protocol like OSPF, for example. Unlike physical interfaces, loopback adapters are much, much less likely to “go down”.

Topology

Nothing mind-blowing, we have a simple 10.0.0.0/24 network here. My Ubuntu Server 18.04 box will have a physical IP of 10.0.0.1/24 and a loopback adapter with IP address of 1.1.1.1/32. The Cisco IOSv box will have a physical address of 10.0.0.2/24 and a loopback adapter with IP address 2.2.2.2/32. Remember, neither of these boxes “knows” about the other’s loopback address, so we’ll have to do some sort of routing. Static routes are a simple solution, but most networking peeps are going to be looking to do a routing protocol. I’ll do OSPF just for kicks.

Configuration

Since there’s no external software or packages to install, I’ll jump right into configuration. This blog post assumes you have IPv4 routing enabled on both the Ubuntu Server and the Cisco router, and you have configured the network interfaces for 10.0.0.0/24.

Ubuntu18.04-1

On Ubuntu 18.04, you can add a “dummy” adapter with a single command like this:

 $ip link add name loop1 type dummy

Then turn it on and give it an IP address:

$ip link set loop1 up
$ip addr add 1.1.1.1/32 dev loop1

And you’re done. Keep in mind these commands don’t stick when you reboot. You can put these commands in a startup script; I’ll cover that in another post.

CiscoIOSv15.6(2)T-3

Creating a loopback adapter on a Cisco IOS is simple as well, although I will say it would be nice if it supported CIDR notation (like other Cisco OS’s). Go into config mode and issue these commands:

Router(config)#int loop1
Router(config-if)#ip add 2.2.2.2 255.255.255.255

Verification

Ubuntu18.04-1

A good old-fashioned ping from the Ubuntu box to its own loopback adapter will tell you if it’s up:

$ping 1.1.1.1
64 bytes from 1.1.1.1: icmp_seq=1 ttl=64 time=0.038 ms                                                                                                                 
64 bytes from 1.1.1.1: icmp_seq=2 ttl=64 time=0.049 ms

The “ip route” command won’t show your /32, so try issuing the “ip addr show” command. To just show your loop1 adapter, add that on the end:

$ip addr show loop1
4: loop1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether de:9d:7c:37:7a:c2 brd ff:ff:ff:ff:ff:ff
    inet 1.1.1.1/32 scope global loop1
       valid_lft forever preferred_lft forever
    inet6 fe80::dc9d:7cff:fe37:7ac2/64 scope link 
       valid_lft forever preferred_lft forever

CiscoIOSv15.6(2)T-3

From enabled mode you can just issue “show ip int brief”:

Router#show ip int bri
Interface                  IP-Address      OK? Method Status                Protocol                                                                                                     
GigabitEthernet0/0         10.0.0.2        YES manual up                    up                                                                                                           
GigabitEthernet0/1         unassigned      YES NVRAM  administratively down down    
GigabitEthernet0/2         unassigned      YES NVRAM  administratively down down    
GigabitEthernet0/3         unassigned      YES NVRAM  administratively down down    
Loopback1                  2.2.2.2         YES manual up                    up

Optional – Static Routing

A static route is a quick and easy way (but not very scalable) to make the loopback adapters reachable across the network.

Ubuntu18.04-1

On Ubuntu, just issue a single command to add a static route to reach the Cisco router’s loopback:

$ip route add 2.2.2.2/32 via 10.0.0.2

You can now ping it:

ping 2.2.2.2
64 bytes from 2.2.2.2: icmp_seq=1 ttl=255 time=1.36 ms
64 bytes from 2.2.2.2: icmp_seq=2 ttl=255 time=1.72 ms

CiscoIOSv15.6(2)T-3

It’s also a single command on the Cisco router’s configuration mode to add a static route to reach the Ubuntu box’s loopback adapter. You should be able to ping after adding it:

Router(config)#ip route 1.1.1.1 255.255.255.255 10.0.0.1
Router(config)#end
Router#ping 1.1.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 1.1.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms

Optional – OSPF Routing

I have removed the static routes, so routing is no longer in place.

Ubuntu18.04-1

For setting up OSPF routing on Ubuntu, please see my post on that. It’s not too bad, just some commands to install Cumulus Linux Free Range Routing (FRR) from the Canonical Snap Store. Assuming you got that done, jump into the FRR CLI:

$/snap/bin/frr.vtysh
frr#conf t
frr(config)#router ospf
frr(config-router)#network 10.0.0.0/24 area 0
frr(config-router)#network 1.1.1.1/32 area 0
frr(config-router)#end
frr#show ip route
O   1.1.1.1/32 [110/10] via 0.0.0.0, loop1 onlink, 00:02:16
C>* 1.1.1.1/32 is directly connected, loop1, 00:19:40
O>* 2.2.2.2/32 [110/101] via 10.0.0.2, eth0, 00:00:08
O   10.0.0.0/24 [110/100] is directly connected, eth0, 00:03:14
C>* 10.0.0.0/24 is directly connected, eth0, 00:20:25

You’ll get the above route output as soon as the OSPF neighborship is up. You should be able to ping the Cisco’s loopback at 2.2.2.2.

CiscoIOSv15.6(2)T-3

On a Cisco IOS router, OSPF is of course already installed, firing it up is just a couple commands in config mode. Once again, CIDR notation would be nice. Does the world really need wildcard masks? They probably keep it that way to support automated systems that use it, but they could add CIDR notation support. Just sayin’.

Router(config)#router ospf 1
Router(config-router)#network 10.0.0.0 0.0.0.255 area 0
Router(config-router)#network 2.2.2.2 0.0.0.0 area 0
Router(config-router)#end
Router#show ip route
      1.0.0.0/32 is subnetted, 1 subnets
O        1.1.1.1 [110/11] via 10.0.0.1, 00:00:35, GigabitEthernet0/0
      2.0.0.0/32 is subnetted, 1 subnets
C        2.2.2.2 is directly connected, Loopback1
      10.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
C        10.0.0.0/24 is directly connected, GigabitEthernet0/0
L        10.0.0.2/32 is directly connected, GigabitEthernet0/0

You should now be able to ping 2.2.2.2 from the Ubuntu box, and ping 1.1.1.1 from the Cisco router.

As I mentioned earlier, your “ip” commands won’t survive a reboot.